What We Shipped: Geta.Team v2.1.5 — Custom LLM Proxy Hardening, Tools Panel Restored, and a Drag-and-Drop Dashboard
v2.1.5 is the biggest reliability sprint we've shipped this quarter. Fourteen fixes and small features, almost all in service of "the thing should do what it looked like it was doing." Two of them are structural enough to call out before the line-item tour.
Custom LLM API keys never reach the employee container
Until v2.1.5, when you configured a Custom LLM (OpenRouter, LiteLLM, etc.), your real upstream API key was written into a config file inside the employee container. Anyone with shell access to that container could read it. That's not OK when multiple employees share a workspace.
v2.1.5 introduces a server-side proxy. The employee container now talks to GetATeam with a stub token; GetATeam holds the real upstream key and forwards the request server-side. The stub token is per-employee, rotates on every CLI-type switch, and is persisted across server restarts so live sessions don't die when you redeploy.
Net effect: a leaked employee container is no longer a leaked API key.
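The heart of the proxy is the token swap: the container only ever sees its stub, and the server substitutes the real key on the way out. A minimal sketch of that step, with illustrative names (`REAL_KEYS`, `forward_headers`) that are assumptions, not GetATeam's actual implementation:

```python
# Server-side store mapping per-employee stub tokens to real upstream keys.
# In production this would be persisted so stubs survive restarts.
REAL_KEYS = {"stub-emp-42": "sk-real-upstream-key"}

def forward_headers(incoming_headers: dict) -> dict:
    """Replace the container's stub token with the real upstream API key
    before forwarding the request to the LLM provider."""
    auth = incoming_headers.get("Authorization", "")
    stub = auth.removeprefix("Bearer ").strip()
    real = REAL_KEYS.get(stub)
    if real is None:
        # Unknown or rotated stub: reject instead of forwarding.
        raise PermissionError("unknown stub token")
    out = dict(incoming_headers)
    out["Authorization"] = f"Bearer {real}"
    return out
```

Because the real key only exists in the server-side store, dumping the container's config file yields nothing an attacker can use upstream.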
Switching Custom LLMs now actually switches them
Before: you switched from one Custom LLM to another in Settings → AI Provider. The DB updated, the config file updated, and the next message kept hitting the old model. The running process had been bound to the old config and didn't notice the change.
After: switching CLI type recycles the employee container so the next message respawns on the new config. The DB and the running session stay in sync.
Adjacent fix: when an upstream LLM rejects a request (revoked token, bad model name, rate limit), the error now appears in the chat as a clear warning with a pointer to Custom LLM settings. Previously it showed up as an empty assistant bubble — the most confusing failure mode available.
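The shape of that fix is simple: map the upstream failure to a visible warning instead of rendering nothing. A sketch under assumed names (`render_assistant_message` and the status-to-reason table are hypothetical, not the real code):

```python
def render_assistant_message(upstream_status: int, body: str) -> str:
    """Turn an upstream LLM failure into a visible chat warning
    instead of an empty assistant bubble."""
    if upstream_status == 200:
        return body
    reasons = {
        401: "revoked or invalid token",
        404: "bad model name",
        429: "rate limit exceeded",
    }
    reason = reasons.get(upstream_status, f"upstream error {upstream_status}")
    return f"Warning: LLM request failed ({reason}). Check Settings → AI Provider → Custom LLM."
```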
The Tools panel works again
If you've been using the right-side Tools panel since v2.1.0 and clicked a Bash or Read tool only to see the inputs but never the outputs — that was a regression. Tool outputs now reappear correctly. We also fixed the same shape of bug in the conversation-history side panel.
Five chat UI annoyances, gone
- PDFs no longer re-fetch on every keystroke. With a PDF open in the Tools panel, every character you typed in the chat reloaded the PDF. Now stable.
- Tools sheet doesn't show as a thin sliver after refresh. A stale layout value could shrink the right panel to near-zero width on reload. Fixed.
- [TOOL_COMPACT:...] raw tokens no longer leak into assistant bubbles. They were occasionally rendering as visible text at the top of a message. Cleaned up.
- First message after creating a new session no longer disappears. Used to land in a 4-second timing window before the underlying CLI was actually ready. Now bound to the real readiness signal from the backend.
- OpenCode TUI noise no longer floods the connection. Custom LLM chats were sending dozens of cursor-move events per keystroke that the chat UI didn't even render. Filtered out.
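The first-message fix above is the classic "replace a sleep with a signal" pattern. A minimal sketch of its shape (names like `cli_ready` and `send_first_message` are illustrative, not the actual backend code):

```python
import threading

# Set by the backend once the underlying CLI process reports it is ready,
# instead of assuming readiness after a fixed 4-second delay.
cli_ready = threading.Event()

def send_first_message(msg: str, timeout: float = 30.0) -> str:
    """Block until the CLI signals readiness, then deliver the message.
    Raises instead of silently dropping the message on timeout."""
    if not cli_ready.wait(timeout):
        raise TimeoutError("CLI never became ready")
    return f"delivered: {msg}"
```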
Drag-and-drop dashboard reorder
Each user can now reorder their employee cards on the dashboard. Toggle the grip icon next to "My Team," drag cards around, done. Order is per-user — admins reordering their own view doesn't change what other users see — and persists across sessions and devices. Newly hired employees appear at the top of the list until you reorder.
Mobile shows the synced order from desktop but doesn't expose drag itself; touch-and-drag conflicts too much with vertical scroll.
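The per-user ordering described above boils down to a saved ID list per user, with unseen (newly hired) employees prepended. A sketch with hypothetical names (`order_store`, `display_order`); the real persistence layer is of course a database, not a dict:

```python
from collections import defaultdict

# user_id -> ordered list of employee IDs; per-user, so one admin's
# reordering never affects another user's view.
order_store: defaultdict = defaultdict(list)

def save_order(user_id: str, ids: list) -> None:
    order_store[user_id] = list(ids)

def display_order(user_id: str, all_ids: list) -> list:
    """Saved order first; employees hired since the last reorder go on top."""
    saved = order_store[user_id]
    new = [i for i in all_ids if i not in saved]      # newly hired
    kept = [i for i in saved if i in all_ids]         # drop deleted employees
    return new + kept
```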
WhatsApp silent reply failures, fixed
This one was nasty. An employee received a WhatsApp message and replied. Chat showed the reply. The DB logged the outbound. The send endpoint returned HTTP 200. The recipient's phone never received anything.
The trigger was a specific kind of WhatsApp sender ID format that wasn't in the account's address book. Resolution fell back to a value that looked valid but routed to nowhere. v2.1.5 adds a better fallback path that catches almost all of these cases, plus a warning log when resolution still fails so we can see the rest in production.
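The fallback chain described above can be sketched as: try the address book, then derive a canonical address from the raw ID, and log a warning only when both fail. Everything here (the `@s.whatsapp.net` suffix, `resolve_recipient`, the dict-based address book) is an illustrative assumption, not the actual resolution code:

```python
import logging
import re

log = logging.getLogger("whatsapp")

# Known contacts: normalized phone digits -> routable address.
address_book = {"15551234567": "15551234567@s.whatsapp.net"}

def resolve_recipient(sender_id: str):
    """Resolve a raw WhatsApp sender ID to a routable address,
    falling back to a canonical form built from the digits."""
    digits = re.sub(r"\D", "", sender_id)
    if digits in address_book:
        return address_book[digits]
    if digits:
        # Fallback path: build a canonical address instead of routing
        # to a value that "looks valid" but goes nowhere.
        return f"{digits}@s.whatsapp.net"
    log.warning("could not resolve WhatsApp sender ID %r", sender_id)
    return None
```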
The smaller stuff
- Memory rules in employee profiles hardened. Newly created employees now have explicit instructions to use our auditable memory system rather than a hidden file-based one that some had silently been falling back to.
- All 284 skill descriptions are now in clean English. Previously a mix of French, English, and a few that were outright wrong (one had memory-db described as "graphic charter and templates", completely unrelated to what the skill does). All standardized.
- "Big Pickle" → "OpenCode" rename. A placeholder UI label finally got replaced with the integration's real name.
Nothing in this release adds a flashy new capability. It adds confidence that the existing capabilities will behave the way the UI promised — and it closes a security gap on Custom LLM keys that should not have shipped open in the first place.
Want to test the most advanced AI employees? Try it here: https://Geta.Team