What We Shipped: Geta.Team v2.0.21
This release is a big one. Fifteen bug fixes, three new features, and a handful of UI improvements that collectively make Geta.Team feel noticeably sharper. Here is what shipped in v2.0.21.
Employee Tags: Organize Your AI Team
As teams scale their AI workforce beyond two or three employees, finding the right one fast becomes a real problem. This release adds a full tagging system.
You can now tag any AI employee with custom labels — "marketing," "devops," "client-facing," whatever makes sense for your organization. Tags are managed through a new Tags tab in the employee settings, with auto-complete suggestions pulled from your existing tags so you stay consistent.
The dashboard sidebar now includes a tag filter dropdown, so you can narrow your view to just the employees relevant to what you are working on right now. We also added URL hash support — bookmark https://YOUR_INSTANCE_URL/dashboard#devops and you will land directly on your filtered view every time.
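In case it helps to picture the hash behavior, here is a minimal sketch of how a tag filter could be derived from the URL hash. The helper names (`parseTagFromHash`, `filterEmployeesByTag`) are illustrative, not the actual Geta.Team code:

```javascript
// Turn "#devops" into a tag filter; an empty or missing hash means no filter.
function parseTagFromHash(hash) {
  const tag = decodeURIComponent((hash || "").replace(/^#/, "")).trim();
  return tag.length > 0 ? tag : null;
}

// Apply the filter to an employee list; no tag means show everyone.
function filterEmployeesByTag(employees, tag) {
  if (tag === null) return employees;
  return employees.filter((e) => e.tags.includes(tag));
}
```

With this shape, `https://YOUR_INSTANCE_URL/dashboard#devops` deep-links straight into the `devops` view on load.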
Small feature. Massive quality-of-life improvement once you have more than a handful of AI employees running.
SharePoint Skill for Office 365 (Enterprise Edition)
Enterprise users have been asking for SharePoint access, and it is here. The new office365-sharepoint skill gives your AI employees full access to your SharePoint environment — list sites, browse files, download documents, upload content, search across your organization's document libraries, and delete files.
This is auto-installed for all Enterprise Edition employees. Under the hood, it proxies through Microsoft Graph API with the Sites.ReadWrite.All scope, so your AI employees can work with SharePoint the same way your human employees do.
If you have an existing Office 365 connection, you will need to re-authorize it once to pick up the new SharePoint scope. After that, your AI employees can pull documents, search knowledge bases, and manage files across your entire SharePoint environment — all through natural language.
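For a rough sense of what the Graph proxying looks like, here is an illustrative sketch of a document search against a site's default drive. The helper names are assumptions; only the use of Microsoft Graph and the Sites.ReadWrite.All scope come from these release notes:

```javascript
const GRAPH_BASE = "https://graph.microsoft.com/v1.0";

// Build the Graph search URL for a site's default document library.
function buildDriveSearchUrl(siteId, query) {
  return `${GRAPH_BASE}/sites/${encodeURIComponent(siteId)}/drive/root/search(q='${encodeURIComponent(query)}')`;
}

// Search a SharePoint site's documents with a delegated or app token.
async function searchSiteDocuments(siteId, query, accessToken) {
  const res = await fetch(buildDriveSearchUrl(siteId, query), {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return (await res.json()).value; // array of matching driveItems
}
```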
Widget Agent Mode: Now Actually Embeddable
The widget agent mode was technically working before, but embedding it was harder than it should have been. This release adds a proper Widget Integration section to the configuration panel with everything you need in one place: your widget token, session API endpoint, backend code snippet (Node.js with your real token and URL pre-filled), and frontend embed code. Each block has a copy button that gives you visual confirmation when it works.
We also added an Allowed Origins field so you can lock down which domains can embed your AI employee widget — important for security when you are exposing an agent to the public internet.
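To illustrate how an origin check like this typically works on the backend, here is a minimal Express-style sketch. The names (`ALLOWED_ORIGINS`, `sessionHandler`) and the response shape are assumptions; your actual pre-filled snippet comes from the Widget Integration panel:

```javascript
// Domains permitted to embed the widget; everything else is rejected.
const ALLOWED_ORIGINS = ["https://example.com", "https://app.example.com"];

function isOriginAllowed(origin, allowed = ALLOWED_ORIGINS) {
  return typeof origin === "string" && allowed.includes(origin);
}

// Express-style handler: refuse to mint a widget session for unknown domains.
function sessionHandler(req, res) {
  const origin = req.headers.origin;
  if (!isOriginAllowed(origin)) {
    res.status(403).json({ error: "origin not allowed" });
    return;
  }
  res.setHeader("Access-Control-Allow-Origin", origin);
  res.json({ session: "..." }); // would call the session API with your widget token
}
```

Checking the `Origin` header before issuing a session token is what keeps a public-facing agent from being embedded on arbitrary sites.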
Critical Fixes That Matter
Custom LLM conversations no longer vanish on page refresh. This was a painful one. If you were using a custom LLM provider, your conversation history would disappear every time you refreshed the page. The root cause: our conversation cache was filtering by CLI type and only caching Claude, OpenCode, and Grok sessions. Custom LLM sessions were silently excluded. Fixed — the cache now works for all session types.
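The bug boils down to an allowlist predicate. This is a hypothetical reconstruction, not the actual source:

```javascript
// CLI types the cache used to allowlist.
const CACHED_CLI_TYPES = ["claude", "opencode", "grok"];

// Before the fix: custom LLM sessions failed this check and were never cached.
function shouldCacheOld(session) {
  return CACHED_CLI_TYPES.includes(session.cliType);
}

// After the fix: every session type is cached, custom LLMs included.
function shouldCacheNew(session) {
  return true;
}
```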
Custom LLMs now actually use the right model. Related bug: when you selected a custom LLM, the system was sending the raw configuration string as the CLI type instead of parsing it properly. Your carefully configured custom LLM was quietly being routed to Claude instead. This is now correctly parsed and your custom provider configuration is properly injected.
The notification queue infinite loop is gone. Messages in the notification queue were being sent in an infinite loop every 90 seconds due to a race condition. The "message handled" flag was being set with a 1-second delay, which meant the queue would re-process the same message before it was marked as done. The flag is now set immediately.
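A sketch of the race, with illustrative names (the timing behavior is the part taken from the fix):

```javascript
// Deliver queued messages, marking each one handled as soon as it is sent.
function processQueue(queue, send) {
  for (const msg of queue) {
    if (msg.handled) continue; // skip messages already delivered
    send(msg);
    // Before the fix the flag was set ~1 second later, so the 90-second
    // sweep could re-send the same message forever. Setting it
    // synchronously closes that window.
    msg.handled = true;
  }
}
```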
Each AI employee now has isolated memory. Previously, all employee containers shared the same memory directory. This meant your marketing strategist's memories could bleed into your data analyst's context. Each employee now gets their own isolated memory directory, mounted separately.
Gemini 3.1 voice call transcripts are back. Google renamed inputTranscript to inputTranscription (and changed it from a string to an object with a .text property) in Gemini 3.1. Our voice call feature silently broke — user transcripts were empty and assistant transcripts were failing. Both are now fixed with multi-field detection.
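Multi-field detection here means reading both payload shapes, something like this sketch (the function name and fallback behavior are illustrative):

```javascript
// Extract the user transcript from either the old or the new Gemini payload.
function extractUserTranscript(event) {
  if (typeof event.inputTranscript === "string") {
    return event.inputTranscript; // older shape: plain string field
  }
  if (event.inputTranscription && typeof event.inputTranscription.text === "string") {
    return event.inputTranscription.text; // Gemini 3.1 shape: object with .text
  }
  return ""; // neither field present
}
```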
OpenCode SQLite Migration
OpenCode moved its storage from JSON files to a SQLite database. This is the kind of upstream change that silently breaks everything if you do not catch it. We caught it. Session history, message counts, and content retrieval now read from SQLite first with a JSON fallback, so existing sessions are preserved and new ones work correctly.
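The read path can be sketched as "SQLite first, JSON fallback." The reader functions are injected here purely for illustration; the real code talks to OpenCode's storage directly:

```javascript
// Load a session from the new SQLite store, falling back to the legacy
// per-session JSON file so pre-migration sessions still load.
function loadSession(sessionId, readSqlite, readJson) {
  try {
    const row = readSqlite(sessionId); // new OpenCode storage
    if (row) return row;
  } catch (_) {
    // SQLite unavailable or session not migrated: fall through to JSON
  }
  return readJson(sessionId); // legacy JSON file
}
```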
Smaller Fixes and Polish
- Claude reconnect now properly replaces the account instead of immediately picking the old credentials back up
- Widget iframe CSP blocking fixed — the URL path check was too strict and missed the actual chat URL pattern
- Mail inbox scroll fixed for emails that were previously unscrollable
- Custom LLM notification queue now flushes correctly by recognizing OpenCode and custom session ready signals
- Admin access to LLM providers fixed — admins were incorrectly blocked from accessing custom LLM settings
- UpgradeModal now properly supports light and dark mode with CSS variables
- Widget modal overflow fixed — code blocks now wrap instead of causing horizontal scroll
- Connector help sheets all have consistent padding
- User pending message queue removed entirely — messages now send directly without queuing
Want to test the most advanced AI employees? Try it here: https://Geta.Team