What We Shipped: Geta.Team v2.1.7 — GPT-5.5 on Direct OpenAI, Codex CLAUDE.md Fallback, and Six Dashboard UX Fixes
v2.1.7 is a release with one big plumbing job, one quiet cleanup, and a sweep of dashboard fixes that mostly close papercuts you already noticed but never reported. The biggest item is GPT-5+ models on direct-to-OpenAI custom_llm — which sounds like a one-line param swap and was actually a two-week problem with two different fixes in two different layers. We'll lead with that one because it's the one most people will hit.
GPT-5.5 (and any o-series) on custom_llm direct to OpenAI now actually works
If you configured an employee with custom_llm:N pointing at https://api.openai.com/v1 with gpt-5.5, the conversation never started. You got back this:
```
400 Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
```

…and then, if you fixed that, this:
```
400 Function tools with reasoning_effort are not supported for gpt-5.5 in /v1/chat/completions. Please use /v1/responses instead.
```

The 400s are not what they look like. There are two separate things wrong.
Layer one is the parameter rename. OpenAI deprecated max_tokens for gpt-5+ and o-series models — the new field is max_completion_tokens. Older models (gpt-4.1, gpt-4o) still accept both, so the asymmetry only shows up on new ones. We added a small helper in the LLM proxy (maybeFixOpenAIParams) that detects gpt-5+/o[1-9] models and rewrites the field before forwarding upstream. It only runs on /chat/completions and only one direction (old name → new name), since the new param name works fine for both. It also strips openai/ prefixes so OpenRouter-style model names like openai/gpt-5.4-mini are caught by the regex. Easy gotcha during the patch: the upstream path is /chat/completions, not /v1/chat/completions — the /v1 lives in the base_url. Get the path check wrong and the helper silently no-ops.
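A minimal sketch of that helper, assuming a body shaped like a standard chat-completions request — the guard conditions follow the description above, but the production code may differ:

```js
// Sketch of the proxy-side rename described above; illustrative, not the
// exact production implementation.
function maybeFixOpenAIParams(path, body) {
  // The upstream path is /chat/completions -- the /v1 lives in base_url.
  if (path !== '/chat/completions') return body;

  // Strip OpenRouter-style prefixes so "openai/gpt-5.4-mini" still matches.
  const model = String(body.model || '').replace(/^openai\//, '');

  // gpt-5+ and o-series models reject max_tokens.
  if (!/^(gpt-[5-9]|o[1-9])/.test(model)) return body;

  // One direction only: old name -> new name. max_completion_tokens passes
  // through untouched on every model.
  if (body.max_tokens != null && body.max_completion_tokens == null) {
    const { max_tokens, ...rest } = body;
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return body;
}
```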
Layer two is the endpoint change. For the (tools + reasoning_effort) combo on gpt-5.5, OpenAI now refuses /v1/chat/completions entirely and forces the new /v1/responses endpoint. OpenCode's @ai-sdk/openai-compatible adapter only knows chat/completions, so the param-name fix wasn't enough on its own — the request still 400'd at the endpoint level. The fix: when llm.base_url matches api.openai.com, switch the OpenCode npm adapter from @ai-sdk/openai-compatible to the native @ai-sdk/openai one, which automatically routes gpt-5+/o-series to /v1/responses via the right Vercel ai-sdk signals (reasoningSummary: "auto", include: ["reasoning.encrypted_content"]).
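The selection itself is a small predicate. A hedged sketch — the package names are the real ones, but the function name and return shape are hypothetical:

```js
// Hypothetical helper: pick the npm adapter OpenCode should load based on
// the employee's llm.base_url. Only direct OpenAI gets the native adapter.
function pickOpenCodeAdapter(baseUrl) {
  const host = new URL(baseUrl).hostname;
  return host === 'api.openai.com'
    ? '@ai-sdk/openai'             // native: routes gpt-5+/o-series to /v1/responses
    : '@ai-sdk/openai-compatible'; // OpenRouter, LiteLLM, etc. stay on /chat/completions
}
```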
The catch was that we couldn't just write the new adapter into opencode.json and call it a day. opencode.json gets regenerated by /app/opencode-wrapper.sh inside the Vibecoder image at every container spawn — anything employees.js writes is overwritten on the next start. We didn't want to rebuild the Vibecoder image just to flip an adapter string, so we mount-overrode the wrapper instead: a copy of the upstream wrapper sits at backend/hooks/opencode-wrapper.sh, gets bind-mounted on top of the image's version, and reads a .gat-opencode-adapter sidecar file from the working directory at every spawn. employees.js writes that sidecar (along with opencode.json for the cli_type-set-time path), so the adapter choice survives every container restart without us touching the upstream image.
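On the employees.js side, persisting the choice is a one-line write. A sketch, with the helper name invented for illustration:

```js
import fs from 'node:fs';
import path from 'node:path';

// Hypothetical helper: write the sidecar that the overlaid wrapper re-reads
// at every container spawn, so the adapter choice survives restarts.
function writeAdapterSidecar(workDir, adapter) {
  fs.writeFileSync(path.join(workDir, '.gat-opencode-adapter'), adapter + '\n');
}
```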
OpenRouter, LiteLLM, and any other openai-compatible provider keep @ai-sdk/openai-compatible (because they actually are) and benefit from the proxy-level max_tokens swap. Direct OpenAI gets the native adapter and routes to /v1/responses properly. Two layers, two fixes, one working endpoint.
Existing custom_llm employees on direct OpenAI need a Clear Session (or any cli_type re-save) for the .gat-opencode-adapter sidecar to be written. Until then, the wrapper has no sidecar to read and keeps regenerating opencode.json with the old openai-compatible adapter on every spawn.
Codex employees stop syncing CLAUDE.md → AGENTS.md
Before this release, every container spawn for every employee copied CLAUDE.md to AGENTS.md (with a header asking the agent to keep both in sync if it edited its profile). It was awkward, fragile, and a problem waiting to happen: an agent that edited only one of the two files would find its change wiped on the next restart.
It also wasn't necessary for codex. Codex has a project_doc_fallback_filenames setting in ~/.codex/config.toml that tells it to read CLAUDE.md directly when AGENTS.md isn't there. We extended ensureCodexConfig() to write that line on bootstrap and updated the containerManager.js sync block to delete AGENTS.md for codex employees instead of generating it.
One TOML scoping pitfall worth calling out: the line must be at root scope. The first attempt appended it with printf '...' >> "$CFG", which placed it inside the existing [tui.model_availability_nux] table. TOML parsed it as tui.model_availability_nux.project_doc_fallback_filenames and silently ignored it. If you had a partial pre-fix attempt sitting in your config, the new bootstrap will leave the stale entry alone — you may need to manually move the line to the top of the file.
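For reference, the placement that works versus the one that silently doesn't — a sketch assuming the setting takes a list of filenames, with the rest of the file elided:

```toml
# Correct: root scope, above any [table] header.
project_doc_fallback_filenames = ["CLAUDE.md"]

[tui.model_availability_nux]
# Wrong: appending the line down here scopes it as
# tui.model_availability_nux.project_doc_fallback_filenames,
# which codex silently ignores.
```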
OpenCode and custom_llm:N employees still get AGENTS.md generated from CLAUDE.md until OpenCode adds a similar fallback option.
Dashboard UX: six small things you probably noticed
We grouped a bunch of small papercuts and fixed them together.
Tags on employee cards, clickable to filter. Tags lived in the DB and worked from the sidebar tabs but were invisible on the cards themselves. Now there's a row of pills under the role — up to three visible, with a +N indicator and tooltip if more. Click any pill to filter the dashboard on that tag. There are two navigation paths (sketched below) because react-router's navigate() doesn't always trigger hashchange on a same-route hash change; when already on the dashboard, we assign window.location.hash directly.
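A sketch of the two paths — everything other than navigate() and window.location.hash is a hypothetical name:

```js
// Click handler on a tag pill. react-router's navigate() can skip the
// hashchange event when only the hash changes on the current route, so we
// assign the hash directly in that case.
function onTagClick(navigate, tag) {
  const hash = `#tag-${tag}`;
  if (window.location.pathname === '/dashboard') {
    window.location.hash = hash;   // same route: force the hashchange
  } else {
    navigate(`/dashboard${hash}`); // different route: normal navigation
  }
}
```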
Tag tab preserved across chat ↔ dashboard. Click an employee card on dashboard#tag-X, open the chat, click the Dashboard logo to go back — you used to land on plain /dashboard and lose your filter. Now EmployeeCard.handleChatClick stashes the hash in sessionStorage and a new goToDashboard() helper restores it on return. Edge cases work cleanly: no hash → clean /dashboard; chat opened directly → returns you to your last-known tag.
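The stash/restore handshake, sketched — the sessionStorage key here is made up; the real one may differ:

```js
// EmployeeCard: remember the active dashboard hash before jumping to chat.
function stashDashboardHash() {
  sessionStorage.setItem('gat.dashboardHash', window.location.hash || '');
}

// Logo click in chat: restore the filter, or land on a clean /dashboard.
function goToDashboard(navigate) {
  const hash = sessionStorage.getItem('gat.dashboardHash') || '';
  navigate(`/dashboard${hash}`);
}
```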
Reorder button is finally explicit. The <GripVertical /> six-dots icon next to "My Team (N)" was inscrutable — it looked like a drag handle but actually toggled a mode. Now it's <ArrowUpDown /> Reorder (off) or <Check /> Done (on), with proper visual states. Eight characters of label, one papercut closed.
New employees don't jump to the top of admins' reordered lists anymore. This regression had been in since the reorder feature shipped: the SQL used COALESCE(uo.position, -1) ASC, which collapsed un-reordered employees to position -1 and sorted them ahead of everyone else. Switched to uo.position ASC NULLS LAST. Reordered employees keep their custom positions; new arrivals fall through to the secondary sort by is_owner DESC, hire_date DESC. Admins who never reordered see the same behavior as before. A two-line fix that lived in the codebase for months.
App version visible in both sidebars. Under the Logout button, small and muted: {brandName} v{version}. White-label friendly — pulls from WhiteLabelContext, so an Acme Corp customer sees Acme Corp v2.1.7. Source of truth is frontend/package.json, injected at build time by Vite. Bump the version, rebuild, hard-refresh.
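One common way to wire that up, sketched under the assumption that the app reads a build-time constant — the define key is hypothetical:

```js
// vite.config.js -- inject the package version at build time.
import { defineConfig } from 'vite';
import pkg from './package.json';

export default defineConfig({
  define: {
    // Referenced in the sidebar as e.g. `${brandName} v${__APP_VERSION__}`.
    __APP_VERSION__: JSON.stringify(pkg.version),
  },
});
```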
Markdown rendered for user messages too. Pasting a markdown doc into the chat used to display the raw ##, ---, and - item markers as plain text. There was a literal msg.role === 'assistant' check gating the markdown branch in MessageBubble.jsx. We dropped it. hasMarkdownToRender is conservative enough (it only fires on actual markdown patterns, not stray symbols) that plain text never trips it.
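For a sense of what "conservative" means here, a sketch of a detector in that spirit — the actual patterns in hasMarkdownToRender may differ:

```js
// Only fire on real markdown structure, never on stray symbols mid-sentence.
function hasMarkdownToRender(text) {
  return [
    /(^|\n)#{1,6}\s\S/,       // headings: "## Title"
    /(^|\n)(-|\*|\d+\.)\s\S/, // list items: "- item", "1. item"
    /(^|\n)-{3,}\s*(\n|$)/,   // horizontal rules: "---"
    /```/,                    // fenced code blocks
  ].some((re) => re.test(text));
}
```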
Deployment
Standard docker compose build --no-cache getateam && docker compose up -d getateam. No migrations. Frontend bundle changed (tag pills, reorder button, version display, markdown gate, hash preservation), so users will need a hard refresh (Ctrl+Shift+R) to pick up the new index-*.js. Codex employees with an existing AGENTS.md will have it deleted on their next container spawn — no data loss, CLAUDE.md was the source of truth all along. Custom_llm employees on direct OpenAI need a Clear Session for the .gat-opencode-adapter sidecar to land.
That's v2.1.7. The OpenAI plumbing was the heavy lift; the rest is the small stuff that makes the dashboard feel like it's been gardened.
Want to test the most advanced AI employees? Try it here: https://Geta.Team