The Protocol Wars Are Over. Agent Skills Won.

For the past two years, building AI agents meant picking sides. Anthropic had MCP. OpenAI had function calling. Google had its own approach. LangChain tried to abstract everything. The result? A fragmented ecosystem where skills built for one platform couldn't run on another.

That era just ended.

Google Antigravity adopted Anthropic's Agent Skills standard on January 13, 2026. A week earlier, Anthropic donated MCP to the Linux Foundation. OpenAI and Microsoft publicly embraced MCP. The protocol wars are over, and interoperability won.

What Actually Happened

Let's be specific about what changed.

Agent Skills is a format—a folder with a SKILL.md file that teaches an AI agent how to perform a specific task. Think of it as a plugin or playbook that agents can read and execute. Anthropic developed this format for Claude Code, and it worked well enough that Google adopted it wholesale for Antigravity.
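For instance, a minimal SKILL.md pairs YAML frontmatter (the metadata the agent scans) with plain markdown instructions. The skill below is invented for illustration; the name and description fields follow Anthropic's documented Agent Skills frontmatter:

```markdown
---
name: meeting-notes
description: Summarize meeting transcripts into decisions and action items. Use when the user shares a transcript or asks for meeting follow-ups.
---

# Meeting Notes

1. Read the transcript the user provides.
2. Extract decisions and action items, with an owner for each.
3. Output a short markdown summary grouped by owner.
```

The description is doing double duty: it is documentation for humans and the trigger condition the agent matches requests against.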

The key innovation is progressive disclosure. A skill sits dormant until needed. The agent reads the metadata at startup but only loads full instructions when your request matches the skill's description. This solves the context bloat problem that plagued earlier approaches.

Model Context Protocol (MCP) is the connectivity layer—the "USB-C for AI" that lets agents talk to external tools like databases, APIs, and search engines. Anthropic created it, then donated it to the newly formed Agentic AI Foundation under the Linux Foundation.
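In practice, wiring an agent to an MCP server is a configuration entry, not code. The JSON below follows the `mcpServers` shape used by Claude Code and Claude Desktop; the server name and the filesystem path are illustrative (the `@modelcontextprotocol/server-filesystem` package is one of Anthropic's reference servers):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Any MCP-compliant agent that reads this config can launch the server and discover its tools, which is exactly the "USB-C" property: the plug is standardized, the device behind it varies.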

Together, these standards mean a skill built for Claude Code works in Google Antigravity without modification. Build once, run anywhere. That's not marketing—it's now technically true.

Why This Matters More Than You Think

The immediate benefit is obvious: portability. If you invest time building custom skills for your business workflows, those skills aren't locked to a single vendor. You can switch from Claude to Gemini to whatever comes next without rebuilding everything.

But the second-order effects are bigger.

The tooling ecosystem can finally mature. When everyone's building to the same standard, the market can produce better skill libraries, better debugging tools, better testing frameworks. The energy that used to go into "which platform should I bet on?" can now go into "what should I actually build?"

Enterprises can stop hedging. The biggest blocker to AI agent adoption in large organizations hasn't been capability—it's been fear of lock-in. CIOs hate betting on proprietary formats that might become obsolete. Open standards remove that objection.

The talent pool deepens. When skills are portable, experience transfers. A developer who learned to build Claude Code skills can now apply that knowledge to Antigravity projects. Training investments compound instead of depreciate.

What "Skills Portability" Looks Like in Practice

Here's a concrete example. Say you've built a skill that monitors your company's Slack channels, summarizes daily activity, and posts a digest to your project management tool. Under the old regime, that skill was tied to whatever platform you built it on.

Now? That skill is a folder. The SKILL.md file describes what it does. The implementation scripts do the work. You can drop that folder into Claude Code, Google Antigravity, or any other compliant agent. The agent reads the manifest, understands what the skill offers, and executes it when relevant.
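Concretely, such a skill folder might look like this (the folder and file names are hypothetical):

```
slack-digest/
├── SKILL.md              # name, description, step-by-step instructions
└── scripts/
    ├── fetch_messages.py # pulls the day's Slack messages
    └── post_digest.py    # posts the summary to the PM tool
```

Everything the agent needs to discover, understand, and run the skill lives inside that one directory, which is what makes it portable across platforms.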

This is why Google explicitly adopted Anthropic's format rather than creating its own. The company saw the writing on the wall: competing on proprietary skill formats was a losing game. Better to compete on model quality, speed, and features while letting skills flow freely.

The Implications for Tool Selection

If you're evaluating AI agent platforms today, the calculus just changed.

Stop asking "which ecosystem should I commit to?" The ecosystem is converging. Any serious platform will support Agent Skills and MCP. If a vendor is still pushing proprietary formats with no migration path, treat that as a red flag.

Start asking "what's the skill library like?" With standardization, the moat moves from protocol to content. Platforms that ship with robust, well-tested skill libraries—or that have thriving marketplaces—will have an advantage.

Focus on execution quality. When skills are portable, differentiation comes from how well agents execute them. How fast? How reliably? How good is the reasoning when tasks get complex? These questions matter more now that format wars are over.

What This Doesn't Solve

Let's be clear about limits.

Standardization doesn't make agents smart. A poorly designed skill is still a poorly designed skill, regardless of what format it's in. The hard work of building reliable, production-grade automation doesn't get easier just because portability improved.

And we're not at full interoperability yet. Edge cases exist. Proprietary extensions exist. The standards will evolve. But the trajectory is clear: the industry is consolidating around shared protocols, and holdouts will find themselves increasingly isolated.

The Bigger Picture

We've seen this movie before. The browser wars ended with web standards. Mobile fragmentation resolved when iOS and Android stabilized their APIs. Cloud computing got serious when Kubernetes became the orchestration standard everyone agreed on.

AI agents are following the same pattern, just faster. What took the browser ecosystem a decade is happening in AI over two years.

For businesses building with AI agents, this is unambiguously good news. The question isn't which standard to bet on anymore. The question is what to build first now that the foundation is stable.


Building AI employees on open standards? At Geta.Team, we deploy AI virtual employees that integrate with your existing tools through standardized protocols—no lock-in, no proprietary formats. Try it here: https://Geta.Team
