Your AI Agent Has Its Own Identity Now. Okta, Microsoft, and the Race to Secure the Agentic Enterprise.

Last week, Okta's CEO Todd McKinnon called it "our most important product ever." Three days earlier, Microsoft released an end-to-end security framework for agentic AI at RSAC. AWS quietly rolled out Bedrock AgentCore Identity. The IETF published a new draft standard called AIMS.

Four announcements. Same week. Same problem.

Your AI agents have credentials. Real ones. And nobody's been watching them.

The Problem Nobody Wanted to Talk About

Here's a stat that should make every CTO uncomfortable: 93% of audited AI agent projects use unscoped API keys. Not temporary tokens. Not least-privilege credentials. Full-access, never-expiring API keys hardcoded into agent configurations.

It gets worse. According to a 2026 industry survey, 45.6% of teams rely on shared API keys for agent-to-agent authentication. Nearly half of all enterprise AI agents are sharing the same keys like a family Netflix account.

Enterprise identity management was built for humans. It assumes consistent behavior, clear intent, and direct accountability. AI agents break all three assumptions. They can be copied, forked, and scaled to thousands of instances in minutes. And machine identities now outnumber human identities 82 to 1, according to VentureBeat's analysis.

So when an agent's token gets leaked -- and tokens do get leaked -- the blast radius isn't one compromised account. It's everything that agent touched, everything it had access to, and every downstream system it authenticated against.

What Actually Happened

This isn't theoretical. In 2025, stolen OAuth tokens from Drift's Salesforce integration gave attackers access to over 700 customer organizations. A supply chain attack on OpenAI's plugin ecosystem compromised credentials from 47 enterprise deployments and went undetected for six months.

More recently, the Moltbook database exposure in 2026 leaked 1.5 million API keys and 4,060 private agent-to-agent conversations containing plaintext OpenAI API keys. And Zenity Labs' "PleaseFix" disclosure revealed that agentic browsers like Perplexity Comet can be hijacked through content as mundane as a Google Calendar invitation -- zero clicks required.

As Zenity CTO Michael Bargury put it: "This is not a bug. It is an inherent vulnerability in agentic systems."

The OWASP Foundation agreed. Their new Agentic Top 10, developed by over 100 security experts, lists "Identity & Privilege Abuse" as the third most critical vulnerability in agent systems -- right after goal hijacking and tool misuse.

The Identity Layer Race

So what's being built? Three very different approaches, all shipping within weeks of each other.

Okta for AI Agents (GA April 30, 2026) treats agents as first-class, non-human identities with their own lifecycle management. The platform includes a Universal Directory for agent identities, an Agent Gateway that acts as a centralized control plane with full audit logging, and something called Identity Security Posture Management (ISPM) that discovers "shadow AI" -- unauthorized agents built by teams who didn't wait for IT approval.

That last feature matters more than it sounds. Gartner estimates that 69% of organizations have employees using prohibited GenAI tools. By 2030, 40% of global enterprises will suffer security incidents from unauthorized AI deployments. Okta is betting that the biggest threat isn't external attackers -- it's your own teams spinning up agents without oversight.

Microsoft's framework takes a different angle: extending Zero Trust to cover agents end-to-end. Their upcoming Agent 365 (GA May 1) provides a unified agent registry visible to both IT admins and security teams, with risk-based access policies through Microsoft Entra. The interesting piece is Entra Agent ID, which introduces four new identity object types specifically for agents -- separating the agent itself from the blueprint it was created from, and from the user it acts on behalf of.

AWS Bedrock AgentCore Identity focuses on two clean authentication patterns: user-delegated access (OAuth 2.0 authorization code grant for when agents need to access user-specific data) and machine-to-machine (client credentials grant for system-level resources). Less ambitious than Okta or Microsoft, but arguably more practical for teams that just need agents to stop sharing API keys.
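Those two patterns are just standard OAuth 2.0 grants underneath. As a rough sketch (not AWS's actual API -- the function names and credential values here are illustrative placeholders), the difference comes down to what goes in the token request:

```python
# Sketch of the two authentication patterns as plain OAuth 2.0 token
# request bodies. Credential values and names are placeholders.
from urllib.parse import urlencode

def machine_to_machine_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Client credentials grant: the agent authenticates as itself,
    for system-level resources."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,  # request only what this task needs
    }

def user_delegated_request(client_id: str, code: str, redirect_uri: str) -> dict:
    """Authorization code grant: the agent acts on behalf of a user
    who explicitly consented, for user-specific data."""
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "code": code,
        "redirect_uri": redirect_uri,
    }

body = urlencode(machine_to_machine_request("agent-42", "s3cret", "orders:read"))
```

The point of the split: an agent fetching its own system data never holds anything user-scoped, and an agent acting for a user carries a credential that traces back to that user's consent.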

Why This Matters for Everyone, Not Just Enterprises

You might read "Okta" and "Microsoft" and think this is a Fortune 500 problem. It's not.

The same survey that found 93% of agents using unscoped keys also found that only 14.4% of AI agent deployments went live with full security and IT approval. The other 85.6% are running in production with whatever credentials someone pasted into a config file during development.

And Cisco's 2026 report puts it bluntly: only 29% of organizations are prepared to secure their agentic AI deployments. Meanwhile, 48% of cybersecurity professionals now identify agentic AI as the number one attack vector -- outranking deepfakes, ransomware, and supply chain compromises.

The gap between adoption and security is widening, not narrowing: 80.9% of technical teams are actively testing AI agents or running them in production, but only 34% have AI-specific security controls in place.

The Right Way to Think About Agent Identity

The IETF's new AIMS draft (Agent Identity Management System) offers the clearest framework for how agent identity should actually work. It composes existing standards -- WIMSE for workload identity, SPIFFE for service identity, and OAuth 2.0 for authorization -- into a coherent model where agents get:

  • Dynamic privilege scoping: credentials that expire per action, not per session
  • Clear delegation chains: every agent action traces back to the human or system that authorized it
  • Distinct identity class: agents are neither users nor services, they're a third category that needs purpose-built infrastructure
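To make those three properties concrete, here's an illustrative sketch (this is not the AIMS wire format or any standard's actual schema -- the names are invented for illustration) of a per-action, expiring credential that records its delegation chain:

```python
# Illustrative model of the three AIMS properties: a credential scoped to
# one action, expiring on its own, carrying the chain of who authorized it.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class ActionToken:
    agent_id: str           # distinct identity class: not a user, not a service
    action: str             # dynamic scoping: one credential per action
    delegation_chain: list  # who authorized whom, back to a human principal
    expires_at: float
    token_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def mint(agent_id: str, action: str, chain: list, ttl_seconds: int = 60) -> ActionToken:
    """Issue a credential valid for a single action and a short window."""
    return ActionToken(agent_id, action, chain, time.time() + ttl_seconds)

def is_valid(tok: ActionToken, requested_action: str) -> bool:
    """A token only authorizes the exact action it was minted for."""
    return tok.action == requested_action and time.time() < tok.expires_at

tok = mint("invoice-agent", "db:read:invoices",
           ["alice@example.com", "orchestrator", "invoice-agent"])
```

If this token leaks, the attacker gets one action for at most sixty seconds, and the delegation chain tells you exactly which human principal the session traces back to.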

This is the direction the industry is heading. The question is whether organizations adopt these patterns proactively or wait for an incident to force their hand.

What This Means for Your Setup

If you're deploying AI agents today -- whether that's a single assistant or a team of them -- the identity question isn't optional anymore. Here's the minimum bar:

  1. No shared credentials. Every agent gets its own identity. Period.
  2. Scoped, time-limited tokens. If an agent needs database access for one task, it gets a token for that task. Not a permanent key to everything.
  3. Audit logging. You should be able to answer "what did this agent do, when, and under whose authority?" at any point.
  4. Revocation capability. If an agent misbehaves, can you cut its access instantly? Meta learned this lesson the hard way with a Sev 1 breach caused by a rogue agent that passed every identity check but still went sideways.
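The four requirements above fit in surprisingly little code. This is a minimal in-memory sketch, not any vendor's API -- a real deployment would back this with your identity provider and persistent audit storage:

```python
# Minimal sketch of the checklist: per-agent identities, scoped short-lived
# tokens, an audit log, and instant revocation. Names are illustrative.
import time

class AgentIdentityRegistry:
    def __init__(self):
        self.revoked = set()
        self.audit_log = []  # answers "what did this agent do, and when?"

    def issue(self, agent_id: str, scope: str, ttl: int = 300) -> dict:
        """Each agent gets its own scoped, time-limited token. No sharing."""
        token = {"agent": agent_id, "scope": scope,
                 "expires": time.time() + ttl}
        self.audit_log.append(("issue", agent_id, scope))
        return token

    def authorize(self, token: dict, requested_scope: str) -> bool:
        """Check revocation, scope, and expiry; log every decision."""
        ok = (token["agent"] not in self.revoked
              and token["scope"] == requested_scope
              and time.time() < token["expires"])
        self.audit_log.append(("access", token["agent"], requested_scope, ok))
        return ok

    def revoke(self, agent_id: str) -> None:
        """Cut off a misbehaving agent instantly; its tokens die with it."""
        self.revoked.add(agent_id)
        self.audit_log.append(("revoke", agent_id))
```

Notice that revocation works at the identity level, not the token level: once the agent is revoked, every token it holds fails the next check, which is exactly the kill switch the Meta incident called for.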

This is exactly why self-hosted deployments matter. When your AI employees run on your infrastructure, the identity layer is yours to control. No credentials leave your environment. No third-party has access to your agent's tokens. The attack surface is fundamentally smaller.

At Geta.Team, this is baked into the architecture. Self-hosted by default, with every AI employee operating under your security policies, your credential management, your audit trail. Because the only way to truly secure an agent's identity is to own the infrastructure it runs on.

The race to build agent identity infrastructure is just beginning. The vendors are shipping. The standards are forming. The only losing strategy is pretending it's someone else's problem.