Enterprise identity was created for humans, not AI agents



Presented by 1Password


Adding agent capabilities to enterprise environments is fundamentally reshaping the threat model by introducing a new class of actor into identity systems. The problem: AI agents are taking action within sensitive enterprise systems, logging in, fetching data, calling LLM tools, and executing workflows, often without the visibility or control that traditional identity and access systems were designed to impose.

AI tools and autonomous agents are proliferating across enterprises faster than security teams can instrument or govern them. At the same time, most identity systems still assume static users, long-lived service accounts, and rough role assignments. They were not designed to represent delegated human authority, short-lived execution contexts, or agents operating in narrow decision circuits.

As a result, IT leaders must take a step back and rethink the trust layer itself. This change is not theoretical. NIST Zero trust architecture (SP 800-207) explicitly states that “all subjects, including applications and non-human entities, are considered untrusted until they are authenticated and authorized.”

In an agent world, that means AI systems must have explicit and verifiable identities of their own, not operate through inherited or shared credentials.

"Enterprise IAM architectures are designed to assume that all system identities are human, meaning they have consistent behavior, clear intent, and direct human responsibility for enforcing trust." says Nancy Wang, CTO at 1Password and venture partner at Felicis. “Agent systems break those assumptions. An AI agent is not a user that can be trained or periodically reviewed. It is software that can be copied, forked, scaled out, and left running in tight execution cycles across multiple systems. If we continue to treat agents like humans or static service accounts, we lose the ability to clearly represent who they act for, what authority they have, and how long that authority should last.”

How AI Agents Turn Development Environments into Security Risk Zones

One of the first places where these identity assumptions break down is the modern development environment. The integrated developer environment (IDE) has evolved beyond a simple editor into an orchestrator capable of reading, writing, executing, searching, and configuring systems. With an AI agent at the center of this process, rapid injection transitions are not just an abstract possibility; They become a specific risk.

Because traditional IDEs were not designed with AI agents as a core component, adding non-original AI capabilities introduces new types of risks that traditional security models were not built to account for.

For example, AI agents inadvertently violate trust boundaries. A seemingly harmless README could contain hidden directives that trick a wizard into exposing credentials during standard scanning. Project content from untrusted sources can alter agent behavior in unintended ways, even when that content has no obvious resemblance to a message.

Input sources now extend beyond files that are deliberately executed. Agents incorporate tool documentation, configuration files, file names, and metadata as part of their decision-making processes, influencing how they interpret a project.

Trust is eroded when agents act without intention or responsibility

When you add deterministic, highly autonomous agents operating with elevated privileges, with the ability to read, write, execute, or reconfigure systems, the threat grows. These agents have no context or ability to determine whether an authentication request is legitimate, who delegated that request, or what boundaries should be set around that action.

"With agents, it cannot be assumed that they have the ability to make accurate judgments and they certainly lack a moral code." says Wang. "Each of their actions must be appropriately limited and access to sensitive systems and what they can do within them must be more clearly defined. The tricky part is that they continually take action, so they also need to continually be limited."

Where traditional IAM fails with agents

Traditional identity and access management systems operate under several basic assumptions that agent AI violates:

Static privilege models fail with autonomous agent workflows: Conventional IAM grants role-based permissions that remain relatively stable over time. But agents execute chains of actions that require different levels of privileges at different times. Least privilege can no longer be a set it and forget it setting. It should now be dynamically scoped with each action, with automatic update and expiration mechanisms.

Human responsibility collapses for software agents: Legacy systems assume that each identity can be traced back to a specific person who can be held responsible for actions taken, but agents completely blur this line. It is now unclear when an agent acts or under what authority it operates, which is already a huge vulnerability. But when that agent is duplicated, modified, or left running long after it has served its original purpose, the risk multiplies.

Behavior-based detection fails with continuous agent activity: While human users follow recognizable patterns, such as logging in during business hours, accessing familiar systems, and performing actions that align with their job duties, agents continually operate on multiple systems simultaneously. That not only multiplies the potential for damage to a system, but also causes legitimate workflows to be flagged as suspicious by traditional anomaly detection systems.

Agent identities are often invisible to traditional IAM systems: Traditionally, IT teams can configure and manage more or less identities that operate within their environment. But agents can create new identities dynamically, operate through existing service accounts, or leverage credentials in ways that make them invisible to conventional IAM tools.

"It’s all the context, the intent behind an agent, and traditional IAM systems don’t have any capability to manage that." says Wang. "This convergence of different systems makes the challenge broader than mere identity, requiring context and observability to understand not only who acted, but also why and how."

Rethink security architecture for agent systems

Protecting agent AI requires rethinking enterprise security architecture from the ground up. Several key changes are necessary:

Identity as a control plane for AI agents: Instead of treating identity as one security component among many, organizations should recognize it as the fundamental control plane for AI agents. Major security vendors are already moving in this direction, and identity is being integrated into every security solution and stack.

Context-aware access as a requirement for agent AI: Policies must become much more granular and specific, defining not only what an agent can access, but under what conditions. This means considering who invoked the agent, what device it is running on, what time restrictions apply, and what specific actions are allowed within each system.

Handling zero-knowledge credentials for autonomous agents: One promising approach is to keep credentials completely out of sight of agents. Using techniques such as agent autocomplete, credentials can be injected into authentication flows without agents seeing them in plain text, similar to how password managers work for humans, but extended to software agents.

Auditability requirements for AI agents: Traditional audit logs that track API calls and authentication events are insufficient. Agent auditability requires capturing who the agent is, under what authority they operate, what scope of authority they were given, and the full chain of actions taken to achieve a workflow. This mirrors the detailed activity logging used for human employees, but must be adapted to software entities that execute hundreds of actions per minute.

Enforce trust boundaries between humans, agents and systems: Organizations need clear, enforceable boundaries that define what an agent can do when invoked by a specific person on a particular device. This requires separating intent from execution: understanding what a user wants an agent to accomplish from what the agent actually does.

The future of enterprise security in an agent world

As agent AI becomes integrated into everyday business workflows, the security challenge is not whether organizations will adopt agents; it’s about whether the systems that govern access can evolve to keep pace.

Locking down AI at the edge is unlikely to grow, but neither will scaling legacy identity models. What is required is a shift towards identity systems that can account for context, delegation and responsibility in real time, both between humans, machines and AI agents.

“Tiered functionality for agents in production won’t just come from smarter models,” says Wang. “It will come from predictable authority and enforceable trust boundaries. Enterprises need identity systems that can clearly represent who an agent acts for, what it is allowed to do, and when that authority expires. Without that, autonomy becomes an unmanaged risk. With that, agents become governable.”


Sponsored articles are content produced by a company that pays to publish or has a business relationship with VentureBeat, and are always clearly marked. For more information, contact sales@venturebeat.com.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *