AI agents can’t hold a master key
By Eric Barroca
Published on March 23, 2026.
The author argues that the only identity model that scales safely is delegated identity. Agents are information-intensive by nature and can create unauthorized access and data exposure at scale; an agent that can "do anything" is an enterprise liability precisely because of the breadth of access it holds. Permissions must therefore be enforced at the moment of access, not after generation, not after review, and not through prompts. Under the delegate model, an agent accesses systems on behalf of the person who asked it to act, bound by the same constraints, policies, and permission boundaries as that person. This doesn't eliminate complexity; agents amplify it, so the article suggests that delegating tasks to AI agents requires careful coordination across the security, infrastructure, and application layers.
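The delegate model described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `User`, `Agent`, and `read` names are mine, not the article's): the agent carries no credentials of its own, and every access is checked against the delegating user's permissions at the moment the resource is touched, not via prompt instructions or post-hoc review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    permissions: frozenset  # resources this user may read

@dataclass
class Agent:
    """An agent that acts *as* the delegating user, never with its own master key."""
    acting_for: User

    def read(self, store: dict, resource: str) -> str:
        # Enforcement happens at the moment of access: the agent inherits
        # exactly the delegating user's boundary, nothing more.
        if resource not in self.acting_for.permissions:
            raise PermissionError(f"{self.acting_for.name} cannot read {resource}")
        return store[resource]

store = {"payroll.csv": "salaries", "roadmap.md": "Q3 plans"}
alice = User("alice", frozenset({"roadmap.md"}))
agent = Agent(acting_for=alice)

print(agent.read(store, "roadmap.md"))  # allowed: alice can read this
try:
    agent.read(store, "payroll.csv")    # denied: outside alice's boundary
except PermissionError as e:
    print("denied:", e)
```

The design point is that the check lives inside the access path itself, so no prompt injection or generation mistake can widen what the agent can reach beyond what its delegating user could reach.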
Read Original Article