Trace3 Blog | All Possibilities Live In Technology

The Agentic Enterprise is Here: Time for Security to Catch Up!

Written by Lars Hegenberg | July 10, 2025
By Lars Hegenberg| Trace3 Innovation Researcher

 

The rise of generative AI was fast. The rise of agentic AI? Even faster.

In the race to unlock value, agent development has become democratized, and many enterprises are embracing agentic features without fully understanding the risks. AI agents promise automation and autonomy — and for the most part, they already deliver. But while organizations rush to deploy by leveraging new protocols like MCP, threat actors are adapting just as quickly, finding new ways to exploit emerging gaps.

Not every so-called “AI agent” is truly agentic, which means some solutions create more confusion for cybersecurity leaders than adding true agentic risk. Still, the architecture that powers real agentic systems — and the way it connects to enterprise systems — introduces a fresh kind of security challenge.

This blog focuses on that architecture: what it looks like, the different components that need to be secured, and emerging approaches that try to address the evolving security risks.

And while many of these solutions or approaches still lack maturity, they raise some fundamental questions that enterprises need to address when deep diving into the world of agents.

 

Why Securing AI Agents Is Uniquely Challenging

AI agents don’t behave like traditional software — they make decisions, sequence actions, and adapt based on evolving context. Their stochastic nature introduces patterns that existing security programs weren’t designed to handle:

  • Autonomy & Unpredictability: AI agents operate with a high degree of autonomy – they plan multi-step actions and make decisions without step-by-step human instructions. This dynamic, self-directed behavior makes their outcomes less predictable than traditional software.

  • Dynamic Tool Use: Unlike static applications, agents can call APIs, invoke tools, and even write code on the fly to achieve goals. They effectively blur the line between data and code, meaning an injected prompt or data can turn into real actions on systems.

  • Persistent Memory & Context: AI agents carry state (remembering past interactions or using long-term memory stores). This stateful memory enables continuity but also expands the attack surface – malicious inputs can drastically influence future agent behavior.

  • Evolving Threat Vectors: like prompt injection attacks or model tampering – target the agent’s logic and context, exploiting weaknesses that don’t exist in standard apps. Agents also often run with user-like privileges, so if compromised they can misuse sensitive access in ways traditional software wouldn’t.

  • Lack of Standardized Security Frameworks: Since AI agents only burst onto the scene at the beginning of last year, there is a lack of mature, industry-wide standards for this new technology. Frameworks like Model Context Protocol (MCP) are just beginning to emerge, and many organizations are left building custom security approaches from scratch. This fragmentation leads to inconsistent risk coverage. 

 

What Needs to be Secured in Agentic Systems

Even though AI agents are often seen as this new tool that magically finds a way to execute any command given by a user, there are many components and interactions that need to happen on the backend for an agent to work reliably (see figure 1). Each of these tools and workflows will have to be protected: 

  • Non-human identities (NHIs): AI agents are essentially NHIs that take on dynamically shifting roles and require access to applications, data, services etc. This poses access management challenges and creates additional attack surfaces through APIs and uncontrolled data exchange. Unlike static service accounts, managing these NHIs demands real-time credential delivery, permission scoping, and behavioral monitoring. Like human identities, there should be on/offboarding mechanisms in place, and data exchanges should be carefully governed.

  • Agent Inputs & Outputs: All inputs (user prompts, instructions, files) need validation and sanitization, since malicious or unpredictable inputs can hijack the agent’s behavior. At the same time, an agent’s output or action needs to be monitored and filtered, as these could leak sensitive data or violate policies. Controls like DLP (Data Loss Prevention) and output vetting are critical to prevent data leakage or harmful content generation.

  • Planning/Orchestration Logic: This is about protecting the agent’s orchestration – the logic that sequences its actions and tool use. An attacker who manipulates this logic would alter the agent’s plans and actions. Emerging frameworks like MCP aim to introduce secure patterns for agent orchestration, memory use, and tool/API calls. Ensuring the agent only executes authorized steps (and can’t be tricked into skipping or adding steps) is key.

  • Tools and Memory Stores: Most agents use tools (e.g. database queries, CRMs etc.) which expand the attack surface. There should be a limit as to which tools the agent can interact with, which commands or functions the agent can invoke, and outputs should be validated. For example, if an agent can execute code, it must be in a restricted environment to prevent system misuse. If the agent has a memory or knowledge base (e.g. vector databases), it also needs protection. This requires implementation of access controls and checks on the memory content.

  • MCP: The protocol standardizing how AI models discover and interact with external tools, has gained massive traction. And while it can provide centralized oversight and control, current authentication and authorization approaches lack enterprise-grade features like OAuth compliance, SSO integration, and granular permission management. It raises additional questions like: Which tools do agents gain access to? Who is managing requests for API keys? How do you prevent misuse? And are there malicious MCP servers connected to the environment? 

 

Addressing Security Risks 

Even though adoption rates are rising at a blistering pace, practical security measures and solutions for AI agents are still trailing. So, while the market is still catching up, a priority for any organization should be to develop robust internal security policies that can answer key questions around how AI agents will be leveraged.

  • Use Cases: Which use cases are acceptable for agents within the organization?

  • Ownership: Who will own and be accountable for agents’ security and compliance?

  • Agent Lifecycle: What are the procedures for secure onboarding, updating, and offboarding of agents?

  • Authentication: How will agents authenticate themselves to systems and APIs securely, and what standards will be enforced?

  • Credential Sharing: How will API keys and credentials issued for AI agent access be managed to prevent unauthorized use, especially in cases of oversharing or repurposing keys beyond their intended scope or user?

  • Data: What data should agents gain access to? How will sensitive data be protected in transit and at rest?

  • Model layer: What LLMs are agents relying on and are these models safe to use?

  • Tool Use: What approved tools and platforms will agents gain access to, and how will usage be regulated?

  • Performance/Reliability: How will the performance and reliability of agents be monitored and maintained?

  • Input/Output Monitoring: How will input and output activities be monitored to detect and prevent unauthorized or harmful actions?

  • Regulatory Frameworks: How will policies proactively address and comply with evolving regulatory frameworks?

The next logical set of questions would be – how do I make sure my agents don’t violate any of these policies? How do I gain control over the behavior of a tool that is stochastic in nature?

And even though emerging solutions are trying to address this with AI firewalls or AI gateways that claim to block actions and enforce policies and agent behavior in real time – we are still a long way out from a comprehensive, end-to-end solution.

Nevertheless, we have seen strong offerings and best practices that can unlock visibility into agentic systems and manage credentials/access of these:

 

AI agent Discovery:

Since you can’t protect what you can’t see, discovery of AI agents is essential. What makes this a particular challenge is the democratization of agent development, as well as existing vendors introducing agentic workflows through their platforms (e.g. ServiceNow). To provide visibility, there are multiple possible approaches: For homegrown agent initiatives, one focus area should be written code to identify AI agents in development or already in production. This would include code repository inspections to detect AI frameworks and libraries, tokens, as well as any dependencies with AI logic. Solutions can also help create an automated AI Bill of Materials (AI-BOM) that provides critical insights into the datasets, models, and configurations that power AI systems.

Other security solutions take a more integrated approach by connecting to the broader AI ecosystem to discover AI assets across the stack. This would include Cloud AI services (e.g. Bedrock, SageMaker), Data and AI solutions (Snowflake, Jupyter Notebooks), MLOps frameworks, as well as third-party agents. Another way to detect early deployments and shadow IT is by monitoring network traffic and API calls. Ideally, these solutions don’t just provide continuous asset discovery, but give rich context such as what the capabilities and who the owners of a given agent are. Finally, some solutions scan for agentic activity using monitoring and observability controls installed on the endpoint. This would also highlight MCP server processes running locally.

Examples of emerging solutions: Zenity, Operant.ai, Noma Security, Pillar Security

 

Access & Credentials Management:

As AI agents take on tasks involving sensitive data and systems, they must be treated as dynamic identities—subject to strict access governance. This starts with uniquely identifying agents, including their purpose, capabilities, limitations and ownership. IAM systems should be able to issue, discover, govern and control a wide set of client credentials used by these agents. To ensure robust authentication practices, agents should have to verify their identity through token-based or certificate-based authentication (mutual TLS), private keys (JWTs), or user authentication via OAuth/SAML (authenticating the agent as the user for whom it is operating). For short-lived or task-specific agents, dynamic or just-in-time access issuance can greatly reduce any agent privileges.

MCP is currently on track to becoming the industry standard for how agents interact reliably with enterprise systems. However, when it comes to authorization, it can complicate scalability by requiring stateful servers and fragmenting OAuth practices. To address this, authorization servers should be separated from resource servers. This means treating the MCP server as a resource server only, and using an external, dedicated authorization server for OAuth flows. When it comes to API access for AI agents, it should be tied to both roles and scopes to allow for fine-grained access while also considering role-based governance. Using scopes and roles in conjunction enables granular permissions while also accounting for business logic and hierarchies.

Solutions like Descope adopt these best practices with an agentic identity control plane that controls how agents access applications, restricts agents to specific scopes/tools, and creates, manages, and revokes agentic permissions. Descope also secures remote MCP servers with enterprise grade authorization by automating e.g. OAuth 2.0/2.1, dynamic client registration, and authorization code flow with PKCE. As regulations evolve, especially for agents touching sensitive data, organizations must ensure lifecycle oversight and prepare for future compliance requirements.

Examples of emerging solutions: Descope, Barndoor.ai, Clutch Security, Oasis Security

 

Putting Security into Action  

While leaders are busy developing AI agent security strategies, chances are multiple unsupervised agents are already taking on workflows within the organization. To kick-start any security program, robust company guidelines and policies should be put in place as soon as possible. These policies should cover anything from permissible use cases and tool use, to ownership, lifecycle management, and data access. These policies should be updated regularly to keep up with emerging frameworks, protocols, and regulations.

As AI agents proliferate, so will shadow AI. That means the next step is gaining visibility. Map the agents already running in your environment and flag those being built in the dark. Inventory any agents that need attention based on their level of agency and permissions, and govern the entire lifecycle – from onboarding to offboarding. Also ensure accountability along the way by tracking ownership of agents.

When agents take on real roles, they'll need to be treated like digital coworkers. Identity and access controls must evolve, extending Zero Trust to cover agents’ decisions, tools, and context. This is where emerging solutions like Descope have already developed strong capabilities.

While protocols like MCP offer new promising ways to centralize oversight and control, they also introduce new risks. Stay up to date by following public benchmarks and frameworks like OWASP’s Top 10, run red teaming exercises, and monitor the regulatory landscape closely.

Finally, evaluate emerging security solutions that tackle AI-specific security threats and address the latest frameworks or trends with their offerings. While runtime security and complex policy enforcement tools for agentic workflows still lack maturity, expect new solutions to come out that address the probabilistic nature of AI agents.

Agentic AI is rewriting how work gets done — but let’s not forget about security.

 

Lars is an Innovation Researcher on Trace3's Innovation Team, where he is focused on demystifying emerging trends & technologies across the enterprise IT space. By vetting innovative solutions, and combining insights from leading research and the world's most successful venture capital firms, Lars helps IT leaders navigate through an ever-changing technology landscape.