Trace3 Blog | All Possibilities Live In Technology

Unpacking MCP’s Security Challenges and the Defenses Rising to Meet Them

Written by Katherine Walther | August 21, 2025
By Katherine Walther | Trace3 SVP of Innovation

 

Anthropic introduced the Model Context Protocol (MCP) in late 2024, and by summer 2025, enterprises across industries are adopting the open-source standard to enable direct communication between AI agents and tools, data repositories, services, and more.

It’s one of the fastest adoption motions the Trace3 Innovation team has seen in over a decade. The interoperability it unlocks is powerful but without guardrails, it introduces meaningful risk.

As adoption accelerates, security leaders are asking the right questions: how do we enable this technology securely, without slowing innovation? Defining secure, efficient agent-to-tool interactions is now essential to move this space forward.

Equally exciting is the wave of emerging startups building thoughtful, security-first approaches to MCP enablement, signaling not just a new protocol, but a new market being born.

 

Quick Refresher on MCP

MCP is like a universal adapter for AI. It acts a bridge between an AI model and your organization assets such as calendars, CRMs, internal APIs, databases, and more. It standardizes how AI queries and acts across systems. Instead of custom one-off integrations, MCP provides a universal API-like interface for connection. This enables more agentic behaviors for AI models.

 

Why CISOs Should Care

When AI agents start invoking tools and acting on behalf of users, the security model must change. The attack surface expands in ways that traditional AppSec and IAM strategies weren’t built to handle.

Here are the most pressing concerns:

  • Prompt Injection: Hidden instructions can be embedded in content the AI reads, tricking it into performing unintended actions (like leaking data or calling risky tools).

  • Tool Confusion or Spoofing: Without guardrails, AI agents can be manipulated into using the wrong tool, or a malicious one that looks legitimate.

  • Over-Privileged Access: If an agent uses system-level credentials instead of user-level ones, it could operate far beyond the user’s intent or role.

  • Secret Leakage: Misconfigured tools can accidentally expose API keys, credentials, or sensitive data via logs or error messages.

  • Informal Secret Sharing: Secret sharing between peers bypasses enterprise controls and is a non-retractable action. This is a serious and often underappreciated security concern.

  • Lack of Visibility: If actions taken by AI agents aren’t logged or traceable, the organization loses accountability, auditability, and containment.

MCP is a powerful protocol, but it is not secure by default. And that’s where emerging solutions are stepping in.

 

How the Market Is Responding

The security community is responding with innovative solutions that fall into six core approaches. Here’s how those strategies are taking shape, and where we see traction in the market:

Guardrails and Proxies: Security layers are being introduced between AI agents and the tools they call. These proxy layers inspect prompts, tool descriptions, and outputs in real time, filtering out malicious instructions, preventing prompt injection, and rewriting dangerous requests. (e.g. Klavis, Pillar Security, Barndoor, Archestra)

Identity Binding and Least Privilege: Vendors are applying mature IAM principles to AI agents. This means tying every action back to the authenticated user, using OAuth with scoped permissions, and ensuring agents never act beyond what the user is allowed to do. (e.g. Descope, Keycard)

Secure Tool Hosting and Execution: Rather than leaving tools to run in unsecured local environments, some companies are offering hardened, managed hosting for MCP tools, complete with encryption, role-based access control, tenant isolation, and built-in logging. (e.g. Skyz)

Policy-Based Approvals: Sensitive actions (like deleting records or initiating transactions) are being gated behind policy-based approvals. These systems ask for user confirmation or escalate decisions when risk thresholds are hit. AI may propose the action, but a human confirms it. (e.g. Outtake)

Full Auditability and Monitoring: New tooling is being designed to give CISOs full visibility into AI-driven activity. This includes logging every tool call, tagging who/what/why, detecting anomalies, and integrating with SIEMs to allow real-time alerting and post-event analysis. (e.g. MCP‑USE, Outtake)

Non-Human Identity & Agent Governance: A growing category of solutions focuses on giving AI agents first-class identity, separate from the user, but fully auditable and policy bound. Some NHI solutions will provide visibility into informal secret sharing while providing the governance guardrails. (e.g. Clutch Security)

 

Adopting MCP Securely

If you’re evaluating or deploying MCP in your environment, here’s where to start:

Treat AI Like a Privileged User: Bind every action to an authenticated identity. Use scoped, short-lived tokens. Avoid hardcoded or system-level credentials. Think: zero trust for agents.

Add a Safety Net (Guardrails): Use a proxy layer to inspect prompts, tool descriptions, and outputs. Filter what goes into and comes out of the model. This blocks prompt injection and malicious tool usage before it starts.

Secure the Tools Themselves: If you’re hosting MCP tools, sandbox them. Encrypt everything. Apply RBAC. Isolate tenants. If you’re not ready to do that internally, look at managed options that can take on the risk.

Define Policy for Sensitive Actions: Some actions should never be automatic. Introduce human approvals or automated escalation for anything high impact. AI can suggest, but people should approve.

Log Everything and Monitor for Abuse: If an agent does something and no one sees it… it’s a risk. Log every action, including parameters and outcomes. Integrate with your SIEM. Flag unusual behaviors (e.g. mass deletes, off-hours access, or unexpected tool calls).

 

Final Thoughts

MCP is redefining how AI interacts with real-world systems. It opens enormous opportunities but also expands the risk surface in ways we’re only beginning to understand.

The good news? Security leaders don’t need to block innovation to stay safe.

By adopting a layered approach, identity-first, with guardrails, secure execution, approvals, and observability, enterprises can move forward with confidence.

If you’re curious to learn more or want to stay on top of the latest enhancements in this space, feel free to reach out to us at innovation@trace3.com.

 

Katherine Walther is the SVP of Innovation at Trace3, where she transforms enterprise IT challenges into innovative solutions. Dedicated to disseminating information about the future of technology to IT leaders across a wide variety of domains. Pairing a unique combination of real-world technology experience with insight from the world’s largest venture capital firms, her focus is to deliver market trends in the key areas impacting industry leading organizations. Based out of Scottsdale, Arizona Katherine leverages her 22 years of both tactical and strategic IT experience to help organizations’ transform leveraging emerging technologies.