Model Context Protocol (MCP)

Large language models (LLMs) have a fundamental problem: they operate in isolation. Trained on static datasets with hard cutoffs, these models cannot access live threat intelligence or query your security stack in real time.

The industry faces what’s called the N×M problem: connecting dozens of AI models to hundreds of enterprise tools creates an integration nightmare. Model Context Protocol (MCP) is emerging as the standard that closes this gap. It acts as a unified protocol that lets AI seamlessly access databases, APIs, file systems, and other external resources.

In this guide, we’ll explore MCP’s architecture, security implications, real-world use cases, adoption strategies, potential risks, and frequently asked questions.

What Is MCP (Model Context Protocol)?

Model Context Protocol is an open standard that allows AI models to securely connect with external data sources and systems through a standardised interface. Rather than hardcoding connections between each AI model and enterprise system, MCP establishes a common language that they can all speak.

As summarised by Katie Curtin-Mestre, Proofpoint’s VP of Product Marketing for Information Protection and Data Communications Governance, MCP is “an emerging open standard that’s designed to connect AI agents with external data in a secure, auditable manner”.

The protocol allows language models to fetch live data, execute actions, and interact with enterprise systems while maintaining security controls. A model can query your SIEM, pull threat intel feeds, or update tickets in your incident response platform through the same standardised interface.

This abstraction benefits multiple stakeholders. Developers write integrations once instead of rebuilding them for each model. IT architects gain visibility into what data AI agents access. Security leads can enforce policies and audit trails across all AI interactions rather than managing point solutions.

“Our solution supports all MCP-compliant agents, whether those are commercial offerings or custom agents developed in-house. This ensures universal compatibility across your whole agentic AI ecosystem,” notes Curtin-Mestre. “Leading software companies, including Microsoft (with Copilot Studio), OpenAI (with ChatGPT and Agents SDK), Google (with Google Cloud databases and security operations tools), Amazon Web Services (with the Lambda and Bedrock services), and Salesforce (with Agentforce), are already deploying MCP-compliant agents in enterprise environments.”

MCP Architecture and Components

Understanding MCP requires breaking down its technical layers. Each component plays a critical role in connecting AI models to external systems while maintaining security and performance.

  • Host/Application: This is the user-facing AI application, such as a chat assistant or Integrated Development Environment (IDE), where users interact and initiate tasks. The host coordinates user input, manages permissions, and orchestrates communication between the LLM, clients, and external tools.
  • MCP Client: The client serves as the connection manager and translator within the host, establishing secure sessions with one or more servers. It ensures protocol compatibility, maintains isolation between servers, handles capability negotiation, and routes requests and responses as needed.
  • MCP Server: A server exposes specific functions, tools, and resources to AI models, often acting as an interface to external data sources like SIEMs or ticketing systems. Each server operates independently, advertises its capabilities, and enforces security boundaries in line with protocol requirements (a minimal server sketch follows this list).
  • Transport and Messaging Layer: Communication between clients and servers relies on JSON-RPC 2.0 as the message format, using either STDIO for direct, low-latency local connections or HTTP with Server-Sent Events for remote, distributed environments. The transport layer also manages authentication and message framing to keep data exchanges reliable and secure.
  • Protocol Messages: MCP uses structured requests, responses, and notifications to enable two-way communication, with rigorous schema validation built in. Error handling and contract enforcement ensure that failures or mismatches are handled gracefully, reducing operational risk for security-sensitive workflows.

How MCP Works: Request Flow and Tool Invocation

The flow starts when a user asks a question that requires external data. The MCP client discovers available servers and their capabilities, then matches the request to the appropriate tool. Before invoking anything, permission checks verify that the LLM can access that resource.

In a simple scenario, a security analyst asks, “What’s the current threat level for our email gateway?” The client identifies the relevant MCP server, requests permission, calls the tool, and injects the response back into the LLM’s context. The model then formulates an answer using that live data.

Complex scenarios involve multiple steps. An analyst might ask, “Investigate this suspicious email and create a ticket if it’s malicious.” The LLM discovers it needs to call a forensics tool, query threat intelligence, evaluate results, and potentially invoke a ticketing system. Each step requires its own permission validation and context updates.
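
Under the hood, each of these steps travels as a JSON-RPC 2.0 message. The sketch below shows the approximate shape of a `tools/call` request and result as Python dictionaries; the method name comes from the MCP specification, while the tool and its arguments are hypothetical.

```python
# Approximate shape of an MCP tool invocation as JSON-RPC 2.0 messages.
# "tools/call" is the MCP method name; the tool itself is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "threat_level",                  # tool advertised by the server
        "arguments": {"asset": "email-gateway"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 7,                                     # matches the request id
    "result": {
        "content": [
            {"type": "text", "text": "Threat level for email-gateway: low"}
        ]
    },
}
```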

Advanced implementations support sampling, where the server can request additional input mid-operation. Server-initiated requests let external systems push updates to the LLM without waiting for explicit queries. These extensions transform MCP from a simple request-response system into a bidirectional communication channel.

Comparing MCP, RAG, and Function Calling

These three technologies often get conflated, but they serve different purposes in AI architecture. Understanding when to use each one helps you build systems that balance capability with complexity.

| Aspect | RAG | Function Calling | MCP |
| --- | --- | --- | --- |
| Primary Use | Retrieve relevant documents to enrich LLM context | Trigger predefined actions or API calls | Standardise connections between models and external tools |
| Strengths | Grounds responses in actual data; reduces hallucinations | Enables LLMs to take actions beyond text generation | Portable integrations; works across models and platforms |
| Limitations | Read-only; cannot execute commands or update systems | Requires custom code for each model-tool pairing | Newer standard with evolving ecosystem |
| Best For | Knowledge search, policy lookups, documentation queries | Single-model deployments with specific tool needs | Multi-model environments needing auditable tool access |

RAG and MCP work well together in real-world scenarios. An analyst might need to answer, “What does our data retention policy say about threat logs, and how many logs are we currently storing?” RAG retrieves the policy document while MCP invokes a tool to query the SIEM for actual log volumes. The LLM synthesises both inputs into a complete answer.

For IT directors evaluating architectures, the decision comes down to scale and standardisation. RAG-only setups work fine when you need passive information retrieval from internal knowledge bases. Add function calling when a single model needs to trigger specific actions. Choose MCP when you have multiple AI systems that need consistent, auditable access to your security stack.
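
For teams experimenting with this combination, the sketch below shows the MCP half of the workflow using the official Python SDK: a client connects to a local server over STDIO, discovers its tools, and invokes one. The server script and tool name are hypothetical; a RAG pipeline would supply the policy text alongside the tool result.

```python
# Client-side sketch with the official Python SDK: connect over STDIO,
# discover tools, and invoke one. Server command and tool name are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["security_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # capability negotiation
            tools = await session.list_tools()    # discover advertised tools
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "threat_level", arguments={"asset": "email-gateway"}
            )
            print(result.content)                 # inject into the LLM's context

asyncio.run(main())
```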

Benefits and Business Value

For security teams juggling multiple AI initiatives, MCP delivers tangible advantages that translate directly into faster response times and reduced operational overhead.

  • Reduced hallucination and more factual grounding: When LLMs access real-time data through MCP, they answer questions based on actual system state rather than guessing from training data. A security analyst investigating an incident gets current firewall logs and live threat intelligence instead of plausible-sounding fabrications. This grounding in facts reduces the risk of acting on false information during time-sensitive security events.
  • Easier tool and system integration: Building custom connectors for every AI model and security tool creates massive technical debt. MCP standardises these connections so you write an integration once and reuse it across different models. Your team spends less time maintaining brittle API bridges and more time improving detection logic and response playbooks.
  • Better modularity, reusability, and vendor neutrality: MCP servers work with any compliant client regardless of which LLM provider you choose. If you decide to switch from one model to another or add a new AI tool to your stack, existing integrations continue working without modification. This flexibility protects your investment and prevents vendor lock-in as the AI landscape evolves.
  • Scalability and maintainability: As your AI footprint grows, MCP provides a system of record for what tools exist and what permissions each agent holds. Security operations centres running multiple AI assistants can manage access policies centrally rather than tracking scattered configurations. When you need to revoke access or audit what data an agent touched, you have a single protocol layer to examine.
  • Enables autonomous and agentic workflows: MCP supports multi-step operations where AI agents make decisions and take actions with minimal human intervention. An agent might detect a phishing campaign, correlate it with threat intelligence, identify affected users, and automatically quarantine suspicious emails. These autonomous workflows compress response times from hours to minutes while maintaining the audit trail compliance teams require.
  • Enhanced visibility and audit trails: MCP creates a unified logging layer for all AI interactions with enterprise systems. Security teams can track exactly what data each agent accessed, when, and why. This visibility becomes essential during security audits, compliance reviews, or incident investigations when you need to reconstruct what an AI agent did and what information it had access to.

Use Cases in Cybersecurity and Enterprise

MCP’s real value emerges when applied to security operations. These use cases demonstrate how the protocol transforms AI from a static assistant into an active participant in defence workflows.

Threat Intelligence Enrichment

Security teams can connect LLMs to multiple threat feed APIs, internal logs, and vulnerability databases through MCP servers. When analysts investigate an indicator of compromise, the AI automatically enriches it with context from all available sources. SMBs might limit this to SaaS-based threat feeds, while enterprises integrate deeper with on-premises SIEM data and proprietary intelligence platforms.
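
As an illustration, an enrichment server might expose a tool like the one sketched below. The feed endpoint and response fields are placeholders, not a real vendor API.

```python
# Hypothetical IOC-enrichment tool exposed over MCP. The feed URL and response
# fields are illustrative placeholders, not a real vendor API.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("threat-enrichment")

@mcp.tool()
async def enrich_ioc(indicator: str) -> dict:
    """Look up an indicator of compromise against a (placeholder) threat feed."""
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.get(
            "https://feeds.example.com/v1/indicators",  # placeholder endpoint
            params={"q": indicator},
        )
        resp.raise_for_status()
        # e.g. reputation score, first-seen date, related indicators
        return resp.json()
```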

Automated Incident Response

MCP enables LLMs to trigger SOAR playbooks, query endpoints, isolate compromised systems, and update tickets without human intervention. The AI reasons through the situation and takes appropriate action based on severity and context. CISOs must mitigate misuse risk through approval flows for high-impact actions and comprehensive logging of every automated decision.

Security ChatOps Assistants

Internal helpdesk bots can answer policy questions, check user permissions, reset credentials, and query security tools through natural language. MCP provides a secure bridge between conversational interfaces and backend systems. Success depends on proper scoping so assistants cannot escalate privileges or expose sensitive data beyond their authorisation level.

Compliance and Policy Reasoning

LLMs can evaluate whether configurations meet regulatory compliance requirements by querying the actual system state through MCP. The AI compares live data against policy documents retrieved via RAG to identify gaps in real time. This dynamic approach catches drift faster than periodic manual audits.

Security, Risks, and Mitigations

MCP introduces powerful capabilities but also expands the attack surface. Security researchers have identified vulnerabilities that require careful architectural planning.

Tool Misuse and Unauthorised Actions

Prompt injection remains the most critical threat. When an MCP server returns data, that data can contain hidden instructions that hijack the LLM’s behaviour. Johann Rehberger, security researcher and blogger at Embrace The Red, demonstrated how malicious tool metadata can force Claude to invoke unintended tools or leak sensitive information.

As Rehberger puts it, “just enabling a tool already hands control of the LLM inference over to that specific MCP server.” CISOs need human-in-the-loop controls for high-impact actions and comprehensive approval workflows for operations like data deletion or privilege changes.

Hidden Instructions and ASCII Smuggling

Tool descriptions can contain invisible Unicode tags that pass through API and UI layers undetected. A user inspecting a tool’s metadata sees benign text, but the LLM interprets hidden instructions embedded in Unicode tags.

Rehberger disclosed this to Anthropic over a year ago with limited response. He notes that “invisible instructions should be highlighted as a security threat in the MCP documentation and made visible in the Claude UI at least.” Best practice requires scanning tool metadata for hidden characters and implementing allowlist-based token filters to block invisible instruction sets.
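
One simple defensive check, sketched below under the assumption that you can intercept metadata before tool registration, scans for Unicode tag characters (U+E0000 to U+E007F) and other invisible format-category code points.

```python
# Sketch: reject tool metadata containing invisible Unicode characters that
# could smuggle instructions past a human reviewer.
import unicodedata

def has_hidden_characters(text: str) -> bool:
    for ch in text:
        if 0xE0000 <= ord(ch) <= 0xE007F:      # Unicode tag block
            return True
        if unicodedata.category(ch) == "Cf":   # zero-width / format characters
            return True
    return False

def validate_tool_metadata(name: str, description: str) -> None:
    if has_hidden_characters(name) or has_hidden_characters(description):
        raise ValueError(f"tool {name!r}: metadata contains hidden characters")
```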

Supply Chain and Server Trust

Not all MCP servers are created equal. Downloading untrusted servers or using community-built integrations without code review introduces backdoor risks. Rehberger emphasises this point directly: “Do not randomly download or connect AI to untrusted MCP or OpenAPI tool servers.”

IT directors should enforce policies requiring servers from verified sources, preferring official implementations from vendors like GitHub or Proofpoint over unvetted alternatives. Peer code reviews and static analysis catch common issues like command injection or credential leakage.

Data Leakage and Confused Deputy Attacks

MCP creates confused deputy scenarios where the AI acts on behalf of users with elevated privileges. An analyst browsing threat intelligence might trigger a malicious server that exfiltrates emails or internal documents through seemingly innocent tool calls. The AI becomes an unwitting intermediary executing unauthorised actions. Logging and monitoring become essential so you can map human identities to AI actions and reconstruct attack chains during incident response.

Authentication and Permission Escalation

Current MCP implementations struggle with robust authentication. OAuth 2.1 support is evolving, but many servers rely on basic token management that can leak credentials or allow privilege abuse. Zero-trust architectures help by enforcing least-privilege access at the protocol level. Every tool invocation should validate permissions against the current user context rather than relying on static configurations.
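
A sketch of that principle appears below: every invocation is checked against the current user’s roles rather than a static configuration. The policy table, User type, and audit print are hypothetical placeholders for a real identity and logging stack.

```python
# Sketch: validate permissions on every tool invocation against the current
# user context. Policy table and User type are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    roles: frozenset[str]

# Hypothetical policy: which roles may invoke which tools.
TOOL_POLICY = {
    "threat_level": {"analyst", "admin"},
    "quarantine_email": {"admin"},            # high-impact: admins only
}

def authorize(user: User, tool_name: str) -> None:
    allowed = TOOL_POLICY.get(tool_name, set())
    if not (user.roles & allowed):
        raise PermissionError(f"{user.name} may not invoke {tool_name!r}")

def invoke_tool(user: User, tool_name: str, call_fn, **arguments):
    authorize(user, tool_name)                # least privilege, checked per call
    # Audit trail: map the human identity to the AI action before executing.
    print(f"AUDIT user={user.name} tool={tool_name} args={arguments}")
    return call_fn(**arguments)
```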

Auditing and Governance

Without comprehensive logging, you cannot reconstruct what an AI agent did or what data it accessed. Security operations need audit trails that map human identities to AI actions with full request and response logging. MCP solutions provide this governance layer by enforcing data access policies and creating tamper-proof logs of every MCP interaction.

Implementation Guidance and Pitfalls to Avoid

Moving from concept to production requires practical decisions about architecture and deployment. Here’s what works in real-world implementations and where teams commonly stumble.

  1. Start with the server side: Build or adopt an MCP server before worrying about client complexity. Pick a tool or data source your security team uses daily and expose it to MCP first. This focused approach lets you validate the architecture before scaling to your entire stack.
  2. Choose your transport wisely: Use STDIO for local processes that need low latency, like desktop AI assistants or development tools. Switch to HTTP with Server-Sent Events when you need distributed access across your network or when multiple teams share the same MCP servers.
  3. Version everything explicitly: API changes will break clients unless you maintain backward compatibility. Use semantic versioning for your servers and document breaking changes in release notes. The few minutes spent on version discipline save hours of debugging production failures.
  4. Build robust error handling: External APIs fail constantly due to rate limits, timeouts, or network issues. Implement retry logic with exponential backoff and graceful degradation so partial failures don’t cascade (see the retry sketch after this list). Your LLM should explain what went wrong rather than returning cryptic error messages to users.
  5. Validate schemas rigorously: Never trust tool definitions from external servers without validation. Scan for hidden Unicode characters, check parameter types against allowlists, and reject schemas that request excessive permissions. Common mistakes include accepting tool metadata at face value or streaming unfiltered context that contains sensitive data.
  6. Monitor like you mean it: Track latency, error rates, and permission denials across all MCP interactions. Set alerts for unusual patterns like a single agent making hundreds of requests or accessing resources outside normal hours. Without telemetry, you’re flying blind when incidents occur.
  7. Phase it in gradually: Don’t rip out existing integrations overnight. Run MCP alongside current systems and migrate one use case at a time. Start with read-only operations before enabling write access or destructive actions.
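
As referenced in step 4, a minimal retry pattern with exponential backoff might look like the sketch below; the retry budget, delay bounds, and jitter range are illustrative defaults, not prescribed values.

```python
# Sketch: retry an async MCP tool call with exponential backoff and jitter.
import asyncio
import random

async def call_with_retry(call_fn, *, attempts: int = 4, base_delay: float = 0.5):
    for attempt in range(attempts):
        try:
            return await call_fn()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise                          # budget exhausted: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.25)
            await asyncio.sleep(delay)         # back off before retrying
```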

MCP’s Evolution and Future Directions

MCP has achieved significant ecosystem growth with SDKs spanning Python, TypeScript, Java, and C#. Major cloud providers, including AWS, Azure, and Google Cloud, now offer first-party support. The protocol has broadened its reach beyond LLMs to agentic apps and legacy analytics workloads, demonstrating versatility across use cases.

Governance and ecosystem maturity remain open challenges. Anthropic launched a centralised registry in September 2025 to address discoverability and trust issues across fragmented third-party catalogues, which suffered from incomplete metadata and difficulty verifying server authenticity. Formal governance structures are taking shape, with Anthropic actively seeking contributors experienced in open-source protocol management to ensure MCP remains community-driven as it scales.

Integration with agent orchestration frameworks presents interesting opportunities. MCP complements tools like LangChain, CrewAI, BeeAI, and LlamaIndex without replacing them. Frameworks handle workflow management and multi-agent coordination while MCP standardises the tool-access layer beneath. CrewAI already supports MCP integration through adapters that offload resource-intensive tasks to remote servers. This division of responsibility creates cleaner architectures where orchestration logic stays separate from data access concerns.

Future enhancements will likely focus on asynchronous operations for long-running tasks, stateless server designs for horizontal scaling, and server identity mechanisms using well-known URLs for capability discovery. The specification roadmap includes official extensions for specialised industries and SDK tiering systems to help developers evaluate implementation quality. Enhanced authentication mechanisms and improved observability for security operations will address enterprise concerns as adoption accelerates.

Leverage MCP in Your Security Framework

Model Context Protocol sets a new standard for integrating AI agents with enterprise tools, promising modularity, efficiency, and real-time context when implemented with strong security and governance. Security teams can expect portable, auditable, and scalable frameworks, but robust permission controls and detailed audit logging will be essential for managing risk. MCP is reshaping how defenders automate, collaborate, and reason with current data.

At Proofpoint’s annual conference in September 2025, this technology took centre stage as the company addressed emerging data security risks associated with agentic AI. Proofpoint Secure Agent Gateway, launching in Q1 2026, uses MCP to connect AI agents with external data in a secure and auditable manner. This approach reflects a broader industry shift: effective defence requires artificial intelligence that can reason with current context, not just historical patterns.

To explore MCP-enabled security solutions or learn more about bringing secure agentic AI to your environment, get in touch with Proofpoint for expert guidance and real-world experience in safe enterprise adoption.

Frequently Asked Questions (FAQ)

Who created MCP, and when did it launch?

Anthropic introduced MCP as an open standard in November 2024. The protocol aims to create a universal way for AI models to connect with external data sources and tools. Anthropic continues to lead governance efforts while actively seeking community contributors to ensure MCP remains vendor-neutral as adoption grows.

What is the difference between MCP and function calling?

Function calling lets an LLM select and trigger predefined actions, but your application code handles the execution logic. MCP standardises how tools are discovered, invoked, and executed across different hosts and servers, making integrations portable and reusable. Function calling focuses on what to do, while MCP defines how that decision travels across your stack.

Is MCP secure for sensitive data?

MCP itself does not guarantee security. Implementation choices determine whether deployments are safe for sensitive data. Security researchers have documented prompt injection vulnerabilities, hidden instruction smuggling, and confused deputy attacks that can compromise MCP deployments without proper controls. Strong authentication, permission validation at every invocation, and comprehensive audit logging help secure the protocol for enterprise use.

What transport mode should I use: STDIO or HTTP with SSE?

STDIO works well for local processes that need low latency, like desktop AI assistants or development environments. HTTP with Server-Sent Events supports distributed architectures across networks, enabling multiple teams to share MCP servers. Choose based on whether your tools run on the same machine as your AI or need remote access across your infrastructure.
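
With the Python SDK’s FastMCP helper, the transport is a runtime choice. A brief sketch, assuming a server object like the earlier examples:

```python
# Transport selection with the Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-tools")

# Local, same-machine host (e.g. a desktop assistant): STDIO.
mcp.run(transport="stdio")

# Remote or shared deployments: HTTP with Server-Sent Events.
# mcp.run(transport="sse")
```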

When should I adopt MCP vs. staying with direct API integrations?

Adopt MCP when you have multiple AI systems that need consistent access to your security stack or when vendor neutrality matters for future flexibility. Stick with direct API integrations for single-model deployments with a handful of well-defined tools where custom code provides sufficient control. MCP shines at scale when maintaining dozens of point-to-point integrations becomes unmanageable.

Can I use MCP together with RAG?

Yes, MCP and RAG complement each other well. RAG retrieves relevant documents to ground LLM responses in actual data, while MCP enables tool invocation and system actions. A security analyst might use RAG to pull policy documents and MCP to query live SIEM data, with the LLM synthesising both into a complete answer. This combination provides both passive knowledge retrieval and active system interaction.

How does versioning and schema evolution work in MCP?

MCP servers should use semantic versioning to signal breaking changes versus backward-compatible updates. Clients discover server capabilities dynamically through the protocol, allowing gradual migrations when schemas change. The September 2025 specification updates introduced improved mechanisms for capability discovery and server identity verification to handle version mismatches more gracefully.
