April 2, 2026
Tural Mamedov
Tural Mamedov

What Makes an MCP Server Secure? A Practical Guide

What Makes an MCP Server Secure? A Practical Guide

A secure MCP server starts with four non-negotiables: OAuth 2.1 authentication on every endpoint, strict input validation on every tool call, defenses against tool poisoning and prompt injection, and hardened deployment behind an API gateway. Skip any one of these and you are leaving the door open to the same class of vulnerabilities that produced over 30 CVEs in the first two months of 2026 alone.

The Model Context Protocol has gone from niche experiment to infrastructure standard in under two years. Anthropic released MCP as an open standard in late 2024 to give AI models a universal way to connect to external tools, data sources, and services. Think of it as the USB-C of AI integrations: one protocol, many capabilities. But that rapid adoption has outpaced security maturity. OWASP now maintains a dedicated MCP Top 10 vulnerability list, and security researchers are finding that the majority of MCP servers in the wild lack even basic protections.

This guide walks through the specific threats MCP servers face and, more importantly, the practical steps to defend against them. Whether you are building an internal MCP server for your engineering team or shipping one as part of a product, the principles are the same.

Why MCP Server Security Deserves Its Own Playbook

MCP servers are not traditional APIs. They sit at the intersection of AI model behavior and system access, which creates a unique threat profile. When an LLM calls a tool through MCP, it is executing actions with real consequences: reading files, querying databases, running commands, accessing third-party services. A compromised MCP server does not just leak data; it can give an attacker the ability to act through the AI agent itself.

The numbers tell the story. Between January and February 2026, security researchers filed over 30 CVEs targeting MCP servers, clients, and infrastructure. Of those, 43% were exec/shell injection vulnerabilities where MCP servers passed user input directly to shell commands (Practical DevSecOps, 2026). A large-scale analysis of over 5,200 open-source MCP server implementations by Astrix found that 88% require credentials to function, but more than half (53%) rely on insecure, long-lived static secrets like API keys and personal access tokens (Astrix, 2025). Researchers scanning nearly 2,000 publicly accessible MCP servers found that every single verified instance granted access to internal tool listings without any authentication (Stack Overflow Blog, 2026).

These are not theoretical risks. Invariant Labs published working proof-of-concept attacks that successfully exfiltrated SSH keys and config files from Claude Desktop and Cursor through tool poisoning techniques (MCP Playground, 2026).

MCP Architecture and Where the Vulnerabilities Live

Before getting into defenses, it helps to understand where attacks land. MCP has three core components:

1. The MCP host provides the runtime environment. This is the application the user interacts with, such as Claude Desktop, an IDE with MCP support, or a custom AI agent.

2. The MCP client manages the connection between the host and one or more servers, handling protocol negotiation and message routing.

3. The MCP server exposes capabilities to the AI model through three primitives: tools (functions the model can call), resources (data the model can read), and prompts (reusable templates for common tasks).

The attack surface spans every connection point. Tool descriptions can be poisoned with hidden instructions. Inputs from the model to server tools can contain injected commands. Authentication between client and server can be absent or weak. And the server’s connection to downstream services (databases, APIs, file systems) can be overly permissive.

The OWASP MCP Top 10 categorizes the most critical risks, and several of them relate directly to how the server handles trust boundaries. In an MCP system, you cannot trust the model’s output any more than you would trust raw user input. Every tool call must be treated as potentially adversarial.

Authentication and Authorization with OAuth 2.1

The MCP specification standardized on OAuth 2.1 for HTTP-based transports as of the June 2025 revision. This was a critical decision. Earlier MCP implementations often relied on static API keys or, worse, no authentication at all. OAuth 2.1 provides the framework for proper identity verification, scoped access, and token lifecycle management.

How the Auth Architecture Works

Under the current spec, MCP clients act as OAuth 2.1 clients and protected MCP servers function as OAuth 2.1 resource servers. An authorization server handles user authentication and token issuance, while the resource server (your MCP server) validates tokens and enforces access controls (Red Hat, 2026).

For client-facing applications, use Authorization Code Flow with PKCE. This is the standard for browser-based and native apps where a user is present. For server-to-server communication where no user is involved, use Client Credentials Flow. Never roll your own token validation or authorization logic. Use mature, audited libraries for OAuth handling; custom implementations are a consistent source of vulnerabilities.

Token Management Essentials

Short-lived tokens are non-negotiable. A token that lives for hours or days is a token that can be stolen and reused. Set access token lifetimes to minutes, not hours, and use refresh tokens with rotation for longer sessions.

Scope your tokens tightly. If a tool only needs read access to a specific resource, the token should grant exactly that and nothing more. The principle of least privilege applies at the token level, not just the role level.

Never embed credentials in code. Store client secrets in environment variables or a dedicated secrets manager. Never log authorization headers, tokens, or codes. Never send tokens as URL query parameters. According to CData’s 2026 MCP best practices report, strong MCP security relies on OAuth 2.1 authentication paired with least-privilege access as the foundation (CData, 2026).

Handling Unauthorized Requests

When your server denies access with a 401 response, include a WWW-Authenticate header with Bearer, realm, and a resource_metadata link. This guides compliant MCP clients through the proper authorization flow rather than leaving them to guess. It is a small detail that makes your server interoperable with the broader MCP ecosystem.

Defending Against Tool Poisoning and Prompt Injection

These two attack vectors are unique to AI-integrated systems and represent the most novel security challenges MCP servers face.

Tool Poisoning

Tool poisoning occurs when an adversary compromises the tools, plugins, or their outputs that an AI model depends on. The attacker injects malicious, misleading, or biased context to manipulate model behavior. Because MCP tool descriptions are passed to the model as part of its context, a poisoned description can instruct the model to take actions the user never intended.

The OWASP MCP Top 10 breaks this into several sub-techniques: rug pulls (malicious updates pushed to previously trusted tools), schema poisoning (corrupting interface definitions to mislead the model), and tool shadowing (introducing fake or duplicate tools that intercept or alter interactions).

Practical defenses against tool poisoning:

  • Validate and pin tool descriptions. Do not dynamically load tool metadata from untrusted sources without verification. If your server consumes third-party tool definitions, hash and verify them against known-good versions. Monitor for unexpected changes in tool descriptions, parameter schemas, or return types.
  • Implement tool integrity checks at startup and periodically during operation. If a tool description changes from the expected hash, raise an alert and fall back to a known-safe version.
  • Use the mcp-scan tool by Invariant Labs to detect tool poisoning, rug pulls, cross-origin escalations, and prompt injection in your MCP server configuration.

Prompt Injection

Prompt injection happens when attackers embed hidden instructions within content that an AI agent processes. The model cannot distinguish between legitimate commands and injected ones, so it may execute both. In the MCP context, this means data returned from tools (database results, file contents, API responses) could contain instructions that redirect the model’s behavior.

Defenses against prompt injection in MCP servers:

  • Sanitize tool outputs before returning them to the model. Strip or escape any content that could be interpreted as instructions. This is especially important when tools return user-generated content or data from external sources.
  • Implement output filtering that flags patterns commonly used in injection attacks, such as phrases like “ignore previous instructions” or role-reassignment prompts.
  • Consider architectural isolation: separate the tool execution environment from the model’s context assembly. The tool should return raw data, and a sanitization layer should process it before it reaches the model.

Input Validation and Command Injection Prevention

The 43% figure for shell injection CVEs is not surprising when you look at how many early MCP servers were built. The pattern was simple and dangerous: take the parameter from the model’s tool call and pass it directly to a shell command or file system operation.

Why This Happens

MCP tools often need to interact with the operating system: reading files, running scripts, querying local services. The temptation to build these as thin wrappers around shell commands is strong, especially in prototyping. But an MCP tool that passes user_input to subprocess.run(f"cat {user_input}") is a command injection vulnerability waiting to be exploited.

Validation Patterns That Work

Allowlisting over denylisting. Define exactly what inputs are acceptable rather than trying to block everything that is dangerous. If a tool accepts file paths, validate against a list of permitted directories. If it accepts query parameters, restrict them to known-safe patterns. Denylists always miss something.

Type-strict parameter validation. Every parameter in your MCP tool schema should have a specific type, format constraints, and range limits. Do not accept free-form strings when you can accept an enum. Do not accept arbitrary paths when you can accept an ID that maps to a pre-approved path.

Path traversal prevention. For any tool that accesses the file system, canonicalize the path and verify it falls within the expected directory tree. A path like ../../etc/passwd should never make it past validation. Use os.path.realpath() in Python or equivalent in your language, then check the prefix.

No direct shell execution. Use language-native libraries for file operations, HTTP requests, and database queries instead of shelling out. If you absolutely must execute a command, use parameterized APIs (like Python’s subprocess.run with a list argument, never a string) and never interpolate user input into the command string.

According to the MCP security specification, servers should check path permissions when models request file access, validate package sources and versions, and require human approval for production-critical commands (Model Context Protocol, 2026).

Deployment Hardening

Building a secure MCP server is only half the equation. How you deploy it determines whether those protections hold in production.

Put an API Gateway in Front

The MCP security best practices recommend that remote MCP servers sit behind an API gateway. The gateway provides a single enforcement point for authentication, authorization, rate limiting, and logging. This is especially important because the gateway can enforce policies that the MCP server itself may not implement, such as IP allowlisting, request size limits, and geographic restrictions.

Encrypt Everything in Transit

Use mTLS (mutual TLS) between clients and servers. Standard TLS encrypts the connection, but mTLS also verifies the identity of both parties. This prevents man-in-the-middle attacks and ensures that only authorized clients can connect to your server. For internal deployments, mTLS between the API gateway and the MCP server adds defense in depth.

Network Segmentation

Isolate your MCP server in its own network segment. It should not have broad access to your internal network. Apply the principle of least connectivity: the server should only be able to reach the specific downstream services it needs (a particular database, a specific API endpoint) and nothing else. Use VPC subnets, security groups, or network policies to enforce this.

According to Akto’s 2026 MCP security guide, best practices include segregating MCP servers by VPC subnets or VLANs with rigorous filtering, deploying service meshes for identity-aware traffic control, and applying WAFs and API gateways for deep inspection (Akto, 2026).

Rate Limiting and Monitoring

Rate limit tool calls per client, per session, and per tool. An AI agent stuck in a loop (or an attacker probing your server) can generate thousands of tool calls in minutes. Set sensible limits and alert on anomalies.

Log every tool call with its parameters, the authenticated identity, and the result. These logs are your audit trail when something goes wrong and your detection mechanism for attacks in progress. Apply user and entity behavior analytics to identify unusual patterns: a client that suddenly starts calling tools it has never used before, or a spike in file access requests, should trigger investigation.

Run Security Scans

Before deployment and on a regular cadence after, scan your MCP server with dedicated tools. The mcp-scan tool by Invariant Labs checks for tool poisoning, rug pulls, cross-origin escalations, and prompt injection vulnerabilities. Integrate it into your CI/CD pipeline so that every deployment is verified.

Frequently Asked Questions

What is the OWASP MCP Top 10?

The OWASP MCP Top 10 is a security framework maintained by the OWASP Foundation that catalogs the most critical vulnerabilities specific to Model Context Protocol implementations. It covers risks including tool poisoning, prompt injection, insufficient authentication, excessive permissions, and insecure credential storage. The project is hosted on GitHub and is regularly updated as the threat landscape evolves.

How does OAuth 2.1 work with MCP servers?

In the MCP architecture, the MCP client acts as an OAuth 2.1 client and the MCP server functions as an OAuth 2.1 resource server. An external authorization server handles user authentication and issues access tokens. The MCP server validates these tokens on every request and enforces scoped access controls. Client-facing apps use Authorization Code Flow with PKCE, while server-to-server communication uses Client Credentials Flow.

What is tool poisoning in MCP?

Tool poisoning is an attack where an adversary modifies tool descriptions, schemas, or outputs to manipulate AI model behavior. Because MCP tool metadata is passed directly into the model’s context, a poisoned tool description can contain hidden instructions that cause the model to exfiltrate data, call unintended tools, or bypass safety controls. Sub-techniques include rug pulls (malicious updates to trusted tools), schema poisoning, and tool shadowing.

How do I test my MCP server for security vulnerabilities?

Start with the mcp-scan tool by Invariant Labs, which detects tool poisoning, rug pulls, cross-origin escalations, and prompt injection. Complement this with standard API security testing: fuzz your tool inputs, attempt path traversal and command injection, verify that authentication is enforced on every endpoint, and confirm that tokens are properly scoped and short-lived. Integrate these checks into your CI/CD pipeline for continuous verification.

Can MCP servers be deployed without authentication?

Technically yes, but it is a serious security risk. Research shows that nearly 2,000 publicly accessible MCP servers were found granting access to internal tool listings without any authentication. An unauthenticated MCP server exposes your tools, resources, and any connected systems to anyone who can reach it. Always require OAuth 2.1 authentication for remote MCP servers. Even for local development servers, consider enabling authentication to build good habits and prevent accidental exposure.

Building Security In, Not Bolting It On

The pattern across MCP security failures is consistent: teams treat security as something to add later, after the core functionality works. But the 30+ CVEs from early 2026 demonstrate that “later” often means “after the breach.” The root causes of those vulnerabilities were not exotic zero-days. They were missing input validation, absent authentication, and blind trust in tool descriptions.

Securing an MCP server is not a research problem. The tools, patterns, and standards exist today. OAuth 2.1 provides the authentication framework. Input validation prevents command injection. Tool integrity checks defend against poisoning. And deployment hardening through API gateways, mTLS, and network segmentation closes the operational gaps.

If your team is building AI-powered applications that rely on MCP integrations, security is not a feature to add to the backlog. It is a prerequisite for production.

For organizations looking to build secure AI integrations or modernize their approach to AI infrastructure, unicrew’s AI consulting and AI development teams can help you get it right from the start.


Sources:

Share:
Tural Mamedov
Tural Mamedov
Subscription Form
Get in touch