Understanding and mitigating security risks in Model Context Protocol servers used by AI coding tools like Cursor, Windsurf, Claude Code, and more.
The Model Context Protocol (MCP) is an open standard developed by Anthropic that enables AI models to interact with external tools and resources. MCP servers provide AI assistants with capabilities like:
- Read, write, and modify files in the project directory
- Execute shell commands for builds, tests, and deployments
- Connect to external services, databases, and APIs
- Browse websites, fill forms, and extract information
MCP servers that execute shell commands are vulnerable to command injection when inputs aren't properly sanitized: an attacker can break out of the intended command and run arbitrary code.
```javascript
const { exec, execFile } = require('child_process')

// Vulnerable: directly interpolating user input
exec(`npm install ${packageName}`)
// Attack: packageName = "lodash; rm -rf /"
// Executes: npm install lodash; rm -rf /

// Safe: parameterized execution
execFile('npm', ['install', packageName])
// Input is treated as a single argument;
// shell metacharacters are not interpreted
```

Known CVEs: CVE-2025-54135 (Cursor), CVE-2025-48757 (Lovable)
When AI models process external content (websites, documents, code), hidden instructions can manipulate the model into performing unauthorized actions through MCP tools.
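One partial defense is to treat all external content as untrusted and screen it before it reaches the model. The sketch below flags common injection phrasing; the pattern list is purely illustrative, and this kind of heuristic layer is a supplement to, not a substitute for, approval prompts on tool calls.

```javascript
// Heuristic sketch only: flag suspicious phrasing in external content
// before passing it to the model. Patterns are illustrative, not
// exhaustive; determined attackers can evade keyword matching.
const SUSPICIOUS_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /run the following command/i,
];

// Returns true if the content should be quarantined for human review.
function flagPossibleInjection(externalContent) {
  return SUSPICIOUS_PATTERNS.some((pattern) => pattern.test(externalContent));
}
```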
File operations that don't properly validate paths can allow attackers to read or write files outside the intended directory, accessing sensitive system files or credentials.
```javascript
// Intended: read project files
readFile("src/config.ts")

// Attack: access sensitive files outside the project
readFile("../../../.ssh/id_rsa")
readFile("../../../.aws/credentials")
```

Known CVEs: CVE-2025-54136 (Cursor)
MCP servers with network access can be exploited to exfiltrate sensitive data. Combined with prompt injection, attackers can steal credentials, source code, and secrets.
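One way to limit exfiltration is to route every outbound request through a destination allowlist. A minimal sketch, assuming the server has a single chokepoint for network calls (the allowlist contents are placeholders):

```javascript
// Placeholder allowlist: only these hosts may receive outbound traffic.
const ALLOWED_HOSTS = new Set(["registry.npmjs.org", "api.github.com"]);

// Returns true only for https URLs targeting an allowlisted host;
// blocks http, file:, unknown domains, and unparseable URLs.
function isAllowedDestination(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // unparseable URLs are rejected outright
  }
  return url.protocol === "https:" && ALLOWED_HOSTS.has(url.hostname);
}
```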
MCP servers running with elevated privileges or access to sudo can allow attackers to escalate from application-level to system-level access.
| Tool | Known Vulnerabilities | Risk Level |
|---|---|---|
| Cursor | CVE-2025-54135, CVE-2025-54136 | Critical |
| Lovable | CVE-2025-48757 | Critical |
| Windsurf | Command Injection, Path Traversal | Critical |
| Claude Code | Permission prompts mitigate risk | Medium |
| GitHub Copilot | Limited MCP capabilities | Low |
Configure your AI tool to require manual approval for every terminal command and file operation.
Run AI coding tools in isolated containers or VMs without access to production credentials or sensitive files.
Configure strict allowlists for permitted commands, file paths, and network destinations.
Be cautious when asking AI to analyze repositories, websites, or documents from untrusted sources.
Log and review all MCP tool invocations to detect suspicious patterns or unauthorized access attempts.
Regularly update your AI coding tools to receive security patches for known vulnerabilities.
An MCP (Model Context Protocol) server is a component that provides AI models with access to tools, resources, and system capabilities like file access, terminal commands, and API calls.
MCP servers can be safe when properly configured with strict permission controls, but they introduce security risks including command injection, prompt injection, and data exfiltration vulnerabilities.
Enable command approval prompts, use sandboxed environments, keep your tools updated, and be cautious when analyzing external content with AI tools.
Popular tools using MCP include Cursor, Windsurf, Claude Code (CLI), Lovable, and other agentic AI coding assistants that can execute commands and modify files.
Applications built with AI coding tools need security scanning. Find vulnerabilities before attackers do.
Last updated: January 2025