Run Python
MCP Run Python is an MCP server that allows you to run Python code inside a secure sandbox.
It uses a combination of Pyodide and Deno to isolate the code execution from your operating system.
This means an AI agent can perform calculations or run scripts without you having to worry about it accessing files or causing other trouble.
Features
- Secure Execution: Runs Python code in WebAssembly isolation via Pyodide and Deno
- Dependency Management: Automatically detects and installs required packages
- Complete Output Capture: Records stdout, stderr, and return values
- Async Support: Properly handles asynchronous Python code
- Detailed Error Reporting: Provides comprehensive error information for debugging
- Multiple Transport Options: Supports stdio and HTTP transports
- Integrated Logging: Emits execution logs as MCP logging messages
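To picture what "complete output capture" means, here is a minimal plain-Python sketch. It is an illustration only: run_and_capture is a hypothetical helper, not part of mcp-run-python, and unlike the real server it is not sandboxed (the real server runs the code inside Pyodide under Deno).

```python
import contextlib
import io


def run_and_capture(code: str) -> dict:
    """Run a code string and collect its stdout and stderr."""
    # NOTE: exec() on the host is NOT sandboxed -- this only illustrates
    # the shape of the captured output that the server reports back.
    out, err = io.StringIO(), io.StringIO()
    namespace: dict = {}
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        exec(code, namespace)
    return {'stdout': out.getvalue(), 'stderr': err.getvalue()}


print(run_and_capture("print('hello')"))  # → {'stdout': 'hello\n', 'stderr': ''}
```

The real server additionally reports return values and rich error details, which a host-side exec sketch like this cannot reproduce faithfully.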
Use Cases
- AI assistants that need to execute mathematical calculations or data processing tasks
- Educational platforms where users can run code without security concerns
- Code review tools that test Python snippets in isolated environments
- Data analysis workflows that require temporary execution environments
- Automated testing systems that run untrusted code safely
Get Started
You’ll need Python and Deno installed on your system to get started.
1. The simplest way to run the server is with uvx. This command fetches and runs the package without a permanent installation.
uvx mcp-run-python [-h] [--version] [--port PORT] [--deps DEPS] {stdio,streamable-http,example}
2. The server has two main transport modes:
- stdio: Runs the server locally, communicating over standard input/output. This is ideal when you’re running an agent as a subprocess on the same machine.
- streamable-http: Exposes the server over HTTP. This is what you’d use to connect to it remotely from another machine or application.
3. To test that everything is working, you can run the built-in example. It uses numpy, so you need to tell the server to install it with the --deps flag.
uvx mcp-run-python --deps numpy example
Integration with Pydantic AI
Here’s a quick look at how you’d hook this up to a Pydantic AI agent. You create an MCPServerStdio instance and pass it to the Agent.
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio('uvx', args=['mcp-run-python@latest', 'stdio'], timeout=10)
agent = Agent('claude-3-5-haiku-latest', toolsets=[server])

async def main():
    async with agent:
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
        print(result.output)
        #> There are 9,208 days between January 1, 2000, and March 18, 2025.

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())
Usage in Code as an MCP Server
For more programmatic control, you can install mcp-run-python directly into your project.
pip install mcp-run-python
# or
uv add mcp-run-python
With the package installed, you can use the async_prepare_deno_env function to set up the Deno environment. This function handles creating the environment and installing dependencies, and it returns the arguments to pass to MCPServerStdio.
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from mcp_run_python import async_prepare_deno_env
async def main():
    async with async_prepare_deno_env('stdio') as deno_env:
        server = MCPServerStdio('deno', args=deno_env.args, cwd=deno_env.cwd, timeout=10)
        agent = Agent('claude-3-5-haiku-latest', toolsets=[server])
        async with agent:
            result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
            print(result.output)
            #> There are 9,208 days between January 1, 2000, and March 18, 2025.

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())
Using the code_sandbox Helper
For more direct usage without a full agent, the library provides a code_sandbox async context manager. This is my preferred way for quick, one-off script executions. It handles spinning up and shutting down the MCP server behind the scenes.
from mcp_run_python import code_sandbox
code = """
import numpy
a = numpy.array([1, 2, 3])
print(a)
a
"""
async def main():
    async with code_sandbox(dependencies=['numpy']) as sandbox:
        result = await sandbox.eval(code)
        print(result)

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())
FAQs
Q: Why does this use Deno and Pyodide?
A: It’s all about security. Pyodide is a version of Python compiled to WebAssembly, which runs in a tightly controlled sandbox. Deno provides a secure, modern runtime for executing this WebAssembly module, with strict permissions that isolate it from the host operating system.
Q: Can I install Python packages at runtime?
A: No, and this is an intentional security feature. You must provide all required dependencies when you first start the server. The server uses a two-step process: first, it installs the dependencies with write permissions, and then it runs the untrusted code with read-only permissions to the installed packages. This prevents the executed code from modifying the environment.
Q: What’s the difference between the stdio and streamable-http transports?
A: stdio is for local communication between processes, like when an agent and the server are running on the same machine. streamable-http runs the server as a web service, so you can connect to it over a network. This is useful if you want to have a central code execution server that multiple agents can use.
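To make the stdio transport concrete: the client and server exchange newline-delimited JSON-RPC 2.0 messages over the subprocess's stdin and stdout. Below is a sketch of what the first message from a client might look like on the wire; the field values are illustrative, not captured from a real session.

```python
import json

# Hypothetical MCP 'initialize' request, serialized the way a client would
# write it to the server's stdin (one JSON object per line).
initialize = {
    'jsonrpc': '2.0',
    'id': 1,
    'method': 'initialize',
    'params': {
        'protocolVersion': '2025-03-26',
        'capabilities': {},
        'clientInfo': {'name': 'example-client', 'version': '0.1.0'},
    },
}
wire_line = json.dumps(initialize) + '\n'
print(wire_line)
```

The streamable-http transport carries the same JSON-RPC messages, just over HTTP instead of a pipe, which is why switching transports doesn't change how your agent code talks to the server.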