Building Your First Model Context Protocol Server
Large language models (LLMs) have revolutionized the way we interact with AI, but they come with inherent limitations. While they excel at predicting and generating text based on their training data, they struggle with performing meaningful real-world tasks independently. This fundamental limitation led to the development of tool-augmented LLMs, which can interact with external services through APIs, enabling capabilities like web searches and file operations.
However, tool-based approaches bring their own set of challenges. While they significantly enhance an LLM’s capabilities, allowing for task automation and data retrieval, they remain constrained by their rigid structure. When APIs evolve, or new use cases emerge, the entire tool ecosystem needs to adapt, creating maintenance overhead and potential points of failure.
This leaves us with the following problem: How can we create more flexible, resilient systems that overcome these limitations? One possible answer is the Model Context Protocol approach, which is gaining much traction in the AI community.
How Does MCP Solve the Problem?
The Model Context Protocol (MCP) addresses these challenges by providing a unified communication layer between LLMs and external services. At its core, MCP is an intermediary that standardizes the way different tools and services interact with language models.
MCP’s key innovation is its ability to translate between different tool specifications and APIs. Rather than requiring each tool to adapt to a specific LLM’s requirements, MCP handles the translation layer, ensuring smooth communication between all components. This standardization eliminates the need for custom integrations and reduces implementation complexity.
The MCP architecture consists of three main components:
- MCP Client: Applications like Cursor or Windsurf, but also LLM providers that interface with the protocol.
- MCP Server: The core component that handles protocol translation and capability management, maintained by service providers (GitHub, Figma and others).
- Services: The actual tools and functionalities that the MCP server connects to.
MCP servers can expose three types of services to clients:
- Resources: These are file-like data structures that clients can access, such as file contents or API responses.
- Tools: Function calls that LLMs can execute (with user approval), enabling interaction with external services.
- Prompts: Prewritten templates that guide users or LLMs in accomplishing specific tasks.
This structured approach ensures that services can expose their functionality consistently and securely while maintaining flexibility for future extensions and modifications.
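To make the distinction between the three capability types concrete, here is a small conceptual sketch in TypeScript. Note that these are illustrative shapes only, not the SDK's actual type definitions; the field names are assumptions chosen for clarity:

```typescript
// Conceptual sketch of the three capability types an MCP server can expose.
// These shapes are illustrative, not the SDK's real type definitions.
type Resource = { uri: string; mimeType: string; description: string };
type Tool = { name: string; description: string; inputSchema: unknown };
type Prompt = { name: string; description: string; template: string };

const exposed = {
  // File-like data the client can read
  resources: [
    { uri: "file:///logs/app.log", mimeType: "text/plain", description: "Application log" },
  ] as Resource[],
  // Functions the LLM can invoke (with user approval)
  tools: [
    { name: "create-user", description: "Create a new user", inputSchema: {} },
  ] as Tool[],
  // Prewritten templates that guide a task
  prompts: [
    { name: "summarize", description: "Summarize a document", template: "Summarize: {{text}}" },
  ] as Prompt[],
};

console.log(Object.keys(exposed).join(", ")); // resources, tools, prompts
```

The key takeaway is that resources are passive data, tools are active operations, and prompts are reusable instructions; a server can expose any combination of the three.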
Let’s Build a Simple MCP Server
To demonstrate the practical implementation of an MCP server, we’ll create a basic example using Node.js with TypeScript. While official SDKs are available for Python, Java, Kotlin and C#, we’ve chosen Node.js for its widespread adoption and excellent TypeScript support.
For this tutorial, we’ll build a server that integrates with Stream’s services. Though we’ll be working with Stream-specific API calls, the core principles and patterns we’ll cover apply to any MCP server implementation. Our server will expose two essential tools:

- create-user: A tool for user creation
- generate-token: A tool for JWT generation
These tools form the foundation of a basic project and will help you understand the fundamental concepts of MCP server implementation.
Before we begin, ensure you have Node.js version 16 or higher installed on your system. For this tutorial, we’ll be using Node.js version 20.
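You can check which Node.js version you have installed with:

```shell
node --version
```

If the reported major version is below 16, install a newer release (for example via a version manager such as nvm) before continuing.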
Project Setup
Let’s start by setting up our development environment. We’ll create a new Node.js project with TypeScript support and install all necessary dependencies to build our MCP server.
First, create a new directory for your project and initialize it with npm:
```shell
mkdir sample-server && cd sample-server
npm init -y
npm install @modelcontextprotocol/sdk zod stream-chat
npm install -D @types/node typescript
```
Next, we’ll create the basic project structure by adding a source directory and our main TypeScript file:
```shell
mkdir src
touch src/index.ts
```
Since we’re building a module rather than a web application, we must configure our package.json appropriately. Add the following configuration to specify the module type and build script:
```json
{
  "type": "module",
  "bin": {
    "sample-server": "./build/index.js"
  },
  "scripts": {
    "build": "tsc && chmod 755 build/index.js"
  },
  "files": [
    "build"
  ]
}
```
Finally, we’ll set up TypeScript with a robust configuration that ensures type safety and modern JavaScript features. Create a tsconfig.json file with these settings:
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
```
With these configurations in place, we have a solid foundation for building our MCP server. The project now has TypeScript support, necessary dependencies and proper module configuration.
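At this point, the project layout should look roughly like this (the build directory appears after the first compile):

```
sample-server/
├── package.json
├── tsconfig.json
├── src/
│   └── index.ts
└── build/          (generated by the build script)
```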
Setting Up the Basic Server
Now that we have our project structure in place, let’s create the foundation for our MCP server. We’ll use the official MCP SDK, which provides all the necessary building blocks for our implementation.
First, let’s look at the basic server setup. We’ll import the required dependencies and create our server instance:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { StreamChat } from "stream-chat";

// Create server instance
const server = new McpServer({
  name: "sample-server",
  version: "1.0.0",
  capabilities: {
    resources: {},
    tools: {},
  },
});
```
This creates a bare-bones MCP server with an empty capabilities object. The server is identified by its name and version, which helps clients understand what they’re connecting to. In the following sections, we’ll gradually expand this configuration by adding tools and resources.
Once we’ve set up our server instance, we must connect it to a transport layer that handles client communication. In this case, we’re using stdio (standard input/output) as our transport mechanism:
```typescript
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for MCP protocol messages
  console.error("Sample MCP Server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});
```
Adding the Tools to the Server
With our server foundation in place, it’s time to add the tools that will give our MCP server its functionality. In MCP, tools are functions that clients can call to perform specific actions. Let’s explore how to register these tools with our server.
The MCP SDK provides a straightforward way to register tools using the server.tool method. This method requires four key components:
- The tool name — A unique identifier for the tool.
- A description — Clear documentation of what the tool does.
- Input schema — Using Zod for type-safe input validation.
- Execution function — The actual implementation that processes the request.
Let’s look at a practical example by implementing our first tool: the user creation functionality.
```typescript
server.tool(
  "create-user",
  "Create a new user on the Stream backend",
  {
    id: z.string().describe("The id of the user to create"),
    username: z.string().describe("The username of the user to create"),
    email: z
      .string()
      .email()
      .optional()
      .describe("The email of the user to create"),
  },
  async ({ id, username, email }) => {
    // Read credentials from the environment rather than hard-coded constants
    const serverClient = StreamChat.getInstance(
      process.env.STREAM_API_KEY!,
      process.env.STREAM_API_SECRET!
    );

    const user = await serverClient.upsertUser({
      id,
      username,
      email,
    });

    return {
      content: [
        {
          type: "text",
          text: `User ${username} created successfully`,
        },
      ],
    };
  }
);
```
We can also add a second tool for token generation, which follows the same pattern but serves a different purpose:
```typescript
server.tool(
  "generate-token",
  "Generate a token for a user",
  {
    userId: z.string().describe("The id of the user to generate a token for"),
  },
  async ({ userId }) => {
    const serverClient = StreamChat.getInstance(
      process.env.STREAM_API_KEY!,
      process.env.STREAM_API_SECRET!
    );

    const token = serverClient.createToken(userId);

    return {
      content: [{ type: "text", text: `Token: ${token}` }],
    };
  }
);
```
Both tools demonstrate the power of MCP’s tool registration system. They define their input requirements using Zod schemas, provide precise descriptions and implement specific functionality that clients can invoke. This setup’s type-safe nature helps prevent runtime errors and provides an excellent developer experience through autocompletion and validation.
Adding Your Server to a Client
Now that our MCP server implementation is ready, let’s integrate it with a client application. We’ll use Cursor as our example client, though the process is similar for other MCP-compatible clients.
First, we need to build our server. Thanks to our earlier configuration of the build command, this is as simple as running:
```shell
npm run build
```
Once the build is complete, follow these steps to integrate your server with Cursor:
- Open Cursor settings and navigate to the MCP section in the menu.
- Look for and click the “Add new global MCP server” option (Note: You can also configure this on a per-project basis if preferred).
- Locate your built server file and copy its absolute path.
Now, you’ll need to create a server configuration in JSON format. Add the following to your configuration file, replacing the path placeholder with your actual server path:
```json
{
  "mcpServers": {
    "stream-server": {
      "command": "node",
      "args": [
        "</path/to/server>/build/index.js"
      ]
    }
  }
}
```
With the configuration in place, you can now test your server by requesting it to create a new user. Watch as the server processes your request and returns the results through the MCP protocol.
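Under the hood, the client drives your server with JSON-RPC messages over stdio. A tool invocation like the one above roughly corresponds to a `tools/call` request; the `id` and argument values below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create-user",
    "arguments": { "id": "user-42", "username": "alice" }
  }
}
```

The server responds with the `content` array returned from the tool's execution function, which the client then surfaces to the LLM.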
Summary
This hands-on example showcased how MCP simplifies the process of building tool-augmented AI applications while maintaining type safety and providing a great developer experience. We used Stream’s backend SDK, but the logic can be adapted to any other code you want to deliver. Let us know what MCP server you will build!