Cursor costs $20 per month, which is a week's worth of food for many people. Its closed-source nature has been a pain for the community for too long!
Therefore, we open-sourced a super lightweight command-line project for Cursor-like programming, for everyone to learn and use. You can use mini-cursor in any directory to write programs using AI agents with this directory as the workspace!
This project supports fully local/intranet deployment, so no data ever needs to leave your network. You can use external APIs, or local vLLM/Ollama services that expose an OpenAI-compatible API.
2025-05-14: After sustained effort, the various long-standing display issues have finally been fixed.
2025-04-28: MCP services for MySQL and ClickHouse will no longer be included as default MCP services.
- MySQL MCP repository: mysql-mcp
- ClickHouse MCP repository: clickhouse-mcp
2025-04-22: Added a web interface that can be launched directly from the CLI with `mini-cursor web`.
2025-04-21: Added support for reasoning models such as deepseek-r1.
2025-04-18: You can use this project to collect high-quality tool-call data, or use its MCP services, which are essentially identical to Cursor's (except for a code-retrieval MCP, which I found to be of limited use in practice). This project also adds web search and secure connections to MySQL and ClickHouse databases. Testing showed that many models may attempt dangerous operations, so all database-related operations are restricted to read-only.
- Nearly 1:1 reimplementation of Cursor's MCP services and prompts; these MCP services can also be used in other VSCode plugins.
- Supports local/remote multi-tool (MCP) invocation
- Supports any OpenAI-compatible API, including local models, so your data stays secure.
- Selective tool enablement: Choose which tools to use for each session
- Interactive parameter and server configuration
- One-click pip install, globally available CLI
- Great for secondary development and custom extensions
```bash
conda create -n mini-cursor python=3.10
conda activate mini-cursor
git clone https://github.com/the-nine-nation/mini-cursor.git
cd mini-cursor
pip install .
```

```bash
mini-cursor init
```

This command will automatically generate `mini_cursor/core/mcp_config.json` with the correct Python and MCP script paths, but some parameters need to be filled in manually.
The command line will show the location of the generated JSON file. If you want to use it with other programs, you can copy the JSON content.
Note: If you want to use the web search tool, after generation be sure to run `mini-cursor mcp-config`, or directly edit `mcp_config.json`, to fill in your `BOCHAAI_API_KEY`.
Example `mcp_config.json` snippet (the generated file contains your actual Python interpreter and MCP script paths):

```json
{
  "mcpServers": {
    "cursor_mcp": {
      "command": "/path/to/python",
      "args": ["/path/to/cursor_mcp_all.py"],
      "env": {
        "BOCHAAI_API_KEY": "API key from https://open.bochaai.com/ to enable web search for the model"
      }
    }
  }
}
```

If you want the agent to perform automatic web searches, you can go to BochaAI to obtain an API key to support retrieval.
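As a quick sanity check after editing, the config file can be loaded and validated with a few lines of Python. This helper is illustrative only and not part of mini-cursor; it just checks the structure shown above:

```python
import json

def validate_mcp_config(text: str) -> list[str]:
    """Return the MCP server names defined in an mcp_config.json document.

    Raises ValueError if the basic structure is wrong.
    """
    cfg = json.loads(text)
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict):
        raise ValueError("missing top-level 'mcpServers' object")
    for name, spec in servers.items():
        if "command" not in spec or "args" not in spec:
            raise ValueError(f"server {name!r} needs 'command' and 'args'")
    return list(servers)

example = """
{
  "mcpServers": {
    "cursor_mcp": {
      "command": "/path/to/python",
      "args": ["/path/to/cursor_mcp_all.py"],
      "env": {"BOCHAAI_API_KEY": "sk-..."}
    }
  }
}
"""
print(validate_mcp_config(example))
```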
```bash
mini-cursor config
```

Follow the prompts to enter your OpenAI-compatible API Key, Base URL, model name, etc. The config will be saved to `.env`.
3. Interactively Edit MCP Servers (Comes with all Cursor MCPs and web search by default, no need to modify)
```bash
mini-cursor mcp-config
```

Interactively add/edit/delete MCP servers. The config is saved to `mini_cursor/core/mcp_config.json`.
You can also copy this config and use it in Cursor, etc.
```bash
mini-cursor chat
```

- Supports natural language Q&A, code generation, tool invocation, etc.
- Type `help` at any time during chat for available commands.
```bash
mini-cursor web
```

This command starts the web server and automatically opens your default browser to access the web interface. The web interface provides a more visual and user-friendly way to interact with mini-cursor:
- Full chat capabilities with streaming responses
- Visual tool call display
- Configuration management
- Conversation history viewing
You can press Ctrl+C in the terminal to stop the web server when you're done.
| Command | Description |
|---|---|
| `mini-cursor init` | Initialize MCP config (recommended) |
| `mini-cursor config` | Interactive API param config (`.env`) |
| `mini-cursor mcp-config` | Interactive MCP config editor |
| `mini-cursor chat` | Start chat agent |
| `mini-cursor web` | Launch the web interface |
| `mini-cursor help` | Show help |
In chat mode, you can use:
- `history` — View tool call history
- `message history` — View message history
- `clear history` — Clear message history
- `servers` — View available MCP servers
- `config` — Edit API params
- `mcp-config` — Edit MCP config
- `help` — Show help
- `quit` — Exit chat
Tool management commands:
- `enable <tool>` — Enable a specific tool
- `disable <tool>` — Disable a specific tool
- `enable-all` — Enable all tools
- `disable-all` — Disable all tools
- `mode <all|selective>` — Set the tool enablement mode
The project has been modularized to improve code readability and maintainability. Here's the core architecture:
- `mcp_client.py`: Main client class that integrates all modules and serves as the entry point
- `message_manager.py`: Manages conversation history, including user/system/assistant messages
- `tool_manager.py`: Handles tool discovery, tool calls, and maintains tool call history
- `server_manager.py`: Manages MCP server connections and configurations
- `display_utils.py`: Utility functions for displaying tool histories, servers, and message histories
- `config.py`: Central configuration management for API keys, URLs, and other settings
The main client class that coordinates interactions between the LLM, MCP servers, and user. It:
- Manages the chat loop and conversation flow
- Processes user queries through the LLM
- Handles streaming responses
- Orchestrates tool calls based on LLM decisions
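The flow above can be sketched in miniature. The chunk shapes and helper names below are simplifications I made up for illustration; the real `mcp_client.py` consumes the OpenAI streaming format:

```python
def run_turn(chunks, call_tool):
    """Assemble one streamed turn: concatenate text deltas and dispatch any
    tool calls the model requests (illustrative sketch, simplified shapes)."""
    text_parts, tool_results = [], []
    for chunk in chunks:
        if "text" in chunk:
            text_parts.append(chunk["text"])
        elif "tool_call" in chunk:
            name, args = chunk["tool_call"]
            tool_results.append(call_tool(name, args))
    return "".join(text_parts), tool_results

# Hypothetical stream: a text delta, a tool call, then more text.
demo = [
    {"text": "Let me check. "},
    {"tool_call": ("read_file", {"path": "README.md"})},
    {"text": "Done."},
]
result = run_turn(demo, lambda name, args: f"{name} ok")
print(result)
```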
Responsible for all aspects of message history management:
- Adding user messages and system prompts
- Tracking assistant responses
- Recording tool calls and their results
- Trimming conversation history to prevent context overflow
- Providing clean history retrieval
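The trimming step can be pictured as keeping the system prompt plus the most recent messages that fit a budget. This is only a sketch of the idea: `max_chars` and the exact policy are assumptions, and the real `message_manager.py` may count tokens instead:

```python
def trim_history(messages, max_chars=8000):
    """Keep the system prompt plus the newest messages within a character budget
    (illustrative; the actual trimming policy lives in message_manager.py)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], 0
    for m in reversed(rest):          # walk from newest to oldest
        used += len(m["content"])
        if used > max_chars:
            break                     # oldest messages fall off first
        kept.append(m)
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "a" * 50},
    {"role": "assistant", "content": "b" * 50},
    {"role": "user", "content": "c" * 50},
]
trimmed = trim_history(history, max_chars=120)
```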
Manages all tool-related functionality:
- Discovers and catalogs available tools from all MCP servers
- Finds the appropriate server for each tool
- Executes tool calls with timeout handling
- Maintains detailed tool call history
- Formats tool parameters for API calls
- Manages tool enablement/disablement for selective tool usage
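The enablement logic mirrors the chat commands `enable <tool>`, `disable <tool>`, and `mode <all|selective>`. The class below is a hypothetical sketch of that behavior, not the actual `tool_manager.py` API:

```python
class ToolEnablement:
    """Sketch of selective tool enablement (hypothetical class)."""

    def __init__(self):
        self.mode = "all"       # "all" or "selective"
        self.enabled = set()

    def enable(self, tool):
        self.enabled.add(tool)

    def disable(self, tool):
        self.enabled.discard(tool)

    def set_mode(self, mode):
        if mode not in ("all", "selective"):
            raise ValueError("mode must be 'all' or 'selective'")
        self.mode = mode

    def is_enabled(self, tool):
        # In "all" mode every tool is exposed to the model; in "selective"
        # mode only explicitly enabled tools are.
        return self.mode == "all" or tool in self.enabled
```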
Handles connection and communication with MCP servers:
- Loads server configurations from config files
- Establishes connections to specified servers
- Initializes sessions with each server
- Manages server resources and cleanup
Provides user-friendly display functions:
- Formatted output for tool call history
- Server and tool listings
- Message history visualization
To add new tools to the MCP system:
- Define the tool in `cursor_mcp_all.py` or create a new MCP server
- Register the tool with a unique name and schema
- Configure the server in `mcp_config.json`
The modular architecture makes it easy to extend functionality:
- For UI changes: modify `display_utils.py`
- For new message handling: extend `message_manager.py`
- For enhanced tool capabilities: update `tool_manager.py`
- For additional server types: modify `server_manager.py`
To customize the system prompt:
- Modify the prompt in `cli.py` before passing it to `process_query`
- Use different prompts for different functionalities or tools
mini-cursor supports command autocompletion (bash/zsh/fish).
Generate and load the completion script:
```bash
# zsh
eval "$(mini-cursor completion zsh)"
# bash
eval "$(mini-cursor completion bash)"
# fish
eval (mini-cursor completion fish)
```

You can also write the completion script to the corresponding shell config file for permanent autocompletion.
- Support for multiple LLMs: just change `OPENAI_MODEL` and `OPENAI_BASE_URL` in `.env`.
- Custom tools/servers: use `mini-cursor mcp-config` to add local or remote Python services.
- Tool extension: supports file read/write, code editing, terminal commands, web search, etc. See `mini_cursor/core/tool_specs.json` for details.
- Selective tool usage: control which tools are enabled using `enable <tool>`, `disable <tool>`, or set the mode with `mode <all|selective>`. This allows for more controlled, secure, and focused tool usage.
- Command not found after pip install?
  - Make sure Python's bin directory is in your PATH, or restart your terminal.
- API Key leak risk?
  - Only configure your API Key in the local `.env` file. Do not upload it to public repos.
- How to switch workspace?
  - Change to your target directory before running `mini-cursor chat`. The workspace is the current directory.
PRs, issues, and secondary development are welcome!
For custom prompts, tools, and MCP services, see the code comments and `mini_cursor/prompt.py`.
MIT
Mini-Cursor now includes a FastAPI backend with SSE streaming support. This provides a web API for chat functionality.
Install the required dependencies:

```bash
pip install -r requirements.txt
```

To start the FastAPI server:

```bash
python -m mini_cursor.main_api
```

By default, the server will run on http://0.0.0.0:8000. You can customize the host and port by setting the `HOST` and `PORT` environment variables.
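The host/port override can be illustrated with the usual environment-variable pattern. This is a sketch of how such variables are typically read, not the actual code in `main_api`:

```python
import os

def server_address() -> tuple:
    """Resolve the bind address from HOST/PORT env vars,
    falling back to the documented defaults."""
    host = os.environ.get("HOST", "0.0.0.0")
    port = int(os.environ.get("PORT", "8000"))
    return host, port

# e.g. HOST=127.0.0.1 PORT=9000 python -m mini_cursor.main_api
os.environ["HOST"] = "127.0.0.1"
os.environ["PORT"] = "9000"
print(server_address())
```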
Returns basic information about the API.
Endpoint for chat functionality with SSE streaming. Accepts a JSON payload with:
- `query` (required): The message to send to the AI
- `system_prompt` (optional): Custom system prompt
- `workspace` (optional): Workspace path
Example request:
```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"query": "What can you do?"}'
```

This endpoint returns Server-Sent Events (SSE) with the following event types:
- `start`: Indicates the start of processing
- `message`: AI assistant's text responses
- `thinking`: Reasoning process from models that support it
- `tool_call`: Information about tool calls being made
- `tool_result`: Results of tool calls
- `tool_error`: Errors that occur during tool calls
- `done`: Indicates the completion of processing
- `error`: Any errors that occur during processing
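A client can consume this stream by reading `event:`/`data:` line pairs. The parser below is a minimal sketch: it assumes one `data:` line per event and JSON payloads, which may not match the server's exact framing:

```python
import json

def parse_sse(raw: str):
    """Parse an SSE byte stream (already decoded to text) into
    (event, data) pairs. Minimal sketch for the /chat stream above."""
    events = []
    event = None
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            payload = line[len("data:"):].strip()
            try:
                payload = json.loads(payload)   # payloads are assumed JSON
            except json.JSONDecodeError:
                pass                            # fall back to raw text
            events.append((event, payload))
            event = None
    return events

stream = (
    "event: start\n"
    "data: {}\n"
    "\n"
    "event: message\n"
    'data: {"text": "hi"}\n'
    "\n"
    "event: done\n"
    "data: {}\n"
)
print(parse_sse(stream))
```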