MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via @ file references, multi-turn conversations, sandboxed edits, and structured change mode.
- File Analysis — Reference files with `@src/`, `@package.json` syntax
- Multi-Turn Sessions — Conversation continuity with workspace isolation
- Native Resume — Uses `codex resume` for context preservation (CLI v0.36.0+)
- Local OSS Models — Run with Ollama or LM Studio via `localProvider`
- Web Search — Research capabilities with `search: true`
- Sandbox Mode — Safe code execution with `--full-auto`
- Change Mode — Structured OLD/NEW patch output for refactoring
- Brainstorming — SCAMPER, design-thinking, lateral thinking frameworks
- Health Diagnostics — CLI version, features, and session monitoring
- Cross-Platform — Windows, macOS, Linux fully supported
```shell
claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
```

Prerequisites: Node.js 18+, Codex CLI installed and authenticated.
```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}
```

Config locations:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```javascript
// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';

// With specific model
'use codex with model gpt-5.5 to analyze @algorithm.py';

// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';

// Brainstorming
'brainstorm ways to optimize CI/CD using SCAMPER method';

// Sandbox mode
'use codex sandbox:true to create and run a Python script';

// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';

// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';
```

| Tool | Description |
|---|---|
| `ask-codex` | Execute Codex CLI with file analysis, models, sessions |
| `brainstorm` | Generate ideas with SCAMPER, design-thinking, etc. |
| `list-sessions` | View/delete/clear conversation sessions |
| `health` | Diagnose CLI installation, version, features |
| `ping` / `help` | Test connection, show CLI help |
By default the `model` parameter is omitted and Codex CLI applies the default model from your `~/.codex/config.toml` (e.g. `model = "gpt-5.5"`). Pass `model` only when you need to override the configured default for a single call.

Reasoning depth is calibrated per tool:

- `ask-codex` — uses Codex CLI default reasoning (medium). Pass `reasoningEffort: "high"` or `"xhigh"` for harder tasks.
- `brainstorm`, `do-act`, `review-changes` — default `reasoningEffort: "high"` (creative ideation, act-check-fix loops, and code review benefit from deeper reasoning).
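For a one-off hard task, both overrides can be passed together in a single call. A minimal sketch (the prompt and file path are illustrative):

```json
{
  "prompt": "find the race condition in @src/worker.ts",
  "model": "gpt-5.5",
  "reasoningEffort": "xhigh"
}
```

Subsequent calls without these parameters fall back to the defaults from `~/.codex/config.toml`.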
| Model | Use Case |
|---|---|
| `gpt-5.5` | Frontier model for complex coding, research, agentic work |
| `gpt-5.4` | Strong model for everyday coding |
| `gpt-5.4-mini` | Small, fast, cost-efficient for simpler coding tasks |
| `gpt-5.3-codex` | Coding-optimized model |
| `gpt-5.2` | Optimized for professional work and long-running agents |
Multi-turn conversations with workspace isolation:

```json
{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }
```

Environment:

- `CODEX_SESSION_TTL_MS` - Session TTL (default: 24h)
- `CODEX_MAX_SESSIONS` - Max sessions (default: 50)
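If your MCP client supports per-server environment variables (for example, the `env` field in Claude Desktop's config), these limits can be set alongside the server entry. The values below are example overrides, not the defaults:

```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"],
      "env": {
        "CODEX_SESSION_TTL_MS": "3600000",
        "CODEX_MAX_SESSIONS": "10"
      }
    }
  }
}
```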
Run with local Ollama or LM Studio instead of OpenAI:

```json
// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }

// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }

// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }
```

Requirements: Ollama running locally with a model that supports tool calling (e.g. `qwen3:8b`).
| Parameter | Description |
|---|---|
| `model` | Model selection |
| `sessionId` | Enable conversation continuity |
| `sandbox` | Enable `--full-auto` mode |
| `search` | Enable web search |
| `changeMode` | Structured OLD/NEW edits |
| `addDirs` | Additional writable directories |
| `toolOutputTokenLimit` | Cap response verbosity (100-10,000) |
| `reasoningEffort` | Reasoning depth: `low`, `medium`, `high`, `xhigh` |
| `oss` | Use local OSS model provider |
| `localProvider` | Local provider: `lmstudio` or `ollama` |
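Parameters compose. As a sketch, a refactoring call might combine `changeMode` with an extra writable directory and a verbosity cap (the file paths are illustrative, and this assumes `addDirs` accepts a list of paths):

```json
{
  "prompt": "refactor @src/utils.ts to use async/await",
  "changeMode": true,
  "addDirs": ["./scripts"],
  "toolOutputTokenLimit": 5000
}
```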
| Version | Features |
|---|---|
| v0.60.0+ | GPT-5.2 model family |
| v0.59.0+ | `--add-dir`, token limits |
| v0.52.0+ | Native `--search` flag |
| v0.36.0+ | Native `codex resume` (sessions) |
```shell
codex --version   # Check CLI version
codex login       # Authenticate
```

Use the `health` tool for diagnostics: `'use health verbose:true'`
v2.2.x → v2.3.0: gpt-5.5 as new default, added gpt-5.4-mini, dropped retired models (gpt-5.3-codex-spark, gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini).
v2.0.x → v2.1.0: gpt-5.4 as new default model, updated fallback chain.
v1.5.x → v1.6.0: Local OSS model support (localProvider, oss), gpt-5.3-codex default model, xhigh reasoning effort.
v1.3.x → v1.4.0: New sessionId parameter, list-sessions/health tools, structured error handling. No breaking changes.
MIT License. Not affiliated with OpenAI.
Documentation | Issues | Inspired by jamubc/gemini-mcp-tool