A Rust TUI coding agent. Connects to any OpenAI-compatible API (Ollama, OpenAI, Anthropic, Google Gemini, OpenRouter, etc.), your ChatGPT subscription via OpenAI Codex, or your GitHub Copilot subscription, and provides an interactive terminal interface for code generation, analysis, and assistance.
Prebuilt binaries:
Download from GitHub Releases.
From source:
```shell
cargo install --git https://github.com/leonardcser/smelt.git
```

Running `smelt` with no config file launches an interactive setup wizard that walks you through selecting a provider and model.
With Ollama (local):

```shell
ollama pull qwen3.5:0.8b
smelt --model qwen3.5:0.8b --api-base http://localhost:11434/v1
```

With OpenAI:
```shell
read -s OPENAI_API_KEY && export OPENAI_API_KEY
smelt --model gpt-5.4 --api-base https://api.openai.com/v1 --api-key-env OPENAI_API_KEY
```

With OpenAI Codex (ChatGPT Pro/Plus subscription):
```shell
smelt auth            # log in with your ChatGPT account
smelt --model gpt-5.4 # use any Codex-supported model
```

With GitHub Copilot:
```shell
smelt auth                      # pick "GitHub Copilot", follow the device-code prompt
smelt --model claude-sonnet-4.5 # use any model your Copilot plan exposes
```

With Anthropic:
```shell
read -s ANTHROPIC_API_KEY && export ANTHROPIC_API_KEY
smelt --model claude-opus-4-5 --api-base https://api.anthropic.com/v1 --api-key-env ANTHROPIC_API_KEY
```

- Tool use — file read/write/edit, glob, grep, bash, notebooks, web fetch/search
- Permission system — granular allow/ask/deny per tool, bash pattern, URL, and workspace-scoped approvals
- 4 modes — Normal, Plan, Apply, Yolo (`Shift+Tab` to cycle)
- Vim mode — full vi keybindings for the input editor
- Sessions — auto-save, resume, fork, rewind conversations
- Compaction — LLM-powered summarization to stay within context limits (auto-trigger threshold configurable via `SMELT_COMPACT_THRESHOLD_PERCENT`, default 80)
- Reasoning effort — configurable thinking depth (off/low/medium/high/max)
- File references — attach files with `@path` syntax
- Multi-agent — parallel subagents with inter-agent messaging (opt-in)
- Skills — on-demand specialized knowledge via `SKILL.md` files
- MCP — connect external tool servers via the Model Context Protocol
- Custom commands — user-defined slash commands via markdown files
- Custom instructions — project-level `AGENTS.md` files
- Input prediction — ghost text suggesting your next message
- Image support — paste from clipboard or reference image files
- Headless mode — scriptable, no TUI
- Interactive setup — guided first-run wizard and `smelt auth` for managing providers
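As a worked example, the `SMELT_COMPACT_THRESHOLD_PERCENT` variable from the list above can be set per invocation; the value 60 here is arbitrary, and the flags are the ones shown in the Ollama quick start:

```shell
# Trigger auto-compaction at 60% of the context window instead of the default 80%
SMELT_COMPACT_THRESHOLD_PERCENT=60 \
  smelt --model qwen3.5:0.8b --api-base http://localhost:11434/v1
```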
Config file: `~/.config/smelt/config.yaml` (respects `$XDG_CONFIG_HOME`).
```yaml
providers:
  - name: ollama
    type: openai-compatible # or: openai, anthropic, codex, copilot
    api_base: http://localhost:11434/v1
    models:
      - qwen3.5:27b
  - name: openai
    type: openai
    api_base: https://api.openai.com/v1
    api_key_env: OPENAI_API_KEY
    models:
      - gpt-5.4
  - name: codex
    type: codex # uses ChatGPT subscription — models fetched automatically
    api_base: https://chatgpt.com/backend-api/codex
  - name: copilot
    type: copilot # uses GitHub Copilot subscription — models fetched automatically
    api_base: https://api.individual.githubcopilot.com
defaults:
  model: ollama/qwen3.5:27b # provider_name/model_name
  auxiliary:
    model: openai/gpt-5.4-mini
    use_for:
      btw: false
settings:
  vim_mode: false
  auto_compact: false
  redact_secrets: true # on by default — scrubs secrets from user input and tool results before they reach the LLM
```

See the full documentation for all config options, CLI flags, keybindings, permissions, and more.
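Custom instructions (the `AGENTS.md` feature above) are plain markdown read from the project. The contents below are a purely hypothetical sketch; what you put in the file is up to your project, not mandated by smelt:

```markdown
<!-- AGENTS.md: hypothetical project-level instructions -->
- Run `cargo fmt` and `cargo clippy` before proposing edits.
- Prefer `anyhow::Result` in application code; avoid `unwrap()` outside tests.
- Keep unit tests next to the code in `#[cfg(test)]` modules.
```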
Full docs are available at leonardcser.github.io/smelt and can be built locally with Zensical:
```shell
uv tool install zensical
cd docs && zensical serve
```

Development:

```shell
cargo build  # compile
cargo run    # run
cargo test   # run tests
cargo fmt    # format
cargo clippy # lint
```

Inspired by Claude Code.
Contributions welcome! Open an issue or pull request.
MIT — see LICENSE.

