A terminal-based chat application with Vim keybindings for interacting with LLMs (Large Language Models). Edit chat messages directly in your terminal using familiar Vim motions and modes.
- Vim-like interface: Normal, Insert, and Visual modes with keybindings inspired by Vim
- Real-time streaming: Watch LLM responses stream in as they're generated
- Message editing: Edit any message in the chat history using Vim motions, or open it directly in an external editor
- Model selection: Switch between different LLM providers and models on the fly
- Chat persistence: Save and load chat sessions as JSON files
- Terminal rendering: Render saved chat files to the terminal with `render_chat`
- Customizable: Configure multiple providers and models via TOML configuration
1. Clone the repository:

   ```sh
   git clone https://github.com/yourusername/terminal-gpt.git
   cd terminal-gpt
   ```

2. Install with pip:

   ```sh
   pip install -e .
   ```

3. Create a configuration file (see the Configuration section below)
- Python 3.8+
- `urwid` for the terminal UI
- `litellm` for LLM provider abstraction (currently only OpenAI is implemented)
- `openai` Python package
Create a configuration file at `~/.config/terminal_gpt/config.toml`:

```toml
# Default model to use
default_model = "gpt-4.1-mini"

[providers.openai]
# Either set the API key directly or use a command to fetch it
api_key = "sk-..."
# OR use a command (useful for password managers)
api_key_cmd = "pass show api/openai"
# List of available models for this provider
models = ["gpt-4.1-mini", "gpt-4.1", "gpt-4o", "gpt-4o-mini"]

# Add more providers as needed
# [providers.anthropic]
# api_key_cmd = "pass show api/anthropic"
# models = ["claude-3-5-sonnet-latest", "claude-3-haiku-latest"]
```
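The `api_key_cmd` option means the key never has to live in the config file. As a rough sketch of how such a command could be resolved (the function name `resolve_api_key` is illustrative, not this project's actual API):

```python
import subprocess

def resolve_api_key(provider_cfg: dict) -> str:
    """Return the API key, preferring a literal key over a fetch command."""
    # A literal api_key wins if present.
    if "api_key" in provider_cfg:
        return provider_cfg["api_key"]
    # Otherwise run api_key_cmd (e.g. a password-manager lookup) and
    # use its stdout, stripped of the trailing newline.
    cmd = provider_cfg["api_key_cmd"]
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

Running the command lazily at startup keeps secrets out of both the config file and the shell history.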
```sh
# Start a new chat with the default model
terminal_gpt

# Start with a specific model
terminal_gpt --model gpt-4.1-mini

# Load or save to a specific chat file
terminal_gpt --chat-file ~/chats/my_chat.json
```

```sh
# Render a saved chat file to the terminal
render_chat path/to/chat.json

# Specify terminal width (defaults to current terminal width)
render_chat path/to/chat.json 120
```

| Key | Action |
|---|---|
| `i`, `a` | Enter insert mode at cursor |
| `I`, `A` | Enter insert mode at start/end of message |
| `j`, `k` | Move to next/previous message |
| `gg` | Go to first message |
| `G` | Go to last message |
| `o`, `O` | Add new message below/above |
| `dd` | Delete current message |
| `cc` | Clear current message content |
| `v` | Enter visual mode |
| `h`, `l` | Switch message role (user/assistant) |
| `Ctrl+↑`, `Ctrl+↓` | Swap message up/down |
| `Ctrl+e` | Edit message in external editor |
| `Ctrl+p` | Open model selection popup |
| `Enter` | Send message/get response |
| `q` | Quit application |
- Type to edit message content
- `Esc` to return to normal mode
- Use `j`/`k` to select a range of messages
- `d` to delete selected messages
- `c` to clear selected messages
- `h`/`l` to switch roles of selected messages
Press `Ctrl+e` to open the current message in your system's default editor (the `$EDITOR` environment variable, defaulting to `vi`). Save and close the editor to update the message in the chat.
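The external-edit round trip is essentially "temp file in, temp file out". A minimal sketch of the flow (the helper name and details are illustrative, not the project's actual code):

```python
import os
import subprocess
import tempfile

def edit_in_external_editor(text: str) -> str:
    """Write text to a temp file, open $EDITOR on it, return the edited text."""
    editor = os.environ.get("EDITOR", "vi")
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".md", delete=False
    ) as f:
        f.write(text)
        path = f.name
    try:
        # Blocks until the editor exits; a TUI would suspend its screen first.
        subprocess.run([editor, path], check=True)
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)
```

In a real urwid application, the screen must be stopped before spawning the editor and restarted afterwards, otherwise the two programs fight over the terminal.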
Press `Ctrl+p` to open a popup menu showing all available models. Navigate with `j`/`k` and select with `Enter`.
The application is built using:

- `urwid`: Terminal UI framework
- `litellm`: LLM provider abstraction (currently only the OpenAI backend)
- Custom widgets:
  - `ChatHistory`: List of chat messages with navigation
  - `EditableChatBubble`: Individual message bubble with edit capability
  - `VimKeyHandler`: Vim keybinding parser and mode manager
  - `VimHeader`: Status bar showing mode, model, and key sequence
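At its core, a handler like `VimKeyHandler` is a small state machine over modes and pending key sequences. A simplified sketch (class and action names are illustrative, not the project's actual implementation):

```python
class MiniVimKeys:
    """Minimal normal-mode key parser handling multi-key sequences like 'gg'."""

    SEQUENCES = {
        "j": "next_msg", "k": "prev_msg", "gg": "first_msg",
        "G": "last_msg", "dd": "delete_msg", "i": "insert_mode",
    }

    def __init__(self):
        self.mode = "normal"
        self.pending = ""  # keys typed so far in a multi-key sequence

    def press(self, key: str):
        """Return an action name, or None while a sequence is incomplete."""
        if self.mode != "normal":
            return None  # insert/visual handling would live elsewhere
        seq = self.pending + key
        if seq in self.SEQUENCES:
            self.pending = ""
            action = self.SEQUENCES[seq]
            if action == "insert_mode":
                self.mode = "insert"
            return action
        # If the sequence could still become a command, keep waiting.
        if any(s.startswith(seq) for s in self.SEQUENCES):
            self.pending = seq
            return None
        self.pending = ""  # dead sequence: reset
        return None
```

The same prefix-matching idea scales to counts and operators; the status bar (`VimHeader`) can simply display `pending` to echo in-progress sequences.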
- `main.py`: CLI entry point and configuration loading
- `app.py`: Main application class and UI setup
- `custom_widgets/`: UI components
  - `chat.py`: Chat message widgets
  - `vimkey.py`: Vim keybinding handling
  - `model_select.py`: Model selection popup
- `models/`: LLM provider implementations
  - `main.py`: Provider router
  - `openai.py`: OpenAI API integration
- `render_chat.py`: Standalone chat file renderer
- `setup.py`: Package installation
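To give a rough idea of what `render_chat.py` involves — assuming chats are saved as a JSON list of `{"role", "content"}` objects (the actual schema is not documented here) — a minimal renderer could look like:

```python
import json
import shutil
import textwrap

def render_chat(path, width=None):
    """Print a saved chat file with wrapped, role-labelled messages."""
    # Default to the current terminal width, as the CLI does.
    width = width or shutil.get_terminal_size().columns
    with open(path) as f:
        messages = json.load(f)  # assumed: [{"role": ..., "content": ...}]
    for msg in messages:
        print(f"--- {msg['role']} ---")
        for line in msg["content"].splitlines():
            print(textwrap.fill(line, width=width) if line else "")
        print()
```

Keeping the renderer standalone means saved chats can be viewed (or piped to a pager) without starting the interactive UI.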
Add tests as needed.

To add a new provider:

1. Add a new implementation in `models/` (e.g., `anthropic.py`)
2. Update `models/main.py` to route to the new provider
3. Add provider configuration to the TOML config
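The provider interface is not spelled out here, but since responses stream in, a backend module plausibly exposes a generator of text chunks. A hypothetical stub (names and signatures are assumptions, not the project's actual interface):

```python
# models/anthropic.py -- hypothetical new provider module
# (this streaming interface is an assumption, not the project's actual one)

def stream_completion(messages, model, api_key):
    """Yield response text chunks for the given chat messages.

    A real implementation would call the provider's SDK here; this stub
    only shows the generator shape a provider router could consume.
    """
    for chunk in ["Hello", ", ", "world"]:
        yield chunk

def get_response(messages, model, api_key):
    """Accumulate the streamed chunks into a full reply."""
    return "".join(stream_completion(messages, model, api_key))
```

A generator-based interface lets the UI paint partial responses as they arrive while non-interactive callers simply join the chunks.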
Follow existing Python conventions in the codebase. The project uses type hints and follows PEP 8.
MIT License - see LICENSE file for details.
Contributions are welcome! Please open an issue or pull request for any bugs, feature requests, or improvements.
