Terminal GPT

A terminal-based chat application with Vim keybindings for interacting with LLMs (Large Language Models). Edit chat messages directly in your terminal using familiar Vim motions and modes.

Demo Video

Features

  • Vim-like interface: Normal, Insert, and Visual modes with keybindings inspired by Vim
  • Real-time streaming: Watch LLM responses stream in as they're generated
  • Message editing: Edit any message in the chat history with Vim motions, or open it in your external editor
  • Model selection: Switch between different LLM providers and models on the fly
  • Chat persistence: Save and load chat sessions as JSON files
  • Terminal rendering: Render saved chat files to the terminal with render_chat
  • Customizable: Configure multiple providers and models via TOML configuration

Installation

From Source

  1. Clone the repository:

    git clone https://github.com/itsfernn/terminal-gpt.git
    cd terminal-gpt
  2. Install with pip:

    pip install -e .
  3. Create a configuration file (see Configuration section below)

Dependencies

  • Python 3.8+
  • urwid for the terminal UI
  • litellm for LLM provider abstraction (currently only OpenAI is implemented)
  • openai Python package

Configuration

Create a configuration file at ~/.config/terminal_gpt/config.toml:

# Default model to use
default_model = "gpt-4.1-mini"

[providers.openai]
# Either set the API key directly or use a command to fetch it
api_key = "sk-..."
# OR use a command (useful for password managers)
api_key_cmd = "pass show api/openai"

# List of available models for this provider
models = ["gpt-4.1-mini", "gpt-4.1", "gpt-4o", "gpt-4o-mini"]

# Add more providers as needed
# [providers.anthropic]
# api_key_cmd = "pass show api/anthropic"
# models = ["claude-3-5-sonnet-latest", "claude-3-haiku-latest"]

Usage

Starting a Chat

# Start a new chat with the default model
terminal_gpt

# Start with a specific model
terminal_gpt --model gpt-4.1-mini

# Load or save to a specific chat file
terminal_gpt --chat-file ~/chats/my_chat.json

Rendering Chat Files

# Render a saved chat file to the terminal
render_chat path/to/chat.json

# Specify terminal width (defaults to current terminal width)
render_chat path/to/chat.json 120
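
Saved chats are plain JSON. The exact schema isn't documented here, but given the OpenAI-style backend, a chat file presumably resembles a role/content message list along these lines (illustrative only):

```json
{
  "model": "gpt-4.1-mini",
  "messages": [
    {"role": "user", "content": "Explain Vim registers."},
    {"role": "assistant", "content": "Registers are named clipboards..."}
  ]
}
```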

Keybindings

Normal Mode (Default)

Key               Action
i, a              Enter insert mode at cursor
I, A              Enter insert mode at start/end of message
j, k              Move to next/previous message
gg                Go to first message
G                 Go to last message
o, O              Add new message below/above
dd                Delete current message
cc                Clear current message content
v                 Enter visual mode
h, l              Switch message role (user/assistant)
Ctrl+↑, Ctrl+↓    Swap message up/down
Ctrl+e            Edit message in external editor
Ctrl+p            Open model selection popup
Enter             Send message/get response
q                 Quit application

Insert Mode

  • Type to edit message content
  • Esc to return to normal mode

Visual Mode

  • Use j/k to select range of messages
  • d to delete selected messages
  • c to clear selected messages
  • h/l to switch roles of selected messages

External Editor Integration

Press Ctrl+e to open the current message in your system's default editor ($EDITOR environment variable, defaults to vi). Save and close the editor to update the message in the chat.
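
The underlying flow is a temp-file round trip, roughly as follows (a sketch; the function name and details are assumptions, not the project's actual implementation):

```python
import os
import subprocess
import tempfile
from typing import Optional

def edit_in_external_editor(text: str, editor: Optional[str] = None) -> str:
    """Write *text* to a temp file, open it in $EDITOR, and return the result."""
    editor = editor or os.environ.get("EDITOR", "vi")
    fd, path = tempfile.mkstemp(suffix=".md", text=True)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        # Block until the editor exits, then read the (possibly edited) file back.
        subprocess.run([*editor.split(), path], check=True)
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)
```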

Model Selection

Press Ctrl+p to open a popup menu showing all available models. Navigate with j/k and select with Enter.

Architecture

The application is built using:

  • urwid: Terminal UI framework
  • litellm: LLM provider abstraction (currently only OpenAI backend)
  • Custom widgets:
    • ChatHistory: List of chat messages with navigation
    • EditableChatBubble: Individual message bubble with edit capability
    • VimKeyHandler: Vim keybinding parser and mode manager
    • VimHeader: Status bar showing mode, model, and key sequence
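
A minimal version of the mode-switching and pending-sequence logic in a widget like VimKeyHandler might look as follows (a sketch under assumed names, not the project's actual implementation):

```python
from typing import Optional

class VimKeyHandler:
    """Tracks the current mode and pending multi-key sequences (e.g. 'gg')."""

    def __init__(self) -> None:
        self.mode = "normal"
        self.pending = ""  # partially entered key sequence

    def handle(self, key: str) -> Optional[str]:
        """Consume one key; return a command name, or None if still pending."""
        if self.mode == "insert":
            if key == "esc":
                self.mode = "normal"
                return "leave_insert"
            return "type_char"
        seq = self.pending + key
        commands = {"i": "insert_at_cursor", "gg": "first_message",
                    "G": "last_message", "dd": "delete_message"}
        if seq in commands:
            self.pending = ""
            if seq == "i":
                self.mode = "insert"
            return commands[seq]
        # Keep the key only if it could still start a longer sequence.
        if any(c.startswith(seq) for c in commands):
            self.pending = seq
            return None
        self.pending = ""
        return None
```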

File Structure

  • main.py: CLI entry point and configuration loading
  • app.py: Main application class and UI setup
  • custom_widgets/: UI components
    • chat.py: Chat message widgets
    • vimkey.py: Vim keybinding handling
    • model_select.py: Model selection popup
  • models/: LLM provider implementations
    • main.py: Provider router
    • openai.py: OpenAI API integration
  • render_chat.py: Standalone chat file renderer
  • setup.py: Package installation

Development

Running Tests

# Add tests as needed

Adding New LLM Providers

  1. Add a new implementation in models/ (e.g., anthropic.py)
  2. Update models/main.py to route to the new provider
  3. Add provider configuration to the TOML config
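
The routing in step 2 can be as simple as a lookup from model name to configured provider (hypothetical function and config shape, mirroring the Configuration section; the actual router may differ):

```python
def find_provider(model: str, config: dict) -> str:
    """Return the name of the provider whose model list contains *model*."""
    for name, provider_cfg in config.get("providers", {}).items():
        if model in provider_cfg.get("models", []):
            return name
    raise ValueError(f"no configured provider offers model {model!r}")
```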

Code Style

Follow existing Python conventions in the codebase. The project uses type hints and follows PEP 8.

License

MIT License - see LICENSE file for details.

Contributing

Contributions are welcome! Please open an issue or pull request for any bugs, feature requests, or improvements.

Acknowledgements

  • urwid for the terminal UI framework
  • litellm for LLM abstraction
  • Inspired by various terminal-based chat applications
