A blazing-fast, elegant AI agent framework for Go with a beautiful terminal UI. Build powerful AI assistants that can see, think, and act through a comprehensive tool system while enjoying a delightful developer experience.
- ⚡ Lightning Fast - Leverages Go's concurrency for parallel tool execution
- 🎨 Beautiful TUI - Elegant terminal interface with markdown rendering and animated spinners
- 🤖 9 LLM Providers - OpenAI, Anthropic, MiniMax, local models, and more
- 🛠️ Rich Tool System - File operations, bash commands, web search, and more
- 📦 Zero Dependencies - Single static binary, runs anywhere
- 🎯 Smart Interactions - ReAct prompting and native function calling
- ⚡ Live Streaming - Real-time response streaming from all providers
- 🎭 Flexible Modes - Interactive TUI, one-shot queries, or use as a library
```mermaid
graph TB
    User[User] --> TUI[Terminal UI]
    User --> CLI[CLI Commands]
    TUI --> Agent[Agent Core]
    CLI --> Agent
    Agent --> Memory[Conversation Memory]
    Agent --> LLM[LLM Router]
    LLM --> OpenAI[OpenAI Client]
    LLM --> Anthropic[Anthropic Client]
    LLM --> Local[Local Models]
    LLM --> Others[Other Providers]
    Agent --> Tools[Tool System]
    Tools --> FileOps[File Operations]
    Tools --> Bash[Bash Commands]
    Tools --> Search[Web Search]
    Tools --> Wiki[Wikipedia]

    subgraph "Tool Execution"
        Tools --> Schema[Schema Generator]
        Schema --> Validator[Input Validator]
        Validator --> Executor[Tool Executor]
    end

    subgraph "Beautiful TUI"
        TUI --> Glamour[Markdown Renderer]
        TUI --> Spinner[Loading Animations]
        TUI --> Input[Smart Input Box]
    end
```
```bash
# Install with Go
go install github.com/nachoal/simple-agent-go/cmd/simple-agent@latest

# Or download pre-built binaries
curl -L https://github.com/nachoal/simple-agent-go/releases/latest/download/simple-agent-$(uname -s)-$(uname -m) -o simple-agent
chmod +x simple-agent
```

Create a `.env` file:
```bash
# Required for at least one provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional providers
GOOGLE_API_KEY=...      # For Gemini models
MINIMAX_API_KEY=...     # MiniMax text models (M2.5, M2.5-lightning, etc.)
MOONSHOT_API_KEY=...    # Kimi (Chinese language)
DEEPSEEK_API_KEY=...    # Code-focused
GROQ_API_KEY=...        # Fast inference
PERPLEXITY_API_KEY=...  # Web-aware responses

# For the Google Search tool
GOOGLE_API_KEY=...
GOOGLE_CX=...           # Custom Search Engine ID
```
```bash
# Start interactive mode (default: OpenAI GPT-4)
simple-agent

# Start interactive mode with unrestricted bash commands (DANGEROUS)
simple-agent --yolo

# Start with a custom toolset for this run
simple-agent --tools read,bash,edit,write

# Start with all registered tools enabled
simple-agent --tools all

# Use a specific model
simple-agent --provider anthropic --model claude-3-opus-20240229

# Allow slower local-model requests up to 15 minutes each
simple-agent --provider lmstudio --model qwen3.5-27b --timeout 15

# Quick one-shot query
simple-agent query "What files are in the current directory?"

# Continue your most recent saved conversation
simple-agent --continue
simple-agent -c

# Browse recent saved conversations and pick one
simple-agent --resume
simple-agent -r

# Resume a specific saved conversation by session ID
simple-agent --resume 20260307_101530_abc123
simple-agent -r 20260307_101530_abc123

# List available tools
simple-agent tools list
```

Interactive sessions are stored under `~/.simple-agent/sessions/`. When you quit the TUI, simple-agent prints the exact `--resume <session-id>` command for that conversation. Resumed sessions reopen in the original workspace path, so file tools stay anchored to the same project.
The TUI provides a delightful chat experience:
- 🎨 Rich Formatting - Markdown rendering with syntax highlighting
- ⚡ Smart Input - Auto-expanding textarea that grows with your input
- 🔄 Live Updates - Animated spinner during processing
- 📜 Natural Scrolling - Messages flow naturally, no jarring screen clears
- ↕️ Resize Safe - Transcript and input region reflow cleanly when the terminal size changes
- 🎛️ Model Switching - Change models on the fly with `/model`

Slash commands:

- `/help` - Show available commands
- `/tools` - List available tools with descriptions
- `/model` - Interactively switch between models
- `/reload` - Reload runtime context/resources/models
- `/improve <goal>` - Run a guarded self-improve cycle (requires `SIMPLE_AGENT_ENABLE_IMPROVE=1`)
- `/system` - View the current system prompt
- `/verbose` - Toggle debug mode
- `/clear` - Clear conversation (Ctrl+L)
- `/exit` - Exit application (Ctrl+C)
| Tool | Description | Example Use |
|---|---|---|
| 🧮 calculate | Evaluate math expressions | "What's 2^10 + sqrt(144)?" |
| 📖 read | Read any file | "Show me the contents of main.go" |
| 💾 write | Create/overwrite files in the current working directory | "Create a Python hello world script" |
| ✏️ edit | Modify existing files in the current working directory | "Add error handling to that function" |
| 📁 directory_list | Browse directories in the current working directory | "What's in the src folder?" |
| 🖥️ bash | Run commands (restricted allowlist by default; use --yolo to allow any command) | "Show git status" |
| 📚 wikipedia | Search Wikipedia | "Tell me about quantum computing" |
| 🔍 google_search | Web search (requires API) | "Find the latest Go releases" |
| Provider | Best For | Popular Models |
|---|---|---|
| OpenAI | General purpose | gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | Complex reasoning | claude-3-opus, claude-3-sonnet |
| MiniMax | Coding and long context | MiniMax-M2.5, MiniMax-M2.5-lightning |
| Google | Multimodal tasks | gemini-1.5-pro, gemini-1.5-flash |
| Moonshot | Chinese language | moonshot-v1-8k, moonshot-v1-128k |
| DeepSeek | Code generation | deepseek-chat, deepseek-coder |
| Groq | Fast inference | llama-3-70b, mixtral-8x7b |
| Perplexity | Web-aware chat | llama-3.1-sonar-huge-128k-online |
| Local | Privacy-focused | Any Ollama/LM Studio model |
Notes:

- `--timeout` applies to each LLM request, including local-model providers such as LM Studio and custom OpenAI-compatible endpoints.
- File tools (`read`, `write`, `edit`, `directory_list`) are confined to the process working directory. Start `simple-agent` from the repo or sandbox you want it to modify.
Tools are automatically discovered. Create a new file in `tools/`:

```go
package tools

import (
	"context"
	"encoding/json"
	"fmt"
)

type WeatherTool struct{}

func NewWeatherTool() *WeatherTool {
	return &WeatherTool{}
}

func (t *WeatherTool) Name() string {
	return "weather"
}

func (t *WeatherTool) Description() string {
	return "Get current weather for a city"
}

func (t *WeatherTool) Execute(ctx context.Context, input string) (string, error) {
	var params struct {
		City string `json:"city" description:"City name"`
	}
	if err := json.Unmarshal([]byte(input), &params); err != nil {
		return "", fmt.Errorf("invalid parameters: %w", err)
	}

	// Your implementation here
	weather := fmt.Sprintf("The weather in %s is sunny, 22°C", params.City)
	return weather, nil
}
```
Register it in `tools/exports.go`:

```go
func NewWeatherToolFunc() Tool { return NewWeatherTool() }
```

Then register it in `internal/toolinit/init.go`:

```go
registry.Register("weather", tools.NewWeatherToolFunc)
```

Implement the `LLMClient` interface:
```go
type MyProvider struct {
	apiKey string
	model  string
}

func (p *MyProvider) StreamCompletion(
	ctx context.Context,
	messages []Message,
) (<-chan StreamChunk, error) {
	// Your implementation
}

func (p *MyProvider) GetAvailableModels(ctx context.Context) ([]Model, error) {
	// Return supported models
}
```
Public OSS verification:

```bash
make harness
```

Fast local verification:

```bash
make harness-fast
```

Private local harness, including Codex session analysis stored outside the repo:

```bash
make harness-private
```

Opt-in live LM Studio canary inside the private harness:

```bash
SIMPLE_AGENT_ENABLE_LIVE_CANARIES=1 make harness-private
```

Related docs:
- `docs/architecture.md`
- `docs/harness-benchmark.md`
- `docs/harness-engineering-task-list.md`
- `docs/codex-analysis.md`
- `docs/validation-matrix.md`
- `docs/runtime-state.md`
- `docs/AGENTS.md`
This repo also includes a local-only research layer for Codex-driven optimize/evaluate loops against simple-agent-go.
Files:
- `research/program.md`
- `research/allowed_paths.txt`
- `research/import_bench_case.py`
- `research/run_bench_case.py`
- `research/evaluate.sh`
- `research/score.py`
- `research/loop.sh`
Example:

```bash
research/loop.sh --attempts 3 --eval-mode both
```

Import a benchmark failure case and run the loop against it:
```bash
python3 research/import_bench_case.py \
  <bench-root>/failure_artifacts/mac/<artifact-dir> \
  --passing-result <bench-root>/results/qwen3.5-9b_pi_person_research.json

research/loop.sh --case research/cases/<case-slug> --attempts 2
```

Notes:
- Codex CLI is the research agent; `simple-agent-go` is the artifact being optimized.
- `research/results.tsv` and `research/runs/` are ignored local artifacts and are not intended for commits.
- Imported benchmark cases live under the ignored `research/cases/`.
- Private transcript-derived artifacts still live only under `~/.simple-agent/harness/<repo-slug>/`.
```go
import (
	"github.com/nachoal/simple-agent-go/agent"
	"github.com/nachoal/simple-agent-go/llm/openai"
)

// Create client
client, _ := openai.NewClient(
	openai.WithModel("gpt-4-turbo-preview"),
	openai.WithTemperature(0.7),
)

// Create agent
ag := agent.New(client,
	agent.WithMaxIterations(10),
	agent.WithSystemPrompt("You are a helpful assistant"),
)

// Query
response, _ := ag.Query(ctx, "What's the weather like?")
fmt.Println(response.Content)
```

Customize the system prompt:

```go
const myPrompt = `You are an expert Go developer.
Always write idiomatic Go code with proper error handling.
Focus on clarity and performance.`

ag := agent.New(client, agent.WithSystemPrompt(myPrompt))
```

- Concurrent Tool Execution - Tools run in parallel when possible
- Streaming Responses - Immediate feedback, no waiting
- Minimal Memory Footprint - Efficient memory usage
- Fast Startup - < 100ms to interactive prompt
```bash
# Clone the repository
git clone https://github.com/nachoal/simple-agent-go
cd simple-agent-go

# Install dependencies
go mod download

# Run tests
make test

# Build
make build

# Install locally
make install
```

Project structure:

```
simple-agent-go/
├── cmd/simple-agent/   # CLI entry point
├── agent/              # Core agent logic
├── llm/                # LLM provider implementations
├── tools/              # Built-in tools
├── tui/                # Terminal UI components
├── config/             # Configuration management
└── internal/           # Internal packages
```
MIT License - see LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request. Areas of interest:
- 🤖 New LLM providers
- 🛠️ Additional tools
- 🎨 TUI enhancements
- 📚 Documentation improvements
- 🧪 Test coverage
Built with love using these amazing libraries:
- Bubble Tea - Terminal UI framework
- Glamour - Markdown rendering
- Cobra - CLI framework
- Lipgloss - Terminal styling
Made with ❤️ by @nachoal