dgca/local-code-agent
Local Code Agent

An autonomous AI agent that improves your codebase, picking up work from GitHub issues. This project provides a flexible foundation for building AI agents that understand issues, generate solutions, and manage pull requests.

Features

  • 🤖 Autonomous Operation: Monitors GitHub issues and automatically handles tasks based on labels
  • 🧠 Local LLM Support: Primary support for Ollama models, with optional OpenAI integration
  • 📚 RAG Pipeline: Efficient context retrieval using ChromaDB for better task understanding
  • 🔄 Feedback Loop: Built-in feedback collection and learning mechanisms
  • 🛠️ Modular Architecture: Easy to extend and customize for specific use cases
  • 🔒 Type Safety: Full TypeScript support with Zod schema validation

Prerequisites

  • Bun (v1.0.0 or higher)
  • Ollama for local LLM support
  • Redis (optional, for persistence)

Installation

  1. Clone the repository:

    git clone https://github.com/dgca/local-code-agent.git
    cd local-code-agent
  2. Install dependencies:

    bun install
  3. Copy the environment variables template:

    cp .env.example .env
  4. Configure your environment variables in .env

Configuration

The agent can be configured through environment variables:

  • GitHub Configuration

    • GITHUB_TOKEN: Personal Access Token with repo scope
    • GITHUB_OWNER: Repository owner (username or organization)
    • GITHUB_REPO: Repository name
  • LLM Configuration

    • LLM_PROVIDER: LLM provider ('ollama' or 'openai')
    • LLM_MODEL: Model name (e.g., 'mistral' for Ollama)
    • OLLAMA_BASE_URL: Ollama API URL
    • OPENAI_API_KEY: OpenAI API Key (optional)
  • Storage Configuration

    • REDIS_URL: Redis connection URL (optional)
    • CHROMA_DB_PATH: ChromaDB storage path (optional)
  • Agent Configuration

    • MAX_ITERATIONS: Maximum iterations for the planning loop
    • POLL_INTERVAL: GitHub polling interval in milliseconds
    • APPROVED_LABELS: Labels that trigger agent action
    • PORT: HTTP server port
    • AUTO_START: Whether to start automatically
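
For example, a minimal .env might look like this (all values are illustrative: http://localhost:11434 is Ollama's default API address, and the ai-agent label is a hypothetical choice for APPROVED_LABELS):

```
# GitHub
GITHUB_TOKEN=ghp_your_token_here
GITHUB_OWNER=your-username
GITHUB_REPO=your-repo

# LLM (Ollama by default)
LLM_PROVIDER=ollama
LLM_MODEL=mistral
OLLAMA_BASE_URL=http://localhost:11434

# Agent behavior
MAX_ITERATIONS=5
POLL_INTERVAL=60000
APPROVED_LABELS=ai-agent
PORT=3000
AUTO_START=true
```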

Usage

  1. Start the agent:

    bun run dev
  2. The agent will start monitoring GitHub issues with the configured labels.

  3. Access the API endpoints:

    • Health check: GET /health
    • Agent control:
      • Start: POST /agent/start
      • Stop: POST /agent/stop
      • Status: GET /agent/status
    • Task management:
      • Get task: GET /tasks?id=<task_id>
      • Add feedback: POST /tasks/:id/feedback
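
As a sketch, the endpoints above could be wrapped in a small typed client. The base URL and default port 3000 are assumptions (see the PORT variable), and the class and method names here are hypothetical, not part of the repository:

```typescript
// Hypothetical minimal client for the agent's HTTP API.
// The default base URL assumes PORT=3000.
class AgentClient {
  constructor(private baseUrl: string = "http://localhost:3000") {}

  // GET /health
  healthUrl(): string {
    return `${this.baseUrl}/health`;
  }

  // GET /tasks?id=<task_id>
  taskUrl(id: string): string {
    return `${this.baseUrl}/tasks?id=${encodeURIComponent(id)}`;
  }

  // POST /tasks/:id/feedback
  feedbackUrl(id: string): string {
    return `${this.baseUrl}/tasks/${id}/feedback`;
  }

  // GET /agent/status — returns the agent's current status as JSON.
  async status(): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/agent/status`);
    return res.json();
  }
}
```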

Architecture

The project follows a modular architecture with the following components:

  • Agent Core: Central orchestrator that manages the autonomous workflow
  • GitHub Watcher: Monitors repository issues and creates tasks
  • Task Manager: Handles task lifecycle and persistence
  • LLM Client: Manages interactions with language models
  • RAG Pipeline: Provides relevant context for task understanding
  • Feedback System: Collects and processes feedback for improvement
  • Learning Module: Adapts agent behavior based on feedback
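
A hypothetical sketch of how some of these components might fit together; the interface names and shapes below are illustrative, not the repository's actual types:

```typescript
// Illustrative component shapes for the architecture above.
interface Task {
  id: string;
  issueNumber: number;
  status: "pending" | "done" | "failed";
}

// Task Manager: hands out pending tasks and records completion.
interface TaskManager {
  next(): Task | undefined;
  complete(task: Task): void;
}

// LLM Client: abstracts over Ollama or OpenAI.
interface LLMClient {
  generate(prompt: string): Promise<string>;
}

// One iteration of the Agent Core loop:
// pull a task, ask the LLM for a solution, record the result.
async function runOnce(tasks: TaskManager, llm: LLMClient): Promise<boolean> {
  const task = tasks.next();
  if (!task) return false; // nothing to do this iteration
  await llm.generate(`Resolve issue #${task.issueNumber}`);
  tasks.complete(task);
  return true;
}
```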

Development

  1. Run in development mode:

    bun run dev
  2. Run tests:

    bun test
  3. Build for production:

    bun run build

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with Bun
  • Uses Hono for the HTTP server
  • Powered by Ollama for local LLM support
