HackIllinois 2026 Modal MCP

# 🚀 Apollo

*The AI that builds its own tools.*

Apollo is an agentic system with a truly dynamic toolbox. Give it any goal and it will design, generate, deploy, and orchestrate custom MCP tools on the fly. No predefined toolset, no manual wiring. Just a prompt.

> "Plan a trip to Spain" → Apollo creates a weather API tool, a destination guide tool, and a local events tool, deploys them all in seconds, then uses them to deliver a comprehensive answer.


## ✨ Why Apollo?

Most AI agents are limited to a static set of tools chosen at development time. Apollo flips this model:

| Traditional Agents | Apollo |
| --- | --- |
| Fixed tool set at build time | Tools generated on demand from any prompt |
| Manual tool integration | Automatic deployment to serverless infra |
| Limited to pre-built capabilities | Unlimited capabilities via dynamic MCP servers |

Key capabilities:

- 🔧 **Dynamic Tool Generation** – LLM-powered code generation creates custom MCP servers tailored to each task
- ☁️ **Serverless Deployment** – Auto-deploys to Modal with zero infrastructure management
- 🗂️ **Automatic Registry** – Modal.Dict-based tool discovery and lifecycle management
- 🤖 **Smart Orchestration** – Supervisor agent with a ReAct loop for multi-step reasoning
- 📡 **Real API Integration** – Falls back to a curated knowledge base of 1,400+ public APIs to avoid hallucinated endpoints
- 🌌 **Live Visualization** – Real-time 3D solar-system view of the tool-building pipeline
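
For illustration, a planned tool spec might look like the following. The field names here are hypothetical, not the project's actual schema:

```python
# Hypothetical sketch of a spec the tool builder could produce for the
# "plan a trip to Spain" example; field names are illustrative only.
tool_spec = {
    "name": "weather-forecast",
    "description": "Fetch a multi-day forecast for a given city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def is_valid_spec(spec: dict) -> bool:
    """A generated MCP server needs at least a name, a description, and a schema."""
    return all(key in spec for key in ("name", "description", "input_schema"))

print(is_valid_spec(tool_spec))  # True
```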

๐Ÿ—๏ธ Architecture

Image


๐Ÿ› ๏ธ Setup

Prerequisites

  • Python 3.10+
  • A Modal account (free tier works)
  • An OpenAI and/or Anthropic API key

### 1. Clone & Install Dependencies

```bash
git clone https://github.com/your-team/apollo.git
cd apollo

pip install modal anthropic openai python-dotenv requests \
            uvicorn starlette sse-starlette
```

### 2. Configure Modal

```bash
# Authenticate with Modal
modal setup

# Store your API keys as Modal secrets (used by cloud workers)
modal secret create openai-secret OPENAI_API_KEY=sk-proj-...
modal secret create anthropic-secret ANTHROPIC_API_KEY=sk-ant-...
```

### 3. Set Environment Variables

Create a `.env` file in the project root:

```env
LLM_PROVIDER=openai            # or "anthropic"
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
```

### 4. Launch the Demo

```bash
python backend.py
```

Open http://localhost:8080, type a goal, hit **Run**, and watch Apollo build and use custom tools in real time.


## 🎮 Usage

### Web Interface (recommended)

```bash
python backend.py               # starts on http://localhost:8080
python backend.py --port 9000   # custom port
```

The web UI streams the full supervisor output live: you'll see tool planning, code generation, deployment, and the final answer rendered in Markdown.

### CLI – Build Tools Manually

```bash
# Generate and deploy MCP tools for a goal
modal run tools_builder.py --goal "research quantum computing papers"
```

### CLI – Run the Supervisor

```bash
# Run with auto-build (builds tools if the registry is empty)
python supervisor.py --prompt "plan a 5-day trip to Madrid"

# Use existing tools only
python supervisor.py --prompt "what's the weather in Tokyo?" --no-auto-build

# Local test mode (no Modal required)
python supervisor.py --prompt "hello" --test
```

### Registry Management

```bash
modal run registry_manager.py --action list          # list registered tools
modal run registry_manager.py --action test          # test all endpoints
modal run registry_manager.py --action clear         # clear the registry
modal run registry_manager.py --action add \
  --name "my-tool" --url "https://..."               # add a tool manually
```
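
Since Modal.Dict exposes a dict-like interface, the registry operations above can be sketched against a plain Python dict. The tool names and URLs below are placeholders:

```python
# Stand-in for the Modal.Dict-backed registry; a plain dict has the same
# get/set/delete surface that the manager's actions map onto.
registry: dict[str, str] = {}

def add_tool(name: str, url: str) -> None:
    registry[name] = url          # --action add

def list_tools() -> list[str]:
    return sorted(registry)       # --action list

def clear_registry() -> None:
    registry.clear()              # --action clear

add_tool("weather-forecast", "https://example.modal.run/weather")
add_tool("local-events", "https://example.modal.run/events")
print(list_tools())  # ['local-events', 'weather-forecast']
```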

### Live Visualization

Apollo includes a 3D solar-system visualization that animates the tool-building pipeline in real time:

```bash
# Launch the visualization server (opens a browser automatically)
python viz_server.py

# Or run tools_builder with the --viz flag
modal run tools_builder.py --goal "your goal" --viz
```

## 📂 Project Structure

```text
Apollo/
├── backend.py              # Web server: serves UI + streams supervisor output
├── supervisor.py           # Main agentic supervisor with ReAct loop
├── tools_builder.py        # MCP generation & deployment engine (runs on Modal)
├── mcp_builder.py          # Code generation library (used by Modal workers)
├── mcp_template.py         # Template for all generated MCP servers
├── api_reference.py        # Public-APIs fallback knowledge base (1,400+ APIs)
├── registry_manager.py     # CLI for Modal.Dict registry CRUD
├── viz_server.py           # SSE server for live pipeline visualization
├── ui_server.py            # Alternative UI server
├── test_supervisor.py      # Test suite (14 tests)
├── notes.txt               # Developer notes & reference commands
├── .env                    # Local environment configuration
├── ui/
│   └── index.html          # Main chat interface
├── demo/
│   └── index.html          # 3D solar system visualization
├── api_reference_data/     # Cached public API database
│   └── public_apis.json
└── generated_mcps/         # Output directory for generated MCP servers
```

## ⚙️ How It Works

1. **User submits a prompt** – e.g. "plan a trip to Spain"
2. **Tool Builder plans MCP servers** – the LLM decomposes the goal into 2–6 specific, single-responsibility tools (e.g. weather-forecast, destination-guide, local-events)
3. **Parallel code generation** – Modal workers generate Python code from the specs in parallel, using a strict MCP template and validating against real public APIs
4. **Deployment & registration** – each server is deployed with `modal deploy`, and its endpoint URL is registered in a shared Modal.Dict registry
5. **Supervisor discovers tools** – it queries the registry, fetches `tools/list` from each MCP endpoint, and converts the results to the LLM's native tool-calling format
6. **Agentic execution loop** – the LLM decides which tools to call, executes them via MCP JSON-RPC over HTTP, feeds the results back, and repeats (up to 10 iterations)
7. **Final answer** – a comprehensive, synthesized response is returned to the user
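
The execution loop in the later steps can be sketched as follows. The `llm` and `call_tool` callables are stubs standing in for the real provider client and the MCP JSON-RPC HTTP calls; only the JSON-RPC request shape reflects the actual protocol.

```python
import json
from typing import Callable

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body, e.g. for MCP tools/list or tools/call."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def react_loop(llm: Callable, call_tool: Callable, prompt: str,
               max_iterations: int = 10) -> str:
    """Minimal ReAct loop: ask the LLM, run any tool it requests,
    feed the result back, and stop once it produces a final answer."""
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        decision = llm(history)
        if "final" in decision:
            return decision["final"]
        result = call_tool(decision["tool"], decision["args"])
        history.append({"role": "tool", "content": result})
    return "Stopped: iteration limit reached."

# Stubs that mimic one tool round-trip followed by a final answer.
def stub_llm(history):
    if any(m["role"] == "tool" for m in history):
        return {"final": f"Forecast: {history[-1]['content']}"}
    return {"tool": "weather-forecast", "args": {"city": "Tokyo"}}

def stub_call_tool(tool, args):
    # In Apollo this would POST jsonrpc_request("tools/call", ...) to the endpoint.
    return "18°C and sunny"

print(react_loop(stub_llm, stub_call_tool, "what's the weather in Tokyo?"))
# Forecast: 18°C and sunny
```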


## 🧪 Testing

```bash
# Run all tests
python test_supervisor.py

# With pytest
pytest test_supervisor.py -v

# Local test mode (no Modal, no API keys needed)
python supervisor.py --prompt "hello" --test
```

## 🔧 Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_PROVIDER` | `openai` | LLM backend (`openai` or `anthropic`) |
| `OPENAI_API_KEY` | – | Your OpenAI API key |
| `ANTHROPIC_API_KEY` | – | Your Anthropic API key |
| `REGISTRY_NAME` | `mcp-tool-registry` | Name of the Modal.Dict registry |
| `MAX_ITERATIONS` | `10` | Max supervisor loop iterations |
| `REQUEST_TIMEOUT` | `30` | MCP HTTP call timeout (seconds) |
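
One plausible way these defaults get applied (the project's actual loading code isn't shown here; the names match the table above):

```python
import os

# Read each setting from the environment, falling back to the table's defaults.
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai")
REGISTRY_NAME = os.getenv("REGISTRY_NAME", "mcp-tool-registry")
MAX_ITERATIONS = int(os.getenv("MAX_ITERATIONS", "10"))       # supervisor loop cap
REQUEST_TIMEOUT = float(os.getenv("REQUEST_TIMEOUT", "30"))   # seconds

print(LLM_PROVIDER, MAX_ITERATIONS, REQUEST_TIMEOUT)
```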

## 🤔 Design Decisions

**Why MCP?** The Model Context Protocol gives us a standardized interface for tool discovery and execution. Every generated tool speaks the same language, which keeps orchestration simple.

**Why Modal?** Serverless deployment means we don't manage infrastructure. Generated tools go from code to live HTTPS endpoints in seconds, with automatic scaling and zero ops burden.

**Why not LangGraph?** A simple ReAct loop is lighter, easier to debug, and adds no extra dependencies. The architecture can evolve to LangGraph later if multi-agent collaboration or complex branching is needed.
