Inspiration
Currently, AI systems are powerful but fundamentally static. Large language models are bounded by their training data, and even advanced agents rely on predefined toolsets tailored to narrow domains.
We wanted to explore a different paradigm: what if an AI system could extend its own capabilities on demand?
Our team set out to design an agent that isn't limited to what it was originally given: one that can identify missing capabilities, create them, and reuse them. Inspired by the idea of exploration (and HackIllinois's space theme), we built Apollo: a self-extending AI with a dynamic toolbox that grows as it encounters new tasks across domains, from travel planning to academic research to project design.

What it does
Apollo is a self-extending agent that generates and deploys new capabilities on demand.
Instead of being limited to a fixed toolset, Apollo synthesizes MCP servers at runtime, deploys them on Modal, validates them, and adds them to a capability registry that the agent can discover and use. This allows Apollo to handle tasks in domains where no pre-existing tools exist.
In practice, Apollo:
- Interprets a user’s request
- Determines what capabilities are required
- Builds missing tools automatically
- Deploys and validates them
- Reuses them across future tasks
This transforms an AI agent from a static assistant into a capability-growing system.
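The steps above can be sketched as a single request-handling loop. This is an illustrative stand-in, not Apollo's actual code: `plan_capabilities` and `build_tool` are hypothetical names for the LLM planning step and the synthesize-and-deploy step.

```python
# Hypothetical sketch of Apollo's request pipeline. The function bodies
# are stubs standing in for LLM planning and MCP synthesis/deployment.

REGISTRY: dict[str, str] = {}  # tool name -> deployed endpoint

def plan_capabilities(request: str) -> list[str]:
    """Stand-in for the LLM planning step: map a request to tool names."""
    return ["flight_search"] if "flight" in request else ["web_search"]

def build_tool(name: str) -> str:
    """Stand-in for MCP synthesis + Modal deployment; returns an endpoint."""
    return f"https://example.modal.run/{name}"

def handle_request(request: str) -> list[str]:
    endpoints = []
    for cap in plan_capabilities(request):
        if cap not in REGISTRY:              # capability missing?
            REGISTRY[cap] = build_tool(cap)  # build, deploy, register
        endpoints.append(REGISTRY[cap])      # reused on future tasks
    return endpoints
```

The registry check before `build_tool` is what makes the system capability-growing rather than stateless: the second request needing `flight_search` skips synthesis entirely.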
How we built it
Apollo is a modular agent architecture built around dynamic tool synthesis and serverless deployment.
Core stack
- FastAPI / Starlette backend with SSE streaming
- Supervisor agent using a ReAct loop
- Tool Builder orchestration layer
- FastMCP server framework
- Modal serverless compute + Modal.Dict registry
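The SSE streaming in the backend lets the client watch the agent work in real time. A minimal sketch of the wire framing (the event name shown is hypothetical, not Apollo's actual event schema):

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Frame a payload as a Server-Sent Events message, the format a
    FastAPI/Starlette streaming response emits with media type
    text/event-stream."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# e.g. a progress event emitted while the supervisor runs:
msg = sse_event("tool_built", {"name": "flight_search", "status": "validated"})
```

Each agent step (plan, build, validate, execute) can be pushed as one such event, so the UI updates before the final answer is ready.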
Architecture overview
User → Supervisor Agent → (if missing tools) Tool Builder → MCP Servers → Registry → Supervisor executes
Supervisor agent
- LLM-driven reasoning loop
- Discovers available tools from registry
- Executes multi-step plans
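A minimal sketch of such a reasoning loop, with the iteration cap mentioned under "Challenges" built in (the `llm` and `tools` interfaces here are assumptions, not Apollo's real ones):

```python
def react_loop(task: str, llm, tools: dict, max_iters: int = 8):
    """Bounded ReAct loop: the model alternates reasoning and tool calls
    until it emits a final answer or the iteration cap is hit."""
    history = [task]
    for _ in range(max_iters):          # cap guarantees termination
        step = llm(history)             # reason: returns an action dict
        if step["type"] == "final":
            return step["answer"]
        result = tools[step["tool"]](step["args"])  # act
        history.append(result)                      # observe
    return "stopped: iteration cap reached"
```

The cap matters because a self-extending agent can otherwise plan, build, and re-plan indefinitely; the loop always returns.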
Tool Builder orchestrator
- Decomposes required capabilities
- Spawns one MCP builder agent per tool
- Deploys servers to Modal
- Registers validated tools
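One way the fan-out could look, using a thread pool to stand in for spawning one builder agent per tool (again a sketch; `build_and_deploy` is a hypothetical stub for code generation plus Modal deployment):

```python
from concurrent.futures import ThreadPoolExecutor

def build_and_deploy(tool_name: str) -> tuple[str, str]:
    """Stand-in for one MCP builder agent: generate server code,
    deploy it to Modal, and return (name, endpoint)."""
    return tool_name, f"https://example.modal.run/{tool_name}"

def orchestrate(missing: list[str], registry: dict) -> None:
    """One builder per missing tool, run concurrently."""
    with ThreadPoolExecutor() as pool:
        for name, endpoint in pool.map(build_and_deploy, missing):
            registry[name] = endpoint   # register only after deployment
```

Registering only after `build_and_deploy` returns keeps half-built tools out of the registry the supervisor reads from.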
Validation layer
After deployment, each MCP server is evaluated by a deterministic testing agent:
- Executes predefined tests
- Verifies schema + behavior
- On failure → feeds error back to builder
- Retries once (bounded loop)
- On success → registers tool
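The failure-feedback-retry cycle above can be captured in a few lines. This is a sketch under assumed interfaces: `build` takes the spec plus prior error feedback, and `run_tests` returns a pass flag with an error message.

```python
def validate_with_retry(spec: str, build, run_tests, max_retries: int = 1):
    """Build a tool, run its deterministic tests, and on failure feed the
    error back to the builder; the retry count is bounded."""
    feedback = None
    for _ in range(max_retries + 1):      # initial build + bounded retries
        server = build(spec, feedback)
        ok, error = run_tests(server)
        if ok:
            return server                 # success -> caller registers it
        feedback = error                  # failure -> error goes back to builder
    return None                           # give up; tool is never registered
```

Returning `None` rather than raising lets the supervisor degrade gracefully when a tool simply cannot be synthesized.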
Registry
A distributed capability store (Modal.Dict) mapping tool names → endpoints, enabling runtime discovery and reuse.
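A sketch of the registry's role in discovery; a plain dict stands in here for the distributed Modal.Dict store, and the method names are illustrative:

```python
class CapabilityRegistry:
    """Tool name -> endpoint mapping with runtime discovery.
    In Apollo this store would be distributed (Modal.Dict); a plain
    dict stands in for it here."""

    def __init__(self):
        self._store: dict[str, str] = {}

    def register(self, name: str, endpoint: str) -> None:
        self._store[name] = endpoint

    def discover(self, required: list[str]) -> tuple[dict, list[str]]:
        """Split required capabilities into (available, missing) so the
        supervisor knows what to reuse and what to send to the builder."""
        available = {n: self._store[n] for n in required if n in self._store}
        missing = [n for n in required if n not in self._store]
        return available, missing
```

The `discover` split is the seam between the supervisor and the Tool Builder: everything in `missing` triggers synthesis, everything in `available` is executed directly.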
Challenges we ran into
Autonomous tool generation reliability
Automatically synthesizing API-backed servers is brittle: APIs vary widely in schemas, auth, and error handling. We mitigated this with a validation layer and bounded retries.
Safe deployment and orchestration
We needed a way to deploy servers dynamically during agent execution without breaking isolation or scaling. Modal's serverless containers solved this but required careful orchestration logic.
Capability discovery vs. infinite loops
Agents that can create tools risk recursive generation. We constrained retries and capped supervisor iterations to guarantee termination.
Accomplishments that we're proud of
- Designing a self-extending agent architecture from scratch
- Automating MCP server generation and deployment on Modal
- Building a validation layer for autonomous tool creation
- Enabling persistent capability reuse across tasks
- Delivering a working end-to-end dynamic toolbox system
Most importantly, we demonstrated that agents don’t need to be limited to fixed tools. They can grow.
What we learned
- Dynamic capability synthesis is feasible but requires strong validation
- Serverless infrastructure is ideal for agent-generated services
- Agents need guardrails (bounded retries, iteration caps)
- Architecture matters more than feature count in early systems
- Retrieval tools are the safest first domain for autonomous generation
We also gained experience in distributed system design, agent orchestration, and building production-minded validation into AI workflows.
What's next for Apollo
Apollo currently generates retrieval and analysis tools. Next steps focus on expanding actionability and robustness:
- Authenticated tool support (user-provided API keys)
- Transactional MCPs (calendar, booking, messaging)
- Tool quality scoring and selection
- Capability versioning and updates
- Cross-user capability sharing
- Larger tool libraries and domain packs
Our long-term vision is an AI system that continuously expands its abilities, moving from static assistants toward adaptive, evolving agents.

