🐸 Frog Framework

Demo: https://youtu.be/gTpLuyyTdfM — we didn't have editing software, so the demo is split into two videos; please watch both.

Inspiration

Every agent developer has been there: spending weeks fighting LangChain, CrewAI, AutoGen, and n8n just to build simple workflows. Writing 300+ lines of boilerplate instead of focusing on solving real problems.

We asked: What if agent development could be as simple as calling the OpenAI API?

What it does

Frog is a FastAPI micro-service that provides one OpenAI-compatible API for all agent development:

Choose the base model, workflow, and tools, send the content through the API call, and that's it: an agent in a single API call.

Same endpoint, different capabilities - just change a parameter and Frog intelligently routes your request.

# Tier 1: Direct chat
curl localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model": "gpt-4o-mini", "messages": [...]}'

# Tier 2: Basic tool use with the base model
curl localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model": "gpt-4o-mini", "messages": [...], "tools": ["browser.search"]}'

# Tier 3: Complex workflow, submitted as n8n workflow JSON (the workflow can be cached and later referred to by its hash)
curl localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model": "gpt-4o-mini", "messages": [...], "workflow": {...}}'
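The Tier 3 hash-caching idea could look like this on the client side. This is a hedged sketch: the `workflow_hash` request field and the `workflow_hash()` helper are hypothetical illustrations of the caching concept, not documented Frog parameters.

```python
import hashlib
import json

def workflow_hash(workflow: dict) -> str:
    """Hash a workflow definition so later calls can reference it by digest."""
    # Canonical JSON (sorted keys, no whitespace) makes the hash deterministic.
    canonical = json.dumps(workflow, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

workflow = {
    "nodes": [{"id": "search"}, {"id": "summarize"}],
    "edges": [["search", "summarize"]],
}
digest = workflow_hash(workflow)

# First call submits the full workflow; subsequent calls could send only the hash.
first_request = {"model": "gpt-4o-mini", "messages": [], "workflow": workflow}
cached_request = {"model": "gpt-4o-mini", "messages": [], "workflow_hash": digest}
```

Hashing the canonical JSON means semantically identical workflows map to the same cache key regardless of key order in the submitted document.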

How we built it

Core Innovation: Intelligent tier routing based on request parameters

  • Tier 1: Direct proxy to OpenRouter for simple chat
  • Tier 2: LLM-powered workflow planner + tool orchestrator
  • Tier 3: DAG execution engine with dependency management
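The routing idea above can be sketched as a pure function of the request body. The function name and exact precedence are our assumptions for illustration, not Frog's actual implementation:

```python
def route_tier(request: dict) -> int:
    """Pick an execution tier from the shape of the request body (sketch)."""
    if request.get("workflow"):   # full workflow JSON -> Tier 3 DAG engine
        return 3
    if request.get("tools"):      # tool list -> Tier 2 planner + orchestrator
        return 2
    return 1                      # plain chat -> Tier 1 direct proxy
```

Checking `workflow` before `tools` lets a workflow request also carry tools without being downgraded to Tier 2.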

Tech Stack: FastAPI, Python asyncio, OpenRouter API, Fernet encryption, Docker

Key Features:

  • OpenAI API compatibility (drop-in replacement)
  • Built-in tool registry (browser, Python, HTTP)
  • Real-time streaming across all tiers
  • Automatic workflow generation
  • Production-ready security and error handling
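Because the endpoint is OpenAI-compatible, a client can consume any tier's stream with the same parsing loop. A minimal sketch, assuming Frog emits the standard OpenAI `data: {json}` server-sent-event lines (the helper name is ours):

```python
import json

def parse_sse_line(line: str):
    """Extract the delta text from one OpenAI-style SSE line, or None."""
    # Skip non-data lines and the terminal "[DONE]" sentinel.
    if not line.startswith("data: ") or line.strip() == "data: [DONE]":
        return None
    payload = json.loads(line[len("data: "):])
    return payload["choices"][0]["delta"].get("content")
```

In practice you would feed this helper each line from `httpx.stream(...)` / `response.iter_lines()` against the `/v1/chat/completions` endpoint with `"stream": true` set.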

Challenges we ran into

Balancing Power vs Simplicity: Creating a unified API interface required careful abstraction design.

OpenAI Compatibility: Ensuring seamless integration with existing OpenAI codebases while extending functionality beyond basic chat.

Accomplishments that we're proud of

📊 Impact Numbers

  • 97% reduction in code complexity (300+ LOC → 10 LOC)
  • 80% reduction in framework dependencies (5+ → 1)
  • Zero learning curve (OpenAI API compatible)

🎯 Technical Achievements

  • 600 total LOC for the entire micro-service
  • Real-time streaming across all execution modes
  • OpenAI drop-in compatibility

🚀 Developer Experience

# Traditional approach: 300+ lines
import langchain, crewai, autogen
# ... hundreds of lines of setup ...

# Frog approach: 10 lines
import httpx

response = httpx.post("http://localhost:8000/v1/chat/completions", json={
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Research AI trends"}],
    "tools": ["browser.search", "python.exec"],  # Auto-routes to Tier 2!
})

What we learned

Simplicity beats complexity: The hardest part wasn't building features - it was making complex functionality simple to use. Every API decision was evaluated through the lens of developer experience.

AI-native tooling is the future: Using LLMs for automatic workflow planning opens entirely new possibilities for developer productivity.

API design > implementation: Choosing OpenAI compatibility was crucial - developers can leverage existing knowledge while gaining new capabilities.

Constraints drive innovation: The 600 LOC limit forced elegant solutions and eliminated bloat.

What's next for Frog

Short-term (30 days):

  • Enhanced tool registry with plugin support
  • PostgreSQL integration for production
  • Authentication and monitoring dashboard
  • One-click cloud deployment

Medium-term (3-6 months):

  • Visual workflow builder for Tier 3
  • Multi-LLM provider support
  • Enterprise features (RBAC, audit logs)
  • Community tool marketplace

Long-term vision:

  • Frog Cloud hosted service
  • IDE integrations (VS Code, GitHub Actions)
  • Framework migration tools
  • Open source ecosystem

Built With

  • FastAPI - High-performance Python web framework
  • OpenRouter - Multi-LLM API access
  • Python asyncio - Asynchronous execution
  • Fernet - Symmetric encryption for secrets
  • Docker - Containerized deployment

Try it Now 🐸
