Inspiration
The inspiration for Protosynthesis came from the frustration of gluing together disparate AI APIs and data sources. Whether you're building a chatbot that needs Google Maps data or an agent that triggers Stripe payments, the "glue code" often becomes a tangled mess of Python scripts and environment variables. We wanted to smooth out this process and make connecting an LLM to a database as simple as drawing a line between two boxes, so we built Protosynthesis to be the visual nervous system for the next generation of AI applications.
What it does
Protosynthesis is a visual workflow builder that lets users create complex, intelligent backend processes without writing a single line of boilerplate code.
- Visual Logic: Users drag and drop "nodes" representing APIs (OpenAI, Gemini, Stripe, Twilio, etc.) and logic blocks (Loops, If/Else).
- Real-time Execution: Watch data flow through your graph in real-time. Nodes light up as they execute, providing immediate visual feedback.
- One-Click Integration: Connect disparate services like sending an SMS via Twilio when a MongoDB record updates, or generating an image with Stability AI based on a user's prompt.
- Dynamic Context: Pass data between nodes seamlessly. The output of an OpenAI call can be directly piped into a Notion page or a Google Sheet.
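The "dynamic context" piping can be sketched as a tiny chain runner. This is an illustrative sketch only; `Node` and `run_chain` are hypothetical names, not Protosynthesis's actual API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: each node transforms a dict of inputs into a dict
# of outputs, and a chain pipes one node's output into the next.
@dataclass
class Node:
    name: str
    run: Callable[[dict], dict]  # inputs -> outputs

def run_chain(nodes: list[Node], initial: dict) -> dict:
    """Execute nodes in order, piping each node's output into the next."""
    data = initial
    for node in nodes:
        data = node.run(data)
    return data

# Example: an "LLM" node whose output feeds a "Sheet" node.
llm = Node("openai_chat", lambda d: {"text": f"summary of {d['prompt']}"})
sheet = Node("google_sheet", lambda d: {"row": [d["text"]]})
result = run_chain([llm, sheet], {"prompt": "Q3 sales"})
```

In the real builder the same idea applies per-edge rather than as a linear list: drawing a wire declares which output field feeds which input field.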
Key Features
- Moorcheh AI RAG System (Documentation Q&A)
  - Workflow_api_schemas (23 documents): Complete API documentation with authentication, parameters, rate limits, and best practices
  - Workflow_node_templates (21 documents): Node usage guides with common patterns and workflow examples
  - Workflow_instructions (10 documents): Step-by-step how-to guides for platform features
  - Semantic Search: Vector-based retrieval that finds the top-k relevant documents across namespaces
  - LLM Generation: Claude Sonnet 4 generates context-aware answers with source citations
  - 7 Query Patterns: Node recommendations, connection validation, instructions lookup, API schema info, field mapping, conversational chat, and troubleshooting
- Gemini Agentic AI (Function Calling)
  - Autonomous Workflow Construction: Natural language interface where Gemini 2.0 Flash autonomously builds workflows
  - Agentic Loop: Multi-iteration execution that lets Gemini call multiple tools sequentially, using previous results to inform subsequent decisions
  - Context Enrichment: Automatic injection of workflow state, available APIs, and node metadata into prompts
  - Real-time Canvas Sync: Tool executions trigger MongoDB updates that auto-refresh the frontend via Zustand state management
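The agentic loop above can be sketched generically. Here `call_model`, `TOOLS`, and the message format are assumptions for illustration, not the actual Gemini SDK interface:

```python
# Sketch of a multi-iteration agentic loop: the model either requests a
# tool call or returns a final answer, and tool results are appended to
# the history so later decisions can build on earlier ones.
TOOLS = {
    "add_node": lambda kind: {"status": "added", "kind": kind},
    "connect_nodes": lambda src, dst: {"status": "connected", "edge": [src, dst]},
}

def agent_loop(call_model, prompt, max_iters=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_iters):
        reply = call_model(history)
        if "tool" in reply:  # model asked to execute a tool
            result = TOOLS[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "content": result})
        else:  # model produced a final answer
            return reply["answer"]
    return None  # iteration budget exhausted
```

Capping iterations (`max_iters`) keeps a confused model from looping forever, at the cost of occasionally returning no answer.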
How we built it
Protosynthesis is a modern full-stack application leveraging the best tools for interactivity and performance:
- Frontend: Next.js, ReactFlow (Canvas), TailwindCSS, Zustand
- Backend: Flask, Google Generative AI SDK, Moorcheh AI, Supabase (Auth)
- Database: MongoDB Atlas
Challenges we ran into
- Graph Execution State: Managing the state of an asynchronous, multi-branching graph in real-time was difficult. We had to ensure that downstream nodes waited for data from upstream nodes without freezing the entire execution flow.
- Schema Standardization: Every API has a different structure. Creating a unified "block" interface that could handle the nuances of Stripe's financial objects and OpenAI's chat messages required designing a flexible and robust schema definition system.
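The first challenge, letting downstream nodes await only their own upstream data without freezing the rest of the graph, can be sketched with asyncio tasks. The names and graph shape here are illustrative, not the production engine:

```python
import asyncio

def topo_order(graph):
    """Depth-first topological sort; graph maps node -> upstream deps."""
    seen, order = set(), []
    def visit(n):
        if n not in seen:
            seen.add(n)
            for dep in graph[n]:
                visit(dep)
            order.append(n)
    for n in graph:
        visit(n)
    return order

async def execute_graph(graph, funcs):
    """Run each node as a task that awaits only its own dependencies,
    so independent branches execute concurrently."""
    tasks = {}
    async def run_node(name):
        upstream = await asyncio.gather(*(tasks[dep] for dep in graph[name]))
        return await funcs[name](upstream)
    for name in topo_order(graph):  # schedule dependencies first
        tasks[name] = asyncio.create_task(run_node(name))
    return {name: await task for name, task in tasks.items()}
```

Because each node awaits only its direct upstream tasks, a slow branch stalls just its own descendants; sibling branches keep running.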
Accomplishments that we're proud of
- Live Visual Debugging: Seeing the graph "come alive" as data flows through the wires is incredibly satisfying and makes debugging intuitive.
- 15+ Integrations: We successfully integrated a wide range of powerful APIs, proving that our schema system is scalable.
What's next for Protosynthesis
- Marketplace: A community hub where users can share and fork workflow templates.
- Self-Hosting: Docker containers to let enterprises run Protosynthesis on their own infrastructure.
- Autonomous Agents: New nodes that can run indefinitely, reacting to events and managing long-running processes autonomously.




