One Question. Many Minds. Collective Intelligence.
A powerful multi-model AI research platform that queries multiple LLMs in parallel and synthesizes their reasoning into a single collective consensus. Experience responses up to 4x faster with concurrent agent processing!
MeshMind is a cutting-edge AI orchestration platform that eliminates single-model bias by querying multiple large language models simultaneously. Instead of relying on one AI's perspective, MeshMind creates a "mesh" of AI agents working together to provide more balanced, comprehensive, and nuanced responses.
- 🔀 Multi-Model Orchestration: Run up to 4 AI agents in true parallel execution, each with different models
- ⚡ Ultra-Fast Parallel Processing: All agents execute concurrently - get responses up to 4x faster than sequential processing
- 🤖 Provider Support: Integrate with OpenRouter and Vercel AI Gateway for access to dozens of models
- ⚡ Real-Time Streaming: See responses from each agent as they arrive with beautiful segmented UI
- 🎯 Custom System Prompts: Configure each agent with unique instructions and perspectives
- 🔐 Secure Authentication: OAuth support (Google, GitHub) and traditional email/password
- 💾 Persistent Conversations: All chats saved with full history and agent configurations
- 🎨 Beautiful UI: Modern, responsive interface built with Tailwind CSS and Radix UI
- 🌙 Dark Mode: Seamless theme switching for comfortable viewing
Frontend:
- TanStack Start - Full-stack React framework
- TanStack Router - Type-safe routing
- TanStack Query - Async state management
- Tailwind CSS - Utility-first styling
- Radix UI - Accessible components
- Zustand - State management
Backend:
- Convex - Real-time backend with RPC actions
- Node.js - Server runtime
- JWT + Argon2 - Secure authentication
AI Integration:
- OpenRouter API - Access to GPT, Claude, Gemini, and more
- Vercel AI Gateway - Unified AI model interface
- Custom streaming protocol for multi-agent responses
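To make the provider integration concrete, here is a minimal sketch of building a request against OpenRouter's OpenAI-compatible chat completions endpoint. The `Agent` type and `buildOpenRouterRequest` helper are illustrative, not MeshMind's actual code; only the endpoint URL and body shape follow OpenRouter's public API.

```typescript
// Illustrative sketch — the Agent shape and helper name are assumptions.
interface Agent {
  model: string;        // e.g. "openai/gpt-4o" or "anthropic/claude-3.5-sonnet"
  systemPrompt: string; // this agent's unique instructions
}

function buildOpenRouterRequest(agent: Agent, question: string, apiKey: string) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: agent.model,
        stream: true, // stream tokens back to the UI as they arrive
        messages: [
          { role: "system", content: agent.systemPrompt },
          { role: "user", content: question },
        ],
      }),
    },
  };
}

// Usage: const { url, init } = buildOpenRouterRequest(agent, question, key);
// const res = await fetch(url, init);
```

Because OpenRouter mirrors the OpenAI API shape, swapping in Vercel AI Gateway mostly means changing the base URL and credentials.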
# Clone the repository
git clone https://github.com/yourusername/meshmind.git
cd meshmind
# Install dependencies
npm install
# Set up environment variables (see .env.example)
cp .env.example .env.local
# Initialize Convex
npx convex dev
# Start development server
npm run dev
Open http://localhost:3000 to see MeshMind in action!
Parallel Execution Engine:
// All agents execute concurrently using Promise.allSettled()
const agentPromises = agents.map(async (agent) => {
  // Each agent processes independently
  const response = await callAIProvider(agent);
  return { agent, response };
});

// Wait for all agents in parallel (not sequential!)
const results = await Promise.allSettled(agentPromises);

// Stream results in order as the UI receives them
Performance Optimizations:
- Single chat history fetch shared across all agents
- Concurrent web scraping for firecrawl-enabled agents
- Non-blocking error handling per agent
- Streaming results maintain UI responsiveness
- Console logging tracks individual agent timing
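The per-agent error handling mentioned above can be sketched as follows. `Promise.allSettled` never rejects, so one failing agent cannot take down the others; the `partitionResults` helper and the `AgentResult` shape are illustrative names, not MeshMind's actual internals.

```typescript
// Illustrative sketch of non-blocking per-agent error handling.
interface AgentResult {
  agentId: string;
  text: string;
}

function partitionResults(
  settled: PromiseSettledResult<AgentResult>[],
): { ok: AgentResult[]; failed: string[] } {
  const ok: AgentResult[] = [];
  const failed: string[] = [];
  for (const r of settled) {
    if (r.status === "fulfilled") ok.push(r.value);
    else failed.push(String(r.reason)); // surface the error, don't rethrow
  }
  return { ok, failed };
}

async function runAgents(
  agents: { id: string; call: () => Promise<string> }[],
) {
  // All calls start immediately; nothing waits on a previous agent.
  const settled = await Promise.allSettled(
    agents.map(async (a) => ({ agentId: a.id, text: await a.call() })),
  );
  return partitionResults(settled);
}
```

A failed agent simply lands in `failed`, so the UI can render an error card for it while the other agents' responses display normally.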
- Configure Your Agents: Set up 1-4 AI agents with different models and personalities
- Ask Your Question: Type your query once
- Get Multiple Perspectives: Each agent processes your question simultaneously
- Compare & Analyze: View all responses side-by-side in beautifully segmented cards
- Make Informed Decisions: Synthesize insights from multiple AI perspectives
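A per-agent configuration along the lines of the steps above might look like this. The field names and the 1-to-4 agent bound are taken from this README; the `AgentConfig` shape itself is an assumption, not MeshMind's actual schema.

```typescript
// Illustrative agent configuration — field names are assumptions.
interface AgentConfig {
  name: string;          // label shown on the response card
  model: string;         // provider model id, e.g. "google/gemini-1.5-pro"
  systemPrompt: string;  // unique instructions / personality
}

function validateAgents(agents: AgentConfig[]): void {
  if (agents.length < 1 || agents.length > 4) {
    throw new Error("MeshMind runs between 1 and 4 agents");
  }
}

const agents: AgentConfig[] = [
  { name: "Skeptic", model: "openai/gpt-4o", systemPrompt: "Challenge every claim." },
  { name: "Optimist", model: "anthropic/claude-3.5-sonnet", systemPrompt: "Find the upside." },
];
validateAgents(agents); // ok: two agents, within the 1–4 bound
```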
User Question
↓
┌────┴────┐
│ Mesh │
│ Router │
└────┬────┘
↓
├─────────┬─────────┬─────────┐
│ │ │ │
↓ ↓ ↓ ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Agent 1 │ │ Agent 2 │ │ Agent 3 │ │ Agent 4 │
│ (GPT-4) │ │(Claude) │ │(Gemini) │ │ (Llama) │
└────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘
│ │ │ │
└───────────┴───────────┴───────────┘
↓
⚡ PARALLEL EXECUTION ⚡
All agents run concurrently!
↓
Real-time Streaming to UI
(Results appear as agents complete)
MeshMind uses true concurrent processing for all AI agents:
- Sequential Processing (old): Agent 1 → Wait → Agent 2 → Wait → Agent 3 → Wait → Agent 4
  - Total Time: ~40 seconds (4 agents × 10s each)
- Parallel Processing (MeshMind): Agent 1 + Agent 2 + Agent 3 + Agent 4 (all at once!)
  - Total Time: ~10 seconds (limited only by the slowest agent)
  - 4x faster for 4 agents! ⚡
Key Benefits:
- All agents start processing simultaneously
- Results stream as they complete (fastest agent displays first)
- No waiting for previous agents to finish
- Robust error handling - one failing agent doesn't block others
- Optimized with shared context loading and Promise.allSettled()
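The arithmetic behind the sequential-vs-parallel comparison is simple: sequential wall-clock time is the sum of per-agent latencies, while parallel time is the maximum (the slowest agent). A minimal sketch:

```typescript
// Sequential: agents run one after another, so times add up.
function sequentialSeconds(latencies: number[]): number {
  return latencies.reduce((sum, t) => sum + t, 0);
}

// Parallel: all agents run at once, so total time is the slowest agent.
function parallelSeconds(latencies: number[]): number {
  return Math.max(...latencies);
}

// Four agents at ~10s each:
const latencies = [10, 10, 10, 10];
console.log(sequentialSeconds(latencies)); // 40 — one agent after another
console.log(parallelSeconds(latencies));   // 10 — limited by the slowest agent
```

Note the "4x" figure assumes roughly equal latencies; with uneven agents (say 5s, 10s, 7s, 12s) the speedup is sum/max, here 34/12 ≈ 2.8x.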
- Research: Get diverse perspectives on complex topics in seconds, not minutes
- Code Review: Multiple AI models review your code simultaneously - 4x faster feedback
- Writing: Compare creative outputs from different models instantly
- Decision Making: Evaluate options from various AI viewpoints with parallel analysis
- Learning: See how different models approach the same problem in real-time
- Rapid Prototyping: Get multiple implementation strategies simultaneously
- Encrypted API key storage
- JWT-based authentication
- Argon2 password hashing
- Server-side token verification
- OAuth integration (Google, GitHub)
meshmind/
├── convex/ # Backend logic (Convex)
├── src/
│ ├── components/ # React components
│ ├── routes/ # TanStack Router pages
│ ├── zustand/ # State stores
│ └── lib/ # Utilities & helpers
├── .env.example # Environment template
└── package.json
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
This project is licensed under the MIT License.
Built with ❤️ using:
- TanStack ecosystem
- Convex real-time backend
- OpenRouter & Vercel AI for model access
Made for the TanStack Hackathon
⭐ Star this repo if you find it useful!