MindStream: Agentic Mind Maps for Meeting Accessibility
Built at GemiHacks 2025
Inspiration
Meetings are often overwhelming, especially for people with ADHD, cognitive disabilities, or those who are hard of hearing. Traditional summaries don't capture structure, flow, or actionable points in a way that's easy to follow.
We built MindStream to make meetings more accessible by automatically generating mind maps from transcripts, giving users a visual, interactive way to understand what happened and what needs to be done.
Who It's For
MindStream is designed for:
- Neurodivergent users who benefit from visual organization of ideas
- People who are hard of hearing, who may rely on transcripts
- Anyone overwhelmed by long meetings or follow-up emails
Our goal: let users see the flow of ideas, understand structure, and retain action items without having to re-read a full transcript.
How It Works (AI Layer)
MindStream has two modes:
Fast Mode
- One-shot Gemini LLM call to extract mind map nodes from the full transcript.
- Good for short transcripts, fast results.
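In Fast Mode the whole transcript goes to Gemini in a single call. A minimal sketch of that flow, where the prompt wording and node schema are invented for illustration and the model is injected as a plain callable so the example runs without API credentials:

```python
import json

# Illustrative sketch of the one-shot "Fast Mode" extraction. The prompt
# wording and node schema below are assumptions, not MindStream's actual ones.
PROMPT_TEMPLATE = (
    "Extract a mind map from this meeting transcript. "
    'Return JSON: {{"nodes": [{{"id": str, "label": str, "parent": str | null}}]}}\n\n'
    "Transcript:\n{transcript}"
)

def extract_mind_map(transcript: str, generate) -> list[dict]:
    """One-shot extraction: `generate` is any text-in/text-out LLM callable,
    e.g. a thin wrapper around a google.generativeai model."""
    raw = generate(PROMPT_TEMPLATE.format(transcript=transcript))
    return json.loads(raw)["nodes"]

# Usage with a stubbed model (a real call would pass the Gemini client instead):
fake_model = lambda prompt: '{"nodes": [{"id": "1", "label": "Budget", "parent": null}]}'
nodes = extract_mind_map("Alice: let's discuss the budget.", fake_model)
```

Injecting the model as a callable also makes the extraction step easy to unit-test without burning API quota.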
Smart Agentic Mode (MCP-powered)
- Gemini 1.5 is turned into an agentic system using the Model Context Protocol (MCP)
- Gemini is given three tools to call during reasoning:
- `extract_structure`: extract nodes from transcript chunks
- `merge_maps`: combine nodes into a full mind map
- `agent_memory`: (optional) recall past sessions and build on them
These tools are accessed via FastAPI endpoints and JSON schemas, giving Gemini the power to reason, plan, and build high-quality maps from long or complex transcripts.
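The three tools above can be declared to the model as JSON-schema function descriptions. A hypothetical sketch of those declarations (the parameter names here are assumptions for illustration, not the project's actual schemas):

```python
# Hypothetical JSON-schema declarations for the three tools, in the
# function-declaration shape that tool-calling LLM APIs accept.
TOOL_DECLARATIONS = [
    {
        "name": "extract_structure",
        "description": "Extract mind map nodes from a transcript chunk.",
        "parameters": {
            "type": "object",
            "properties": {"chunk": {"type": "string"}},
            "required": ["chunk"],
        },
    },
    {
        "name": "merge_maps",
        "description": "Combine node lists into a single mind map.",
        "parameters": {
            "type": "object",
            "properties": {"maps": {"type": "array", "items": {"type": "object"}}},
            "required": ["maps"],
        },
    },
    {
        "name": "agent_memory",
        "description": "Recall mind maps from past sessions.",
        "parameters": {
            "type": "object",
            "properties": {"session_id": {"type": "string"}},
            "required": ["session_id"],
        },
    },
]
```

Each declaration maps one-to-one onto a backend endpoint, so validating the model's arguments against the same schema on the server side catches bad calls early.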
How We Built It
- Frontend: React + TypeScript + TailwindCSS + ReactFlow
- Backend: FastAPI + MongoDB + Uvicorn
- AI Engine:
  - Gemini 1.5 (via `google.generativeai`)
  - MCP-compatible tool interface
- Agent: Gemini dynamically selects and calls tools as needed
- Fallback: Classic LangChain-based agent for simpler queries
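The "dynamically selects and calls tools" loop can be sketched in a few lines. This is a minimal stand-in, not the production agent: the model is stubbed with a scripted function so the control flow is runnable, and the tool and decision shapes are assumptions.

```python
# Minimal sketch of a tool-selection loop. model_step(query, history) returns
# either ("call", tool_name, args) or ("final", answer); in the real system
# Gemini makes that decision, here a scripted stub stands in.
def run_agent(model_step, tools, query, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = model_step(query, history)
        if decision[0] == "final":
            return decision[1]
        _, name, args = decision
        result = tools[name](**args)          # dispatch to the chosen tool
        history.append((name, args, result))  # feed the result back to the model
    raise RuntimeError("agent exceeded max steps")

# Scripted stand-in: call extract_structure once, then return its result.
def scripted_model(query, history):
    if not history:
        return ("call", "extract_structure", {"chunk": query})
    return ("final", history[-1][2])

tools = {"extract_structure": lambda chunk: [{"label": chunk.split(":")[0]}]}
answer = run_agent(scripted_model, tools, "Budget: cut costs")
```

Capping the loop with `max_steps` is one simple guard against an agent that never decides it is finished.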
What It Looks Like
- Drag-and-drop mind map
- Visual branching of ideas and decisions
- Highlights who said what and what needs to happen next
What We Learned
- How to integrate Gemini 1.5 with real-time tool-calling (MCP)
- The power of autonomous agents to handle long, messy inputs
- How to design interfaces that support cognitive accessibility
- How memory can help LLMs evolve their outputs across sessions
Challenges We Faced
- Handling partial or malformed JSON from LLMs
- Debugging tool-calling flow between Gemini and our backend
- Managing long transcripts across multiple reasoning steps
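For the malformed-JSON challenge, one common salvage tactic is to strip markdown code fences and parse only the outermost `{...}` span. A sketch of that idea (illustrative, not the project's exact handler):

```python
import json

def salvage_json(raw: str):
    """Best-effort parse of LLM output that may be wrapped in prose or fences."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop ```json ... ``` fencing around the payload.
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[4:]
    # Parse only the outermost {...} span, ignoring surrounding chatter.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start:end + 1])

messy = 'Sure! Here is the map:\n```json\n{"nodes": []}\n```'
data = salvage_json(messy)
```

This handles the two most frequent failure shapes (chatty preambles and fenced output); truly truncated JSON still needs a retry against the model.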
What's Next
- Voice → transcript → mind map pipeline (fully automatic)
- Save mind maps per user and let Gemini evolve them over time
- Export to PDF, Notion, or calendar reminders
- Real-time caption + mapping for live meetings
Final Thoughts
MindStream is more than a summary tool; it's a visual accessibility layer for meetings. For people who struggle with attention, memory, or auditory processing, this project turns transcripts into structured, actionable visual maps powered by cutting-edge AI.
We don't just summarize; we make meetings accessible.
Built With
- amazon-web-services
- fastapi
- javascript
- langchain
- mcp
- mongodb
- python
- react

