🧠 MindStream: Agentic Mind Maps for Meeting Accessibility

Built at GemiHacks 2025

💡 Inspiration

Meetings are often overwhelming, especially for people with ADHD, cognitive disabilities, or those who are hard of hearing. Traditional summaries don't capture structure, flow, or actionable points in a way that's easy to follow.

We built MindStream to make meetings more accessible by automatically generating mind maps from transcripts, giving users a visual, interactive way to understand what happened and what needs to be done.

👥 Who It's For

MindStream is designed for:

  • πŸ” Neurodivergent users who benefit from visual organization of ideas
  • 🧏 People who are hard of hearing, who may rely on transcripts
  • 🧠 Anyone overwhelmed by long meetings or follow-up emails

Our goal: let users see the flow of ideas, understand structure, and retain action items without having to re-read a full transcript.

🧪 How It Works (AI Layer)

MindStream has two modes:

✅ Fast Mode

  • One-shot Gemini LLM call to extract mind map nodes from the full transcript.
  • Good for short transcripts, fast results.
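The one-shot flow above boils down to building a single extraction prompt and parsing the JSON reply. Here is a minimal sketch; the function names, prompt wording, and node schema (`id`, `label`, `parent`) are illustrative assumptions, and a real run would send the prompt via `google.generativeai`'s `GenerativeModel.generate_content`:

```python
import json

def build_fast_mode_prompt(transcript: str) -> str:
    """Build the one-shot extraction prompt sent to Gemini (hypothetical wording)."""
    return (
        "Extract a mind map from this meeting transcript. "
        "Respond with JSON only: a list of nodes, each with "
        '"id", "label", and "parent" (null for the root).\n\n'
        f"Transcript:\n{transcript}"
    )

def parse_nodes(response_text: str) -> list[dict]:
    """Parse the model's JSON reply into mind-map nodes."""
    return json.loads(response_text)

# Example with a canned model reply instead of a live API call:
reply = '[{"id": "1", "label": "Budget review", "parent": null}]'
nodes = parse_nodes(reply)
print(nodes[0]["label"])  # Budget review
```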

🧠 Smart Agentic Mode (MCP-powered)

  • Gemini 1.5 is turned into an agentic system using the Model Context Protocol (MCP)
  • Gemini is given three tools to call during reasoning:
    1. extract_structure – extract nodes from transcript chunks
    2. merge_maps – combine nodes into a full mind map
    3. agent_memory – (optional) recall past sessions and build on them

These tools are accessed via FastAPI endpoints and JSON schemas, giving Gemini the power to reason, plan, and build high-quality maps from long or complex transcripts.
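The JSON schemas mentioned above can be sketched as plain tool declarations in the style Gemini's function calling expects, with a simple dispatcher on the backend side. The exact parameter names here are assumptions, not the project's actual schemas:

```python
# Hypothetical tool declarations; the names mirror the three tools listed above.
TOOLS = [
    {
        "name": "extract_structure",
        "description": "Extract mind-map nodes from one transcript chunk.",
        "parameters": {
            "type": "object",
            "properties": {
                "chunk": {"type": "string", "description": "Transcript chunk text"},
            },
            "required": ["chunk"],
        },
    },
    {
        "name": "merge_maps",
        "description": "Combine node lists into a single mind map.",
        "parameters": {
            "type": "object",
            "properties": {"maps": {"type": "array", "items": {"type": "object"}}},
            "required": ["maps"],
        },
    },
    {
        "name": "agent_memory",
        "description": "Recall nodes from past sessions.",
        "parameters": {
            "type": "object",
            "properties": {"session_id": {"type": "string"}},
            "required": ["session_id"],
        },
    },
]

def dispatch(name: str, args: dict, registry: dict):
    """Route a tool call from the model to the matching backend handler."""
    return registry[name](**args)
```

In the real system each handler sits behind a FastAPI endpoint; the dispatcher here just shows the name-to-handler routing.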

πŸ› οΈ How We Built It

  • Frontend: React + TypeScript + TailwindCSS + ReactFlow
  • Backend: FastAPI + MongoDB + Uvicorn
  • AI Engine:
    • Gemini 1.5 (via google.generativeai)
    • MCP-compatible tool interface
  • Agent: Gemini dynamically selects and calls tools as needed
  • Fallback: Classic LangChain-based agent for simpler queries
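Of the tools in the stack above, merge_maps is the easiest to illustrate: a deduplicating union of node lists from different transcript chunks. This is a sketch under assumed node fields (`id`, `label`, `parent`); the real tool could also reconcile parent links across chunks:

```python
def merge_maps(maps: list[list[dict]]) -> list[dict]:
    """Union node lists from transcript chunks, deduplicating by label."""
    seen: dict[str, dict] = {}
    for nodes in maps:
        for node in nodes:
            key = node["label"].strip().lower()
            seen.setdefault(key, node)  # first occurrence wins
    return list(seen.values())

chunk_a = [{"id": "1", "label": "Budget", "parent": None}]
chunk_b = [{"id": "7", "label": "budget", "parent": None},
           {"id": "8", "label": "Hiring", "parent": "7"}]
merged = merge_maps([chunk_a, chunk_b])
print([n["label"] for n in merged])  # ['Budget', 'Hiring']
```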

πŸ” What It Looks Like

  • Drag-and-drop mind map
  • Visual branching of ideas and decisions
  • Highlights who said what and what needs to happen next

🧠 What We Learned

  • How to integrate Gemini 1.5 with real-time tool-calling (MCP)
  • The power of autonomous agents to handle long, messy inputs
  • How to design interfaces that support cognitive accessibility
  • How memory can help LLMs evolve their outputs across sessions

🚧 Challenges We Faced

  • Handling partial or malformed JSON from LLMs
  • Debugging tool-calling flow between Gemini and our backend
  • Managing long transcripts across multiple reasoning steps
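The first challenge above, partial or malformed JSON, typically needs a tolerant parser that strips code fences and trims trailing prose before giving up. A minimal sketch of that fallback (the function name and heuristics are ours, not the project's exact code):

```python
import json
import re

def extract_json(raw: str):
    """Recover JSON from an LLM reply that may wrap it in code fences or prose."""
    # Prefer the contents of a markdown code fence if one is present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        raw = fenced.group(1)
    # Otherwise start at the first bracket or brace.
    start = min((i for i in (raw.find("["), raw.find("{")) if i != -1), default=-1)
    if start == -1:
        raise ValueError("no JSON found")
    # Shrink the slice from the right until it parses, dropping trailing prose.
    for end in range(len(raw), start, -1):
        try:
            return json.loads(raw[start:end])
        except json.JSONDecodeError:
            continue
    raise ValueError("unparseable JSON")

messy = 'Here you go:\n```json\n[{"label": "Action items"}]\n```'
print(extract_json(messy))  # [{'label': 'Action items'}]
```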

💭 What's Next

  • Voice → transcript → mind map pipeline (fully automatic)
  • Save mind maps per user and let Gemini evolve them over time
  • Export to PDF, Notion, or calendar reminders
  • Real-time caption + mapping for live meetings

🏁 Final Thoughts

MindStream is more than a summary tool: it's a visual accessibility layer for meetings. For people who struggle with attention, memory, or auditory processing, this project turns transcripts into structured, actionable visual maps powered by cutting-edge AI.

✨ We don't just summarize; we make meetings accessible.
