Inspiration
Traditional AI research tools force everything into a single, linear chat. As conversations grow, ideas get mixed, context is lost, and users are forced to constantly scroll back and forth and mentally juggle multiple topics at once. This is especially challenging for students and academics with disabilities, including neurodivergent users, users with memory challenges, and those relying on assistive technologies.
We were inspired to rethink how people interact with AI while researching. Instead of asking users to adapt to linear chats, we wanted the interface to adapt to how people actually think: branching, exploring, and diving deeper into specific ideas without losing focus. That insight led to Neuralearn.
What it does
Neuralearn is an AI-powered, accessibility-first research workspace that organizes learning into a branching knowledge graph.
Instead of one long conversation, users create parallel, focused chat threads represented as nodes in a tree. Each node is its own research context, allowing users to explore subtopics without polluting the main thread or constantly switching between ideas.
Key capabilities include:
- A visual, interactive knowledge graph for research
- Context-aware AI chats within each node
- Voice input and voice output for hands-free use
- Smart routing of questions to existing nodes or new branches
- Automatic summaries that evolve as users interact
- Semantic search to prevent duplicate or redundant topics
The result is a learning companion that listens, organizes, and grows with the user.
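The smart-routing capability above can be sketched as a similarity check between a new question and the summaries of existing nodes. This is an illustrative sketch with hypothetical names and an assumed threshold, not Neuralearn's production code (which uses MongoDB Atlas Vector Search rather than in-memory comparison):

```typescript
// Sketch of smart routing: decide whether a question belongs to an
// existing node or should open a new branch. The threshold and field
// names are illustrative assumptions.

interface GraphNode {
  id: string;
  title: string;
  embedding: number[]; // embedding of the node's running summary
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Route to the best-matching node if it clears the threshold;
// otherwise signal that a new branch should be created.
function routeQuestion(
  questionEmbedding: number[],
  nodes: GraphNode[],
  threshold = 0.8
): { target: GraphNode | null; createNew: boolean } {
  let best: GraphNode | null = null;
  let bestScore = -1;
  for (const node of nodes) {
    const score = cosineSimilarity(questionEmbedding, node.embedding);
    if (score > bestScore) {
      bestScore = score;
      best = node;
    }
  }
  return bestScore >= threshold
    ? { target: best, createNew: false }
    : { target: null, createNew: true };
}
```

The same score also backs the duplicate-topic prevention: a very high similarity means the question already has a home in the graph.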
How we built it
We built Neuralearn as a full-stack web application with accessibility as a core architectural principle.
- Frontend: Next.js, React Flow, TypeScript, TailwindCSS
- Renders the interactive knowledge graph
- Supports keyboard-only navigation and high-contrast modes
- Includes a global mic shortcut for voice interaction
- Backend: Node.js with MongoDB Atlas
- Stores users, sessions, and the knowledge graph structure
- Uses MongoDB Atlas Vector Search with Google embeddings for semantic similarity
- Deployed on an Ubuntu server on Vultr
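The semantic-similarity lookup can be sketched as an Atlas Vector Search aggregation pipeline. The index name ("node_embeddings") and field name ("embedding") are placeholders, not necessarily the names used in our deployment:

```typescript
// Sketch of the semantic-similarity query: builds an aggregation
// pipeline using Atlas Vector Search's $vectorSearch stage to find
// nodes closest to a new topic's embedding. Index/field names are
// placeholder assumptions.

function buildSimilarNodePipeline(
  queryVector: number[],
  limit = 3
): object[] {
  return [
    {
      $vectorSearch: {
        index: "node_embeddings", // placeholder Atlas Search index name
        path: "embedding",        // field holding the Google embedding
        queryVector,
        numCandidates: limit * 20, // oversample candidates for ANN recall
        limit,
      },
    },
    // Project only what the router needs, plus the similarity score.
    {
      $project: {
        title: 1,
        summary: 1,
        score: { $meta: "vectorSearchScore" },
      },
    },
  ];
}
```

In production this pipeline would be passed to `collection.aggregate(...)` with the official MongoDB Node.js driver.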
- AI & Voice:
- Google Gemini 2.0 Flash for context-aware responses and summaries
- ElevenLabs for high-quality text-to-speech
- Web Speech API for voice input
Each node acts as an independent chat with scoped context, keeping conversations efficient and clear while reducing cognitive load.
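The scoped-context design can be sketched as assembling each node's prompt from summarized ancestor context plus only that node's own messages. The data shapes and field names here are illustrative assumptions, not our exact schema:

```typescript
// Sketch of scoped context: a node inherits its ancestors' evolving
// summaries, but only its own messages appear verbatim. This keeps
// each branch focused while preserving useful parent context.

interface ChatNode {
  id: string;
  parentId: string | null;
  summary: string; // evolving auto-summary of this node's thread
  messages: { role: string; text: string }[];
}

function buildScopedContext(
  nodeId: string,
  nodes: Map<string, ChatNode>
): string {
  const node = nodes.get(nodeId);
  if (!node) throw new Error(`unknown node: ${nodeId}`);

  // Walk up to the root, collecting ancestor summaries (root first).
  const ancestorSummaries: string[] = [];
  let parentId = node.parentId;
  while (parentId !== null) {
    const parent = nodes.get(parentId);
    if (!parent) break;
    ancestorSummaries.unshift(parent.summary);
    parentId = parent.parentId;
  }

  // Inherited context is summarized; local messages are verbatim.
  const inherited = ancestorSummaries
    .map((s, i) => `Ancestor ${i + 1} summary: ${s}`)
    .join("\n");
  const local = node.messages
    .map((m) => `${m.role}: ${m.text}`)
    .join("\n");
  return [inherited, local].filter(Boolean).join("\n\n");
}
```

Because only summaries travel down the tree, a deep branch never drags every ancestor message into its prompt, which is what keeps token usage and cognitive load bounded.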
Challenges we ran into
- Context isolation: Designing chat branches that remain focused without losing useful parent context required careful summarization and inheritance rules.
- Production stability: Debugging serverless deployment issues (notably database connectivity) taught us a lot about production-ready infrastructure.
- Accessibility tradeoffs: Supporting keyboard, voice, and visual accessibility simultaneously forced us to simplify UI decisions and prioritize clarity over complexity.
- Scope control: With many possible features, we had to stay disciplined and focus on what truly improved accessibility and usability.
Accomplishments that we're proud of
- Building a fully working parallel chat + knowledge graph system
- Delivering meaningful accessibility features, not just surface-level add-ons
- Successfully integrating voice input and output into a complex research workflow
- Deploying a stable, live demo under real-world constraints
- Creating a product that feels genuinely different from standard AI chat tools
What we learned
- Accessibility is strongest when it’s structural, not bolted on
- Non-linear interfaces can dramatically reduce cognitive load
- Voice interaction is powerful when scoped and intentional
- Production deployment issues surface assumptions you don’t notice locally
- Clear constraints often lead to better product decisions
What's next for Neuralearn.tech
Next, we want to:
- Improve branch summarization and cross-branch synthesis
- Add collaborative research and shared graphs
- Support more document types and citation tools
- Deepen accessibility features based on user feedback
- Explore long-term use in academic and professional research settings
Neuralearn started as a hackathon project, but it has the potential to become a new way people think, learn, and research with AI.
Built With
- css
- elevenlabs
- gemini
- google-cloud
- google-generative-ai
- google-search-api
- google-text-embeddings
- html
- javascript
- mongodb
- mongodb-atlas
- mongodb-atlas-vector-search
- next.js
- node.js
- react
- tailwindcss
- typescript
- vercel
- vultr
- web-speech-api