Inspiration

The inspiration for Synapse came from a simple realization: we spend countless hours consuming digital content that fascinates us, yet we retain almost none of it. Every scroll brings us flashes of knowledge, ideas, and creativity that fade moments later. Instead of fighting this habit, we wanted to embrace it. The goal was to create an AI that learns alongside you, studying the same content you see and using it to teach you in meaningful, lasting ways. Synapse was born from the belief that the very information streams that distract us can also become our greatest sources of learning, if understood the right way.

What it does

Synapse is a personal learning companion designed to transform your digital interactions into active knowledge. When you encounter something online that sparks curiosity (a concept you don't fully understand, a claim that makes you think), you simply capture it with a screenshot. Synapse interprets what you're viewing, identifies the underlying question or topic, and provides a personalized explanation or lesson in response. It connects related ideas, builds on what you've learned before, and helps you form a lasting understanding of the concepts you encounter daily. Over time, Synapse evolves into an intelligent map of your curiosity, reflecting your interests and helping you grow through the very content you consume.

How we built it

We built Synapse using a combination of modern web frameworks, AI services, and graph-based data systems. The backend was developed with FastAPI, serving as the bridge between the user interface and the AI reasoning engine. We integrated a WebSocket layer for real-time communication and debugging during live sessions. For text extraction, we implemented PyTesseract for optical character recognition, enabling the system to interpret text directly from user screenshots. The AI core leverages Gemini, which analyzes both visual and textual data to infer user intent and generate tailored explanations.

To organize and relate information, we designed a hybrid storage system combining PostgreSQL with pgvector, Neo4j, and GraphRAG, allowing Synapse to connect concepts, detect semantic relationships, and build a knowledge graph unique to each user. On the frontend, we used React and Next.js to create a clean, responsive interface and Framer Motion to add smooth, meaningful animations that make the experience feel alive. Together, these components form a seamless system capable of understanding context, teaching dynamically, and learning continuously from the user's digital world.
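The storage layer's key decision is whether two captured concepts are close enough semantically to be linked in the knowledge graph. A minimal sketch of that check, assuming embeddings arrive as plain Python lists of floats and using a hypothetical similarity threshold of 0.8 (the names `should_link` and `threshold` are illustrative, not from our codebase):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def should_link(vec_a: list[float], vec_b: list[float], threshold: float = 0.8) -> bool:
    """Decide whether two concept nodes belong in the same neighborhood
    of the knowledge graph, based on embedding closeness."""
    return cosine_similarity(vec_a, vec_b) >= threshold
```

In production this comparison is what pgvector's distance operators perform inside PostgreSQL; the sketch above just makes the linking rule explicit.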

Challenges we ran into

One of our main challenges involved handling redundant content from still frames during video capture. Because many short-form videos include long segments with minimal motion, our system initially processed and stored duplicate frames repeatedly, creating unnecessary operational overhead and database bloat. To address this, we analyzed frame data within the database and introduced a frame consolidation step that compares pixel differences across sequential frames. If two frames showed little to no change, only one was retained. This optimization dramatically reduced redundancy while preserving accuracy. We also faced challenges synchronizing real-time screenshot processing with backend inference while maintaining low latency. Managing OCR accuracy in visually complex screenshots, with overlapping captions, app overlays, and mixed languages, required multiple iterations. Another challenge was balancing personalization with privacy, ensuring the AI could learn contextually without storing sensitive data. Finally, integrating vector storage with a graph database to represent conceptual relationships proved technically demanding but ultimately rewarding, as it allowed Synapse to reason across ideas rather than isolate them.
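The frame consolidation step described above can be sketched as follows: walk the sequence of frames and keep a frame only when it differs meaningfully from the last frame we kept. This is a simplified stand-in for our pipeline, assuming frames are flat lists of grayscale pixel values in 0-255; the names `consolidate_frames` and `threshold` are illustrative:

```python
def consolidate_frames(frames: list[list[int]], threshold: float = 0.02) -> list[list[int]]:
    """Drop near-duplicate frames from a video capture.

    frames: equal-length sequences of grayscale pixel values (0-255).
    threshold: fraction of the maximum possible per-pixel difference above
    which a frame counts as 'changed' and is retained.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        prev = kept[-1]
        # Mean absolute pixel difference, normalized to [0, 1].
        diff = sum(abs(a - b) for a, b in zip(prev, frame)) / (255 * len(frame))
        if diff > threshold:
            kept.append(frame)
    return kept
```

A still segment (frames nearly identical to the previous one) collapses to a single stored frame, which is what cut our database bloat.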

Accomplishments that we're proud of

We’re proud of creating an AI system that not only processes user content but genuinely learns from it. Synapse bridges the gap between passive consumption and active understanding, giving users a way to make their everyday scrolling habits intellectually rewarding. We successfully built a unified pipeline connecting screenshot data, AI interpretation, and a growing knowledge graph. The final prototype feels human, capable of understanding intent, remembering context, and adapting its teaching style to the user’s natural digital behavior.

What we learned

Developing Synapse taught us the importance of aligning AI design with human curiosity. We learned that true personalization comes not from asking users what they want to learn, but from seeing what they already engage with. On the technical side, we deepened our understanding of multimodal AI systems, real-time communication layers, and graph-based reasoning architectures. We also realized that empathy in user experience is as important as technical accuracy: learning should feel intuitive, not imposed.

What's next for Synapse

Our next steps include expanding Synapse’s communication abilities to include natural voice explanations, enabling the AI to teach through conversation as well as text. We also plan to incorporate visual generation, allowing the system to create short, context-rich video summaries that illustrate what the user is learning. In the long term, we aim to make Synapse fully privacy-preserving, with local processing and on-device learning capabilities. Ultimately, our goal is to create an AI that grows with each user, learning not just for them, but with them.

Built With

fastapi, framer-motion, gemini, graphrag, neo4j, next.js, pgvector, postgresql, pytesseract, react, websockets
