Inspiration
Most learning tools are either a chatbot or a static course. You don't see how concepts connect, what you've actually learned, or what to do next. AI stays hidden behind a single "ask me anything" box — powerful but opaque. We wanted something different: AI that demystifies itself by putting your knowledge in a visible, structured map and turning "what should I learn?" into a clear path.
Pondr turns your learning into a living knowledge graph, where every concept is a node and every prerequisite a visible edge, and uses ML to predict what you're about to forget before you even realize it's slipping. That's what sets us apart from generic LLMs: we don't just answer questions — we build and update a personal knowledge graph, research-backed roadmaps, and step-by-step plans so you always know where you are and what comes next.
What It Does
Pondr is an AI-powered adaptive learning platform that turns your goals into a living knowledge graph and guides you concept by concept.
You sign up, share your background and what you've already learned, then search for a topic (e.g. "machine learning" or "React"). Pondr creates a Hub for that topic and generates a personalized graph — nodes are concepts (easy / intermediate / hard for you), edges are prerequisites. The graph is informed by real learning roadmaps pulled in live via Tavily and synthesized by Gemini, so concepts and their order reflect how people actually learn, not generic placeholders.
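To make the nodes-and-edges idea concrete, here is a minimal sketch of what a generated hub graph might look like. The field names and concept IDs are illustrative assumptions, not Pondr's actual schema:

```python
# Hypothetical shape of a generated hub graph (illustrative only;
# the real Pondr schema isn't shown in this writeup).
hub_graph = {
    "topic": "machine learning",
    "nodes": [
        {"id": "linear-regression", "label": "Linear Regression", "difficulty": "easy"},
        {"id": "gradient-descent", "label": "Gradient Descent", "difficulty": "intermediate"},
        {"id": "backpropagation", "label": "Backpropagation", "difficulty": "hard"},
    ],
    # Each edge means "source is a prerequisite of target".
    "edges": [
        {"source": "linear-regression", "target": "gradient-descent"},
        {"source": "gradient-descent", "target": "backpropagation"},
    ],
}

# Quick sanity check: every edge endpoint must be a known node.
node_ids = {n["id"] for n in hub_graph["nodes"]}
assert all(e["source"] in node_ids and e["target"] in node_ids
           for e in hub_graph["edges"])
```

The difficulty label is per-user ("hard for you"), which is why the same topic yields different graphs for different learners.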
From the Canvas you open any concept to get:
- An AI explanation tailored to your background and goal
- Key points, real-world examples, and next steps
- Listen buttons that read the explanation aloud (ElevenLabs TTS)
- A precise YouTube snippet — we search videos, fetch transcripts, and use AI to pick a short segment that explains that concept, so you watch minutes instead of full videos
- Chat to ask follow-up questions in context
You can quiz yourself on one concept or across your entire graph. We generate NotebookLM-style adaptive quizzes (multiple choice, true/false, fill-in, ordering, and more), weight questions toward weaker concepts, and use AI to grade open-ended answers. Passing updates your progress so the graph reflects what you've actually mastered.
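The "weight questions toward weaker concepts" step could be sketched roughly as follows. This is a toy version under assumed inputs (a hypothetical 0-to-1 mastery score per concept), not Pondr's actual quiz engine:

```python
import random

def question_weights(mastery: dict[str, float]) -> dict[str, float]:
    """Weight concepts inversely to mastery (0.0 = unknown, 1.0 = mastered).

    A small floor keeps mastered concepts from vanishing entirely,
    so quizzes still refresh strong material occasionally.
    """
    raw = {c: max(1.0 - m, 0.05) for c, m in mastery.items()}
    total = sum(raw.values())
    return {c: w / total for c, w in raw.items()}

def sample_concepts(mastery: dict[str, float], k: int, seed: int = 0) -> list[str]:
    """Draw k quiz targets, biased toward weaker concepts."""
    weights = question_weights(mastery)
    rng = random.Random(seed)
    return rng.choices(list(weights), weights=list(weights.values()), k=k)
```

With mastery `{"recursion": 0.2, "loops": 0.9}`, recursion gets a much larger share of questions, while loops still appears occasionally thanks to the floor.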
The Planner ties it all together: connect Google Calendar, choose how many hours per week you want to study, and Pondr generates a schedule that fits your free slots and assigns concepts with activity types (review, video, quiz, and more).
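The slot-filling logic can be illustrated with a greedy sketch. Everything here is assumed for illustration (fixed one-hour blocks, free slots given as lengths in hours, a simple activity rotation); the real Planner reads free/busy data from Google Calendar:

```python
from dataclasses import dataclass

@dataclass
class StudyBlock:
    concept: str
    activity: str   # "review" | "video" | "quiz"
    hours: float

def plan_week(free_slots: list[float], concepts: list[str],
              weekly_hours: float, block_hours: float = 1.0) -> list[StudyBlock]:
    """Greedily fill free calendar slots (given as slot lengths in hours)
    with fixed-size study blocks until the weekly budget is spent.
    Activity types rotate so each concept gets varied practice."""
    activities = ["review", "video", "quiz"]
    plan, budget, i = [], weekly_hours, 0
    for slot in free_slots:
        while slot >= block_hours and budget >= block_hours and concepts:
            concept = concepts[i % len(concepts)]
            plan.append(StudyBlock(concept, activities[i % len(activities)], block_hours))
            slot -= block_hours
            budget -= block_hours
            i += 1
    return plan
```

For example, two free slots of 2.0 and 1.5 hours with a 3-hour weekly budget yield three one-hour blocks, alternating concepts and activity types.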
Voice is built in: speak your topic search on the Hubs page (ElevenLabs STT) and listen to explanations on the concept page, so learning isn't tied to typing and reading.
Throughout, the graph is the product. You don't just get answers — you see your knowledge, your gaps, and your path. Under the hood, an XGBoost regressor personalizes the Ebbinghaus forgetting curve R(t) = e^(−t/S) by learning a unique stability value S per user-concept pair. Trained on a synthetic dataset of 1,000 simulated learners across 5 domains with realistic archetypes (consistent reviewers, binge-then-forget, and gradual decliners), the model outputs a retention score, days until the intervention threshold (0.70) is breached, and a recommended review date.
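The curve math behind those outputs is straightforward once the stability S is predicted. A minimal sketch, taking S as given (in Pondr, S comes from the XGBoost regressor):

```python
import math

INTERVENTION_THRESHOLD = 0.70  # from the writeup

def retention(t_days: float, stability: float) -> float:
    """Ebbinghaus forgetting curve R(t) = e^(-t/S).
    `stability` S is what the XGBoost regressor personalizes per
    user-concept pair; here we just take it as given."""
    return math.exp(-t_days / stability)

def days_until_threshold(stability: float,
                         threshold: float = INTERVENTION_THRESHOLD) -> float:
    """Solve e^(-t/S) = threshold for t: t = -S * ln(threshold)."""
    return -stability * math.log(threshold)
```

With S = 10, retention dips below 0.70 after roughly 3.6 days, which is the kind of number the canvas surfaces as a recommended review date.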
How We Built It
Stack
Pondr is a full-stack app: a React frontend (Vite, React Flow for the graph, Zustand for state, Tailwind and Framer Motion for UI) and a FastAPI backend with async routes and MongoDB via Motor and Beanie. Gemini drives almost all of the intelligence: graph generation (with real learning roadmaps from Tavily), concept explanations, Feynman and Socratic flows, adaptive quizzes, open-answer grading, chat, and study-schedule generation. Around it, Tavily and Firecrawl supply web context, the YouTube Data API and transcripts feed a snippet pipeline that uses Gemini to pick the best clip per concept, and ElevenLabs handles voice in (search) and out (listening to explanations). Google OAuth handles login and optional Calendar integration so we can read free slots and write study events.
- Tavily — graph generation uses real learning roadmaps pulled live from the web
- YouTube Data API v3 + youtube-transcript-api — the precision snippet pipeline
- ElevenLabs STT + TTS — voice search and audio playback
- Google OAuth + Google Calendar API — login, free-busy, and event creation
- XGBoost + scikit-learn — decay model training and inference
- NumPy + Pandas — synthetic dataset generation and feature engineering
- NetworkX — knowledge graph construction and traversal
- MongoDB — users, hubs, nodes, edges, events
- ngrok — exposes the local backend publicly while adding edge-level protection against common web exploits and rate limiting
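The graph traversal mentioned above boils down to prerequisite ordering. A self-contained sketch using the stdlib `graphlib` as a stand-in (NetworkX's `topological_sort` on a `DiGraph` gives the same guarantee); the edges are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite map: each concept maps to the set of
# concepts that must come before it.
prereqs = {
    "backpropagation": {"gradient-descent"},
    "gradient-descent": {"linear-regression"},
    "linear-regression": set(),
}

# A valid study order visits every prerequisite before its dependents.
study_order = list(TopologicalSorter(prereqs).static_order())
```

This ordering is what lets the canvas say "learn this before that" instead of presenting concepts as a flat list.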
Design: Figma to Antigravity
We designed in Figma first. Our design philosophy was intentional: simplicity and futurism — clean, minimal layouts with a dark-mode-first palette and subtle glows that make the knowledge graph feel alive rather than clinical.
We used Antigravity as our primary IDE and it fundamentally changed how fast we moved. Using Antigravity's Figma integration, we imported component specs directly from Figma into Antigravity, which scaffolded the React component structure and Tailwind styling straight from our design tokens. What would have been hours of translation from design to code became a tight loop: design in Figma, sync to Antigravity, get a working scaffold, refine, repeat.
Challenges We Ran Into
Making the graph feel real, not generic. Early graphs were too template-like. We fixed it by feeding Tavily search results into graph generation and using prior history and past hubs so each user gets a different difficulty layout and concept set.
YouTube snippets that actually fit the concept. Searching by concept name often returned long tutorials. We built a pipeline with multiple fallback queries, transcript fetches for top videos, and Gemini selecting a short segment with a clear reason — plus retries and loading states on the concept page.
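The segment-picking step can be illustrated with a toy keyword-scoring stand-in. In Pondr the selection is done by Gemini with a stated reason; this sketch only shows the shape of the problem, operating on transcript entries like those `youtube-transcript-api` returns:

```python
def pick_segment(transcript: list[dict], keywords: set[str],
                 window: int = 5) -> tuple[float, float]:
    """Toy stand-in for the Gemini selection step: slide a window of
    transcript entries (each {"text", "start", "duration"}, the shape
    youtube-transcript-api returns) and pick the window whose text
    overlaps the concept keywords most. Returns (start, end) seconds."""
    best, best_score = (0.0, 0.0), -1
    for i in range(len(transcript) - window + 1):
        chunk = transcript[i:i + window]
        words = set(" ".join(e["text"].lower() for e in chunk).split())
        score = len(words & keywords)
        if score > best_score:
            last = chunk[-1]
            best, best_score = (chunk[0]["start"], last["start"] + last["duration"]), score
    return best
```

The real pipeline replaces the keyword score with an LLM judgment, which is what makes the clip choice precise rather than merely lexical.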
Keeping the UI responsive. Graph fetch, explanation, snippet, and TTS all hit different APIs. We structured the concept page to load the node and explanation first, then the snippet in parallel with retries, and TTS on demand so the page stays usable under latency and failures.
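That loading order can be sketched with asyncio. The function names here are illustrative stand-ins for the real fetchers, and the retry policy is an assumption:

```python
import asyncio

async def with_retries(coro_factory, attempts: int = 3, delay: float = 0.0):
    """Retry a flaky async call; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(delay)

async def load_concept_page(fetch_node, fetch_explanation, fetch_snippet):
    """Sketch of the loading order described above (names are
    illustrative): node + explanation render first, while the snippet
    loads concurrently with retries and fills in when ready."""
    node, explanation = await asyncio.gather(fetch_node(), fetch_explanation())
    snippet_task = asyncio.create_task(with_retries(fetch_snippet))
    # ... the page is already interactive here; await the snippet last.
    snippet = await snippet_task
    return {"node": node, "explanation": explanation, "snippet": snippet}
```

TTS stays out of this path entirely and only fires when the user presses Listen, so a slow or failed ElevenLabs call never blocks the page.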
Accomplishments We're Proud Of
We are proud of building a genuine ML pipeline, not just an API call to Gemini. Our XGBoost model takes engineered behavioral features and produces personalized retention predictions you can actually measure. When the canvas flags a concept as decaying in two days, that number comes from a trained model personalizing the Ebbinghaus curve to you specifically, not a language model making its best guess.
The knowledge graph canvas is something we're especially proud of. Seeing your entire understanding of a subject laid out in front of you, watching nodes shift as you learn and fade as you forget — that feels like a genuinely new way to experience learning. It's the kind of thing that makes people say "I wish I had this when I was cramming the night before."
We are also proud of building the first tool we know of that ties a living knowledge map, ML-predicted cognitive decay, and precision video learning into one coherent loop. These have always lived separately: flashcard apps for retention, YouTube for content, chatbots for questions. Pondr brings them onto one surface where your graph, your gaps, and your next step are always in front of you.
What We Learned
Design-to-code tooling is a force multiplier. The Figma to Antigravity pipeline compressed our design-implementation gap to almost nothing. We spent time on product decisions, not translation work.
Combining LLMs with real external data is where the lift comes from. Tavily + Gemini produces dramatically better graphs than Gemini alone. YouTube transcripts + Gemini produces dramatically more useful snippets than search results alone.
Structure is what makes AI legible. Once the graph and prerequisites were first-class citizens, users could reason about their own learning instead of trusting a black box. Demystifying AI works when you make its outputs visible and structured.
What's Next for Pondr
- Browser extension (half-built): capture "what I'm learning" from any tab and feed it into the graph, creating nodes or refreshing mastery states from real browsing
- Decay and retention v2: extend the decay-prediction layer into research-backed spaced-repetition scheduling inside the Planner
- Richer calendar and habits: recurring study blocks, streak goals, and nudges when the plan is falling behind
- Mobile / PWA: a focused mobile experience so you can review, listen, and log progress on the go
- Collaboration: shared read-only "learning paths" so educators or peers can share a Pondr graph as a curriculum


