Inspiration
How many of you would see this equation: $$\int_0^\infty \left(\sum_{k=1}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}\right)e^{-\alpha x}\, dx$$ and immediately reach for ChatGPT? From algebra to multivariable calculus, many students find themselves relying on AI tools to finish assignments faster. This widens the gap between students who use AI to deepen understanding and those who use it just to survive. Inspired by the way real learning happens in office hours, through messy thinking, dialogue, and visualization, we built a multimodal AI Teaching Assistant focused on intuition-building.
What it does
Students can speak their reasoning, sketch ideas, or type traditionally, and the AI interprets partial understanding, corrects misconceptions, and responds with dynamic visual explanations, including automatically generated animated math videos. Learners can also generate and interact with both 2D and 3D graphs using natural language, exploring mathematical concepts dynamically and intuitively in real time. Finally, users can upload course-specific materials so explanations match their class's notation and philosophy, creating private, personalized office hours anytime.
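As a rough illustration of the natural-language graphing flow, the sketch below shows one way a parsed request like "a sine wave with amplitude 3" could be turned into the `{id, latex}` payload shape that the Desmos JavaScript API's `setExpression` call consumes. The function name and parameters here are hypothetical, not our actual LoveLace code:

```python
# Hypothetical sketch: map parsed natural-language parameters to a
# Desmos-style expression payload ({"id": ..., "latex": ...}), the shape
# consumed by setExpression() in the Desmos API on the frontend.

def sine_payload(graph_id: str, amplitude: float = 1.0,
                 frequency: float = 1.0, phase: float = 0.0) -> dict:
    """Build a LaTeX expression for y = A sin(Bx + C)."""
    latex = f"y={amplitude}\\sin({frequency}x+{phase})"
    return {"id": graph_id, "latex": latex}

payload = sine_payload("graph1", amplitude=3.0, frequency=2.0)
print(payload["latex"])  # y=3.0\sin(2.0x+0.0)
```

In the real system, the hard part is the layer before this function: getting the agent to map organic, sometimes imprecise phrasing onto well-formed parameters reliably.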
How we built it
Using OpenAI’s real-time GPT-4o transcription and text-to-speech tools, we enabled natural spoken interaction, while rapidly iterating on the full-stack infrastructure despite having no prior full-stack experience. Sponsor tools and AI agents accelerated development dramatically, allowing us to integrate retrieval-based personalization and dynamic animation generation into a cohesive learning experience.
Challenges we ran into
AI allowed us to scale and prototype incredibly fast, but that speed meant learning new tools, architectures, and constraints just as quickly, often redesigning components in real time. One of our biggest obstacles was grounding the math animations in step-by-step tutorial reasoning. We navigated token limits, engineered prompts extensively to regulate the AI's level of autonomy, and balanced steering animations and graphs toward well-behaved outputs against preserving their flexibility. Managing multiple input modalities and adapting our design to time and technical constraints required constant iteration and tight coordination across the team.
Accomplishments that we're proud of
We are especially proud of designing and deploying a fully multimodal, personalized learning system that couples a Retrieval-Augmented Generation (RAG) pipeline with generative visual reasoning. Our RAG architecture indexes and embeds course-specific artifacts and textbook resources, processes each user query by retrieving semantically relevant chunks in real time, and conditions LLM outputs on this grounded context to produce syllabus-aligned, citation-aware responses that feel like a TA in office hours. On top of this, we developed an animation layer that translates symbolic reasoning into stepwise, elegant visualizations, letting students watch abstract transformations unfold dynamically and intuitively.

The system emphasizes smooth, real-time interaction through a digital whiteboard and natural speech input, so students can sketch ideas, write out steps, and talk through their thinking naturally. Beyond this back-and-forth, we built a custom low-level graphing experience: students can use natural language to directly customize functions ranging from a simple line to a sophisticated sinusoidal wave in 3D. To accomplish this, we developed a custom layer for the agent to interface with the Desmos API and generate reliable graph modifications from organic, sometimes imprecise, user input, bridging the gap between math and natural language for students who want to learn the graphical meaning of function parameters.

Together, these pieces make LoveLace a fluid, back-and-forth experience much closer to real office hours, where students can explore, make mistakes, and refine their understanding while still receiving clear, structured guidance.
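The retrieval step of a RAG pipeline like the one described above can be sketched in a few lines. The bag-of-words cosine similarity below is a toy stand-in for a real embedding model, and every name in it is illustrative rather than our production code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most semantically similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The derivative measures instantaneous rate of change.",
    "Integration by parts: integral of u dv = uv - integral of v du.",
    "A limit describes the value a function approaches.",
]
context = retrieve("how does integration by parts work", chunks, k=1)
# The retrieved chunk is then prepended to the LLM prompt as grounded context.
```

In the full system, the chunks come from indexed course materials, the embeddings from a learned model, and the retrieved context is what keeps answers aligned with the class's own notation.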
What we learned
Through this project, we learned how powerful AI agents can be, not just as tools but as collaborators across design, infrastructure, and debugging. We discovered that the best projects are built from a strong vision for how people want to interact with technology, and we learned to work effectively with agents: breaking ambitious ideas into smaller components, iterating rapidly, and then stitching everything back together into a cohesive system. As first-time hackathon builders, we took a vision from concept to full-scale application and realized we're no longer limited by unfamiliar tech stacks or lack of prior experience.
What's next for LoveLace
Next, we plan to expand LoveLace’s accessibility by adding Spanish language support, making high-quality, conceptual math tutoring available to a broader community of learners. We also aim to optimize animation rendering time through GPU acceleration, improving responsiveness so visual explanations feel seamless and real-time.
Built With
- anthropic
- chatgpt
- claude
- codex
- desmos
- docker
- javascript
- lovable
- manim
- openai
- python
- typescript
- vercel
- webrtc