Inspiration

During office hours last semester, we noticed something frustrating: students would come in confused about the same slides week after week. The professor had no idea which parts of the lecture materials were causing trouble until students bombed the midterm. Meanwhile, the students were too intimidated to ask questions in class, so they'd just struggle in silence.

We thought, what if every time a student clicked on something confusing, that click could tell the teacher "hey, this needs work"? And what if AI could give instant help while collecting that data? That's how FrictionAI started: turning confusion into actionable insights for everyone.

What it does

FrictionAI is an intelligent PDF viewer that creates a learning feedback loop:

For students: Click any confusing text or diagram in your course materials to get an instant AI explanation tailored to your specific course and context. The AI understands both text and images, so it can explain complex diagrams, equations, or dense paragraphs.

For teachers: See visual heatmaps showing exactly where students struggle. Red hot zones indicate high-confusion areas, yellow shows moderate difficulty, and green means students get it. No more guessing which topics need more explanation.

For the AI: The system learns which explanations work by caching frequently-requested content and refining prompts based on interaction patterns. High-struggle areas automatically get more contextual information in their explanations.

Upload a PDF, students interact with it, and the system generates real-time analytics showing struggle patterns across the entire class.

How we built it

Frontend: Next.js 15 with React 19 for the interactive PDF viewer. We used react-pdf for rendering and custom CSS overlays for the heatmap visualization. TailwindCSS + shadcn/ui for the interface.
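To give a flavor of the overlay logic, here's a minimal sketch of how a click on the rendered page maps back to PDF coordinates and resolves to a chunk. Names like `Chunk` and `findChunkAt` are illustrative stand-ins, not our exact code:

```typescript
// Hypothetical sketch: map a click on the rendered page back to PDF
// coordinates, then find the chunk whose bounding box contains it.
interface Chunk {
  id: string;
  page: number;
  bbox: { x0: number; y0: number; x1: number; y1: number }; // PDF points
}

function findChunkAt(
  chunks: Chunk[],
  page: number,
  clickX: number, // pixels within the rendered page element
  clickY: number,
  scale: number, // rendered pixels per PDF point
): Chunk | undefined {
  const x = clickX / scale;
  const y = clickY / scale;
  return chunks.find(
    (c) =>
      c.page === page &&
      x >= c.bbox.x0 && x <= c.bbox.x1 &&
      y >= c.bbox.y0 && y <= c.bbox.y1,
  );
}
```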

Backend: Supabase handled everything. PostgreSQL stores interactions and chunks, Edge Functions (Deno runtime) serve the AI explanation API, and Storage holds the PDFs. We wrote database triggers to auto-increment interaction counts atomically.
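Because the trigger does the counting server-side, the client only has to insert a row. Roughly, that side looks like this (table and column names are simplified stand-ins for ours):

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
);

// An AFTER INSERT trigger on `interactions` bumps the matching
// chunk's interaction count atomically, so no read-modify-write here.
export async function logInteraction(chunkId: string, x: number, y: number) {
  const { error } = await supabase
    .from("interactions")
    .insert({ chunk_id: chunkId, x, y });
  if (error) throw error;
}
```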

PDF Processing: AWS Lambda running Python with PyMuPDF. It downloads PDFs, extracts text with coordinates, identifies images, and chunks content semantically. The Lambda sends chunk data back to Supabase via REST API.
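The Lambda itself is Python, but the handoff is just a REST POST to PostgREST. Here's a TypeScript sketch of the payload shape and the call; the `chunks` table and field names are assumptions standing in for our schema:

```typescript
interface ExtractedChunk {
  document_id: string;
  page: number;
  text: string;
  bbox: [number, number, number, number]; // x0, y0, x1, y1 in PDF points
  is_image: boolean;
}

// Supabase's REST API accepts a JSON array as a bulk insert.
async function pushChunks(chunks: ExtractedChunk[]) {
  const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/chunks`, {
    method: "POST",
    headers: {
      apikey: process.env.SUPABASE_SERVICE_KEY!,
      Authorization: `Bearer ${process.env.SUPABASE_SERVICE_KEY}`,
      "Content-Type": "application/json",
      Prefer: "return=minimal",
    },
    body: JSON.stringify(chunks),
  });
  if (!res.ok) throw new Error(`Chunk upload failed: ${res.status}`);
}
```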

AI Layer: Google Gemini 2.5 Flash API integrated into Supabase Edge Functions. We built dynamic prompts that include course context, surrounding chunks, and detail level preferences. Explanations get cached in PostgreSQL to reduce API costs.
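A condensed sketch of the cache-first flow inside the Edge Function. Table names and the prompt shape are simplified, and the endpoint shown is the public Gemini REST API rather than our exact wrapper:

```typescript
import { createClient } from "npm:@supabase/supabase-js@2";

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
);

async function explainChunk(chunkId: string, prompt: string): Promise<string> {
  // 1. Serve a cached explanation if one exists.
  const cached = await supabase
    .from("explanations")
    .select("text")
    .eq("chunk_id", chunkId)
    .maybeSingle();
  if (cached.data) return cached.data.text;

  // 2. Otherwise ask Gemini, then cache the answer for the next student.
  const res = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" +
      `?key=${Deno.env.get("GEMINI_API_KEY")}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const json = await res.json();
  const text = json.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
  await supabase.from("explanations").insert({ chunk_id: chunkId, text });
  return text;
}
```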

Analytics: Custom DBSCAN-inspired clustering algorithm that aggregates click coordinates into density blobs. We interpolate colors (green → yellow → red) using logarithmic scaling to make heatmaps visually meaningful.
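Here's roughly the idea in simplified form. The EPS radius and ramp constants are illustrative, not our tuned values:

```typescript
interface Click { x: number; y: number }
interface HeatBlob { x: number; y: number; count: number }

// Greedy, DBSCAN-inspired grouping: each click joins the nearest existing
// blob within EPS, else it seeds a new one.
const EPS = 40; // px

function cluster(clicks: Click[]): HeatBlob[] {
  const blobs: HeatBlob[] = [];
  for (const c of clicks) {
    const hit = blobs.find((b) => Math.hypot(b.x - c.x, b.y - c.y) < EPS);
    if (hit) {
      // Nudge the centroid toward the new click and grow the count.
      hit.x += (c.x - hit.x) / (hit.count + 1);
      hit.y += (c.y - hit.y) / (hit.count + 1);
      hit.count++;
    } else {
      blobs.push({ x: c.x, y: c.y, count: 1 });
    }
  }
  return blobs;
}

// Logarithmic green -> yellow -> red ramp so low counts stay visible.
function heatColor(count: number, maxCount: number): string {
  const t = Math.log1p(count) / Math.log1p(maxCount); // 0..1
  const r = Math.round(255 * Math.min(1, 2 * t));
  const g = Math.round(255 * Math.min(1, 2 * (1 - t)));
  return `rgb(${r}, ${g}, 0)`;
}
```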

Challenges we ran into

CORS hell: Getting Edge Functions to work the same locally and in production was a nightmare. We kept hitting CORS errors until we realized we needed to add corsHeaders to every Response object, not just the success cases.
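For anyone hitting the same wall, the shape of the fix looks like this (a simplified sketch of one of our functions, not the full handler):

```typescript
const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers":
    "authorization, x-client-info, apikey, content-type",
};

Deno.serve(async (req) => {
  // The browser's preflight needs the headers too.
  if (req.method === "OPTIONS") {
    return new Response("ok", { headers: corsHeaders });
  }
  try {
    const { chunkId } = await req.json();
    const explanation = `explanation for ${chunkId}`; // call Gemini here
    return new Response(JSON.stringify({ explanation }), {
      headers: { ...corsHeaders, "Content-Type": "application/json" },
    });
  } catch (err) {
    // This was the bug: error responses without corsHeaders surface in
    // the browser as CORS failures instead of readable 500s.
    return new Response(JSON.stringify({ error: String(err) }), {
      status: 500,
      headers: { ...corsHeaders, "Content-Type": "application/json" },
    });
  }
});
```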

Chunking trade-offs: Too small and you get meaningless fragments. Too large and students can't pinpoint confusion. We iterated on the chunking algorithm probably 10 times before landing on semantic grouping by proximity + line breaks.
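The core rule we ended up with is small. A simplified version, where the 1.5x threshold is a stand-in for the value we tuned:

```typescript
interface Line { text: string; y0: number; y1: number } // one extracted text line

// Start a new chunk when the vertical gap to the previous line exceeds
// ~1.5x the line height, which tends to coincide with paragraph breaks.
function groupLines(lines: Line[]): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  for (let i = 0; i < lines.length; i++) {
    const prev = lines[i - 1];
    const gapTooBig =
      prev !== undefined && lines[i].y0 - prev.y1 > 1.5 * (prev.y1 - prev.y0);
    if (gapTooBig && current.length) {
      chunks.push(current.join(" "));
      current = [];
    }
    current.push(lines[i].text);
  }
  if (current.length) chunks.push(current.join(" "));
  return chunks;
}
```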

Real-time heatmaps: Calculating density clusters on every click would kill performance. We ended up pre-computing clusters and only recalculating when interaction count crosses thresholds. Still not perfect.
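The threshold check itself is tiny; something like this, with the step size as an illustrative placeholder:

```typescript
// Recompute clusters only when the interaction count crosses a step
// boundary, so a burst of clicks triggers at most one recluster per step.
const STEP = 25;

function shouldRecluster(prevCount: number, newCount: number): boolean {
  return Math.floor(newCount / STEP) > Math.floor(prevCount / STEP);
}
```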

Multimodal prompting: our pipeline feeds Gemini text descriptions of images rather than raw image bytes. Extracting meaningful descriptions from PDF metadata was tricky; lots of images are just decorative. We had to filter by size and position to find actual diagrams.
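The filter boils down to a heuristic predicate like this, where the thresholds are stand-ins for what we tuned by eye:

```typescript
interface PdfImage {
  width: number;      // points
  height: number;
  y0: number;         // top of bounding box, points from page top
  pageHeight: number;
}

// Drop tiny images (icons, bullets) and anything hugging the page edges
// (headers, footers, logos); what's left is probably a real diagram.
function isLikelyDiagram(img: PdfImage): boolean {
  const bigEnough = img.width > 100 && img.height > 100;
  const margin = 0.08 * img.pageHeight;
  const inBody = img.y0 > margin && img.y0 < img.pageHeight - margin;
  return bigEnough && inBody;
}
```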

Database migrations: We probably reset the local database 20+ times during development. We kept hitting conflicts between migration versions, and eventually learned to check migration timestamps before creating new ones.

Accomplishments that we're proud of

  • It actually works end-to-end. Upload PDF → chunks appear → click generates explanation → heatmap updates. The full pipeline works, which honestly felt far from certain at 2am on Sunday.

  • The heatmaps look genuinely cool. Watching the blobs grow and change color as you click feels satisfying. The logarithmic color scaling makes patterns obvious at a glance.

  • Gemini understands diagrams better than we expected. We fed it some gnarly thermodynamics diagrams and it gave coherent explanations. That wasn't a given.

  • Caching reduces costs significantly. Popular chunks get cached after the first explanation, cutting our API calls by ~60% in testing with simulated traffic.

  • Teachers could actually use this. We showed a prototype to a TA friend and they immediately saw value: "I would've known to spend more time on recursion if I'd seen this data."

What we learned

Technical:

  • Supabase Edge Functions are powerful but debugging them locally is rough
  • PyMuPDF's coordinate extraction is amazing for building interactive overlays
  • Database triggers can save you from writing tons of API logic
  • Next.js 15's Server Components make data fetching way cleaner
  • CORS is always the problem. Always.

Product:

  • Struggle patterns are surprisingly consistent: the same slides confuse everyone
  • Visual content (diagrams, graphs) generates 2x more clicks than text
  • Context matters for AI explanations; generic responses don't help
  • Teachers want aggregate data, not individual student tracking (privacy!)
  • Students won't use slow features; AI responses need to be <3 seconds

Team:

  • Splitting work by domain (frontend/backend/AI/analytics) let us move fast
  • Git merge conflicts are inevitable; communicate before pushing
  • Deploy early, deploy often; we caught so many bugs in production that never showed up locally
  • Taking breaks actually helps; some of our best ideas came after stepping away

What's next for FrictionAI

Short-term:

  • Add comparison view: see how struggle patterns differ between semesters
  • Implement "suggested improvements" where AI recommends alternative explanations for hot zones
  • Build teacher notification system: "20% of students clicked this diagram today"
  • Add export feature for heatmap analytics (PDF reports for curriculum committees)

Long-term:

  • Integration with Canvas/Blackboard LMS for seamless adoption
  • Predictive analytics: identify which content will cause confusion before students see it
  • Peer learning suggestions: match confused students with peers who understood that concept
  • Multi-document analysis: find patterns across entire course catalogs
  • Voice explanations for accessibility

Research directions:

  • Does seeing the heatmap actually improve teaching outcomes? (Need A/B testing)
  • Can we predict student performance based on early interaction patterns?
  • What explanation formats work best? (Analogies vs. step-by-step vs. examples)

We built this in less than 24 hours, so there are definitely rough edges. But the core concept works, and we think it could genuinely help students learn and teachers teach. That's worth iterating on.
