Inspiration

We built ScrollWise because we wanted the tool we wished we had in high school. Every one of us has spent a Sunday night watching three hours of YouTube explainers before an exam and walked into class the next morning unable to remember half of it. Not because the content was bad. Because nothing connected it, nothing tested it, and nothing told us what we actually absorbed.

We also thought about our teachers. The ones who would assign a video with no way of knowing whether anyone watched it, let alone understood it. The ones who spent 20 minutes re-teaching something because they were flying blind on what the class already knew.

And we wanted to do it the right way. The hackathon brief asked us to prioritise the ethical use of AI, and most of the edtech we have seen lately does the opposite — surveillance-heavy monitoring, opaque detectors that misjudge students, algorithms optimised for screen time instead of understanding. We wanted to prove that AI in education can be the opposite of that: transparent, respectful, and genuinely on the learner's side.

We asked one question: what if scrolling could finally mean something?

What it does

ScrollWise is an educational short-form video platform that transforms passive scrolling into measurable, structured learning. It is not another app asking students to change their behaviour. It meets them where they already are and makes that time count.

A teacher or student declares a subject like World War 1 or Machine Learning and ScrollWise generates a concept tree breaking that subject into its core building blocks. A TikTok-style filtered feed then delivers only videos relevant to those concepts, pulled from YouTube and verified through transcript analysis.
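Under the hood, that verification pass is deliberately simple. Roughly, it looks like the sketch below (the function names, fields, and threshold are placeholders for illustration, not our exact code): the keyword search finds candidates, and a video only enters the feed if its transcript actually mentions the concepts.

```python
# Illustrative sketch of the transcript-verification pass: keep a keyword-matched
# video only if its transcript actually covers enough of the target concepts.

def covers_concepts(transcript_text: str, concepts: list[str], min_hits: int = 2) -> bool:
    """True if the transcript mentions at least `min_hits` of the target concepts."""
    text = transcript_text.lower()
    hits = sum(1 for concept in concepts if concept.lower() in text)
    return hits >= min_hits


def filter_candidates(candidates: list[dict], concepts: list[str]) -> list[dict]:
    """Second-pass filter over keyword-matched videos, each carrying its transcript text."""
    return [video for video in candidates if covers_concepts(video["transcript"], concepts)]
```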

Teachers can customise how many scrolls trigger a quiz, and when one appears it is built directly from what was just watched rather than from a generic question bank. Then comes the centrepiece: the Bubble Map, a living knowledge visualisation that updates in real time, showing each concept in green (strong), yellow (weak), or red (gap). Gap nodes automatically queue the next session's videos to fill what is missing.

Teachers get a dashboard showing every student's bubble map and a class-wide aggregate. Not just "Johnny completed 20 scrolls" but "80% of your class understands the causes of WW1 but only 20% have encountered the Treaty of Versailles, address this tomorrow."

This is what we mean by rethinking how learning is measured and experienced rather than optimising an existing system. We are not grading faster or recommending better videos. We are changing what counts as evidence of learning in the first place — from completion to comprehension, from hours watched to concepts understood.

How we built it

The frontend was built in React using Cursor for rapid development, with a TikTok-style scroll feed using react-player and IntersectionObserver for play and pause on scroll.

The backend runs on FastAPI in Python, deployed on Railway, and handles all external API calls server-side to keep keys secure. The AI layer uses the Anthropic Claude API for concept tree generation, quiz generation from video transcripts, and class insight strings.

Content comes from the YouTube Data API v3 with a two-layer filter: title and keyword search first, then transcript concept verification via youtube-transcript-api. The database and auth run on Supabase, with a full Postgres schema covering subjects, assignments, video cache, student progress, and quiz attempts, plus Google SSO built in. All demo data was pre-seeded and transcripts pre-cached so zero live API calls are made on stage.
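For the AI layer, the concept-tree call is a single structured prompt to Claude from the FastAPI side. A minimal sketch (the model id, prompt wording, and response schema here are placeholders, and in practice the response goes through the cleanup step described under Challenges):

```python
# Minimal sketch of concept-tree generation with the Anthropic Python SDK.
# Model id, prompt, and schema are illustrative, not our exact implementation.
import json
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def generate_concept_tree(subject: str) -> dict:
    """Ask Claude to break a subject into its core concepts and return them as JSON."""
    prompt = (
        f"Break the subject '{subject}' into 6-10 core concepts a student must understand. "
        'Respond with JSON only: {"subject": str, "concepts": [{"name": str, "children": [str]}]}'
    )
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(message.content[0].text)  # cleaned before parsing in the real app
```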

Ethics and responsible design

The hackathon brief specifically asked us to address fairness, transparency, privacy, and educational integrity. We treated these as design constraints from day one, not an afterthought, and every major product decision was filtered through them.

Transparency. Before a single video plays, the student sees the concept tree the AI generated for their subject and can edit it. There is no black box deciding what they will learn. The bubble map that tracks their progress is visible to them at all times and uses a simple, auditable scoring rule rather than a mysterious model output. Teachers can see and modify the concept tree too, so the AI augments their judgement instead of replacing it.
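The rule behind the colours is short enough to read in one glance. A simplified sketch (the exact thresholds here are illustrative):

```python
# Simplified sketch of the bubble-map scoring rule: a readable function, not a model.

def concept_status(correct: int, attempted: int, videos_watched: int) -> str:
    """Map a student's record on one concept to a bubble colour."""
    if attempted == 0:
        # never quizzed: red if never encountered, yellow if watched but untested
        return "red" if videos_watched == 0 else "yellow"
    accuracy = correct / attempted
    if accuracy >= 0.8:
        return "green"   # strong
    if accuracy >= 0.5:
        return "yellow"  # weak
    return "red"         # gap
```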

Privacy. We collect almost nothing. No cameras, no microphones, no biometric data, no keystroke tracking, no webcam proctoring. The only signals we use are which videos a student chose to watch and how they answered short quizzes — data they would generate anyway in any classroom. Every student owns their bubble map and can delete it at any time. Progress data never leaves their Supabase record unless a teacher whose class they joined is viewing the class aggregate.

Fairness. The feed is filtered by concept match, not by a personalisation algorithm trained on engagement. That means a student from an under-resourced school sees the same quality of content as a student with a paid tutor. There is no premium tier that unlocks better learning. We also deliberately avoided detector-style AI that flags students — tools like that have well-documented false-positive problems and disproportionately harm non-native English speakers and neurodivergent learners.

Educational integrity. ScrollWise is not a tool for doing homework faster or a chatbot that writes essays. It does the opposite: it makes informal learning visible and tests whether it actually stuck. Quizzes are generated from the exact transcripts the student watched, so there is no way to pass without having engaged with the content. The teacher dashboard preserves the teacher's role as the authority on what comes next — the AI surfaces gaps, the teacher decides how to address them.

Optimising for understanding, not engagement. Every mainstream short-form platform is engineered to maximise time on screen. We explicitly chose the opposite reward loop: the dopamine hit is not the scroll, it is the bubble map filling up and the quiz score climbing. A student who masters a subject in 15 scrolls instead of 50 is a win for us, not a loss. That alignment between product success and learner success is the ethical core of what we built.

Challenges we ran into

YouTube transcript availability hit us hard early. A large portion of videos have no transcripts at all, and auto-generated ones are too noisy to extract reliable concepts from. We had to build an aggressive pre-caching layer and manually verify videos before the demo to make sure nothing failed live.
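The guard we ended up with is essentially this (a sketch; newer versions of youtube-transcript-api expose an instance-based API instead of the classic static call shown here):

```python
# Sketch of the transcript guard used during pre-caching: videos with missing or
# disabled transcripts are rejected up front instead of failing live in the feed.
from youtube_transcript_api import YouTubeTranscriptApi, TranscriptsDisabled, NoTranscriptFound


def fetch_transcript(video_id: str) -> str | None:
    """Return the full transcript text for a video, or None if none is available."""
    try:
        segments = YouTubeTranscriptApi.get_transcript(video_id)
    except (TranscriptsDisabled, NoTranscriptFound):
        return None
    return " ".join(segment["text"] for segment in segments)
```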

YouTube API quota limits were a constant pressure. The Data API v3 gives 10,000 units per day and each search burns 100 of them. During testing we blew through the quota faster than expected, which forced us to cache search results aggressively in the database and stop making redundant calls.
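The pattern that saved the quota is plain cache-or-fetch, with search results stored in the video cache table. A sketch (table and column names are placeholders; `supabase` is a supabase-py client and `youtube` a googleapiclient resource built elsewhere):

```python
# Sketch of the quota-saving cache: check Supabase first, only call the
# YouTube Data API (100 units per search) on a miss, then store the result.

def search_videos(supabase, youtube, query: str) -> list[dict]:
    cached = supabase.table("video_cache").select("videos").eq("query", query).execute()
    if cached.data:
        return cached.data[0]["videos"]  # cache hit: zero quota spent

    response = youtube.search().list(    # cache miss: costs 100 quota units
        q=query, part="snippet", type="video", maxResults=25
    ).execute()
    videos = [
        {"id": item["id"]["videoId"], "title": item["snippet"]["title"]}
        for item in response["items"]
    ]
    supabase.table("video_cache").insert({"query": query, "videos": videos}).execute()
    return videos
```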

Claude API response parsing caused silent failures early on. Claude would wrap JSON in markdown code fences or add a sentence of preamble before the actual object, which broke JSON.parse() without throwing an obvious error. We added a cleaning step to strip fences and trim whitespace before every parse and wrapped everything in try-catch.
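On the backend side, the cleanup boils down to something like this Python sketch of the same idea: strip any fences, slice out the JSON object, and fail loudly instead of silently.

```python
# Sketch of the response-cleaning step: drop markdown fences, cut any preamble
# before the first brace, parse, and raise a loud error on failure.
import json
import re


def parse_model_json(raw: str) -> dict:
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()  # remove ``` / ```json fences
    start, end = cleaned.find("{"), cleaned.rfind("}")  # ignore any preamble or trailer
    if start == -1 or end == -1:
        raise ValueError(f"No JSON object in model output: {raw[:200]!r}")
    try:
        return json.loads(cleaned[start : end + 1])
    except json.JSONDecodeError as exc:
        raise ValueError(f"Malformed JSON from model: {raw[:200]!r}") from exc
```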

Smooth scroll with YouTube iframes was harder than expected. Iframes do not expose play state or cooperate with native scroll events, so detecting whether a student actually watched a video rather than just scrolled past it took real work. We solved it using IntersectionObserver combined with a play duration threshold so only genuine watches count toward progress.

Browser autoplay restrictions caught us off guard. Browsers block autoplay with sound by default, which broke the scroll feed silently on first load. Muting videos by default with a tap-to-unmute button fixed it and actually felt like the right UX for a scroll feed anyway.

Keeping the demo environment stable was its own challenge. Things that worked perfectly in development broke on deployment in ways that were hard to debug under time pressure. We learned to deploy to Railway early and test against the live URL rather than localhost from day two onwards.

Accomplishments that we're proud of

The Bubble Map is genuinely beautiful and actually works. Watching it update live when a student answers a quiz correctly, nodes shifting from red to yellow to green, is the moment the whole idea becomes real.

The teacher dashboard is the other thing we are proud of. It turns raw scroll data into a single sentence a teacher can act on tomorrow. That felt like we had built something genuinely useful, not just technically impressive.

We are also proud that we shipped a real minimum viable product rather than a slide deck with ambitions. Every layer in the pitch — the concept tree, the filtered feed, the quiz, the bubble map, the teacher dashboard — exists and talks to the others end to end.

And we are proud that the ethical design held throughout. It would have been easier to ship the surveillance-heavy version. We did not, and the product is stronger for it.

What we learned

We learned to fake it first and replace it second. Every API endpoint started with hardcoded data and got swapped for real calls once the visual loop was working. That discipline saved us from a day-three panic.

We learned that the idea matters more than the polish. The bubble map sold the concept even before the animations were perfect. Judges remember the insight, not the pixel gap.

We learned that ethics is a design constraint, not a slide at the end. Once we committed to no surveillance, no engagement optimisation, and full transparency, a lot of design decisions became obvious. The constraints made us faster, not slower.

And we learned that team synergy affects everything. Clear role separation, with one of us on backend, one on frontend, and one on QA and the pitch, meant nothing fell through the cracks and no one was blocked waiting on someone else. Three people moving in lockstep outran every obstacle we hit.

What's next for ScrollWise

First we want to build a native mobile app in React Native since this needs to live where students already are. From there, platform expansion to Coursera and Udemy alongside YouTube, creator onboarding so educators can publish directly into the filtered feed, and micro-credential export so the bubble map becomes a shareable learning portfolio.

As we scale, holding the ethical line becomes harder, not easier — more data, more pressure to personalise, more incentive to optimise for retention. We plan to publish our scoring logic openly, keep the student as the sole owner of their bubble map, and never introduce an engagement metric as a success measure internally. The long-term vision is the layer that sits on top of the entire internet and turns watching into knowing — built in a way a student, a teacher, and a parent can all trust.

Built With

react, react-player, fastapi, python, railway, anthropic-claude-api, youtube-data-api-v3, youtube-transcript-api, supabase, postgresql, google-sso, cursor