Inspiration

Imagine this: you're a student with big dreams, but all you have is a single laptop and a shaky internet connection. No tablets. No smartphones. No expensive apps. Just your voice, your curiosity, and the will to learn. That’s the reality for millions around the world — and that’s who we built LockIn for.

We were inspired by the idea that technology should lift people up, not leave them behind. In an era where AI and digital tools are advancing rapidly, we sought to create something simple, powerful, and accessible — something that transforms even the most basic device into a personal learning assistant. LockIn was born from a mission: to make smart learning available to everyone.

What it does

LockIn turns any laptop into a conversational tutor and annotation tool. It listens to your voice, understands your questions, and answers clearly—with diagrams or annotations when needed.

It has just two buttons:

  • 🎙️ Push to Talk – to ask anything, hands-free
  • 🧠 Ask Again / Clarify – to go deeper or rephrase
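The two-button flow can be sketched as a tiny controller: one press of Push to Talk starts recording, a second press sends the audio to the assistant, and Clarify resends the last question with a rephrase hint. The names here (`createLockInController`, `ask`) are illustrative stand-ins, not our actual handlers:

```javascript
// Minimal sketch of LockIn's two-button interaction loop.
// `ask(audio, opts)` is a hypothetical stand-in for the assistant call.
function createLockInController({ ask }) {
  let lastQuestion = null;
  let recording = false;

  return {
    // 🎙️ Push to Talk: first press starts recording, second press
    // stops it and sends the captured audio to the assistant.
    pushToTalk(audio) {
      if (!recording) {
        recording = true;
        return { state: "recording" };
      }
      recording = false;
      lastQuestion = audio;
      return { state: "answered", answer: ask(audio) };
    },

    // 🧠 Ask Again / Clarify: resend the last question, asking the
    // assistant to rephrase or go deeper.
    clarify() {
      if (lastQuestion === null) return { state: "idle" };
      return { state: "answered", answer: ask(lastQuestion, { rephrase: true }) };
    },
  };
}
```

Keeping the whole interaction to two states is what lets the interface stay hands-free: there is never a menu to navigate by mouse or keyboard.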

Under the hood, it uses voice recognition and an intelligent assistant to respond like a helpful teacher. It doesn't just answer with plain text—it explains and visually breaks things down when needed. From chemistry mechanisms to economics graphs, LockIn brings concepts to life with simple, interactive annotations.

And the best part? It’s designed to be extremely lightweight, so it works even on older devices with limited resources.

How we built it

Our tech stack had one rule: keep it accessible and fast.

  • Frontend: A simple interface built with Electron, Next.js, Tailwind CSS, and JavaScript. The voice button and animations were designed to feel fun yet modern.
  • Speech-to-speech: We used the ElevenLabs text-to-speech WebSocket API for real-time speech generation.
  • AI assistant: The Gemini 2.5 API powers the assistant logic, generating answers from audio and image input and issuing annotation tool calls.
  • Annotation system: Electron and robotjs let the app draw diagrams directly on the screen based on the assistant's explanations.
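One way to picture the annotation pipeline: the assistant returns a tool call describing what to draw, and a thin layer flattens it into drawing ops the Electron overlay can execute. This is a simplified sketch; the tool-call shape (`draw_annotation` with a `shapes` array) is an assumption for illustration, not our exact schema:

```javascript
// Hypothetical: convert an assistant "draw_annotation" tool call
// into flat drawing ops for the on-screen overlay.
function toAnnotationOps(toolCall) {
  if (toolCall.name !== "draw_annotation") return [];
  return toolCall.args.shapes
    .map((shape) => {
      switch (shape.type) {
        case "arrow":
          return { op: "arrow", from: shape.from, to: shape.to };
        case "circle":
          return { op: "circle", center: shape.center, radius: shape.radius };
        case "label":
          return { op: "text", at: shape.at, text: shape.text };
        default:
          // Ignore shapes we don't know how to render.
          return null;
      }
    })
    .filter(Boolean);
}
```

Keeping the drawing vocabulary this small (arrows, circles, labels) is what makes it realistic for the model to annotate anything from chemistry mechanisms to economics graphs.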

Challenges we ran into

  • Working in silos backfired: Each of us focused deeply on our part—frontend, backend, AI—but we didn’t communicate or sync often enough. When it came time to merge everything, we ran into a mess of errors and integration issues that ate up a lot of time.

  • Voice + AI in under 30 hours: Combining real-time voice input with AI responses that feel natural was harder than we expected. From handling audio input to prompting and tuning the assistant's behavior, it was a steep climb—especially under the pressure of a tight hackathon deadline.

Accomplishments that we're proud of

  • We managed to get the entire project working—voice input, AI responses, and annotation—all integrated into one flow. It’s not perfect and still needs a few tweaks, but it works!
  • Despite the challenges and tight timeline, we pulled it off in under 30 hours and still had a lot of fun along the way.
  • We each stepped out of our comfort zones and learned something new, whether it was integrating AI, handling voice input, or designing with accessibility in mind.
  • It was our first time building something like this, and we’re proud of how far we got—especially knowing how much we’ve learned.

What we learned

  • How to design for accessibility, not just aesthetics.
  • The power of simplifying: reducing choices made the app feel faster and smarter.
  • Speech interfaces are still underused—but they open up incredible opportunities, especially for those who struggle with typing or reading.
  • Annotation logic requires creativity. We built an entirely new way for AI to translate explanations into visual elements.
  • And yeah… AI doesn’t always get it right, but with the right prompts, it can feel almost human.

What's next for LockIn

Short-term goals:

  • 🌍 Multilingual support – Indonesian, Mandarin, and more
  • 🧱 Annotation templates – prebuilt visuals for math, science, and languages
  • 🧑‍🏫 Teacher Mode – allow tutors to create guided lessons using LockIn

Long-term goals:

  • 📱 Mobile-compatible version – for Android tablets and Chromebooks
  • 🏫 Deploy to schools with limited resources, especially in Southeast Asia
  • 👩‍🦯 Accessibility toolkit – screen reader support, high-contrast mode, tactile mode support
  • 🤝 Collaboration with NGOs or governments to bring AI education tools where they’re needed most

Built With

Electron · Next.js · Tailwind CSS · JavaScript · ElevenLabs · Gemini · robotjs
