Inspiration

LockIn grew out of late-night cram sessions and marathon study days, when we, like many other college students, found ourselves hunched over laptops, jaws clenched, utterly unaware of mounting stress. We saw classmates burning out without realizing how much physical tension and anxiety they carried. Existing wearables and single-focus apps didn’t feel right for busy dorm rooms or cramped study halls. We wanted a passive, non-intrusive, AI-powered desk companion that not only tracks posture and stress but also teaches the self-awareness that helps prevent burnout.

What it does

LockIn uses your existing webcam to capture a quick frame every few seconds and sends it through Google’s Gemini API for posture-shift counts and “stressed” expression flags. When it detects slouching or tension, LockIn speaks an audio alert using the browser’s built-in text-to-speech. At the start of each session, LockIn prompts you to enter a study timer and the task at hand, then guides you through that focused block. At session’s end, you get a downloadable in-app report logging your posture lapses and stress peaks.
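The nudge step can be sketched roughly like this. The `shouldAlert` and `nudgeMessage` helpers and the `{ slouching, stressed }` response shape are our illustrative assumptions, not the actual Gemini output:

```javascript
// Decide whether a frame analysis warrants a spoken nudge.
// The { slouching, stressed } shape is an assumed simplification
// of the real Gemini response, used here for illustration.
function shouldAlert(analysis) {
  return Boolean(analysis.slouching || analysis.stressed);
}

// Pick a short spoken message for the detected issue.
function nudgeMessage(analysis) {
  if (analysis.slouching) return "Sit up straight and relax your shoulders.";
  if (analysis.stressed) return "Take a slow breath and unclench your jaw.";
  return "";
}

// Speak the nudge with the browser's built-in text-to-speech
// (no-op outside a browser environment).
function speakNudge(text) {
  if (typeof window !== "undefined" && "speechSynthesis" in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}
```

In practice a capture loop would run these every few seconds against the latest analyzed frame.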

How we built it

We replaced specialized hardware with a simple browser-based solution: we grab frames via WebRTC and process them with OpenCV in a lightweight Node.js service. Each frame is sent to the Gemini API for emotion and posture analysis. Our React web app, backed by Firebase Auth, asks users for their desired session length and task description, then signals our inference service to start or stop. Firestore streams time-stamped events (posture shifts, stress flags, and user notes), while the browser’s Web Speech API delivers real-time TTS nudges. Finally, we generate a polished PDF summary client-side with jsPDF, so there’s no extra server load.
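As a rough sketch, the client-side summary step reduces the Firestore event stream to the totals the PDF renders. The event shape and the `summarizeSession` helper below are our illustrative assumptions:

```javascript
// Reduce a session's time-stamped events into report totals.
// Assumed event shape (a simplification of our Firestore documents):
//   { ts: epochMillis, type: "posture" | "stress" | "note" }
function summarizeSession(events) {
  const summary = { postureLapses: 0, stressFlags: 0, notes: 0, peakStressMinute: null };
  const stressPerMinute = new Map();
  for (const ev of events) {
    if (ev.type === "posture") {
      summary.postureLapses++;
    } else if (ev.type === "stress") {
      summary.stressFlags++;
      const minute = Math.floor(ev.ts / 60000);
      stressPerMinute.set(minute, (stressPerMinute.get(minute) || 0) + 1);
    } else if (ev.type === "note") {
      summary.notes++;
    }
  }
  // Peak stress = the minute with the most stress flags.
  for (const [minute, count] of stressPerMinute) {
    if (summary.peakStressMinute === null || count > stressPerMinute.get(summary.peakStressMinute)) {
      summary.peakStressMinute = minute;
    }
  }
  return summary;
}
```

The resulting object is what a jsPDF template would lay out into the downloadable report.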

Challenges we ran into

Since we moved off the Raspberry Pi and needed everything to run in the browser, keeping frame captures, API calls, and TTS nudges fast enough to feel natural was tough. We had to carefully balance how often frames were sent against how quickly the Gemini API could respond, so the system never felt sluggish.
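The core of that pacing fix is never stacking requests: if the previous analysis call is still in flight, the new frame is dropped rather than queued. A minimal sketch, where `analyzeFrame` is a stand-in for the real Gemini request:

```javascript
// Wrap an async analysis call so at most one request is in flight.
// Frames that arrive while a request is pending are dropped, not
// queued, which keeps latency bounded. `analyzeFrame` is a stand-in
// for the actual Gemini inference call.
function makeThrottledSender(analyzeFrame) {
  let inFlight = false;
  let dropped = 0;
  return {
    async send(frame) {
      if (inFlight) {
        dropped++;          // skip this frame instead of queueing it
        return null;
      }
      inFlight = true;
      try {
        return await analyzeFrame(frame);
      } finally {
        inFlight = false;
      }
    },
    droppedCount: () => dropped,
  };
}
```

Dropping stale frames is cheap here because the next capture arrives a few seconds later anyway.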

Accomplishments that we're proud of

We delivered a fully browser-based, zero-hardware MVP that runs on any laptop without extra devices. Voice-first nudges, powered by built-in text-to-speech, plus custom session prompts where students set their own study timer and task description, turn passive monitoring into active habit formation.

What we learned

Building LockIn taught us that clever software can often replace dedicated hardware: leveraging the browser and a lightweight Node.js service let us pivot away from Raspberry Pi constraints without sacrificing performance. We also discovered that TTS alerts break the autopilot of long study sessions far more effectively than silent notifications.

What's next for LockIn

We plan to introduce adaptive focus modes that automatically adjust nudge frequency based on detected stress levels and user feedback, making LockIn smarter over time. We’ll build in agentic desktop interventions—when the data show you’re veering off-task, LockIn can quietly close distracting browser tabs, block access to time-sink sites, or suggest switching to a focused playlist or meditation prompt. A mobile companion app will bring study prompts and TTS nudges to phones and tablets for seamless cross-device support. We’re also developing group study analytics—anonymized dashboards for campus wellness centers to spot peak stress periods—and gamification elements like badges and streaks to reward consistent posture and focus habits. Finally, we’ll partner with university counseling centers to validate LockIn’s impact on academic performance and mental health, ensuring our tool not only monitors, but actively guides students toward healthier, more productive study habits.

Built With

firebase, gemini-api, javascript, jspdf, node.js, opencv, react, webrtc