Inspiration

In my experience as someone who's tried pretty much every 'recommended' focus app on earth, most focus tools today are static: white-noise apps, fixed timers. But human attention isn't static. It fluctuates constantly with environment, fatigue, and stimulation level. Attention adapts, so we need software capable of adapting with it.

I wanted to answer one question: what if your sound environment could respond to your attention in real time?

I also wanted to build something that felt accessible and low-effort, especially for users who struggle with traditional productivity tools. Instead of sliders, menus, and settings, I explored using natural gestures and passive signals.

What it does

DJ Lock-In takes your webcam feed as input and lets you control your sound environment with recognizable hand gestures. Those gestures apply audio effects (layered ambience, filtering, playback-speed changes) that research suggests can support concentration and cognitive function.
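As a rough illustration of the idea, here is a minimal sketch of how recognized gestures might map to audio parameters. All names and values here are my own assumptions for illustration, not the project's actual identifiers or presets.

```javascript
// Hypothetical gesture → audio-parameter mapping (illustrative values only).
// An engine like Tone.js would then apply these parameters to the mix.
const GESTURE_EFFECTS = {
  pinch:    { filterCutoffHz: 800,   playbackRate: 1.0, ambience: "rain" },
  openPalm: { filterCutoffHz: 20000, playbackRate: 1.0, ambience: "brownNoise" },
  fist:     { filterCutoffHz: 400,   playbackRate: 0.9, ambience: "rain" },
};

function effectsForGesture(gesture) {
  // Fall back to a neutral preset when the gesture isn't recognized.
  return (
    GESTURE_EFFECTS[gesture] ?? {
      filterCutoffHz: 20000,
      playbackRate: 1.0,
      ambience: "none",
    }
  );
}
```

The point of a lookup table like this is that new gestures or effects can be added without touching the vision or audio code.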

How I built it

The system combines three main components:

  1. Computer Vision (MediaPipe)
     I used MediaPipe's hand and face tracking to extract:
     - Hand landmarks → gesture mapping
     - Face landmarks → approximate attention state

  2. Audio Engine (Tone.js)
     I built a real-time audio pipeline using:
     - Filters (for frequency control)
     - Playback rate (for speed)
     - Layered ambience (rain, brown noise, etc.)
     These parameters are continuously updated from gesture input.

  3. Focus Feedback Loop
     I implemented a simple attention model:

  - Face centered and eyes on screen → focused
  - Face misaligned → distracted
  - Face missing → no attention

A running focus score is updated over time. When it drops below a threshold:

  - The UI responds
  - A refocus cue is triggered
  - The audio subtly adapts to support attention

Challenges we ran into

The most challenging part was real-time audio modification. I initially tried Python's librosa and sounddevice, but Python's audio tooling isn't built for low-latency, real-time effects. And I am terrified of C and all its sisters and brothers (C++, C#, terrified of every single one). Moving the audio pipeline to Tone.js in the browser solved this.

It was also difficult doing all of this as a team of one.

Accomplishments that we're proud of

I'm proud that I got the real-time audio pipeline working, and that I managed to include a Pomodoro mechanism as well.

What we learned

  - Robust attention tracking
  - Smarter adaptive audio models

What's next for DJ Lock-In

As for future improvements, I'd place greater emphasis on focus tracking, using keyboard activity or even a built-in browser avatar that tracks your progress across selected tabs. I would also look into additional mechanics that can improve focus, such as binaural beats or isochronic tones, and I'd expand the song database.
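A binaural-beat layer, for instance, works by feeding each ear a slightly different carrier frequency; the brain perceives their difference as a slow beat. A sketch of how this future idea could be added (the frequencies and the Tone.js wiring are my assumptions, not part of the current build):

```javascript
// Compute left/right carrier frequencies for a desired beat frequency.
// e.g. a 200 Hz base with a 10 Hz (alpha-band) beat → 200 Hz left, 210 Hz right.
function binauralCarriers(baseHz, beatHz) {
  return { leftHz: baseHz, rightHz: baseHz + beatHz };
}

// In Tone.js this could be realized (untested sketch) as two sine
// oscillators panned hard left and hard right:
//   const { leftHz, rightHz } = binauralCarriers(200, 10);
//   new Tone.Oscillator(leftHz, "sine")
//     .connect(new Tone.Panner(-1).toDestination()).start();
//   new Tone.Oscillator(rightHz, "sine")
//     .connect(new Tone.Panner(1).toDestination()).start();
```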

Built With

mediapipe, tone.js, javascript

