Inspiration:
We started by mapping all kinds of speculative "new senses" we could give people: emotional climate, social comfort, environmental risk, even AI detection. As we talked, we kept circling back to one shared experience: sitting in front of a task, wanting to do it, and still not being able to start, without really knowing why. We realized heightened sensing could do something powerful here: make the invisible moment before procrastination visible, and lower the barrier to that very first unit of effort. So LOCK IN grew out of one question: What if you could actually feel your brain's readiness, and let that shape which task you choose, how small the first step is, and when you should try to do it?
What it does:
LOCK IN is a speculative tool that senses your effort state and matches tasks to your capacity. It reads brain and body signals (like prefrontal activation, heart rate variability, and skin conductance) to estimate how much effort your brain can handle right now and how close you are to avoidance. It then:
- Shows this as a simple Effort Dial (green/yellow/red).
- Suggests one micro-step that fits your current state.
- Learns over time when different task types go best for you, so you can plan deep work into your natural "lock in" windows.
Instead of pushing you to "try harder," it helps you work with your biology.
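The sensing loop above can be sketched in code. This is a purely illustrative sketch: the signal ranges, fusion weights, and dial thresholds below are invented assumptions for the speculative concept, not values from the LOCK IN prototype or any real EEG/HRV device.

```python
# Illustrative sketch of the Effort Dial fusion step.
# All ranges, weights, and thresholds are assumed, not measured.

def normalize(value, lo, hi):
    """Clamp a raw signal into a 0-1 contribution."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def effort_dial(prefrontal_activation, hrv_ms, skin_conductance_us):
    """Fuse three speculative signals into a green/yellow/red state.

    prefrontal_activation: relative activation, assumed pre-scaled to 0-1
    hrv_ms: heart rate variability in ms; higher suggests more capacity
    skin_conductance_us: microsiemens; higher suggests arousal/avoidance
    """
    capacity = (
        0.5 * prefrontal_activation
        + 0.3 * normalize(hrv_ms, 20, 100)                   # assumed HRV range
        + 0.2 * (1 - normalize(skin_conductance_us, 2, 20))  # inverted: calm is good
    )
    if capacity >= 0.66:
        return "green"   # full task blocks feel doable
    if capacity >= 0.33:
        return "yellow"  # suggest one small micro-step
    return "red"         # suggest a tiny, near-zero-cost first action
```

In this framing, the "one micro-step" suggestion would key off the dial state, and the "learns over time" layer would simply log dial states against task types to find each user's lock-in windows.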
How we built it:
We started extremely low-fi. We sketched dozens of concepts on paper: different “new senses” we could give people, possible Effort Dials, ways a tool might listen to brain and body signals, and how it could suggest right-sized next steps. Those sketches helped us narrow in on the core story of LOCK IN: sensing effort state, matching tasks to capacity, and lowering the barrier to first effort. From there, we moved into Figma and created our first static screens for the Effort Dial, Next Step card, and task calibration flow. Once the basic layout felt right, we used Figma Make to turn those screens into interactive prototypes. We worked with Claude to help us write precise prompts for Figma Make (for example, how the Effort Dial should change states, how bottom sheets should animate, and how the calibration flow should progress), and to refine the copy and micro-interactions so the experience stayed clear and gentle at every step.
Challenges we ran into:
- Staying speculative but believable. We had to imagine capabilities that don't exist in consumer products yet (like lightweight EEG + effort inference), while grounding them in today's research on wearable cognitive load detection and neurofeedback.
- Not "mind-controlling" the user. It was tempting to imagine a tool that simply "boosts" you into focus. We had to keep re-centering on agency: LOCK IN should inform and support decisions, not override them.
- Avoiding guilt and over-surveillance. Many productivity tools end up as shame engines. We had to design language, visuals, and safeguards so that sensing more doesn't feel like being watched more.
- Explaining complex biology simply. Turning dopamine, prefrontal-limbic dynamics, and autonomic signals into a story and UI that make sense in a few seconds was a constant design constraint.
Accomplishments we’re proud of:
- Reframing procrastination as a sensing problem, not a willpower problem. We think this lens ("your brain is doing invisible cost-benefit math you can't feel") is both compassionate and scientifically grounded.
- Designing a clear, low-friction core loop. The Effort Dial + one Next Step card is simple enough to glance at and act on, yet backed by a rich speculative sensing stack.
- Integrating multiple touchpoints into one coherent experience. We connected headband, watch, phone, and ambient desk light into a unified story that still feels like something you could imagine using day to day.
- Building for individual agency. LOCK IN is explicitly individual-only, local-first, and opt-in, with a "pause coaching" button always available. That ethical stance is built into the UX, not bolted on at the end.
What we learned:
- People relate most to the "frozen cursor" moment. That shared experience became our anchor and made the science feel relevant instead of abstract.
- Biology-aware tools need very simple UIs. The more complex the sensing under the hood, the more the surface must reduce to one or two clear signals.
- Behavior change is about fit, not force. Matching tasks and micro-steps to real-time capacity felt more humane and realistic than trying to "maximize productivity" at all times.
- Speculative design benefits from real research. Reading about cognitive load wearables, dopamine and effort, and BCSS patterns let us push beyond "just another productivity app" into something that genuinely extends human sensing.
What’s next for LOCK IN:
If we were to take LOCK IN further, we'd want to:
- Co-design with neurodivergent users. People with ADHD and anxiety often experience the most painful task initiation struggles; we'd work with them directly to refine what feels supportive vs. overwhelming.
- Explore therapist/coach collaboration modes. While LOCK IN is individual-first, there could be value in optional, privacy-respecting ways to share patterns with clinicians or coaches to support treatment.
- Speculate further on standards and ethics. If tools can sense pre-procrastination states, what norms and safeguards should exist so this power is used only for the user's own goals?
At its core, we see LOCK IN as a starting point for a new class of tools: biology-aware interfaces that give people new senses of their own cognitive state and help them act more kindly and effectively on that information.
Built With
- figma