Inspiration

Although technology has greatly benefited productivity, it is also an undeniable source of distraction, especially for students who need to focus on schoolwork. Yet as education and technology become increasingly intertwined, with assignments and resources posted online, trying to separate the two would be misguided. Our idea preserves the benefits of technology while helping students remain "locked in" and away from distractions.

What it does

The purpose of our app is two-fold:

  1. Using a CNN trained on our own data, the app detects when the user is unfocused and issues a warning. It can also log and graph the ratio of productive to unproductive time across past sessions.
  2. Recognizing that physically touching the laptop or keyboard makes going off-task easier (e.g., messaging or playing a video game), LockedInAI lets the user perform simple operations with hand gestures without leaving their work.
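The warning-and-logging behavior in point 1 can be sketched as follows. This is an illustrative reconstruction, not the app's actual code: the class name, the streak threshold, and the reset-after-warning behavior are assumptions.

```python
from dataclasses import dataclass

UNFOCUSED_STREAK = 3  # assumed: consecutive unfocused detections before a warning


@dataclass
class FocusMonitor:
    """Tracks per-session focus stats and fires warnings on unfocused streaks."""
    streak: int = 0
    focused_frames: int = 0
    unfocused_frames: int = 0
    warnings: int = 0

    def update(self, focused: bool) -> bool:
        """Record one classification; return True if a warning should fire."""
        if focused:
            self.streak = 0
            self.focused_frames += 1
            return False
        self.streak += 1
        self.unfocused_frames += 1
        if self.streak >= UNFOCUSED_STREAK:
            self.streak = 0  # reset so the warning doesn't repeat every frame
            self.warnings += 1
            return True
        return False

    def focus_ratio(self) -> float:
        """Fraction of analyzed frames classified as focused (for graphing)."""
        total = self.focused_frames + self.unfocused_frames
        return self.focused_frames / total if total else 1.0
```

The per-session `focus_ratio` values are what a graphing view would plot across past sessions.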

How we built it

We fine-tuned a pre-trained model (ResNet152V2) in Google Colab notebooks on data we recorded during the hackathon. The binary CNN classifier was trained on individual frames from the training videos and achieved high accuracy. In the app, the model is called every few frames to analyze the user; if the user is classified as unfocused multiple times in a row, the app shows a warning.
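The every-few-frames sampling described above can be sketched like this. `classify_frame` stands in for the trained ResNet152V2 classifier, and the sampling interval is an assumption, not a value from the project:

```python
ANALYZE_EVERY = 10  # assumed: run the CNN on every 10th frame to keep the app responsive


def sample_predictions(frames, classify_frame, every=ANALYZE_EVERY):
    """Classify every `every`-th frame; return (frame_index, is_focused) pairs."""
    return [(i, classify_frame(f)) for i, f in enumerate(frames) if i % every == 0]
```

Skipping frames this way keeps inference cost low while still catching sustained lapses in focus, since the warning only fires after several consecutive unfocused classifications.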

For the gesture controls, we used MediaPipe's hand-landmark detection (with OpenCV handling video capture), which tracks 21 keypoints on the hand. By computing the relationships between different points, we developed three highly accurate gesture controls: left-click, scrolling up, and scrolling down.
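Landmark-based gesture rules of this kind can be sketched as below. The landmark indices follow MediaPipe's hand model (0 = wrist, 4 = thumb tip, 8 = index fingertip), but the specific rules and thresholds here are illustrative assumptions, not the app's actual logic:

```python
import math

PINCH_THRESHOLD = 0.05  # assumed: normalized landmark coordinates lie in [0, 1]


def detect_gesture(landmarks):
    """Map 21 (x, y) hand landmarks to 'click', 'scroll_up', 'scroll_down', or None.

    Note: in image coordinates, y increases downward.
    """
    wrist, thumb_tip, index_tip = landmarks[0], landmarks[4], landmarks[8]
    if math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD:
        return "click"  # thumb touching index fingertip -> left-click
    if index_tip[1] < wrist[1] - 0.3:
        return "scroll_up"  # index fingertip raised well above the wrist
    if index_tip[1] > wrist[1] + 0.1:
        return "scroll_down"  # index fingertip dropped below the wrist
    return None
```

In practice such thresholds would be tuned against real landmark traces, since normalized distances vary with how close the hand is to the camera.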

Challenges we ran into

One challenge we ran into was overfitting when training our CNN. Ultimately, we overcame this issue by recording additional training data under more diverse conditions.
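Alongside recording more diverse footage, a common complementary mitigation is to augment the existing frames. This sketch is an illustration of that general technique, not the pipeline the team used:

```python
import random

import numpy as np


def augment(frame: np.ndarray, rng: random.Random) -> np.ndarray:
    """Randomly flip a frame horizontally and jitter its brightness."""
    out = frame.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    out *= rng.uniform(0.7, 1.3)  # brightness jitter
    return np.clip(out, 0, 255).astype(np.uint8)
```

Each training frame can be passed through `augment` one or more times per epoch, so the classifier sees varied lighting and orientation even from a limited recording set.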

Built With

ResNet152V2 · Google Colab · OpenCV · MediaPipe

