Inspiration
Many of the study methods found online don't fit every person on a day-to-day basis. We wanted a solution that always delivers a good study session, no matter how the person is feeling, whether they feel productive or not.
What it does
Sensora takes your mood and translates it into a study session tailored to each specific user. The user then rates the study session and takes a break, after which Sensora prompts the user with a new study session, so no matter how the user is feeling, there is always a plan ready.
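The rate-break-replan loop above could be sketched roughly as follows. This is a minimal illustration, not Sensora's actual implementation; the session names, thresholds, and the rating nudge are all assumptions.

```python
# Hypothetical sketch of Sensora's session loop: pick the next session
# from a 0-1 mood score, nudged by the user's 1-5 rating of the last one.
SESSIONS = {
    "deep_focus":   {"work_min": 50, "break_min": 10},
    "standard":     {"work_min": 25, "break_min": 5},
    "light_review": {"work_min": 15, "break_min": 5},
}

def next_session(mood_score, last_rating=None):
    """Return the name of the next session plan (names are illustrative)."""
    if last_rating is not None:
        # A low rating nudges toward a lighter plan, a high one toward deeper focus.
        mood_score += (last_rating - 3) * 0.1
    if mood_score >= 0.7:
        return "deep_focus"
    if mood_score >= 0.4:
        return "standard"
    return "light_review"
```

The key design point is that the user's rating feeds back into the very next plan, so the schedule adapts within a single sitting rather than across days.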
How we built it
Using hardware that includes a camera module, we track the user's facial movements in real time. We use MediaPipe in conjunction with the OpenCV library for eye tracking and head-tilt detection, then pair those detections with OpenAI's CLIP model to connect them to emotions. This data is sent to our computer for logging history, and based on weights that we decided on, we tailor a new study session for the user, which is sent back to a microcontroller with a text screen that tells the user about their next study session.
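The weighting step can be sketched as a simple weighted sum over per-frame detection ratios. The feature names and weight values below are illustrative assumptions, not the ones Sensora actually uses, and the MediaPipe/CLIP detection pipeline that would produce these ratios is omitted.

```python
# Illustrative weights over per-frame detection ratios (each in [0, 1]).
WEIGHTS = {
    "eyes_open_ratio": 0.5,    # fraction of frames with eyes open
    "head_level_ratio": 0.3,   # fraction of frames without a strong head tilt
    "calm_emotion_ratio": 0.2, # fraction of frames CLIP labeled calm/neutral
}

def focus_score(features):
    """Collapse the session's detection ratios into one 0-1 focus score."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
```

A score like this is what would then drive the choice of the next study session and be sent back to the microcontroller's text screen.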
Challenges we ran into
One challenge we ran into was integrating an I2C camera into our wireless embedded architecture instead of the Logitech camera. We found midway through the event that our camera wasn't supported by the ESP32's native camera drivers. This forced us to reconsider our design requirements and how to complete our minimum viable product within the competition's allotted time, which led us to use the Logitech camera instead.
Accomplishments that we're proud of
One accomplishment we're proud of is finally integrating the front and back end, so that when the camera module tracks the user's facial movements, that data is sent to the front end and displayed as a live log of how focused the user is. Another great accomplishment was getting the camera module to track the user's facial movements using Python libraries and OpenCV, as this was essential to the success of our project.
What we learned
We learned how to make the back end and the ESP32s communicate over WiFi, as well as how to build a working embedded system by programming the microcontrollers in the Arduino IDE with libraries that control the hardware modules. This way, the ESP32s don't need a wired connection to send and receive signals from one another.
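On the backend side, receiving readings from an ESP32 over WiFi can be as simple as listening on a socket for small JSON datagrams. The sketch below is a hypothetical illustration of that pattern: the port, message shape, and field names are assumptions, and the ESP32 sender (which would be Arduino C++) is simulated here with a second local socket.

```python
import json
import socket
import threading

# Backend: listen on a UDP socket for one JSON reading from the "ESP32".
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
port = server.getsockname()[1]

result = {}
def listen_once():
    data, _addr = server.recvfrom(1024)  # block until one datagram arrives
    result.update(json.loads(data.decode()))

t = threading.Thread(target=listen_once)
t.start()

# Simulated ESP32 message: a focus reading for the live log.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(json.dumps({"focus": 0.82}).encode(), ("127.0.0.1", port))
t.join()
client.close()
server.close()
```

UDP keeps the microcontroller side lightweight; a real deployment would likely add retries or switch to TCP/HTTP if lost readings matter.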
What's next for Sensora
As for our app, we would like to add a 4-digit display to show users how long they've been studying, or to count down until their goal study time is reached. Additionally, making the camera system more portable and fully wireless would be a great feature, as it allows more flexibility and better positioning so that the user's facial movements can be tracked efficiently.
