MLTonin

The Idea

We all love hackathons (hence why we're here right now), but sometimes they can be _exhausting_! Almost every hacker knows the pain of readjusting after pulling an all-nighter, and we want to shorten that recovery process. We initially designed MLTonin to help hackers adjust after an all-nighter, but we then realized that sleep deprivation is a MASSIVE problem that MLTonin can help address. As work becomes increasingly digitalized, the percentage of the population experiencing sleep deprivation rises with it. In fact, sleep deprivation costs the Canadian economy over 20 billion dollars annually in lost productivity. With MLTonin, users work on their computer as usual while we monitor their sleep-deprivation level and adjust their computer to help mitigate the effects of technology on their sleep.

How we built it

We built MLTonin in two main parts: the web interface and the background machine learning processes. The machine learning is done through an ensemble network composed of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). We first use a CNN together with OpenCV to detect a user's facial contours and produce a predicted drowsiness value at any given point in a session. We then pipe that information into an RNN to extrapolate user behavior for more complex analysis of patterns over time. The data is normalized using the Wolfram API. These values are weighted and compounded into a single value describing the user's current energy level (from 0 to 10). That energy level is sent to our Firebase Realtime Database (which shares the data with our front-end UI), as well as to a Windows API integration (which automatically adjusts the screen brightness of the user's workspace).
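The weighting-and-compounding step can be sketched roughly as follows. The specific weights, the clamping, and the linear inversion from drowsiness to energy are illustrative assumptions, not MLTonin's exact formula:

```python
# Sketch: compound the per-frame CNN drowsiness score and the RNN's
# temporal drowsiness prediction into a single 0-10 energy level.
# The 0.6/0.4 weights are assumed for illustration.

def energy_level(cnn_drowsiness: float, rnn_drowsiness: float,
                 cnn_weight: float = 0.6, rnn_weight: float = 0.4) -> float:
    """Combine two drowsiness scores (each in [0, 1]) into an energy
    level on a 0-10 scale, where 10 means fully alert."""
    drowsiness = cnn_weight * cnn_drowsiness + rnn_weight * rnn_drowsiness
    drowsiness = min(max(drowsiness, 0.0), 1.0)  # clamp to [0, 1]
    return round((1.0 - drowsiness) * 10.0, 1)   # more drowsy -> lower energy

# A mildly drowsy reading maps to a fairly high energy level.
print(energy_level(0.2, 0.3))
```

In the real pipeline, a value like this is what gets pushed to Firebase and to the brightness controller.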

The web interface also has an additional CNN, built using tensorflow.js and face-api.js. This model provides expression analysis, which is used for outlier detection against the values from Firebase. The web interface displays an "Aggregated Sleep Metric" (ASM), which is calculated by weighting the various inputs (the energy level obtained from Firebase and the expression analysis values from tensorflow.js) and running them through a consensus algorithm. The consensus algorithm checks relative differences and returns the most appropriate energy value for the given inputs.
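A minimal sketch of the consensus idea, written here in Python for brevity (the real check runs in the web interface). The 25% relative-difference threshold and the averaging fallback are assumptions for illustration:

```python
# Sketch: reconcile the ensemble's energy level (from Firebase) with the
# value derived from the tensorflow.js expression analysis.

def consensus_energy(firebase_energy: float, expression_energy: float,
                     threshold: float = 0.25) -> float:
    """Return a single energy value, preferring agreement between sources."""
    baseline = max(firebase_energy, 1e-6)  # avoid division by zero
    relative_diff = abs(firebase_energy - expression_energy) / baseline
    if relative_diff <= threshold:
        # Sources agree: blend them into one metric.
        return (firebase_energy + expression_energy) / 2.0
    # Sources disagree: treat the expression value as an outlier and
    # fall back to the ensemble's energy level.
    return firebase_energy
```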

Challenges we ran into

One of the biggest difficulties we encountered was that our models were overfitting due to the limited size of our datasets. We solved this by using different metrics (e.g., image pixel data, image contour patterns, etc.), thereby diversifying our training data.
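The diversification amounts to giving each training sample more than one kind of feature. A rough sketch, with illustrative shapes and statistics (not our exact feature set):

```python
# Sketch: pair raw pixel intensities with a structurally different
# signal (coarse contour statistics) so the model can't latch onto
# pixel-level noise alone.
import numpy as np

def build_feature_vector(pixels: np.ndarray,
                         contour_points: np.ndarray) -> np.ndarray:
    """Concatenate normalized pixel data with simple contour statistics.

    pixels: 2-D grayscale image patch.
    contour_points: (N, 2) array of (x, y) contour coordinates.
    """
    pixel_features = pixels.astype(np.float32).ravel() / 255.0
    # Summarize the contour as means and spreads rather than raw points.
    contour_features = np.array([
        contour_points[:, 0].mean(), contour_points[:, 1].mean(),
        contour_points[:, 0].std(),  contour_points[:, 1].std(),
    ], dtype=np.float32)
    return np.concatenate([pixel_features, contour_features])
```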

Another challenge we ran into was getting the Windows API integration working. We encountered bugs interfacing our high-level Python scripts with the low-level system APIs. We solved the problem by finding third-party libraries that abstract away the low-level work, and using them instead of accessing the system APIs directly.
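As a sketch of this approach: one such wrapper is the `screen_brightness_control` package on PyPI (our naming it here is an assumption; any library that hides the WMI/DDC plumbing works the same way). The linear mapping and the 40% readability floor below are also illustrative, not MLTonin's exact curve:

```python
# Sketch: map a 0-10 energy level to a screen brightness percentage,
# then hand it off to a third-party wrapper instead of raw system APIs.

def brightness_for_energy(energy: float, floor: int = 40) -> int:
    """Lower energy -> dimmer screen, but never below a readable floor."""
    energy = min(max(energy, 0.0), 10.0)
    return int(floor + (100 - floor) * energy / 10.0)

def apply_brightness(energy: float) -> None:
    try:
        # Hypothetical third-party wrapper over the display APIs;
        # the library MLTonin actually used may differ.
        import screen_brightness_control as sbc
        sbc.set_brightness(brightness_for_energy(energy))
    except ImportError:
        pass  # wrapper not installed; skip the hardware call
```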

What we learned

We learned so much. In terms of machine learning, we learned techniques in diversifying our inputs as well as how to pipeline different models together. On the web interface, we learned how to design a dashboard (no template was used!) and how to overlay canvases for tensorflow.js.

Plans for the future...

  • Improve RNN and CNN accuracy even more.
  • Potentially add a KNN (k-nearest-neighbors) model to improve the consensus algorithm
  • Add more features to the web interface (e.g., more graphs, text tips on getting good sleep, etc.)
  • Update gamma and color temperature, in addition to brightness, through the Windows API
  • Expand to other operating systems (macOS, Linux)
