Inspiration

Mental health is in crisis. Around the world, nearly 1 billion people live with a mental disorder, making it one of the leading causes of disability globally. In the United States alone, 1 in 5 adults (nearly 60 million people) experience mental illness each year. Despite this, only about 50% receive treatment, leaving millions to struggle without the support they need.

The crisis is especially severe among young people. Mental illness now accounts for 45% of the overall disease burden in the 10–24 age group, yet receives only about 2% of global health spending for that group. In the United States, adults aged 18–25 have the highest prevalence of mental illness, at over 36%. Social media pressure, academic and financial stress, and climate anxiety are among the contributing factors.

Suicide is the most tragic indicator of this epidemic: it claimed nearly 50,000 American lives in 2022, and globally it is the fourth leading cause of death among 15–29-year-olds.

Beyond the personal toll, mental illness carries an enormous economic cost. Anxiety and depression combined cost the global economy over $1 trillion annually in lost productivity. High-pressure industries such as finance face escalating rates of burnout, costing employers thousands of dollars per employee each year.

This is not just a health problem; it is a crisis for society. The numbers point to the need for better access to care, earlier intervention, and stronger mental health infrastructure across every sector of society.

What it does

Aura AI is a web application in which users can upload a picture or capture one directly from their webcam. Once a picture is submitted, the backend analyzes it with FER and an in-house machine learning model trained to recognize facial features. The model scores the user's emotional state, reports a confidence rating, and labels the perceived state as positive ("good") or negative ("bad"). The goal is to provide instant, accessible feedback on a user's emotional state through visual emotion detection.
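To make the output concrete, here is a minimal sketch of how raw emotion scores could be turned into the response described above. The score dictionary and the positive/negative split are illustrative assumptions, not our exact production mapping.

```python
# Assumed set of "good" emotions; the real mapping may differ.
POSITIVE = {"happy", "surprise", "neutral"}

def summarize(scores: dict[str, float]) -> dict:
    """Pick the top-scoring emotion, report its confidence,
    and tag the overall state as good or bad."""
    emotion, confidence = max(scores.items(), key=lambda kv: kv[1])
    return {
        "emotion": emotion,
        "confidence": round(confidence, 2),
        "state": "good" if emotion in POSITIVE else "bad",
    }

summarize({"happy": 0.82, "sad": 0.10, "angry": 0.08})
# -> {"emotion": "happy", "confidence": 0.82, "state": "good"}
```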

How we built it

Frontend: We built the frontend with HTML, CSS, and JavaScript to create a minimalist, easy-to-use interface. It lets users take a photo with their webcam or upload one from their device.

Backend: We built the backend with FastAPI, a high-performance Python web framework, which handles image submissions and model inference efficiently. For image handling, we used FER to preprocess input images: resizing, normalization, and conversion into formats suitable for the machine learning model.
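The preprocessing steps above can be sketched roughly as follows, using Pillow and NumPy. The 48x48 grayscale target and the batch/channel layout are assumptions for illustration, not necessarily the exact shapes our model expects.

```python
import io

import numpy as np
from PIL import Image

def preprocess(image_bytes: bytes, size: int = 48) -> np.ndarray:
    """Decode an uploaded image, convert to grayscale, resize, and scale
    pixel values to [0, 1]; add batch and channel dimensions for the model."""
    img = Image.open(io.BytesIO(image_bytes)).convert("L").resize((size, size))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr[np.newaxis, :, :, np.newaxis]  # shape: (1, size, size, 1)
```

In the FastAPI route, the bytes of the uploaded file would be passed straight into this function before inference.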

AI/ML Component: We developed our own machine learning model that detects emotional expressions from facial features. It assigns labels such as happy, sad, angry, or surprised, and applies a probability threshold to gauge the confidence of each classification. The model runs in the backend to generate results in real time.
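The probability-threshold step looks roughly like this. The 0.5 cutoff is an assumed placeholder; in practice the value would be tuned on validation data.

```python
THRESHOLD = 0.5  # assumed cutoff, not the tuned production value

def classify(probs: dict[str, float], threshold: float = THRESHOLD) -> tuple[str, float]:
    """Return the most probable emotion label if it clears the threshold,
    otherwise flag the prediction as uncertain."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return (label, p) if p >= threshold else ("uncertain", p)
```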

Challenges we faced

The hardest part was creating and integrating the AI/ML model. Since this was our first foray into machine learning for facial emotion detection, we ran into several issues:

Choosing and formatting the right dataset.

Learning how to fine-tune an emotion-detection model.

Ensuring the model could handle varying lighting, angles, and facial expressions.

Integrating the model into the FastAPI backend within our time constraints.
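For the lighting problem in particular, one simple mitigation is to equalize each image's histogram before inference, so under- and over-exposed photos land in a similar brightness range. This is an illustrative technique, not necessarily what our final pipeline uses.

```python
from PIL import Image, ImageOps

def normalize_lighting(img: Image.Image) -> Image.Image:
    """Convert to grayscale and equalize the histogram to reduce
    the effect of varying lighting conditions."""
    return ImageOps.equalize(img.convert("L"))
```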

Despite these hurdles, we learned a great deal and managed to implement the key features successfully.

What we accomplished that makes us proud

Built our first functional AI/ML model for emotion recognition.

Successfully integrated the frontend webcam/photo input with the backend.

Implemented an end-to-end system with real-time emotion recognition.

Learned how to apply machine learning to real-world mental health technology problems.

What we learned

Machine Learning: How to collect and clean data, train a model, and evaluate performance metrics such as accuracy and confidence.

Backend Development: How to build scalable APIs with FastAPI and work with image files and ML models in a production-like setting.

Frontend-Backend Integration: Connecting JavaScript-based webcam capture to the Python backend for a seamless user experience.

User-Centered Design: The importance of presenting sensitive information, such as mental health predictions, thoughtfully and clearly.

Future directions for Aura AI

OpenAI Integration: We plan to forward the inferred emotional state to OpenAI's GPT models to generate personalized mental health suggestions, affirmations, or recommendations tailored to the user's mood.
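A hedged sketch of that hand-off, under stated assumptions: `build_prompt` is a hypothetical helper, `gpt-4o-mini` is an assumed model choice, and the call follows the standard openai chat API. This is a design sketch, not shipped code.

```python
def build_prompt(emotion: str, confidence: float) -> str:
    """Turn the detected emotion into a short instruction for the model.
    (Hypothetical helper; wording is illustrative.)"""
    return (
        f"A user appears to be feeling {emotion} (confidence {confidence:.0%}). "
        "Offer one brief, supportive, non-clinical suggestion or affirmation."
    )

def suggest(emotion: str, confidence: float) -> str:
    from openai import OpenAI  # requires the openai package and OPENAI_API_KEY

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": build_prompt(emotion, confidence)}],
    )
    return resp.choices[0].message.content
```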

Model Accuracy Improvements: We'll focus on making the emotion recognition model more accurate and robust by training on larger, more diverse datasets, applying data augmentation, and exploring deeper architectures such as CNNs or vision transformers (ViTs). This should help the model generalize across different lighting conditions, ethnicities, and facial expressions.
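The augmentation step could be as simple as random mirroring plus brightness jitter, sketched below with NumPy. The jitter magnitude and the (N, H, W) grayscale layout are illustrative assumptions.

```python
import numpy as np

def augment(batch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly mirror images and jitter their brightness.
    Inputs are grayscale images in [0, 1] with shape (N, H, W)."""
    out = batch.copy()
    flip = rng.random(len(out)) < 0.5
    out[flip] = out[flip, :, ::-1]                       # horizontal mirror
    out += rng.normal(0.0, 0.05, size=(len(out), 1, 1))  # brightness shift
    return np.clip(out, 0.0, 1.0)
```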

More Emotional Categories: Scale the model up to detect a wider and subtler range of emotions, going beyond basic ones like happy or sad to states such as anxiety, frustration, or calmness.

Built With

FastAPI, FER, HTML, CSS, JavaScript, Python
