Inspiration

Despite their debilitating consequences, sleep apnea and other chronic sleep disorders frequently go undiagnosed because traditional sleep studies are inconvenient and expensive. Nea addresses this problem by empowering individuals to monitor their sleep health from the comfort of their own beds, using devices they already own.

What it does

Nea uses deep learning to pinpoint potential sleep apneic episodes by analyzing sleep audio and video for snoring patterns and breathing abnormalities. The data (total audio, snoring events, obstructed-breathing events, body movement) is visualized in Nea for the user alongside a calculated AHI (Apnea/Hypopnea Index), which indicates the severity of potential sleep apnea. Nea also offers detailed insights, trends, and a professional PDF report for collaboration with healthcare providers, giving users valuable information before they commit to an overnight lab study.
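For context, the AHI is a standard clinical measure: the total number of apnea and hypopnea events divided by hours of sleep, with widely used severity bands (< 5 normal, 5–15 mild, 15–30 moderate, ≥ 30 severe). A minimal sketch of how such a score could be derived (function names are our own illustration, not Nea's actual code):

```python
def ahi_score(apnea_events: int, hypopnea_events: int, sleep_hours: float) -> float:
    """Apnea/Hypopnea Index: events per hour of sleep."""
    if sleep_hours <= 0:
        raise ValueError("sleep_hours must be positive")
    return (apnea_events + hypopnea_events) / sleep_hours

def ahi_severity(ahi: float) -> str:
    """Standard clinical severity bands for the AHI."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Example: 24 apneas and 18 hypopneas detected over a 7-hour recording
score = ahi_score(24, 18, 7.0)               # 6.0 events/hour
print(round(score, 1), ahi_severity(score))  # 6.0 mild
```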

How we built it

We built Nea using a modern web stack:

  • Frontend:
    • React
    • TypeScript
    • Vite
    • Tailwind CSS
    • Radix UI
    • shadcn/ui
  • Backend:
    • FastAPI
    • Railway VPS
    • PostgreSQL
    • SQLModel
  • Deep Learning:
    • scikit-learn
    • PyTorch
    • OpenCV
    • YOLO
    • PyEDFlib

Challenges we ran into

  • Converting video/audio files to normalized computer-readable formats
  • Accounting for ambient noise in the patient's environment
  • Converting RML annotation files into text and a data frame for model training
  • Implementing efficient data sampling and normalization for large audio datasets
  • Ensuring accurate detection of sleep events like snoring and breath obstructions
  • Handling file uploads and managing user account records efficiently
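To illustrate the normalization and sampling challenges above: a common first step (a NumPy sketch under our own assumptions, not Nea's exact pipeline) is to peak-normalize each recording so microphone and room differences don't dominate the model's input, then slice it into fixed-length frames for batching:

```python
import numpy as np

def peak_normalize(audio: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Scale a waveform so its maximum absolute amplitude is ~1.0."""
    peak = np.max(np.abs(audio))
    return audio / (peak + eps)

def frame_audio(audio: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Split a 1-D waveform into overlapping frames for batched model input."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    return np.stack([audio[i * hop : i * hop + frame_len] for i in range(n_frames)])

# Example: a synthetic 1-second clip at 48 kHz
rng = np.random.default_rng(0)
clip = peak_normalize(rng.normal(scale=0.1, size=48_000))
frames = frame_audio(clip, frame_len=2_048, hop=1_024)
print(frames.shape)  # (45, 2048)
```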

Accomplishments that we're proud of

  • Successfully detecting and visualizing possible sleep apnea episodes with 93% accuracy using advanced deep learning
  • Built and fine-tuned a CNN-GRU model using supervised training with an 80/20 train-test split
  • Developed noise-handling capabilities by augmenting training data and adjusting hyperparameters
  • Surpassed our original audio-analysis goal by also analyzing video, using YOLO, OpenCV, and pixel analysis to draw bounding boxes on visual sleep data
  • Created interactive charts that turn complex sleep data into digestible visualizations for users
  • Solved complex technical challenges like converting EDF audio files into readable formats and managing RML events
  • Implemented 48 kHz audio sampling while developing techniques to prevent gradient explosion in our model
  • Generated sleep reports in PDF format that users can share with healthcare providers
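The CNN-GRU architecture and gradient-explosion safeguards mentioned above follow a common PyTorch pattern: a convolutional front-end compresses the waveform, a GRU models it over time, and gradient norms are clipped before each optimizer step. A minimal sketch (layer sizes and class labels are illustrative assumptions, not Nea's actual model):

```python
import torch
import torch.nn as nn

class CnnGru(nn.Module):
    """1-D CNN front-end feeding a GRU, for per-clip sleep-event classification."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.conv(x.unsqueeze(1))         # (batch, 32, time)
        out, _ = self.gru(feats.transpose(1, 2))  # (batch, time, 64)
        return self.head(out[:, -1])              # last timestep -> class logits

model = CnnGru()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 2048)       # a batch of normalized audio frames
y = torch.randint(0, 3, (8,))  # e.g. snore / obstruction / normal labels

loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
# Clip gradient norms to curb gradient explosion in the recurrent layer
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```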

What we learned

  • Backend is hard
  • Reading documentation is hard
  • File uploads are hard
  • How to open and use EDF/RML files
  • How to use PyTorch
  • How to do data analysis with Pandas
  • How to convert video/audio files to CSV and normalize the results
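That last lesson, turning raw audio into a normalized CSV, could look roughly like this (a pandas/NumPy sketch; the feature columns and function name are our own assumptions, not Nea's pipeline):

```python
import numpy as np
import pandas as pd

def audio_to_features(audio: np.ndarray, sample_rate: int, window_s: float = 1.0) -> pd.DataFrame:
    """Summarize a waveform into per-window features, min-max normalized to [0, 1]."""
    win = int(sample_rate * window_s)
    n = len(audio) // win
    windows = audio[: n * win].reshape(n, win)
    df = pd.DataFrame({
        "t_start_s": np.arange(n) * window_s,
        "rms": np.sqrt((windows ** 2).mean(axis=1)),  # loudness proxy
        "peak": np.abs(windows).max(axis=1),
    })
    for col in ("rms", "peak"):  # min-max normalize each feature column
        lo, hi = df[col].min(), df[col].max()
        df[col] = (df[col] - lo) / (hi - lo + 1e-12)
    return df

# Example: 10 seconds of synthetic audio at 48 kHz, written out as CSV
rng = np.random.default_rng(1)
df = audio_to_features(rng.normal(size=48_000 * 10), sample_rate=48_000)
df.to_csv("features.csv", index=False)
print(df.shape)  # (10, 3)
```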

What's next for Nea

  • Camera features (allow users to record full-length videos and improve the model to account for sleep movements)
  • Expanding insights to include more sleep health metrics
  • Partnering with healthcare providers to offer a seamless path to professional diagnosis and treatment
  • Developing a mobile app for easier recording and analysis
