Inspiration

We wanted to design assistive devices for stroke patients while using technology that is accessible to students.

What it does

CerebrAlert detects movement-related stroke symptoms in individuals who are at risk. It analyzes the user's movement patterns and surfaces trends that can help identify warning signs.

How we built it

We built the vision side of the project with OpenCV and MediaPipe, mapping landmarks on the body and using them to judge the user's posture and orientation. If a condition was met, such as the right side of the body sitting higher than the left, the program flagged it, played an alert sound, and sent both an SMS text message (via Twilio) and an email (via Yagmail) to an emergency contact. For the hardware, we used the I2C serial protocol to communicate with the accelerometer. We also logged the readings to a CSV file, which we parsed in Python and plotted with Matplotlib for later analysis. Plotting different physical activities helped us understand how to detect symptoms.
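The posture check above can be reduced to a small function over pose landmarks. Here is a minimal sketch, assuming MediaPipe Pose conventions (normalized coordinates where smaller y means higher in the frame, left shoulder at index 11, right shoulder at index 12); the 0.05 threshold is illustrative, not the exact value we used.

```python
# Sketch of the asymmetry check: flag when the right shoulder sits
# noticeably higher than the left. Assumes MediaPipe Pose conventions:
# normalized (x, y) coordinates in [0, 1] with y increasing downward,
# left shoulder at index 11, right shoulder at index 12.
LEFT_SHOULDER = 11
RIGHT_SHOULDER = 12

def right_side_higher(landmarks, threshold=0.05):
    """Return True if the right shoulder is higher than the left by
    more than `threshold` (in normalized image units)."""
    left_y = landmarks[LEFT_SHOULDER][1]
    right_y = landmarks[RIGHT_SHOULDER][1]
    # Smaller y means higher in the image, so "right higher" means
    # right_y is smaller than left_y by more than the threshold.
    return (left_y - right_y) > threshold

# Example: right shoulder at y=0.40, left at y=0.50 -> flagged.
landmarks = {LEFT_SHOULDER: (0.45, 0.50), RIGHT_SHOULDER: (0.55, 0.40)}
print(right_side_higher(landmarks))  # True
```

In the real pipeline this check runs once per frame, and a positive result triggers the alert sound plus the Twilio/Yagmail notifications.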

Challenges we ran into

One challenge was measuring heart rate. We tried tools such as a vibration sensor, but it was designed for digital output. In addition, we lacked the wires and headers needed to interface with certain sensors that would have helped expand the project's goals.

Accomplishments that we're proud of

We are proud of researching and understanding the I2C protocol to communicate with the BMA222 accelerometer. We worked through the bit fields of its registers and initialized the device to enable data transmission, which was a valuable experience.
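As a concrete illustration of the data that comes back over I2C: the BMA222 reports each axis as an 8-bit two's-complement sample. Below is a minimal sketch of converting such a raw byte into g's, assuming the default ±2 g range (so 1 g ≈ 64 LSB); the actual bus read (e.g. with smbus2) is omitted, and the range assumption should be checked against the device configuration.

```python
# Sketch: convert a raw 8-bit two's-complement BMA222 axis sample to g's.
# Assumes the default +/-2 g range, so 128 LSB == 2 g full scale
# (~15.6 mg per LSB). Reading the byte over I2C is omitted here.
FULL_SCALE_G = 2.0   # assumed +/-2 g range
RESOLUTION = 128     # 8-bit signed sample: -128..127

def raw_to_g(raw_byte):
    """Convert an unsigned byte (0..255) read from an axis register
    into acceleration in g, interpreting it as two's complement."""
    value = raw_byte - 256 if raw_byte > 127 else raw_byte
    return value * FULL_SCALE_G / RESOLUTION

print(raw_to_g(0x40))   # 64 LSB  ->  1.0 g
print(raw_to_g(0xC0))   # -64 LSB -> -1.0 g
```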

What we learned

We learned about the libraries available for mapping landmarks and poses on the human body, and we gained experience selecting and troubleshooting different hardware components.

What's next for CerebrAlert

CerebrAlert will eventually integrate the hardware and vision sides of the project, and we plan to fine-tune a model for detecting slurred speech. Using machine learning, the system would consume audio chunks in real time and determine whether speech is slurred. We also hope to refine the hardware so that we can capture heart rate.
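One common way to consume audio in real time, as planned above, is to slice the incoming sample stream into fixed-length windows and hand each window to a classifier. A minimal sketch of that chunking step, assuming 16 kHz mono audio and 1-second windows; the classifier itself is a placeholder for the planned slurred-speech model.

```python
# Sketch: split a stream of audio samples into fixed-size chunks for
# real-time classification. Assumes 16 kHz mono audio and 1-second
# windows; a classifier would be called on each yielded chunk.
SAMPLE_RATE = 16_000   # assumed sample rate (Hz)
CHUNK_SECONDS = 1.0    # assumed window length

def chunk_samples(samples, chunk_size=int(SAMPLE_RATE * CHUNK_SECONDS)):
    """Yield successive full-size chunks; the final partial chunk is
    dropped, since a real-time loop would wait for more samples."""
    for start in range(0, len(samples) - chunk_size + 1, chunk_size):
        yield samples[start:start + chunk_size]

# Example: 2.5 s of silence -> two full 1 s chunks, partial tail dropped.
samples = [0.0] * int(2.5 * SAMPLE_RATE)
chunks = list(chunk_samples(samples))
print(len(chunks))      # 2
print(len(chunks[0]))   # 16000
```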

https://drive.google.com/file/d/1Cq8fJgkBD6KLXYYHfGc9DdreXT1A0Ql0/view?usp=sharing
