Inspiration
One in every 250 people suffers from cerebral palsy, which can leave the affected person unable to move their limbs properly and in need of constant care throughout their lifetime. To ease their daily lives, we built this project, 'para-pal'.
The idea drew inspiration from a number of research papers and from a project called Pupil, which used permutations of eye movements to make communication possible.
What it does

"What if Eyes can Speak? Yesss - you heard it right!" Para-pal is a novel idea that tracks patterns in the eye movement of the patient and then converts into actual speech. We use the state-of-the-art iris recognition (dlib) to accurately track the eye movements to figure out the the pattern. Our solution is sustainable and very cheap to build and setup. Uses QR codes to connect the caretaker and the patient's app.
We enable paralyzed patients to navigate across the screen using their eye movements. They can select an action by placing the cursor for more than 3 seconds or alternatively, they can blink three times to select the particular action. A help request is immediately sent to the mobile application of the care taker as a push notification
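The two selection mechanics above (a 3-second dwell, or three quick blinks) can be sketched as a small state tracker. This is an illustrative sketch, not our exact implementation; the `BLINK_WINDOW` value and the class/method names are assumptions made for the example.

```python
import time

DWELL_SECONDS = 3.0   # hold the gaze cursor this long to select
BLINK_COUNT = 3       # or blink this many times instead
BLINK_WINDOW = 1.5    # blinks must land within this window (assumed value)

class SelectionDetector:
    """Fires a selection when the cursor dwells on one target,
    or when the patient blinks three times in quick succession."""

    def __init__(self):
        self.dwell_target = None
        self.dwell_start = 0.0
        self.blink_times = []

    def on_gaze(self, target, now=None):
        """Call once per frame with the target under the cursor.
        Returns the target once the dwell threshold is reached."""
        now = time.monotonic() if now is None else now
        if target != self.dwell_target:
            # Gaze moved to a new target: restart the dwell timer.
            self.dwell_target, self.dwell_start = target, now
            return None
        if target is not None and now - self.dwell_start >= DWELL_SECONDS:
            self.dwell_start = now  # reset so we don't re-fire every frame
            return target
        return None

    def on_blink(self, now=None):
        """Call on each detected blink. Returns True when three
        blinks occur inside the time window."""
        now = time.monotonic() if now is None else now
        self.blink_times = [t for t in self.blink_times
                            if now - t <= BLINK_WINDOW]
        self.blink_times.append(now)
        if len(self.blink_times) >= BLINK_COUNT:
            self.blink_times.clear()
            return True
        return False
```

In the app, `on_gaze` would be driven by the iris tracker's per-frame output and `on_blink` by the blink detector, with either path triggering the push notification to the caretaker.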
How we built it
We've embraced Flutter for the frontend to keep the UI simple, intuitive, modular, and customisable. The image processing and live-feed detection run in a separate child Python process. The iris recognition uses dlib at its core and pipes its output to OpenCV. We've developed a desktop app (cross-platform, and it also runs on a Raspberry Pi 3) for the patient and a mobile app for the caretaker.
We also tried running our desktop application on a Raspberry Pi with an old laptop screen. In the future, we want to build dedicated hardware that is cost-efficient for patients with paralysis.
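To give a flavour of the dlib side of the pipeline: dlib's 68-point face model places the left eye at landmarks 36-41 and the right eye at 42-47, and a common way to detect blinks from those points is the eye aspect ratio (EAR), which collapses toward zero when the eye closes. The sketch below shows that computation; the threshold value is an assumption and would be tuned per camera and patient, and this is an illustration rather than our exact code.

```python
from math import dist

# With dlib's 68-point model, each eye is six (x, y) landmarks ordered
# around its contour: two corners plus two points on each eyelid.

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points for one eye, in dlib's ordering."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)   # eyelid-to-eyelid distances
    horizontal = dist(p1, p4)                # corner-to-corner distance
    return vertical / (2.0 * horizontal)

EAR_BLINK_THRESHOLD = 0.2   # assumed cutoff; tune per camera/patient

def is_blinking(left_eye, right_eye):
    """A frame counts as a blink when the averaged EAR drops below the cutoff."""
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < EAR_BLINK_THRESHOLD
```

In the live pipeline, the six points per eye would come from `dlib.shape_predictor` on each OpenCV camera frame.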


Challenges we ran into
Building dlib took a significant amount of time, because there were no binaries/wheels and we had to compile it from source. Integrating the features that establish connectivity and sessions between the caretaker's mobile app and the desktop app was hard. Fine-tuning some parameters of the ML model, and preprocessing and cleaning the input, were real challenges.
Since we were in a different time zone, staying awake through the 36 hours to finish this project was a challenge of its own!
Accomplishments that we're proud of
- An actual working application in such a short time span.
- Integrating a tablet as additional hardware for better camera accuracy.
- Decoding the input feed with very good accuracy.
- Making a successful submission for HackPrinceton.
- Team work :)
What we learned
- It is often better to use a pre-trained model than to train one yourself, because of the significant difference in accuracy.
- QR scanning is complex, and it is harder to integrate in Flutter than it looks from the outside.
- Rather than over-engineering a Flutter component, check whether a library already exists that does exactly what is needed.
What's next for Para Pal - What if your eyes can speak?
- Easier prefix-free code patterns for the patient, using an algorithm like Huffman coding.
- More advanced controls using an ML model that tracks and learns the patient's regular inputs to the app.
- Better analytics for the caretaker.
- More UI color themes.
Built With
- computer-vision
- deep-learning
- dlib
- flutter
- iris-segmentation
- iris-tracking
- numpy
- open-cv
- pi-cam
- python
- raspberry-pi
- rest-api
- sqlite
- stackoverflow