At-home, video-recorded EEG (V-AEEG) is a promising advance in the assessment of epilepsy.
Its effectiveness plummets, however, when the video recording misses moments of interest because of patient-side errors.
The gold standard at half the cost: mindEEG helps V-AEEG match the effectiveness of expensive in-hospital assessments by enabling live collaboration between healthcare technicians, patients, and AI.
Designed for speed and reliability, our platform leaves slow, insecure servers behind in favor of fast peer-to-peer video and EEG streaming. Through BrainsAtPlay's API, it works with many EEG devices, and image recognition ensures the patient always stays within the camera's field of view. Because we had no EEG hardware to work with, we currently support only a synthetic EEG.
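To illustrate how an in-frame check might work, here is a minimal sketch built around a "person" detection. The margin rule, function names, and polling interval are our own illustration under the assumption of TensorFlow.js's COCO-SSD model (whose `detect` call returns boxes as `[x, y, width, height]`), not the project's actual code.

```javascript
// Sketch: keep-in-frame check driven by a COCO-SSD "person" detection.
// The bounding-box math is pure; the model wiring below needs a browser
// <video> element and a model download, so it is kept separate.

// Illustrative rule: the person's box must sit inside the frame with at
// least `margin` pixels to spare on every side.
function isWellFramed(bbox, frameWidth, frameHeight, margin = 20) {
  const [x, y, w, h] = bbox; // COCO-SSD boxes are [x, y, width, height]
  return (
    x >= margin &&
    y >= margin &&
    x + w <= frameWidth - margin &&
    y + h <= frameHeight - margin
  );
}

// Hypothetical wiring against a <video> element (browser only):
async function watchPatient(videoEl, onOutOfFrame) {
  const cocoSsd = await import('@tensorflow-models/coco-ssd'); // assumed package name
  const model = await cocoSsd.load();
  setInterval(async () => {
    const detections = await model.detect(videoEl);
    const person = detections.find((d) => d.class === 'person');
    if (!person ||
        !isWellFramed(person.bbox, videoEl.videoWidth, videoEl.videoHeight)) {
      onOutOfFrame(); // e.g. alert the technician
    }
  }, 1000);
}
```

When the check fails, the technician can be notified in real time and coach the patient back into view before an event of interest is missed.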
Our application consists of a backend and a frontend. For the backend, we created an Express server in Node.js that channels users into Socket.io rooms keyed by a unique ID in their URL. This unique URL acts as an invite link: when the doctor sends it to a patient, the patient joins the same room as the doctor. Once both are in the room, we use WebRTC, Socket.io, and Peer.js to coordinate a connection in which each caller continuously streams video to the other.

For the frontend, we use React for the UI, Chart.js to visualize the EEG channels, and a simple sine-wave generator for the synthetic EEG. For the image recognition tracker, we use TensorFlow.js's COCO-SSD object-detection model.
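A minimal sketch of the room-joining backend described above. The route pattern, event names, and helper are our own illustration of the Express + Socket.io + Peer.js pattern, not the project's actual source.

```javascript
// Sketch: channel callers into Socket.io rooms keyed by the room ID in
// their URL. Route and event names are illustrative.

// Pure helper: pull the room ID out of an invite-link path like "/room/abc123".
function roomIdFromPath(path) {
  const match = /^\/room\/([A-Za-z0-9-]+)$/.exec(path);
  return match ? match[1] : null;
}

// Server wiring (needs the `express` and `socket.io` packages installed):
function startServer(port = 3000) {
  const express = require('express');
  const { Server } = require('socket.io');
  const app = express();

  // Serve the call page; the client reads the room ID from location.pathname.
  app.get('/room/:roomId', (req, res) =>
    res.sendFile('index.html', { root: 'public' }));

  const httpServer = app.listen(port);
  const io = new Server(httpServer);

  io.on('connection', (socket) => {
    // The client emits the room ID from its invite link plus its Peer.js ID;
    // everyone else in the room is told to open a WebRTC call to that peer.
    socket.on('join-room', (roomId, peerId) => {
      socket.join(roomId);
      socket.to(roomId).emit('user-connected', peerId);
      socket.on('disconnect', () =>
        socket.to(roomId).emit('user-disconnected', peerId));
    });
  });
  return httpServer;
}
```

The invite link is therefore nothing more than the room URL itself: any client that loads it lands in the same Socket.io room and gets introduced to the peers already there.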
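The synthetic EEG can be as simple as a sampled sine wave. The sketch below shows one way to generate samples suitable for a Chart.js line chart; the sample rate, frequency, and amplitude defaults are our assumptions, not values from the project.

```javascript
// Sketch: synthetic EEG channel as a sampled sine wave.
// Defaults (256 Hz sampling, 10 Hz alpha-band tone, 50 µV amplitude)
// are illustrative placeholders for real device data.
function syntheticEeg({ seconds = 1, sampleRate = 256, freqHz = 10, amplitudeUv = 50 } = {}) {
  const samples = [];
  const n = Math.floor(seconds * sampleRate);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate; // time of sample i in seconds
    samples.push(amplitudeUv * Math.sin(2 * Math.PI * freqHz * t));
  }
  return samples; // one microvolt value per sample
}
```

Feeding each new window of samples into a Chart.js dataset and calling `chart.update()` yields a continuously scrolling trace per EEG channel.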