Mind Reader uses live EEG recordings to predict which object a user is imagining. We first built a GUI tool with PyQt and BrainFlow to collect training data: in total, we gathered 500 two-second samples in which a user stares at and thinks about a provided image while their EEG is recorded with a four-channel OpenBCI Ganglion board.

After detrending and band-pass filtering the signals, we apply a wavelet transform to decompose the EEG waveforms into a frequency-amplitude representation. These features feed into a transformer classifier, built with PyTorch and CLIP, that predicts which object the user is imagining and/or looking at. With this pipeline we achieve 21% accuracy on a 10-class problem, 2.1x better than random guessing.

In the future, we hope our GUI can be used and adapted by others to collect their own training data and train their own models. We also expect better results with more EEG channels and a larger number of training samples.
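The detrend → band-pass → wavelet step can be sketched roughly as below. This is an illustrative sketch, not our exact code: the 1-50 Hz band, the 5-cycle Morlet wavelet (implemented by hand here rather than with a wavelet library), and the normalisation are assumptions; the 200 Hz sampling rate is the Ganglion's native rate, and the synthetic 4-channel recording stands in for a real 2-second capture.

```python
import numpy as np
from scipy.signal import detrend, butter, filtfilt

FS = 200          # OpenBCI Ganglion samples at 200 Hz
N_CHANNELS = 4    # Ganglion has 4 EEG channels
WINDOW_S = 2      # each training sample is a 2-second window

def preprocess(eeg):
    """Detrend and band-pass filter raw EEG (channels x samples)."""
    eeg = detrend(eeg, axis=-1)
    # 1-50 Hz pass band is an assumed choice for illustration
    b, a = butter(4, [1.0, 50.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

def morlet_amplitude(signal, freqs, fs=FS, n_cycles=5.0):
    """Amplitude of a complex Morlet wavelet transform at each frequency."""
    t = np.arange(-1.0, 1.0, 1.0 / fs)
    out = np.empty((len(freqs), signal.shape[-1]))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)          # envelope width in seconds
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(sigma)                   # rough per-frequency normalisation
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Synthetic 2 s, 4-channel recording: white noise plus a 10 Hz (alpha) tone.
rng = np.random.default_rng(0)
t = np.arange(WINDOW_S * FS) / FS
raw = rng.normal(size=(N_CHANNELS, WINDOW_S * FS)) + np.sin(2 * np.pi * 10 * t)

clean = preprocess(raw)
freqs = np.arange(2, 40, 2.0)                       # 2-38 Hz in 2 Hz steps
features = np.stack([morlet_amplitude(ch, freqs) for ch in clean])
print(features.shape)  # (4, 19, 400): channel x frequency x time
```

The resulting channel x frequency x time tensor is the kind of frequency-amplitude representation that can then be flattened or patched into tokens for the transformer classifier.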
