Inspiration

Helping individuals who cannot communicate verbally gain the opportunity to interact with others effectively.

What it does

Our project uses flashing visual cues on an alphabet board to elicit an EEG response that can be processed and used to predict the user's intended input.

How we built it

With PhysioLabsXR we connected a g.tec Unicorn Hybrid Black brain-computer interface to a virtual Unity environment. There, we built a virtual spelling-board world in which the user could test and train the predictive model in a colorful, relaxed setting, soothing the mind and improving brain signal quality.

On the spelling board, rows and columns flash in a random sequential order. By supplying a constant stream of event markers to PhysioLabsXR, we can temporally match each flash event to the corresponding window of EEG data, called an epoch, in which a P300 response may appear. The user simply focuses on a letter of interest within the spelling board, and as that letter flashes or remains inert, the corresponding EEG data is collected. After collecting such training data across many randomized flash sequences, we trained a neural network with a mixture of convolutional and dense layers to predict the presence of a P300 response in each epoch. The per-epoch predictions are then accumulated into a probability matrix over the spelling-board grid, in which the highest-probability cell corresponds to the predicted letter.
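The epoch-extraction and letter-decoding steps above can be sketched roughly as follows. This is a minimal numpy sketch, not our actual pipeline: the sampling rate, epoch length, board size, and the `score_epoch` callback are illustrative assumptions (in the real system the data and markers arrive via time-aligned LSL streams, and `score_epoch` would be the trained convolutional network).

```python
import numpy as np

FS = 250                # assumed EEG sampling rate, in Hz
EPOCH_S = 0.8           # assumed epoch length after each flash, in seconds
N_ROWS, N_COLS = 6, 6   # assumed 6x6 spelling-board grid

def extract_epoch(eeg, eeg_t0, flash_time):
    """Slice out the EEG window (epoch) that follows one flash event.

    eeg: array of shape (n_channels, n_samples)
    eeg_t0: timestamp of the first EEG sample
    flash_time: timestamp of the flash event marker
    """
    start = int(round((flash_time - eeg_t0) * FS))
    return eeg[:, start:start + int(EPOCH_S * FS)]

def decode_letter(eeg, eeg_t0, events, score_epoch):
    """Accumulate per-epoch P300 scores into a row x column matrix.

    events: list of (flash_time, 'row' | 'col', index) tuples
    score_epoch: callable mapping an epoch to a P300 probability
                 (a stand-in here for the trained CNN)
    Returns the (row, col) of the highest-probability cell.
    """
    row_scores = np.zeros(N_ROWS)
    col_scores = np.zeros(N_COLS)
    for flash_time, kind, idx in events:
        p = score_epoch(extract_epoch(eeg, eeg_t0, flash_time))
        if kind == 'row':
            row_scores[idx] += p
        else:
            col_scores[idx] += p
    # Joint score for each cell: its row score times its column score.
    grid = np.outer(row_scores, col_scores)
    r, c = np.unravel_index(np.argmax(grid), grid.shape)
    return int(r), int(c)
```

Because each flash illuminates a whole row or a whole column, one pass of twelve flashes is enough to score every cell of the grid; averaging scores over several passes further suppresses noise in the single-trial P300 estimates.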

Challenges we ran into

We initially had many ideas for improving the accuracy and efficiency of our P300 speller, including optimized grouping of alphabet characters on the spelling board, improved ordering and timing of flashes, and more complex models for letter prediction. However, we ran into considerable early trouble setting up the brain-computer interface, specifically creating and consuming the right LSL streams and finding the right filters to reasonably interpret the data. Once the initial code was functional, we encountered multiple small but persistent problems, like an array's shape suddenly being wrong, or software crashes. A particularly frustrating problem was being unable to import our pre-trained CNN model into the relevant Python script, which cost us valuable time adjusting the model architecture to fit the existing pipeline.

Accomplishments that we're proud of

We are proud of capturing a very high-quality EEG signal, clean enough to recognize user actions such as blinking or closing the eyes to relax. We are also proud of our Unity environment, which provides a calming, peaceful setting for collecting such high-quality data. Finally, we are proud of our CNN model, which, although we were not able to fully integrate it into the live pipeline, achieved over 80% accuracy in classifying relevant epochs.

What we learned

We learned a great deal about BCIs and EEG signal processing. This was the first experience in the field for every member of our team, so it was exciting to develop with and use the BCI and VR equipment.

What's next for Triple A's

Whether or not this is Triple A's last experience doing active research and development in BCIs and VR, we will certainly be more aware of, and appreciative of, the rapid technological achievements happening in the field. Be it in the next ten years or the next hundred, it is clear that the next leap in human-computer interaction is inevitable, and we will be eagerly awaiting it.
