A Frenchman, a Korean, and an Indian walk into Carleton:
Introduction
Hi there, we are team LeBrain James!
The name is an ode to our shared passion for sports, gaming, and neuroscience. It is how we (Aymeric, Nikhil, Seokhee, and Srikanth) became friends and ideated on fun ideas. Aymeric and Seokhee met through a shared neuroscience class at Columbia. Since it was their first introduction to the domain, their friendship blossomed through a fair share of coursework trauma bonding! On the other side of the planet, Nikhil and Srikanth met at the first Columbia India social event before their programs. Both were fascinated with NeuroTech, but had extremely opposing views on Elon Musk/Neuralink's vision. The discussions got so heated that they ended up becoming best friends. In the BME department, Nikhil, Seokhee, and Aymeric eventually became the default team for group projects. Around the same time, Srikanth and Seokhee worked on similar neuroscience research and connected at the SfN conference. Soon after, the four of them coincidentally ran into each other during a Columbia basketball game, and their worlds collided! It also turns out that Srikanth was in that first trauma-bonding neuroscience course where Aymeric and Seokhee first met!
Inspiration
Through the Neuromancer track, we wanted to dive deep into neurotechnologies and application interfaces, and learn end-to-end pipelining with g.tec, Cyton, Unicorn Hybrid Black, Unity, OpenBCI, and PhysioLabXR, alongside Python/MATLAB data preprocessing, feature extraction, analytics, and deep learning techniques.
We were excited about recreating a traditional keyboard-controlled first-person shooter with moving targets, but with thought commands and control through Brain-Computer Interfaces (BCIs).
"When we heard of Neureality and the BCI game track, we just knew we had to team up and block this weekend to hack together!"
What it does
We modify a traditional keyboard-controlled first-person shooter so that its actions and gameplay are driven by brain and muscle signals, using EMG and EEG data.
We classify EMG recordings into three outputs based on the distinct patterns generated by different movements: spreading the fingers outward, bending the wrist inward, and clenching the fist.
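As a rough sketch of how a three-class EMG decoder like this can work (this is an illustrative reconstruction, not our exact training code; the window size, channel count, and `train_classifier` helper are assumptions), one can extract classic time-domain features per channel and feed them to a simple classifier:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    """Time-domain features from one EMG window of shape (n_samples, n_channels):
    root-mean-square, mean absolute value, and zero-crossing count per channel."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([rms, mav, zc])

def train_classifier(windows, labels):
    """Fit a linear classifier on labelled EMG windows.
    Labels 0/1/2 correspond to finger spread, wrist bend, and fist clench."""
    X = np.array([emg_features(w) for w in windows])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf
```

At runtime, each incoming window would be featurized the same way and passed to `clf.predict` to produce one of the three gesture codes.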
We also use the 8-channel EEG data to implement two classic BCI paradigms: Motor Imagery (imagined movement) and Steady-State Visual Evoked Potentials (SSVEP).
How we built it
The project broadly operates on the following pipeline:

- An LSL stream handles communication between PhysioLabXR and Unity.
- Our EEG setup maintains an 8-channel, 10-second buffer.
- SSVEP classification focuses on the latest 3 seconds from channels 5, 6, 7, and 8, which represent the visual cortex. SSVEP predictions are determined using Canonical Correlation Analysis and a Recurrent Neural Network, resulting in values [0, 1, 2, 3, 4].
- For Motor Imagery, predictions are determined using a time-series Recurrent Neural Network, which takes the whole buffer as input and outputs values [0, 1, 2].
- These predictions trigger specific actions in Unity, like switching weapons, destroying bots, and deploying special attacks, for an integrated gaming experience.
All relevant scripts of code have been uploaded to our GitHub repository and the Technical Doc linked below, along with the process for setup/reproducibility.
Challenges we ran into
Most of the roadblocks we faced came down to two issues:
1) Hardware Limitations:
The Oculus software is only supported on Windows, and despite sourcing a Windows laptop, we couldn’t complete the installation as it did not have the required graphics card.
We could not get PhysioLabXR and Unity to communicate despite having the pipeline all set up, because our Windows system (8 GB RAM) crashed when both applications ran simultaneously. Using Boot Camp did not help either, due to storage constraints.
2) Data Acquisition and Model Training: Designing a robust yet effective data collection protocol for these tasks was new to us, so building a model on top of this data took longer than expected, and the model did not perform as accurately as we had hoped. It would also have been beneficial to perform within-subject cross-session analysis and to develop both within-subject and cross-subject models, which would have enhanced the accuracy and stability of the system.
Accomplishments that we're proud of
Despite a lot of hardware compatibility issues, we are proud that we did not let them stop us from progressing. We were able to quickly agree on the kind of data we needed to collect, and work in parallel on building the individual functional chunks of the project. We achieved good results on motor imagery decoding, tried a variety of methods for SSVEP decoding (learning extensively about frequency-domain features along the way), and built a pipeline that communicates the results of real-time signal processing and decoding between PhysioLabXR and Unity. With access to sophisticated modern technologies, we got the opportunity to take a deeper dive into how Virtual Reality applications work and how various technologies can be integrated with them.
What we learned
This hackathon gave us great insight into how important it is to understand communication protocols and, more importantly, how crucial both hardware and software compatibility are to accomplishing engineering tasks. We brushed up on our signal processing and deep learning skills, and feel much more comfortable in this domain than we did before the hackathon.
What's next for LeBrain
With the data collected, we aim to regroup and experiment with different signal pre-processing techniques and deep learning models to make our models perform better. Getting first-hand exposure to editing games from the back end has inspired us to dabble with VR game development and neuro-tech integration on a more relaxed timeline. This is a great stepping-stone for us to venture into similar industries as we now have experience with cutting-edge technology that we would otherwise not have had access to.