Inspiration
The Apple Vision Pro is just the beginning of the world getting used to products like BCIs. As BCIs get better and better, there are bound to be more BCI-based video games, since BCIs introduce a new dimension through which players can interact. Currently, most BCI games are based on SSVEP (steady-state visually evoked potentials), which requires a flashing object within the game, often a less-than-pleasant block to look at.
What it does
We aimed to create a game that does not require SSVEP. Our project uses a CNN-Transformer model to classify when the player has looked at a particular image for a certain amount of time. So instead of syncing a flashing block's frequency with the player's EEG signals, all we need is for the player to stare at their choice for a few seconds. This approach should give future video games more flexibility, since VR game developers won't need to use flashing blocks for BCI-based choices.
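The dwell-based decision described above can be sketched as windowing the EEG recorded while the player stares, classifying each window, and taking a majority vote. This is a minimal illustration, not our actual pipeline: the window length, 50% overlap, and the placeholder model standing in for the trained CNN-Transformer are all assumptions for the sketch.

```python
import numpy as np

FS = 250          # Unicorn headsets sample EEG at 250 Hz
N_CHANNELS = 8    # the Unicorn records 8 EEG channels
WIN = FS          # 1-second classification windows (assumed)

def epoch(signal, win=WIN, step=WIN // 2):
    """Slice a (channels, samples) recording into 50%-overlapping windows."""
    windows = []
    for start in range(0, signal.shape[1] - win + 1, step):
        windows.append(signal[:, start:start + win])
    return np.stack(windows)  # shape: (n_windows, channels, win)

def classify_dwell(signal, model, n_choices):
    """Majority-vote per-window predictions over the whole dwell period."""
    preds = [model(w) for w in epoch(signal)]
    return int(np.bincount(preds, minlength=n_choices).argmax())

# Placeholder model: a real system would run the trained CNN-Transformer here.
dummy_model = lambda w: int(w.mean() > 0)

rng = np.random.default_rng(0)
dwell = rng.standard_normal((N_CHANNELS, 3 * FS))  # 3 s of synthetic "EEG"
choice = classify_dwell(dwell, dummy_model, n_choices=2)
```

Voting over several short windows rather than classifying the whole dwell at once keeps the model input size fixed and makes a single noisy window less likely to flip the final choice.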
How we built it
We used g.tec's Unicorn to record EEG data in real time, both for training and for inference. The data was streamed to PhysioLabXR, where it could be processed as well as integrated into Unity. Unity then sent the necessary information to the Quest 2.
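PhysioLabXR moves data between devices over Lab Streaming Layer (LSL), so a downstream consumer of the Unicorn's EEG stream might look like the sketch below. This is an illustration under assumptions, not our exact code: the stream name "Unicorn" and the helper `collect_window` are hypothetical, and the guarded import lets the sketch be read without `pylsl` installed.

```python
try:
    # Lab Streaming Layer Python bindings: pip install pylsl
    from pylsl import StreamInlet, resolve_byprop
except ImportError:
    StreamInlet = resolve_byprop = None  # sketch remains readable without pylsl

def collect_window(inlet, n_samples):
    """Pull n_samples EEG samples from an LSL-style inlet into a list of rows."""
    rows = []
    while len(rows) < n_samples:
        sample, _timestamp = inlet.pull_sample(timeout=1.0)
        if sample is not None:  # pull_sample returns (None, None) on timeout
            rows.append(sample)
    return rows

# Usage against a live stream (requires the Unicorn/PhysioLabXR outlet running;
# the stream name "Unicorn" is an assumption -- check the actual outlet name):
#   streams = resolve_byprop("name", "Unicorn", timeout=5.0)
#   inlet = StreamInlet(streams[0])
#   one_second = collect_window(inlet, 250)  # 250 Hz sampling rate
```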
Challenges we ran into
Most of our time was spent troubleshooting and learning PhysioLabXR as well as Unity. We only had one consistently working Windows/Linux machine, which made it difficult for multiple people to contribute to certain aspects of the project at once.
Accomplishments that we're proud of
Learning more about and working with the hardware and software necessary for BCI applications, especially in VR/AR.
What's next for ITS
SLEEP. But in all seriousness, we would like to train the model further and integrate everything into a more cohesive product.