A crude prototype for real-time manipulation of an infamous MIDI song file using features extracted from a brain signal, with a potential application to guided meditation.
The general pipeline consists of: (1) reading a stream of data from the MUSE 2 headset, (2) preprocessing the signals and extracting features, (3) reading in a segment of a MIDI file and applying the features to modify it, (4) outputting the modified MIDI as audio, and (5) controlling all operations from a GUI front end. Everything runs in real time. More specifically:
(1) muselsl was used to create the initial LSL stream, and pylsl was used to read the stream from the headset. We used a buffer size of 5 seconds, updated every 0.25 seconds; this ran in a separate process.
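The sliding-buffer update can be sketched as follows. Since a live headset is needed for a real pylsl stream, this sketch feeds random samples into the buffer; in the actual pipeline each chunk would come from `inlet.pull_chunk`. The 256 Hz sample rate and 4-channel shape are assumptions about the Muse 2 stream.

```python
import numpy as np

FS = 256            # assumed Muse 2 EEG sample rate (Hz)
BUF_SECONDS = 5     # sliding window length
HOP_SECONDS = 0.25  # buffer update interval

buf = np.zeros((BUF_SECONDS * FS, 4))  # 5 s of samples x 4 EEG channels

def push_chunk(buf, chunk):
    """Append new samples and drop the oldest so the buffer stays 5 s long."""
    n = len(chunk)
    buf = np.roll(buf, -n, axis=0)
    buf[-n:] = chunk
    return buf

# In the real pipeline the chunk comes from pylsl, roughly:
#   samples, timestamps = inlet.pull_chunk(timeout=0.0,
#                                          max_samples=int(HOP_SECONDS * FS))
# Here random data stands in for one 0.25 s chunk.
chunk = np.random.randn(int(HOP_SECONDS * FS), 4)
buf = push_chunk(buf, chunk)
```

Running this update every 0.25 s in its own process keeps the GUI and audio threads from blocking on the headset.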
(2) A low-pass filter removing frequencies above 55 Hz was applied to each received buffer of EEG data. Features such as the variance and the mean power spectral density of the five EEG bands (delta, theta, alpha, beta, gamma) were then computed for the selected channel. All of these operations were performed with NumPy and SciPy.
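A minimal sketch of this feature-extraction step, assuming a 256 Hz sample rate and the conventional band edges (the project's exact filter order and band boundaries are not specified):

```python
import numpy as np
from scipy import signal

FS = 256  # assumed Muse 2 sample rate (Hz)

# 4th-order Butterworth low-pass at 55 Hz (order is an assumption)
b, a = signal.butter(4, 55, btype="low", fs=FS)

# Conventional EEG band edges in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 55)}

def extract_features(x):
    """Variance plus mean PSD per EEG band for one channel (5 s buffer)."""
    x = signal.filtfilt(b, a, x)                      # zero-phase low-pass
    freqs, psd = signal.welch(x, fs=FS, nperseg=FS)   # 1 Hz resolution
    feats = {"variance": np.var(x)}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = psd[mask].mean()                # mean PSD in band
    return feats

rng = np.random.default_rng(0)
feats = extract_features(rng.standard_normal(5 * FS))
```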
(3) The MIDI segment was manipulated by adjusting its volume, pitch, or note offset through the MidiFile interface.
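The feature-to-MIDI mapping can be sketched as below. Plain (note, velocity) tuples stand in for the note_on messages inside a MidiFile track so the sketch runs without a MIDI library, and the scaling constants are illustrative assumptions, not the project's actual mapping.

```python
# Each event: (note_number, velocity), both 0-127 per the MIDI spec.
segment = [(60, 80), (64, 80), (67, 80)]  # a C-major triad

def apply_features(notes, alpha_power, variance):
    """Map EEG features onto a segment: here alpha power shifts pitch and
    signal variance scales volume (illustrative choices)."""
    pitch_shift = int(round(alpha_power))   # semitones
    vol_scale = min(2.0, 0.5 + variance)    # bound the volume gain
    out = []
    for note, vel in notes:
        out.append((max(0, min(127, note + pitch_shift)),
                    max(0, min(127, int(vel * vol_scale)))))
    return out

shifted = apply_features(segment, alpha_power=2.0, variance=0.7)
```

Clamping to the 0-127 range matters because out-of-range note or velocity values are invalid MIDI and will raise errors when the segment is written back out.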
(4) The modified MIDI was converted to audio with pygame and played back in 5-second windows.
(5) The UI was developed with PyQt5 and Qt Designer. Functionality includes starting and stopping the EEG plot, starting and stopping audio playback, selecting features, and selecting MUSE 2 channels.
Built With
- pyqt5
- python
