Describe the new feature or enhancement
Hi all,
For the 2022 sprint I suggested adding basic support for eye-tracking data to MNE.
These data could be useful for several purposes, from improving data cleaning/IC labelling to enabling all sorts of new analyses, such as saccade-related potentials.
@drammock suggested posting this issue ahead of time to discuss the topic.
I'd be happy to hear your feedback!
Describe your proposed implementation
There are several steps that I think would be necessary for a minimal implementation:
- deciding how to store these data in the MNE ecosystem
e.g. simply as a misc channel type alongside the other sensor data?
- handling different sampling frequencies across devices, as well as missing data
(as recording periods might only imperfectly overlap)
- basic I/O tools
loading raw eye-track data and/or annotations (blinks, saccades) from different file formats (at least from text/CSV files)
- alignment methods
e.g. based on common annotations/triggers
A fuller (but still basic) version could also do (parts of) the following:
- identifying events from the raw eye track
most importantly blinks and saccades (e.g. via the Engbert & Mergenthaler algorithm)
- basic visualization tools
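For the event-identification point, a rough NumPy sketch of the velocity-threshold idea behind the Engbert & Mergenthaler detector: samples whose 2D velocity exceeds a median-based elliptic threshold are flagged as saccade candidates. The threshold multiplier `lam=6` and the 5-point velocity kernel follow the published algorithm, but this is an illustration, not a vetted port:

```python
import numpy as np

def detect_saccade_samples(x, y, sfreq, lam=6.0):
    """Boolean mask of samples whose velocity exceeds the elliptic threshold."""
    # 5-point moving-window velocity estimate
    kernel = np.array([1.0, 1.0, 0.0, -1.0, -1.0]) * (sfreq / 6.0)
    vx = np.convolve(x, kernel, mode="same")
    vy = np.convolve(y, kernel, mode="same")
    # robust (median-based) velocity standard deviation per axis
    sd_x = np.sqrt(np.median(vx**2) - np.median(vx) ** 2)
    sd_y = np.sqrt(np.median(vy**2) - np.median(vy) ** 2)
    # a sample is a saccade candidate if it lies outside the ellipse
    return (vx / (lam * sd_x)) ** 2 + (vy / (lam * sd_y)) ** 2 > 1.0

# usage on synthetic fixation noise with one injected position jump
sfreq = 500.0
rng = np.random.default_rng(1)
x = rng.normal(scale=0.01, size=500)
y = rng.normal(scale=0.01, size=500)
x[250:] += 5.0  # abrupt step mimicking a saccade at sample 250
mask = detect_saccade_samples(x, y, sfreq)
```

Detected sample runs could then be turned into `mne.Annotations` the same way tracker-exported events would be.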
...and then of course there would be a host of specific tools that the gaze information might be used for.
That might be beyond the scope of this issue, though.
Describe possible alternatives
..not even opening this box ;)
Additional comments