ENH: Support OPM coreg #11276

@larsoner

Description

Continuing from #11257 (comment) with @georgeoneill

coreg -- we plan to support OPM coregistration, probably through the existing mne coreg GUI. I'm still coming to understand the details of how this stuff works so it's not easy for me to say the scope here, but I've worked with the 3D plotting stuff there so I could help add the parts we need (I assume the big part would be allowing you to supply a helmet mesh).

We are still trying to understand where to go with this ourselves. Currently in our lab we all possess bespoke scannercasts derived from the anatomy of each participant, so the transformation between the sensors and the anatomy is either identity or a translation (depending on whether the cRAS information of the anatomical was read correctly).

HOWEVER we've started to collect data from a site that uses generic helmets, so optical scans and point cloud/rigid body registrations are required; we'll have a better handle on this going forward. The rigid helmets will certainly be the case for the Cerca Magnetics systems. The meshes for the Cerca helmets are provided in OPyM.

Yes we'll have to think about this. Let's maybe just consider the rigid-helmet case for now to make our lives easier :)

One thing to know is that, in MNE-Python, all sensor locations (for EEG) are supposed to live in the "head" coordinate frame, defined by the line between the LPA and RPA (which become -X and +X) and the line perpendicular to it through the nasion (+Y), in a right-handed coordinate system (making +Z up). mne coreg is really meant to coregister points in this head coordinate frame with the MRI coordinate frame defined during MRI acquisition.

For MEG data, each system can additionally have its own "MEG device" coordinate frame (usually near the center of the sensor "sphere" of the helmet). The info['dev_head_t'] is usually set during acquisition and says how to transform from the MEG device frame to the head frame; mne coreg then gets you from MRI to head, so you can go from any frame to any other one.
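The head-frame definition above can be sketched in plain NumPy. This is a hypothetical helper for illustration, not the MNE-Python API (MNE has `mne.transforms.get_ras_to_neuromag_trans` for this job); it just shows the geometry: LPA lands on -X, RPA on +X, the nasion on +Y, and +Z points up.

```python
import numpy as np

def head_frame_from_fiducials(nasion, lpa, rpa):
    """Return a 4x4 homogeneous transform into the "head" frame.

    Hypothetical helper (not the MNE-Python API): X runs LPA -> RPA,
    +Y points toward the nasion, +Z completes a right-handed frame.
    """
    nasion, lpa, rpa = (np.asarray(p, float) for p in (nasion, lpa, rpa))
    x = rpa - lpa
    x /= np.linalg.norm(x)
    # Origin: foot of the perpendicular from the nasion onto the LPA-RPA line
    origin = lpa + np.dot(nasion - lpa, x) * x
    y = nasion - origin
    y /= np.linalg.norm(y)
    z = np.cross(x, y)  # right-handed, so +Z points up
    rot = np.vstack([x, y, z])   # rows are the new axes
    trans = np.eye(4)
    trans[:3, :3] = rot
    trans[:3, 3] = -rot @ origin  # head = rot @ (point - origin)
    return trans

# With symmetric fiducials already on the axes, the transform is identity
t = head_frame_from_fiducials(nasion=[0.0, 0.1, 0.0],
                              lpa=[-0.08, 0.0, 0.0],
                              rpa=[0.08, 0.0, 0.0])
```

Transforms like info['dev_head_t'] are stored as exactly this kind of 4x4 matrix, so composing/inverting them gets you between any pair of frames.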

One way I think we could get this all to work in this framework is:

  1. Read the point-cloud data (we have some functions for this, could add more)
  2. Coregister it to the canonical sensor positions, maybe using ICP after manually marking the N sensor positions in a point cloud visualization in a simple GUI (maybe the iEEG GUI could be repurposed, but if not, I don't think it's hard using pyvista)
  3. Set the info of the raw to contain the extra head shape points in info['dig'], including some dummy/wrong LPA/Nasion/RPA (this will just make things easier in MNE-Python), i.e., present but in an anatomically incorrect "head" coordinate frame
  4. Use mne coreg to coregister the MEG sensors to the MRI, i.e., obtain the MEG<->MRI transform
  5. Add a new option to mne coreg to use the "MRI fiducials" -- which are easily and accurately marked manually on the MRI, or estimated from the MNI<->MRI transform given by FreeSurfer -- to overwrite the existing dummy fiducials in the head coordinate frame, which will then overwrite/update info['dev_head_t'] and also adjust all existing dig points to be in an anatomically correct head coordinate frame
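The rigid registration at the heart of step 2 (and the final fit in step 4) can be sketched with the Kabsch/Procrustes algorithm in plain NumPy. The names here are illustrative, not an MNE API: `src` would be the manually marked sensor positions in the optical scan, `dst` the canonical helmet sensor positions; ICP would just iterate this fit with nearest-neighbor correspondences.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    src points onto dst (Kabsch/Procrustes). Illustrative sketch only."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # SVD of the cross-covariance of the centered point sets
    u, _, vt = np.linalg.svd((dst - dst_c).T @ (src - src_c))
    # Guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(u @ vt))
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    trans = dst_c - rot @ src_c
    return rot, trans

# Synthetic check: rotate 30 degrees about Z and translate, then recover it
rng = np.random.default_rng(0)
pts = rng.standard_normal((5, 3))
th = np.pi / 6
true_rot = np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th), np.cos(th), 0.0],
                     [0.0, 0.0, 1.0]])
moved = pts @ true_rot.T + np.array([0.01, -0.02, 0.03])
rot, trans = fit_rigid(pts, moved)  # rot ≈ true_rot, trans ≈ (0.01, -0.02, 0.03)
```

With noisy optical scans you would run this inside an ICP loop rather than once, but the closed-form fit is the same.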

At this point we'd have all transforms we need for things to be defined according to MNE-Python's conventions.

It's a few hoops to jump through, but if we do this then all viz functions should behave properly, things like BIDS anonymization and uploading should "just work", etc.

One way to move forward would actually be for me to try this with our existing OPM dataset, because IIRC its head coordinate frame is not defined correctly. So I could try to make these adjustments to the dataset and re-upload it.
