ENH: Add support for Artinis SNIRF data (#11926)
Conversation
@larsoner @agramfort @drammock could you please review.
larsoner
left a comment
Sounds like a reasonable workaround to me. It would be awesome to have a test for this. If these are HDF5 files, you could maybe copy one to a tmp_path and edit it in place (?) to change the last sample in time_data so that it is off by more than 1%. But if that isn't feasible, then I think we can merge as-is...

Good idea. I can write that test.
Co-authored-by: Eric Larson <larson.eric.d@gmail.com>
for more information, see https://pre-commit.ci
@rob-luke I've pushed some changes here that you may disagree with, so feel free to revert some of them. Mostly I:
Thanks @drammock, that is amazing! I appreciate you taking the time to make the edits directly!
I'm happy, but the "old" CI job is not. I'll debug tomorrow morning.
OK, I'm stumped. I created a local env that mirrors the (there's also now an unrelated pip-pre failure dealing with dependency resolution)
Okay, I'll look.
mne/io/snirf/tests/test_snirf.py (outdated)

```python
del f["nirs/data1/time"]
f.flush()
f.create_dataset("nirs/data1/time", data=acceptable_time_jitter)
read_raw_snirf(new_file)
```
Suggested change:

```diff
-read_raw_snirf(new_file)
+with use_log_level("info"), catch_logging() as log:
+    read_raw_snirf(new_file)
+assert "Found jitter of 0.990000%" in log.getvalue()
```

(with the associated `from mne.utils import catch_logging, use_log_level` at the top)
This won't fix the CI failure, but it makes sure our jitter detection is actually working.
Feel free to push; I'm still messing with conda envs.
Okay, envs fixed. I'll implement it.
* upstream/main:
  Interactive version of plot_evoked_fieldmap (mne-tools#11942)
  ENH: Add support for Artinis SNIRF data (mne-tools#11926)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Larson <larson.eric.d@gmail.com>
Co-authored-by: Daniel McCloy <dan@mccloy.info>
Reference issue
There is no separate issue; this PR doubles as the issue report.
Artinis provided me with half a dozen files exported from their fNIRS device (the files are too large for me to share).
The sampling times of their devices aren't spaced to the level of precision that we currently expect, which causes the current implementation of the SNIRF reader to throw an error.
They sent me the following screenshot where they describe the issue...
What does this implement/fix?
I implemented a maximum allowed percentage jitter for a file to still be considered uniformly sampled.
I (fairly arbitrarily) specified that we now allow a 1% variation in the sampling times relative to the file's average sampling time difference. I specified this parameter as MAXIMUM_ALLOWED_SAMPLING_JITTER_PERCENTAGE.
Additional information
This solution doesn't seem ideal. I am open to alternative suggestions.
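For concreteness, the check amounts to something like the following sketch; the helper name and exact deviation formula are illustrative (the merged implementation may differ slightly), but the threshold constant is the one named above:

```python
import numpy as np

# Threshold mirroring the constant described in this PR; the exact
# value and check in the reader may differ.
MAXIMUM_ALLOWED_SAMPLING_JITTER_PERCENTAGE = 1.0


def check_uniform_sampling(time_data):
    """Return (is_uniform, jitter_pct) for a 1D array of sample times."""
    periods = np.diff(time_data)
    mean_period = periods.mean()
    # Largest deviation from the mean period, as a percentage.
    jitter_pct = 100 * np.max(np.abs(periods - mean_period)) / mean_period
    return jitter_pct <= MAXIMUM_ALLOWED_SAMPLING_JITTER_PERCENTAGE, jitter_pct
```

A perfectly regular file yields essentially zero jitter and passes; a file whose last sample is off by ~2% of the period exceeds the 1% budget and fails.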
I also tried reducing the precision when calculating the period differences, i.e. fs_diff = np.around(np.diff(time_data), decimals=4), but this broke support (and unit tests) for other devices.

The files exhibiting this behaviour are too large to add to our testing data. I have requested a test file from the manufacturer, but it might not be possible to capture this phenomenon in a small file, as it seems to occur only many minutes into a recording. And if we read, crop, export, and use the modified file, it will not reflect the manufacturer's formatting.
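To illustrate one plausible way the np.around approach mentioned above can misbehave (the 1024 Hz device here is invented purely for this example): rounding the periods to four decimal places can make a perfectly uniform file look uniform at the wrong sampling rate.

```python
import numpy as np

# Invented device sampling at 1024 Hz; true period is ~0.0009766 s.
time_data = np.arange(50) / 1024.0

# Rounding every period to 4 decimal places collapses them all to
# 0.001 s, so a uniformity check on fs_diff would still pass...
fs_diff = np.around(np.diff(time_data), decimals=4)

# ...but the sampling frequency implied by the rounded period is
# 1000 Hz, not 1024 Hz, which is the kind of mismatch that could
# break readers and unit tests for other devices.
implied_fs = 1.0 / fs_diff[0]
```

This is only a hypothetical failure mode; the actual breakage observed in the unit tests may have had a different cause.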
The submitted code allows me to read the files the manufacturer shared with me.