
[MRG] Egimff#4017

Merged
agramfort merged 30 commits into mne-tools:master from ramonapariciog:egimff
Jul 11, 2017

Conversation

@ramonapariciog
Contributor

mne-mff-reader BETA

MNE class and functions for loading EGI '.mff' files from an EGI Netstation EEG system.

How to import:
Just use the following line:
from mne.io.mff import read_raw_egi_mff

and to create the raw instance:
raw = read_raw_egi_mff(filepath, exclude, include, verbose)

All the parameters are the same as for read_raw_egi.
Limitations:

1. It does not currently support multi-subject recordings, but it can be adapted for that.

2. So far it has been tested by comparing the results with those obtained from a single-subject, unsegmented raw recording, and with the tests in test_egi.py.

3. At the moment all data is loaded into memory, using preload=True.

Thanks to Dr. Guillaume Dumas, Dr. Denis A. Engemann & Dr. Sheraz Khan

@larsoner
Member

Those limitations all seem reasonable, we can expand later once this is in.

Is this ready for a round of review?

Member

@agramfort agramfort left a comment


there is much more but it's a start

Path for the file
Returns
------------------------
info : dictionary
Member

dictionary -> dict

input_fname : str
Path for the file
Returns
------------------------
Member

Returns
------------------------
->
Returns
--------

# version = version.byteswap().astype(np.uint32)
# else:
# ValueError('Watchout. This does not seem to be a simple '
# 'binary EGI file.')
Member

to clean up?

def _read_events(input_fname, hdr, info):
"""Read events for the record.

in:
Member

to format properly



@verbose
def read_raw_egi_mff(input_fname, montage=None, eog=None, misc=None,
Member

why not factorize this with read_raw_egi and decide based on the file extension whether to use mff or not?

Member

agreed

def _get_signalfname(filepath, infontype):
import os
from xml.dom.minidom import parse
import re
Member

to import at the top of the file

##

def _u2sample(microsecs, samprate):
import numpy as np
Member

same here

# sampDuration = 1000000/sampRate;
# sampleNum = microsecs/sampDuration;
# remainder = uint64(rem(microsecs, sampDuration));
# sampleNum = fix(sampleNum);
Member

cleanup?

"""Read data signal."""
import numpy as np

binfile = filepath + '/' + nbinfile
Member

os.path.join
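That is, instead of concatenating with '/', which breaks on Windows; the file names here are placeholders:

```python
import os.path

# Portable path construction instead of manual '/' concatenation.
filepath = 'recording.mff'   # hypothetical MFF directory
nbinfile = 'signal1.bin'     # hypothetical binary file inside it
binfile = os.path.join(filepath, nbinfile)
```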

blockbeginsamps[x + 1] = blockbeginsamps[x] + blocknumsamps[x]
# -----------------------------------------------------
summaryinfo = dict(blocks=signal_blocks['blocks'],
eegFilename=signal_blocks['eegFile'],
Member

don't use camelCase for variable names, use snake_case

@agramfort agramfort changed the title Egimff [WIP] Egimff Feb 26, 2017
@agramfort
Member

@ramonapariciog thanks for taking a stab at this. There is a bit of work left
to complete this. Maybe an easy way would be to come for a pair programming day with
@jaeilepp at Télécom?

@SherazKhan
Member

@ramonapariciog excellent job! @agramfort Ramon did a great job, all the camelCase was my bad :-)

@SherazKhan
Member

@ramonapariciog @Eric89GXL @agramfort maybe we can all work on this during the hackathon?

@dengemann
Member

dengemann commented Mar 2, 2017 via email

@SherazKhan
Member

@dengemann you guys are awesome :)

@jaeilepp
Contributor

Now the preload=False case works too. I don't dare to remove the epoch reading part, although I think it should be made its own function (read_epochs_egimff or something). I'll leave that to @ramonapariciog since I don't have the data to test it.

Member

@larsoner larsoner left a comment


I'll stop reviewing for now. I pointed out a few small pattern things, but I suspect you were already planning to clean up most of them.

"""
Created on Fri Jan 13 11:01:39 2017.

@author: ramonapariciog
Member

clean up docstring

time_n = dateutil.parser.parse(mff_hdr['date'])
info = dict( # leer del info.xml desde read_mff_header
version=version, # duda
year=int(time_n.strftime('%Y')), # duda
Member

duda?

Member

That's the current president of Poland, maybe related? 😉



@verbose
def read_raw_egi_mff(input_fname, montage=None, eog=None, misc=None,
Member

agreed

on timestamps received by the Netstation. As a consequence,
triggers have only short durations.

This step will fail if events are not mutually exclusive.
Member

This .. note:: is very long. Let's move it to the Notes section. If you want, you can add a one- or two-sentence .. note:: here, like "this function will attempt to generate a synthetic trigger channel. For details, see Notes below."

montage : str | None | instance of montage
Path or instance of montage containing electrode positions.
If None, sensor locations are (0,0,0). See the documentation of
:func:`mne.channels.read_montage` for more information.
Member

I think we should stop putting these in the constructor and have people use the methods instead

Member

hasn't been addressed yet

Contributor

Now this is fused with the egi reader. Deprecation should be done in another PR

if len(data) != len(info['ch_names']):
raise ValueError('len(data) does not match '
'len(info["ch_names"])')
logger.info('Creating RawArray with %s data, n_channels=%s, '
Member

This isn't true anymore?



def read_mff_data(filepath, indtype, startind, lastind, hdr):
"""Function for load the data in a list.
Member

Don't say "function for" or "helper" anymore, use imperative

"""
import numpy as np
import os
from .general import _bls2blns, _read_signaln, _get_gains
Member

don't nest these

hopefully general does not create a circular import. If it does, some helper function probably needs to be moved

info_fp = os.path.join(filepath, suminfo['infoFile'][0])
gains = _get_gains(info_fp)
else:
print('Multisubject not suported')
Member

don't print

@jaeilepp
Contributor

I can take a look tomorrow. So the test_egi.mff is supposedly continuous data, not epoched?

@ramonapariciog
Contributor Author

Hello Jaakko,
Yes, it was taken as continuous data.

@ramonapariciog
Contributor Author

Thank you Jaakko, if you need anything I will try to respond quickly.

@jaeilepp
Contributor

Now it should work. I'm a bit worried about the event timing. It seems that the event times in the test data are all smaller than the meas_date. I have no idea how to sync them. It also seems that it doesn't give any meaningful evoked responses when using the current event timing.

warn('Event outside data range (%ss).' % (i /
info['samp_rate']))
continue
events[n][i] = 2**n
Contributor

I changed it to use powers of 2 so it'll be easy to mask overlaps.
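The overlap-masking idea can be illustrated with toy data (these arrays are illustrative only, not the reader's internal state): with 2**n coding, simultaneous events on different channels remain recoverable from the summed trigger channel via a bitmask.

```python
import numpy as np

# Toy event channels using powers-of-2 coding.
n_chan, n_samp = 3, 6
events = np.zeros((n_chan, n_samp), dtype=int)
events[0, 2] = 2 ** 0   # first channel fires at sample 2
events[1, 2] = 2 ** 1   # second channel fires at the same sample (overlap)
events[2, 4] = 2 ** 2   # third channel fires alone at sample 4

sti014 = events.sum(axis=0)          # combined trigger channel
assert sti014[2] == 3                # 1 + 2: both overlapping events survive
assert sti014[2] & 2 ** 1            # each contribution recoverable by mask
```

With consecutive coding (1, 2, 3, ...) an overlap of channels 1 and 2 would sum to 3 and be indistinguishable from channel 3 firing alone.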

Member

Does this create inconsistency with how the other readers work, or do we not assign consecutive values elsewhere?

Contributor

Yeah, I noticed that before it was doing basically the same thing as the egi reader (consecutive values). The problem is that it doesn't allow overlapping events.

Member

Okay then we should probably have a parameter for this. event_numbering='linear' | 'log' or 'consecutive' | 'power' or something. If we want to make 2**n the default here, it should probably be part of this PR, or as part of an immediate follow-up.

# STI 014 is simply the sum of all event channels (powers of 2).
if len(egi_events) > 0:
egi_events = np.vstack([egi_events, np.sum(egi_events, axis=0)])
data[n_channels:] = egi_events
Contributor

I also made the stim channel construction lazy like this. Now it's created on data fetch stage.
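The quoted snippet boils down to appending a row that is the per-sample sum of the event rows; a toy sketch (illustrative arrays, not the reader's state):

```python
import numpy as np

# Two toy powers-of-2 event channels.
egi_events = np.array([[1, 0, 1],
                       [0, 2, 2]])
# Append their per-sample sum as the synthetic 'STI 014' row.
stacked = np.vstack([egi_events, np.sum(egi_events, axis=0)])
```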

Contributor Author

Ok Jaakko, thanks, I will test it with other files.

@jaeilepp
Contributor

ping @ramonapariciog

nsamples = block['nsamples']
else:
raise NotImplementedError('Only continuos files are supported')
numblocks = int(l // blocksize)
Contributor

@ramonapariciog the problem was here. It was previously not doing integer division and then adding an extra block if the data did not align. If I understand correctly, the proper thing to do is to just dump the extra data, since it is only available for the first channels in the block.
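With toy numbers, the difference between the two behaviors looks like this (illustrative values only):

```python
# Integer division drops a trailing partial block instead of padding an
# extra one, which is the fix described above.
l = 1000          # total samples available in the file (toy value)
blocksize = 128   # samples per block (toy value)
numblocks = int(l // blocksize)          # 7 full blocks
leftover = l - numblocks * blocksize     # 104 samples in the partial block
# Before the fix, non-integer division produced 7.8125 blocks and an
# extra (misaligned) block was added for the remainder.
```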

@jona-sassenhagen
Contributor

I'll just say I'd appreciate no longer having to use EEGLAB to import MFF files :)

@jaeilepp
Contributor

There's still a lot of cleaning to do. It seems like there's a lot of redundant metadata stored. I think the best approach is to store only the necessary data and then add more later if needed.

@dengemann
Member

dengemann commented Apr 24, 2017 via email

@ramonapariciog
Contributor Author

Sure, I will check which data from the first version is needed and discard the rest.

@ramonapariciog
Contributor Author

ramonapariciog commented Apr 26, 2017

Hi Jaakko, I'm having problems using the function again :'(

mne.io.read_raw_egi_mff('mmnbenjamin23022017_20170223_111927.mff', preload=True)
Reading EGI MFF Header from /Users/ghfc/Desktop/mmn/mmnbenjamin23022017_20170223_111927.mff...
    Reading events ...
    Assembling measurement info ...
    Synthesizing trigger channel "STI 014" ...
    Excluding events {} ...
Reading 0 ... 874358  =      0.000 ...   874.358 secs...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-a83585f0f406> in <module>()
----> 1 mne.io.read_raw_egi_mff('/Users/ghfc/Desktop/mmn/mmnbenjamin23022017_20170223_111927.mff', preload=True)

/Users/ghfc/mne-python/mne/io/egi/egimff.py in read_raw_egi_mff(input_fname, montage, eog, misc, include, exclude, preload, verbose)

/Users/ghfc/mne-python/mne/utils.py in verbose(function, *args, **kwargs)
    705         with use_log_level(verbose_level):
    706             return function(*args, **kwargs)
--> 707     return function(*args, **kwargs)
    708
    709

/Users/ghfc/mne-python/mne/io/egi/egimff.py in read_raw_egi_mff(input_fname, montage, eog, misc, include, exclude, preload, verbose)
    234     """
    235     return RawMff(input_fname, montage, eog, misc, include, exclude,
--> 236                   preload, verbose)
    237
    238

/Users/ghfc/mne-python/mne/io/egi/egimff.py in __init__(self, input_fname, montage, eog, misc, include, exclude, preload, verbose)

/Users/ghfc/mne-python/mne/utils.py in verbose(function, *args, **kwargs)
    705         with use_log_level(verbose_level):
    706             return function(*args, **kwargs)
--> 707     return function(*args, **kwargs)
    708
    709

/Users/ghfc/mne-python/mne/io/egi/egimff.py in __init__(self, input_fname, montage, eog, misc, include, exclude, preload, verbose)
    339             info, preload=preload, orig_format=egi_info['orig_format'],
    340             filenames=[file_bin], last_samps=[egi_info['n_samples'] - 1],
--> 341             raw_extras=[egi_info], verbose=verbose)
    342
    343     def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):

/Users/ghfc/mne-python/mne/io/base.py in __init__(self, info, preload, first_samps, last_samps, filenames, raw_extras, orig_format, dtype, verbose)

/Users/ghfc/mne-python/mne/utils.py in verbose(function, *args, **kwargs)
    705         with use_log_level(verbose_level):
    706             return function(*args, **kwargs)
--> 707     return function(*args, **kwargs)
    708
    709

/Users/ghfc/mne-python/mne/io/base.py in __init__(self, info, preload, first_samps, last_samps, filenames, raw_extras, orig_format, dtype, verbose)
    363         self._update_times()
    364         if load_from_disk:
--> 365             self._preload_data(preload)
    366
    367     @verbose

/Users/ghfc/mne-python/mne/io/base.py in _preload_data(self, preload, verbose)

/Users/ghfc/mne-python/mne/utils.py in verbose(function, *args, **kwargs)
    705         with use_log_level(verbose_level):
    706             return function(*args, **kwargs)
--> 707     return function(*args, **kwargs)
    708
    709

/Users/ghfc/mne-python/mne/io/base.py in _preload_data(self, preload, verbose)
    618         logger.info('Reading %d ... %d  =  %9.3f ... %9.3f secs...' %
    619                     (0, len(self.times) - 1, 0., self.times[-1]))
--> 620         self._data = self._read_segment(data_buffer=data_buffer)
    621         assert len(self._data) == self.info['nchan']
    622         self.preload = True

/Users/ghfc/mne-python/mne/io/base.py in _read_segment(self, start, stop, sel, data_buffer, projector, verbose)
    515             self._read_segment_file(data[:, this_sl], idx, fi,
    516                                     int(start_file), int(stop_file),
--> 517                                     cals, mult)
    518             offset += n_read
    519         return data

/Users/ghfc/mne-python/mne/io/egi/egimff.py in _read_segment_file(self, data, idx, fi, start, stop, cals, mult)
    383                 end = count // n_channels + 2
    384                 if sample_start == 0:
--> 385                     block = block[:, s_offset:end]
    386                     sample_sl = slice(sample_start,
    387                                       sample_start + block.shape[1])

TypeError: slice indices must be integers or None or have an __index__ method

I updated numpy and scipy in order to use the new version of autoreject, and this happens when I try to load a file.
The versions:
numpy 1.12.1
scipy 0.19.0

@jaeilepp
Contributor

Strange that I can't reproduce this even after creating an environment with the same settings as travis.

@codecov-io

codecov-io commented Apr 27, 2017

Codecov Report

Merging #4017 into master will increase coverage by 0.03%.
The diff coverage is 90.29%.

@@            Coverage Diff             @@
##           master    #4017      +/-   ##
==========================================
+ Coverage   86.18%   86.21%   +0.03%     
==========================================
  Files         358      349       -9     
  Lines       65020    65084      +64     
  Branches     9914     9983      +69     
==========================================
+ Hits        56040    56115      +75     
+ Misses       6247     6222      -25     
- Partials     2733     2747      +14

@jaeilepp
Contributor

Currently this data is not used. I assume at least GCAL should be used for calibration, but I have no idea what ICAL is. The EEGLAB reader seems to recognize calibrated_gains and calibrated_zeros. Could ICAL be zero calibration?

@larsoner
Member

larsoner commented Apr 27, 2017 via email

@ramonapariciog
Contributor Author

Hello Jaakko, I checked again and now I think it is loading the data, but have you already tried using the data with autoreject?
I get this error:

In [20]: epochs_clean = ar.fit(epochs[::50]).transform(epochs)
Loading data for 18 events and 701 original time points ...
1 bad epochs dropped
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-20-290c8fbf6820> in <module>()
----> 1 epochs_clean = ar.fit(epochs[::50]).transform(epochs)

C:\Program Files\Anaconda3\envs\mne_dev\lib\site-packages\autoreject-0.1.dev0-py3.5.egg\autoreject\autoreject.py in fit(self, epochs)
    515             The epochs object to be fit.
    516         """
--> 517         _check_data(epochs)
    518         if self.cv is None:
    519             self.cv = KFold(len(epochs), n_folds=10)

C:\Program Files\Anaconda3\envs\mne_dev\lib\site-packages\autoreject-0.1.dev0-py3.5.egg\autoreject\autoreject.py in _check_data(epochs)
     38                'incomplete data). Please check that no epoch '
     39                'is dropped when you call epochs.drop_bad_epochs().')
---> 40         raise RuntimeError(msg)
     41
     42

RuntimeError: Some epochs are being dropped (maybe due to incomplete data). Please check that no epoch is dropped when you call epochs.drop_bad_epochs().

Do you think that could be the event timing?

@jaeilepp
Contributor

jaeilepp commented May 4, 2017

I think this is close to being ready. I couldn't figure out how to use the ICAL values, but the GCALs are now in use. I compared to the converted egi files I have and the data scales are very close to each other.

@larsoner
Member

larsoner commented May 4, 2017

Can you update manual/io.rst? It would answer the question I was about to ask, which is what's the difference between egi and egi_mff

@larsoner
Member

larsoner commented May 4, 2017

Regarding ICAL/GCAL can you cross-check with FieldTrip (e.g., http://www.fieldtriptoolbox.org/faq/how_can_i_read_egi_mff_data_without_the_jvm)

Member

@larsoner larsoner left a comment


Otherwise LGTM

signal_blocks = _get_blocks(fname)
blocknumsamps = np.sum(signal_blocks['blockNumSamps'])

pibhasref = False
Member

either make the name more explicit or add a comment, I don't know what pib means


summaryinfo = dict(eeg_fname=eeg_file,
info_fname=info_files[0],
pibNChans=pibnchans,
Member

no camelCase please

ntrials = 1

# Add the sensor info.
sensor_layout_file = filepath + '/sensorLayout.xml'
Member

op.join

for sensor in sensors:
sensortype = int(sensor.getElementsByTagName('type')[0]
.firstChild.data)
if sensortype == 0 or sensortype == 1:
Member

can just say in [0, 1]

if sensor.getElementsByTagName('name')[0].firstChild is None:
sn = sensor.getElementsByTagName('number')[0].firstChild.data
sn = sn.encode()
tmp_label = 'E' + sn.decode()
Member

why encode then decode? why not just u'E' + text_type(sn)?

gain=0,
bits=0,
value_range=0)
unsegmented = 1 if mff_hdr['nTrials'] == 1 else 0
Member

= int(mff_hdr['nTrials'] == 1)

unsegmented = 1 if mff_hdr['nTrials'] == 1 else 0
precision = 4
if precision == 0:
RuntimeError('Floating point precision is undefined.')
Member

unsupported

Member

... also this can never happen because precision = 4 immediately above...?



class RawMff(BaseRaw):
"""RAWMff class."""
Member

RawMff

def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
"""Read a chunk of data."""
from ..utils import _mult_cal_one
dtype = '<f4'
Member

it seems like this should be read from the header somewhere but it's not currently (and the check above is broken)

Contributor

I can't find it anywhere in the metadata, and it seems FieldTrip has the size hard-coded too.

Member

Sounds reasonable, would be good to add a comment along these lines
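The hard-coded dtype discussed above can be demonstrated with a small sketch (a BytesIO stands in for the actual .bin file; toy values only): the sample data is interpreted as little-endian float32 ('<f4') because the format does not store the dtype.

```python
import io

import numpy as np

# The dtype is hard-coded: the MFF .bin format does not record it.
dtype = '<f4'
raw_bytes = np.arange(4, dtype=dtype).tobytes()   # fake on-disk bytes
fid = io.BytesIO(raw_bytes)                       # stand-in for the file
block = np.frombuffer(fid.read(), dtype=dtype)    # little-endian float32
```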

from glob import glob
from os.path import basename, join, splitext

from xml.etree.ElementTree import parse
Member

this should be grouped with the standard imports

@jaeilepp
Contributor

jaeilepp commented May 5, 2017

I couldn't find anything about the calibrations in fieldtrip code base.

Member

@larsoner larsoner left a comment


One last structural thing. @agramfort a while ago mentioned rolling this into read_raw_egi and triaging based on file extension. Is that possible?

montage : str | None | instance of montage
Path or instance of montage containing electrode positions.
If None, sensor locations are (0,0,0). See the documentation of
:func:`mne.channels.read_montage` for more information.
Member

hasn't been addressed yet

def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
"""Read a chunk of data."""
from ..utils import _mult_cal_one
dtype = '<f4'
Member

Sounds reasonable, would be good to add a comment along these lines



def _ns(s):
"""Remove namespace, but only it there is a namespace to begin with."""
Member

it->if

assert_true('RawMff' in repr(raw))
include = ['DIN1', 'DIN2', 'DIN3', 'DIN4',
'DIN5', 'DIN7']
with warnings.catch_warnings(record=True):
Member

add comment about what warnings are being caught

@ramonapariciog
Contributor Author

Hello guys,
some comments:
I need to know the event IDs for each file. The way I did it at the beginning was to create the event array with n rows x s columns (n = number of stimulus channels, s = number of samples). The stimulus channels are ordered according to when they first appear in time, and the program creates the event_id dictionary with the mapping between each channel and its row number; the STI 014 channel was just the sum. I think that now the function just takes the sum of all the channels to create STI 014, and the include and exclude criteria are not working. I will try to fix this over the weekend because I need it, and I have the function that does it. It was deleted during the process and we weren't aware of its importance.

@jaeilepp jaeilepp changed the title [WIP] Egimff [MRG] Egimff May 23, 2017
@agramfort
Member

@ramonapariciog @SherazKhan can you guys test and report any difficulty?

thanks heaps @jaeilepp for the hard work !

@ramonapariciog
Contributor Author

Yeah sure, thanks @jaeilepp for everything, I will test.

@agramfort
Member

ping @ramonapariciog @SherazKhan

can you guys test and report any difficulty?

@agramfort
Member

ok let's say it's working :)

@jaeilepp please update what's new and merge

@agramfort agramfort merged commit 7cbd69c into mne-tools:master Jul 11, 2017
@agramfort
Member

Ok I'll update what's new in master. I feel like closing PRs...

@agramfort
Member

see 82306db

@ramonapariciog
Contributor Author

Ohhh, I'm having problems with my computer right now, but yeah, the last version of the function was working without problems.
Sorry @agramfort, is it a good idea to delete the branch now?

@agramfort
Member

agramfort commented Jul 12, 2017 via email

larsoner pushed a commit to larsoner/mne-python that referenced this pull request Aug 3, 2017
* Create read_egi_mff

I need to understand from where part of the _BaseRaw  class __init__() the _read_segment_file are called to execution, because the data from Sheraz reads the data and only adapting it is enough for a first test.

* The egimff.py was added with the read_raw_egi_mff function. To import use from mne.io import read_raw_egi_mff

* Correction of some PEP8 errors and delete the folder mff bad added

* Raw reader.

* preload=False

* Cleaning.

* Fixes to event reading.

* Fixes.

* Fixes.

* cleaning useless functions

* Cleaning.

* Reading of meta data at the end.

* Update tests. Cleaning.

* Cleaning. Correct number of samples.

* More cleaning.

* Cleaning.

* Update testing dataset.

* Update test.

* Fix.

* Test fix.

* Fixes.

* Docs. Switched back to running number events.

* Gains. Fix to events.

* Cleaning. Apply gains.

* Fixes.

* Fixes.

* Read coordinates. Cleaning.

* Using _combine_triggers to create STI 014 and event_id attribute

* Refactoring.

* Fix.
