MRG: Speed up functions and tests #4322
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #4322      +/-   ##
==========================================
+ Coverage   84.42%   86.06%   +1.64%
==========================================
  Files         346      346
  Lines       62897    62941      +44
  Branches     9672     9674       +2
==========================================
+ Hits        53099    54169    +1070
+ Misses       7118     6108    -1010
+ Partials     2680     2664      -16
     # Create the projection operator
-    proj, ncomp, _ = make_projector(info['projs'], ch_names)
+    proj, ncomp, _ = make_projector(info['projs'] + noise_cov['projs'],
+                                    ch_names)
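The point of this change is that a projector built from only `info['projs']` will not remove components that live in `noise_cov['projs']`. A minimal NumPy sketch of why the union matters (`make_projector_sketch` is a hypothetical stand-in; the real `mne` function also handles channel-name matching, bad channels, and normalization):

```python
import numpy as np


def make_projector_sketch(proj_vecs, n_ch):
    """Build P = I - U @ U.T from orthonormalized projection vectors.

    Simplified stand-in for mne's make_projector: no channel-name
    matching or bad-channel handling.
    """
    if not proj_vecs:
        return np.eye(n_ch)
    U = np.linalg.qr(np.array(proj_vecs).T)[0]  # orthonormal basis of the span
    return np.eye(n_ch) - U @ U.T


n_ch = 4
info_projs = [[1., 1., 0., 0.]]       # e.g. an SSP vector stored in info
noise_cov_projs = [[0., 0., 1., 1.]]  # an extra vector stored in noise_cov

# The bug: using only info['projs'] leaves the noise_cov component in.
P_info_only = make_projector_sketch(info_projs, n_ch)
# The fix: take the union of both projector lists.
P_union = make_projector_sketch(info_projs + noise_cov_projs, n_ch)

v = np.array([0., 0., 1., 1.])  # lies in the noise_cov projector's span
assert np.linalg.norm(P_info_only @ v) > 0.5   # not removed by info projs alone
assert np.allclose(P_union @ v, 0)             # removed once the lists are joined
```

The sketch only illustrates the linear algebra: whitening with a projector that omits `noise_cov['projs']` leaves those signal components unprojected.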
I think this was essentially a bug. Shouldn't we use the union of noise_cov and info projs when computing a whitener / preparing noise cov?
yes agreed
Okay, I'll stop here for now. If the CIs come back happy, this is ready for review/merge from my end. I think there is a way to (greatly) speed up morph map computation using
mne/io/proj.py
Outdated
-            vecsel.append(p['data']['col_names'].index(name))
+            if name not in bads:
+                try:
+                    idx = p['data']['col_names'].index(name)
I would use a set to make "name in p['data']['col_names']" faster, but not use a try. Also add a note on why you use a set/dict.
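A sketch of the reviewer's suggestion: membership tests against a list are O(n) per lookup, while a set lookup is O(1) on average, so building a set once pays off when the same channel list is scanned repeatedly. The variable names below are illustrative, not the actual `mne/io/proj.py` code:

```python
# Illustration of list-membership vs. set-membership in the hot loop.
# Names here are hypothetical stand-ins for the proj.py variables.
col_names = ['MEG 0113', 'MEG 0112', 'MEG 0111', 'EEG 001']
bads = {'EEG 001'}

# Build the set once; repeated "name in col_names" on a list would
# rescan the whole list for every channel.
col_set = set(col_names)

vecsel = []
for name in col_names:
    if name not in bads and name in col_set:  # average-O(1) membership test
        # .index() is still O(n), but it now only runs for kept channels,
        # and no try/except is needed because membership was checked first.
        vecsel.append(col_names.index(name))
```

Compared with the try/except version, this also makes the intent explicit: a comment on why the set exists (the reviewer's second request) is easy to attach to the `col_set` line.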
    # differently (to ensure that std/non-std tSSS windows are correctly
    # handled), but the 4-sec case should hopefully be sufficient.
    st_durations = [4.]  # , 10.]
    tols = [325.]  # , 200.]
to clean up?
I'd rather leave it in case we want to add it back in / be more complete.
Besides that, LGTM.

Travis is not happy.

Okay, CIs appear to be coming back happy, can you look now @agramfort?

Travis appears to have sped back up (hooray!), but this still contains some nice testing speed boosts.

... and 2 of the 3 slowest tests are now Gamma-map and TF-MxNE if you feel like trying to speed those up: https://travis-ci.org/mne-tools/mne-python/jobs/244540557#L3245
mne/surface.py
Outdated
    # weights = weights[np.arange(len(weights))[:, np.newaxis], order]
    # nn_idx = nn_idx[np.arange(len(weights))[:, np.newaxis], order]
    # mv = np.mean(nn_idx == nn_idx_2)
    # assert mv > 0.999
Clean up, or say why it's still here.
    import matplotlib.pyplot as plt
    raw_ctf = read_raw_ctf(ctf_fname_continuous).crop(0, 1).load_data()
    raw_ctf.plot()
    plt.close('all')
closing really helps with speed? if so maybe we should be more systematic?
We usually do try to include close commands in viz tests; I think it's mostly a thing to keep mpl memory use down. I moved this test because it seemed to make more sense here, although it also sped things up a little bit.
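The pattern discussed above can be sketched as follows. This is a generic illustration of the viz-test convention, not the actual mne test code, and the function name is hypothetical:

```python
# Close all matplotlib figures after a viz test so figure objects don't
# accumulate in memory across the test suite.
import matplotlib
matplotlib.use('Agg')  # headless backend, as CI test runs typically use
import matplotlib.pyplot as plt


def run_plot_test():
    """Hypothetical stand-in for a viz test that creates a figure."""
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    # ... assertions on the figure contents would go here ...
    plt.close('all')  # release figure memory before the next test runs


run_plot_test()
assert len(plt.get_fignums()) == 0  # no figures left open afterwards
```

In a real suite this cleanup is often centralized in a fixture or teardown hook rather than repeated at the end of every test, which is what "more systematic" would amount to.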
ok merge when you're done then
thanks a lot for taking a stab at this
Okay, merging to save some time. Hopefully we can make more progress at some point.
thx! I'll look at the gamma map and tf-mxne tests asap.
The latest slow-list is here: https://travis-ci.org/mne-tools/mne-python/jobs/245020880#L3230
any idea why test_read_write_epochs is so slow?
That function tests a ton of stuff. My guess is that it's actually an I/O bottleneck, but I've never profiled those functions, so maybe we're doing something unnecessarily slow.
Various tweaks to speed up tests and helper functions. Let's see if I broke anything.