
Conversation

@larsoner
Member

Various tweaks to speed up tests and helper functions. Let's see if I broke anything.

@larsoner larsoner added this to the 0.15 milestone Jun 15, 2017
@codecov-io

codecov-io commented Jun 15, 2017

Codecov Report

Merging #4322 into master will increase coverage by 1.64%.
The diff coverage is 97.82%.

@@            Coverage Diff             @@
##           master    #4322      +/-   ##
==========================================
+ Coverage   84.42%   86.06%   +1.64%     
==========================================
  Files         346      346              
  Lines       62897    62941      +44     
  Branches     9672     9674       +2     
==========================================
+ Hits        53099    54169    +1070     
+ Misses       7118     6108    -1010     
+ Partials     2680     2664      -16

# Create the projection operator
proj, ncomp, _ = make_projector(info['projs'], ch_names)
proj, ncomp, _ = make_projector(info['projs'] + noise_cov['projs'],
ch_names)
Member Author

I think this was essentially a bug. Shouldn't we use the union of noise_cov and info projs when computing a whitener / preparing noise cov?
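To illustrate why taking the union matters, here is a minimal NumPy sketch (random stand-in vectors, not MNE's actual projection items): a projector built from only one set of projection vectors fails to remove components lying along the other set, while a projector built from the combined set removes both.

```python
import numpy as np

# Hypothetical stand-ins for the projection vectors carried by
# info['projs'] and noise_cov['projs'] (random, for illustration only)
rng = np.random.RandomState(0)
n_ch = 8
u_info = rng.randn(n_ch, 1)
u_cov = rng.randn(n_ch, 1)

# Orthonormalize the *combined* vectors and build P = I - U U^T,
# which projects out the subspace spanned by both sets
U, _ = np.linalg.qr(np.hstack([u_info, u_cov]))
P = np.eye(n_ch) - U @ U.T

# The combined projector annihilates both kinds of components ...
assert np.allclose(P @ u_info, 0)
assert np.allclose(P @ u_cov, 0)

# ... while a projector built from info['projs'] alone does not
U1, _ = np.linalg.qr(u_info)
P1 = np.eye(n_ch) - U1 @ U1.T
assert not np.allclose(P1 @ u_cov, 0)
```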

Member

yes agreed

@larsoner larsoner changed the title ENH: Speed up functions and tests MRG: Speed up functions and tests Jun 15, 2017
@larsoner
Member Author

Okay I'll stop here for now. If the CIs come back happy, ready for review/merge from my end.

I think there is a way to (greatly) speed up morph map computation using sklearn BallTree, but I'll do that in another PR.
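The idea can be sketched as follows, using scipy's `cKDTree` as a stand-in for sklearn's `BallTree` and random points in place of real surface vertices: replace the brute-force pairwise-distance nearest-neighbor search with a spatial-tree query, which gives the same result in roughly O(n log n) instead of O(n^2).

```python
import numpy as np
from scipy.spatial import cKDTree  # stand-in for sklearn's BallTree

rng = np.random.RandomState(0)
src = rng.randn(1000, 3)   # e.g. source-surface vertex positions
dest = rng.randn(200, 3)   # e.g. destination-surface vertex positions

# Brute force: full pairwise distance matrix, O(n_dest * n_src) memory/time
d2 = ((dest[:, None, :] - src[None, :, :]) ** 2).sum(-1)
nn_brute = d2.argmin(axis=1)

# Tree-based query: O(n_src log n_src) build, then fast per-point lookups
tree = cKDTree(src)
_, nn_tree = tree.query(dest)

# Both approaches find the same nearest neighbors
assert np.array_equal(nn_brute, nn_tree)
```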

mne/io/proj.py Outdated
            vecsel.append(p['data']['col_names'].index(name))
        if name not in bads:
            try:
                idx = p['data']['col_names'].index(name)
Member

I would use a set to make `name in p['data']['col_names']` faster, but not use a try/except.

Also add a note explaining why you use a set/dict.
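A sketch of that suggestion, with hypothetical stand-in data rather than the real `mne/io/proj.py` code: build the set once, test membership against it (O(1) per lookup instead of O(n) list scans), and only call `.index()` for names known to be present, avoiding the try/except.

```python
# Stand-in channel list and bads set, for illustration only
col_names = ['MEG 0111', 'MEG 0112', 'MEG 0113']
bads = {'MEG 0113'}

# Build the set once; membership tests on a set are O(1),
# whereas `name in col_names` on a list is O(n)
col_set = set(col_names)

vecsel = []
for name in ['MEG 0112', 'MEG 0113', 'MEG 9999']:
    if name not in bads and name in col_set:
        # .index() is only reached for names known to be present,
        # so no try/except is needed
        vecsel.append(col_names.index(name))

assert vecsel == [1]  # only 'MEG 0112' is present and not bad
```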

# differently (to ensure that std/non-std tSSS windows are correctly
# handled), but the 4-sec case should hopefully be sufficient.
st_durations = [4.] # , 10.]
tols = [325.] # , 200.]
Member

to cleanup?

Member Author

I'd rather leave it in case we want to add it back in / be more complete.

@agramfort
Member

Besides that, LGTM.

@agramfort
Member

travis is not happy

@larsoner
Member Author

Okay CIs appear to be coming back happy, can you look now @agramfort?

@larsoner
Member Author

Travis appears to have sped back up (hooray!) but this still contains some nice testing speed boosts.

@larsoner
Member Author

... and 2 of the 3 slowest tests are now Gamma-map and TF-MxNE if you feel like trying to speed those up:

https://travis-ci.org/mne-tools/mne-python/jobs/244540557#L3245

mne/surface.py Outdated
# weights = weights[np.arange(len(weights))[:, np.newaxis], order]
# nn_idx = nn_idx[np.arange(len(weights))[:, np.newaxis], order]
# mv = np.mean(nn_idx == nn_idx_2)
# assert mv > 0.999
Member

cleanup or say why it's still here

import matplotlib.pyplot as plt
raw_ctf = read_raw_ctf(ctf_fname_continuous).crop(0, 1).load_data()
raw_ctf.plot()
plt.close('all')
Member

closing really helps with speed? if so maybe we should be more systematic?

Member Author

We usually do try to include close commands in viz tests. I think it's mostly a thing to keep mpl memory use down. I moved this test because it seemed to make more sense here, although it did also speed things up a little bit.
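The usual pattern looks roughly like this (a hypothetical minimal test, not one of MNE's actual viz tests): run the plotting code under a non-interactive backend, then close all figures so mpl memory use stays bounded across tests.

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, as in CI test runs
import matplotlib.pyplot as plt


def test_some_plot():
    # ... exercise plotting code that creates one or more figures ...
    plt.figure()
    assert len(plt.get_fignums()) >= 1
    # close everything so figures don't accumulate across tests
    plt.close('all')
    assert len(plt.get_fignums()) == 0


test_some_plot()
```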

@agramfort
Member

agramfort commented Jun 20, 2017 via email

@larsoner larsoner merged commit b536330 into mne-tools:master Jun 20, 2017
@larsoner
Member Author

Okay, merging to save some time. Hopefully I can make more progress at some point.

@larsoner larsoner deleted the faster-tests branch June 20, 2017 19:40
@agramfort
Member

agramfort commented Jun 20, 2017 via email

@larsoner
Member Author

The latest slow-list is here:

https://travis-ci.org/mne-tools/mne-python/jobs/245020880#L3230

@agramfort
Member

agramfort commented Jun 20, 2017 via email

@larsoner
Member Author

That function tests a ton of stuff. My guess is that it's actually an I/O bottleneck, but I've never profiled those functions, so maybe we're doing something unnecessarily slow.
