MRG, ENH: Use numba to speed up summarize_clusters_stc #8095
Merged
larsoner merged 1 commit into mne-tools:master on Aug 6, 2020
Conversation
larsoner
approved these changes
Aug 5, 2020
Member
larsoner
left a comment
Nice that a tiny and still readable change can make such a big difference!
Can you update latest.inc to mention this speedup?
agramfort
approved these changes
Aug 5, 2020
Contributor
Author
Sure, I'll do that tomorrow (it's nighttime now in my timezone 😉)
…skip azp] * Update latest.inc
Member
Thanks @yh-luo !
sharifhsn
added a commit
to sharifhsn/mne-python
that referenced
this pull request
Mar 8, 2026
Add benchmark scripts and feasibility documentation for GPU-accelerating the spatio-temporal cluster-based permutation test, which is the mne-tools#1 computational bottleneck for MNE researchers doing source-space analyses. The connected-component labeling step in _get_components() consumes ~97% of permutation test runtime.

This adds:
- gpu_accel/benchmark_cluster_cpu.py: CPU baseline benchmark using the MNE sample dataset (fsaverage ico-5, ~20K vertices)
- gpu_accel/patch_cupy_poc.py: CuPy proof-of-concept that monkey-patches _get_components with GPU connected_components (NVIDIA)
- gpu_accel/FEASIBILITY.md: Full analysis of GPU CCL algorithms, hardware requirements, and a three-phase plan (CuPy PoC → wgpu+Rust → fused pipeline)
- CLAUDE.md: Development guide with uv setup instructions and GPU work context

Related: mne-tools#5439, mne-tools#12609, mne-tools#7784, mne-tools#8095, mne-tools#13175

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use Numba to speed up converting spatiotemporal cluster results into a SourceEstimate. This PR brings about a 35% speedup.
Testing
Tested with left_auditory_vs_visual_0_to_None.npz (~148 MB): the sample dataset, duplicated to create 7 subjects, with an oct6 source space used to compute source estimates, then morphed to ico5 fsaverage. Before/after timings show a ~35% speedup (65% of the previous processing time).
When the cluster results are huge, the for-loop in mne.stats.summarize_clusters_stc is too time-consuming. I tried to speed up the entire for-loop, but that is currently not possible (a data-typing problem with numba). I hope this PR helps.
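To illustrate the kind of change involved, here is a minimal sketch of jit-compiling a per-cluster accumulation loop with numba. This is illustrative only, not MNE's actual implementation: the function name `fill_cluster` and the toy data are assumptions, and a pure-Python fallback is used when numba is not installed.

```python
import numpy as np

try:
    from numba import njit
except ImportError:  # fallback: run the loop uncompiled
    def njit(func):
        return func

@njit
def fill_cluster(data_summary, t_obs, vertex_inds, time_inds, col):
    # Accumulate one cluster's observed statistic into one column of the
    # summary array, summing each vertex's values over the cluster's
    # time points. numba compiles this loop to machine code on first call.
    for k in range(len(vertex_inds)):
        data_summary[vertex_inds[k], col] += t_obs[time_inds[k], vertex_inds[k]]

# Toy data: 5 time points x 10 vertices, one "cluster" spanning
# vertex 3 (times 0-1) and vertex 4 (time 2).
rng = np.random.default_rng(0)
t_obs = rng.standard_normal((5, 10))
time_inds = np.array([0, 1, 2])
vertex_inds = np.array([3, 3, 4])
data_summary = np.zeros((10, 2))
fill_cluster(data_summary, t_obs, vertex_inds, time_inds, 1)
```

Because the compiled function only touches plain NumPy arrays and integer indices, it avoids the per-iteration Python interpreter overhead that dominates when the number of clusters (or cluster sizes) is large.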