[WIP] Tutorial for LCMV beamformer#7085
Conversation
Codecov Report
@@ Coverage Diff @@
## master #7085 +/- ##
==========================================
+ Coverage 90.06% 90.06% +<.01%
==========================================
Files 454 454
Lines 82353 82523 +170
Branches 13016 13043 +27
==========================================
+ Hits 74172 74326 +154
- Misses 5356 5363 +7
- Partials 2825 2834 +9
drammock
left a comment
here are some comments that weren't easy to translate into simple edits that I could push myself.
| :local: | ||
| :depth: 2 | ||
| A beamformer is a spatial filter that reconstructs source activity by scanning |
Consider adding a section heading here "background on beamformers" or "introduction to beamformers". I would add to that section a brief discussion of what depth bias is and why it occurs, and mention that there are ways of dealing with it (described below).
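To make the depth-bias point concrete, here is a small NumPy toy sketch (my own made-up numbers, not code from the tutorial) of why raw LCMV output is biased toward sources with weak leadfields, and how unit-noise-gain weight normalization compensates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch = 10

# Toy data covariance (diagonal, well-conditioned) and its inverse.
R = np.eye(n_ch) + 0.1 * np.diag(rng.random(n_ch))
R_inv = np.linalg.inv(R)

# Two hypothetical leadfield columns with the same topography but different
# gains: a superficial source couples strongly, a deep source weakly.
pattern = rng.standard_normal(n_ch)
l_superficial = 1.0 * pattern
l_deep = 0.1 * pattern  # 10x weaker coupling to the sensors


def lcmv_power(leadfield, normalize=False):
    """Output variance of an LCMV filter for one leadfield column."""
    w = R_inv @ leadfield / (leadfield @ R_inv @ leadfield)
    if normalize:
        w = w / np.linalg.norm(w)  # unit-noise-gain normalization
    return w @ R @ w


# Without normalization, the weaker ("deeper") leadfield yields ~100x more
# output power -- the depth bias:
ratio_raw = lcmv_power(l_deep) / lcmv_power(l_superficial)

# With unit-noise-gain weights, the two sources come out comparable:
ratio_ung = (lcmv_power(l_deep, normalize=True)
             / lcmv_power(l_superficial, normalize=True))
print(ratio_raw, ratio_ung)  # ~100.0 vs. ~1.0
```

The actual `make_lcmv` implementation differs in detail, but this is the basic mechanism behind `weight_norm='unit-noise-gain'`.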
Attempted this now
| # We will use the sample data set for this tutorial and reconstruct source | ||
| # activity on the trials with left auditory stimulation. Note that | ||
| # beamformers are usually computed in a :class:`volume source space | ||
| # <mne.VolSourceEsitmate>`, as a visualization |
| # <mne.VolSourceEsitmate>`, as a visualization | |
| # <mne.VolSourceEstimate>`, as a visualization |
oops, I introduced a typo here
| # activity on the trials with left auditory stimulation. Note that | ||
| # beamformers are usually computed in a :class:`volume source space | ||
| # <mne.VolSourceEsitmate>`, as a visualization | ||
| # of only the surface activation can misrepresent the data. |
would be good to add a bit more detail here about the misrepresentation. You say "beamformers are usually computed in a volume source space" (emphasis added), so are there legitimate reasons to sometimes use a cortical surface constrained estimate? If so, under what conditions is that appropriate?
In my view, it is not appropriate to use surface models. I have seen people do it, but do not know what their motivation is ...
| # Read forward model | ||
| forward = mne.read_forward_solution(fwd_fname) |
I know in analysis scripts it's common to do all the file I/O early on, but I wonder if it might be better for the tutorial to wait to load the forward solution until later, when it is needed (and when the need for it is explained). Something more like this:
- load the (raw) data, epoch it, and plot an evoked butterfly plot
- explain why we need covariances, compute them, plot them, interpret them
- explain that we also need a forward solution, and why we need it / what role it plays in the beamformer (the "why" part is currently missing I think). This is where I would load the forward. I would also add a cross-reference to some other example, tutorial, or function explaining how to compute the forward model (as users will typically need to do that step)
- compute the spatial filter
- apply the spatial filter
- visualize reconstructed source
What do you think?
Tried to incorporate this now, I think it does make it easier to follow!
| # When looking at the covariance matrix plots, we can see that our data is | ||
| # slightly rank-deficient. Thus, we will regularize the covariance matrix by |
I would include this information right after the covariance plots (before beginning the next subsection). Would be great to also say what exactly it is about the plots that indicates rank-deficiency.
This might be overkill for this tutorial, but plotting the covariance matrix after the 5% regularization would be really informative for many users I think.
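If it helps for the tutorial text, here is a minimal NumPy sketch of what the 5% regularization amounts to: diagonal loading of a rank-deficient covariance matrix. (I'm assuming `reg` scales the mean diagonal power, which is roughly what the implementation does; this is an illustration, not the exact MNE code.)

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_samp = 8, 5  # fewer samples than channels -> rank-deficient estimate

# A sample covariance from too few (mean-centered) observations is singular.
X = rng.standard_normal((n_ch, n_samp))
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n_samp
print(np.linalg.matrix_rank(cov))  # 4, i.e. less than n_ch

# Diagonal loading: add 5% of the mean sensor power to the diagonal,
# which makes the matrix full-rank and safely invertible.
reg = 0.05
cov_reg = cov + reg * np.trace(cov) / n_ch * np.eye(n_ch)
print(np.linalg.matrix_rank(cov_reg))  # 8 == n_ch
```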
Ah, I will have to move this; I overlooked it before.
| # unit-noise-gain beamformer, which normalizes the beamformer weights to take | ||
| # care of an inherent depth bias. To achieve this, we set |
see prev. comment: depth bias should be introduced in "background" or "intro" section if possible; that will lead to some changes in wording here.
I did not move that up, as it is not a problem of beamformers but the forward model. I tried to make it more clear though.
| # as a scalar beamformer. It is also possible to compute a vector beamformer, | ||
| # which gives back three estimates per voxel, corresponding to the three | ||
| # directions of the source. This can be achieved by setting | ||
| # ``pick_ori='vector'``. |
... and will yield a :class:`volume vector source estimate <mne.VolVectorSourceEstimate>`
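A quick NumPy sketch of the shapes involved (illustrative arrays only, not tutorial code): a vector beamformer yields three orientation components per voxel, which can be collapsed to a scalar time course via the vector norm, as the magnitude() method on vector source estimates does:

```python
import numpy as np

# Hypothetical vector-beamformer output: 3 orientation components per voxel,
# per time sample -- shape (n_voxels, 3, n_times).
rng = np.random.default_rng(1)
n_vox, n_times = 4, 100
vec_data = rng.standard_normal((n_vox, 3, n_times))

# Collapse to one scalar time course per voxel by taking the vector norm
# across the three orientation components.
scalar_data = np.linalg.norm(vec_data, axis=1)
print(scalar_data.shape)  # (4, 100)
```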
| # linearly constrained minimum variance spatial filtering. | ||
| # Biomedical Engineering (1997) vol. 44 (9) pp. 867--880 | ||
| # | ||
| # .. _[2] Gross et al. (2001) Dynamic imaging of coherent sources: Studying |
oops, these should not have leading underscores; they should be .. [2] Gross et al. (2001) etc.
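For reference, the corrected reST citation-target syntax would look something like this (reference details filled in from memory of the Van Veen and Gross papers, so worth double-checking):

```rst
.. [1] Van Veen et al. Localization of brain electrical activity via linearly
   constrained minimum variance spatial filtering. IEEE Transactions on
   Biomedical Engineering (1997) vol. 44 (9) pp. 867--880
.. [2] Gross et al. Dynamic imaging of coherent sources: Studying neural
   interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694--699
```

These can then be cited in the tutorial text as [1]_ and [2]_.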
force-pushed from 88e0156 to 88ae4a5
larsoner
left a comment
Rendering looks reasonable:
I think we should get rid of examples/inverse/plot_lcmv_beamformer.py and examples/inverse/plot_lcmv_beamformer_volume.py in favor of this. Can you check the content that is in those, ensure that it's (sufficiently) covered here, and then delete those files?
@larsoner, I just looked at the examples:
That reminds me: I think a cool
If using surface source space is not recommended, let's just get rid of that example entirely.
Yes morphing in volume source space should be usable now in principle. I'm not sure how much people have tried it, though. I would just add/adapt that code here and delete the existing example.
Actually I think it would work better as a tutorial. They could remain separate:
It might also make sense just to fold the contrast tutorial text into the general beamformer tutorial that you're making in this PR. I'll let you decide which makes more sense when you work on it :)
drammock
left a comment
this is great @britta-wstnr and close to finished IMO. I've suggested several small tweaks and one or two medium-sized changes, hopefully these are not too onerous :)
| # channel types. To do this we will supply a noise covariance matrix to the | ||
| # beamformer, which will be used for whitening. | ||
| # The data covariance matrix should be estimated from a relevant time window | ||
| # and incorporate enough samples for a stable estimate. Here, we use a time |
can you unpack "enough samples" and "stable estimate" so that readers can develop intuitions about how many is enough? What would be evidence of enough vs. not enough?
Well, MNE-Python actually will throw a warning (but I don't know at which threshold). There are rules of thumb out there (also published) but in the end it is the question whether the source reconstruction looks weird or not ... and whether the covariance matrix shows the typical pattern - but I am not sure how to explain this. Ideas?
I looked through the compute_covariance code but didn't easily find a "not enough samples" warning, so it must be buried a few layers deep (perhaps in scipy or sklearn code that we call). So I'd say, if there are published rules of thumb, I think it would be good to at least mention them. Something like "...and incorporates enough samples for a stable estimate (XXX is a rule of thumb for the minimum number of samples, according to YYY)." WDYT?
So Brookes et al. (2008) say a useful rule of thumb "is that the number of samples N must be greater than the total number of channels M". However, they also discuss doing several beamformer estimates with several window lengths - if the beamformer estimate changes across window lengths, one operates "on the slope of [the] power curves", i.e., does not have a stable covariance estimate.
Ref: Brookes et al., 2008, Optimising experimental design for MEG beamformer imaging, NeuroImage 39, 1788-1802.
Maybe then we could say "...enough samples for a stable estimate. A good rule of thumb is to use at minimum as many time samples as there are channels in the data; see [3]_ for more detailed advice on covariance estimation for beamformers."
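If useful for the tutorial, here is a small NumPy sketch (white-noise toy data, so the true covariance is known; not MNE code) illustrating both failure modes: a singular estimate when N < M, and a noisy estimate near N == M that stabilizes well past the rule of thumb:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ch = 20                 # "M": number of channels
true_cov = np.eye(n_ch)   # true sensor covariance (white noise, for simplicity)


def sample_cov(n_samples):
    """Sample covariance estimated from n_samples time points."""
    X = rng.standard_normal((n_ch, n_samples))
    return X @ X.T / n_samples


# With fewer samples than channels, the estimate is singular (not invertible):
assert np.linalg.matrix_rank(sample_cov(n_ch // 2)) < n_ch

# Even at N == M the estimate is noisy; well past the N > M rule of thumb,
# the error relative to the true covariance shrinks considerably:
err_at_M = np.mean([np.linalg.norm(sample_cov(n_ch) - true_cov)
                    for _ in range(20)])
err_50x = np.mean([np.linalg.norm(sample_cov(50 * n_ch) - true_cov)
                   for _ in range(20)])
print(err_at_M, err_50x)  # the second is much smaller
```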
@larsoner I added the morphing from the volume beamformer tutorial. I have two reasons against incorporating the conditions here. First, it will get really long and thus harder for novices to see through. Second, very practically: it's nicer to show the contrasting of conditions on a different data set where it makes more sense; here, you could only do left vs. right or visual vs. auditory, and then the "red and blue blobs" become quite arbitrary. What do you think, also @drammock?
I agree with @britta-wstnr that doing the multi-subject comparison would be better with a different dataset in a different tutorial or example. This one is long enough already.
I have now added the vector case to the tutorial and did some plotting of the components as well. Does anyone know how those components are actually ordered? The docstring of
Are we still okay with deleting the two examples?
Yes, please delete. It should indeed be XYZ in MRI surface RAS coordinates.
Deleted the obsolete examples; can someone please double-check that they're the right ones 😅?
Yes, you deleted the right ones 😄
drammock
left a comment
@britta-wstnr just two tiny suggestions from me and I'm ready to call this one finished. But it looks like there's a merge conflict now so you'll need to rebase and force-push.
Co-Authored-By: Daniel McCloy <dan@mccloy.info>
Adding fixes by Daniel Co-Authored-By: Daniel McCloy <dan@mccloy.info>
force-pushed from 5b674d0 to 70e3224
I added @drammock's suggestions and rebased.
I haven't been doing changelog entries for tutorial / example updates. Merging! Thanks @britta-wstnr
Thanks @britta-wstnr!
Creating a tutorial for beamforming.
This is still WIP; some info might still be added. But it's a start, @drammock!