
[WIP] Tutorial for LCMV beamformer#7085

Merged
drammock merged 26 commits into mne-tools:master from britta-wstnr:lcmv_tutorial
Mar 16, 2020

Conversation

@britta-wstnr
Member

Creating a tutorial for beamforming.
This is still a WIP and some info might be added, but it's a start, @drammock!

@codecov

codecov bot commented Nov 19, 2019

Codecov Report

Merging #7085 into master will increase coverage by <.01%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #7085      +/-   ##
==========================================
+ Coverage   90.06%   90.06%   +<.01%     
==========================================
  Files         454      454              
  Lines       82353    82523     +170     
  Branches    13016    13043      +27     
==========================================
+ Hits        74172    74326     +154     
- Misses       5356     5363       +7     
- Partials     2825     2834       +9

Member

@drammock left a comment

Here are some comments that weren't easy to translate into simple edits that I could push myself.

:local:
:depth: 2

A beamformer is a spatial filter that reconstructs source activity by scanning

Member

Consider adding a section heading here "background on beamformers" or "introduction to beamformers". I would add to that section a brief discussion of what depth bias is and why it occurs, and mention that there are ways of dealing with it (described below).

Member Author

Attempted this now.
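Since a "background on beamformers" section is being discussed, the core of the LCMV filter can be sketched with plain NumPy on toy data (a hand-rolled illustration only; the tutorial itself uses `mne.beamformer.make_lcmv`). In the standard unit-gain formulation, the weights for a source with leadfield column `l` are `w = C⁻¹l / (lᵀC⁻¹l)`, where `C` is the data covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8

# Toy full-rank data covariance and a toy leadfield column for one source.
x = rng.standard_normal((n_channels, 500))
C = x @ x.T / 500
l = rng.standard_normal(n_channels)

# Unit-gain LCMV weights: w = C^{-1} l / (l^T C^{-1} l)
Cinv_l = np.linalg.solve(C, l)
w = Cinv_l / (l @ Cinv_l)

# The unit-gain constraint: the filter passes activity from its target
# location with gain exactly 1 while minimizing variance from elsewhere.
print(np.isclose(w @ l, 1.0))  # True
```

The constraint `wᵀl = 1` is what makes the filter "pass" its target location while suppressing contributions from all other locations.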

# We will use the sample data set for this tutorial and reconstruct source
# activity on the trials with left auditory stimulation. Note that
# beamformers are usually computed in a :class:`volume source space
# <mne.VolSourceEsitmate>`, as a visualization

Member

Suggested change
# <mne.VolSourceEsitmate>`, as a visualization
# <mne.VolSourceEstimate>`, as a visualization

oops, I introduced a typo here

Member Author

Fixed.

# activity on the trials with left auditory stimulation. Note that
# beamformers are usually computed in a :class:`volume source space
# <mne.VolSourceEsitmate>`, as a visualization
# of only the surface activation can misrepresent the data.

Member

would be good to add a bit more detail here about the misrepresentation. You say "beamformers are usually computed in a volume source space" (emphasis added), so are there legitimate reasons to sometimes use a cortical surface constrained estimate? If so, under what conditions is that appropriate?

Member Author

In my view, it is not appropriate to use surface models. I have seen people do it, but do not know what their motivation is ...

Comment on lines +45 to +46
# Read forward model
forward = mne.read_forward_solution(fwd_fname)

Member

I know in analysis scripts it's common to do all the file I/O early on, but I wonder if it might be better for the tutorial to wait to load the forward solution until later, when it is needed (and when the need for it is explained). Something more like this:

  • load the (raw) data, epoch it, and plot an evoked butterfly plot
  • explain why we need covariances, compute them, plot them, interpret them
  • explain that we also need a forward solution, and why we need it / what role it plays in the beamformer (the "why" part is currently missing I think). This is where I would load the forward. I would also add a cross-reference to some other example, tutorial, or function explaining how to compute the forward model (as users will typically need to do that step)
  • compute the spatial filter
  • apply the spatial filter
  • visualize reconstructed source

What do you think?

Member Author

Tried to incorporate this now, I think it does make it easier to follow!

Comment on lines +94 to +95
# When looking at the covariance matrix plots, we can see that our data is
# slightly rank-deficient. Thus, we will regularize the covariance matrix by

Member

I would include this information right after the covariance plots (before beginning the next subsection). Would be great to also say what exactly it is about the plots that indicates rank-deficiency.

This might be overkill for this tutorial, but plotting the covariance matrix after the 5% regularization would be really informative for many users I think.

Member Author

Ah, I will have to move this; I overlooked it before.

Member Author

(Done now.)
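For readers wondering what the 5% regularization does concretely: one common scheme is diagonal loading, i.e., adding a small multiple of the average sensor power to the diagonal of the covariance matrix so it becomes invertible. A toy NumPy sketch (not necessarily the exact code path inside `mne.beamformer.make_lcmv`):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 10, 6

# Rank-deficient covariance: fewer samples than channels.
x = rng.standard_normal((n_channels, n_samples))
C = x @ x.T / n_samples
print(np.linalg.matrix_rank(C))  # 6 -> rank-deficient

# Diagonal loading: add 5% of the mean sensor power to the diagonal,
# which makes the matrix positive definite and hence invertible.
reg = 0.05
C_reg = C + reg * np.trace(C) / n_channels * np.eye(n_channels)
print(np.linalg.matrix_rank(C_reg))  # 10 -> full rank
```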

Comment on lines +100 to +101
# unit-noise-gain beamformer, which normalizes the beamformer weights to take
# care of an inherent depth bias. To achieve this, we set

Member

see prev. comment: depth bias should be introduced in "background" or "intro" section if possible; that will lead to some changes in wording here.

Member Author

I did not move that up, as it is not a problem of beamformers but of the forward model. I tried to make it clearer, though.
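The weight normalization mentioned here can be sketched in a few lines of NumPy (toy data; a hand-rolled illustration, not MNE's implementation). The unit-noise-gain beamformer rescales the unit-gain weights to unit norm, so that spatially white sensor noise projects with the same power at every scanned location, counteracting the depth bias:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels = 8
x = rng.standard_normal((n_channels, 500))
C = x @ x.T / 500
l = rng.standard_normal(n_channels)

# Unit-gain LCMV weights.
Cinv_l = np.linalg.solve(C, l)
w = Cinv_l / (l @ Cinv_l)

# Unit-noise-gain variant: rescale so ||w|| = 1. With white sensor noise,
# the projected noise power w^T w is then identical at every location.
w_ung = w / np.linalg.norm(w)
print(np.isclose(w_ung @ w_ung, 1.0))  # True
```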

# as a scalar beamformer. It is also possible to compute a vector beamformer,
# which gives back three estimates per voxel, corresponding to the three
# directions of the source. This can be achieved by setting
# ``pick_ori='vector'``.

Member

... and will yield a :class:`volume vector source estimate <mne.VolVectorSourceEstimate>`

Member Author

Done.
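For the vector case, the single leadfield column becomes an `n_channels x 3` matrix with one column per source orientation, and the weights form a `3 x n_channels` matrix that returns three time courses per voxel. A toy NumPy sketch (hand-rolled, not MNE's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels = 8
x = rng.standard_normal((n_channels, 500))
C = x @ x.T / 500

# Leadfield for one voxel: one column per source orientation (x, y, z).
L = rng.standard_normal((n_channels, 3))

# Vector LCMV weights: W = (L^T C^{-1} L)^{-1} L^T C^{-1}
Cinv_L = np.linalg.solve(C, L)
W = np.linalg.solve(L.T @ Cinv_L, Cinv_L.T)

# Applying W to sensor data yields three time courses for this voxel;
# each orientation is reconstructed with unit gain (W L = identity).
print(W.shape)                            # (3, 8)
print(np.allclose(W @ L, np.eye(3)))      # True
```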

# linearly constrained minimum variance spatial filtering.
# Biomedical Engineering (1997) vol. 44 (9) pp. 867--880
#
# .. _[2] Gross et al. (2001) Dynamic imaging of coherent sources: Studying

Member

oops, these should not have leading underscores, should be .. [2] Gross et al (2001) etc

Member Author

Fixed.

Member

@larsoner left a comment

Rendering looks reasonable:

https://18483-1301584-gh.circle-artifacts.com/0/dev/auto_tutorials/source-modeling/plot_beamformer_lcmv.html#sphx-glr-auto-tutorials-source-modeling-plot-beamformer-lcmv-py

I think we should get rid of examples/inverse/plot_lcmv_beamformer.py and examples/inverse/plot_lcmv_beamformer_volume.py in favor of this. Can you check the content that is in those, ensure that it's (sufficiently) covered here, and then delete those files?

@larsoner added this to the 0.20 milestone Mar 11, 2020
@britta-wstnr
Member Author

@larsoner, I just looked at the examples:

examples/inverse/plot_lcmv_beamformer.py is very close to what we do here (less explanation of course), but also explores different ways of source orientation selection (None, max-power, normal). It is done in surface space (to be comparable with the normal orientation), which is something I would not want to add to this tutorial as it is not best practice. But it might be nice to explore e.g. the non-orientation selection case as well?

examples/inverse/plot_lcmv_beamformer_volume.py is also very close (same data etc.), but adds some morphing at the end. That reminds me: what is the status of the morphing that is useful for subject comparisons? If that is working, I think it makes sense to add it; otherwise it might suggest that group comparisons are possible while they are not?

That reminds me: I think a cool example to have might be this one: https://brittas-summerofcode.blogspot.com/2017/07/comparing-experimental-conditions-with.html
It's from my GSOC and I am happy to contribute it here - I think it might be valuable to show what to do with the covariance matrix when comparing conditions.

@larsoner
Member

that is something I would not want to add to this tutorial as it is not best practice. But it might be nice to explore e.g. the non-orientation selection case as well?

If using surface source space is not recommended, let's just get rid of that example entirely.

but adds some morphing at the end. That reminds me: what is the status of the morphing that is useful for subject comparisons? If that is working I think it makes sense to add it, otherwise it might allude to group comparisons being possible while they are not?

Yes morphing in volume source space should be usable now in principle. I'm not sure how much people have tried it, though. I would just add/adapt that code here and delete the existing example.

I think a cool example to have might be this one:

Actually I think it would work better as a tutorial. They could remain separate:

  • the tutorial in this PR can be about beamformers in general, how to use LCMV beamformer options (norms, orientations), eventually morphing, etc.
  • a new / follow-up plot_lcmv_beamformer_conditions.py can cover specifically how to use beamformers with multiple conditions of interest / contrasts.

It might also make sense just to fold the contrast tutorial text into the general beamformer tutorial that you're making in this PR. I'll let you decide which makes more sense when you work on it :)

Member

@drammock left a comment

this is great @britta-wstnr and close to finished IMO. I've suggested several small tweaks and one or two medium-sized changes, hopefully these are not too onerous :)

# channel types. To do this we will supply a noise covariance matrix to the
# beamformer, which will be used for whitening.
# The data covariance matrix should be estimated from a relevant time window
# and incorporate enough samples for a stable estimate. Here, we use a time

Member

Can you unpack "enough samples" and "stable estimate" so that readers can develop intuitions about how many is enough? What would be evidence of enough vs. not enough?

Member Author

Well, MNE-Python actually will throw a warning (but I don't know at which threshold). There are rules of thumb out there (also published), but in the end the question is whether the source reconstruction looks weird or not ... and whether the covariance matrix shows the typical pattern - but I am not sure how to explain this. Ideas?


Member

I looked through the compute_covariance code but didn't easily find a "not enough samples" warning, so it must be buried a few layers deep (perhaps in scipy or sklearn code that we call). So I'd say, if there are published rules of thumb, I think it would be good to at least mention them. Something like "...and incorporates enough samples for a stable estimate (XXX is a rule of thumb for the minimum number of samples, according to YYY)." WDYT?

Member Author

So Brookes et al. 2008 say a useful rule of thumb "is that the number of samples N must be greater than the total number of channels M". They, however, also discuss computing several beamformer estimates with several window lengths; if the beamformer estimate changes, one operates "on the slope of [the] power curves", i.e., does not have a stable covariance estimate.

Ref: Brookes et al., 2008, Optimising experimental design for MEG beamformer imaging, NeuroImage 39, 1788-1802.


Member

Maybe then we could say "...enough samples for a stable estimate. A good rule of thumb is to use at minimum as many time samples as there are channels in the data; see [3]_ for more detailed advice on covariance estimation for beamformers."

Member Author

Done!
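The rule of thumb can be illustrated with toy data: a sample covariance estimated from N time samples of M channels has rank at most min(N, M), so with fewer samples than channels the matrix cannot be full rank (a hand-rolled NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels = 10

# Fewer samples than channels: the covariance estimate is rank-deficient.
few = rng.standard_normal((n_channels, 5))
C_few = few @ few.T / 5
print(np.linalg.matrix_rank(C_few))   # 5

# Plenty of samples (N >> M): the estimate is full rank and stable.
many = rng.standard_normal((n_channels, 500))
C_many = many @ many.T / 500
print(np.linalg.matrix_rank(C_many))  # 10
```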

@britta-wstnr
Member Author

@larsoner I added the morphing from the volume beamformer tutorial.
I will see tomorrow whether I can add in e.g. a vector beamformer.

I have two reasons against incorporating the conditions here. First, it will get really long and thus harder for novices to see through. Second, very practically, it's nicer to show the contrasting of conditions on a different data set where it makes more sense: here, you could only do left vs. right or visual vs. auditory, and then the "red and blue blobs" become quite arbitrary.

What do you think, also @drammock ?

@drammock
Member

I agree with @britta-wstnr that doing the multi-subject comparison would be better with a different dataset in a different tutorial or example. This one is long enough already.

@britta-wstnr
Member Author

I added now the vector case to the tutorial and did some plotting of the components as well.

Does anyone know how those components are actually ordered? The docstring of VolVectorSourceEstimate says "Each dipole contains three vectors that denote the dipole strength in X, Y and Z directions over time.", so I assumed this is the order (for plotting)?

@britta-wstnr
Member Author

Are we still okay with deleting the two examples?

@larsoner
Member

Yes please delete

It should indeed be XYZ in MRI surface RAS coordinates

@britta-wstnr
Member Author

Deleted the obsolete examples; can someone please double-check that these are the right ones 😅?

@drammock
Member

Yes you deleted the right ones 😄

Member

@larsoner left a comment

LGTM, @drammock feel free to merge if you're happy

Member

@drammock left a comment

@britta-wstnr just two tiny suggestions from me and I'm ready to call this one finished. But it looks like there's a merge conflict now so you'll need to rebase and force-push.

@britta-wstnr
Member Author

I added @drammock's suggestions and rebased.
Do I have to add anything to the changelog?

@drammock
Member

I haven't been doing changelog entries for tutorial / example updates. Merging! Thanks @britta-wstnr

@drammock merged commit 5de659d into mne-tools:master Mar 16, 2020
@agramfort
Member

agramfort commented Mar 16, 2020 via email
