[MRG] DOC: update somato example #253

Merged: jasmainak merged 31 commits into jonescompneurolab:master from rythorpe:improve_somato_example on Feb 2, 2021

Conversation

@rythorpe (Contributor) commented Jan 22, 2021

Goals:

  • improve narrative
  • update MNE vertex selection
  • update drive and local network connectivity parameters
  • consolidate plots so that the median nerve response inverse solution (i.e., MNE vertex) and hnn-core simulation are plotted together
  • provide option for multiple trials that, when averaged together, obviously resemble the empirical inverse solution
  • Extra fix: allow sync_within_trials to be set from within Network constructor if add_drives_from_params is True

closes #247

@jasmainak (Collaborator) commented:

let me know when I should look!

@codecov-io commented Jan 25, 2021

Codecov Report

Merging #253 (1d6e639) into master (87566cf) will decrease coverage by 0.00%.
The diff coverage is 0.00%.


@@            Coverage Diff             @@
##           master     #253      +/-   ##
==========================================
- Coverage   52.93%   52.92%   -0.01%     
==========================================
  Files          23       23              
  Lines        2747     2749       +2     
==========================================
+ Hits         1454     1455       +1     
- Misses       1293     1294       +1     
Impacted Files Coverage Δ
hnn_core/network.py 13.92% <ø> (ø)
hnn_core/viz.py 5.26% <0.00%> (-0.03%) ⬇️
hnn_core/__init__.py 100.00% <0.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 87566cf...1d6e639.

@rythorpe (Contributor, Author) commented:

@jasmainak any idea why I seem to produce an MNE vertex time course (shown below) that looks different from the one on our documentation page?

[Screenshot from 2021-01-25: simulated MNE vertex time course]

Comment thread hnn_core/param/N20.json
Comment thread examples/plot_simulate_somato.py Outdated
@rythorpe rythorpe changed the title WIP: update somato example DOC: update somato example Jan 26, 2021
@rythorpe rythorpe requested a review from jasmainak January 26, 2021 04:59
@jasmainak (Collaborator) commented:

> any idea why I seem to produce an MNE vertex time course (shown below) that looks different from the one on our documentation page?

different MNE version :-) I don't think we can really guarantee exactly the same result. But it might be worthwhile to check if the source localization looks correct by doing stc.plot and then picking a vertex manually using the pyvista backend. I wanted to do this myself but for some reason the renderer is broken on my mac ...
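For intuition, "picking a vertex manually" boils down to a simple operation on the stc data array. Below is a minimal numpy sketch with synthetic data standing in for `stc.data` (the interactive pyvista step itself can't be shown in text, and the array here is invented for illustration):

```python
import numpy as np

# Synthetic stand-in for stc.data with shape (n_vertices, n_times);
# in the real example this would come from MNE's inverse solution.
rng = np.random.default_rng(0)
stc_data = rng.normal(size=(50, 200))
stc_data[17] += 10 * np.hanning(200)  # make vertex 17 clearly the most active

# Picking a vertex manually amounts to finding the vertex whose
# absolute activity peaks highest, then taking its row as the time course.
peak_vertex = int(np.argmax(np.max(np.abs(stc_data), axis=1)))
peak_time_course = stc_data[peak_vertex]

print(peak_vertex)  # -> 17
```

Clicking a vertex in the pyvista viewer does essentially this lookup for you, but lets you override the argmax choice with anatomical judgment.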

@rythorpe (Contributor, Author) commented Jan 26, 2021

> different MNE version :-) I don't think we can really guarantee exactly the same result. But it might be worthwhile to check if the source localization looks correct by doing stc.plot and then picking a vertex manually using the pyvista backend. I wanted to do this myself but for some reason the renderer is broken on my mac ...

After we resolve this, it might be a good idea to plot the dipole in source space for this example to show how we interpret orientation in the HNN framework. Is there an efficient way to do this that doesn't require a fancy graphics backend?

@jasmainak (Collaborator) commented:

> Is there an efficient way to do this that doesn't require a fancy graphics backend?

MNE does have a 'matplotlib' backend for plotting stc. It might be pretty slow though ... and neglected. OpenGL makes it fast. Speed comes at a cost ... always :-P

@jasmainak (Collaborator) commented:

see the plot here: https://mne.tools/stable/auto_tutorials/source-modeling/plot_visualize_stc.html#volume-source-estimates (just above "Volume Source Estimates")

Comment thread examples/plot_simulate_somato.py Outdated
@rythorpe (Contributor, Author) commented:

> see here the plot: https://mne.tools/stable/auto_tutorials/source-modeling/plot_visualize_stc.html#volume-source-estimates just above "Volume Source Estimates"

Can the matplotlib backend render a vector inverse solution?

Comment thread hnn_core/network.py
@jasmainak (Collaborator) commented:

> Can the matplotlib backend render a vector inverse solution?

honestly don't know, you just have to try ...

Comment thread hnn_core/param/N20.json
Comment thread examples/plot_simulate_somato.py Outdated (7 resolved threads)
@cjayb (Collaborator) commented Jan 26, 2021

This is fantastic stuff @rythorpe !! You should rebase to get @ntolley 's mod to the spike raster. I tried to simplify the plotting sections, but I see why you've done what you have. This is an advanced demo, so more explicit python is ok imho. So cool!

@jasmainak (Collaborator) commented:

Could be some temporary issue. The MPIBackend might be fragile with respect to timings etc. I would actually stick to JoblibBackend for the example if we can. So we don't get stuck with MPI issues in most examples except the one meant to demo it. I restarted CircleCI.

Did you check if you can even use multiple cores on Circle? If yes, and if MPI helps make the examples faster, we should consider having a function to set the backend globally and then set it in CircleCI so that the user does not see it in the examples and they can just run it without MPI. But anyhow, this is for 0.2
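The "function to set the backend globally" floated above could look something like the following. This is purely a hypothetical design sketch: hnn-core did not expose such an API at the time of this PR, and the names `set_default_backend` / `get_default_backend` are made up for illustration.

```python
# Hypothetical sketch of a process-wide default-backend registry.
# Nothing here is hnn-core API; it only illustrates the idea that CI could
# configure the backend once, so examples never mention MPI at all.

_DEFAULT_BACKEND = {"name": "joblib", "kwargs": {"n_jobs": 1}}


def set_default_backend(name, **kwargs):
    """Register a process-wide default simulation backend."""
    if name not in ("joblib", "mpi"):
        raise ValueError(f"unknown backend: {name}")
    _DEFAULT_BACKEND["name"] = name
    _DEFAULT_BACKEND["kwargs"] = kwargs


def get_default_backend():
    """Return (name, kwargs) of the current default backend."""
    return _DEFAULT_BACKEND["name"], _DEFAULT_BACKEND["kwargs"]


# CI could call this once at build time; simulate_dipole would then consult
# get_default_backend() instead of requiring an explicit context manager.
set_default_backend("joblib", n_jobs=2)
print(get_default_backend())  # -> ('joblib', {'n_jobs': 2})
```

The appeal of this design is that examples stay backend-agnostic: users without MPI run them unchanged, while CI can opt into parallelism behind the scenes.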

@jasmainak (Collaborator) commented:

If it fails again, try to do what I suggested above. But if you really want to stick to MPI, you can make the sphinx output verbose here and then make a push so you see all the outputs, not just what is coming from the error log.

@cjayb (Collaborator) left a review:

I think this looks good and reads well @rythorpe ! I have only one suggestion: consider adding a static image of the pyvista-generated plot, such as:

[Image: 3dsnapshot — pyvista-rendered source estimate]

and adding a 'caption' to it, so non-MNE users get an idea of what we're doing here.

@ntolley I'm done here, please merge when @rythorpe has commented on whether he wants to add the figure (either way, great work!)

@jasmainak (Collaborator) commented Jan 30, 2021

@rythorpe if you do add a static image, please directly link to @cjayb 's image above or post a comment with the image and then link to it. That way we don't pollute the git history with images. It's a bit of a hack but will do the job for now

@jasmainak (Collaborator) commented:

You can press 'i' to toggle the interface and get a cleaner image

@rythorpe (Contributor, Author) commented Feb 1, 2021

> @rythorpe if you do add a static image, please directly link to @cjayb 's image above or post a comment with the image and then link to it. That way we don't pollute the git history with images. It's a bit of a hack but will do the job for now

What do you think of 47e0181? I couldn't figure out how to get a formal caption to show up with an in-line figure in a sphinx-generated document, so I added more text to the paragraph prior. Also, I'm thinking that @cjayb's figure will do nicely since it looks very similar to what the user will produce should they run the code (unlike a cleaned-up version of the figure that I might create).

Alternatively, we could add these figures (lateral and top view, respectively) if anyone prefers:

[Image: somato_example_b — lateral view]

[Image: somato_example_c — top view]

@jasmainak (Collaborator) commented:

I think it looks good. Do you not want to plot on the inflated surface? I can't see the label otherwise ...

@rythorpe (Contributor, Author) commented Feb 1, 2021

> I think it looks good. Do you not want to plot on the inflated surface? I can't see the label otherwise ...

I'm hesitant to plot this on an inflated surface because the whole point of this exercise is to prompt users to observe how real cortical structure can influence the orientation of a dipolar inverse solution.

@jasmainak (Collaborator) commented:

but you don't lose that information in inflated surface. Sulci and gyri are shown in different colors ...

@rythorpe (Contributor, Author) commented Feb 1, 2021

> but you don't lose that information in inflated surface. Sulci and gyri are shown in different colors ...

This might be a conversation better had over video call. In theory, you're correct. In practice, however, I'm afraid that new users won't think twice about merely plotting a distributed inverse solution on an inflated brain, selecting the most responsive vertex (or mean or principal component vertex), and orienting the time course in HNN independent of known somatotopic organization and corresponding landmarks.

@jasmainak (Collaborator) commented:

humm ... okay, let's discuss over video. I'll merge this PR when the CI comes green.

I also noticed that you used smoothing=0. I know smoothing can be confusing but it's how folks have been plotting MNE solutions for years. If you see patchy dot-like activity at a certain location, it's not because the activity there is truly 0 but rather that point may not have been sampled in the source space.
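The "patchy dots" point can be illustrated with a toy example. This is not MNE's actual smoothing algorithm (which iteratively spreads values over the surface mesh), just the core idea on a made-up 1-D "surface": a decimated source space assigns values to only a subset of surface vertices, and with no smoothing the rest stay at zero.

```python
import numpy as np

# A decimated source space samples only a few of the surface vertices.
n_surface_vertices = 10
values = np.zeros(n_surface_vertices)
sampled = np.array([0, 4, 8])          # vertices present in the source space
values[sampled] = [2.0, 3.0, 1.0]      # estimated activity at those vertices

# One smoothing pass: each unsampled vertex takes the mean of its nonzero
# neighbors, so gaps adjacent to sampled vertices get filled in.
smoothed = values.copy()
for v in range(n_surface_vertices):
    if values[v] == 0:
        neighbors = values[max(v - 1, 0):v + 2]
        nonzero = neighbors[neighbors != 0]
        if nonzero.size:
            smoothed[v] = nonzero.mean()

print(values)    # patchy: zeros everywhere except the sampled vertices
print(smoothed)  # vertices next to a sample are now filled in
```

With `smoothing=0` you see the first array (apparent "holes" that just reflect decimation); repeated smoothing passes would eventually fill every vertex, which is the conventional MNE display.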

@cjayb (Collaborator) commented Feb 1, 2021

I think I'm with @rythorpe on not using the inflated surface. However, I just realised my screenshot is actually showing the anterior bank of whatever sulcus is posterior to the post-central one! Is that where the peak vertex is located? It could very well be, although physiologically the source must be on the "next sulcus over", i.e., the posterior bank of the Rolandic fissure, as we describe in the text...

This is getting complicated and potentially confusing! We shouldn't draw too much attention to this, it's supposed to be helpful.

@rythorpe (Contributor, Author) commented Feb 1, 2021

> I think I'm with @rythorpe on not using the inflated surface. However, I just realised my screenshot is actually showing the anterior bank of whatever sulcus is posterior to the post-central one! Is that where the peak vertex is located? It could very well be, although physiologically the source must be on the "next sulcus over", i.e., the posterior bank of the Rolandic fissure, as we describe in the text...
>
> This is getting complicated and potentially confusing! We shouldn't draw too much attention to this, it's supposed to be helpful.

@cjayb since our method extracts the first principal component time course from the post-central gyrus label, it's hard to tell exactly where the source we model is located. From the literature, you're right that it should be somewhere in area 3b/1 of the hand representation (i.e., the omega-shaped 'ripple' in the CS). I know it's late for you, but I've invited you to the quick Zoom call @jasmainak and I are having later today if you wish to join.
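For intuition on why the extracted time course doesn't pin down a single location: it is the first principal component across every vertex in the label, not the activity of any one vertex. MNE implements this as `extract_label_time_course(..., mode='pca_flip')` (which also applies a sign flip not shown here); the sketch below reproduces only the core idea in plain numpy on synthetic data.

```python
import numpy as np

# Synthetic label data: every vertex sees the same underlying source time
# course with a different gain, plus noise -- a crude stand-in for the
# vertices inside the post-central gyrus label.
rng = np.random.default_rng(42)
n_vertices, n_times = 30, 100
common = np.sin(np.linspace(0, 3 * np.pi, n_times))        # "true" source
weights = rng.uniform(0.5, 1.5, size=n_vertices)           # per-vertex gain
label_data = (np.outer(weights, common)
              + 0.05 * rng.normal(size=(n_vertices, n_times)))

# First principal component across vertices = dominant shared time course.
u, s, vt = np.linalg.svd(label_data, full_matrices=False)
pc1 = s[0] * vt[0]

# pc1 tracks the true source closely, but every vertex contributed to it
# (weighted by u[:, 0]), so it has no single anatomical location.
corr = abs(np.corrcoef(pc1, common)[0, 1])
print(corr)
```

This is exactly the trade-off discussed above: the PCA time course is robust to noise, but attributing it to one bank of one sulcus requires anatomical knowledge the decomposition itself doesn't provide.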

@jasmainak (Collaborator) commented Feb 1, 2021 via email

@rythorpe (Contributor, Author) commented Feb 1, 2021

How about these two images, showing the label of interest and the MNE activation, respectively?

[Image: fig_1 — label of interest]

[Image: fig_2 — MNE activation]

@jasmainak (Collaborator) commented Feb 1, 2021 via email

@rythorpe (Contributor, Author) commented Feb 1, 2021

I'm learning that formatting with reStructuredText can be frustrating. I've updated the code to produce the figures shown in my previous comment and updated the corresponding links, though I'm open to suggestions for improving the formatting.

Comment thread examples/plot_simulate_somato.py Outdated
# Now we run the simulation over 2 trials so that we can plot the average
# aggregate dipole. As in :ref:`the MPIBackend example
# <sphx_glr_auto_examples_plot_simulate_mpi_backend.py>`, we can use
# ``MPIBackend`` to reduce the simulation time by parallelizing across cells in
Collaborator replied:

I was going to merge but then noticed there are still references to MPIBackend :)

Contributor (Author) replied:

Good catch. Fixed in 1d6e639.

Comment thread examples/plot_simulate_somato.py Outdated
Comment thread examples/plot_simulate_somato.py Outdated
Comment on lines +98 to +99
# |mne_label_fig|
# |mne_activity_fig|
Collaborator replied:

actually if you put this after the commented lines, it would look most natural because the plot should appear after the code. But I won't insist if you disagree ...

@jasmainak jasmainak merged commit a4a8ac0 into jonescompneurolab:master Feb 2, 2021
@jasmainak (Collaborator) commented:

Thank you @rythorpe ! It took longer than expected but the final product looks great!

@rythorpe rythorpe deleted the improve_somato_example branch February 2, 2021 02:51
@cjayb (Collaborator) commented Feb 2, 2021

Looks amazing, really great touch with the figures!!

# .. _this MNE-python example: https://mne.tools/stable/auto_examples/inverse/plot_label_source_activations.html#sphx-glr-auto-examples-inverse-plot-label-source-activations-py # noqa
# .. |mne_label_fig| image:: https://user-images.githubusercontent.com/20212206/106524603-cfe75c80-64b0-11eb-9607-3415195c3e7a.png # noqa
# :width: 400
# .. |mne_activity_fig| image:: https://user-images.githubusercontent.com/20212206/106524542-b514e800-64b0-11eb-835e-497454e75eb9.png # noqa
Comment:

you do this as you don't want to have to set up pyvista + vtk on your CI?

Collaborator replied:

yes, it's not really worth the effort to set up headless 3D rendering at this stage of the HNN development cycle. We might do it after the 0.1 release though (which will happen soon)

Contributor (Author) replied:

Out of curiosity, what is the primary difficulty with setting up the 3D rendering packages on CI?

Collaborator replied:

OpenGL requires hardware (a GPU) which headless servers don't have or expose. A software emulation layer is needed, which I'm guessing for something as complex as VTK can be a bit of a hassle. It could also be a performance issue. It's certainly doable (look at MNE!), but I'm glad we decided to side-step it for now: there's plenty to work on as it is :)



Development

Successfully merging this pull request may close these issues.

adding drives from params doesn't set sync_within_trials

6 participants