MRG, ENH: Add brain movies to rendered examples #8265
agramfort merged 7 commits into mne-tools:master from …
Conversation
Okay, seems to be working. HTML sizes can be reduced by reducing the … Let me know if people think it's worthwhile continuing, and if so I'll add some tests.
Honestly this is really cool, but after our recent discussions about the crazy long doc build I am cautious here. We should first try to reduce build time before allowing a new bump in time. My 2c.
At least for dSPM on CircleCI it's only an extra 20 sec. For the LCMV volume one I agree it's too much.
Can you activate it only for one example?
Yep, you have to tell it which examples to turn into movies via comments, so I can just deactivate the LCMV one for now. Then the only one that's a movie will be …
OK, fair enough then!
# The documentation website's movie is generated with:
# brain.save_movie(..., tmin=0.05, tmax=0.15, interpolation='linear',
#                  time_dilation=20, framerate=10, time_viewer=True)
@agramfort the scraper looks for a line that is # brain.save_movie( and, if it's there, actually makes a movie using that call (equivalently, anyway). So it allows/forces us to keep our narrative doc descriptive while also allowing us to specify parameters.
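As a rough illustration of how a comment-triggered scraper can recover the parameters from a commented call like the one above, here is a minimal sketch. parse_save_movie_comment is a hypothetical helper written for this explanation, not MNE's actual implementation:

```python
import ast


def parse_save_movie_comment(source_lines):
    """Find a commented ``brain.save_movie(`` call in example source and
    return its keyword arguments as a dict (hypothetical helper)."""
    buf = []
    collecting = False
    for line in source_lines:
        stripped = line.strip()
        if stripped.startswith("# brain.save_movie("):
            collecting = True
        if collecting and stripped.startswith("#"):
            # Accumulate the commented call, which may span several lines
            buf.append(stripped.lstrip("#").strip())
            if stripped.endswith(")"):
                break
    if not buf:
        return None  # no trigger comment -> no movie for this example
    # "..." parses as an Ellipsis literal, so the commented call is
    # valid Python and can be parsed as an expression
    call = ast.parse(" ".join(buf), mode="eval").body
    return {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
```

A scraper along these lines could then call brain.save_movie(**kwargs) with the parsed values, which is what "equivalently, anyway" gestures at: the comment stays descriptive in the narrative doc while driving the actual rendering.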
OK.
Then the next question is: are there any other examples that you'd like to see turned into movies? I'd like a vector source estimate one; I can look for that.
Sounds good. No other ideas.
drammock left a comment:
LGTM pending removal of the LCMV movie for now.
Pushed a commit to fix the documentation of … Should we make …
Thx @larsoner!
Should I backport? It does have some nice doc updates in addition to the candy. It would be nice to make _Brain public, maybe with a warning about the API being incomplete...





Eventually it would be nice to have animations play for some of our source modeling examples. I plan to:
- add time_viewer=False to brain.screenshot to allow getting the brain+traces instead of just the traces
- use matplotlib's javascript (maybe just by subclassing their animation class?) and sphinx-gallery's animation code (maybe by making this public? not sure), turning brain outputs into matplotlib figures and using matplotlib_scraper(...) directly
- add # sphinx_gallery_brain_movie code parsing that does brain.save_movie(...) then uses the javascript to embed a movie

Not 100% sure it will work or produce movies of a reasonable size, but it would be pretty cool if it worked.
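For the embedding step of that plan, one plausible route is to have the scraper emit a raw-HTML reST block pointing at the saved movie file. movie_rst below is an illustrative sketch under that assumption, not the PR's actual implementation (sphinx-gallery scrapers normally return reST for captured images, and the real embedding details may differ):

```python
def movie_rst(movie_fname):
    """Return a reST snippet embedding a saved movie via an HTML5
    <video> tag (illustrative sketch only)."""
    return (
        ".. raw:: html\n\n"
        "    <video controls loop autoplay muted "
        f'src="{movie_fname}"></video>\n'
    )
```

A scraper would append this snippet to the example's generated reST after brain.save_movie(...) writes the file, so Sphinx passes the video markup through to the built HTML page.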