WIP: Improve _Brain memory footprint #8018
Conversation
Also how many

Waiting for the first CI report, then I'll push again.
These should already live in
Eventually we will want to support having multiple data overlaid, possibly with different colormaps (?), but we probably need to refactor quite a bit to support it anyway.
mne/viz/_brain/_brain.py
Outdated
self._hemi_meshes[h] = mesh
self._hemi_actors[h] = actor
actor = self._hemi_actors[h]
self._renderer.add_actor(actor)
Are you allowed in VTK to add the same actor to multiple scenes/views?
In practice, it works great for me but apparently this is not recommended...
Reference: http://vtk.1045678.n5.nabble.com/Sharing-vtkRenderer-td5735337.html
I have to rethink all that
The example in the thread involves multiple windows, whereas in our case it's multiple renderers within the same window. I'll rework that anyway.
Everywhere I look, it says that actors cannot be shared, and the same goes for mappers. The only thing that can be shared is the PolyData itself. I'll restart from scratch.
mne/viz/_brain/_brain.py
Outdated
for mesh in hemi_data['mesh']:
if hemi_data['mesh'] is not None:
mesh = hemi_data['mesh']
if mesh is not None:
mne/viz/backends/_pyvista.py
Outdated
hide=False):
def _add_polydata(renderer, polydata, name=None,
hide=False):
if not hasattr(renderer, "plotter"):
Perhaps add a comment about when this can happen?
Running mayavi on this PR (should be the same as I get: Adding

On this PR it looks like memory is reduced (hooray!), but only by a few hundred MB (better than nothing): Removing the

So it must be something about these options. Let's try And then

@GuillaumeFavelier it seems that the problem is really having both of these enabled at once, which means it's something about how we do
Never mind that last point, it doesn't make any sense. I'll work on looking into the
I think I found the circular ref: it was passing the
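As an aside, a hard back-reference between two objects (e.g. a mesh holding its renderer while the renderer also holds the mesh) creates exactly this kind of cycle: neither object is freed by reference counting, only by the slower cycle collector. A hypothetical sketch of breaking such a cycle with `weakref` (class names are illustrative, not this PR's code):

```python
import weakref

class Renderer:
    pass

class Mesh:
    def __init__(self, renderer):
        # A plain ``self.renderer = renderer`` would complete a reference
        # cycle once the renderer also stores the mesh; a weak reference
        # does not keep the renderer alive, so no cycle is formed.
        self._renderer = weakref.ref(renderer)

    @property
    def renderer(self):
        return self._renderer()  # None once the renderer is gone

renderer = Renderer()
mesh = Mesh(renderer)
renderer.mesh = mesh  # the forward (strong) reference is fine
assert mesh.renderer is renderer

del renderer  # refcount drops to zero immediately; no gc pass needed
assert mesh.renderer is None
```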
The other huge memory blowup is actually storing
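When hunting for where a blowup like this comes from, the stdlib `tracemalloc` module can attribute allocations to source lines; a generic sketch, not tied to this PR's code:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for the real culprit: an accidental store of large arrays
cache = [bytearray(10**6) for _ in range(50)]  # ~50 MB kept alive

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)  # file:line plus cumulative size allocated there

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

The top statistics point straight at the line that allocated the most, which is usually faster than guessing from `del`/re-run experiments.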
oh boy...
While messing with this, I noticed this in testing: PyVista is much slower for

Basically I think every call to
It's a good idea. I mean, if that information is available, we should use it instead of re-calculating from scratch.
... and even if they're not available:
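For illustration, here is a rough numpy sketch of the work that computing vertex normals from scratch entails, which is exactly what reusing precomputed normals such as self.geo[hemi].nn avoids (the function name and details are mine, not the PR's):

```python
import numpy as np

def vertex_normals(verts, faces):
    """Area-weighted vertex normals for a triangle mesh."""
    tris = verts[faces]                      # (n_faces, 3, 3)
    # Un-normalized face normals; their length is twice the triangle
    # area, which provides the usual area weighting for free.
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    nn = np.zeros_like(verts)
    for i in range(3):                       # accumulate onto each corner vertex
        np.add.at(nn, faces[:, i], fn)
    norm = np.linalg.norm(nn, axis=1, keepdims=True)
    norm[norm == 0] = 1.0                    # avoid division by zero on isolated vertices
    return nn / norm

# Tiny example: one triangle in the z=0 plane -> all normals along +z
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
print(vertex_normals(verts, faces))
```

For a cortical surface with ~150k vertices per hemisphere this is a nontrivial amount of array work to repeat on every draw.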
3f1b199 to bd6f426
In the current state, we're close to the goal, but I'm not too sure about the memory gain anymore; I need to run

At least we don't re-create a
Yes, we passed

In this PR, because we provide the normals directly/manually, the surfaces should be smooth shaded now. Are they?
When the
Great, Phong is back! Thanks @GuillaumeFavelier






This PR follows #7927 (comment) and uses various solutions to decrease memory consumption:
- share the vtkPolyData dataset between views rather than creating a new vtkPolyData dataset each time
- merge mesh() and _mesh()
- use self.geo[hemi].nn to avoid unnecessary computation

Feel free to comment on this @larsoner