Implement support for dynamic memories in the pooling allocator#5208
Merged
alexcrichton merged 2 commits into bytecodealliance:main on Nov 8, 2022
Conversation
This is a continuation of the thrust in bytecodealliance#5207 for reducing page faults and lock contention when using the pooling allocator. To that end this commit implements support for efficient memory management in the pooling allocator when using wasm that is instrumented with bounds checks.

The `MemoryImageSlot` type now avoids unconditionally shrinking memory back to its initial size during the `clear_and_remain_ready` operation, instead deferring optional resizing of memory to the subsequent call to `instantiate` when the slot is reused. The instantiation portion then takes the "memory style" as an argument which dictates whether the accessible memory must precisely fit or whether it's allowed to exceed the maximum. This in effect enables skipping a call to `mprotect` to shrink the heap when dynamic memory checks are enabled.

In terms of page faults and contention this should improve the situation by:

* Fewer calls to `mprotect`, since once a heap grows it stays grown and never shrinks. This means that a write lock is taken within the kernel much more rarely than before (only asymptotically now, not N-times-per-instance).
* Memory accessed after a heap growth operation will not fault if it was previously paged in by a prior instance and set to zero with `memset`. Unlike bytecodealliance#5207, which requires a 6.0 kernel to see this optimization, this commit enables the optimization for any kernel.

The major cost of choosing this strategy is naturally the performance hit to the wasm itself. This is being looked at in PRs such as bytecodealliance#5190 to improve Wasmtime's story here.

This commit does not implement any new configuration options for Wasmtime but instead reinterprets existing configuration options. The pooling allocator no longer unconditionally sets `static_memory_bound_is_maximum` and instead implements the support necessary for this memory type.
The other change in this commit is that the `Tunables::static_memory_bound` configuration option no longer gates the creation of a `MemoryPool`; the pool will now appropriately size itself to `instance_limits.memory_pages` if `static_memory_bound` is too small. This is done to accommodate fuzzing more easily, where `static_memory_bound` will become small during fuzzing and the configuration would otherwise be rejected and require manual handling. The spirit of the `MemoryPool` is one of large virtual address space reservations anyway, so it seemed reasonable to interpret the configuration this way.
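To illustrate the slot-reuse logic described above, here is a hypothetical, heavily simplified Rust model. The names loosely mirror the real `MemoryImageSlot` API, but this sketch only counts the syscalls it would make rather than calling `mprotect`; the `MemoryStyle` variants and all bodies are stand-ins for illustration, not the actual Wasmtime implementation:

```rust
// Simplified model: "dynamic" (bounds-checked) memories may reuse a grown
// slot as-is, while "static" memories must be precisely fit on reuse.
enum MemoryStyle {
    Dynamic,
    Static,
}

struct MemoryImageSlot {
    accessible: usize,     // bytes currently accessible (not PROT_NONE)
    mprotect_calls: usize, // counts simulated kernel-lock-taking syscalls
}

impl MemoryImageSlot {
    fn new() -> Self {
        MemoryImageSlot { accessible: 0, mprotect_calls: 0 }
    }

    // Previously this unconditionally shrank `accessible` back to the
    // initial size; now it leaves the mapping grown and defers any resize
    // to the next `instantiate`.
    fn clear_and_remain_ready(&mut self) {
        // zero the pages (memset in the real code), keep them mapped
    }

    fn instantiate(&mut self, initial: usize, style: MemoryStyle) {
        match style {
            // Slot is already big enough and may exceed the requested
            // size: skip the shrinking mprotect entirely.
            MemoryStyle::Dynamic if self.accessible >= initial => {}
            // Otherwise grow (or precisely fit) the accessible region.
            _ => {
                self.accessible = initial;
                self.mprotect_calls += 1;
            }
        }
    }

    fn grow(&mut self, new_size: usize) {
        if new_size > self.accessible {
            self.accessible = new_size;
            self.mprotect_calls += 1;
        }
    }
}

fn main() {
    let mut slot = MemoryImageSlot::new();
    slot.instantiate(64 * 1024, MemoryStyle::Dynamic); // 1st mprotect
    slot.grow(128 * 1024);                             // 2nd mprotect
    slot.clear_and_remain_ready();                     // zeroed, stays grown
    // Reusing the grown slot under dynamic bounds checks skips mprotect:
    slot.instantiate(64 * 1024, MemoryStyle::Dynamic);
    assert_eq!(slot.mprotect_calls, 2);

    // A "static" style slot must be precisely fit, so it always resizes:
    let mut precise = MemoryImageSlot::new();
    precise.instantiate(64 * 1024, MemoryStyle::Static);
    assert_eq!(precise.mprotect_calls, 1);
}
```

The point of the sketch is the asymptotic claim from the bullet list: the second `instantiate` on the grown slot performs no resize at all, so in steady state a reused slot pays zero `mprotect` calls rather than one per instantiation.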
This pull request has been labeled: "fuzzing" (cc @fitzgen).
These are causing errors when fuzzing and otherwise shouldn't, in theory, be too interesting to optimize for anyway since they likely aren't used in practice.
peterhuene
approved these changes
Nov 8, 2022
Member
Sorry for the delay on reviewing this. I really need to update my notification filtering to make review requests high-priority as they get lost in the flood.
Member
Author
No worries!