
Implement support for dynamic memories in the pooling allocator #5208

Merged
2 commits merged into bytecodealliance:main on Nov 8, 2022

Conversation

alexcrichton
Member

This is a continuation of the thrust in #5207 for reducing page faults and lock contention when using the pooling allocator. To that end this commit implements support for efficient memory management in the pooling allocator when using wasm that is instrumented with bounds checks.

The MemoryImageSlot type now avoids unconditionally shrinking memory back to its initial size during the clear_and_remain_ready operation, instead deferring optional resizing of memory to the subsequent call to instantiate when the slot is reused. The instantiation portion then takes the "memory style" as an argument which dictates whether the accessible memory must be precisely fit or whether it's allowed to exceed the maximum. This in effect enables skipping a call to mprotect to shrink the heap when dynamic memory checks are enabled.
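The shift described above can be sketched as follows. `MemoryImageSlot`, `clear_and_remain_ready`, `instantiate`, and the notion of a "memory style" come from the PR itself; the fields, the `MemoryStyle` variants, and the return value are simplified stand-ins for illustration, not Wasmtime's actual API:

```rust
// Simplified model: shrinking is deferred from `clear_and_remain_ready`
// (slot teardown) to `instantiate` (slot reuse), and only happens when
// the memory style demands a precise fit.

#[derive(Clone, Copy, PartialEq)]
enum MemoryStyle {
    /// Wasm is instrumented with bounds checks, so the accessible region
    /// is allowed to exceed the instance's current memory size.
    Dynamic,
    /// Out-of-bounds accesses are caught by guard pages, so the
    /// accessible region must precisely fit the current memory size.
    Static,
}

struct MemoryImageSlot {
    /// Bytes currently mapped read/write (already paged in and zeroed).
    accessible: usize,
}

impl MemoryImageSlot {
    /// Reset contents for reuse but deliberately keep the mapping:
    /// no shrinking `mprotect` call happens here any more.
    fn clear_and_remain_ready(&mut self) {
        // A memset-to-zero of `self.accessible` bytes would happen here.
    }

    /// On reuse, shrink (an `mprotect` call) only when the style demands
    /// a precise fit. Returns whether a shrinking syscall was needed.
    fn instantiate(&mut self, initial_size: usize, style: MemoryStyle) -> bool {
        if style == MemoryStyle::Static && self.accessible > initial_size {
            self.accessible = initial_size; // mprotect the tail inaccessible
            true
        } else {
            if self.accessible < initial_size {
                self.accessible = initial_size; // grow if needed, never shrink
            }
            false
        }
    }
}
```

With `MemoryStyle::Dynamic` a slot that previously grew to 10 pages stays at 10 pages across reuse, skipping the shrink entirely.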

In terms of page faults and lock contention this should improve the situation by:

  • Fewer calls to mprotect since once a heap grows it stays grown and it never shrinks. This means that a write lock is taken within the kernel much more rarely than before (only asymptotically now, not N-times-per-instance).

  • Memory accessed after a heap growth operation will not fault if it was previously paged in by a prior instance and set to zero with memset. Unlike #5207 (Add support for keeping pooling allocator pages resident), which requires a 6.0 kernel to see this optimization, this commit enables the optimization on any kernel.
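The asymptotic claim in the first bullet can be made concrete with a toy model (all names here are invented for illustration): under the old strategy every reuse of a grown slot paid a shrink-then-grow pair of `mprotect` calls, while under the new strategy `mprotect` is only needed when a slot passes its previous high-water mark:

```rust
// Toy syscall-counting model of the two reuse strategies.
struct Slot {
    accessible: usize,   // bytes currently mapped
    mprotect_calls: u32, // write-lock-taking syscalls issued
}

impl Slot {
    fn new() -> Slot {
        Slot { accessible: 0, mprotect_calls: 0 }
    }

    /// Old behavior: shrink back to `initial` on every reuse, then grow
    /// again as the instance's heap expands to `peak`.
    fn reuse_shrinking(&mut self, initial: usize, peak: usize) {
        if self.accessible > initial {
            self.accessible = initial;
            self.mprotect_calls += 1; // shrink
        }
        if peak > self.accessible {
            self.accessible = peak;
            self.mprotect_calls += 1; // grow
        }
    }

    /// New behavior: never shrink; mprotect only past the high-water mark.
    fn reuse_persistent(&mut self, _initial: usize, peak: usize) {
        if peak > self.accessible {
            self.accessible = peak;
            self.mprotect_calls += 1;
        }
    }
}
```

Over 100 reuses of a slot whose instances each grow from 1 page to 10, the shrinking strategy issues 199 `mprotect` calls (N-times-per-instance) while the persistent strategy issues exactly 1.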

The major cost of choosing this strategy is naturally the performance hit of the wasm itself. This is being looked at in PRs such as #5190 to improve Wasmtime's story here.

This commit does not implement any new configuration options for Wasmtime but instead reinterprets existing configuration options. The pooling allocator no longer unconditionally sets
static_memory_bound_is_maximum and instead implements the support necessary for this memory type. The other change in this commit is that the Tunables::static_memory_bound configuration option no longer gates the creation of a MemoryPool, which will now appropriately size itself to instance_limits.memory_pages if static_memory_bound is too small. This is done to accommodate fuzzing more easily, where static_memory_bound will become small during fuzzing and otherwise the configuration would be rejected and require manual handling. The spirit of the MemoryPool is one of large virtual address space reservations anyway, so it seemed reasonable to interpret the configuration this way.
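The sizing rule described here is simple enough to state as code. `static_memory_bound` and `instance_limits.memory_pages` are the real tunables named above; the function and its exact placement are hypothetical:

```rust
// Hypothetical sketch of the reinterpretation described above: rather
// than rejecting a configuration whose `static_memory_bound` is smaller
// than the instance limit, round the per-slot reservation up to
// `instance_limits.memory_pages`.
fn memory_pool_pages_per_slot(static_memory_bound: u64, memory_pages: u64) -> u64 {
    // The pool reserves large virtual address ranges anyway, so take the
    // larger of the two values instead of returning a config error.
    static_memory_bound.max(memory_pages)
}
```

A fuzzer-chosen bound of 10 pages against a 655-page instance limit thus yields a 655-page slot instead of a rejected configuration.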

github-actions bot added the fuzzing label (Issues related to our fuzzing infrastructure) on Nov 4, 2022

These are causing errors when fuzzing and otherwise in theory shouldn't be too interesting to optimize for anyway since they likely aren't used in practice.
@peterhuene
Member

Sorry for the delay on reviewing this. I really need to update my notification filtering to make review requests high-priority as they get lost in the flood.

@alexcrichton
Member Author

No worries!

@alexcrichton alexcrichton merged commit 50cffad into bytecodealliance:main Nov 8, 2022
@alexcrichton alexcrichton deleted the pooling-dynamic branch November 8, 2022 20:43