
Wasmtime traps are 300x slower in stack-heavy test with backtraces enabled #5037

Closed
Stebalien opened this issue Oct 10, 2022 · 3 comments · Fixed by #5049
Labels
bug Incorrect behavior in the current implementation that needs fixing

Comments

@Stebalien
Contributor

When wasm_backtrace is enabled, wasmtime is 300x slower in a specific stack-heavy test-case that:

  1. Pushes slightly less than 2048 stack elements (~300 recursive Rust calls).
  2. Calls back out into the host, which recurses back into a new instance of the same module (about 1k times).
  3. Eventually traps and rewinds through all 1k recursive calls.

While I'll admit that this is an unusual load, it can cause Wasmtime to sit for ~20 seconds while bailing out, when it should take milliseconds at most.

Proposed resolution: keep the option to disable backtraces. While backtraces are very useful for debugging, the ability to disable them is still important for workloads like this.

Test Case

It's not quite as simple as a single wasm file. My guess is that there's some form of non-linear cost to keeping track of stack traces.

Steps to Reproduce

I can provide a reduced test-case if that would help; it just might take a bit of time.

Non-reduced test-case:

  1. Checkout Update Wasmtime filecoin-project/ref-fvm#943
  2. Disable https://github.com/filecoin-project/ref-fvm/pull/943/files#diff-f6ca4a1c384bf9a1f911827c3974f3f2b25357d937ce1d529732c09404ed6c92R129
  3. In `testing/integration/tests`, run `cargo test --release native_stack_overflow`.

Expected Results

With backtraces disabled, it runs in 0.06 seconds.

Actual Results

With backtraces enabled, it runs in 20 seconds.

Versions and Environment

Wasmtime: 1.0.1

Operating system: Linux (5.19.14)

Architecture: x86_64 (i7-1185G7)

Extra Info

Anything else you'd like to add?

@Stebalien Stebalien added the bug Incorrect behavior in the current implementation that needs fixing label Oct 10, 2022
@alexcrichton
Member

Thanks for reporting this! Digging in a bit, one aspect I believe is at play is that we have quadratic behavior when capturing backtraces with interleavings of host and Wasm frames. Specifically, each time Wasmtime returns with a trap back to Wasm, a backtrace is re-captured, only to be thrown away later. With N sequences of host stack frames we therefore capture a backtrace N times (hence the quadratic behavior). That may be one of the contributing factors here and would probably be best to fix. @fitzgen would you be up for taking a look at this?
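To make the cost model concrete, here is a minimal, hypothetical simulation of the behavior described above (this is not Wasmtime's actual code): if each of the N trap propagations through a host frame re-walks the Wasm stack below it, the total work is the sum 1 + 2 + ... + N, i.e. O(n^2), whereas capturing once at the trap site is O(n).

```rust
/// Simulates capturing a backtrace over `depth` frames; returns the
/// number of frames walked, i.e. the O(n) cost of a single capture.
fn capture_backtrace(depth: usize) -> usize {
    depth
}

/// Old behavior: re-capture the backtrace every time the trap
/// propagates through one of `host_frames` host frames -> O(n^2) total.
fn propagate_recapturing(host_frames: usize) -> usize {
    (1..=host_frames).map(capture_backtrace).sum()
}

/// Fixed behavior: capture once at the original trap site -> O(n) total.
fn propagate_once(host_frames: usize) -> usize {
    capture_backtrace(host_frames)
}

fn main() {
    // ~1k recursive host<->Wasm calls, as in the original report.
    let n = 1_000;
    println!("re-capturing cost: {} frame visits", propagate_recapturing(n));
    println!("single capture:    {} frame visits", propagate_once(n));
}
```

With n = 1000 the re-capturing strategy walks 500,500 frames versus 1,000 for a single capture, which matches the order-of-magnitude slowdown reported above.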

In the meantime, @Stebalien, would you be able to produce a more standalone example of what's going on? I made a simple reproduction myself but was unable to get more than 600-ish interleavings of host/Wasm frames deep, so I'm not sure how to get up to the multiple thousands you describe.

@Stebalien
Contributor Author

> I made a simple reproduction myself but was unable to get more than 600-ish interleavings of host/Wasm frames deep, so I'm not sure how to get up to the multiple thousands you describe.

Ah, we run wasmtime on a 64MiB stack to support cases like this. But I'll cobble together a reproducer.

@fitzgen
Member

fitzgen commented Oct 11, 2022

Looking into this.

fitzgen added a commit to fitzgen/wasmtime that referenced this issue Oct 12, 2022
This fixes some accidentally quadratic code where we would re-capture a Wasm
stack trace (takes `O(n)` time) every time we propagated a trap through a host
frame back to Wasm (can happen `O(n)` times). And `O(n) * O(n) = O(n^2)`, of
course. Whoops. After this commit, trapping with a call stack that is `n`
frames deep of Wasm-to-host-to-Wasm calls captures just a single backtrace and
is therefore a proper `O(n)` time operation, as it is intended to be.

Now we explicitly track whether we need to capture a Wasm backtrace or not when
raising a trap. This unfortunately isn't as straightforward as one might hope,
however, because of the split between `wasmtime::Trap` and
`wasmtime_runtime::Trap`. We need to decide whether or not to capture a Wasm
backtrace inside `wasmtime_runtime` but in order to determine whether to do that
or not we need to reflect on the `anyhow::Error` and see if it is a
`wasmtime::Trap` that already has a backtrace or not. This can't be done the
straightforward way because it would introduce a cyclic dependency between the
`wasmtime` and `wasmtime-runtime` crates. We can't merge those two `Trap`
types -- at least not without effectively merging the whole `wasmtime` and
`wasmtime-runtime` crates together, which would be a good idea in a perfect
world but would be a *ton* of ocean boiling from where we currently are --
because `wasmtime::Trap` does symbolication of stack traces which relies on
module registration information data that resides inside the `wasmtime` crate
and therefore can't be moved into `wasmtime-runtime`. We resolve this problem by
adding a boolean to `wasmtime_runtime::raise_user_trap` that controls whether we
should capture a Wasm backtrace or not, and then determine whether we need a
backtrace or not at each of that function's call sites, which are in `wasmtime`
and therefore can do the reflection to determine whether the user trap already
has a backtrace or not. Phew!
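The commit message above can be sketched as follows. This is a simplified, hypothetical model (the types and function bodies here are invented for illustration, not Wasmtime's real definitions): the `wasmtime`-side caller reflects on the trap to decide whether a backtrace is still needed, and threads that decision down as a boolean so the runtime layer never re-captures a backtrace that already exists.

```rust
/// Stand-in for a captured Wasm backtrace.
struct Backtrace {
    frames: usize,
}

/// Stand-in for a user trap being propagated through host frames.
struct UserTrap {
    backtrace: Option<Backtrace>,
}

/// "wasmtime-runtime" side: cannot inspect the concrete trap type
/// itself (that would be a cyclic dependency), so it trusts the
/// caller-computed `needs_backtrace` flag.
fn raise_user_trap(trap: &mut UserTrap, needs_backtrace: bool) {
    if needs_backtrace {
        // Capture exactly once, at the original trap site.
        trap.backtrace = Some(Backtrace { frames: 42 });
    }
    // Otherwise: propagate without re-capturing.
}

/// "wasmtime" side: reflects on the trap to compute `needs_backtrace`
/// at each call site of `raise_user_trap`.
fn propagate(trap: &mut UserTrap) {
    let needs_backtrace = trap.backtrace.is_none();
    raise_user_trap(trap, needs_backtrace);
}

fn main() {
    let mut trap = UserTrap { backtrace: None };
    propagate(&mut trap); // first propagation captures the backtrace
    propagate(&mut trap); // later propagations skip capture
    println!("captured: {}", trap.backtrace.is_some());
}
```

The point of the boolean is exactly the layering constraint described above: the decision requires reflection that only the outer crate can do, while the capture itself happens in the runtime crate.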

Fixes bytecodealliance#5037
fitzgen added a commit that referenced this issue Oct 13, 2022
Don't re-capture backtraces when propagating traps through host frames (#5049)

* Add a benchmark for traps with many Wasm<-->host calls on the stack

* Add a test for expected Wasm stack traces with Wasm<--host calls on the stack when we trap

* Don't re-capture backtraces when propagating traps through host frames


* debug assert that we don't record unnecessary backtraces for traps

* Add assertions around `needs_backtrace`

Unfortunately we can't do

    debug_assert_eq!(needs_backtrace, trap.inner.backtrace.get().is_some());

because `needs_backtrace` doesn't consider whether Wasm backtraces have been
disabled via config.

* Consolidate `needs_backtrace` calculation followed by calling `raise_user_trap` into one place