
Allow using async_io::block_on in bevy_tasks #9626

Merged
3 commits merged into bevyengine:main on Sep 25, 2023

Conversation

BigWingBeat
Contributor

Objective

Fixes #9625

Solution

Adds async-io as an optional dependency of bevy_tasks. When enabled, this causes calls to futures_lite::future::block_on to be replaced with calls to async_io::block_on.
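
As a rough illustration (a sketch of the general approach, not necessarily the exact code in this PR), the switch can be a conditional re-export, so callers keep using a single `block_on` path:

```rust
// Sketch of a feature-gated block_on selection, assuming an optional
// `async-io` dependency behind an `async-io` cargo feature.
// The actual bevy_tasks code may be structured differently.

// With the feature enabled, async-io's block_on also drives the
// async-io reactor on the calling thread while it waits.
#[cfg(feature = "async-io")]
pub use async_io::block_on;

// Without it, fall back to futures-lite's lightweight block_on, which
// just polls the future and parks the thread between wake-ups.
#[cfg(not(feature = "async-io"))]
pub use futures_lite::future::block_on;
```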


Changelog

  • Added a new async-io feature to bevy_tasks. When enabled, this causes bevy_tasks to use async-io's implementation of block_on instead of futures-lite's implementation. You should enable this if you use async-io in your application.

@BigWingBeat
Contributor Author

Question: should this feature be transitively exposed by other Bevy crates that depend on bevy_tasks? It should probably be exposed by the root crate at least, right?

@hymm hymm self-requested a review August 29, 2023 16:41
@alice-i-cecile alice-i-cecile added the C-Usability (A simple quality-of-life change that makes Bevy easier to use) and A-Tasks (Tools for parallel and async work) labels Aug 29, 2023
@hymm
Contributor

hymm commented Sep 7, 2023

Question: should this feature be transitively exposed by other Bevy crates that depend on bevy_tasks? It should probably be exposed by the root crate at least, right?

Does it work to take a dependency on bevy_tasks in your own crate and enable it there? If not then we definitely should.

This also has a perf impact. Red is futures-lite and yellow is async-io block_on. Not sure if that should be noted somewhere.
[Graph: frame-time comparison in Tracy, futures-lite (red) vs async-io block_on (yellow)]

@BigWingBeat
Contributor Author

How did you produce that performance graph? I would like to be able to measure it for myself, and I have an idea for a minor optimization that I want to test.

@hymm
Contributor

hymm commented Sep 8, 2023

I used Tracy: https://github.com/bevyengine/bevy/blob/main/docs/profiling.md#tracy-profiler. There's a Windows binary, but on other platforms you'll have to compile it yourself. I usually capture with `capture -o output.tracy -s 180`; it takes about 3 minutes on my system to get consistent results. The run itself is `cargo run --example many_foxes --profile stress-test --features trace_tracy` (stress-test rather than release, as it has LTO enabled).

@BigWingBeat
Contributor Author

Does it work to take a dependency on bevy_tasks in your own crate and enable it there? If not then we definitely should.

There's no reason why that shouldn't work, as it's just an ordinary feature flag. I think the feature should at least be exposed through the root bevy crate though, given that bevy_tasks is reexported there.

@BigWingBeat
Contributor Author

I've profiled my idea and it does reclaim almost all of the lost performance, however it's a patch to async-io, not to Bevy, so we'll have to accept the reduced performance unless async-io releases a new version with the patch.

[Graph: futures-lite (red) vs unpatched async-io (yellow)]

[Graph: futures-lite (red) vs patched async-io (yellow)]

@hymm
Contributor

hymm commented Sep 12, 2023

Am I understanding correctly that enabling this runs the async-io futures on the thread that block_on is running on? That makes me a bit nervous, as that thread is the main thread in Bevy, and it could potentially lead to more jitter in frame times since the futures are no longer running in parallel. Also, for this feature to work right, it depends on Bevy calling block_on. Neither of these is necessarily a deal breaker, but it's definitely a little more awkward than I'd like. I suspect just running another async executor on another thread would be better in most situations; we recommend this today for users asking to use tokio. Having said that, if all you're running on the main thread are io-like tasks, the impact is probably minimal.

Do the bevy_tasks tests pass with the feature on?

There is also a prior PR that adds tokio block_on support: #6137. When I was reviewing that one I didn't understand that the tasks were being run inside the block_on. If we decide to merge this, we should probably do the same for tokio.

@hymm hymm requested a review from james7132 September 14, 2023 02:08
@BigWingBeat
Contributor Author

I don't entirely understand what you're saying.

Am I understanding correctly that enabling this runs the async-io futures on the thread that block_on is running on? That makes me a bit nervous, as that thread is the main thread in Bevy, and it could potentially lead to more jitter in frame times since the futures are no longer running in parallel. Also, for this feature to work right, it depends on Bevy calling block_on.

This PR changes all the places where Bevy already calls block_on, which includes all of the threads spawned by every task pool. What do you mean by the futures no longer running in parallel? The only difference between async-io's block_on and futures-lite's is that async-io's implementation also processes I/O events during the time the calling thread would otherwise be parked, waiting for the future to wake it.
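
For example, here is a minimal standalone sketch of that difference (my own illustration, assuming `async-io` and `futures-lite` as direct dependencies, not code from this PR):

```rust
// Contrast the two block_on implementations with an async-io timer.
use std::time::Duration;

fn main() {
    // async_io::block_on drives the async-io reactor on this thread
    // while it waits, so the timer event is processed right here
    // instead of on a separate thread.
    async_io::block_on(async {
        async_io::Timer::after(Duration::from_millis(10)).await;
    });

    // futures_lite::future::block_on only polls the future and parks
    // this thread between wake-ups. The timer still completes, because
    // async-io lazily spawns its own "async-io" reactor thread when
    // nothing else is driving the reactor, but that work now happens
    // off the blocking thread and wake-ups have to cross threads.
    futures_lite::future::block_on(async {
        async_io::Timer::after(Duration::from_millis(10)).await;
    });
}
```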

I suspect just running another async executor on another thread would be better in most situations; we recommend this today for users asking to use tokio. Having said that, if all you're running on the main thread are io-like tasks, the impact is probably minimal.

From what I understand, running multiple async executors in one application is a bad idea. This PR exists specifically so that people who want to use async-io with Bevy can avoid doing that, given that bevy_tasks is Bevy's async executor. I think pulling in an entire second async runtime just to ensure that async_io::block_on is called reliably would be overkill.

There is also a prior PR that adds tokio block_on support: #6137. When I was reviewing that one I didn't understand that the tasks were being run inside the block_on. If we decide to merge this, we should probably do the same for tokio.

That PR seems to do a similar thing to this PR, but in a more complicated and roundabout way, by going through async-global-executor instead of just choosing the block_on impl to use directly. If someone is using Tokio alongside bevy_tasks, I don't see why they would want to spawn Tokio tasks on Bevy's task pools, instead of just using Tokio's own task spawning APIs.

@hymm hymm added the S-Ready-For-Final-Review (This PR has been approved by the community. It's ready for a maintainer to consider merging it) label Sep 25, 2023
@mockersf mockersf added this pull request to the merge queue Sep 25, 2023
Merged via the queue into bevyengine:main with commit 503b861 Sep 25, 2023
25 of 26 checks passed
@BigWingBeat BigWingBeat deleted the async-io branch November 17, 2023 00:50
rdrpenguin04 pushed a commit to rdrpenguin04/bevy that referenced this pull request Jan 9, 2024