Add tokio as an optional bevy_tasks backend #6762

Draft · wants to merge 1 commit into base: main
Conversation

james7132 (Member)

Objective

tokio is the de facto standard async executor for the Rust ecosystem. It was initially avoided because it doesn't support all of the targets that Bevy aims to support (e.g., WASM).

Solution

Add an optional feature flag that enables tokio as the backing async executor on non-WASM platforms, and enable it by default there. Use tokio::runtime::Runtime as a replacement for async_executor.
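
For illustration, one plausible shape for the compile-time selection (a minimal sketch; the `Executor` alias and the exact cfg combination are assumptions, not the PR's actual code):

```rust
// Hypothetical sketch only: the alias name `Executor` and the feature name
// are illustrative, not the PR's actual identifiers.

// Use tokio on native targets when the feature is enabled.
#[cfg(all(feature = "tokio", not(target_arch = "wasm32")))]
pub(crate) type Executor = tokio::runtime::Runtime;

// Fall back to async_executor everywhere else (including WASM).
#[cfg(any(not(feature = "tokio"), target_arch = "wasm32"))]
pub(crate) type Executor = async_executor::Executor<'static>;
```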

Note: this change conflicts heavily with, and is mutually exclusive to, #4740. Additional care will need to be taken to support all of the use cases of both options.

This PR is in a draft state as it currently deadlocks on something during normal program execution. This will need some investigation.


Changelog

Added: tokio as an optional, default-enabled backing async executor for non-WASM platforms.

@james7132 added labels on Nov 26, 2022: C-Enhancement (A new feature), C-Usability (A simple quality-of-life change that makes Bevy easier to use), A-Tasks (Tools for parallel and async work), S-Needs-Investigation (This issue requires detective work to figure out what's going wrong)

```rust
// Enter the tokio runtime context, then repeatedly block on polling
// `get_results` once per iteration.
let _guard = task_scope_runtime.enter();
loop {
    if let Some(result) = self.runtime.block_on(future::poll_once(&mut get_results)) {
```
james7132 (Member Author)

The primary problem I'm finding is that tokio's block_on does not allow us to internally tick another runtime without blocking. This might be what's causing the deadlock.
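
To illustrate the mismatch, here is a minimal standalone sketch (illustrative, not the PR's code): async_executor's `run` drives its own task queue while polling an outer future, whereas tokio's `block_on` only polls the single future it is given.

```rust
use async_executor::LocalExecutor;
use futures_lite::future;

fn main() {
    let local = LocalExecutor::new();
    let task = local.spawn(async { 21 * 2 });

    // `run` ticks the local executor's queue while polling `task`, so the
    // spawned task makes progress and completes.
    let result = future::block_on(local.run(task));
    assert_eq!(result, 42);

    // By contrast, `tokio::runtime::Runtime::block_on(task)` would only poll
    // `task` itself; nothing would tick `local`, so any future waiting on a
    // task spawned there would hang, matching the suspected deadlock.
}
```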

@inodentry (Contributor)

I would be very happy for tokio to be supported as a backend for Bevy. This would open up easy interoperability with the tokio ecosystem. Previously, whenever I have wanted to use crates based on tokio (such as network protocols like quinn), I've had to create a separate tokio runtime/executor to run in parallel with Bevy, which is ugly and not ideal.

@hymm (Contributor) commented Dec 11, 2022

Poked at this some more. My current branch only allows one local executor to run in the whole task pool: https://github.com/hymm/bevy/tree/tokio. This breaks the two thread-locality tests, since they need to run a local executor on every thread. Bevy is able to run OK though, since it doesn't actively use that functionality.

I have a way of running multiple local executors, but it breaks when scopes are nested and the inner scope tries to tick its local executor: tokio complains about running one executor inside another (see the sketch after this list). We need nested scopes since we use them when running par_for_each inside a system. There are a couple of possible fixes for this that I can see:

  1. par_for_each doesn't need the spawn_on_scope functionality, so we could tick the local executor only on the scope that runs the parallel executor. This could be done either by configuring the scope with a const and panicking if spawn_on_scope is called while it is disabled, or by creating a new scope type that doesn't have the spawn_on_scope API (a rough sketch follows below).
  2. We're trying to move non-send resources out of the world; once that is done, spawn_on_scope will no longer be necessary and we will no longer need a local executor.
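
For concreteness, the nesting restriction mentioned above can be reproduced in isolation (a minimal sketch, not the bevy_tasks code):

```rust
use tokio::runtime::Runtime;

fn main() {
    let outer = Runtime::new().unwrap();
    let inner = Runtime::new().unwrap();
    outer.block_on(async move {
        // Panics at runtime with: "Cannot start a runtime from within a
        // runtime. This happens because a function (like `block_on`)
        // attempted to block the current thread while the thread is being
        // used to drive asynchronous tasks."
        inner.block_on(async {});
    });
}
```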

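And a rough sketch of what option 1's const-configured scope could look like (all names and the const-generic shape here are hypothetical):

```rust
use std::future::Future;

// Hypothetical scope type: the const parameter records at compile time
// whether this scope is allowed to tick a thread-local executor.
pub struct Scope<const LOCAL: bool>;

impl<const LOCAL: bool> Scope<LOCAL> {
    pub fn spawn_on_scope<F: Future + 'static>(&self, _future: F) {
        // Option 1a: panic if local spawning is disabled for this scope.
        // Option 1b would instead be a separate scope type without this method.
        assert!(LOCAL, "spawn_on_scope called on a scope with no local executor");
        // ... spawn `_future` on the thread-local executor ...
    }
}

fn main() {
    // The scope used by par_for_each could be Scope::<false>, so any
    // accidental spawn_on_scope call fails loudly.
    let scope: Scope<false> = Scope;
    let _ = scope;
    // scope.spawn_on_scope(async {}); // would panic at runtime
}
```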