Stimulus id contextvars explicit handlers #6083
Conversation
@fjetter Would you mind taking a brief look at the approach here and giving me your opinion on the overall complexity required? I think looking at scheduler.py and test_scheduler.py will give you 95% of the flavour. Essentially, explicit handlers requiring stimulus_ids are created that call implementations. So, for example, there's an explicit …

This is based on #6046
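The handler/implementation split described above might be sketched roughly as follows. This is a hypothetical reconstruction, not the PR's actual code: the names `expect_stimulus`, `update_graph_hlg`, and `handle_update_graph_hlg` come from the diff below, but the decorator body here is assumed.

```python
import functools

def expect_stimulus(sync=True):
    """Hypothetical sketch: wrap an implementation in an explicit
    handler that refuses to run without a stimulus_id keyword.
    The sync flag is accepted but unused in this sketch."""
    def decorator(impl):
        # stimulus_id is keyword-only and has no default, so the
        # handler raises TypeError if a caller omits it
        @functools.wraps(impl)
        def handler(*args, stimulus_id, **kwargs):
            return impl(*args, stimulus_id=stimulus_id, **kwargs)
        return handler
    return decorator

def update_graph_hlg(graph, *, stimulus_id):
    # stand-in implementation for illustration only
    return f"processed {graph} under {stimulus_id}"

handle_update_graph_hlg = expect_stimulus(sync=True)(update_graph_hlg)
```

A caller that forgets to pass `stimulus_id` fails immediately at the handler boundary rather than deep inside the implementation.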
@@ -8204,6 +8266,34 @@ def request_remove_replicas(self, addr: str, keys: list, *, stimulus_id: str):
            }
        )

    handle_update_graph_hlg = expect_stimulus(sync=True)(update_graph_hlg)
I believe we should not involve the Client at this point. There is almost no additional information available whether the client generates an ID or the scheduler just generates it.
We wouldn't need to define this many handlers if the client were not involved.
We are already sending other stimuli from the client, though, e.g. key-release stimuli.
With handlers explicitly requiring stimuli to be sent in this PR, it becomes necessary to do so in other places in the Client too. Why not complete the loop on all stimuli?
Force-pushed from 9b08e29 to 1ddffb0
distributed/utils_test.py (Outdated)
finally:
    STIMULUS_ID.reset(token)
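The `finally`/`reset` pair above follows the standard `contextvars` token protocol: `set()` returns a `Token` remembering the previous state, and `reset(token)` restores it even if the wrapped code raises. A minimal standalone sketch, assuming a `STIMULUS_ID` ContextVar like the one in the snippet:

```python
from contextvars import ContextVar

# hypothetical name mirroring the snippet above
STIMULUS_ID: ContextVar[str] = ContextVar("STIMULUS_ID")

def run_with_stimulus(stimulus_id, func, *args, **kwargs):
    # set() returns a Token capturing the variable's previous state
    token = STIMULUS_ID.set(stimulus_id)
    try:
        return func(*args, **kwargs)
    finally:
        # restores the previous value (or the unset state) even on error
        STIMULUS_ID.reset(token)
```

After `run_with_stimulus` returns, `STIMULUS_ID` is back to whatever it was before, which is what makes the pattern safe in test fixtures.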
This can be removed
await self.remove_worker(address=ws._address)
await self.handle_remove_worker(
    address=ws._address, stimulus_id=f"check-worker-ttl-{time()}"
)
Perhaps check_worker_ttl should be decorated with expects_stimulus? I guess an initialising stimulus could be provided when setting up the PeriodicCallback.
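The alternative in the diff above, where the periodic callback mints its own timestamped stimulus_id each tick, could be sketched like this. The wrapper name `make_check_worker_ttl` and its parameters are assumptions for illustration, not the PR's code:

```python
from time import time

def make_check_worker_ttl(handle_remove_worker, addresses):
    # hypothetical wrapper: each tick mints a fresh, timestamped
    # stimulus_id, so the periodic callback itself is the stimulus
    # source rather than an external caller
    def check_worker_ttl():
        for addr in addresses:
            handle_remove_worker(
                address=addr, stimulus_id=f"check-worker-ttl-{time()}"
            )
    return check_worker_ttl
```

Decorating `check_worker_ttl` with `expects_stimulus` instead would push the responsibility onto whoever registers the PeriodicCallback, which is the trade-off the comment above is weighing.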
Unit Test Results: 18 files ±0, 18 suites ±0, 9h 19m 39s ⏱️ -1m 55s. For more details on these failures, see this check. Results for commit bfcbdb0. ± Comparison against base commit 6e30766. ♻️ This comment has been updated with latest results.
Difficult to get ContextVar interaction with Tornado right in …
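One likely source of the difficulty mentioned above (my reading, not stated in the thread): asyncio copies the current context when a task is created, so a `ContextVar` set inside a spawned task does not propagate back to the parent. A minimal demonstration with plain asyncio, which Tornado runs on top of:

```python
import asyncio
from contextvars import ContextVar

VAR: ContextVar[str] = ContextVar("VAR", default="outer")

async def child():
    # mutates only this task's copied context
    VAR.set("inner")
    return VAR.get()

async def main():
    # create_task() snapshots the current context for the child
    inner = await asyncio.create_task(child())
    # the child's set() is invisible here in the parent task
    return inner, VAR.get()

result = asyncio.run(main())
```

Awaiting the coroutine directly (without `create_task`) would run it in the same context, so the set would be visible; this asymmetry is easy to trip over when a framework decides where tasks are spawned.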
pre-commit run --all-files