Task state logs and data fix #4206
Conversation
I wonder if the current […]

I noticed

distributed/worker.py, lines 2089 to 2091 (at cee4e3c)
distributed/worker.py, line 1562 (at cee4e3c)

but I think this has been around for a while. That's about as far as I got so far.
Force-pushed from 44a053e to 8cec77c.
Thanks @mrocklin -- that did indeed serve my needs. And thanks for the pointers @fjetter (and for your work fixing up some of the regressions I introduced). I went in circles for a bit with this one, but I think I have it fixed now.

From @fjetter's notes and examining the story of the tasks that raised this, I thought this was because of an extra call to `release_key` somewhere, and it is -- but that call is in the test: it's called explicitly as part of that test. So when it's called but the given task still has dependent tasks, we remove the TaskState object but leave the data in `self.data` (because the dependents probably need it). But then, when the task is recreated as part of the call to `add_task` (on behalf of that dependent task), the state is set to "waiting", because that is almost always the correct state. But not always, as in this case: if the data already exists on the worker because something called `release_key` and then the task is recreated, the state should be `memory`, because the output is already in `self.data`.
This fixes the sporadic error that was showing up in test_failed_workers.py::test_worker_who_has_clears_after_failed_connection. The tests would occasionally hit a validation error when trying to transition from waiting -> executing because the task key already existed in `self.data`.

I thought this was because of an extra call to `release_key` somewhere, and it _is_, but that call is _in the test_ -- it's called explicitly as part of that test. So when it's called but the given task still has dependent tasks, we remove the TaskState object but leave the data in `self.data` (because the dependents probably need it) -- but then when the task is recreated as part of the call to `add_task` (on behalf of that dependent task), the state is set to "waiting", because this is almost always the correct state.

But not always, as in this case. If the data already exists on the worker because something called `release_key` and then the task is recreated, the state should be `memory`, because the output is already in `self.data`.
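To make the state choice concrete, here is a minimal sketch of the behavior described above -- not the actual distributed/worker.py code; the worker's bookkeeping is boiled down to just `tasks` and `data`, and everything else is elided:

```python
class TaskState:
    def __init__(self, key):
        self.key = key
        self.state = "new"
        self.dependents = set()


class Worker:
    def __init__(self):
        self.tasks = {}  # key -> TaskState
        self.data = {}   # key -> computed result

    def release_key(self, key):
        ts = self.tasks.pop(key, None)
        if ts is not None and not ts.dependents:
            # Nothing downstream needs the result, so drop it too.
            self.data.pop(key, None)
        # If dependents remain, the result stays in self.data even though
        # the TaskState is gone -- the situation that triggered the bug.

    def add_task(self, key):
        ts = self.tasks.get(key)
        if ts is None:
            ts = self.tasks[key] = TaskState(key)
            if key in self.data:
                # The key was released earlier but its result was kept for
                # a dependent, so the output is already local: the task
                # belongs in "memory", not "waiting".
                ts.state = "memory"
            else:
                ts.state = "waiting"
        return ts
```

The point of the sketch is the branch in `add_task`: recreating a task whose output is already in `self.data` must not put it back in "waiting", or a later waiting -> executing transition will trip the validation check.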
Force-pushed from 8cec77c to e21b8c9.
Everything here seems ok to me.
I'm putting this in on top of @fjetter's fix in #4200 to try to track down the assertion errors in `task_validate` that showed up in CI. I can reproduce that failure locally one out of every 10 times or so, running

`pytest test_failed_workers.py::test_worker_who_has_clears_after_failed_connection -s`

To try to track down what was happening, I've added in per-task logs. What I found from there is that @fjetter is almost certainly correct that there's an extra `release_key` call somewhere. The failure occurs consistently for me where the only task-level log entry is "new", indicating a newly created TaskState object, but the same key already exists in `self.data` -- which would indicate that we've called `release_key` but that, because there are `dependents` of that particular task, we don't clear out the entry in `self.data`.

I've "fixed" that here by just nuking the key if it's present in `self.data`, but I don't think that's the correct way to fix this. I'm putting this up in case anyone else can make use of the task-level logs (and I'll also keep looking for the stray call).
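As a rough illustration of what per-task logs can look like (the `log` attribute and `transition` method here are assumptions for the sketch, not the PR's exact API):

```python
from collections import deque
from time import time


class TaskState:
    """Sketch: a task record that carries its own bounded event log."""

    def __init__(self, key):
        self.key = key
        self.state = "new"
        # Bounded, so a long-running worker doesn't keep unbounded history
        # for every task it has ever seen.
        self.log = deque(maxlen=100)
        self.log.append(("new", time()))

    def transition(self, new_state):
        # Record every state change so the "story" of a single key can be
        # replayed when a validation assertion fires.
        self.log.append((new_state, time()))
        self.state = new_state
```

With a log like this, the failure described above surfaces as a task whose only entry is "new" while its key is already present in `self.data`.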