
Task state logs and data fix #4206

Merged
2 commits merged into dask:master on Nov 4, 2020

Conversation

gforsyth
Contributor

I'm putting this in on top of @fjetter's fix in #4200 to try to track down the `task_validate` assertion errors that showed up in CI.

I can reproduce that failure locally one out of every 10 times or so, running `pytest test_failed_workers.py::test_worker_who_has_clears_after_failed_connection -s`.

To try to track down what was happening, I've added per-task logs.
What I found from there is that @fjetter is almost certainly correct that there's an extra `release_key` call somewhere.
The failure I see is consistent: the only task-level log entry is "new", indicating a freshly created TaskState object, yet the same key already exists in `self.data`. That suggests we've called `release_key`, but because the task still has dependents, we don't clear its entry out of `self.data`.

I've "fixed" that here by just nuking the key if it's present in `self.data`, but I don't think that's the correct way to fix this. I'm putting this up in case anyone else can make use of the task-level logs (and I'll also keep looking for the stray call).

@mrocklin
Member

I wonder if the current `Worker.story` function would serve your needs here.
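For reference, `Worker.story` pulls the per-key entries out of the worker's internal log. A minimal usage sketch (assuming a `Worker` instance `w` and a task key `key`):

```python
# Worker.story(*keys) returns the log entries that mention the given key(s),
# which makes it easy to reconstruct a single task's state transitions
# after a failure.
for entry in w.story(key):
    print(entry)
```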

@fjetter
Member

fjetter commented Nov 2, 2020

I noticed that

```python
elif ts is None or ts.state == "executing":
    self.release_key(d)
    continue
```

was newly introduced and wasn't there before the refactoring, and I don't understand why we're calling release here during the flight->waiting transition. There's also

```python
self.release_key(ts.key)
```

but I think this has been around for a while.

That's about as far as I got so far.

gforsyth force-pushed the `task_state_logs_and_data_fix` branch 2 times, most recently from 44a053e to 8cec77c on November 2, 2020
@gforsyth
Contributor Author

gforsyth commented Nov 2, 2020

> I wonder if the current `Worker.story` function would serve your needs here

Thanks @mrocklin -- that did indeed serve my needs.

And thanks for the pointers @fjetter (and for your work fixing up some of the regressions I introduced).

I went in circles for a bit with this one, but I think I have it fixed now.

So `test_worker_who_has_clears_after_failed_connection` would occasionally hit a validation error when trying to transition from waiting -> ready because the task key already existed in `self.data`.

From @fjetter's notes and from examining the story of the tasks that raised this, I thought this was because of an extra call to `release_key` somewhere, and it is, but that call is in the test -- it's called explicitly as part of that test.

So when `release_key` is called but the given task still has dependents, we remove the TaskState object but leave the data in `self.data` (because the dependents probably need it). Then, when the task is recreated as part of the call to `add_task` (on behalf of that dependent task), its state is set to "waiting", because that is almost always the correct state.

But not always, as in this case: if the data already exists on the worker because something called `release_key` and the task is then recreated, the state should be "memory", because the output is already in `self.data`.
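A minimal sketch of that behaviour (illustrative; the real change lives in the worker's `add_task` path, and the names here are only meant to mirror the discussion above):

```python
# When (re)creating the TaskState for `key`, choose the initial state based
# on whether the key's result is already in self.data: if it is, the work is
# effectively done and the task should start in "memory", not "waiting".
if key in self.data:
    ts.state = "memory"
else:
    ts.state = "waiting"
```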

fjetter and others added 2 commits November 3, 2020 09:40
This fixes the sporadic error that was showing up in
test_failed_workers.py::test_worker_who_has_clears_after_failed_connection

The tests would occasionally hit a validation error when trying to
transition from waiting -> executing because the task key already
existed in `self.data`.

I thought this was because of an extra call to `release_key` somewhere,
and it _is_, but that call is _in the test_ -- it's called explicitly as
part of that test.

So when it's called but the given task still has dependent tasks, we
remove the TaskState object but leave the data in `self.data` (because
the dependents probably need it) -- but then when the task is recreated
as part of the call to `add_task` (on behalf of that dependent task), the
state is set to "waiting", because this is almost always the correct
state.

But not always, as in this case. If the data already exists on the
worker because something called `release_key` and then the task is
recreated, the state should be `memory` because the output is already in `self.data`.
@mrocklin
Member

mrocklin commented Nov 3, 2020

Everything here seems ok to me.

mrocklin merged commit d7f532c into dask:master on Nov 4, 2020
gforsyth deleted the `task_state_logs_and_data_fix` branch on November 4, 2020