No longer hold dependencies of erred tasks in memory #4918
Conversation
In general not holding onto dependencies of erred tasks seems fine to me
@fjetter should this be merged in?
Yes, I'm typically a bit hesitant to merge behaviour-changing PRs if there is little feedback. I guess your "seems fine" is the admin override for others and I shouldn't wait for more feedback :)
From my perspective you are now the admin on tricky scheduler state issues. "Admin" in my mind is loosely defined as "the person who cleans things up if they break", and that's clearly you today :) Merging.
I think that maybe, historically, the client still reached out to the worker for exception and traceback information? Today we seem to hold this information on the scheduler in serialized form. Maybe that explains the odd structure of the code in main today? (My memory here is unreliable.)
There is also the possibility that the rerun-tasks feature would work a bit better, but I'm leaning towards reduced complexity over "convenient error rerun for debugging". I haven't checked whether it even works that way, though.
This is a follow-up to #4784 and significantly reduces the complexity of Worker.release_key.

There is one non-trivial behavioural change regarding erred tasks. The current main branch holds on to the dependencies of an erred task on a worker and implements a release mechanism that runs once that erred task is released. I implemented this recently while trying to capture the status quo, but I'm no longer convinced it is the correct behaviour. It treats the erred case specially, which introduces a lot of complexity. The only place where this might matter is when an erred task is to be recomputed locally: not forgetting the data keys until the erred task was released would speed that up. However, we would still need to potentially compute some keys, and I'm inclined to drop this feature in favour of reduced complexity. Thoughts?
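To make the behavioural change concrete, here is a minimal toy sketch (not the actual distributed.Worker code; all class and method names here are illustrative) of what release without the erred special case looks like: releasing a task, whether it erred or not, also releases any dependencies that no longer have dependents.

```python
class TaskState:
    """Toy stand-in for a worker-side task record."""
    def __init__(self, key, dependencies=()):
        self.key = key
        self.dependencies = set(dependencies)
        self.dependents = set()
        self.state = "waiting"

class MiniWorker:
    """Toy model of a worker's task bookkeeping (hypothetical names)."""
    def __init__(self):
        self.tasks = {}  # key -> TaskState
        self.data = {}   # key -> result held in memory

    def add_task(self, key, deps=()):
        ts = TaskState(key, deps)
        self.tasks[key] = ts
        for d in deps:
            self.tasks[d].dependents.add(key)
        return ts

    def release_key(self, key):
        # Uniform release path: drop the data, then recursively release
        # dependencies with no remaining dependents. There is no special
        # case that keeps dependencies alive because the task erred.
        ts = self.tasks.pop(key, None)
        if ts is None:
            return
        self.data.pop(key, None)
        for dep in ts.dependencies:
            dts = self.tasks.get(dep)
            if dts is not None:
                dts.dependents.discard(key)
                if not dts.dependents:
                    self.release_key(dep)

w = MiniWorker()
w.add_task("a")
w.data["a"] = 1
w.add_task("b", deps=["a"])
w.tasks["b"].state = "error"
w.release_key("b")
# Even though "b" erred, its dependency "a" is released along with it,
# so a local recompute of "b" would have to recompute "a" first.
assert "a" not in w.data and "a" not in w.tasks
```

The trade-off discussed above is visible in the last lines: the old behaviour would have kept "a" in memory until "b" was released, speeding up a local rerun of the erred task at the cost of a second, special-cased release path.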
cc @mrocklin @jrbourbeau @crusaderky
Passes lint checks: black distributed / flake8 distributed / isort distributed