[train] after_worker_group_poll_status errors result in ControllerError #57869
Merged
matthewdeng merged 7 commits into ray-project:master on Oct 22, 2025
Conversation
Contributor
Code Review
This pull request aims to gracefully handle exceptions from after_worker_group_poll_status callbacks by wrapping them in a ControllerError. The changes achieve this by modifying WorkerGroup.poll_status to catch and return exceptions from callbacks. The TrainController is updated to handle this new return value, correctly identifying these exceptions as controller-level failures. The changes are well-tested, with updates to existing tests and a new test case specifically for callback exceptions. I found one minor issue in a test case where an exception class was used instead of an instance to simulate a failure. Overall, this is a good change that improves error handling and robustness.
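The mechanism the review describes can be pictured with a small sketch. The class and method shapes below are simplified assumptions for illustration, not the actual Ray Train `WorkerGroup` implementation:

```python
# Minimal sketch of the pattern the review describes; names and signatures
# are simplified stand-ins, not the actual Ray Train WorkerGroup code.
from typing import List


class WorkerGroupPollStatus:
    """Hypothetical status object that also carries captured callback errors."""

    def __init__(self):
        self.errors: List[Exception] = []


class WorkerGroup:
    def __init__(self, callbacks):
        self._callbacks = callbacks

    def poll_status(self) -> WorkerGroupPollStatus:
        status = WorkerGroupPollStatus()
        for callback in self._callbacks:
            try:
                # Previously an exception here propagated and aborted the run;
                # now it is captured and returned as part of the status.
                callback.after_worker_group_poll_status(status)
            except Exception as e:
                status.errors.append(e)
        return status
```

On the controller side, a non-empty `errors` list is then treated as a controller-level failure. The reviewer's remark about the test refers to raising an exception instance (e.g. `RuntimeError("boom")`) rather than passing the `RuntimeError` class itself when simulating such a failure.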
matthewdeng reviewed Oct 20, 2025
python/ray/train/v2/_internal/execution/controller/controller.py (Outdated)
matthewdeng approved these changes Oct 21, 2025
elliot-barn pushed a commit that referenced this pull request Oct 23, 2025
[train] after_worker_group_poll_status errors result in ControllerError (#57869)
landscapepainter pushed a commit to landscapepainter/ray that referenced this pull request Nov 17, 2025
[train] after_worker_group_poll_status errors result in ControllerError (ray-project#57869)
Aydin-ab pushed a commit to Aydin-ab/ray-aydin that referenced this pull request Nov 19, 2025
[train] after_worker_group_poll_status errors result in ControllerError (ray-project#57869)
Future-Outlier pushed a commit to Future-Outlier/ray that referenced this pull request Dec 7, 2025
[train] after_worker_group_poll_status errors result in ControllerError (ray-project#57869)
Blaze-DSP pushed a commit to Blaze-DSP/ray that referenced this pull request Dec 18, 2025
[train] after_worker_group_poll_status errors result in ControllerError (ray-project#57869)
peterxcli pushed a commit to peterxcli/ray that referenced this pull request Feb 25, 2026
[train] after_worker_group_poll_status errors result in ControllerError (ray-project#57869)
Summary
We observed that whenever `after_worker_group_poll_status` raised an exception, the Train Run would fail ungracefully and show up as `ABORTED` in the dashboard. This happened in the following situations:

1) Different workers report remote checkpoints with different paths -> `(TrainController pid=46993) RuntimeError: The storage path of the checkpoints in the training results is not the same. This means the checkpoints are not consistent. Got a mix of the following checkpoint paths: {'/tmp/tmpl95kv7ax', '/tmp/tmp__8e6etk'}` -> `ABORTED` Train Run
2) `ray.train.report({"loss": ...}, checkpoint=checkpoint)` in `train_func` -> `TypeError: Object of type 'ellipsis' is not JSON serializable` in `CheckpointManager._save_state` -> `ABORTED` Train Run

This PR catches these exceptions, wraps them in a `ControllerError`, and goes through the `FailurePolicy`, ultimately resulting in an `ERRORED` Train Run. This is more intuitive because the failure happened due to an error in the training workers (`The Train run failed due to an error in the training workers.` is the comment associated with `RunStatus.ERRORED`). A sketch of this flow appears after the list below.

I considered implementing a more general solution that caught all `WorkerGroupCallback` errors and resurfaced them as `ControllerError`s, but decided against it because:

* Callbacks occur in many different places, and we might want to add custom try/except logic in each case.
* `after_worker_group_poll_status` is the only offender so far, and most of its errors come from user mistakes; other callback errors could be legitimate bugs that should result in `ABORTED`.
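To make this flow concrete, here is a minimal sketch of a controller step that wraps a captured callback exception in a `ControllerError` and defers to a failure policy. `ControllerError` and `FailurePolicy` are real Ray Train v2 concepts, but the definitions, method names, and return values below are simplified assumptions, not the actual `TrainController` code:

```python
# Illustrative sketch only. ControllerError and FailurePolicy exist in
# ray.train.v2, but these definitions are simplified stand-ins.


class ControllerError(Exception):
    """Wraps an error that occurred in the controller rather than a worker."""

    def __init__(self, cause: Exception):
        super().__init__(f"controller failure: {cause!r}")
        self.cause = cause


class AlwaysRaiseFailurePolicy:
    """Toy policy: never retry, always surface the error."""

    def make_decision(self, error: Exception) -> str:
        return "RAISE"


def handle_poll_status(status, failure_policy) -> str:
    """One controller step: route captured callback errors through the policy.

    `status` is assumed to carry an `errors` list of exceptions captured by
    `WorkerGroup.poll_status` (see the earlier sketch).
    """
    if status.errors:
        decision = failure_policy.make_decision(ControllerError(status.errors[0]))
        if decision == "RAISE":
            return "ERRORED"  # instead of an ungraceful ABORTED
    return "RUNNING"
```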
Testing

Unit tests
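As a rough illustration of the kind of unit test this implies, here is a hypothetical sketch built on the simplified classes from the earlier sketches; the real tests in Ray Train differ in structure and detail:

```python
# Hypothetical test sketch reusing the simplified WorkerGroup,
# AlwaysRaiseFailurePolicy, and handle_poll_status defined above.
def test_poll_status_callback_error_results_in_errored_run():
    class FailingCallback:
        def after_worker_group_poll_status(self, status):
            # Raise an exception *instance*, not the class itself, per the
            # reviewer's note about simulating failures correctly.
            raise RuntimeError("inconsistent checkpoint storage paths")

    worker_group = WorkerGroup(callbacks=[FailingCallback()])
    status = worker_group.poll_status()
    assert handle_poll_status(status, AlwaysRaiseFailurePolicy()) == "ERRORED"
```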