[Data] Consider budget when making actor autoscaling decisions #54902

Merged
bveeramani merged 3 commits into master from fix-autoscaler2 on Jul 25, 2025

Conversation

@bveeramani
Member

Why are these changes needed?

If you don't consider budget when scaling up actor pools, then actors can occupy all of the logical resources and starve other operators.

To fix this issue, this PR updates the autoscaler to consider the budget as well.

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
@bveeramani bveeramani requested a review from a team as a code owner July 24, 2025 22:37
Contributor

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @bveeramani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've addressed a critical issue in the Ray Data autoscaler where actor pools could scale up without proper consideration of the overall resource budget. This oversight could lead to resource starvation for other operators. My changes integrate budget awareness directly into the autoscaling decision process, ensuring that actor pools only scale up if sufficient resources are available. This prevents over-allocation and promotes a more balanced resource distribution across the system.

Highlights

  • Budget-Aware Autoscaling: The autoscaler now explicitly considers the available resource budget before deciding to scale up actor pools. This prevents scenarios where actor pools might consume all logical resources, leading to starvation for other operators.
  • Precise Resource Limit Enforcement: A new utility function, get_max_scale_up, has been introduced to precisely calculate the maximum number of actors that can be added. This calculation is based on the remaining CPU and GPU budget and the resource requirements of each actor, ensuring that scaling decisions adhere to defined resource limits.
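A minimal sketch of what a budget-aware scale-up cap like get_max_scale_up might look like. The function name comes from this PR, but the Budget container, the signature, and the None-means-uncapped convention below are illustrative assumptions, not the actual Ray implementation:

```python
import math
from dataclasses import dataclass
from typing import Optional


@dataclass
class Budget:
    cpu: float  # remaining logical CPUs available to this operator
    gpu: float  # remaining logical GPUs available to this operator


def get_max_scale_up(
    budget: Budget, cpu_per_actor: float, gpu_per_actor: float
) -> Optional[int]:
    """Max number of new actors that still fit within the remaining budget.

    Returns None when the actor requests no budgeted resources, meaning
    the budget imposes no cap on scale-up.
    """
    limits = []
    if cpu_per_actor > 0:
        limits.append(math.floor(budget.cpu / cpu_per_actor))
    if gpu_per_actor > 0:
        limits.append(math.floor(budget.gpu / gpu_per_actor))
    # The binding constraint is whichever resource runs out first.
    return min(limits) if limits else None
```

For example, with 8 CPUs and 1 GPU left in the budget, an actor requesting 1 CPU and 0.5 GPU is capped at min(8, 2) = 2 additional actors: the GPU budget is the binding constraint.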

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a valuable check to consider resource budgets when scaling up actor pools, preventing resource starvation for other operators. The core change is in a new utility function, get_max_scale_up, which calculates scaling limits. My review identifies a potential precision issue in this new function where using floating-point division for resource calculations could lead to under-provisioning. I've suggested a more robust implementation using Python's decimal module to ensure arithmetic accuracy.
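To illustrate the precision issue the review flags: with fractional resource requests, binary floating-point division can land just below an integer, and flooring then under-counts by one. A hedged sketch of the decimal-based approach (illustrative only, not the reviewer's exact suggestion or the merged code):

```python
import math
from decimal import Decimal


def max_actors_float(budget: float, per_actor: float) -> int:
    # Binary floats cannot represent 0.3 or 0.1 exactly:
    # 0.3 / 0.1 == 2.9999999999999996, so flooring yields 2, not 3.
    return math.floor(budget / per_actor)


def max_actors_decimal(budget: float, per_actor: float) -> int:
    # Converting via str preserves the human-readable decimal value,
    # so the division is exact: Decimal("0.3") // Decimal("0.1") == 3.
    return int(Decimal(str(budget)) // Decimal(str(per_actor)))
```

With a 0.3-CPU budget and 0.1 CPU per actor, the float version under-provisions by one actor while the Decimal version gives the expected answer.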

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
@bveeramani bveeramani enabled auto-merge (squash) July 24, 2025 23:31
@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Jul 24, 2025
@bveeramani bveeramani merged commit b41b4da into master Jul 25, 2025
6 of 7 checks passed
@bveeramani bveeramani deleted the fix-autoscaler2 branch July 25, 2025 00:42
krishnakalyan3 pushed a commit to krishnakalyan3/ray that referenced this pull request Jul 30, 2025
…roject#54902)

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Signed-off-by: Krishna Kalyan <krishnakalyan3@gmail.com>
jugalshah291 pushed a commit to jugalshah291/ray_fork that referenced this pull request Sep 11, 2025
…roject#54902)

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Signed-off-by: jugalshah291 <shah.jugal291@gmail.com>
dstrodtman pushed a commit to dstrodtman/ray that referenced this pull request Oct 6, 2025
…roject#54902)

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Signed-off-by: Douglas Strodtman <douglas@anyscale.com>
bveeramani added a commit that referenced this pull request Dec 11, 2025
…h GPUs (#59366)

This PR enables the utilization-based autoscaler to work with GPUs.

**Motivation**

The current implementation only considers CPU and memory utilization
when making scaling decisions. This is based on an outdated assumption
that GPUs are only used by actor pools, and we can use actor pool
autoscaling to trigger node scale-ups.

This assumption doesn't hold anymore. To fix deadlocks,
#54902 made actor pool
autoscaling respect resource budgets. As a result, the actor pool
autoscaler can't implicitly trigger node autoscaling anymore.

**Changes**

This PR extends the autoscaler to track GPU utilization alongside CPU
and memory:

1. Extract resource utilization calculation into a dedicated abstraction
(ResourceUtilizationGauge): This separates the concern of how
utilization is measured from how it's used
for scaling decisions. The abstraction makes the autoscaler more
testable and opens the door for alternative utilization strategies
(e.g., physical vs logical utilization, different averaging windows).
2. Include GPU nodes in scaling decisions: Previously, GPU nodes were
explicitly filtered out when determining what node types exist in the
cluster. Now all worker node types are considered, allowing the
autoscaler to request additional GPU nodes when needed.
3. Add GPU utilization to the scaling threshold check: The autoscaler
now triggers scale-up when any of CPU, GPU, or memory utilization
exceeds the threshold, rather than only CPU or memory.
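The threshold check described in point 3 can be sketched as follows. The function name, utilization inputs, and the 0.8 default threshold are illustrative assumptions; the actual Ray code is structured differently:

```python
def should_scale_up(
    cpu_util: float,
    gpu_util: float,
    mem_util: float,
    threshold: float = 0.8,
) -> bool:
    # Trigger a scale-up when ANY tracked resource exceeds the threshold,
    # so a GPU-bound workload can request nodes even while CPUs sit idle.
    return max(cpu_util, gpu_util, mem_util) > threshold
```

Under the old behavior, the equivalent check ignored gpu_util, so a pipeline saturating its GPUs with idle CPUs would never trigger node autoscaling.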

---------

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
peterxcli pushed a commit to peterxcli/ray that referenced this pull request Feb 25, 2026
…h GPUs (ray-project#59366)

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: peterxcli <peterxcli@gmail.com>

Labels

go add ONLY when ready to merge, run all tests

2 participants