[Data] Consider budget when making actor autoscaling decisions (#54902)

bveeramani merged 3 commits into master
Conversation
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Summary of Changes
Hello @bveeramani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've addressed a critical issue in the Ray data autoscaler where actor pools could scale up without proper consideration of the overall resource budget. This oversight could lead to resource starvation for other operations. My changes integrate budget awareness directly into the autoscaling decision process, ensuring that actor pools only scale up if sufficient resources are available. This prevents over-allocation and promotes a more balanced resource distribution across the system.
Highlights
- Budget-Aware Autoscaling: The autoscaler now explicitly considers the available resource budget before deciding to scale up actor pools. This prevents scenarios where actor pools might consume all logical resources, leading to starvation for other operators.
- Precise Resource Limit Enforcement: A new utility function,
get_max_scale_up, has been introduced to precisely calculate the maximum number of actors that can be added. This calculation is based on the remaining CPU and GPU budget and the resource requirements of each actor, ensuring that scaling decisions adhere to defined resource limits.
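As a minimal sketch of the kind of calculation `get_max_scale_up` performs (the function name comes from the PR; the signature and body below are illustrative, not the actual implementation): the scale-up limit is the smallest per-resource quotient of remaining budget over per-actor requirement.

```python
import math
from typing import Optional


def get_max_scale_up(
    cpu_budget: float,
    gpu_budget: float,
    cpu_per_actor: float,
    gpu_per_actor: float,
) -> Optional[int]:
    """Max number of actors that fit in the remaining CPU/GPU budget.

    Returns None when scale-up is unbounded, i.e. the actor requests
    none of the budgeted resources.
    """
    limits = []
    if cpu_per_actor > 0:
        limits.append(math.floor(cpu_budget / cpu_per_actor))
    if gpu_per_actor > 0:
        limits.append(math.floor(gpu_budget / gpu_per_actor))
    return min(limits) if limits else None
```

For example, with 2.5 CPUs left in the budget and 1 CPU per actor, at most 2 actors can be added; any larger scale-up request is clamped to that limit.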
Code Review
This pull request introduces a valuable check to consider resource budgets when scaling up actor pools, preventing resource starvation for other operators. The core change is in a new utility function, get_max_scale_up, which calculates scaling limits. My review identifies a potential precision issue in this new function where using floating-point division for resource calculations could lead to under-provisioning. I've suggested a more robust implementation using Python's decimal module to ensure arithmetic accuracy.
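To illustrate the precision concern (a self-contained example, not code from the PR): binary floating-point division can land just below an integer, so flooring the quotient under-counts how many actors actually fit, while exact decimal arithmetic does not.

```python
import math
from decimal import Decimal

budget, per_actor = 0.3, 0.1  # three 0.1-CPU actors should fit

# Float division falls just short of 3: 0.3 / 0.1 == 2.9999999999999996,
# so flooring under-provisions by one actor.
naive = math.floor(budget / per_actor)

# Converting through str() yields exact decimal values, and Decimal
# division is exact here: Decimal("0.3") / Decimal("0.1") == 3.
exact = math.floor(Decimal(str(budget)) / Decimal(str(per_actor)))

print(naive, exact)  # 2 3
```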
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
…roject#54902)

## Why are these changes needed?

If you don't consider budget when scaling up actor pools, then actors can occupy all of the logical resources and starve other operators. To fix this issue, this PR updates the autoscaler to consider the budget as well.

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Signed-off-by: Krishna Kalyan <krishnakalyan3@gmail.com>
Signed-off-by: jugalshah291 <shah.jugal291@gmail.com>
Signed-off-by: Douglas Strodtman <douglas@anyscale.com>
…h GPUs (#59366)

This PR enables the utilization-based autoscaler to work with GPUs.

**Motivation**

The current implementation only considers CPU and memory utilization when making scaling decisions. This is based on an outdated assumption that GPUs are only used by actor pools, and we can use actor pool autoscaling to trigger node scale-ups. This assumption doesn't hold anymore. To fix deadlocks, #54902 made actor pool autoscaling respect resource budgets. As a result, the actor pool autoscaler can't implicitly trigger node autoscaling anymore.

**Changes**

This PR extends the autoscaler to track GPU utilization alongside CPU and memory:

1. Extract resource utilization calculation into a dedicated abstraction (`ResourceUtilizationGauge`): This separates the concern of how utilization is measured from how it's used for scaling decisions. The abstraction makes the autoscaler more testable and opens the door for alternative utilization strategies (e.g., physical vs. logical utilization, different averaging windows).
2. Include GPU nodes in scaling decisions: Previously, GPU nodes were explicitly filtered out when determining what node types exist in the cluster. Now all worker node types are considered, allowing the autoscaler to request additional GPU nodes when needed.
3. Add GPU utilization to the scaling threshold check: The autoscaler now triggers scale-up when any of CPU, GPU, or memory utilization exceeds the threshold, rather than only CPU or memory.

Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…h GPUs (ray-project#59366) Signed-off-by: peterxcli <peterxcli@gmail.com>
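The threshold check described in point 3 above might look roughly like this (the dataclass, function name, and default threshold are hypothetical; the PR's actual `ResourceUtilizationGauge` interface may differ):

```python
from dataclasses import dataclass


@dataclass
class ResourceUtilization:
    """Snapshot of cluster-wide logical resource utilization, each in [0, 1]."""
    cpu: float
    gpu: float
    memory: float


def should_scale_up(util: ResourceUtilization, threshold: float = 0.8) -> bool:
    # Trigger scale-up when ANY of CPU, GPU, or memory utilization
    # exceeds the threshold, rather than only CPU or memory as before.
    return max(util.cpu, util.gpu, util.memory) > threshold
```

Under the old CPU/memory-only check, a GPU-bound workload at 95% GPU utilization would never trigger a node scale-up; including GPU in the maximum fixes that.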
Why are these changes needed?
If the autoscaler doesn't consider the budget when scaling up actor pools, actors can occupy all of the logical resources and starve other operators.
To fix this, this PR updates the autoscaler to consider the budget as well.
Related issue number
Checks
- I've signed off every commit (by using the `-s` flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.