[Serve] Group DeploymentHandle autoscaling metrics pushes by process #55016

JoshKarpel wants to merge 75 commits into ray-project:master

Conversation
Signed-off-by: Josh Karpel <josh.karpel@gmail.com>
# Conflicts: # python/ray/serve/_private/router.py
Summary of Changes
Hello @JoshKarpel, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.
This pull request focuses on optimizing the autoscaling metrics reporting within Ray Serve to alleviate RPC overhead on the Serve Controller. The core change involves grouping DeploymentHandle autoscaling metrics at the process level before transmitting them, rather than sending individual reports. This refactoring is expected to improve the scalability and performance of the Serve system, particularly when managing a large number of applications. Additionally, it introduces more explicit and configurable intervals for metrics collection and pushing.
Highlights
- **Metrics Grouping for Performance:** Introduced a `SharedHandleMetricsPusher` class that aggregates autoscaling metrics from multiple `DeploymentHandle`s in the same process, so they can be sent to the controller in a single bulk RPC call. This significantly reduces network overhead and pressure on the Serve Controller, especially in large-scale deployments.
- **Standardized Metrics Reporting:** Refactored internal metrics reporting around a dedicated `HandleMetricReport` dataclass. This new structure encapsulates all necessary handle metrics, simplifying the API for the `AutoscalingStateManager` and the Serve Controller, which now consume these unified report objects directly.
- **Granular Control over Intervals:** Deprecated the `metrics_interval_s` field in `AutoscalingConfig` in favor of new environment variables (e.g., `RAY_SERVE_REPLICA_AUTOSCALING_METRIC_PUSH_INTERVAL_S`, `RAY_SERVE_HANDLE_AUTOSCALING_METRIC_PUSH_INTERVAL_S`), providing independent control over the recording and pushing intervals for replica and handle autoscaling metrics.
- **Controller API Enhancement:** Added a new `bulk_record_handle_metrics` method to the Serve Controller that efficiently receives and processes sequences of `HandleMetricReport` objects, supporting the new grouped metrics push mechanism.
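The grouping pattern described above can be sketched roughly as follows. This is an illustrative model only: `BulkPusher`, the fields of this `HandleMetricReport`, and `collect()` are stand-ins for exposition, not the actual Ray Serve API.

```python
import threading
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class HandleMetricReport:
    # Illustrative stand-in for the dataclass the PR introduces;
    # the real report carries more fields than these.
    handle_id: str
    queued_requests: int


class BulkPusher:
    """One pusher per process: each handle registers a callback that
    produces its report, and a single periodic task flushes all reports
    to the controller in one bulk RPC instead of one RPC per handle."""

    _instances: Dict[str, "BulkPusher"] = {}
    _lock = threading.Lock()

    def __init__(self) -> None:
        self._report_fns: List[Callable[[], HandleMetricReport]] = []

    @classmethod
    def get_or_create(cls, controller_id: str) -> "BulkPusher":
        # Cache one pusher per controller so every handle created in
        # this process shares it.
        with cls._lock:
            if controller_id not in cls._instances:
                cls._instances[controller_id] = cls()
            return cls._instances[controller_id]

    def register(self, report_fn: Callable[[], HandleMetricReport]) -> None:
        self._report_fns.append(report_fn)

    def collect(self) -> List[HandleMetricReport]:
        # In the real pusher, this batch would be sent to the
        # controller's bulk_record_handle_metrics in a single RPC.
        return [fn() for fn in self._report_fns]
```

The key property is that the number of RPCs to the controller scales with the number of processes, not the number of handles.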
Code Review
Overall, this is a well-executed pull request that addresses a significant performance bottleneck in Ray Serve. The introduction of SharedHandleMetricsPusher to batch metric reports is a smart solution. The related code refactoring, such as using the HandleMetricReport dataclass, improves code clarity and maintainability.
My review includes one high-priority comment regarding a potential bug in an environment variable name and one medium-priority comment about a future performance consideration for the new shared pusher. Once these are addressed, the PR should be in great shape.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Signed-off-by: Josh Karpel <josh.karpel@gmail.com>
…deprecated (#55102)

## Why are these changes needed?

Deprecation announcements for this future PR #55016

## Related issue number

Related to #45777

## Checks

- [x] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [x] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(

Signed-off-by: Josh Karpel <josh.karpel@gmail.com>
# Conflicts: # python/ray/serve/_private/constants.py # python/ray/serve/_private/replica.py # python/ray/serve/_private/router.py # python/ray/serve/config.py # src/ray/protobuf/serve.proto
```diff
-            autoscaling_config={"max_replicas": 2, "metrics_interval_s": 10}
+            autoscaling_config={"max_replicas": 2, "look_back_period_s": 10}
         ),
         {},
     )
     v2 = DeploymentVersion(
         "1",
         DeploymentConfig(
-            autoscaling_config={"max_replicas": 5, "metrics_interval_s": 10}
+            autoscaling_config={"max_replicas": 5, "look_back_period_s": 10}
         ),
         {},
     )
     v3 = DeploymentVersion(
         "1",
         DeploymentConfig(
-            autoscaling_config={"max_replicas": 2, "metrics_interval_s": 3}
+            autoscaling_config={"max_replicas": 2, "look_back_period_s": 3}
```
This changed because `metrics_interval_s` no longer requires a reconfiguration (see https://github.com/ray-project/ray/pull/55016/files#diff-75ae406c06c8505d98c21ff05842d770fc5b52870864bfb9e70fd5e38ba7bd81R194), since the field no longer does anything.
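One way a config field can stop triggering redeploys is to exclude it from the deployment version hash. This is a hypothetical sketch, not the actual `DeploymentVersion` implementation; `version_hash` and `DEPRECATED_FIELDS` are illustrative names:

```python
import hashlib
import json

# Hypothetical: fields that no longer affect runtime behavior and so
# should not force a redeploy when they change.
DEPRECATED_FIELDS = {"metrics_interval_s"}


def version_hash(autoscaling_config: dict) -> str:
    # Drop deprecated fields, then hash a canonical JSON encoding of
    # the rest, so two configs that differ only in deprecated fields
    # hash identically.
    relevant = {
        k: v
        for k, v in sorted(autoscaling_config.items())
        if k not in DEPRECATED_FIELDS
    }
    return hashlib.sha256(json.dumps(relevant).encode()).hexdigest()[:8]
```

With this scheme, changing `metrics_interval_s` leaves the version unchanged, while changing `max_replicas` produces a new version.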
```diff
     with patch.object(metrics_manager, "metrics_pusher"):
         metrics_manager.update_deployment_config(DeploymentConfig(), 0)
         metrics_manager.metrics_pusher.register_or_update_task.assert_not_called()

     # With autoscaling config, register or update task should be called
-    metrics_manager.update_deployment_config(
-        DeploymentConfig(autoscaling_config=AutoscalingConfig()), 0
-    )
-    metrics_manager.metrics_pusher.register_or_update_task.assert_called()
+    with patch.object(metrics_manager, "metrics_pusher"):
+        metrics_manager.update_deployment_config(
+            DeploymentConfig(autoscaling_config=AutoscalingConfig()), 0
+        )
+        metrics_manager.metrics_pusher.register_or_update_task.assert_called()
```
This changed because the shared metrics pusher does have register_or_update_task called in these cases. So instead of mocking all metrics pushers, we just mock the one directly attached to the metrics manager under test.
# Conflicts: # python/ray/serve/_private/autoscaling_state.py # python/ray/serve/_private/controller.py
```python
    print(time.ctime(), "Redeploying with `look_back_period_s` updated to 0.5s.")
    config_template["deployments"][0]["autoscaling_config"]["look_back_period_s"] = 0.5
```
Changing `metrics_interval_s` no longer triggers a redeploy.
Signed-off-by: Josh Karpel <josh.karpel@gmail.com>
```python
                "target_ongoing_requests": 1,
                "min_replicas": 1,
                "max_replicas": 10,
                "metrics_interval_s": 15,
```
@abrarsheikh I'm running into some trouble fixing this test and could use some guidance. It looks like the intent was that, by setting `metrics_interval_s=15`, the `wait_for_condition` at https://github.com/ray-project/ray/pull/55016/files#diff-fb12df9046242e153209be3f7daa63c0dbedbbfb6eb0fd8d7fff48467fd1887bR746 (default timeout of 10 seconds) would fail because autoscaling metrics wouldn't be pushed, so no scale-up would occur.

Since the push interval can no longer be controlled like this, this part of the test doesn't make sense anymore, but the test's name doesn't really describe its intent. Do you know what behavior this test is supposed to cover, so I can figure out how to test that behavior now?
Ok, well, apparently CI passed anyway 😅
Nope, it's ephemeral, which isn't surprising - so yeah, I could use some help on this one when you get a chance @abrarsheikh
@JoshKarpel quite busy this week, I will pick this up after Ray Summit.

Same here! See you soon 😁
Force-pushed 6b1f2f5 to 4eb3514
```python
        pusher = cls(controller_handle=controller_handle)
        pusher.start()
        logger.info(f"Created {pusher} for Serve Controller {controller_handle}.")
        return pusher
```
Bug: Missing event loop in cache key causes potential thread-safety issues
The SharedHandleMetricsPusher.get_or_create method only uses controller_handle as the lru_cache key, while the analogous SharedRouterLongPollClient.get_or_create correctly uses both controller_handle AND event_loop as cache keys. This inconsistency means that routers running on different event loops (e.g., SingletonThreadRouter vs CurrentLoopRouter in the same process) could share the same SharedHandleMetricsPusher instance. Since the pusher's MetricsPusher tasks run on whichever event loop first created the pusher, this can lead to cross-thread calls to _get_metrics_report() which accesses non-thread-safe data structures like metrics_store and the WeakSet containing RouterMetricsManager instances.
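A minimal sketch of the suggested fix: include the event loop in the cache key, mirroring `SharedRouterLongPollClient.get_or_create`, so routers on different loops get separate pusher instances. The `Pusher` class here is illustrative, not the real `SharedHandleMetricsPusher`:

```python
import asyncio
import functools


class Pusher:
    """Stand-in illustrating the suggested fix: key the per-process
    cache on (controller_handle, event_loop) rather than only on the
    controller handle."""

    def __init__(
        self, controller_handle: str, event_loop: asyncio.AbstractEventLoop
    ) -> None:
        self.controller_handle = controller_handle
        self.event_loop = event_loop

    @classmethod
    @functools.lru_cache(maxsize=None)
    def get_or_create(
        cls, controller_handle: str, event_loop: asyncio.AbstractEventLoop
    ) -> "Pusher":
        # Both arguments participate in the lru_cache key, so two
        # routers on different loops never share one pusher and its
        # non-thread-safe internal state.
        return cls(controller_handle, event_loop)
```

With only `controller_handle` in the key, the second loop would silently reuse a pusher whose tasks run on the first loop.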
@abrarsheikh thoughts on this one? I'm not too familiar with the recent changes around event loops in this code.
```python
        self._router_metrics_managers.add(router_metrics_manager)

    def unregister(self, router_metrics_manager: RouterMetricsManager) -> None:
        self._router_metrics_managers.discard(router_metrics_manager)
```
Bug: Missing thread-safety protection for WeakSet iteration during push_metrics
SharedHandleMetricsPusher.register() and unregister() directly modify _router_metrics_managers WeakSet without thread-safety protection, while push_metrics() iterates over it. Unlike SharedRouterLongPollClient, which uses call_soon_threadsafe to prevent "RuntimeError: Set changed size during iteration", SharedHandleMetricsPusher lacks this protection. If register/unregister and push_metrics execute on different threads, or if garbage collection removes weak references during iteration, a RuntimeError could occur.
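A sketch of the mitigation the comment suggests: marshal `WeakSet` mutations onto the pusher's own event loop with `call_soon_threadsafe`, so registration never races with the iteration in the push task. Class and method names here are illustrative, not the real code:

```python
import asyncio
import weakref


class SafePusher:
    """Sketch: register/unregister from any thread are forwarded to
    the pusher's event loop, so the WeakSet is only ever mutated on
    the same loop that iterates it in push_metrics."""

    def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
        self._loop = loop
        self._managers: "weakref.WeakSet" = weakref.WeakSet()

    def register(self, manager) -> None:
        # Safe to call from any thread: the mutation runs on _loop.
        self._loop.call_soon_threadsafe(self._managers.add, manager)

    def unregister(self, manager) -> None:
        self._loop.call_soon_threadsafe(self._managers.discard, manager)

    def push_metrics(self) -> list:
        # Runs on _loop, so it cannot race with register/unregister.
        # Copying before iterating also makes garbage collection of
        # weak references mid-loop harmless.
        return [m.report() for m in list(self._managers)]
```

This is the same pattern `SharedRouterLongPollClient` uses to avoid "RuntimeError: Set changed size during iteration".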
@JoshKarpel can you please resolve the merge conflicts?

@harshit-anyscale @abrarsheikh apologies for the lack of progress on this - my focus has shifted to another project and I'm hoping that my coworker @jugalshah291 can pick up where I left off on this one.

no issues @JoshKarpel, thanks for contributing :)
Why are these changes needed?
We're seeing a lot of pressure on the Serve Controller from metrics push tasks when running thousands of Serve apps. Much of this pressure comes purely from the overhead of the many RPC connections arriving at the controller. We can amortize this overhead (and presumably similar overhead in the handles) by pushing metrics at the per-process level instead of the per-handle level.

We've made this change in our setup and it has reduced CPU time spent on this in the Controller, and also in our ingress application replicas, which hold all the handles.
See also #55102
Related issue number
Closes #45777
Second attempt on #45957 , see previous discussion there for more context.
Checks

- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.