[WIP][Serve] Fix perf regression in autoscaling snapshot cache refresh #61611
Open
nadongjun wants to merge 4 commits into ray-project:master
Conversation
…arizer (ray-project#56225)

This PR introduces deployment-level autoscaling observability in Serve. The controller now emits a single, structured JSON log line (`serve_autoscaling_snapshot`) per autoscaling-enabled deployment each control-loop tick. This avoids recomputation in the controller call sites and provides a stable, machine-parsable surface for tooling and debugging.

- Add `get_observability_snapshot` in `AutoscalingState` and a manager wrapper to generate compact snapshots (replica counts, queued/total requests, metric freshness).
- Add `ServeEventSummarizer` to build payloads, reduce duplicate logs, and summarize recent scaling decisions.

Logs can be found in controller log files, e.g. `/tmp/ray/session_2025-09-03_21-12-01_095657_13385/logs/serve/controller_13474.log`:

```
serve_autoscaling_snapshot {"ts":"2025-09-04T06:12:11Z","app":"default","deployment":"worker","current_replicas":2,"target_replicas":2,"replicas_allowed":{"min":1,"max":8},"scaling_status":"stable","policy":"default","metrics":{"look_back_period_s":10.0,"queued_requests":0.0,"total_requests":0.0},"metrics_health":"ok","errors":[],"decisions":[{"ts":"2025-09-04T06:12:11Z","from":0,"to":2,"reason":"current=0, proposed=2"},{"ts":"2025-09-04T06:12:11Z","from":2,"to":2,"reason":"current=2, proposed=2"}]}
```

- Expose the same snapshot data via `serve status -v` and CLI/SDK surfaces.
- Aggregate per-app snapshots and external scaler history.

- [x] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [x] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(

---------

Signed-off-by: Dongjun Na <kmu5544616@gmail.com>
Co-authored-by: akyang-anyscale <alexyang@anyscale.com>
Co-authored-by: Abrar Sheikh <abrar2002as@gmail.com>
Signed-off-by: Dongjun Na <kmu5544616@gmail.com>
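The `serve_autoscaling_snapshot` payload above is intended to be machine-parsable. Below is a minimal parsing sketch that is not part of this PR; the helper name `parse_snapshot_line` and the truncated example payload are illustrative only.

```python
import json
from typing import Optional

PREFIX = "serve_autoscaling_snapshot "


def parse_snapshot_line(line: str) -> Optional[dict]:
    """Return the JSON payload of a serve_autoscaling_snapshot log line, if any."""
    idx = line.find(PREFIX)
    if idx == -1:
        return None
    return json.loads(line[idx + len(PREFIX):])


# Truncated example in the same shape as the log line shown above.
line = (
    'serve_autoscaling_snapshot {"ts":"2025-09-04T06:12:11Z","app":"default",'
    '"deployment":"worker","current_replicas":2,"target_replicas":2,'
    '"scaling_status":"stable"}'
)
snapshot = parse_snapshot_line(line)
if snapshot is not None:
    print(snapshot["deployment"], snapshot["current_replicas"], "->", snapshot["target_replicas"])
```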
Contributor
Code Review
This pull request effectively addresses a performance regression in the autoscaling snapshot cache refresh by replacing expensive object constructions with efficient lookups. The introduction of dedicated logging for autoscaling snapshots is a great addition for observability. The code is well-structured, and the new tests are comprehensive.
I have a few minor suggestions to improve maintainability and clarity:
- Refactor duplicated code for updating timestamps in autoscaling_state.py (a generic sketch follows this comment).
- Simplify the `is_scaling_equivalent` method in `common.py` by removing redundant checks.
- Streamline the log parsing logic in the test file.
Overall, this is a solid improvement.
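As an illustration of the first suggestion (deduplicating timestamp updates), here is a generic sketch using hypothetical names (`AutoscalingTimestamps`, `_touch`), not the actual autoscaling_state.py code:

```python
import time
from typing import Dict


class AutoscalingTimestamps:
    """Tracks when each deployment's scaling state was last touched."""

    def __init__(self) -> None:
        self._last_updated_s: Dict[str, float] = {}

    def _touch(self, deployment: str) -> None:
        # Single place to record "state changed now", instead of repeating
        # the same time.time() bookkeeping at every call site.
        self._last_updated_s[deployment] = time.time()

    def on_decision(self, deployment: str) -> None:
        self._touch(deployment)

    def on_metrics_update(self, deployment: str) -> None:
        self._touch(deployment)
```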
…aling tick Signed-off-by: Dongjun Na <kmu5544616@gmail.com>
Signed-off-by: Dongjun Na <kmu5544616@gmail.com>

Description
Addresses controller performance degradation at scale (2048 replicas: loop_duration +15%, handle_metrics_delay +28%) introduced after PR #56225.

Root causes:

- `_refresh_autoscaling_deployments_cache()` called `list_deployment_details()`, constructing O(N) `ReplicaDetails` Pydantic objects per deployment on every control loop.
- `_create_deployment_snapshot()` built a 14-field Pydantic `DeploymentSnapshot` on every autoscaling tick, regardless of whether scaling state had changed.

Fixes:

- Replace `list_deployment_details()` with O(1) `get_deployment()` lookups.
- Lazy `DeploymentSnapshot` construction, only when scaling state changes (see the sketch after this list).
- Split the `_emit` try/except from the application state update to prevent error masking.
- `application_state_update_duration_s` metrics.
- Helpers: `_create_deployment_snapshot`, `is_scaling_equivalent`.
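A minimal sketch of the tuple-cache + lazy-construction pattern described above, using hypothetical stand-ins (`SnapshotCache`, `maybe_build_snapshot`, a trimmed `DeploymentSnapshot`) rather than the actual Serve classes: the expensive model is built only when the cheap tuple key changes.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class DeploymentSnapshot:
    # Stand-in for the real 14-field Pydantic model; construction is the costly part.
    deployment: str
    current_replicas: int
    target_replicas: int
    scaling_status: str


@dataclass
class SnapshotCache:
    # Cheap tuple key describing the last emitted scaling state.
    _last_key: Optional[Tuple[str, int, int, str]] = None

    def maybe_build_snapshot(
        self, deployment: str, current: int, target: int, status: str
    ) -> Optional[DeploymentSnapshot]:
        """Build a new snapshot only when scaling state changed since the last tick."""
        key = (deployment, current, target, status)
        if key == self._last_key:
            return None  # equivalent scaling state: skip expensive model construction
        self._last_key = key
        return DeploymentSnapshot(deployment, current, target, status)


cache = SnapshotCache()
print(cache.maybe_build_snapshot("worker", 2, 2, "stable"))  # builds a snapshot
print(cache.maybe_build_snapshot("worker", 2, 2, "stable"))  # None: unchanged state
```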
Related issues
Re-lands #56225 (reverted in #61557) with a performance fix.
Additional information
Benchmark
bench_fix_validation.py
Measured per-tick cost by importing actual Ray Serve modules (`DeploymentAutoscalingState`, `get_decision_num_replicas()`, etc.) without requiring a cluster. `bench_fix_validation.py` compares three code paths:

A) No snapshot (baseline) — autoscale logic only, no snapshot creation
B) Original PR #56225 — Pydantic `DeploymentSnapshot` created every tick
C) This fix — tuple cache + lazy Pydantic

`create_state(n_replicas)` creates a `DeploymentAutoscalingState` at 1–2048 replica scale. Each path is measured via `time.perf_counter_ns()` over 200 warmup + 2000 iterations; a generic sketch of this timing loop follows.
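The sketch below illustrates such a timing harness. Only `time.perf_counter_ns()` and the 200/2000 warmup/measure counts come from the description above; the three workloads are placeholder stand-ins for paths A–C, not the real benchmark code.

```python
import statistics
import time
from typing import Callable, Dict

WARMUP_ITERS = 200
MEASURE_ITERS = 2000


def measure_ns(fn: Callable[[], None]) -> float:
    """Median per-call cost in nanoseconds after a warmup phase."""
    for _ in range(WARMUP_ITERS):
        fn()
    samples = []
    for _ in range(MEASURE_ITERS):
        start = time.perf_counter_ns()
        fn()
        samples.append(time.perf_counter_ns() - start)
    return statistics.median(samples)


def run_benchmark(paths: Dict[str, Callable[[], None]]) -> None:
    for name, fn in paths.items():
        print(f"{name}: {measure_ns(fn):.0f} ns/tick")


if __name__ == "__main__":
    # Placeholder workloads; the real benchmark calls into
    # DeploymentAutoscalingState / get_decision_num_replicas() for each path.
    run_benchmark(
        {
            "A) no snapshot": lambda: None,
            "B) eager snapshot": lambda: dict(ts="now", replicas=2),
            "C) tuple cache + lazy": lambda: ("worker", 2, 2, "stable"),
        }
    )
```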
Test results
All 11 tests in `test_controller.py` pass, including 4 snapshot-specific tests.