
[CI] Add PD Disaggregation aarch64 test#16572

Merged
ShangmingCai merged 9 commits into main from gb200_pd_test
Jan 10, 2026
Conversation

@ShangmingCai (Collaborator)

Motivation

Add GB200 PD test to help us identify the failure earlier since GB200 is not accessible in some areas.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments (/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci) or contact authorized users to do so.
  4. After green CI and required approvals, ask Merge Oncalls to merge.

Signed-off-by: Shangming Cai <csmthu@gmail.com>
@ShangmingCai ShangmingCai requested a review from Fridge003 January 6, 2026 12:52
@gemini-code-assist (Contributor)

Summary of Changes

Hello @ShangmingCai, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a new continuous integration test designed to validate the performance and accuracy of PD Disaggregation on AARCH64 architectures. The addition of this test aims to proactively detect potential failures in environments like GB200, where direct access for debugging can be limited. The test sets up a disaggregated server environment and performs an accuracy evaluation using the GSM8K benchmark.

Highlights

  • New CI Test for AARCH64 PD Disaggregation: Introduces test_disaggregation_aarch64.py to the per-commit-4-gpu-gb200 CI suite, expanding test coverage for disaggregated environments.
  • Early Failure Detection for GB200: The primary motivation for this test is to identify issues earlier for GB200 environments, which are not always readily available for direct debugging.
  • GSM8K Accuracy Validation: The newly added test specifically evaluates the accuracy of a disaggregated prefill/decode setup using the few-shot GSM8K benchmark, asserting an accuracy threshold.
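The GSM8K accuracy check in a test like this typically boils down to extracting the final numeric answer from each completion and asserting the hit rate against a threshold. Below is a minimal sketch of that scoring step, not the PR's actual test code; `extract_answer` and `gsm8k_accuracy` are hypothetical helpers:

```python
import re


def extract_answer(text: str):
    # Common GSM8K convention: take the last number in the completion,
    # after stripping thousands separators like "1,234".
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(nums[-1]) if nums else None


def gsm8k_accuracy(outputs, references):
    # Fraction of completions whose final number matches the reference.
    correct = sum(
        1 for out, ref in zip(outputs, references)
        if extract_answer(out) == ref
    )
    return correct / len(references)


if __name__ == "__main__":
    outputs = ["... so the answer is 42.", "I think 7", "no digits here"]
    refs = [42.0, 8.0, 3.0]
    print(gsm8k_accuracy(outputs, refs))  # 1 of 3 correct -> 0.333...
```

The actual test in the PR then asserts this accuracy against a fixed threshold after querying the disaggregated server.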


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@ShangmingCai (Collaborator, Author)

/tag-and-rerun-ci

@github-actions Bot added the run-ci label Jan 6, 2026
@gemini-code-assist Bot (Contributor) left a comment


Code Review

This pull request adds a new test for PD (Prefill/Decode) disaggregation on aarch64, specifically for the GB200 platform. The changes include adding the new test file to the per-commit-4-gpu-gb200 test suite and implementing the test itself. The new test sets up prefill and decode servers, and runs a GSM8K accuracy evaluation. My review focuses on improving the maintainability of the new test code. I've suggested replacing hardcoded values with named constants to make the test easier to read and update in the future.
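The maintainability suggestion can be illustrated with a small sketch: magic numbers in the test body are lifted into named module-level constants. The names and values below are illustrative, not taken from the actual test file:

```python
# Illustrative constants; the real test's values may differ.
GSM8K_NUM_SHOTS = 5
GSM8K_NUM_QUESTIONS = 200
GSM8K_ACCURACY_THRESHOLD = 0.62


def check_accuracy(accuracy: float) -> None:
    """Fail the test when accuracy drops below the named threshold."""
    # A named constant documents why the bound exists better than a
    # bare literal like 0.62 scattered through the test body.
    assert accuracy >= GSM8K_ACCURACY_THRESHOLD, (
        f"GSM8K accuracy {accuracy:.3f} is below the "
        f"threshold {GSM8K_ACCURACY_THRESHOLD}"
    )


check_accuracy(0.75)  # passes: 0.75 >= 0.62
```

Updating a threshold then means changing one line at the top of the file rather than hunting for literals.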

Comment thread on test/srt/test_disaggregation_aarch64.py
Signed-off-by: Shangming Cai <csmthu@gmail.com>
@Fridge003 (Collaborator)

/rerun-stage unit-test-backend-4-gpu-gb200

@github-actions (Contributor)

github-actions Bot commented Jan 6, 2026

✅ Triggered unit-test-backend-4-gpu-gb200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.


@ShangmingCai (Collaborator, Author)

/rerun-stage unit-test-backend-4-gpu-gb200

@github-actions (Contributor)

github-actions Bot commented Jan 6, 2026

✅ Triggered unit-test-backend-4-gpu-gb200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.

@ShangmingCai (Collaborator, Author)

/rerun-stage unit-test-backend-4-gpu-gb200

@github-actions (Contributor)

github-actions Bot commented Jan 6, 2026

✅ Triggered unit-test-backend-4-gpu-gb200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.

@ShangmingCai (Collaborator, Author)

I have reproduced the bug:

I0106 15:45:22.865772 2356789 transfer_engine.cpp:607] Metrics reporting is disabled (set MC_TE_METRIC=1 to enable)
I0106 15:45:22.865772 2356790 transfer_engine.cpp:607] Metrics reporting is disabled (set MC_TE_METRIC=1 to enable)
I0106 15:45:22.865804 2356789 transfer_engine.cpp:105] Transfer Engine parseHostNameWithPort. server_name: 10.197.185.164 port: 12001
I0106 15:45:22.865804 2356790 transfer_engine.cpp:105] Transfer Engine parseHostNameWithPort. server_name: 10.197.185.164 port: 12001
I0106 15:45:22.865835 2356790 transfer_engine.cpp:172] Transfer Engine RPC using P2P handshake, listening on 10.197.185.164:16548
I0106 15:45:22.865836 2356789 transfer_engine.cpp:172] Transfer Engine RPC using P2P handshake, listening on 10.197.185.164:15484
I0106 15:45:22.865929 2356789 transfer_engine.cpp:217] Auto-discovering topology...
I0106 15:45:22.865931 2356790 transfer_engine.cpp:217] Auto-discovering topology...
I0106 15:45:22.867902 2356790 transfer_engine.cpp:232] Topology discovery complete. Found 4 HCAs.
I0106 15:45:22.867923 2356789 transfer_engine.cpp:232] Topology discovery complete. Found 4 HCAs.
I0106 15:45:22.867931 2356790 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867938 2356789 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867949 2356789 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867949 2356790 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:23.209983 2356789 nvlink_transport.cpp:563] NvlinkTransport: Falling back to cudaMalloc for 2147483648 bytes (memory will NOT be exportable)
I0106 15:45:23.210187 2356790 nvlink_transport.cpp:563] NvlinkTransport: Falling back to cudaMalloc for 2147483648 bytes (memory will NOT be exportable)
E0106 15:45:23.210886 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210887 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210896 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210899 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210902 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210907 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210907 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed

Let's wait for the new mooncake 0.3.8.post1 release, which will fix this bug; then I will continue to update this PR.
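While waiting for the fixed wheel, a test can guard on the installed mooncake version. A hedged sketch follows; the `parse_ver` and `has_ipc_fix` helpers are hypothetical, and the fixed version string comes from the comment above:

```python
import re


def parse_ver(v: str):
    """Parse 'X.Y.Z' or 'X.Y.Z.postN' into a comparable tuple."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:\.post(\d+))?", v)
    if m is None:
        raise ValueError(f"unrecognized version string: {v!r}")
    major, minor, patch, post = m.groups()
    # A plain release sorts below its .post1 follow-up.
    return (int(major), int(minor), int(patch), int(post or 0))


def has_ipc_fix(installed: str, fixed: str = "0.3.8.post1") -> bool:
    """True when the installed release is at or past the fixed one."""
    return parse_ver(installed) >= parse_ver(fixed)


print(has_ipc_fix("0.3.8"))        # False: pre-fix release
print(has_ipc_fix("0.3.8.post1"))  # True
```

Skipping (rather than failing) the test on older versions keeps CI green until the pinned dependency is bumped.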

@ShangmingCai (Collaborator, Author)

/rerun-stage unit-test-backend-4-gpu-gb200

@github-actions (Contributor)

github-actions Bot commented Jan 9, 2026

✅ Triggered unit-test-backend-4-gpu-gb200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.

@ShangmingCai (Collaborator, Author)

v0.3.8.post1 works (see https://github.com/sgl-project/sglang/actions/runs/20843530491/job/59882352266). Let me bump the version first; then I will update this PR.

@ShangmingCai (Collaborator, Author)

/rerun-stage unit-test-backend-4-gpu-gb200

@ShangmingCai (Collaborator, Author)

/tag-and-rerun-ci

@github-actions (Contributor)

github-actions Bot commented Jan 9, 2026

✅ Triggered unit-test-backend-4-gpu-gb200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.

@ShangmingCai (Collaborator, Author)

/rerun-failed-ci

@ShangmingCai (Collaborator, Author)

@Fridge003 The GB200 test has passed.

@ShangmingCai (Collaborator, Author)

The failed check has already been fixed on main.

@ShangmingCai ShangmingCai merged commit 4b14f62 into main Jan 10, 2026
490 of 522 checks passed
@ShangmingCai ShangmingCai deleted the gb200_pd_test branch January 10, 2026 06:44
