[CI] Add PD Disaggregation aarch64 test #16572
Conversation
Signed-off-by: Shangming Cai <csmthu@gmail.com>
Summary of Changes

Hello @ShangmingCai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates a new continuous integration test designed to validate the performance and accuracy of PD Disaggregation on AARCH64 architectures. The addition of this test aims to proactively detect potential failures in environments like GB200, where direct access for debugging can be limited. The test sets up a disaggregated server environment and performs an accuracy evaluation using the GSM8K benchmark.
/tag-and-rerun-ci
Code Review
This pull request adds a new test for PD (Prefill/Decode) disaggregation on aarch64, specifically for the GB200 platform. The changes include adding the new test file to the per-commit-4-gpu-gb200 test suite and implementing the test itself. The new test sets up prefill and decode servers, and runs a GSM8K accuracy evaluation. My review focuses on improving the maintainability of the new test code. I've suggested replacing hardcoded values with named constants to make the test easier to read and update in the future.
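The review's suggestion about named constants can be sketched as follows. This is a minimal, hypothetical illustration of the pattern, not the actual test code from the PR: every name, port, and threshold below is an assumption chosen for the example.

```python
# Hypothetical sketch of the reviewer's suggestion: lift hardcoded values in a
# PD (Prefill/Decode) disaggregation test into named constants so they are easy
# to read and update. All names and values here are illustrative assumptions.

PREFILL_PORT = 30000              # assumed port for the prefill server
DECODE_PORT = 30001               # assumed port for the decode server
ROUTER_PORT = 8000                # assumed port for the PD router / load balancer
GSM8K_NUM_QUESTIONS = 200         # illustrative sample size for the eval
GSM8K_ACCURACY_THRESHOLD = 0.60   # illustrative pass bar for the accuracy check


def check_accuracy(measured_accuracy: float) -> bool:
    """Return True when the measured GSM8K accuracy meets the named threshold."""
    return measured_accuracy >= GSM8K_ACCURACY_THRESHOLD
```

With constants like these, bumping a port or loosening the accuracy bar is a one-line change instead of a hunt through the test body.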
/rerun-stage unit-test-backend-4-gpu-gb200
✅ Triggered. It will not be shown on this page; check the Actions tab for progress.
/rerun-stage unit-test-backend-4-gpu-gb200
✅ Triggered. It will not be shown on this page; check the Actions tab for progress.
/rerun-stage unit-test-backend-4-gpu-gb200
✅ Triggered. It will not be shown on this page; check the Actions tab for progress.
I have reproduced the bug:

I0106 15:45:22.865772 2356789 transfer_engine.cpp:607] Metrics reporting is disabled (set MC_TE_METRIC=1 to enable)
I0106 15:45:22.865772 2356790 transfer_engine.cpp:607] Metrics reporting is disabled (set MC_TE_METRIC=1 to enable)
I0106 15:45:22.865804 2356789 transfer_engine.cpp:105] Transfer Engine parseHostNameWithPort. server_name: 10.197.185.164 port: 12001
I0106 15:45:22.865804 2356790 transfer_engine.cpp:105] Transfer Engine parseHostNameWithPort. server_name: 10.197.185.164 port: 12001
I0106 15:45:22.865835 2356790 transfer_engine.cpp:172] Transfer Engine RPC using P2P handshake, listening on 10.197.185.164:16548
I0106 15:45:22.865836 2356789 transfer_engine.cpp:172] Transfer Engine RPC using P2P handshake, listening on 10.197.185.164:15484
I0106 15:45:22.865929 2356789 transfer_engine.cpp:217] Auto-discovering topology...
I0106 15:45:22.865931 2356790 transfer_engine.cpp:217] Auto-discovering topology...
I0106 15:45:22.867902 2356790 transfer_engine.cpp:232] Topology discovery complete. Found 4 HCAs.
I0106 15:45:22.867923 2356789 transfer_engine.cpp:232] Topology discovery complete. Found 4 HCAs.
I0106 15:45:22.867931 2356790 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867938 2356789 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867949 2356789 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:22.867949 2356790 nvlink_transport.cpp:77] cuMemCreate(FABRIC) failed: 1, falling back to CudaMalloc and use CudaIPC to share handle
I0106 15:45:23.209983 2356789 nvlink_transport.cpp:563] NvlinkTransport: Falling back to cudaMalloc for 2147483648 bytes (memory will NOT be exportable)
I0106 15:45:23.210187 2356790 nvlink_transport.cpp:563] NvlinkTransport: Falling back to cudaMalloc for 2147483648 bytes (memory will NOT be exportable)
E0106 15:45:23.210886 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210887 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210896 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210899 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210902 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210907 2356790 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed
E0106 15:45:23.210907 2356789 nvlink_transport.cpp:373] NvlinkTransport: cudaIpcGetMemHandle failed

Let's wait for the new release of mooncake 0.3.8.post1, which will fix this bug; then I will continue to update this PR.
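The failure signature in the log above is distinctive: `cuMemCreate(FABRIC)` fails, the transport falls back to `cudaMalloc` plus CUDA IPC, and then `cudaIpcGetMemHandle` fails on the non-exportable allocation. A small sketch of how one might scan CI logs for exactly this pattern follows; the patterns match only the log lines quoted in this thread, and this is an illustrative helper, not an official mooncake diagnostic tool.

```python
import re

# Patterns copied from the NvlinkTransport log lines quoted in this thread.
FABRIC_FALLBACK = re.compile(r"cuMemCreate\(FABRIC\) failed")
IPC_HANDLE_FAIL = re.compile(r"cudaIpcGetMemHandle failed")


def diagnose(log_text: str) -> str:
    """Classify a log dump against the failure pattern seen in this PR thread."""
    fallback = bool(FABRIC_FALLBACK.search(log_text))
    ipc_fail = bool(IPC_HANDLE_FAIL.search(log_text))
    if fallback and ipc_fail:
        # Fabric allocation failed AND the cudaMalloc/CUDA-IPC fallback could
        # not export its handle -- the bug reproduced above.
        return "fabric-fallback-then-ipc-failure"
    if fallback:
        return "fabric-fallback-only"
    return "no-known-failure"
```

A check like this could gate a CI retry or attach a hint to the failure report, rather than requiring someone to eyeball the raw GB200 logs.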
/rerun-stage unit-test-backend-4-gpu-gb200
✅ Triggered. It will not be shown on this page; check the Actions tab for progress.
https://github.com/sgl-project/sglang/actions/runs/20843530491/job/59882352266

v0.3.8.post1 works; let me bump the version first, then I will update this PR.
/rerun-stage unit-test-backend-4-gpu-gb200
/tag-and-rerun-ci
✅ Triggered. It will not be shown on this page; check the Actions tab for progress.
/rerun-failed-ci
@Fridge003 The GB200 test has passed.

Motivation
Add a GB200 PD disaggregation test to help us identify failures earlier, since GB200 hardware is not accessible in some regions.
Modifications
Accuracy Tests
Benchmarking and Profiling
Checklist
Review Process
Trigger CI with the comment commands (/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci) or contact authorized users to do so.