[fsdp, megatron, sglang] fix: Fixed a bug in the update_weight process where the GPU ID was being passed incorrectly.#2620
SuperCB wants to merge 2 commits into verl-project:main
Conversation
Code Review
This pull request addresses a critical bug causing excessive VRAM usage on GPU 0 when using verl with Megatron and SGLang. The root cause was the incorrect expansion of named_tensors by a factor of the tensor parallel size during weight updates.
The fix is well-implemented and consists of two main changes:
- In verl/workers/sharding_manager/megatron_sglang.py, the serialization logic is simplified. Instead of serializing each tensor individually, the entire batch of tensors is now serialized into a single string on each tensor-parallel rank. The resulting list of serialized strings is then gathered and passed on for the update.
- In verl/workers/rollout/sglang_rollout/sglang_rollout.py, the erroneous replication of the weight data is removed. The method now correctly accepts the list of serialized strings from the sharding manager and passes it to the SGLang engine.
These changes not only resolve the memory issue but also significantly simplify the weight update pipeline, making it more efficient and maintainable. The code modifications are clean, logical, and directly address the described problem. The PR is a solid improvement.
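A minimal sketch of the batched pattern described in the review, assuming an initialized torch.distributed tensor-parallel group. MultiprocessingSerializer is SGLang's serializer helper (its module path may differ across SGLang versions); the function name gather_serialized_weights and the engine call mentioned in the final comment are hypothetical stand-ins for the actual verl/SGLang code paths, not the PR's exact code:

```python
import torch
import torch.distributed as dist
# SGLang's serializer helper; exact import path may vary by version.
from sglang.srt.utils import MultiprocessingSerializer


def gather_serialized_weights(named_tensors, tp_group, tp_rank, tp_size):
    """Serialize the whole batch of (name, tensor) pairs once per TP rank.

    Serializing the batch once -- rather than once per tensor -- is what
    removes the tp_size-fold expansion of named_tensors described above.
    """
    payload = MultiprocessingSerializer.serialize(named_tensors)
    gathered = [None] * tp_size if tp_rank == 0 else None
    dist.gather_object(payload, gathered, dst=0, group=tp_group)
    # Only TP rank 0 now holds the tp_size serialized payloads; it can
    # forward them to the rollout engine in a single update call (e.g. an
    # update_weights_from_tensor-style method on the SGLang engine).
    return gathered
```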
Does this issue result in "CUDA error: an illegal memory access was encountered"?
This may not be the same issue. The illegal memory access may be fixed by bumping to a higher version of sglang (#2720). Right now it's hard for us to reproduce the illegal memory access, but typically just bumping sglang fixes it.
@chenhaiq @zhaochenyang20 #2720 is built on top of #2620, so we can close #2620. As for the illegal memory issue, we hope a higher version will solve it, but we haven't found a way to consistently reproduce it.
Could you resolve the conflict?
It seems that all the sglang tests passed. Could you help move the deprecated sglang tests out after rebasing? Thanks!
Duplicate of #2720.
What does this PR do?
When using verl + Megatron + SGLang, we observed highly unusual behavior: abnormal processes were always running on GPU 0 and occupying VRAM.
When Ray launches each process, it sets CUDA_VISIBLE_DEVICES for that process. For example, rank 1 is restricted to GPU 1, and rank 2 to GPU 2. The problem is that, from the perspective of the rank-2 process, GPU 2 appears as its own GPU 0 (index = 0). As a result, rank 2 actually sends a GPU ID of 0 when transferring tensors across processes. Every worker in SGLang then receives a tensor with an incorrect GPU ID, causing abnormal memory usage on GPU 0; this also triggers unnecessary memory copies that slow down the update_weights process.
This PR addresses cross-process tensor transfer: a patch must be applied both during serialization in the sending process and during deserialization in the receiving process, which verl had not implemented. Referencing the PR and the implementation in slime, we resolved this problem. We also modified the SGLang side accordingly.
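For context, a small self-contained demonstration of the remapping pitfall described above (a hypothetical single-process example, not code from this PR):

```python
import os
import torch

# Ray sets CUDA_VISIBLE_DEVICES per worker; suppose rank 2 was given "2".
# This process then sees exactly one GPU: physical GPU 2.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "2")

t = torch.zeros(4, device="cuda")   # physically allocated on GPU 2
print(t.device)                     # prints "cuda:0" -- the LOCAL index
print(torch.cuda.current_device())  # prints 0, not 2

# If this local index 0 is shipped to another process as-is (e.g. inside
# IPC metadata for the tensor), the receiver resolves it against its own
# device mapping and ends up touching physical GPU 0 instead.
```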
Checklist Before Starting
- Format the PR title as [{modules}] {type}: {description} (this will be checked by the CI).
  - {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data; if multiple modules are involved, separate them with commas, like [megatron, fsdp, doc].
  - {type} is in feat, fix, refactor, chore, test.
  - If this PR breaks any API, add [BREAKING] to the beginning of the title, e.g. [BREAKING][fsdp, megatron] feat: dynamic batching.
Test
API and Usage Example
# Add code snippet or script demonstrating how to use this
Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Run pre-commit checks: pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always
- Once your PR is ready for CI, send a message in the ci-request channel in the verl Slack workspace.