
[fsdp, megatron, sglang] fix: Fixed a bug in the update_weight process where the GPU ID was being passed incorrectly. #2620

Closed

SuperCB wants to merge 2 commits into verl-project:main from SuperCB:fix_update

Conversation

@SuperCB
Contributor

@SuperCB SuperCB commented Jul 18, 2025

What does this PR do?

When using verl + Megatron + SGLang, we observed a very unusual phenomenon: extra, abnormal processes were always running on GPU 0 and occupying VRAM.

[Screenshot: abnormal processes occupying VRAM on GPU 0]

When Ray launches each process, it sets CUDA_VISIBLE_DEVICES for that process; for example, rank 1 sees only GPU 1 and rank 2 sees only GPU 2. The problem is that, from rank 2's perspective, its visible GPU is re-indexed as GPU 0 (index=0). As a result, rank 2 actually reports a GPU ID of 0 when transferring tensors across processes. Every SGLang worker therefore receives a tensor with an incorrect GPU ID, which causes abnormal memory usage on GPU 0 and, at the same time, unnecessary memory copies that slow down the update_weights process.
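A minimal standalone sketch of the index mismatch (illustrative only, not verl code; assumes a node with at least three GPUs so that physical GPU 2 exists):

```python
# Hypothetical illustration of the device-index remapping under Ray.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"   # what Ray would set for rank 2

import torch

t = torch.randn(4, device="cuda")   # physically allocated on GPU 2
print(t.device)                     # prints "cuda:0" -- the process-local index

# If this local index (0) is what gets serialized and shipped to another
# process whose CUDA_VISIBLE_DEVICES maps cuda:0 to a different physical
# card, the receiver rebuilds the tensor handle on the wrong GPU -- which
# is why everything piled up on GPU 0.
```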

This PR addresses the cross-process tensor transfer itself. A patch has to be applied both during serialization in the sending process and during deserialization in the receiving process, which verl had not implemented. Referencing the linked PR and the implementation in slime, we resolved this problem.
We also modified SGLang accordingly.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request addresses a critical bug causing excessive VRAM usage on GPU0 when using verl with Megatron and SGLang. The root cause was the incorrect expansion of named_tensors by a factor of the tensor parallel size during weight updates.

The fix is well-implemented and consists of two main changes:

  1. In verl/workers/sharding_manager/megatron_sglang.py, the serialization logic is simplified. Instead of serializing each tensor individually, the entire batch of tensors is now serialized into a single string on each tensor parallel rank. This list of serialized strings is then gathered and passed for the update.
  2. In verl/workers/rollout/sglang_rollout/sglang_rollout.py, the erroneous replication of the weight data is removed. The method now correctly accepts the list of serialized strings from the sharding manager and passes it to the SGLang engine.

These changes not only resolve the memory issue but also significantly simplify the weight update pipeline, making it more efficient and maintainable. The code modifications are clean, logical, and directly address the described problem. The PR is a solid improvement.
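A minimal sketch of that per-rank serialize-then-gather flow; the names (gather_serialized_weights, serialize_fn, tp_group) are placeholders for illustration, not the actual verl/SGLang APIs:

```python
# Illustrative sketch of the revised weight-update flow described above.
import torch.distributed as dist

def gather_serialized_weights(named_tensors, tp_group, serialize_fn):
    # Each tensor-parallel rank serializes its whole shard exactly once ...
    payload = serialize_fn(named_tensors)
    # ... then collects one serialized blob per TP rank.
    gathered = [None] * dist.get_world_size(group=tp_group)
    dist.all_gather_object(gathered, payload, group=tp_group)
    # The resulting list (one entry per TP rank) is handed to the rollout
    # engine, with no per-tensor replication by tp_size.
    return gathered
```

Because each rank contributes exactly one blob, the weight data is no longer replicated by the tensor-parallel size, which is what removes the extra copies that previously accumulated on GPU 0.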

@GeLee-Q GeLee-Q changed the title fix update_weights_from_tensors [fsdp, megatron, sglang] fix: correct tp_size expansion in rollout weight updates Jul 21, 2025
@SuperCB SuperCB changed the title [fsdp, megatron, sglang] fix: correct tp_size expansion in rollout weight updates [fsdp, megatron, sglang] fix: Fixed a bug in the update_weight process where the GPU ID was being passed incorrectly. Jul 30, 2025
@chenhaiq
Collaborator

Does this issue result in "CUDA error: an illegal memory access was encountered"?

@zhaochenyang20
Collaborator

zhaochenyang20 commented Jul 31, 2025

Does this issue result in "CUDA error: an illegal memory access was encountered"?

This may not be the same issue. The illegal memory access may be fixed by bumping to a higher version of SGLang; see #2720.

Right now it's hard for us to reproduce the illegal memory access, but typically just bumping the SGLang version fixes it.

@hebiao064
Collaborator

Does this issue result in "CUDA error: an illegal memory access was encountered"?

This may not be the same issue. The illegal memory access may be fixed by bumping to a higher version of SGLang; see #2720.

@chenhaiq @zhaochenyang20 #2720 is built on top of #2620, so we can close #2620.

As for the illegal memory access issue, we hope a higher version will solve it, but we haven't found a way to reproduce it consistently.

@vermouth1992
Collaborator

Could you resolve the conflict?

@vermouth1992
Collaborator

It seems that all the sglang tests passed. Could you help move the deprecated sglang tests out after rebase? Thanks!

@hebiao064 hebiao064 closed this Aug 1, 2025
@hebiao064
Collaborator

Duplicate of #2720.

