
turn off dit_layerwise_offload for wan on rocm #17569

Merged
Fridge003 merged 3 commits into sgl-project:main from zyzshishui:1 on Jan 23, 2026

Conversation

@zyzshishui
Contributor

Motivation

This PR relates to #16499. On MI350/MI355X, Wan models run faster with dit_layerwise_offload turned off.

Modifications

Turn off dit_layerwise_offload by default on the ROCm platform for Wan models; see the sketch below for the mechanism.
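
As the review summary later in this thread describes, the change adds a platform-level default hook that ROCm overrides. A minimal sketch of that pattern, with the class and method name taken from the review summary but the exact signatures assumed:

```python
# Sketch only: the real sglang Platform interface may differ in detail.


class Platform:
    @classmethod
    def enable_dit_layerwise_offload_for_wan_by_default(cls) -> bool:
        # Most platforms keep layerwise offload enabled for Wan by default.
        return True


class RocmPlatform(Platform):
    @classmethod
    def enable_dit_layerwise_offload_for_wan_by_default(cls) -> bool:
        # On MI350/MI355X, Wan models benchmark faster with it disabled.
        return False
```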

Accuracy Tests

Benchmarking and Profiling

MI350

| Model | 1 GPU (on / off) | Speedup | 8 GPUs (on / off) | Speedup |
| --- | --- | --- | --- | --- |
| Wan2.2-TI2V-5B-Diffusers | 53.52 s / 34.17 s | 1.57× | 32.10 s / 7.69 s | 4.17× |
| Wan2.2-T2V-A14B-Diffusers | 546.77 s / 525.65 s | 1.04× | 117.83 s / 100.49 s | 1.17× |

MI355X

| Model | 1 GPU (on / off) | Speedup | 8 GPUs (on / off) | Speedup |
| --- | --- | --- | --- | --- |
| Wan2.2-TI2V-5B-Diffusers | 48.03 s / 29.15 s | 1.65× | 28.35 s / 6.85 s | 4.14× |
| Wan2.2-T2V-A14B-Diffusers | 457.21 s / 444.15 s | 1.03× | 101.26 s / 82.88 s | 1.22× |
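
In both tables, "on / off" is the end-to-end generation time with dit_layerwise_offload enabled versus disabled, and Speedup is their ratio (e.g. 53.52 s / 34.17 s ≈ 1.57×): the larger the value, the more the model gains from turning the offload off.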

Repro commands

Wan2.2-T2V-A14B (single GPU)

```bash
PROMPT="A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

COMMON_ARGS="--log-level=info \
  --prompt=\"$PROMPT\" --negative-prompt=\" \" --720p \
  --num-inference-steps=40 --num-frames=81 --guidance-scale=5.0 --seed=42 \
  --save-output --warmup --enable-torch-compile true \
  --dit-cpu-offload false --text-encoder-cpu-offload false \
  --image-encoder-cpu-offload false --vae-cpu-offload false"

# on
sglang generate --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers $COMMON_ARGS --dit-layerwise-offload true
# off
sglang generate --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers $COMMON_ARGS --dit-layerwise-offload false
```

Wan2.2-T2V-A14B (8 GPUs)

```bash
PROMPT="A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

COMMON_ARGS="--log-level=info \
  --prompt=\"$PROMPT\" --negative-prompt=\" \" --720p \
  --num-inference-steps=40 --num-frames=81 --guidance-scale=5.0 --seed=42 \
  --save-output --warmup --enable-torch-compile true \
  --num-gpus=8 --enable-cfg-parallel --ulysses-degree=4 \
  --dit-cpu-offload false --text-encoder-cpu-offload false \
  --image-encoder-cpu-offload false --vae-cpu-offload false"

# on
sglang generate --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers $COMMON_ARGS --dit-layerwise-offload true
# off
sglang generate --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers $COMMON_ARGS --dit-layerwise-offload false
```

Wan2.2-TI2V-5B (single GPU / 8 GPUs)

```bash
PROMPT="An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
NEG="Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
IMG="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"

COMMON_ARGS="--log-level info --warmup true --enable-torch-compile true \
  --prompt \"$PROMPT\" --negative-prompt \"$NEG\" --image-path $IMG \
  --num-frames 81 --num-inference-steps 50 --guidance-scale 5.0 --seed 42 \
  --save-output --720p \
  --dit-cpu-offload false --vae-cpu-offload false --text-encoder-cpu-offload false"

# single GPU: on/off
sglang generate --model-path Wan-AI/Wan2.2-TI2V-5B-Diffusers $COMMON_ARGS --dit-layerwise-offload true
sglang generate --model-path Wan-AI/Wan2.2-TI2V-5B-Diffusers $COMMON_ARGS --dit-layerwise-offload false

# 8 GPUs: on/off
sglang generate --model-path Wan-AI/Wan2.2-TI2V-5B-Diffusers $COMMON_ARGS \
  --num-gpus=8 --enable-cfg-parallel --ulysses-degree=4 --dit-layerwise-offload true
sglang generate --model-path Wan-AI/Wan2.2-TI2V-5B-Diffusers $COMMON_ARGS \
  --num-gpus=8 --enable-cfg-parallel --ulysses-degree=4 --dit-layerwise-offload false
```

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zyzshishui, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a crucial performance enhancement for users running Wan models on ROCm-based hardware. By disabling the dit_layerwise_offload feature by default on ROCm, the system can achieve substantial speedups, particularly in multi-GPU configurations. This change streamlines the user experience by automatically applying an optimal configuration, removing the need for manual adjustments to achieve better inference performance.

Highlights

  • Performance Optimization for ROCm: The dit_layerwise_offload feature is now disabled by default for Wan models when running on ROCm platforms, based on benchmark results showing significant speed improvements.
  • Platform-Specific Configuration: A new class method, enable_dit_layerwise_offload_for_wan_by_default, has been introduced in the Platform interface, allowing specific platforms like ROCm to override the default behavior for this setting.
  • Dynamic Default Setting: The server argument checking logic has been updated to dynamically determine the default dit_layerwise_offload setting for Wan models based on the current platform's configuration.
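
For illustration, a hedged sketch of what the dynamic-default resolution described above could look like; the function name, argument names, and the platform handle are assumptions for this sketch, not the exact sglang code:

```python
from typing import Optional


def resolve_dit_layerwise_offload(
    user_value: Optional[bool],
    is_wan_model: bool,
    platform,
) -> bool:
    """Pick the effective dit_layerwise_offload setting (sketch only)."""
    # An explicit user-supplied flag always wins over any default.
    if user_value is not None:
        return user_value
    # For Wan models, defer to the platform default; ROCm returns False here.
    if is_wan_model:
        return platform.enable_dit_layerwise_offload_for_wan_by_default()
    # Non-Wan models keep the previous global default (assumed True here).
    return True
```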


@github-actions bot added the amd diffusion SGLang Diffusion labels on Jan 22, 2026
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request disables dit_layerwise_offload for Wan models on ROCm platforms by default to improve performance, which is supported by the provided benchmarks. The implementation is well-structured, introducing a new method in the Platform interface to handle platform-specific defaults and overriding it for ROCm. The logic is correctly applied in the server arguments check. The changes are clear and effective. I have one minor style suggestion to ensure code consistency.

Comment thread on python/sglang/multimodal_gen/runtime/platforms/rocm.py (outdated)
@zyzshishui
Contributor Author

/tag-run-ci-label

@Fridge003 merged commit 2169025 into sgl-project:main on Jan 23, 2026
80 of 87 checks passed
@zyzshishui deleted the 1 branch on January 23, 2026 at 20:36
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
