[diffusion] feat: introduce ltx-2-two-stage device manager#22869
Merged
Conversation
Contributor
Code Review
This pull request optimizes LTX-2.3 pipelines by implementing pre-merged stage 2 transformers and a CPU snapshot mechanism for efficient weight management. It also introduces performance profiling and adjusts default offloading settings for high-memory GPUs. Review feedback highlights safety concerns regarding the use of next(module.parameters()), which can raise StopIteration if parameters are missing. Additionally, a potential AttributeError in the denoising stage and a logic flaw in CPU tensor pinning were identified.
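The `StopIteration` concern can be addressed with a guarded lookup. A minimal sketch, assuming a hypothetical helper name (`first_param_device`) and fallback behavior rather than the PR's actual code:

```python
import torch
import torch.nn as nn

def first_param_device(module: nn.Module, default: str = "cpu") -> torch.device:
    # next(module.parameters()) raises StopIteration on parameter-less
    # modules (e.g. nn.Identity); the two-argument form of next() returns
    # a sentinel instead, which we map to a default device.
    param = next(module.parameters(), None)
    if param is None:
        return torch.device(default)
    return param.device
```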
Collaborator
Author
/tag-and-rerun-ci
Force-pushed from e89d478 to b94e8d0.
Revert _adjust_offload LTX-2.3 special branch and _temporarily_disable_offload behavior change to match origin/main, keeping only offload infrastructure changes. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#	python/sglang/multimodal_gen/runtime/pipelines_core/stages/ltx_2_denoising.py
Off-topic for the offload PR; redundant with the outer StageProfiler that already wraps each PipelineStage. Restores parity with origin/main.
BBuf
approved these changes
Apr 17, 2026
jmamou pushed a commit to jmamou/sglang that referenced this pull request on Apr 20, 2026
zhangying098 pushed a commit to zhangying098/sglang that referenced this pull request on Apr 23, 2026
kyx1999 pushed a commit to KMSorSMS/sglang that referenced this pull request on Apr 27, 2026
Modifications
- One-stage: `module.to(cpu/cuda)` switching
- Two-stage (also manages `transformer_2`, for snapshot and resident mode): `module.to(cpu/cuda)` switching with CPU-snapshot-based release; pre-merged LoRA (only applicable for two-stage)
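A rough sketch of the contrast between the two strategies, with hypothetical helper names (`make_cpu_snapshot`, `release_to_cpu_snapshot`) standing in for the PR's real API: `module.to("cpu")` allocates and copies on every switch, while a snapshot release repoints `param.data` at a CPU copy made once up front.

```python
import torch
import torch.nn as nn

def make_cpu_snapshot(module: nn.Module) -> dict[str, torch.Tensor]:
    # One-time D2H copy; pinning the CPU tensors speeds up later
    # H2D restores (pinning only applies when CUDA is present).
    snap = {}
    for name, p in module.named_parameters():
        t = p.detach().cpu().clone()
        if torch.cuda.is_available():
            t = t.pin_memory()
        snap[name] = t
    return snap

def release_to_cpu_snapshot(module: nn.Module,
                            snap: dict[str, torch.Tensor]) -> None:
    # Instead of module.to("cpu") (a fresh allocation + copy each time),
    # repoint param.data at the existing snapshot tensors; the GPU
    # storage becomes unreferenced and can be freed.
    for name, p in module.named_parameters():
        p.data = snap[name]
```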
When running the two-stage pipeline, LoRA will be applied between the two denoising stages. Previously this included: `cuda.synchronize` to wait for ongoing tasks + `layerwise_offload.disable_offload` + apply LoRA. With the new approach:
- `snapshot` mode: point `param.data` directly at `cpu_snapshot` instead of performing a D2H copy
- `resident` (an even more aggressive mode): both DiTs are always kept resident

Motivation
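Pre-merging is the standard LoRA weight merge, W ← W + scale·BA, folded into the base weights once ahead of time so no switch step (synchronize + disable offload + apply) runs between the denoising stages. A generic sketch, not the PR's implementation:

```python
import torch

def merge_lora(weight: torch.Tensor, lora_a: torch.Tensor,
               lora_b: torch.Tensor, scale: float) -> torch.Tensor:
    # Fold the low-rank delta into the base weight up front; at runtime
    # the merged weight is used directly, with no per-stage LoRA switch.
    # For rank r: lora_a is (r, in_features), lora_b is (out_features, r).
    return weight + scale * (lora_b @ lora_a)
```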
Statistics
Stage Breakdown
Peak VRAM
NOTES
`LTX2LoRASwitchStage` no longer exists in non-legacy mode

Modifications
Accuracy Tests
Speed Tests and Profiling
Checklist
Review and Merge Process
/tag-and-rerun-ci, /tag-run-ci-label, /rerun-failed-ci