Add offline auto-tuning for LoRA CSGMV kernel #20391

Merged
Fridge003 merged 8 commits into sgl-project:main on Apr 10, 2026
Conversation
Contributor
Warning: You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!
Collaborator
/tag-and-rerun-ci
Contributor (Author)
/rerun-failed-ci
zminglei reviewed on Apr 8, 2026
zminglei approved these changes on Apr 9, 2026
Contributor (Author)
/rerun-failed-ci try 2
Motivation
- Add an offline auto-tuning script for the LoRA CSGMV shrink/expand kernels (similar to the existing MoE auto-tuning).
- On H200 with Qwen3-Embedding-0.6B (rank=64), tuning yields a 2-3x speedup on the shrink kernels and 1.1-1.5x on the expand kernels.
Modifications
- `lora_tuning_config.py`: config loader with an LRU cache, following the same pattern as the MoE config loader.
- `tune_lora_csgmv.py`: offline script that generates the tuned configs.

Fallback behavior

When no tuned config is found (i.e., no config file exists for the current GPU/model/Triton version combination), the kernels fall back to the original upstream defaults.
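As a minimal sketch, a cached loader with this fallback could look like the following; the file-name scheme, default values, and function name are illustrative assumptions, not the PR's actual code:

```python
import functools
import json
import os

import triton

# Assumed upstream defaults used when no tuned config exists (illustrative values).
_DEFAULT_CONFIG = {"BLOCK_M": 32, "BLOCK_N": 64, "BLOCK_K": 32, "num_warps": 4, "num_stages": 2}


@functools.lru_cache(maxsize=None)
def get_lora_tuning_config(gpu_name: str, model_name: str, kernel: str) -> dict:
    """Load a tuned kernel config, caching one JSON read per (GPU, model, kernel).

    Falls back to the original upstream defaults when no config file matches
    the current GPU / model / Triton version combination.
    """
    fname = f"{kernel},{gpu_name},{model_name},triton={triton.__version__}.json"
    path = os.path.join(os.path.dirname(__file__), "configs", fname)
    if not os.path.exists(path):
        return dict(_DEFAULT_CONFIG)  # fallback: keep the upstream launch params
    with open(path) as f:
        return json.load(f)
```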
Usage
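An illustrative invocation of the tuning script (the flags below are assumptions for the example; consult the script's `--help` for the actual interface):

```bash
# Hypothetical flags: tune the shrink/expand kernels for one model/rank and
# emit per-GPU, per-Triton-version JSON configs for the loader to pick up.
python tune_lora_csgmv.py \
    --model Qwen/Qwen3-Embedding-0.6B \
    --lora-rank 64
```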
Accuracy Tests
No changes to kernel computation logic — only block size and launch params are tuned. The kernels produce identical outputs with different block sizes (verified by existing LoRA correctness tests).
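Because only launch parameters differ, equivalence between the default and tuned configs can also be spot-checked by running the same kernel twice and comparing outputs. A generic sketch (the `run_kernel` callable is a placeholder, not sglang's actual LoRA kernel entry point):

```python
import torch


def check_config_equivalence(run_kernel, x, weights, default_cfg, tuned_cfg):
    """Run the same kernel under two launch configs and assert matching outputs.

    `run_kernel` stands in for the shrink or expand kernel wrapper; the real
    entry points live in sglang's LoRA Triton code.
    """
    out_default = run_kernel(x, weights, **default_cfg)
    out_tuned = run_kernel(x, weights, **tuned_cfg)
    torch.testing.assert_close(out_default, out_tuned)
```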
Benchmarking and Profiling
Kernel-level tuning results (H200, Triton 3.5.1, Qwen3-Embedding-0.6B, rank=64)
Per-layer net savings at chunk_size=128 (shrink + expand combined)
E2E benchmark result
Launch Server
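A representative launch for exercising the tuned kernels on the serving path (the adapter path is a placeholder, and the flag set may vary by sglang version):

```bash
# Illustrative launch: serve the base model with one LoRA adapter attached.
python -m sglang.launch_server \
    --model-path Qwen/Qwen3-Embedding-0.6B \
    --lora-paths /path/to/lora_adapter
```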
All % gains are relative to main @ chunk_size=16 (the current default behavior).
Checklist