Improve Readme #10
Merged
merrymercy merged 7 commits into main on Jan 16, 2024
Conversation
merrymercy added a commit that referenced this pull request on Jan 16, 2024
Ying1123 pushed a commit that referenced this pull request on Sep 13, 2024
timethink pushed a commit to timethink/sglang that referenced this pull request on Mar 9, 2025
yanbing-j pushed a commit to yanbing-j/sglang that referenced this pull request on Mar 12, 2025
* add a sgl_kernel.cpu wrapper for CPU OPs in sgl-kernel
* add wrapper for attention OPs
* set default value of is_vnni to True
chunyuan-w added a commit to chunyuan-w/sglang that referenced this pull request on Mar 14, 2025
* add a sgl_kernel.cpu wrapper for CPU OPs in sgl-kernel
* add wrapper for attention OPs
* set default value of is_vnni to True
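For context, a minimal sketch of what such a CPU-op wrapper might look like. The op name `weight_packed_linear` and the `torch.ops` dispatch path are assumptions about sgl-kernel's CPU surface, not the actual module contents:

```python
# Hypothetical sketch of a thin sgl_kernel.cpu wrapper; the real op
# names and signatures in sgl-kernel may differ.
import torch


def weight_packed_linear(x, weight, bias=None, is_vnni=True):
    # is_vnni defaults to True so callers get the VNNI-packed weight
    # layout expected by the AMX/AVX-512 CPU kernels without having to
    # pass the flag at every call site.
    return torch.ops.sgl_kernel.weight_packed_linear(x, weight, bias, is_vnni)
```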
This was referenced Apr 16, 2025
sleepcoo pushed a commit to shuaills/sglang that referenced this pull request on Jun 24, 2025
fix model loading and add eagle3 inference
yichiche pushed a commit to yichiche/sglang that referenced this pull request on Jul 30, 2025
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
yichiche pushed a commit to yichiche/sglang that referenced this pull request on Aug 7, 2025
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
yichiche pushed a commit to yichiche/sglang that referenced this pull request on Aug 11, 2025
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
Xia-Weiwen pushed a commit to Xia-Weiwen/sglang that referenced this pull request on Sep 9, 2025
* Revert "port prefill optimization (sgl-project#7)" This reverts commit ea0d028. * improve bfloat16 gemm performance for prefilling before: ``` gemm_bf16(native): 4.772 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 15.328 ms ``` after: ``` gemm_bf16(native): 4.847 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 3.927 ms ``` * improve fp8 gemm performance with large M * enable amx-int8 for gemm, fused moe, shared moe and qkv_proj kernels on PyTorch 2.7 * improve int8 gemm performance with large M * improve bf16 and int8 moe performance with large nbatches * update naming for nb0 and nb1 in fused gemm and silu_mul kernel * improve fp8 moe performance with large nbatches * remove hardcode numbers --------- Co-authored-by: mingfeima <mingfei.ma@intel.com>
kalyank007 pushed a commit to kalyank007/sglang that referenced this pull request on Nov 7, 2025
amd-youchen referenced this pull request in amd-youchen/sglang on Nov 13, 2025
[Feature] add multimodal attention for rocm and fix qwenvl load error…
whybeyoung pushed a commit to whybeyoung/sglang that referenced this pull request on Nov 19, 2025
…hang PP PD dev
nithinsubbiah pushed a commit to nithinsubbiah/sglang that referenced this pull request on Nov 21, 2025
Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Add wave extend attention kernel
Signed-off-by: Harsh Menon <harsh@nod-labs.com>

[Wave] Adding logit_cap and layer scaling to API
Also add support for the wave backend to the model runner. And use Triton decode kernels for now.

[Wave] Run chunked prefill for perf comparison on Wave test
Need to rename the non-chunked/regular prefill version because otherwise rpd will treat it as the same kernel.
Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Cache the function that loads the wave kernel
Also maintain a global kernel hash to avoid recomputing the hash on every call.

[Wave] Don't specify block size and enable buffer ops

[Wave] Enable wave runtime and update scheduling API

[Wave] Update API to use wave_compile & WaveCompileOptions

[Wave] Update wave backend and extend attention to latest

[Wave] Add speculative decode kernel
Signed-off-by: nithinsubbiah <nithinsubbiah@gmail.com>

cache kernels using lru_cache

Update WaveBackend to use Wave Decode (sgl-project#6)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Revert "Update WaveBackend to use Wave Decode (sgl-project#6)" (sgl-project#7)
This reverts commit eac4599.

Wave Backend decode (sgl-project#8)
* align shapes
* fix
---------
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Wave backend fixes (sgl-project#10)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

More fixes to Wave decode (sgl-project#12)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

is_causal
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Enable the grok in3 model (sgl-project#14)

Set unique cache dir for each worker (sgl-project#16)

update kernel (sgl-project#18)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

updated spec decode test as per wave
Signed-off-by: xintin <gaurav.verma@amd.com>

fix extend (sgl-project#23)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Refactor paged decode intermediate arrays shapes (sgl-project#24)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

remove dyn symbols (sgl-project#26)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

cleanup shapes (sgl-project#27)
Some fields were removed from `paged_decode_attention_shape`.
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Remove `mha` param from Wave decode attention kernel (sgl-project#28)
Depends on iree-org/iree-turbine#1039
Signed-off-by: Paul Zhang <paul.zhang@amd.com>

nfc: fix problems reported by linting

update references from iree.turbine to wave_lang
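The "cache kernels using lru_cache" item is the standard memoization pattern. A minimal sketch, under the assumption that kernels are keyed by their shape-defining parameters; the compile function here is a stand-in, not the Wave API:

```python
# Sketch of lru_cache-based kernel caching; _compile_kernel stands in
# for an expensive wave_compile(...) call.
import functools


def _compile_kernel(num_heads: int, head_dim: int) -> str:
    # Placeholder for the costly compilation step.
    return f"decode_kernel<{num_heads}x{head_dim}>"


@functools.lru_cache(maxsize=None)
def get_decode_kernel(num_heads: int, head_dim: int) -> str:
    # The cache is keyed on the shape parameters, so each unique
    # configuration is compiled once per process; later calls with the
    # same shapes hit the cache instead of recompiling.
    return _compile_kernel(num_heads, head_dim)
```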
tpoisonooo pushed a commit to tpoisonooo/sglang that referenced this pull request on Feb 12, 2026
cscyuge pushed a commit to cscyuge/sglang that referenced this pull request on Mar 9, 2026
1. rtmp_pusher (sgl-project#10): _force_close_container() (main thread) and the _run() finally block (background thread) both read/write self._container without synchronization, risking a double-close segfault. Add a threading.Lock to protect all _container access.
2. fMP4 generator (sgl-project#13): when a client disconnects mid-stream, GeneratorExit is raised at a yield point and the generator exits without reaching the container.close() at the end, leaking the PyAV container and its file descriptors. Wrap the streaming loop and flush in try/finally so container.close() is always called.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
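Both fixes follow common concurrency and cleanup patterns. A minimal sketch assuming the class and attribute names from the message above; everything around them is hypothetical:

```python
# Sketch of the two fixes: a lock guarding _container, and try/finally
# guaranteeing the PyAV container is closed on client disconnect.
import threading


class RtmpPusher:
    def __init__(self):
        self._container = None
        self._container_lock = threading.Lock()  # guards all _container access

    def _force_close_container(self):
        # Main thread: close under the lock so the background thread's
        # finally block cannot race us into a double close.
        with self._container_lock:
            if self._container is not None:
                self._container.close()
                self._container = None


def stream_fmp4(container, segments):
    # If the client disconnects mid-stream, GeneratorExit is raised at
    # the yield point; the finally block still runs, so the container
    # (and its file descriptors) cannot leak.
    try:
        for seg in segments:
            yield seg
    finally:
        container.close()
```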
lawrence-harmonic added a commit to lawrence-harmonic/sglang that referenced this pull request on Mar 19, 2026
apinge pushed a commit to apinge/sglang that referenced this pull request on Mar 31, 2026
* add OPTFLAG="moe,w8a8_gemm"
* Update CI workflow and evaluation guide & scripts
* Bugfix
* Pin transformers version
* Update workflow for triton and pyhip install
* Debug CI error
* Update port to avoid conflict
* Revert "Update port to avoid conflict"
  This reverts commit dd868e5.
* Update CI workflow and scripts
* Fix bug
* Handle benchmark workflow with port conflict
* Fix CI error

---------

Signed-off-by: Xiake Sun <xiake.sun@amd.com>
Signed-off-by: Li,Tingqian <tingqli.amd.com>
Co-authored-by: Li,Tingqian <tingqli.amd.com>
mmangkad pushed a commit to mmangkad-dev/sglang that referenced this pull request on Apr 3, 2026
torch compile and tuning.
rucnyz added a commit to rucnyz/sglang that referenced this pull request on Apr 30, 2026
sgl-project#10 Sweep 1: 3 seeds × 5 ratios. Std is 3-5% of mean across all ratios; the 1.71× swing (4711→8075) reproduces within the noise of the original 1.91×. Variance bands are now in paper Table 1.

sgl-project#11 Setting 4 fallback rule:
- Implementation: SGLANG_XPOOL_QDEPTH_TRIGGER added to cross_pool_planner.py (gated, legacy behavior preserved).
- Unit tests: 5/5 PASS.
- E2E: both arms fired 21 transfers on Phase 1+2+3 (the workload doesn't dual-saturate; KV stays <1%). Honest finding documented in §6.4.
- Deeper fix (per-pool admission signal everywhere) is follow-up.

The SETTINGS.md scoreboard reflects both items as DONE.
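The gated env-var pattern described for SGLANG_XPOOL_QDEPTH_TRIGGER could look like this minimal sketch; the variable name comes from the commit message, while the function name and threshold semantics are assumptions:

```python
# Sketch of an env-var-gated fallback rule that preserves legacy
# behavior when the trigger is unset.
import os

# Unset or "0" disables the new rule entirely (legacy path preserved).
QDEPTH_TRIGGER = int(os.environ.get("SGLANG_XPOOL_QDEPTH_TRIGGER", "0"))


def should_fallback(queue_depth: int) -> bool:
    # Fire the cross-pool fallback only when the gate is enabled and
    # the observed queue depth reaches the configured trigger.
    return QDEPTH_TRIGGER > 0 and queue_depth >= QDEPTH_TRIGGER
```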
rucnyz added a commit to rucnyz/sglang that referenced this pull request on Apr 30, 2026
…s 28 xfers

The v9 pool-binding-shift trace produces real differentiation:
- Phase B (KV-bound 8K random): L1+L2 gives -37% mean TTFT vs stock
- Phase C (mixed 4K random): L1+L2 gives -38% median E2E vs stock
- Cross-pool transfers: stock 0, L1-only 0, L2-only 0, L1+L2 28

Two surprising findings documented:
1. Layer 2 alone fires zero transfers; Layer 1 retention is what makes Layer 2 cross the firing threshold.
2. Phase A regresses with L1 (-20% TPS) because K_big=8192 hurts on prefix-friendly GSP. Consistent with A2's finding that K_big=0 wins. Adaptive K_big control is marked as follow-up.

Settings status: Setting 1 marked **DONE v6 NULL + v9 PASS**. All 4 user-requested follow-ups (sgl-project#9 Q3.A 4-arm, sgl-project#10 Sweep 1 multi-seed, sgl-project#11 Setting 4 fallback rule, sgl-project#12 Setting 1 v9 trace) are now complete.
lujangus added a commit to tails-mpt/sglang that referenced this pull request on May 1, 2026
Closes architecture-notes.md "Open risks sgl-project#10": V4-Flash has NO Jinja chat_template field in tokenizer_config.json (it uses the Python encoding/encoding_dsv4.py), so sglang's automatic detection finds nothing and falls back to a default that doesn't match.

Two changes:
1. ChatTemplate(name="deepseek-v4", ...) registered in lang/chat_template.py right after deepseek-v3. Same role prefix/suffix structure (system empty, user="<|User|>", assistant="<|Assistant|>"/"<|end▁of▁sentence|>"), same stop_str. Thinking-mode and tool-call DSML are handled inside the model's decoder; this template only covers the role prefix structure sglang uses when it constructs prompts.
2. A match_deepseek_v4 matching function registered so model paths matching /deepseek-v4/i (and not /base/i) auto-route to the deepseek-v4 template. This mirrors the existing match_deepseek pattern that handles V3 and R1.

Reference: V4 encoding format from /tmp/v4-flash-meta/encoding/README.md:
<|begin▁of▁sentence|>{system}<|User|>{user}<|Assistant|><think>{reasoning}</think>{response}<|end▁of▁sentence|>

Phase 4 + Phase 4.5 + Phase 5/6/7 recipes all depend on this. Until this commit landed, any sglang serve or train call against deepseek-ai/DeepSeek-V4-Flash would fail at the chat-template lookup.
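A sketch of the two registrations, modeled on sglang's existing deepseek-v3 entries in lang/chat_template.py; exact field names and decorator signatures may differ by sglang version:

```python
# Sketch of registering the deepseek-v4 template and its matching
# function, following the pattern the commit message describes.
import re

from sglang.lang.chat_template import (
    ChatTemplate,
    register_chat_template,
    register_chat_template_matching_function,
)

register_chat_template(
    ChatTemplate(
        name="deepseek-v4",
        default_system_prompt=None,
        role_prefix_and_suffix={
            # Same structure as deepseek-v3: empty system wrapper,
            # role tags for user/assistant, EOS as assistant suffix.
            "system": ("", ""),
            "user": ("<|User|>", ""),
            "assistant": ("<|Assistant|>", "<|end▁of▁sentence|>"),
        },
        stop_str=("<|end▁of▁sentence|>",),
    )
)


@register_chat_template_matching_function
def match_deepseek_v4(model_path: str):
    # Case-insensitive /deepseek-v4/ match, skipping base checkpoints,
    # mirroring the existing match_deepseek for V3/R1.
    if re.search(r"deepseek-v4", model_path, re.IGNORECASE) and not re.search(
        r"base", model_path, re.IGNORECASE
    ):
        return "deepseek-v4"
```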
No description provided.