Motivation
SGLang's test suite has 600+ test files, but most are E2E tests that launch a full server. While these cover many code paths, core modules like managers/, mem_cache/, entrypoints/, sampling/, parser/, function_call/, and utils/ still need better unit test coverage (tests that run without launching a server).
Why this matters:
- E2E tests launch a server and load model weights, taking minutes. Unit tests run in seconds with mocked dependencies.
- When an E2E test fails, the root cause could be anywhere. Unit tests pinpoint the exact broken function.
What's Already Done
- Directory skeleton at test/registered/unit/ with READMEs (test/registered/README.md, test/registered/unit/README.md) documenting conventions
- Coverage tooling (.coveragerc, pytest-cov)
- CI coverage workflow ci-coverage-overview.yml (daily + manual trigger) — example run
Directory Structure
test/registered/unit/ # mirrors python/sglang/srt/
├── mem_cache/ (6 files)
├── function_call/ (5 files)
├── utils/ (4 files)
├── parser/ (3 files)
├── managers/ (3 files)
├── entrypoints/ (2 files)
├── layers/ (2 files)
├── observability/ (2 files)
├── model_loader/ (2 files)
├── server_args/ (1 file)
├── model_executor/ (1 file)
├── batch_invariant_ops/ (1 file)
└── ... (new subdirs added as tests are written)
Principle: Source file at srt/mem_cache/radix_cache.py → test at unit/mem_cache/test_radix_cache.py.
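Concretely, a new test file under this layout starts from the same skeleton regardless of module. A minimal sketch for the radix-cache example above (the CustomTestCase import path is an assumption; confirm it in test/registered/unit/README.md):

```python
# Hypothetical skeleton for unit/mem_cache/test_radix_cache.py.
# The CustomTestCase import path is an assumption based on SGLang's test
# utilities; verify it against test/registered/unit/README.md.
import unittest

from sglang.test.test_utils import CustomTestCase


class TestRadixCache(CustomTestCase):
    def test_insert_then_match_returns_cached_prefix(self):
        # Arrange: construct the cache with mocked pools/allocators,
        # not a live server.
        # Act: insert a token sequence, then match a prefix of it.
        # Assert: the match length equals the inserted prefix length.
        ...


if __name__ == "__main__":
    unittest.main()
```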
Open Tasks
Below are some suggested starting points. Comment on this issue to claim one.
You can check the latest UT coverage report run (example) or run coverage locally to find your own opportunities — modules with low coverage under core directories (managers/, sampling/, parser/, etc.) are good candidates:
pytest test/registered/unit/ --cov --cov-config=.coveragerc --cov-report=term-missing -v
Easy (good first issue)
srt/parser/ — conversation templates, reasoning parsers, code completion parser
srt/sampling/ — parameter validation, normalization, logit processor serialization (see the sketch after this list)
srt/entrypoints/openai/ — encoding/decoding, tool call formatting
srt/function_call/ — extend existing detector tests, add new detector coverage
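As a concrete starting point for the srt/sampling/ item, a parameter-validation test might look like the sketch below. This assumes SamplingParams raises ValueError from a verify() method; read sampling_params.py first, since the exact names may differ.

```python
# Hedged sketch for srt/sampling/ parameter validation. The verify()
# method and its ValueError behavior are assumptions to confirm against
# sampling_params.py before writing real tests.
import unittest

from sglang.srt.sampling.sampling_params import SamplingParams
from sglang.test.test_utils import CustomTestCase


class TestSamplingParamsValidation(CustomTestCase):
    def test_negative_temperature_is_rejected(self):
        params = SamplingParams(temperature=-1.0)
        with self.assertRaises(ValueError):
            params.verify()  # assumed validation entry point

    def test_defaults_pass_validation(self):
        SamplingParams().verify()


if __name__ == "__main__":
    unittest.main()
```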
Medium
srt/utils/ — common util helpers
srt/multimodal/ — media utils, processor base logic
srt/constrained/ — grammar dispatch logic
srt/managers/ — template manager, tokenizer utils, batch metadata helpers (see the mock-factory sketch below)
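For the managers/ item, the pattern in the existing test_prefill_adder.py is small mock factories that stub only what the code under test reads. A sketch, where the attribute names are illustrative assumptions rather than the real Req fields:

```python
# Sketch of a mock factory for manager tests. The attributes set here are
# illustrative assumptions; mirror the real request/batch objects in
# srt/managers/ for the code path you are testing.
from unittest.mock import MagicMock


def make_mock_req(num_tokens: int, priority: int = 0) -> MagicMock:
    """Request stand-in exposing only what the code under test touches."""
    req = MagicMock()
    req.origin_input_ids = list(range(num_tokens))  # assumed field name
    req.priority = priority
    return req


# Usage inside a test: pass make_mock_req(16) wherever a real request is
# expected, so no tokenizer, model weights, or server is needed.
```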
How to Contribute
- Pick a task from above (or find your own via coverage), comment to claim it
- Look at existing examples:
test/registered/unit/mem_cache/test_evict_policy.py — pure logic, zero deps
test/registered/unit/managers/test_prefill_adder.py — mock factories for scheduler tests
test/registered/unit/parser/test_reasoning_parser.py — streaming parser tests
- Follow the conventions in
test/registered/unit/README.md and the contribution guide
- Run locally:
pytest test/registered/unit/ -v
pytest test/registered/unit/ --cov --cov-config=.coveragerc -v # with coverage
- Submit a PR titled
[Test] Add unit tests for <module_name>, referencing this issue
PR Quality Bar
Note: AI-assisted code is acceptable, but you are responsible for the quality — understand every line you submit. We will reject PRs that look auto-generated without understanding — e.g., tests that only check trivial cases, assert on mock return values, or don't actually exercise the real code logic. Read the source code first, then write tests that would catch real bugs.
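To make the bar concrete, here is an illustrative contrast. Every name below is hypothetical, and the parser API in the second test is an assumption to verify against srt/parser/ before reusing the shape.

```python
import unittest
from unittest.mock import MagicMock

from sglang.test.test_utils import CustomTestCase


class TestQualityBarExamples(CustomTestCase):
    def test_rejected_style(self):
        # Anti-example: this only asserts on the mock's configured return
        # value, so no SGLang code runs and no bug can ever be caught.
        parser = MagicMock()
        parser.detect.return_value = True
        self.assertTrue(parser.detect("<tool_call>"))

    def test_accepted_style(self):
        # Drives a real object and asserts on behavior that could regress.
        # The import path, constructor arg, and return shape are assumptions.
        from sglang.srt.parser.reasoning_parser import ReasoningParser

        parser = ReasoningParser(model_type="deepseek-r1")
        reasoning, normal = parser.parse_non_stream("<think>why</think>answer")
        self.assertIn("answer", normal)
```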
PR Requirements
- Place tests under test/registered/unit/<module>/ (mirroring srt/)
- Use CustomTestCase instead of unittest.TestCase
- Register each test file with register_cpu_ci() or register_cuda_ci()
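A file satisfying all three requirements might be shaped as follows. The registration helper's import path and call shape are assumptions; copy them from an existing file under test/registered/unit/ rather than from this sketch.

```python
import unittest

from sglang.test.test_utils import CustomTestCase  # assumed path

# Registration helper: the name comes from this issue; its import path and
# arguments are assumptions. Mirror an existing registered test file.
from sglang.test.ci.ci_register import register_cpu_ci

register_cpu_ci(__name__)  # assumed call shape; CPU-only, no GPU needed


class TestTemplateManager(CustomTestCase):
    def test_resolves_chat_template_by_model_name(self):
        ...


if __name__ == "__main__":
    unittest.main()
```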