
test: Enable testing for trtllm-gen decode bs1 #2103

Merged
yzh119 merged 3 commits into flashinfer-ai:main from bkryu:test_trtllm_decode_bs1
Nov 19, 2025

Conversation

@bkryu
Collaborator

@bkryu bkryu commented Nov 18, 2025

📌 Description

In #1898, it was reported that trtllm-gen's attention kernels fail for batch size 1. The prefill kernel was fixed in #1912, and the prefill tests have been re-enabled.

Further updates to the trtllm-gen kernels have also fixed the decode batch size 1 issue. This PR re-enables the corresponding decode tests.

🔍 Related Issues

  • #1898: trtllm-gen attention kernels produce incorrect output for batch size 1
  • #1912: fix for the prefill batch size 1 case

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Expanded batch_decode test scenarios to cover additional small-batch and page-size combinations.
    • Increased coverage for max_in_kv_len by testing multiple length options instead of a single value.
    • Restored a previously expected-to-fail (xfail) case to run normally, improving overall test coverage.

@coderabbitai
Contributor

coderabbitai Bot commented Nov 18, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

The test file tests/attention/test_trtllm_gen_attention.py adds a new batch_decode case with batch_size=1 and page_size=32, expands max_in_kv_len options from [8192] to [4096, 8192], and removes the xfail marker for the batch_size==1 case.

Changes

Batch decode test expansion (tests/attention/test_trtllm_gen_attention.py):
Added a new batch_decode configuration (batch_size=1, q_len_per_req=1, page_size=32, num_kv_heads=8, head_grp_size=8); expanded max_in_kv_len options from [8192] to [4096, 8192]; removed the bs1 xfail marker so the test runs normally.
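
As a rough illustration of the expanded matrix described above, the combinations can be sketched as a plain cross product. This is not the actual pytest parametrization from the test file; the tuple layout mirrors the parameter names in the summary, and only the values named in this PR are shown.

```python
# Illustrative sketch of the expanded decode test matrix -- not the
# actual test file. The bs=1 case and the kv-len options come from this
# PR's walkthrough; any other pre-existing cases are omitted.
from itertools import product

# (batch_size, q_len_per_req, page_size, num_kv_heads, head_grp_size)
batch_decode_cases = [
    (1, 1, 32, 8, 8),   # new bs=1 case added by this PR
]
max_in_kv_len_options = [4096, 8192]  # expanded from [8192]

# With the xfail removed, every combination runs normally.
matrix = list(product(batch_decode_cases, max_in_kv_len_options))
print(len(matrix))  # 2 combinations involving the new bs=1 case
```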

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify removal of the xfail for batch_size==1 is intentional and test expectations are correct.
  • Confirm the new parameter combination is valid and doesn't produce intermittent failures.
  • Check effects of adding 4096 to max_in_kv_len on CI coverage and timing.


Suggested reviewers

  • nvmbreughe
  • PerkzZheng
  • cyx-6

Poem

🐰 I hopped through tests where xfails lay,
A tiny batch one led the way.
Pages sized and lengths extended,
The rabbit cheered — no case suspended! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title clearly and concisely describes the main change (enabling testing for trtllm-gen decode with batch size 1) and aligns with the file changes.
  • Description check ✅ Passed: The PR description adequately explains the context and purpose: re-enabling decode batch size 1 tests after kernel fixes, with references to related issues #1898 and #1912.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@gemini-code-assist
Contributor

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on validating the stability and correctness of trtllm-gen attention kernels for batch size 1 decode operations. It removes the xfail marker from relevant tests, allowing them to execute and pass, following recent fixes to the kernels. Additionally, new test parameters are introduced to broaden the coverage and ensure robustness under various configurations.

Highlights

  • Test Re-enablement: Re-enables previously failing tests for trtllm-gen decode operations with batch size 1, indicating that the underlying kernel issues have been resolved.
  • Expanded Test Coverage: Adds new test configurations for trtllm-gen batch decode, specifically including a test case with page_size=32 and another with max_in_kv_len=4096 for batch size 1 scenarios.
  • Issue Resolution Confirmation: Confirms that recent updates to trtllm-gen kernels have successfully addressed the issue where decode operations produced incorrect output for batch size 1.

@bkryu
Collaborator Author

bkryu commented Nov 18, 2025

/bot run

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request re-enables testing for trtllm-gen decode with a batch size of 1, which was previously failing. The changes correctly remove the pytest.xfail marker and expand test coverage by adding more parameterizations for page_size and max_in_kv_len. The changes are correct and align with the goal of the pull request. I have one suggestion to further improve test coverage.

```python
    head_dim,
):
    # Small number of test cases for batch size 1
    pytest.xfail("trtllm-gen decode gets incorrect output with bs1")
```

Severity: medium

While re-enabling this test by removing pytest.xfail, it's a great opportunity to also improve test coverage for the bs=1 case. The trtllm-gen backend appears to support the NHD layout via transposition in the wrapper, and it's already being tested for other batch sizes in test_trtllm_batch_decode. Consider enabling NHD layout testing for this test as well by modifying line 982 to:

```python
@pytest.mark.parametrize("kv_layout", ["HND", "NHD"])
```
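
To make the suggestion concrete, the layout difference it targets can be sketched as a pure shape permutation. This is an illustration only; the `nhd_to_hnd` helper and the N/H/D naming (N = tokens, H = num KV heads, D = head dim) are assumptions for this sketch, not code from the test file or the FlashInfer API.

```python
# Illustrative sketch of the HND vs. NHD KV-cache layouts behind the
# reviewer's suggestion. Shapes only; real kernels operate on tensors.
# Assumed naming: N = tokens, H = num KV heads, D = head dim.

def nhd_to_hnd(shape):
    """Permute an (N, H, D) shape to (H, N, D), as a wrapper-side
    transpose would, letting an HND-only kernel accept NHD input."""
    n, h, d = shape
    return (h, n, d)

# e.g. 128 tokens, 8 KV heads, head_dim 64
print(nhd_to_hnd((128, 8, 64)))  # (8, 128, 64)
```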

@flashinfer-bot
Collaborator

GitLab MR !144 has been created, and the CI pipeline #38686045 is currently running. I'll report back once the pipeline job completes.

@flashinfer-bot
Collaborator

[FAILED] Pipeline #38686045: 5/18 passed

@yzh119
Collaborator

yzh119 commented Nov 18, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !144 has been updated with latest changes, and the CI pipeline #38743306 is currently running. I'll report back once the pipeline job completes.

@bkryu bkryu self-assigned this Nov 19, 2025
@flashinfer-bot
Collaborator

[FAILED] Pipeline #38743306: 14/18 passed

@yzh119 yzh119 merged commit b9964cc into flashinfer-ai:main Nov 19, 2025
4 checks passed
@bkryu bkryu deleted the test_trtllm_decode_bs1 branch November 21, 2025 21:42
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026