
[None][chore] NVLinkOneSided AlltoAll Support zero local_num_tokens. #9822

Merged
bobboli merged 2 commits into NVIDIA:main from bobboli:alltoall_zero_tokens
Dec 22, 2025

Conversation

@bobboli
Collaborator

@bobboli bobboli commented Dec 9, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Fixed handling of zero tokens on individual ranks in multi-GPU token distribution, preventing errors and improving robustness for non-uniform token scenarios.
    • Enhanced synchronization logic to gracefully manage edge cases without data loss or crashes.
  • Tests

    • Expanded test coverage to include non-uniform token distributions with zero tokens on specific ranks.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional) pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional) pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.
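For example, combining only flags documented above, a retrigger that reuses prior build artifacts while disabling fail-fast would be: /bot run --reuse-test --disable-fail-fast.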

@coderabbitai
Contributor

coderabbitai Bot commented Dec 9, 2025

📝 Walkthrough

These changes enable the MoE all-to-all communication kernels to handle zero local tokens gracefully. The dispatch and combine kernels are updated with conditional guards, synchronized shared-memory allocations, and relaxed validation checks to safely process cases where some ranks have no tokens.
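To make the pattern concrete, here is a minimal hand-written sketch of the guard structure described above; the kernel name follows the PR, but the body is illustrative rather than the actual source:

```cuda
// Illustrative sketch of the zero-token guard pattern (not the real kernel body).
__global__ void moeA2ADispatchKernelSketch(int local_num_tokens /*, payload args... */)
{
    int const local_token_idx = blockIdx.x;

    // Blocks past the last real token exit early, EXCEPT when there are no tokens
    // at all: then block 0 stays alive so the rank still reaches the barriers.
    if (local_token_idx >= local_num_tokens && local_num_tokens != 0)
    {
        return;
    }

    if (local_num_tokens != 0)
    {
        // Shared-memory setup, top-K routing bookkeeping and payload dispatch
        // run only when this rank actually owns tokens.
    }

    // Every surviving block, including the zero-token case, participates in the
    // cross-rank synchronization barriers here.
}
```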

Changes

  • Kernel implementation (cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu): Added guards throughout moeA2ADispatchKernel and moeA2ACombineKernel to handle local_num_tokens == 0; conditionally allocate shared memory only when tokens exist; reworked per-token tiling with lane-based logic; introduced per-k bookkeeping for top-K routing with atomic assignments and synchronization barriers; moved payload dispatch to per-token, per-payload loops; relaxed launch validation from > 0 to >= 0; adjusted kernel launches to use minimal grid sizes when zero tokens are present.
  • Op wrapper (cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp): Removed the runtime check enforcing localNumTokens > 0, allowing zero tokens while preserving other validations.
  • Test coverage (tests/unittest/_torch/multi_gpu/test_moe_a2a.py): Added the test parameter tuple (4, [32, 0, 16, 0], 2) to both the test_dispatch and test_combine parameterizations, extending coverage to non-uniform token distributions with zero tokens on select ranks.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • moeAlltoAllKernels.cu: Multiple interconnected logic changes to kernel dispatch/combine paths requiring careful verification of guard conditions, synchronization barriers, and shared-memory allocation logic, particularly for zero-token edge cases
  • Synchronization handling: Review the new ThreadingPolicy::sync calls and atomic operations to ensure correct coordination across thread blocks when local_num_tokens is zero
  • Launch grid logic: Verify that minimal grid launches (grid_size = 1) correctly participate in synchronization without causing deadlock or under-utilization

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 60.00%, which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (⚠️ Warning): The pull request description is largely incomplete. Required sections (Description and Test Coverage) are empty; only the template structure and checklist are visible. Add a clear Description section explaining the issue and solution, and list the relevant tests that safeguard the changes (e.g., test_moe_a2a.py::TestMoEAlltoAll::test_dispatch with new test case [32, 0, 16, 0]).
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly and specifically describes the main change, adding support for zero local_num_tokens in NVLinkOneSided AlltoAll, which aligns with the code modifications across kernel, operator, and test files.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/unittest/_torch/multi_gpu/test_moe_a2a.py (1)

569-569: Consider adding the zero-token configuration to test_dispatch as well.

The zero-token test case (4, [32, 0, 16, 0], 2) is added to test_combine but not to test_dispatch (lines 477-496). Since the dispatch kernel also includes zero-token handling logic, adding this configuration to test_dispatch would provide symmetric test coverage and ensure both dispatch and combine operations are validated for zero-token scenarios.

Apply this diff to add the zero-token test case to test_dispatch:

             (4, [32, 32, 32, 32], 8),  # Four ranks with top_k = 8

             # Edge cases
             (4, [1, 1, 1, 1], 2),  # Four ranks with single token per rank
+            (4, [32, 0, 16, 0], 2),  # Four ranks with zero tokens on some ranks
         ],
         indirect=["mpi_pool_executor"])
     def test_dispatch(self, mpi_pool_executor, all_num_tokens, top_k):
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c7a2568 and abfb89f.

📒 Files selected for processing (3)
  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (8 hunks)
  • cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp (0 hunks)
  • tests/unittest/_torch/multi_gpu/test_moe_a2a.py (1 hunks)
💤 Files with no reviewable changes (1)
  • cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tests/unittest/_torch/multi_gpu/test_moe_a2a.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tests/unittest/_torch/multi_gpu/test_moe_a2a.py
  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
**/*.{cpp,h,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,h,cu}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #define whenever possible, as the latter are not visible to the compiler
A variable that is not modified after its initialization should be declared as const
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization and should be replaced with named constants
Use Allman indentation style for braces in C++
Put the semicolon for an empty for or while loop in a new line
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements)
If and else should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camel case with first letter lowercase (e.g., thisIsASubDir and thisIsAFilename.cpp)
All filenames involved in compilation of a compilation target must have case-insensitive unique filenames
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass)
Local variables, methods and namespaces should use camel case with first letter lowercase (e.g., localFooBar)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal)
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;)
Public, private and protected class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues), though the 'm' pre...

Files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
**/*.cu

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

CUDA code must be compiled with a CUDA compiler and includes declarations/definitions with CUDA keywords (__device__, __managed__, __constant__, __global__), device functions, and kernel launching with <<<...>>> syntax

Files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
🧠 Learnings (14)
📓 Common learnings
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-14T15:36:37.610Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/kernels/mlaKernels.cu:436-439
Timestamp: 2025-08-14T15:36:37.610Z
Learning: CUDA kernels prioritize performance and should avoid runtime bounds checking or conditional operations that cause branching/warp divergence. Input validation should be done at the host level before kernel launch, not per-thread in the kernel.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
🧬 Code graph analysis (1)
cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (1)
tests/unittest/_torch/multi_gpu/test_moe_a2a.py (1)
  • compute_target_rank_id (44-56)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (8)

350-353: LGTM! Boundary check correctly enables zero-token synchronization.

The condition local_token_idx >= local_num_tokens && local_num_tokens != 0 ensures that when local_num_tokens == 0, threads with local_token_idx == 0 proceed past this check to participate in synchronization barriers (lines 435-503), while skipping token processing (guarded at line 355).


355-433: LGTM! Token processing correctly guarded for zero-token case.

The token processing logic (shared memory setup, routing computation, and payload dispatch) is properly guarded by if (local_num_tokens != 0). This ensures ranks with zero tokens skip all token-related work while still participating in synchronization (lines 435-503).

Minor note: Shared memory is still allocated in the launch configuration (lines 571, 585) even when local_num_tokens == 0, causing unused allocation. This is a negligible inefficiency and doesn't affect correctness.


443-451: LGTM! Last-token detection correctly handles zero-token case.

When local_num_tokens == 0, setting is_last_token = true immediately (without atomicAdd) is correct. This allows all threads to proceed directly to synchronization barriers without incrementing a meaningless counter.


521-521: LGTM! Validation correctly relaxed to allow zero tokens.

Changing the check from > 0 to >= 0 enables the zero-token support that this PR implements. Other validations remain appropriately strict.


566-584: LGTM! Minimal grid launch ensures synchronization participation.

Setting grid_size = 1 when local_num_tokens == 0 ensures ranks with no tokens still launch a minimal kernel to participate in synchronization barriers. This approach is consistent with existing patterns in TensorRT-LLM NCCL device kernels where all ranks launch uniform grids for optimal barrier performance.

Based on learnings, this design aligns with established practices in the codebase.
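A minimal host-side sketch of this launch strategy follows; the helper name and the kernel signature are illustrative, not the exact code in moeAlltoAllKernels.cu:

```cuda
#include <cassert>
#include <cuda_runtime.h>

// Forward declaration of the illustrative dispatch kernel from the sketch above.
__global__ void moeA2ADispatchKernelSketch(int local_num_tokens);

// Illustrative host-side launch: a rank with zero local tokens still launches
// one block so that it can join the cross-rank barriers inside the kernel.
void launchDispatchSketch(int localNumTokens, int threadsPerBlock, cudaStream_t stream)
{
    assert(localNumTokens >= 0); // validation relaxed from "> 0" to ">= 0" in this PR
    int const gridSize = (localNumTokens == 0) ? 1 : localNumTokens;
    moeA2ADispatchKernelSketch<<<gridSize, threadsPerBlock, 0, stream>>>(localNumTokens);
}
```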


799-802: LGTM! Combine kernel boundary check mirrors dispatch pattern.

The condition local_token_idx >= local_num_tokens && local_num_tokens != 0 follows the same pattern as the dispatch kernel (line 350), ensuring consistent zero-token handling across both operations.


853-854: LGTM! Early return correctly placed after synchronization.

Returning early when local_num_tokens == 0 is placed after the synchronization barriers (lines 804-851), ensuring ranks with no tokens participate in cross-rank coordination before exiting. This ordering is critical for correctness.
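In sketch form (simplified; the real combine kernel carries payload and scale parameters), the ordering constraint looks like this:

```cuda
// Simplified ordering sketch for the combine path: barriers first, then ranks
// that own no tokens return before any per-token combine work.
__global__ void moeA2ACombineKernelSketch(int local_num_tokens /*, payload args... */)
{
    // ... cross-rank synchronization barriers run unconditionally here ...

    if (local_num_tokens == 0)
    {
        return; // nothing to combine locally; the barriers above have already run
    }

    // ... per-token combine work for ranks that do own tokens ...
}
```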


908-924: LGTM! Combine launch logic consistently handles zero tokens.

The validation relaxation (line 908) and minimal grid launch (lines 917-924) mirror the dispatch implementation, providing symmetric zero-token handling across both dispatch and combine operations.

Outdated comment thread: cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
@bobboli bobboli force-pushed the alltoall_zero_tokens branch 2 times, most recently from edb5ddf to 8de40f8, on December 15, 2025 at 07:15
@bobboli
Collaborator Author

bobboli commented Dec 15, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #28373 [ run ] triggered by Bot. Commit: 8de40f8

@tensorrt-cicd
Collaborator

PR_Github #28373 [ run ] completed with state SUCCESS. Commit: 8de40f8
/LLM/main/L0_MergeRequest_PR pipeline #21710 completed with status: 'FAILURE'

@bobboli
Collaborator Author

bobboli commented Dec 16, 2025

/bot run --reuse-test

@tensorrt-cicd
Collaborator

PR_Github #28532 [ run ] triggered by Bot. Commit: 8de40f8

@tensorrt-cicd
Collaborator

PR_Github #28532 [ run ] completed with state SUCCESS. Commit: 8de40f8
/LLM/main/L0_MergeRequest_PR pipeline #21851 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bobboli
Collaborator Author

bobboli commented Dec 21, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #29324 [ run ] triggered by Bot. Commit: 990ebd6

@tensorrt-cicd
Collaborator

PR_Github #29324 [ run ] completed with state SUCCESS. Commit: 990ebd6
/LLM/main/L0_MergeRequest_PR pipeline #22517 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@bobboli
Collaborator Author

bobboli commented Dec 22, 2025

/bot run --reuse-test

@tensorrt-cicd
Collaborator

PR_Github #29365 [ run ] triggered by Bot. Commit: 990ebd6

@tensorrt-cicd
Collaborator

PR_Github #29365 [ run ] completed with state SUCCESS. Commit: 990ebd6
/LLM/main/L0_MergeRequest_PR pipeline #22555 completed with status: 'SUCCESS'

@bobboli bobboli enabled auto-merge (squash) December 22, 2025 07:44
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
@bobboli bobboli force-pushed the alltoall_zero_tokens branch from 990ebd6 to fb223de on December 22, 2025 at 10:02
@bobboli
Collaborator Author

bobboli commented Dec 22, 2025

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #29409 [ reuse-pipeline ] triggered by Bot. Commit: fb223de

@tensorrt-cicd
Collaborator

PR_Github #29409 [ reuse-pipeline ] completed with state SUCCESS. Commit: fb223de
Reusing PR_Github #29365 for commit fb223de

@bobboli bobboli merged commit 472fe49 into NVIDIA:main Dec 22, 2025
6 of 7 checks passed
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 22, 2025
…VIDIA#9822)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
yzh119 pushed a commit to flashinfer-ai/flashinfer that referenced this pull request Dec 23, 2025
<!-- .github/pull_request_template.md -->

## 📌 Description

This is a port of NVIDIA/TensorRT-LLM#9822 which
was done by @bobboli

This feature is necessary for SGlang integration because some DP workers
may have 0 tokens. The workaround to use a dummy token is quite messy
and brittle.

## 🔍 Related Issues

Follow up to #2102

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [ ] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

<!-- Optional: anything you'd like reviewers to focus on, concerns, etc.
-->


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved robustness of mixture-of-experts all-to-all communication to
gracefully handle scenarios with zero local tokens, preventing
synchronization failures and ensuring stable operation in edge cases.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
JunyiXu-nv pushed a commit to JunyiXu-nv/TensorRT-LLM that referenced this pull request Dec 30, 2025
…VIDIA#9822)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
murphymatt pushed a commit to fw-ai/flashinfer that referenced this pull request Jan 4, 2026
videodanchik pushed a commit to videodanchik/TensorRT-LLM that referenced this pull request Jan 14, 2026
…VIDIA#9822)

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Daniil Kulko <kulkodaniil@gmail.com>
murphymatt pushed a commit to fw-ai/flashinfer that referenced this pull request Mar 31, 2026