
Port TRT-LLM communication kernels to flashinfer #2102

Merged
yzh119 merged 36 commits into flashinfer-ai:main from djns99:djns99/update-trtllm-kernels
Dec 15, 2025

Conversation

@djns99 (Contributor) commented Nov 18, 2025

📌 Description

This ports the latest MNNVL A2A communication implementation from TRT-LLM.

🔍 Related Issues

#2094

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features

    • Added a throughput‑optimized Mixture‑of‑Experts all‑to‑all backend with dispatch, combine and sanitization phases and a high‑level Python class for workspace management and zero‑copy payloads.
    • New runtime tunables for MOE A2A behavior and block‑size tuning; KV‑cache time output setting renamed.
  • Documentation

    • Added API reference for the new A2A throughput backend.
  • Tests

    • Added comprehensive single‑ and multi‑rank end‑to‑end tests and updated test runners to include A2A scenarios.


@coderabbitai (Bot) commented Nov 18, 2025

Walkthrough

Adds a throughput-optimized MOE all-to-all backend: new CUDA dispatch/combine/sanitize kernels and headers, C++ TVM/FFI bridge exposing workspace/init/dispatch/combine/sanitize/metainfo APIs, Python JIT/AOT integration and MoeAlltoAll manager, env helpers, meta-info utilities, tests, and CI script updates.
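
For orientation, here is a minimal usage sketch of the new Python manager; the constructor and method signatures below are assumptions for illustration only (the real API lives in flashinfer/comm/trtllm_moe_alltoall.py), and running it requires MNNVL-capable hardware:

import torch
from flashinfer.comm import MoeAlltoAll
from flashinfer.comm.mapping import Mapping

ep_rank, ep_size = 0, 4                                   # illustrative expert-parallel layout
max_num_tokens, top_k, num_experts, hidden = 256, 8, 32, 1024

mapping = Mapping(world_size=ep_size, rank=ep_rank, moe_ep_size=ep_size)   # assumed kwargs
a2a = MoeAlltoAll(mapping, max_num_tokens, top_k, num_experts)             # assumed signature

x = torch.randn(max_num_tokens, hidden, dtype=torch.bfloat16, device="cuda")
experts = torch.randint(0, num_experts, (max_num_tokens, top_k),
                        dtype=torch.int32, device="cuda")

recv = a2a.dispatch(experts, [x])     # scatter per-token payloads to the owning EP ranks (assumed call)
expert_out = recv[0]                  # placeholder: local expert computation would happen here
out = a2a.combine(expert_out)         # gather and top-k reduce back to the source ranks (assumed call)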

Changes

Cohort / File(s) Summary
Environment Configuration
csrc/nv_internal/cpp/common/envUtils.cpp, csrc/nv_internal/tensorrt_llm/common/envUtils.h
Removed getEnvParallelCacheSend(). Renamed getEnvKVCacheTransferOutputPath() to getEnvKVCacheTimeOutputPath(), now returning std::string const&. Added getEnvMoeA2AOneBlockPerToken(), getEnvMoeA2ADispatchBlockSize(), getEnvMoeA2ACombineBlockSize(), getEnvEplbForceGdrcopy(), and sanitizeBlockSize() (clamps to 256–1024, aligns to 32).
CUDA Kernel Implementation
csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu, csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h
New MOE A2A CUDA/C++ implementation: warp/block policies, vectorized copy/dispatch/combine helpers, prepare/dispatch/combine/sanitize kernels, selection macros, data structs (DispatchKernelPointers, CombineKernelPointers, PayloadDescriptor), and host launch wrappers.
TensorRT / TVM FFI Bridge
csrc/trtllm_moe_alltoall.cu
New FFI-exported implementation exposing workspace sizing (moe_a2a_get_aux_data_size), initialize, dispatch, combine, sanitize_expert_ids, and metainfo-index retrieval; implements workspace layout, payload handling, and Python-facing metainfo helpers.
Python API & Workspace Manager
flashinfer/comm/trtllm_moe_alltoall.py, flashinfer/comm/__init__.py
New MoeAlltoAll class and top-level wrappers (initialize/dispatch/combine/sanitize/get_workspace_size/wrap_payload), per-process workspace cache and lifecycle; re-exported symbols in module init.
JIT / AOT Integration
flashinfer/jit/comm.py, flashinfer/jit/__init__.py, flashinfer/aot.py
Added gen_moe_alltoall_module() JIT spec (sources + extra include paths) and re-export; integrated into AOT generation when SM100 present.
Meta Info Utilities
csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h
Added MoeA2AMetaInfoIndex enum, MoeA2ADataOffsets alias, and getMoeA2AMetaInfoIndexPairs() mapping names to indices.
Tests — New & Updated
tests/comm/test_trtllm_moe_alltoall.py, tests/comm/test_mnnvl_moe_alltoall.py, tests/comm/test_mnnvl_memory.py
Added comprehensive single-/multi-rank tests (dispatch, combine, sanitize), synthetic MOE helpers, MPI orchestration, validations; updated test_mnnvl_memory.py to set CUDA device via local_rank.
Test Scripts
scripts/task_test_single_node_comm_kernels.sh, scripts/task_test_multi_node_comm_kernels.sh
Added pytest invocations for new MOE A2A tests; multi-node script commented out pycache cleanup and added the new test call.
Documentation
docs/api/comm.rst
Added "MNNVL A2A (Throughput Backend)" subsection and autosummary entries for new public API symbols.
Build / JIT Sources
flashinfer/jit/comm.py, csrc/.../envUtils.cpp, tllmException.cpp, .../moeAlltoAllKernels.cu
JIT spec includes the new CUDA/C++ sources and env utilities for building the module.

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant Py as Python MoeAlltoAll
    participant JIT as JIT Module
    participant Cpp as C++ FFI
    participant CUDA as CUDA Kernels
    participant Net as Network/P2P

    App->>Py: moe_a2a_dispatch(...)
    Py->>JIT: call registered dispatch op (workspace, metainfo)
    JIT->>Cpp: FFI -> moe_a2a_dispatch
    Cpp->>CUDA: launch prepare_dispatch kernel
    Cpp->>CUDA: launch dispatch kernel
    CUDA->>Net: write/send per-rank payloads
    Net->>CUDA: peers receive payloads

    App->>Py: moe_a2a_combine(...)
    Py->>JIT: call combine op
    JIT->>Cpp: FFI -> moe_a2a_combine
    Cpp->>CUDA: launch prepare_combine (copy recv->workspace)
    Cpp->>CUDA: launch combine kernel
    CUDA->>Cpp: return outputs / set flags

    App->>Py: moe_a2a_sanitize_expert_ids(...)
    Py->>JIT: sanitize op
    JIT->>Cpp: FFI -> sanitize entry
    Cpp->>CUDA: launch sanitize kernel

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Areas needing extra attention:
    • CUDA kernels, vectorized memory ops and synchronization (moeAlltoAllKernels.cu / .h).
    • FFI/workspace layout, alignment, and Python/C++ type mappings (trtllm_moe_alltoall.cu).
    • Python MoeAlltoAll workspace caching, concurrency, and lifetime (flashinfer/comm/trtllm_moe_alltoall.py).
    • JIT/AOT spec correctness and build include paths (flashinfer/jit/comm.py, flashinfer/aot.py).
    • MPI/tests: deterministic behavior, device selection, and race conditions in new tests.

Suggested reviewers

  • djmmoss
  • aleozlx
  • nvmbreughe
  • yongwww
  • cyx-6
  • wenscarl
  • bkryu

Poem

🐰 I hopped through kernels, threads, and queues,

I packed the ranks with tiny news,
Dispatch hop, combine hooray,
Flags blink bright and bytes obey,
A rabbit cheers — A2A, woo-hoo! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 33.04%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: the PR title clearly and concisely summarizes the main objective of porting TRT-LLM communication kernels to flashinfer, which aligns with the substantial changes across multiple files.
  • Description check ✅ Passed: the description states the main change (porting MNNVL A2A from TRT-LLM), references the related issue (#2094), and confirms that pre-commit checks and tests were completed, though it lacks detail on what changed and why beyond the one-line summary.

@djns99 force-pushed the djns99/update-trtllm-kernels branch 5 times, most recently from 710a388 to bd82a2b on November 20, 2025 04:18
Comment thread flashinfer/comm/trtllm_moe_alltoall.py
#define check_timeout(s) false
#else
// 300 * 2000 MHz - should be high enough on any GPU but will prevent a hang
#define check_timeout(s) ((clock64() - (s)) > (300ll * 2000ll * 1000ll * 1000ll))
djns99 (Contributor, Author) commented:

I have manually added this; can someone sanity-check my logic here?

Contributor:

Looks good to me
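
For the sanity check requested above, the arithmetic behind the timeout constant is (quick Python sketch, not part of the PR):

# check_timeout budget: 300 s at an assumed 2000 MHz clock, expressed in clock64() ticks
seconds, clock_hz = 300, 2000 * 1000 * 1000
timeout_cycles = seconds * clock_hz           # matches 300ll * 2000ll * 1000ll * 1000ll in the macro
assert timeout_cycles == 600_000_000_000      # ~6e11 cycles
assert timeout_cycles < 2**63                 # comfortably inside a signed 64-bit clock64() delta
# On GPUs clocked above 2 GHz the wall-clock budget is shorter than 300 s, which is still
# generous for a last-resort hang breaker.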

Comment thread scripts/task_test_multi_node_comm_kernels.sh
@djns99 force-pushed the djns99/update-trtllm-kernels branch from f766bfe to 8cdf8d8 on November 27, 2025 02:00
Comment thread flashinfer/comm/trtllm_moe_alltoall.py
Comment thread flashinfer/comm/trtllm_moe_alltoall.py
@djns99 marked this pull request as ready for review on November 28, 2025 02:03
@coderabbitai (Bot) left a review comment

Actionable comments posted: 5

🧹 Nitpick comments (18)
csrc/nv_internal/cpp/common/envUtils.cpp (1)

357-357: Consider caching getEnvEplbForceGdrcopy like other bool env helpers

getEnvEplbForceGdrcopy calls getBoolEnv (and thus std::getenv) on every invocation, while most other helpers in this file cache the value in a static local. Functionally this is fine, but for consistency and to avoid repeated env lookups in hot paths you might want to align it:

-bool getEnvEplbForceGdrcopy() { return getBoolEnv("TRTLLM_EPLB_FORCE_GDRCOPY"); }
+bool getEnvEplbForceGdrcopy() {
+  static bool const forceGdrcopy = getBoolEnv("TRTLLM_EPLB_FORCE_GDRCOPY");
+  return forceGdrcopy;
+}

Not critical, but it would match the rest of the env-utils style.

csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (2)

428-431: Consider documenting why the acquire fence is commented out.

The fence.acquire.sys at line 430 is commented out after the dispatch wait loop. While the combine kernel (line 735) does have an acquire fence, having it commented here without explanation could cause confusion for future maintainers. If this is intentional (relying on the combine kernel's fence), a brief comment explaining the design decision would help.

       }
-      // asm volatile("fence.acquire.sys;");
+      // NOTE: Acquire fence intentionally omitted here; combine kernel provides
+      // the acquire semantics before reading peer data.
 #endif

596-609: Generic fallback is unreachable code.

The generic fallback reduction loop (lines 599-608) can never be reached because the SWITCH_TOP_K macro (lines 53-78) only allows TOP_K values of 1, 2, 4, or 8, and all these cases have explicit handling above. Consider removing this dead code or adding a static_assert to document the constraint.

     } else if constexpr (TOP_K == 1) {
       // nothing to do
-    } else {
-      // Generic fallback: accumulate all into acc[0]
-      T* a0 = reinterpret_cast<T*>(&acc[0]);
-#pragma unroll
-      for (int k = 1; k < TOP_K; ++k) {
-        T* ak = reinterpret_cast<T*>(&acc[k]);
-#pragma unroll
-        for (int j = 0; j < elems_per_vec; ++j) {
-          a0[j] += ak[j];
-        }
-      }
+    } else {
+      static_assert(TOP_K == 1 || TOP_K == 2 || TOP_K == 4 || TOP_K == 8,
+                    "Only TOP_K values 1, 2, 4, 8 are supported");
     }
scripts/task_test_multi_node_comm_kernels.sh (1)

9-13: Disabling cache cleanup may cause stale import issues.

The cache cleanup commands are commented out. If module refactoring occurs between test runs, stale .pyc files could cause import errors or unexpected behavior. Consider re-enabling these commands or documenting why they're disabled.

tests/comm/test_trtllm_moe_alltoall.py (4)

1-2: Copyright year should be updated to 2025.

The license header shows 2024 but this is a new file created in 2025.

-Copyright (c) 2024 by FlashInfer team.
+Copyright (c) 2025 by FlashInfer team.

112-112: Potential issue with payload size calculation.

x[0].numel() gets the number of elements in the first row, but if input_tensors is a list of 2D tensors, this calculates size per token correctly. However, the variable name payload_size_per_token and the indexing x[0] could be clearer.

-    payload_size_per_token = sum([x[0].numel() * x.itemsize for x in input_tensors])
+    payload_size_per_token = sum([x.shape[-1] * x.element_size() for x in input_tensors])

207-236: CUDA streams created but not explicitly cleaned up.

The cuda_streams_all_ranks list creates CUDA streams that are not explicitly destroyed. While Python's garbage collector will eventually clean them up, for test reliability consider using a context manager or explicit cleanup.
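
One possible shape for that cleanup, as a sketch (the helper below is hypothetical, not code from this PR):

import contextlib
import torch

@contextlib.contextmanager
def rank_streams(num_ranks: int):
    """Yield one CUDA stream per simulated rank and synchronize them all on exit."""
    streams = [torch.cuda.Stream() for _ in range(num_ranks)]
    try:
        yield streams
    finally:
        for s in streams:
            s.synchronize()   # make sure all per-rank work has finished before the test returns

# usage in the test:
#   with rank_streams(ep_size) as cuda_streams_all_ranks:
#       ...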


411-411: Minor typo in comment.

Extra slash at end of comment.

-        # For each expert selected for this token/
+        # For each expert selected for this token
tests/comm/test_mnnvl_moe_alltoall.py (4)

37-46: Consider using raise without exception name per Python best practices.

The explicit raise e is redundant; bare raise preserves the traceback better.

 def safe_run(func, *args, **kwargs):
     comm = MPI.COMM_WORLD
     try:
         func(*args, **kwargs)
     except MPIExit as e:
-        raise e
+        raise
     except Exception as e:
         traceback.print_exc()
         comm.allgather(True)
-        raise e
+        raise

49-51: Test fixture should yield for proper cleanup semantics.

Even though no cleanup is needed, the fixture pattern should include yield for consistency.

 @pytest.fixture(autouse=True)
 def setup_test():
     torch.manual_seed(0x1234)
+    yield

571-576: Blind exception catch may mask real initialization errors.

Catching bare Exception when checking MNNVL support could hide legitimate configuration issues. Consider catching specific exception types or at least logging the exception.

     try:
         MnnvlMemory.initialize()
         if not MnnvlMemory.supports_mnnvl():
             pytest.skip("MNNVL not supported on this system")
-    except Exception:
+    except (RuntimeError, pynvml.NVMLError) as e:
+        # Log exception for debugging if needed
         pytest.skip("MNNVL not supported on this system")

709-712: Unused variable expert_id_payload_index as flagged by static analysis.

The unpacked variable is never used. Either prefix with underscore or remove from unpacking.

-    payloads, expert_id_payload_index = make_bfloat16_payloads(
+    payloads, _expert_id_payload_index = make_bfloat16_payloads(
         local_num_tokens, hidden_size, top_k, rank, token_selected_experts
     )
flashinfer/comm/trtllm_moe_alltoall.py (5)

8-8: TODO comment should be addressed or tracked.

The # TODO Review comment at the top suggests this module needs review. Consider removing after review or converting to a tracked issue.

Would you like me to open an issue to track any remaining review items?


351-351: Mutable class attribute should use ClassVar annotation.

Per static analysis and Python best practices, mutable class attributes should be annotated with typing.ClassVar.

+from typing import ClassVar
+
 class MoeAlltoAll:
     ...
     # Single shared workspace across the process
-    _WORKSPACE_CACHE: dict[tuple[int, int, int, int], dict] = {}
+    _WORKSPACE_CACHE: ClassVar[dict[tuple[int, int, int, int], dict]] = {}

456-463: Assertions for validation could use proper exceptions in production.

Using assert for validation is acceptable for debug builds but these checks may be skipped in optimized Python (python -O). Consider using explicit if/raise for critical invariants.
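
A sketch of the explicit-check alternative (the function and argument names are illustrative):

def check_expert_ids_shape(expert_ids, num_tokens: int, top_k: int) -> None:
    # Explicit validation that survives `python -O`, unlike
    # `assert expert_ids.shape == (num_tokens, top_k)`.
    if tuple(expert_ids.shape) != (num_tokens, top_k):
        raise ValueError(
            f"expert_ids must have shape {(num_tokens, top_k)}, got {tuple(expert_ids.shape)}"
        )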


610-610: Inefficient way to get element size.

Creating an empty tensor just to get element size is wasteful. Use torch.finfo or torch.iinfo or a lookup table instead.

-        element_size = torch.tensor([], dtype=dtype).element_size()
+        # More efficient: use dtype itemsize directly
+        element_size = torch.empty(0, dtype=dtype).element_size()

Or better, consider caching element sizes or using:

element_size = torch.finfo(dtype).bits // 8 if dtype.is_floating_point else torch.iinfo(dtype).bits // 8

621-628: __all__ is not sorted as noted by static analysis.

Consider sorting for consistency, though this is a minor issue.

 __all__ = [
     "MoeAlltoAll",
     "moe_a2a_initialize",
+    "moe_a2a_combine",
     "moe_a2a_dispatch",
-    "moe_a2a_combine",
+    "moe_a2a_get_workspace_size_per_rank",
     "moe_a2a_sanitize_expert_ids",
-    "moe_a2a_get_workspace_size_per_rank",
 ]
csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h (1)

78-120: Well-documented struct with clear field descriptions.

The MoeA2ADispatchParams struct has excellent inline documentation explaining each field's purpose and dimensions. The TODO on line 90-91 about renaming max_tokens_per_rank to runtime_max_tokens_per_rank should be tracked.

Would you like me to open an issue to track the TODO about renaming max_tokens_per_rank?

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 18004a8 and 71bb8fba3180a44c7bf11e43d1c42ad4cd046421.

📒 Files selected for processing (16)
  • csrc/nv_internal/cpp/common/envUtils.cpp (2 hunks)
  • csrc/nv_internal/tensorrt_llm/common/envUtils.h (2 hunks)
  • csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (1 hunks)
  • csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h (1 hunks)
  • csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h (1 hunks)
  • csrc/trtllm_moe_a2a.cu (1 hunks)
  • docs/api/comm.rst (1 hunks)
  • flashinfer/aot.py (1 hunks)
  • flashinfer/comm/__init__.py (1 hunks)
  • flashinfer/comm/trtllm_moe_alltoall.py (1 hunks)
  • flashinfer/jit/__init__.py (1 hunks)
  • flashinfer/jit/comm.py (1 hunks)
  • scripts/task_test_multi_node_comm_kernels.sh (1 hunks)
  • tests/comm/test_mnnvl_memory.py (1 hunks)
  • tests/comm/test_mnnvl_moe_alltoall.py (1 hunks)
  • tests/comm/test_trtllm_moe_alltoall.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
flashinfer/jit/__init__.py (1)
flashinfer/jit/comm.py (1)
  • gen_mnnvl_a2a_module (83-109)
csrc/nv_internal/tensorrt_llm/common/envUtils.h (1)
csrc/nv_internal/cpp/common/envUtils.cpp (8)
  • getEnvKVCacheTimeOutputPath (275-278)
  • getEnvKVCacheTimeOutputPath (275-275)
  • getEnvMoeA2AOneBlockPerToken (326-333)
  • getEnvMoeA2AOneBlockPerToken (326-326)
  • getEnvMoeA2ADispatchBlockSize (347-350)
  • getEnvMoeA2ADispatchBlockSize (347-347)
  • getEnvMoeA2ACombineBlockSize (352-355)
  • getEnvMoeA2ACombineBlockSize (352-352)
tests/comm/test_mnnvl_memory.py (1)
flashinfer/comm/mapping.py (1)
  • local_rank (391-392)
csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h (1)
csrc/trtllm_moe_a2a.cu (2)
  • getMoeA2AMetaInfoIndexPairs (395-407)
  • getMoeA2AMetaInfoIndexPairs (395-395)
csrc/trtllm_moe_a2a.cu (1)
csrc/nv_internal/cpp/common/envUtils.cpp (2)
  • getEnvMoeA2AOneBlockPerToken (326-333)
  • getEnvMoeA2AOneBlockPerToken (326-326)
flashinfer/aot.py (1)
flashinfer/jit/comm.py (1)
  • gen_mnnvl_a2a_module (83-109)
flashinfer/jit/comm.py (1)
flashinfer/jit/core.py (2)
  • JitSpec (213-312)
  • gen_jit_spec (315-381)
csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (1)
csrc/nv_internal/cpp/common/envUtils.cpp (4)
  • getEnvMoeA2ADispatchBlockSize (347-350)
  • getEnvMoeA2ADispatchBlockSize (347-347)
  • getEnvMoeA2ACombineBlockSize (352-355)
  • getEnvMoeA2ACombineBlockSize (352-352)
flashinfer/comm/trtllm_moe_alltoall.py (4)
flashinfer/comm/mnnvl.py (5)
  • MnnvlMemory (232-551)
  • MnnvlConfig (224-229)
  • as_torch_strided_tensor (264-273)
  • initialize (276-285)
  • set_comm_from_config (288-293)
flashinfer/comm/mapping.py (2)
  • Mapping (21-475)
  • moe_ep_rank (349-350)
flashinfer/jit/comm.py (1)
  • gen_mnnvl_a2a_module (83-109)
include/flashinfer/trtllm/fused_moe/runner.h (1)
  • num_experts (263-263)
tests/comm/test_mnnvl_moe_alltoall.py (2)
flashinfer/comm/trtllm_moe_alltoall.py (4)
  • MoeAlltoAll (336-618)
  • dispatch (484-541)
  • get_combine_payload_tensor_in_workspace (585-618)
  • combine (543-583)
flashinfer/comm/mnnvl.py (3)
  • MnnvlMemory (232-551)
  • initialize (276-285)
  • supports_mnnvl (545-551)
csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h (3)
csrc/nv_internal/tensorrt_llm/common/envUtils.h (1)
  • tensorrt_llm (23-104)
csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h (1)
  • mnnvl_throughput (25-58)
csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (8)
  • moe_a2a_dispatch_launch (445-506)
  • moe_a2a_dispatch_launch (445-445)
  • moe_a2a_prepare_dispatch_launch (436-439)
  • moe_a2a_prepare_dispatch_launch (436-436)
  • moe_a2a_combine_launch (792-842)
  • moe_a2a_combine_launch (792-792)
  • moe_a2a_sanitize_expert_ids_launch (864-872)
  • moe_a2a_sanitize_expert_ids_launch (864-866)
🪛 Clang (14.0.6)
csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h

[error] 19-19: 'array' file not found

(clang-diagnostic-error)

csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h

[error] 18-18: 'cuda_bf16.h' file not found

(clang-diagnostic-error)

🪛 Ruff (0.14.6)
flashinfer/comm/trtllm_moe_alltoall.py

351-351: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


443-443: Avoid specifying long messages outside the exception class

(TRY003)


445-445: Avoid specifying long messages outside the exception class

(TRY003)


606-608: Avoid specifying long messages outside the exception class

(TRY003)


621-628: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

tests/comm/test_mnnvl_moe_alltoall.py

34-34: Avoid specifying long messages outside the exception class

(TRY003)


42-42: Use raise without specifying exception name

Remove exception name

(TRY201)


46-46: Use raise without specifying exception name

Remove exception name

(TRY201)


575-575: Do not catch blind exception: Exception

(BLE001)


676-676: Do not catch blind exception: Exception

(BLE001)


710-710: Unpacked variable expert_id_payload_index is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

🔇 Additional comments (38)
csrc/nv_internal/cpp/common/envUtils.cpp (2)

275-277: KV cache time output path now cached by reference – behavior looks consistent

Returning std::string const& to a function-local static is safe here and matches the pattern used by other helpers in this file (single evaluation of the env var, cheap repeated access). The only behavior change is that changes to TRTLLM_KVCACHE_TIME_OUTPUT_PATH after the first call are no longer observed, which is usually fine for env-based config.

If you don’t rely on mutating the env mid-process (including in tests), this change looks good to me.


326-355: Verify the MOE A2A environment variable prefixes (TLLM_ vs TRTLLM_) against project documentation and call sites, and fix the misleading comment in sanitizeBlockSize to say "round up" rather than "nearest".

The new helpers follow the file's consistent patterns (single-read statics, sane defaults), but need attention in two areas:

  1. Env var prefixes for MoE A2A knobs

    • These functions read TLLM_MOE_A2A_ONE_BLOCK_PER_TOKEN, TLLM_MOE_A2A_DISPATCH_BLOCK_SIZE, and TLLM_MOE_A2A_COMBINE_BLOCK_SIZE
    • Review the project documentation, call sites, and surrounding code to confirm whether the TLLM_ prefix matches the intended usage; most other env vars in this file use the TRTLLM_ prefix, and silently reading the wrong names would be problematic.
  2. sanitizeBlockSize comment accuracy

    • The implementation rounds up to a multiple of 32: block = (block + 31) / 32 * 32; (e.g., 33 → 64, not 32)
    • The comment currently states "Round to nearest multiple of 32 (warp size)", which is misleading.

    Fix the comment:

-  // Round to nearest multiple of 32 (warp size)
+  // Round up to the next multiple of 32 (warp size)
    
    The extra `if (block == 0) block = 256;` check after clamping is redundant but harmless.
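
For reference, the described clamp-and-round-up behaviour modelled in Python (a sketch of the semantics, not the C++ code itself):

def sanitize_block_size(block: int) -> int:
    block = max(256, min(1024, block))   # clamp to [256, 1024]
    return (block + 31) // 32 * 32       # round *up* to the next multiple of 32 (warp size)

assert sanitize_block_size(300) == 320    # in range after clamping, then rounded up
assert sanitize_block_size(0) == 256      # below range: clamped to the minimum
assert sanitize_block_size(2048) == 1024  # above range: clamped to the maximum
# The rounding expression alone maps 33 -> 64, which is why "round up" is the accurate wording.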
    
    
csrc/nv_internal/tensorrt_llm/common/envUtils.h (1)

95-102: LGTM! New MoE A2A environment variable accessors are properly declared.

The three new accessor functions (getEnvMoeA2AOneBlockPerToken, getEnvMoeA2ADispatchBlockSize, getEnvMoeA2ACombineBlockSize) are well-documented with default behaviors and align with their implementations in envUtils.cpp.

csrc/nv_internal/tensorrt_llm/thop/moeAlltoAllMeta.h (3)

19-22: Static analysis false positive - standard headers are valid.

The Clang error about <array> not being found is a false positive. This is a standard C++11 header that should be available in any modern C++ environment. The includes are correct.


28-43: LGTM! Well-structured metadata index enum.

The MoeA2AMetaInfoIndex enum provides clear, sequential indexing for metadata fields with NUM_METAINFO_FIELDS = 9 correctly representing the count of actual data fields (0-8). The MoeA2ADataOffsets type alias correctly uses this count for the array size.


45-58: LGTM! Useful name-to-index mapping function.

The inline getMoeA2AMetaInfoIndexPairs() function provides a clean way to expose metadata field names and their corresponding indices, which is consumed by the TVM FFI interface in csrc/trtllm_moe_a2a.cu.

csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (4)

114-116: Timeout calculation looks reasonable.

The timeout of 300ll * 2000ll * 1000ll * 1000ll cycles (~600 billion) translates to approximately 300 seconds at 2 GHz, which provides a generous upper bound to prevent infinite hangs while allowing ample time for legitimate synchronization delays. The calculation avoids overflow by using long long literals.


272-276: LGTM! Single-threaded flag increment is safe.

The flag_val increment occurs only when idx == 0, ensuring single-threaded access. Since this kernel runs sequentially in the stream before the dispatch kernel, there's no race condition.


844-872: LGTM! Sanitize kernel implementation is correct.

The kernel correctly identifies invalid tokens (where token_idx >= recv_counters[source_rank]) and sets their expert IDs to invalid_id. Each thread operates on disjoint memory locations, avoiding any race conditions.


315-344: The kMaxRanks constant definition could not be located for automated verification (the repository clone failed and it is not publicly documented), so the core concern below needs manual confirmation against:

  1. The header file containing kMaxRanks definition
  2. The actual value of kMaxRanks
  3. Runtime validation constraints on ep_size (expert-parallel size)

The potential undefined behavior is legitimate: if target_rank can be 64 or greater, the bit shift 1ULL << target_rank on a uint64_t would indeed cause undefined behavior in C++.


Verify kMaxRanks does not exceed 64 to avoid undefined behavior.

The already_copied bitmask uses uint64_t with bit operations 1ULL << target_rank. If target_rank can be 64 or greater, this causes undefined behavior (shifting by >= width of type). The code validates params.ep_size <= kMaxRanks at line 448, so ensure kMaxRanks is defined as ≤ 64 in the header.
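
The constraint is easy to model: in C++ a shift by 64 or more on a uint64_t is undefined behaviour, so the mask only works if ranks are capped at 64 (illustrative Python with the bound made explicit; the 64-rank limit matches the header constants noted later in this review):

K_MAX_RANKS = 64  # assumed cap; must not exceed the 64 bits available in the mask

def mark_copied(already_copied: int, target_rank: int) -> int:
    # already_copied plays the role of the kernel's uint64_t bitmask
    if not 0 <= target_rank < K_MAX_RANKS:
        raise ValueError(f"target_rank {target_rank} does not fit a 64-bit mask")
    return already_copied | (1 << target_rank)

mask = mark_copied(0, 3)
assert mask & (1 << 3)        # rank 3 recorded as copied
assert not mask & (1 << 5)    # rank 5 not yet copied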

flashinfer/aot.py (1)

515-522: LGTM! MNNVL A2A module integration follows existing patterns.

The new gen_mnnvl_a2a_module is correctly imported within the add_comm block and added under the has_sm100 condition, consistent with the existing gen_trtllm_comm_module and gen_trtllm_mnnvl_comm_module placement.

tests/comm/test_mnnvl_memory.py (1)

125-125: LGTM! Correct device selection for multi-node scenarios.

Using self.local_rank instead of self.rank is the correct fix for multi-node setups where the global rank may exceed the local GPU count. This aligns with the setup fixture (line 51) and the Mapping.local_rank property shown in the relevant snippet.

scripts/task_test_multi_node_comm_kernels.sh (1)

17-19: LGTM - new MoE A2A test coverage added.

The addition of test_mnnvl_moe_alltoall.py aligns with the new MoE A2A functionality introduced in this PR.

flashinfer/jit/__init__.py (1)

80-80: LGTM!

The re-export follows the established pattern for module generators in this file.

flashinfer/comm/__init__.py (1)

42-52: LGTM!

The new MoE A2A exports follow the established import patterns and properly expose the public API surface for the throughput backend.

csrc/trtllm_moe_a2a.cu (6)

53-99: LGTM - offset calculations and workspace sizing are well-structured.

The alignment logic using cache-line boundaries (128 bytes) is appropriate for GPU memory access patterns. The offset calculations correctly account for metadata fields and payload regions.


101-123: LGTM - initialization logic is correct.

Input validation is comprehensive. The cudaStreamSynchronize at line 119 ensures the memset completes before returning, which is necessary for correctness.


132-262: LGTM - dispatch implementation is solid.

The function has comprehensive input validation, proper payload descriptor setup, workspace size verification, and appropriate CUDA error checking after kernel launch.


264-277: LGTM!

The dtype conversion correctly handles the common floating-point types used in MoE computations. The unreachable return after the throw silences compiler warnings.


279-361: LGTM - combine implementation is well-validated.

The function properly validates all inputs including the optional payloadInWorkspace optimization for zero-copy scenarios. Error handling is comprehensive.


393-416: LGTM!

The metainfo accessor correctly wraps the C++ pairs for Python accessibility, and all FFI exports are properly registered.

flashinfer/jit/comm.py (1)

83-109: Verify whether architecture-specific NVCC flags are required for the gen_mnnvl_a2a_module JIT spec. The automated checks attempted here boiled down to: searching moeAlltoAllKernels.cu for __CUDA_ARCH__ / SM guards and tensor-core intrinsics (mma/wmma), and inspecting how nvcc_flags is defined and used across flashinfer/jit/comm.py and the rest of the flashinfer/jit module; they did not complete, so this needs manual verification.

tests/comm/test_trtllm_moe_alltoall.py (6)

25-29: Session-scoped fixture with torch.manual_seed may not reset seed between tests.

The fixture sets the seed once at session start but individual tests may want reproducible seeds. Consider if this is intentional or if test-level seeding is needed.


62-69: Good defensive check for SM resources.

This helper appropriately skips tests when insufficient SMs are available for parallel kernel execution. The check prevents hangs on systems with limited GPU resources.


72-84: LGTM!

The make_payload helper correctly distinguishes between integer and floating-point types for random tensor generation.


138-141: Sorting approach for validation is reasonable but fragile.

Sorting both input and output tensors to compare them works for this test case but relies on unique values. If there are duplicate values, the sort order could differ. Consider documenting this assumption or using a more robust comparison.


388-429: Reference implementation for fake_moe looks correct.

The fake_moe function properly handles expert parallelism filtering and accumulation. The tree reduction comment on line 423 correctly explains why results are summed after collection.
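
A condensed sketch of what such a reference implementation does (illustrative only; the real fake_moe in the tests also covers the tree-reduction ordering mentioned above):

import torch

def reference_moe(hidden, expert_ids, expert_weights, ep_rank, experts_per_rank):
    """Apply only the experts owned by ep_rank and accumulate the top-k contributions."""
    out = torch.zeros_like(hidden)
    lo, hi = ep_rank * experts_per_rank, (ep_rank + 1) * experts_per_rank
    for t in range(hidden.shape[0]):          # per token
        for e in expert_ids[t].tolist():      # per selected expert
            if lo <= e < hi:                  # EP filtering: skip experts hosted on other ranks
                out[t] += hidden[t] @ expert_weights[e - lo]
    return out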


530-536: Relatively loose tolerance for numerical comparison.

Using atol=1.5e-2 and rtol=1.5e-2 is quite loose for bf16/fp16. This may mask precision issues. Verify this tolerance is intentional given the accumulation order differences mentioned elsewhere.

tests/comm/test_mnnvl_moe_alltoall.py (3)

293-293: Direct modification of class variable _WORKSPACE is concerning.

Setting MoeAlltoAll._WORKSPACE = None directly before instantiation suggests test isolation concerns. This should be documented or handled via a proper reset method.

Consider whether _reset_workspace() method from MoeAlltoAll should be used instead, or if this pattern is intentional for test setup.


800-813: Good documentation of tolerance rationale.

The comment on line 809 clearly explains why a 99% match threshold is used instead of exact comparison due to bf16 accumulation order differences. This is helpful for future maintainers.


836-838: Helpful run instructions in docstring.

The comment showing how to run with mpirun is useful for developers unfamiliar with MPI testing.

flashinfer/comm/trtllm_moe_alltoall.py (3)

353-383: Workspace caching strategy looks correct.

The caching by (workspace_size_per_rank, ep_rank, ep_size, max_num_tokens) tuple allows reusing workspaces across instances with compatible configurations. This addresses the past review comment about supporting different shaped communicators.


470-482: _reset_workspace method deletes from class cache without thread safety.

If multiple threads could access this class simultaneously, the del operation on _WORKSPACE_CACHE could cause issues. Document that this method is not thread-safe.

     def _reset_workspace(self):
-        """Reset the workspace to free up its state. This is mainly used for testing. Use this with caution. This object is no longer usable after this."""
+        """Reset the workspace to free up its state.
+        
+        Warning: This method is not thread-safe and is mainly used for testing.
+        This object is no longer usable after calling this method.
+        """

505-508: Good use of state machine pattern for dispatch/combine sequencing.

The phase checking prevents calling dispatch twice without combine and ensures proper operation ordering. This is a clean design.
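
The sequencing guard amounts to a tiny two-state machine, roughly (sketch, not the actual implementation):

class PhaseGuardSketch:
    """Reject dispatch-after-dispatch and combine-before-dispatch orderings."""

    def __init__(self) -> None:
        self._phase = "idle"

    def dispatch(self) -> None:
        if self._phase != "idle":
            raise RuntimeError("dispatch called twice without an intervening combine")
        self._phase = "dispatched"

    def combine(self) -> None:
        if self._phase != "dispatched":
            raise RuntimeError("combine called before dispatch")
        self._phase = "idle"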

csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h (4)

17-19: Static analysis reports missing cuda_bf16.h - this is a false positive.

The cuda_bf16.h and cuda_fp16.h headers are provided by the CUDA toolkit and will be available during compilation with nvcc. Static analysis tools without CUDA environment cannot find these headers.


23-27: Configuration constants are well-documented and reasonable.

The limits (256 experts, 8 top-k, 8 payloads, 64 ranks) provide good flexibility while keeping fixed-size arrays manageable. Consider whether these should be configurable at runtime if larger deployments are anticipated.


173-179: Function declarations are clean and match the implementation.

The kernel launch function declarations align with the implementations shown in the relevant code snippets from moeAlltoAllKernels.cu.


148-148: Unable to verify include configuration due to repository access failure.

The repository clone failed, preventing me from examining the file's include structure, verifying whether nvinfer1::DataType is actually used, or confirming if the necessary headers are already present. Manual verification is required to confirm:

  1. Whether csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h currently includes NvInfer headers
  2. Whether nvinfer1::DataType is actually declared in the file or included transitively
  3. Whether the code compiles successfully without the suggested include

Comment thread csrc/trtllm_moe_a2a.cu Outdated
Comment thread csrc/trtllm_moe_a2a.cu
Comment thread docs/api/comm.rst
Comment thread flashinfer/comm/trtllm_moe_alltoall.py
@coderabbitai (Bot) left a review comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
flashinfer/comm/trtllm_moe_alltoall.py (3)

379-379: Annotate mutable class attribute with ClassVar.

Per Python best practices, mutable class attributes should be annotated with ClassVar to make clear they are shared across instances.

+from typing import ClassVar
+
 class MoeAlltoAll:
     ...
-    _WORKSPACE_CACHE: dict[tuple[int, int, int, int], dict] = {}
+    _WORKSPACE_CACHE: ClassVar[dict[tuple[int, int, int, int], dict]] = {}

638-638: Consider using torch.finfo or torch.iinfo for element size.

Creating an empty tensor just to get element size has minor overhead. Consider using dtype introspection directly.

-        element_size = torch.tensor([], dtype=dtype).element_size()
+        element_size = torch._utils._element_size(dtype)

Alternatively, keep the current approach if you prefer avoiding private APIs.


649-656: Consider adding moe_a2a_wrap_payload_tensor_in_workspace to __all__.

This function is used in tests and appears to be part of the public API. Also consider sorting __all__ for consistency.

 __all__ = [
     "MoeAlltoAll",
+    "moe_a2a_combine",
     "moe_a2a_initialize",
     "moe_a2a_dispatch",
-    "moe_a2a_combine",
+    "moe_a2a_get_workspace_size_per_rank",
     "moe_a2a_sanitize_expert_ids",
-    "moe_a2a_get_workspace_size_per_rank",
+    "moe_a2a_wrap_payload_tensor_in_workspace",
 ]
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71bb8fba3180a44c7bf11e43d1c42ad4cd046421 and 222a2e8815f7c5de657846a95e3b9dbd3702a6d9.

📒 Files selected for processing (6)
  • csrc/trtllm_moe_alltoall.cu (1 hunks)
  • flashinfer/aot.py (1 hunks)
  • flashinfer/comm/trtllm_moe_alltoall.py (1 hunks)
  • flashinfer/jit/__init__.py (1 hunks)
  • flashinfer/jit/comm.py (1 hunks)
  • tests/comm/test_trtllm_moe_alltoall.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • flashinfer/aot.py
  • flashinfer/jit/comm.py
🧰 Additional context used
🧬 Code graph analysis (2)
csrc/trtllm_moe_alltoall.cu (2)
csrc/tvm_ffi_utils.h (3)
  • Tensor (282-284)
  • get_current_stream (266-270)
  • encode_dlpack_dtype (29-31)
flashinfer/comm/trtllm_moe_alltoall.py (6)
  • moe_a2a_get_workspace_size_per_rank (175-198)
  • moe_a2a_get_workspace_size_per_rank (350-361)
  • moe_a2a_initialize (41-47)
  • moe_a2a_initialize (210-218)
  • moe_a2a_dispatch (53-93)
  • moe_a2a_dispatch (251-309)
tests/comm/test_trtllm_moe_alltoall.py (7)
flashinfer/comm/mapping.py (1)
  • Mapping (21-475)
tests/test_helpers/test_helpers.py (1)
  • get_device_properties (10-11)
include/flashinfer/trtllm/fused_moe/runner.h (1)
  • num_experts (263-263)
flashinfer/comm/trtllm_moe_alltoall.py (13)
  • moe_a2a_get_workspace_size_per_rank (175-198)
  • moe_a2a_get_workspace_size_per_rank (350-361)
  • MoeAlltoAll (364-646)
  • dispatch (512-569)
  • get_combine_payload_tensor_in_workspace (613-646)
  • combine (571-611)
  • moe_a2a_initialize (41-47)
  • moe_a2a_initialize (210-218)
  • moe_a2a_dispatch (53-93)
  • moe_a2a_dispatch (251-309)
  • moe_a2a_sanitize_expert_ids (146-155)
  • moe_a2a_sanitize_expert_ids (338-347)
  • moe_a2a_wrap_payload_tensor_in_workspace (221-248)
flashinfer/fused_moe/utils.py (1)
  • _ (157-163)
csrc/xqa/mha.cu (1)
  • any (157-157)
tests/comm/test_mnnvl_moe_alltoall.py (1)
  • fake_moe (121-181)
🪛 Ruff (0.14.6)
flashinfer/comm/trtllm_moe_alltoall.py

379-379: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


471-471: Avoid specifying long messages outside the exception class

(TRY003)


473-473: Avoid specifying long messages outside the exception class

(TRY003)


634-636: Avoid specifying long messages outside the exception class

(TRY003)


649-656: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

🔇 Additional comments (16)
flashinfer/jit/__init__.py (1)

80-80: LGTM!

The new import follows the established pattern for re-exporting JIT module generators from the comm submodule.

csrc/trtllm_moe_alltoall.cu (5)

53-88: LGTM!

The offset calculation logic correctly aligns data structures to cache-line boundaries where needed, following a clear sequential layout pattern.


101-123: LGTM!

The initialization correctly validates inputs, zeros the workspace region, and returns metadata offsets. The stream synchronization ensures the workspace is properly initialized before returning.


278-360: LGTM!

The combine operation has thorough input validation, properly handles the workspace-backed payload case, and includes appropriate error checking after kernel launch.


362-394: LGTM!

The sanitization operation correctly validates inputs and launches the kernel with proper error checking.


396-419: LGTM!

The metainfo index pairs helper provides a clean mechanism to expose C++ constants to Python, and all required functions are properly exported.

tests/comm/test_trtllm_moe_alltoall.py (6)

74-86: LGTM!

The payload generator correctly handles both integer and floating-point dtypes for test data generation.


93-161: LGTM!

Comprehensive single-GPU test covering multiple payload dtypes, dispatch/combine workflow, and workspace-backed tensor operations.


164-240: LGTM!

The helper correctly simulates multi-rank dispatch on a single GPU using separate CUDA streams, with proper synchronization.


302-344: LGTM!

The multi-rank test correctly validates token routing across simulated ranks with proper verification of payload delivery.


390-431: LGTM!

The reference MoE implementation provides a deterministic baseline for verifying combine correctness, with appropriate handling of expert-parallel scenarios.


434-551: LGTM!

Comprehensive combine test covering multiple dtypes, workspace configurations, and ranks with appropriate numerical tolerances for reduced-precision arithmetic.

flashinfer/comm/trtllm_moe_alltoall.py (4)

32-207: LGTM!

The JIT module getter follows the established pattern with proper caching and custom op registration.


221-248: LGTM!

The function correctly creates a workspace-backed tensor view with properly documented parameters.


470-473: LGTM!

The validation logic is appropriate and the exception messages are concise.


498-510: LGTM!

The reset method appropriately handles workspace cleanup for testing scenarios, with clear documentation about post-call state.

Comment thread csrc/trtllm_moe_alltoall.cu Outdated
Comment thread flashinfer/comm/trtllm_moe_alltoall.py Outdated
Comment thread tests/comm/test_trtllm_moe_alltoall.py
@djns99 force-pushed the djns99/update-trtllm-kernels branch from 6e9bed5 to a51b1ea on November 28, 2025 02:42
@coderabbitai (Bot) left a review comment

Actionable comments posted: 0

🧹 Nitpick comments (6)
csrc/trtllm_moe_alltoall.cu (1)

263-276: Consider extending dtype support for future flexibility.

The toNvDataType function currently supports half, bfloat16, and float32. Consider documenting supported types or adding int8/fp8 support if those are common in MoE workloads.

tests/comm/test_mnnvl_moe_alltoall.py (3)

711-712: Unused variable is intentional; consider underscore prefix per Ruff hint.

The expert_id_payload_index is returned by the helper but not used in this test. Consider renaming to _expert_id_payload_index to signal intentional discard.

-    payloads, expert_id_payload_index = make_bfloat16_payloads(
+    payloads, _expert_id_payload_index = make_bfloat16_payloads(

293-294: Setting class attribute _WORKSPACE = None may conflict with class-level cache.

Assigning MoeAlltoAll._WORKSPACE = None resets a non-existent instance attribute. The class uses _WORKSPACE_CACHE for caching. This assignment has no effect but is misleading.

Consider removing this line or using MoeAlltoAll._WORKSPACE_CACHE.clear() if the intent is to reset the cache:

-    MoeAlltoAll._WORKSPACE = None
+    MoeAlltoAll._WORKSPACE_CACHE.clear()

742-742: Same issue: _WORKSPACE = None assignment is ineffective.

This line also sets a non-existent attribute. Consider removing or using _WORKSPACE_CACHE.clear().

-    MoeAlltoAll._WORKSPACE = None
flashinfer/comm/trtllm_moe_alltoall.py (2)

375-377: Annotate mutable class attribute with ClassVar per Ruff hint.

The _WORKSPACE_CACHE is a mutable class-level attribute that should be annotated with ClassVar to make the intent clear.

+from typing import ClassVar
+
 class MoeAlltoAll:
     ...
-    _WORKSPACE_CACHE: dict[tuple[int, int, int, int], dict] = {}
+    _WORKSPACE_CACHE: ClassVar[dict[tuple[int, int, int, int], dict]] = {}

496-508: Document that _reset_workspace invalidates the instance.

The docstring mentions this but it's critical: after calling _reset_workspace, the object is unusable. Consider adding a stronger warning or raising an exception on subsequent method calls.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d375afefcadb43a5d70996b38715199fe7aed0e0 and db22fce908a1b36285177059090943325127dea6.

📒 Files selected for processing (4)
  • csrc/trtllm_moe_alltoall.cu (1 hunks)
  • flashinfer/comm/trtllm_moe_alltoall.py (1 hunks)
  • tests/comm/test_mnnvl_moe_alltoall.py (1 hunks)
  • tests/comm/test_trtllm_moe_alltoall.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
csrc/trtllm_moe_alltoall.cu (2)
csrc/tvm_ffi_utils.h (3)
  • Tensor (282-284)
  • get_current_stream (266-270)
  • encode_dlpack_dtype (29-31)
flashinfer/comm/trtllm_moe_alltoall.py (11)
  • moe_a2a_get_workspace_size_per_rank (173-196)
  • moe_a2a_get_workspace_size_per_rank (348-359)
  • moe_a2a_initialize (39-45)
  • moe_a2a_initialize (208-216)
  • moe_a2a_dispatch (51-91)
  • moe_a2a_dispatch (249-307)
  • moe_a2a_combine (97-138)
  • moe_a2a_combine (310-333)
  • moe_a2a_sanitize_expert_ids (144-153)
  • moe_a2a_sanitize_expert_ids (336-345)
  • moe_a2a_get_metainfo_index_pairs (159-167)
flashinfer/comm/trtllm_moe_alltoall.py (3)
flashinfer/comm/mnnvl.py (5)
  • MnnvlMemory (232-551)
  • MnnvlConfig (224-229)
  • as_torch_strided_tensor (264-273)
  • initialize (276-285)
  • set_comm_from_config (288-293)
flashinfer/comm/mapping.py (2)
  • Mapping (21-475)
  • moe_ep_rank (349-350)
flashinfer/jit/comm.py (1)
  • gen_mnnvl_moe_alltoall_module (83-109)
tests/comm/test_mnnvl_moe_alltoall.py (3)
flashinfer/comm/trtllm_moe_alltoall.py (4)
  • MoeAlltoAll (362-644)
  • dispatch (510-567)
  • get_combine_payload_tensor_in_workspace (611-644)
  • combine (569-609)
flashinfer/comm/mapping.py (1)
  • Mapping (21-475)
flashinfer/comm/mnnvl.py (3)
  • MnnvlMemory (232-551)
  • initialize (276-285)
  • supports_mnnvl (545-551)
🪛 Ruff (0.14.6)
flashinfer/comm/trtllm_moe_alltoall.py

377-377: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


469-469: Avoid specifying long messages outside the exception class

(TRY003)


471-471: Avoid specifying long messages outside the exception class

(TRY003)


632-634: Avoid specifying long messages outside the exception class

(TRY003)

tests/comm/test_mnnvl_moe_alltoall.py

34-34: Avoid specifying long messages outside the exception class

(TRY003)


576-576: Do not catch blind exception: Exception

(BLE001)


677-677: Do not catch blind exception: Exception

(BLE001)


711-711: Unpacked variable expert_id_payload_index is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

🔇 Additional comments (44)
csrc/trtllm_moe_alltoall.cu (8)

1-51: LGTM! Well-structured header and utility functions.

The license header, includes, and utility functions are appropriately organized. The alignOffset function correctly implements cache-line alignment using bitwise operations.


53-88: LGTM! Offset calculation is well-structured.

The calculateOffsets function properly calculates memory offsets for various MoE A2A data structures with appropriate alignment for cache-line boundaries.


90-99: LGTM! Workspace size calculation.

The workspace size calculation correctly accounts for metadata, payload, and combine regions with proper alignment.


101-123: LGTM! Initialize operation with proper validation.

Good input validation for workspace dimensions, rank bounds, and proper error checking for CUDA operations. The synchronization before returning metainfo is appropriate.


125-261: LGTM! Dispatch operation is well-implemented.

The dispatch function has comprehensive input validation, proper payload descriptor handling, and correct workspace pointer arithmetic. Error checking after kernel launch is appropriate.


278-360: LGTM! Combine operation with proper validation.

The combine function correctly validates payload dimensions, workspace pointer alignment, and handles the payloadInWorkspace flag appropriately. Error checking after kernel launch is proper.


362-394: LGTM! Sanitize operation is correctly implemented.

Proper input validation and error checking for the sanitize expert IDs kernel.


396-419: LGTM! Metainfo export and FFI registration.

The metainfo index pairs function and TVM FFI exports are correctly implemented, providing clean Python interoperability.

tests/comm/test_trtllm_moe_alltoall.py (12)

25-29: LGTM! Docstring has been corrected.

The fixture docstring now accurately describes that it sets the torch seed for deterministic tests.


32-60: Good test parameter coverage.

The test parameters cover a good range of configurations (small, medium, large) for both single-GPU and multi-rank scenarios, with various dtypes and payload configurations.


63-72: Good resource-aware skip logic.

The SM count check appropriately skips tests when hardware resources are insufficient, preventing false failures on less capable GPUs.


74-86: LGTM! Payload generation helper.

The make_payload function correctly handles both integer and floating-point dtypes with appropriate random value generation.


89-162: Comprehensive single-GPU test with proper verification.

The test covers dispatch and combine flows with multiple dtypes, validates output via sorting and exact comparison, and tests the workspace-backed combine path.


164-240: LGTM! Multi-rank dispatch helper is well-structured.

The helper properly manages workspaces, initializes per-rank metadata, uses separate CUDA streams for parallel execution, and synchronizes appropriately.


243-259: LGTM! Sanitize helper function.

Simple and correct delegation to the underlying sanitize function for each rank.


262-299: LGTM! Combine helper with parallel execution.

The combine helper correctly uses separate streams per rank and synchronizes before returning results.


302-345: LGTM! Multi-rank test with proper verification.

Good verification logic that filters non-zero tensors and compares sorted outputs against the reference filtered by expert assignment.


347-388: LGTM! Sanitize test with comprehensive verification.

The test properly clones tensors before sanitization to enable before/after comparison and correctly verifies the sanitization logic.


390-431: LGTM! Reference MoE implementation for verification.

The fake_moe function provides a clear reference implementation for verifying the distributed MoE behavior, with proper EP-rank filtering logic.


434-555: Good end-to-end combine test with tolerance handling.

The test covers the full dispatch-process-combine cycle with both in-workspace and external payload paths. The tolerance values for bf16 are reasonable.

tests/comm/test_mnnvl_moe_alltoall.py (11)

27-46: MPI error handling utilities are well-designed.

The MPIExit exception, check_any_rank_failed, and safe_run pattern provide robust MPI coordination for test failures across ranks, ensuring clean error propagation.


49-52: LGTM! Test fixture for deterministic seeding.


55-88: LGTM! Helper functions for expert routing and token generation.

compute_target_rank_id correctly implements contiguous expert partitioning, and generate_token_selected_experts properly generates random expert assignments.


91-119: LGTM! Expert weight creation with reproducible seeding.

Using ep_rank * 1000 + i as a seed ensures reproducibility across runs while differentiating experts per rank.


122-182: LGTM! Comprehensive fake MoE reference implementation.

The function correctly handles both EP-rank and global modes with proper local expert ID conversion.


185-258: LGTM! Payload creation helpers.

Both NV FP4 and BFloat16 payload creators are well-structured with appropriate rank-specific patterns for verification.


261-383: LGTM! Single-rank dispatch worker function.

Comprehensive workspace setup, dispatch execution, and metadata extraction for MPI-based testing.


386-556: LGTM! Thorough dispatch verification.

The verify_dispatch function provides exhaustive validation of shapes, dtypes, counters, routing, and payload content. This is excellent for catching regressions.


572-577: Bare Exception catch is intentional for MNNVL availability check.

The broad exception catch here is acceptable since it's used to detect MNNVL support availability across various failure modes (driver issues, missing hardware, etc.).


673-678: Bare Exception catch is acceptable for capability detection.

Same as above - this is intentional for gracefully skipping tests on systems without MNNVL support.


657-818: LGTM! Full dispatch+combine cycle test.

The test properly verifies the complete MoE A2A workflow with appropriate tolerance for bf16 accumulation order differences. The 99% match threshold is reasonable given the expected numerical variations.

flashinfer/comm/trtllm_moe_alltoall.py (13)

1-19: LGTM! Module header and imports are well-organized.

Clean module docstring and appropriate imports for the MoE A2A functionality.


21-28: LGTM! State dataclass is appropriate.

The _A2AState dataclass cleanly tracks the dispatch/combine lifecycle with appropriate fields.


30-205: LGTM! JIT module initialization with custom op registration.

The lazy module loading with @functools.cache and custom op registration is well-structured. The returned SimpleNamespace provides a clean API surface.


208-246: LGTM! Public wrapper functions are clean delegations.

The top-level moe_a2a_* functions provide clean interfaces to the JIT module, with appropriate docstrings where needed.


249-307: LGTM! Dispatch wrapper with tensor wrapping.

The dispatch function correctly wraps the raw offsets into workspace-backed tensors for each payload.


310-359: LGTM! Combine, sanitize, and workspace size wrappers.

Clean delegation to the underlying JIT module.


379-409: LGTM! Workspace caching with proper key management.

The get_workspace classmethod correctly caches workspaces by configuration tuple, preventing redundant allocations.
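The caching pattern being reviewed is essentially the following (the placeholder bytearray stands in for the real MNNVL workspace allocation, and the key fields are illustrative):

```python
class WorkspaceCacheDemo:
    _cache: dict = {}

    @classmethod
    def get_workspace(cls, ep_rank: int, ep_size: int,
                      max_num_tokens: int, hidden_size: int):
        key = (ep_rank, ep_size, max_num_tokens, hidden_size)
        if key not in cls._cache:
            # Allocate once per unique configuration; placeholder allocation only.
            cls._cache[key] = bytearray(max_num_tokens * hidden_size)
        return cls._cache[key]
```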


411-432: LGTM! Lazy metainfo constant initialization.

The _init_constants method properly strips prefixes for a cleaner Python API.


434-494: LGTM! Constructor with proper validation and MNNVL configuration.

Good input validation for top_k and num_experts, with optional MnnvlConfig support as discussed in past reviews.


510-567: LGTM! Dispatch method with proper state management.

Good state assertions, lifecycle tracking, and optional sanitization flow.
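For context, the lifecycle being asserted here boils down to a small state machine; a hypothetical sketch (not the class's actual code):

```python
class MoeA2ALifecycleDemo:
    def __init__(self):
        self._dispatched = False

    def dispatch(self, *payloads):
        assert not self._dispatched, "combine() must run before dispatching again"
        # ... launch dispatch (and optional sanitization) here ...
        self._dispatched = True

    def combine(self, payload):
        assert self._dispatched, "dispatch() must run before combine()"
        # ... launch combine here ...
        self._dispatched = False  # reset for the next dispatch/combine cycle
```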


569-609: LGTM! Combine method with state reset.

Proper state validation and reset after combine completes, enabling the next dispatch/combine cycle.


611-645: LGTM! Workspace-backed tensor accessor.

The get_combine_payload_tensor_in_workspace method correctly computes slice bounds and validates state.
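The slice-bound computation follows the usual flat-workspace pattern; a hypothetical sketch assuming a flat uint8 workspace, bf16 payload elements, and an even byte offset:

```python
import torch


def combine_payload_view_demo(workspace_u8: torch.Tensor, offset_bytes: int,
                              num_tokens: int, hidden_size: int) -> torch.Tensor:
    # Slice the payload's byte range out of the flat workspace and reinterpret
    # it as a [num_tokens, hidden_size] bf16 tensor (no copy).
    nbytes = num_tokens * hidden_size * 2  # bf16 is 2 bytes per element
    chunk = workspace_u8[offset_bytes : offset_bytes + nbytes]
    return chunk.view(torch.bfloat16).reshape(num_tokens, hidden_size)
```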


647-654: LGTM! Clean __all__ export list.

Explicitly defines the public API surface.

@aleozlx
Copy link
Copy Markdown
Collaborator

aleozlx commented Dec 3, 2025

/bot run

@flashinfer-bot
Copy link
Copy Markdown
Collaborator

GitLab MR !174 has been created, and the CI pipeline #39564968 is currently running. I'll report back once the pipeline job completes.

@djns99 djns99 requested a review from kahyunnam as a code owner December 3, 2025 22:22
@aidando73
Copy link
Copy Markdown
Contributor

Oh - are you guys not using the FIFO queue implementation here? Looks like this implementation is a PUT -> barrier - what was the rationale? Seems like the FIFO queue version was performing pretty well.

@djns99
Copy link
Copy Markdown
Contributor Author

djns99 commented Dec 14, 2025

Oh - are you guys not using the FIFO queue implementation here? Looks like this implementation is a PUT -> barrier - what was the rationale? Seems like the FIFO queue version was performing pretty well.

@aidando73 I believe this approach is faster than FIFO in TRT-LLM for some of the DeepSeek workloads we have tested. In general it is less memory efficient, but because it is one-shot it is better for throughput cases.
@bobboli, can you share more details?

@yzh119 yzh119 merged commit f0355f7 into flashinfer-ai:main Dec 15, 2025
4 checks passed
@aidando73
Copy link
Copy Markdown
Contributor

better for throughput cases.

Oh, so should I be testing with bigger batch sizes? Do you consider batch sizes of 256 or 512 a throughput case?

@aidando73
Copy link
Copy Markdown
Contributor

aidando73 commented Dec 15, 2025

In general it is less memory efficient

For my understanding - when you say the FIFO implementation is more memory efficient - that's because we're copying from shared memory -> fabric -> shared memory? And the shared memory <-> global memory copies are overlapped with compute / nvlink transfers so we kind of get them for free?

@aidando73
Copy link
Copy Markdown
Contributor

aidando73 commented Dec 15, 2025

better for throughput cases.

Tested with batch sizes 1024 and 2048 - the cross-over point for these comms seems to be around 1024.

flashinfer:
|   batch_size |   flashinfer |
|--------------|--------------|
|         1024 |           65 |
|         2048 |           82 |

trt_llm:
|   batch_size |   trt_mnnvl |
|--------------|-------------|
|         1024 |          66 |
|         2048 |          96 |

@djns99
Copy link
Copy Markdown
Contributor Author

djns99 commented Dec 15, 2025

For my understanding - when you say the FIFO implementation is more memory efficient - that's because we're copying from shared memory -> fabric -> shared memory? And the shared memory <-> global memory copies are overlapped with compute / nvlink transfers so we kind of get them for free?

By "memory efficient" here I just mean that we have to allocate space for the max number of tokens on all ranks up front, i.e. the workspace is bigger in this version.
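A rough back-of-the-envelope example of what that up-front allocation implies (numbers purely illustrative, not the kernel's actual workspace formula):

```python
ep_size = 8
max_tokens_per_rank = 8192   # worst case, reserved on every rank up front
hidden_size = 7168
bytes_per_elem = 2           # bf16

# Each rank reserves room to receive the worst-case token count from every peer.
recv_buffer_bytes = ep_size * max_tokens_per_rank * hidden_size * bytes_per_elem
print(f"{recv_buffer_bytes / 2**30:.2f} GiB per rank")  # ~0.88 GiB in this example
```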

@bobboli
Copy link
Copy Markdown
Contributor

bobboli commented Dec 16, 2025

For my understanding - when you say the FIFO implementation is more memory efficient - that's because we're copying from shared memory -> fabric -> shared memory? And the shared memory <-> global memory copies are overlapped with compute / nvlink transfers so we kind of get them for free?

By "memory efficient" here I just mean that we have to allocate space for the max number of tokens on all ranks up front, i.e. the workspace is bigger in this version.

Yes, flashinfer_mnnvl_new may require more memory, but that is not a factor in kernel speed.

Please see #2222 (comment) for more discussion.

yzh119 pushed a commit that referenced this pull request Dec 23, 2025
<!-- .github/pull_request_template.md -->

## 📌 Description

This is a port of NVIDIA/TensorRT-LLM#9822 which
was done by @bobboli

This feature is necessary for SGlang integration because some DP workers
may have 0 tokens. The workaround to use a dummy token is quite messy
and brittle.

## 🔍 Related Issues

Follow up to #2102

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [ ] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

<!-- Optional: anything you'd like reviewers to focus on, concerns, etc.
-->


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved robustness of mixture-of-experts all-to-all communication to
gracefully handle scenarios with zero local tokens, preventing
synchronization failures and ensuring stable operation in edge cases.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
murphymatt pushed a commit to fw-ai/flashinfer that referenced this pull request Jan 4, 2026
@coderabbitai coderabbitai Bot mentioned this pull request Jan 5, 2026
5 tasks
@coderabbitai coderabbitai Bot mentioned this pull request Jan 15, 2026
5 tasks
yzh119 pushed a commit that referenced this pull request Jan 18, 2026
<!-- .github/pull_request_template.md -->

## 📌 Description

<!-- What does this PR do? Briefly describe the changes and why they’re
needed. -->

This PR adds benchmark script for the MoE A2A kernels added in #2102

Supports bf16, nvfp4, and per-tensor fp8.

A2A dispatch communicates in quantized format (fp8, nvfp4). A2A combine communicates in the original format (bfloat16/float16).

```
# Example usage
mpirun -np 2 python benchmarks/flashinfer_benchmark.py --routine moe_a2a_dispatch_combine --num_tokens 1024 --hidden_size 7168 --num_experts 256 --top_k 8 -v --quant_dtype nvfp4

# Check with NVTX + nsys
nsys profile -f true -o moe_a2a --trace=cuda,nvtx --trace-fork-before-exec=true --cuda-event-trace=true --cuda-graph-trace=node mpirun -np 2 python benchmarks/flashinfer_benchmark.py --routine moe_a2a_dispatch_combine --num_tokens 1024 --hidden_size 7168 --num_experts 256 --top_k 8 -v --nvtx --quant_dtype nvfp4

# DS-R1 DEP8 example output (number is for demo purpose only)
$ mpirun -np 8 python benchmarks/flashinfer_benchmark.py --routine moe_a2a_dispatch_combine --num_tokens 8192 --hidden_size 7168 --num_experts 256 --top_k 8 --validate -vv --quant_dtype nvfp4
[INFO] Actual inter-rank traffic: dispatch=1170.583 MiB, combine=4149.633 MiB
[PERF] a2a_total      :: median time 1.073 ms; std 0.001 ms; achieved tflops nan TFLOPs/sec; achieved tb_per_sec 5.198 TB/sec
[INFO] The reported achieved tb_per_sec is the aggregate bandwidth of all participating ranks.
```



## 🔍 Related Issues

<!-- Link any related issues here -->

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

<!-- Optional: anything you'd like reviewers to focus on, concerns, etc.
-->


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added MoE all-to-all communication benchmark with FP8 and NVFP4
options, per‑phase timing, optional NVTX profiling, and bandwidth
metrics.
* Added new output schema for MoE communication and included it in
reported benchmark columns; new benchmark API group added.

* **Chores**
* Integrated MoE communication routine into benchmark configuration,
argument parsing, and dispatch.

* **Tests**
* Enhanced test helper with type annotations and a descriptive
docstring.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026
murphymatt pushed a commit to fw-ai/flashinfer that referenced this pull request Mar 31, 2026
