
[PG] Support full reduction ops (Product/Min/Max) and fix reduce kernel indexing bug#1440

Merged
UNIDY2002 merged 4 commits into kvcache-ai:main from hhr2449:main on Jan 27, 2026

Conversation


@hhr2449 (Contributor) commented on Jan 26, 2026

Description

Part of #1225

This PR adds support for the PRODUCT, MIN, and MAX reduction operations in the CUDA backend and fixes an indexing bug in the reduceKernel.

Key Changes:

  1. Support for New Reduce Ops:

    • Implemented PRODUCT, MIN, and MAX logic in reduceKernel
    • Updated launchReduceKernel assertions to allow these operations.
  2. Bugfix in reduceKernel:

    • Fixed an indexing issue when initializing the accumulator (acc) in the CUDA kernel (see the illustrative kernel sketch after this list).
    • Before: acc = src[elem_idx]
    • After: acc = src[rank * numElements + elem_idx]
  3. Unit Tests:

    • Updated mooncake-wheel/tests/test_mooncake_backend.py to include test cases for all_reduce_product, all_reduce_min, and all_reduce_max.
    • Ensured the tests run on a CUDA device so that the kernel path is actually exercised.
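
For context, here is a minimal, hypothetical sketch of the kernel logic after this PR. The names (reduceKernelSketch, ReduceOp, the parameter list) are illustrative assumptions and are not copied from the Mooncake source; the sketch only shows the corrected accumulator initialization and the new PRODUCT/MIN/MAX branches.

#include <cstddef>

// Illustrative sketch only -- not the actual Mooncake reduceKernel.
enum class ReduceOp { SUM, PRODUCT, MIN, MAX };

template <typename T>
__global__ void reduceKernelSketch(const T* src, T* dst, int rank, int worldSize,
                                   size_t numElements, ReduceOp op) {
    size_t elem_idx = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (elem_idx >= numElements) return;

    // Fix from this PR: initialize from this rank's slice of the gathered buffer
    // (src is laid out as worldSize rows of numElements each), not from rank 0's slice.
    T acc = src[(size_t)rank * numElements + elem_idx];

    // Fold in every other rank's contribution with the requested op.
    for (int r = 0; r < worldSize; ++r) {
        if (r == rank) continue;
        T v = src[(size_t)r * numElements + elem_idx];
        switch (op) {
            case ReduceOp::SUM:     acc = acc + v;            break;
            case ReduceOp::PRODUCT: acc = acc * v;            break;  // new in this PR
            case ReduceOp::MIN:     acc = v < acc ? v : acc;  break;  // new in this PR
            case ReduceOp::MAX:     acc = v > acc ? v : acc;  break;  // new in this PR
        }
    }
    dst[elem_idx] = acc;
}

On the host side, the PR also relaxes the TORCH_CHECK in launchReduceKernel so that PRODUCT, MIN, and MAX are no longer rejected before the kernel is launched.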

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

I ran the updated unit tests in a GPU environment to verify the correctness of the new reduction operations and the kernel logic.

  • Test Command: python3 -m unittest mooncake-wheel.tests.test_mooncake_backend
  • Results: Passed 5 tests (all_reduce_sum, all_reduce_product, all_reduce_min, all_reduce_max, all_gather).

Checklist

  • I have performed a self-review of my own code.
  • I have formatted my own code using ./scripts/code_format.sh before submitting.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

@gemini-code-assist

Summary of Changes

Hello @hhr2449, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the CUDA backend's distributed communication capabilities by introducing support for product, minimum, and maximum reduction operations. Concurrently, it addresses a crucial indexing error in the reduceKernel that previously led to incorrect accumulator initialization. The changes are thoroughly validated with new unit tests, ensuring robust and accurate collective operations.

Highlights

  • New Reduction Operations: Implemented support for PRODUCT, MIN, and MAX reduction operations within the CUDA backend's reduceKernel and updated the launchReduceKernel assertions to recognize these new operations.
  • Critical Bug Fix: Resolved an indexing bug in the reduceKernel where the accumulator (acc) was incorrectly initialized. The fix changes acc = src[elem_idx] to acc = src[rank * numElements + elem_idx] to correctly access elements across different ranks.
  • Expanded Unit Tests: Added new unit tests in test_mooncake_backend.py for all_reduce_product, all_reduce_min, and all_reduce_max to ensure the correctness of the newly implemented reduction operations and the bug fix, specifically targeting CUDA device execution.



@gemini-code-assist (Bot) left a comment

Code Review

This pull request successfully adds support for PRODUCT, MIN, and MAX reduction operations to the CUDA backend and fixes a critical indexing bug in the reduceKernel. The changes in the CUDA kernel are correct and the TORCH_CHECK has been updated accordingly. The accompanying Python tests are well-written and cover the new functionality. I've included a few minor suggestions to improve code style and adhere to PEP 8 guidelines in the test file.

Comment on lines +25 to +26
tensor = torch.tensor([2], dtype = torch.int32, device = "cuda")
dist.all_reduce(tensor, op = dist.ReduceOp.PRODUCT)
Severity: medium

There are extra spaces around the = for keyword arguments. According to the PEP 8 style guide, there should be no spaces in this context. This issue is also present on lines 30-31 and 35-36.

Suggested change:
- tensor = torch.tensor([2], dtype = torch.int32, device = "cuda")
- dist.all_reduce(tensor, op = dist.ReduceOp.PRODUCT)
+ tensor = torch.tensor([2], dtype=torch.int32, device="cuda")
+ dist.all_reduce(tensor, op=dist.ReduceOp.PRODUCT)
References
  1. PEP 8 recommends not using spaces around the = sign for keyword arguments.

@codecov-commenter

⚠️ Please install the Codecov app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.



@UNIDY2002 (Collaborator) left a comment

Nice job!

@UNIDY2002 merged commit 0b8e1d6 into kvcache-ai:main on Jan 27, 2026
16 checks passed
XucSh pushed a commit to XucSh/Mooncake that referenced this pull request Jan 27, 2026
JasonZhang517 pushed a commit to JasonZhang517/Mooncake that referenced this pull request Feb 9, 2026
