
Fix compilation error: add missing <optional> header#2772

Merged
aleozlx merged 3 commits into flashinfer-ai:main from he-yufeng:fix/missing-optional-header
Apr 13, 2026

Conversation

@he-yufeng
Contributor

@he-yufeng he-yufeng commented Mar 12, 2026

trtllm_allreduce_fusion.cuh and trtllm_moe_allreduce_fusion.cuh use std::optional and std::nullopt but include only <cuda/std/optional>, which pulls in the standard <optional> header conditionally, based on __cpp_lib_optional. When that macro is not defined (this varies by CUDA toolkit version and host compiler), std::optional is undefined and compilation fails.

Add the standard <optional> include directly.

Fixes #2767

Summary by CodeRabbit

  • Chores
    • Small internal compatibility update: added standard optional header usage alongside existing platform-specific optional support in a couple of communication modules. Improves build robustness and maintainability with no changes to public APIs or runtime behavior. Low-risk change aimed at smoothing compilation across environments.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a compilation issue by ensuring the necessary <optional> header is always included in specific CUDA utility files. This prevents build failures that could occur when std::optional is not implicitly brought in by <cuda/std/optional> under certain compiler configurations, thereby improving the robustness and portability of the codebase.

Highlights

  • Compilation Fix: Addressed a compilation error in trtllm_allreduce_fusion.cuh and trtllm_moe_allreduce_fusion.cuh where std::optional was undefined due to conditional inclusion of the standard <optional> header.
  • Header Inclusion: Explicitly included the standard <optional> header in the affected files to ensure std::optional and std::nullopt are always properly defined, regardless of CUDA toolkit version or compiler settings.


Changelog
  • include/flashinfer/comm/trtllm_allreduce_fusion.cuh
    • Added direct inclusion of <optional> to resolve std::optional definition issues.
  • include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh
    • Added direct inclusion of <optional> to resolve std::optional definition issues.
Activity
  • No human activity has been recorded on this pull request yet.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 12, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a7fcba1f-f0f9-4d2f-8857-46d74a9b307b

📥 Commits

Reviewing files that changed from the base of the PR and between 19a5fa9 and 58008c2.

📒 Files selected for processing (1)
  • include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh
✅ Files skipped from review due to trivial changes (1)
  • include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh

📝 Walkthrough

Walkthrough

This PR adds the C++ standard header <optional> to two CUDA header files in include/flashinfer/comm/ (trtllm_allreduce_fusion.cuh and trtllm_moe_allreduce_fusion.cuh) by inserting #include <optional> alongside the existing #include <cuda/std/optional>; no other code changes.

Changes

Cohort / File(s) Summary
Header Include Fixes
include/flashinfer/comm/trtllm_allreduce_fusion.cuh, include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh
Added #include <optional> in each file to ensure the C++ <optional> header is available when cuda/std/optional does not itself include it. No other edits.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 I hopped through headers, found a tiny spot,
A missing optional that mattered a lot.
One gentle include, stitched up the seam,
Now compiles hum softly — a coder's dream. 🥕✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately and concisely summarizes the main change: adding a missing header to fix compilation errors.
Description check ✅ Passed The PR description clearly explains the issue, root cause, and solution, though the author did not fill out the structured template sections.
Linked Issues check ✅ Passed The changes directly address issue #2767 by adding the standard header to both affected files, resolving the compilation error.
Out of Scope Changes check ✅ Passed All changes are directly related to fixing the compilation error described in issue #2767; no out-of-scope modifications are present.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request correctly fixes a compilation error by adding the missing <optional> header in two files. My review includes a minor suggestion to improve the organization of the include statements for better readability and adherence to common C++ style guidelines.

Comment on lines 10 to 14
#include <optional>

#include <cuda/std/optional>
#include <tuple>
#include <type_traits>
Contributor


Severity: medium

For better readability and to follow common C++ style guides, it's good practice to group standard library headers together and sort them alphabetically. This makes it easier to see which standard headers are included.

#include <optional>
#include <tuple>
#include <type_traits>

#include <cuda/std/optional>

Comment on lines 10 to 14
#include <optional>

#include <cuda/std/optional>
#include <tuple>
#include <type_traits>
Contributor


Severity: medium

To improve code organization and follow standard C++ conventions, it's recommended to group standard library headers together. This change also sorts them alphabetically for better readability.

#include <optional>
#include <tuple>
#include <type_traits>

#include <cuda/std/optional>

@aleozlx aleozlx enabled auto-merge (squash) March 19, 2026 21:06
Move `#include <optional>` after `#include <cuda/std/optional>`
to satisfy the include-order linter, and remove the stray blank
line between them.
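Based on the surrounding includes shown in the review comments, the resulting include block after this reorder would look roughly like the following (a sketch inferred from the quoted lines, not copied from the merged file):

```cpp
// Sketch of the final include order in the two .cuh files after the
// linter-driven reorder: the CUDA header first, then the standard
// <optional> directly below it, with no blank line between them.
#include <cuda/std/optional>
#include <optional>  // added by this PR: guarantees std::optional/std::nullopt
#include <tuple>
#include <type_traits>
```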
auto-merge was automatically disabled March 21, 2026 01:03

Head branch was pushed to by a user without write access

@he-yufeng
Contributor Author

@yzh119 this one's approved and ready — just a missing <optional> header. Mind merging?

@aleozlx aleozlx added the run-ci label Apr 13, 2026
@aleozlx
Collaborator

aleozlx commented Apr 13, 2026

/bot run

@aleozlx aleozlx enabled auto-merge (squash) April 13, 2026 03:58
@flashinfer-bot
Collaborator

GitLab MR !534 has been created, and the CI pipeline #48369395 is currently running. I'll report back once the pipeline job completes.

@aleozlx aleozlx merged commit 5678471 into flashinfer-ai:main Apr 13, 2026
32 of 34 checks passed
aleozlx added a commit to aleozlx/flashinfer that referenced this pull request Apr 14, 2026
aleozlx added a commit that referenced this pull request Apr 14, 2026
<!-- .github/pull_request_template.md -->

## 📌 Description

<!-- What does this PR do? Briefly describe the changes and why they’re
needed. -->

## 🔍 Related Issues

#2772

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [ ] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

<!-- Optional: anything you'd like reviewers to focus on, concerns, etc.
-->


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Refactor**
* Updated internal CUDA device code for improved compatibility and
consistency in quantization and memory computation kernels used in
AllReduce fusion operations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
aleozlx added a commit that referenced this pull request Apr 15, 2026
(same commit message as above)


Development

Successfully merging this pull request may close these issues.

compilation error due to lack of cpp header file <optional> in trtllm_allreduce*.cuh

3 participants