
[inductor] Remove more skip_if_cpp_wrapper from test_torchinductor.py#177306

Closed
desertfire wants to merge 1 commit into gh/desertfire/670/base from gh/desertfire/670/head

Conversation

@desertfire
Contributor

@desertfire desertfire commented Mar 12, 2026

Stack from ghstack (oldest at bottom):

Remove most cpp_wrapper skips from test_torchinductor.py since they can
pass now. For some tests, change their skips to be conditioned on
autotune_at_compile_time instead of cpp_wrapper.

Fix `run_and_get_kernels` to extract kernel code using the `R"TRITON(...)"` pattern
for lazy compile cpp_wrapper mode, since kernels are embedded in C++ raw strings
rather than Python triple-quoted strings.

The remaining skips require more feature parity work to match cpp_wrapper with
python_wrapper.

Authored with Claude.
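For illustration, the `run_and_get_kernels` fix described above can be sketched as follows. The function name, regex, and delimiter handling here are assumptions for the sketch, not the actual PyTorch implementation; only the `R"TRITON(...)"` delimiter comes from this PR.

```python
import re

# In cpp_wrapper mode the generated wrapper is C++, so Triton kernel source is
# embedded in C++ raw string literals, R"TRITON(...)TRITON", instead of Python
# triple-quoted strings. Extraction has to match the right delimiter form.
TRITON_RAW_STRING = re.compile(r'R"TRITON\((.*?)\)TRITON"', re.DOTALL)
PY_TRIPLE_QUOTED = re.compile(r"'''(.*?)'''", re.DOTALL)

def extract_kernels(wrapper_code: str, cpp_wrapper: bool) -> list[str]:
    """Pull embedded Triton kernel bodies out of generated wrapper code."""
    pattern = TRITON_RAW_STRING if cpp_wrapper else PY_TRIPLE_QUOTED
    return pattern.findall(wrapper_code)

cpp_src = 'static const char* k0 = R"TRITON(@triton.jit\ndef k0(x): ...)TRITON";'
print(extract_kernels(cpp_src, cpp_wrapper=True))
```

C++ raw string literals need no escaping inside the body, which is why a custom delimiter like `TRITON` is used: it guarantees the kernel source (which may itself contain quotes and parentheses) cannot terminate the literal early.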

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @jataylo

@pytorch-bot

pytorch-bot Bot commented Mar 12, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177306

Note: Links to docs will display an error until the docs builds have been completed.

⏳ 1 Pending, 3 Unrelated Failures

As of commit d83a473 with merge base b180c2f:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot

pytorch-bot Bot commented Mar 12, 2026

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

def test_deterministic_codegen(self):
    if "cpu" in str(self.device) and config.is_fbcode():
        raise unittest.SkipTest("cpp packaging is wacky in fbcode")
    if "cpu" in str(self.device) and config.cpp_wrapper:
Contributor


nit: can add to above conditional

Contributor Author


Will do in the next round of fbcode clean up.
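The reviewer's nit (folding the `cpp_wrapper` check into the preceding fbcode conditional) might look like the following sketch. The names `maybe_skip` and `_FakeConfig` are stand-ins invented for this example, and the real test raises a distinct skip message per branch, so combining the conditionals is only safe if a shared message is acceptable.

```python
import unittest

class _FakeConfig:
    """Stand-in for torch._inductor.config in this sketch."""
    cpp_wrapper = True

    @staticmethod
    def is_fbcode() -> bool:
        return False

def maybe_skip(device: str, config=_FakeConfig) -> None:
    # One combined "cpu" conditional instead of two back-to-back checks.
    if "cpu" in device and (config.is_fbcode() or config.cpp_wrapper):
        raise unittest.SkipTest("cpp wrapper / fbcode packaging unsupported on cpu")

try:
    maybe_skip("cpu")
except unittest.SkipTest as e:
    print("skipped:", e)
```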

@desertfire
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk Trigger trunk jobs on your pull request label Mar 13, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Mar 14, 2026
Add `aten._grouped_mm.default` to the AOTI fallback ops list so that
a c-shim is generated, enabling cpp_wrapper mode for grouped_mm.

Authored with Claude.

Pull Request resolved: #177307
Approved by: https://github.com/yushangdi
ghstack dependencies: #175548, #177306
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request Mar 30, 2026
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request Mar 30, 2026
AaronWang04 pushed a commit to AaronWang04/pytorch that referenced this pull request Mar 31, 2026
AaronWang04 pushed a commit to AaronWang04/pytorch that referenced this pull request Mar 31, 2026
@github-actions github-actions Bot deleted the gh/desertfire/670/head branch April 13, 2026 02:25

Labels

ciflow/inductor, ciflow/torchtitan, ciflow/trunk, Merged, module: inductor, topic: not user facing


3 participants