[inductor] Remove more skip_if_cpp_wrapper from test_torchinductor.py #177306
desertfire wants to merge 1 commit into gh/desertfire/670/base from
Conversation
Remove most cpp_wrapper skips from test_torchinductor.py since they can pass now. For some tests, change their skips to be conditioned on autotune_at_compile_time instead of cpp_wrapper. Fix `run_and_get_kernels` to extract kernel code using `R"TRITON(...)"` pattern for lazy compile cpp_wrapper mode, since kernels are embedded in C++ raw strings rather than Python triple-quoted strings. The remaining skips require more feature parity work to match cpp_wrapper with python_wrapper. Authored with Claude. [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177306
Note: Links to docs will display an error until the docs builds have been completed. ⏳ 1 Pending, 3 Unrelated Failures as of commit d83a473 with merge base b180c2f.
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
    def test_deterministic_codegen(self):
        if "cpu" in str(self.device) and config.is_fbcode():
            raise unittest.SkipTest("cpp packaging is wacky in fbcode")
        if "cpu" in str(self.device) and config.cpp_wrapper:
nit: can add to above conditional
Will do in the next round of fbcode clean up.
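The merge the reviewer suggests could look like the following sketch. The `config` object here is a stand-in for `torch._inductor.config` (so the snippet runs standalone), and `maybe_skip_deterministic_codegen` is a hypothetical helper, not code from the PR:

```python
import unittest


class _FakeConfig:
    """Stand-in for torch._inductor.config; the real object exposes
    is_fbcode() and a cpp_wrapper flag with the same shapes."""

    cpp_wrapper = True

    @staticmethod
    def is_fbcode():
        return False


config = _FakeConfig()


def maybe_skip_deterministic_codegen(device: str) -> None:
    # One merged conditional instead of two separate `if` statements:
    # skip on CPU whenever either fbcode packaging or cpp_wrapper applies.
    if "cpu" in device and (config.is_fbcode() or config.cpp_wrapper):
        raise unittest.SkipTest("cpp packaging is wacky in fbcode / cpp_wrapper")
```

Folding the second check into the first keeps the skip reasons in one place, which is presumably the intent of the nit.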
@pytorchbot merge
Merge started: Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Add `aten._grouped_mm.default` to the AOTI fallback ops list so that a c-shim is generated, enabling cpp_wrapper mode for grouped_mm. Authored with Claude. Pull Request resolved: #177307 Approved by: https://github.com/yushangdi ghstack dependencies: #175548, #177306
…pytorch#177306) Remove most cpp_wrapper skips from test_torchinductor.py since they can pass now. For some tests, change their skips to be conditioned on autotune_at_compile_time instead of cpp_wrapper. Fix `run_and_get_kernels` to extract kernel code using `R"TRITON(...)"` pattern for lazy compile cpp_wrapper mode, since kernels are embedded in C++ raw strings rather than Python triple-quoted strings. The remaining skips require more feature parity work to match cpp_wrapper with python_wrapper. Authored with Claude. Pull Request resolved: pytorch#177306 Approved by: https://github.com/PaulZhang12 ghstack dependencies: pytorch#175548
Stack from ghstack (oldest at bottom):
Remove most cpp_wrapper skips from test_torchinductor.py since they can
pass now. For some tests, change their skips to be conditioned on
autotune_at_compile_time instead of cpp_wrapper.
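A minimal sketch of what conditioning a skip on compile-time autotuning (rather than on cpp_wrapper) could look like. The `cfg` object is a stand-in for the real inductor config, and `skip_if_autotune_at_compile_time` is an illustrative helper, not the PR's actual decorator:

```python
import unittest
from types import SimpleNamespace

# Stand-in for torch._inductor.config; the real config exposes similar flags.
cfg = SimpleNamespace(
    cpp_wrapper=True,
    triton=SimpleNamespace(autotune_at_compile_time=False),
)


def skip_if_autotune_at_compile_time(reason: str):
    # Skip only when compile-time autotuning is enabled, regardless of
    # whether cpp_wrapper is on -- the narrower condition this PR moves to.
    return unittest.skipIf(cfg.triton.autotune_at_compile_time, reason)


class ExampleTests(unittest.TestCase):
    @skip_if_autotune_at_compile_time("kernel source not inspectable yet")
    def test_runs_under_cpp_wrapper(self):
        # Runs even though cpp_wrapper is enabled, because the skip is
        # keyed on autotune_at_compile_time alone.
        self.assertTrue(cfg.cpp_wrapper)
```

The design point is that cpp_wrapper itself no longer blocks these tests; only the orthogonal autotune-at-compile-time mode does.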
Fix `run_and_get_kernels` to extract kernel code using the `R"TRITON(...)"` pattern in lazy compile cpp_wrapper mode, since kernels are embedded in C++ raw strings rather than Python triple-quoted strings.
The remaining skips require more feature parity work to match cpp_wrapper with
python_wrapper.
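The raw-string extraction described above can be sketched with a regex. The `extract_kernels` helper and the sample wrapper snippets are illustrative, not the actual `run_and_get_kernels` implementation:

```python
import re

# In cpp_wrapper mode, generated C++ embeds each Triton kernel as a raw
# string literal, e.g. R"TRITON( ...kernel source... )TRITON".  In the
# Python wrapper path the same source sits inside triple-quoted strings.
_CPP_RAW = re.compile(r'R"TRITON\((.*?)\)TRITON"', re.DOTALL)
_PY_TRIPLE = re.compile(r"'''(.*?)'''", re.DOTALL)


def extract_kernels(wrapper_code: str) -> list:
    # Try the C++ raw-string pattern first; fall back to Python
    # triple-quoted strings for the python_wrapper path.
    kernels = _CPP_RAW.findall(wrapper_code)
    if not kernels:
        kernels = _PY_TRIPLE.findall(wrapper_code)
    return [k.strip() for k in kernels]


cpp_src = 'auto k = R"TRITON(\n@triton.jit\ndef add_kernel(...): pass\n)TRITON";'
py_src = "kernel_src = '''\n@triton.jit\ndef mul_kernel(...): pass\n'''"
```

The `re.DOTALL` flag lets `.*?` span the multi-line kernel body, and the non-greedy match stops at the closing `)TRITON"` delimiter, which cannot appear inside valid Triton source.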
Authored with Claude.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @jataylo