Fix unit test test_equivalent_template_code #160432
shengfukevin wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160432
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure. As of commit a18efd6 with merge base 01584d2, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D80101711
clee2000 left a comment:
Current failure seems unrelated
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; the first few: periodic / linux-jammy-cuda12.8-py3-gcc11-slow-gradcheck / test (default, 3, 8, linux.g5.4xlarge.nvidia.gpu, module:slowgradcheck). Details for Dev Infra team: raised by workflow job.
Force-pushed from dfc5b50 to 6e1e24f.
Summary: Fix unit test test_equivalent_template_code. PR #159920 treats ReinterpretView as a not-yet-realized node when searching FX origin nodes for a fused Triton kernel. In test_equivalent_template_code there is a transpose node (a ReinterpretView) before the matmul; it was not part of the FX graph segment before PR #159920. Because FX origin nodes determine the name of the fused Triton kernel, test_equivalent_template_code began failing with PR #159920: it checks the generated code against a hard-coded Triton kernel name. The fix is to update the expected Triton kernel name in the unit test. Test Plan: buck2 run mode/opt caffe2/test/inductor:benchmark_fusion -- caffe2.test.inductor.test_benchmark_fusion.BenchmarkMultiTemplateFusionCudaTest Rollback Plan: Reviewed By: clee2000 Differential Revision: D80101711
Force-pushed from 6e1e24f to 4b5a670.
Force-pushed from 4b5a670 to a18efd6.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: s390x-periodic / linux-manylinux-2_28-py3-cpu-s390x / test (default, 5, 10, linux.s390x). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: #160432. Approved by: https://github.com/clee2000
Summary:
Fix unit test test_equivalent_template_code.
PR #159920 treats ReinterpretView as a not-yet-realized node when searching FX origin nodes for a fused Triton kernel. In test_equivalent_template_code there is a transpose node (a ReinterpretView) before the matmul; it was not part of the FX graph segment before PR #159920. Because FX origin nodes determine the name of the fused Triton kernel, test_equivalent_template_code began failing with PR #159920: it checks the generated code against a hard-coded Triton kernel name. The fix is to update the expected Triton kernel name in the unit test.
Test Plan:
buck2 run mode/opt caffe2/test/inductor:benchmark_fusion -- caffe2.test.inductor.test_benchmark_fusion.BenchmarkMultiTemplateFusionCudaTest
Rollback Plan:
Differential Revision: D80101711
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
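For readers unfamiliar with why the test needed a new kernel name: the effect described above can be sketched with a toy model of Inductor-style kernel naming. This is an illustrative sketch only, not Inductor's actual implementation; `fused_kernel_name` and the op names are hypothetical stand-ins for how FX origin nodes feed into the fused Triton kernel's name.

```python
# Toy model (hypothetical, for illustration): Inductor derives a fused
# Triton kernel's name from the ops of its FX origin nodes. After PR
# #159920, a ReinterpretView (such as a transpose) also counts as an
# origin node, so it appears in the generated kernel name.

def fused_kernel_name(origin_ops, prefix="triton_tem_fused"):
    """Join origin op names into a kernel-style name (illustrative only)."""
    return "_".join([prefix] + list(origin_ops))

# Before PR #159920: the transpose (a ReinterpretView) was not part of
# the FX graph segment, so only the matmul contributed to the name.
old_name = fused_kernel_name(["mm"])

# After PR #159920: the transpose origin is included, so the hard-coded
# name asserted in test_equivalent_template_code had to be updated.
new_name = fused_kernel_name(["transpose", "mm"])

print(old_name)  # triton_tem_fused_mm
print(new_name)  # triton_tem_fused_transpose_mm
```

Under this toy model, the unit-test fix amounts to replacing an assertion on `old_name` with one on `new_name`; the real test asserts on the actual kernel names emitted by Inductor's code generation.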