[ROCm][CI] fix test_max_autotune.py:test_max_autotune_exhaustive()#176162
AmdSampsa wants to merge 1 commit into pytorch:main
Conversation
test_max_autotune.py:test_max_autotune_exhaustive() was using CUDA template config heuristics, but it needed to use ROCm template heuristics instead
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/176162
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Cancelled Job, 2 Unrelated Failures as of commit f0d16ed with merge base 0569e4a
CANCELLED JOB - The following job was cancelled. Please retry:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
The remaining failing CI test:
FAILED CONSISTENTLY: test/dynamo/test_structured_trace.py::StructuredTraceTest::test_ddp_graphs
Stopping at first consistent failure
The following tests failed consistently: ['test/dynamo/test_structured_trace.py::StructuredTraceTest::test_ddp_graphs']

This seems to have nothing to do with our (atomic) unit test. When I run it manually on my test MI350, that dynamo test passes OK.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; the first few are: Limited CI on H100 / linux-jammy-cuda12_8-py3_10-gcc11-sm90-FA3-ABI-stable-test / test. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "all remaining failures are unrelated"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[ROCm][CI] fix test_max_autotune.py:test_max_autotune_exhaustive() (pytorch#176162)
test_max_autotune.py:test_max_autotune_exhaustive() was using CUDA template config heuristics, but it needed to use ROCm template heuristics instead.
Pull Request resolved: pytorch#176162
Approved by: https://github.com/jeffdaily
test_max_autotune.py:test_max_autotune_exhaustive() was using CUDA template config heuristics, but it needed to use ROCm template heuristics instead.
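A minimal sketch of the kind of dispatch this fix requires. The class and function names below are hypothetical, not the actual `torch._inductor` API; the point is only that on a ROCm build (where `torch.version.hip` is set) the test must select the ROCm template config heuristics rather than the CUDA ones:

```python
# Hypothetical placeholders for the two heuristic sets; in PyTorch the
# real heuristics live in the inductor template-config machinery.
class CUDAConfigHeuristic:
    backend = "cuda"


class ROCmConfigHeuristic:
    backend = "rocm"


def get_config_heuristics(is_hip: bool):
    """Pick heuristics by GPU backend (sketch, not the real dispatch).

    In a real PyTorch build, ``is_hip`` would come from checking
    ``torch.version.hip is not None``.
    """
    return ROCmConfigHeuristic() if is_hip else CUDAConfigHeuristic()


print(get_config_heuristics(True).backend)   # ROCm build
print(get_config_heuristics(False).backend)  # CUDA build
```

The bug was effectively the `is_hip=False` branch being taken unconditionally in the test, so on MI-series hardware it exercised CUDA-shaped configs.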
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @jataylo @hongxiayang @naromero77amd @pragupta @jerrymannil @xinyazhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben