Migrate pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11 -> pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc13 #157748
atalman wants to merge 6 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157748
Note: Links to docs will display an error until the docs builds have been completed.
❌ 13 New Failures, 14 Unrelated Failures
As of commit 025f869 with merge base 288bf54:
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Do you plan to migrate the aarch64 CI image in a separate PR?

Hi @robert-hardwick, yes. Will try to migrate CI step by step; this way it's easier to deal with errors we see in CI.
@pytorchmergebot rebase -b main

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here.

Successfully rebased 3fe7619 to 4f5f838.
@pytorchmergebot rebase -b main

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here.

Rebase failed due to a command error. Raised by https://github.com/pytorch/pytorch/actions/runs/16332182309
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as `Stale`.
Binary builds were migrated to gcc13 a while ago: #152825
This migrates the pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11 jobs to pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc13, making sure we have coverage for GCC 13 in CI (a quick way to find remaining references is sketched below).
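Since the migration is being done step by step, a minimal sketch for tracking it: list workflow files that still reference the old image name. This helper is hypothetical and not part of this PR; it assumes it is run from the repository root and that the image name appears literally in the `.github/workflows` YAML files.

```python
# Hypothetical helper (not part of this PR): report workflow files that still
# reference the old gcc11 image name, to track the step-by-step migration.
from pathlib import Path

OLD_IMAGE = "pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11"

for path in sorted(Path(".github/workflows").glob("*.yml")):
    if OLD_IMAGE in path.read_text():
        print(f"{path} still references {OLD_IMAGE}")
```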
Reference issue (users seeing a regression with gcc13): #157626
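When triaging a report like #157626, it helps to confirm which compiler actually produced the installed torch build. `torch.__config__.show()` is a real PyTorch API that returns the build configuration as a string; the `"GCC"` substring filter below is illustrative and may need adjusting for other toolchains.

```python
# Print the compiler recorded in the installed torch build, useful when a
# regression is suspected to come from the gcc11 -> gcc13 toolchain change.
import torch

print(torch.__version__)
for line in torch.__config__.show().splitlines():
    if "GCC" in line:  # illustrative filter; build summaries vary by platform
        print(line)
```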