Use gcc13 in Manylinux 2.28 images#152825
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152825
Note: Links to docs will display an error until the docs builds have been completed.

❌ 9 New Failures, 5 Pending, 1 Unrelated Failure
As of commit 6778233 with merge base 2f09e79:
NEW FAILURES - The following jobs have failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot rebase -b main

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here.

Successfully rebased 0b4d42b to 8e180a5.
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=13 --build-arg NINJA_VERSION=1.12.1"
MANY_LINUX_VERSION="2_28_aarch64"
;;
manylinuxcxx11-abi-builder:cpu-cxx11-abi)
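The build-arg wiring above can be sketched as follows. This is an illustrative assumption of how the variables feed into a `docker build` invocation; the image tag and command shape are hypothetical, not copied from the PR's actual build scripts.

```shell
# Hypothetical sketch (image tag and command shape are assumptions):
# how DEVTOOLSET_VERSION=13 would flow into the docker build command
# for the manylinux 2.28 aarch64 image.
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=13 --build-arg NINJA_VERSION=1.12.1"
MANY_LINUX_VERSION="2_28_aarch64"

# Print the command instead of running it, so the sketch is side-effect free.
echo "docker build${DOCKER_GPU_BUILD_ARG} -t manylinux${MANY_LINUX_VERSION}-builder ."
```

With `DEVTOOLSET_VERSION=13`, the Dockerfile would install the gcc-toolset-13 devtoolset, which is what moves these images from gcc 11 to gcc 13.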
Do we want to do anything about this build? It looks like it's still using gcc 9.
Looks like currently it is using gcc11? https://github.com/pytorch/pytorch/blob/main/.ci/docker/manywheel/Dockerfile_2_28_aarch64#L4
We may need a PR to test the docker image before merging 🤔
malfet left a comment:
Sure, though I thought the manylinux standard mandates a minimum compiler version, doesn't it?
Hi @malfet, as far as I know it's not mandated; the glibc version is. I believe you commented on the issue earlier: #114232 (comment)
@pytorchmergebot merge -f "all required tests and lint are green"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…el for GCC 12 and above (#158117)

This PR disables the `strict-aliasing` GCC C++ optimization flag on all AArch64 CPUs for GCC versions 12 and above. Pull Request #152825 upgraded the gcc version from 11 to 13 in manywheel, which caused several segmentation faults in unit tests (not visible in CI workflows because the jammy gcc version has not been updated yet). We identified that the problem also exists in GCC 12, hence the `__GNUC__ >= 12` check.

Fixes #157626

Fixes these test failures when PyTorch is built with GCC 12 and above:

```
test_ops.py::TestCommonCPU::test_noncontiguous_samples_grid_sampler_2d_cpu_float32 Fatal Python error: Segmentation fault
test_ops.py::TestCommonCPU::test_dtypes_grid_sampler_2d_cpu Fatal Python error: Segmentation fault
test_ops.py::TestMathBitsCPU::test_neg_view_nn_functional_grid_sample_cpu_float64 free(): invalid next size (fast)
test_ops.py::TestCompositeComplianceCPU::test_backward_grid_sampler_2d_cpu_float32 Fatal Python error: Segmentation fault
test_ops.py::TestCommonCPU::test_dtypes_nn_functional_grid_sample_cpu Fatal Python error: Segmentation fault
```

Pull Request resolved: #158117
Approved by: https://github.com/malfet
This is needed because manylinux has used GCC 13 since #152825. As a result of the current compiler version mismatch, we've seen tests passing in the jammy-aarch64 pre-commit CI but failing for wheels built in manylinux.

Related to: #166736

Pull Request resolved: #166849
Approved by: https://github.com/robert-hardwick, https://github.com/malfet, https://github.com/Skylion007, https://github.com/atalman
Related to: #152426