Use fabi-version=11 to ensure compatibility between gcc7 and gcc9 binaries #81058

atalman wants to merge 6 commits into pytorch:master
Conversation
✅ No failures (0 pending) as of commit b03cacd (more details on the Dr. CI page). This comment was automatically generated by Dr. CI.
malfet left a comment:
This is the wrong location to put this constraint, as it has nothing to do with generic PyTorch builds, but is specific to how we are packaging it for release.
As discussed, we will not be doing this change in this PR, but will refactor in pytorch/builder#1084.
@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here.

Merge failed due to: Refusing to merge as mandatory check(s) `pull` failed for rule `superuser`.

@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here.

@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here.
Hey @atalman.
…aries (#81058) (#81058)

Summary:
Fixes: #80489

Test using the CUDA 11.3 manywheel binary:

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
```

Functorch test (torch: 1.13.0.dev20220707+cu113, functorch with cu102):

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)

from functorch import vmap
x = torch.randn(2, 3, 5)
vmap(lambda x: x, out_dims=3)(x)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
/home/atalman/temp/testc1.py:5: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:73.)
  x = torch.randn(2, 3, 5)
Traceback (most recent call last):
  File "/home/atalman/temp/testc1.py", line 6, in <module>
    vmap(lambda x: x, out_dims=3)(x)
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 361, in wrapped
    return _flat_vmap(
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 488, in _flat_vmap
    return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 165, in _unwrap_batched
    flat_outputs = [
  File "/home/atalman/conda/lib/python3.9/site-packages/functorch/_src/vmap.py", line 166, in <listcomp>
    _remove_batch_dim(batched_output, vmap_level, batch_size, out_dim)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```

Related Builder PR: pytorch/builder#1083
Test PR: #81232
Pull Request resolved: #81058
Approved by: https://github.com/zou3519, https://github.com/malfet
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d552ba3b4f53da9b6a5f6e0463111e43b367ef8a
Reviewed By: DanilBaibak
Differential Revision: D37813240
Pulled By: atalman
fbshipit-source-id: 94d94e777b0e9d5da106173c06117b3019ba71c4
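The IndexError at the end of the functorch log above is an ordinary argument-validation error: for a batched output with `ndim` dimensions, `out_dims` must fall in `[-ndim, ndim - 1]`, which is why `out_dims=3` is rejected for the 3-d output here. A standalone sketch of that validation (hypothetical code, not functorch's actual implementation):

```python
# Hypothetical sketch (not functorch's real code) of the range check that
# produces "Dimension out of range (expected to be in range of [-3, 2], but got 3)".
def check_out_dim(out_dim: int, ndim: int) -> int:
    if not -ndim <= out_dim <= ndim - 1:
        raise IndexError(
            f"Dimension out of range (expected to be in range of "
            f"[{-ndim}, {ndim - 1}], but got {out_dim})"
        )
    # Normalize negative indices, e.g. -1 -> ndim - 1.
    return out_dim % ndim

print(check_out_dim(2, 3))   # 2
print(check_out_dim(-1, 3))  # 2
```

Reaching this Python-level check (rather than crashing inside the C++ extension) is what the log demonstrates for the cross-ABI torch/functorch pairing.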
OK, CI is actually fine; it is our internal build that failed. I will try to look into it this afternoon.
This seems to break our setup when using gcc8. I run into errors. Any insight?
I guess pytorch/xla would just always enable |
…aries (#81058) (#81058) (#81884): same summary as the commit above; fbshipit-source-id: 94d94e777b0e9d5da106173c06117b3019ba71c4
(#4137) Added a pytorch patch file to revert pytorch/pytorch#81058; updated the diff file.
Fixes: #80489

Test using the CUDA 11.3 manywheel binary:

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
```

Functorch test (torch: 1.13.0.dev20220707+cu113, functorch with cu102):

```python
import torch
print(torch.__version__)
print(torch._C._PYBIND11_BUILD_ABI)

from functorch import vmap
x = torch.randn(2, 3, 5)
vmap(lambda x: x, out_dims=3)(x)
```

Output:

```
1.13.0.dev20220707+cu113
_cxxabi1011
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```

Related Builder PR: pytorch/builder#1083
Test PR: #81232
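The tests above key off the reported pybind11 ABI tag (`_cxxabi1011` in the logs): mixing prebuilt extension modules is only safe when the tags match exactly. As a minimal illustration, here is a hypothetical helper for that comparison (not a PyTorch API; PyTorch only exposes its own tag via `torch._C._PYBIND11_BUILD_ABI`):

```python
# Hypothetical helper (not part of PyTorch): decide whether two prebuilt
# modules can be mixed, based on the pybind11 ABI tag each one reports.
def abi_compatible(tag_a: str, tag_b: str) -> bool:
    # Tags must match exactly; differing tags mean the binaries were built
    # against different GCC C++ ABI versions and must not be combined.
    return tag_a == tag_b

print(abi_compatible("_cxxabi1011", "_cxxabi1011"))  # True
print(abi_compatible("_cxxabi1011", "_cxxabi1013"))  # False
```

Pinning `-fabi-version=11` at build time is what makes the gcc7 and gcc9 wheels report the same tag in the first place.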