Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous #155590
laithsakka wants to merge 57 commits into gh/laithsakka/206/base
Conversation
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155590
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 2 Cancelled Jobs as of commit 8c174c4 with merge base 0f9c1b3.
NEW FAILURE - The following job has failed:
CANCELLED JOBS - The following jobs were cancelled. Please retry:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Like is_contiguous, but more dynamic-shape friendly. Returns an uncertain false instead of throwing a data-dependent error for tensors with unbacked sizes or strides that could be either contiguous or not. The property is cached, meaning that if runtime assertions are added after caching that could be used to infer that the tensor is always contiguous, this can still return false. This should be used in places where the contiguity check is not material, or where a general (even if less performant) path can be taken, for example the clone path in reshape or in a contiguous call. [ghstack-poisoned]
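The "uncertain false" semantics can be illustrated with a small model. This is a hypothetical Python sketch, not the actual PyTorch implementation; here `None` stands in for an unbacked symbolic size or stride whose value is unknown:

```python
# Hypothetical sketch of the semantics described above, not the actual
# PyTorch implementation. None models an unbacked symbolic size/stride.

class DataDependentError(RuntimeError):
    """Raised when a check would need the value of an unbacked symbol."""

def _contiguity(sizes, strides, on_unknown):
    expected = 1
    for size, stride in reversed(list(zip(sizes, strides))):
        if size is None or stride is None:
            return on_unknown()
        if size != 1 and stride != expected:
            return False
        expected *= size
    return True

def is_contiguous(sizes, strides):
    # Strict variant: hitting an unbacked symbol throws a DDE.
    def raise_dde():
        raise DataDependentError("data-dependent contiguity check")
    return _contiguity(sizes, strides, raise_dde)

def is_contiguous_or_false(sizes, strides):
    # Friendly variant: an unbacked symbol yields an "uncertain false".
    return _contiguity(sizes, strides, lambda: False)

print(is_contiguous_or_false([2, 3], [3, 1]))        # True: provably contiguous
print(is_contiguous_or_false([2, None], [None, 1]))  # False: cannot decide
```

Note that the friendly variant never answers an incorrect true: it only trades a hard error for a conservative false.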
This one too: pytorch/aten/src/ATen/native/TensorProperties.cpp, lines 115 to 116 in 2161be8
Address failures, add a single unit test. I think we might need to add the sym_max() in the def_contig to make it work like what I did in the Python version.
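For context, the sym_max() mentioned here refers to clamping each size before accumulating the expected stride. A hypothetical Python sketch of that trick (simplified, and not the PyTorch implementation):

```python
# Hypothetical sketch of the sym_max() trick mentioned above (simplified,
# not the PyTorch implementation). Accumulating the expected stride as a
# product of max(size, 1) means size-0 dims do not zero out the running
# product, so no separate branch on size == 0 is needed. That matters
# when `size` is a symbolic expression we cannot branch on.

def definitely_contiguous(sizes, strides):
    expected = 1
    for size, stride in reversed(list(zip(sizes, strides))):
        # A size-1 dim is contiguous whatever its stride is.
        if size != 1 and stride != expected:
            return False
        expected *= max(size, 1)  # the sym_max(): never multiply by 0
    return True

print(definitely_contiguous([2, 1, 3], [3, 99, 1]))  # True: size-1 stride ignored
print(definitely_contiguous([2, 3], [4, 1]))         # False: stride mismatch
```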
…e c++ is_contiguous_or_false."
When we compute contiguity for a tensor with dynamic shapes we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.
is_contiguous returns a SymBool that is then either evaluated, or guard_or_false can be called on it to avoid data-dependent errors. is_contiguous_or_false is similar to is_contiguous but can return false instead of throwing a DDE; similarly for is_non_overlapping_and_dense_or_false. In this PR I only handle default contiguity; I will follow up with changes for other formats like channel_last. This PR uses is_contiguous_or_false in contiguous: the semantics is that if we do not know whether the input is contiguous, we just clone.
Differential Revision: [D77183032](https://our.internmc.facebook.com/intern/diff/D77183032)
cc jgong5 mingfeima XiaobingSuper sanchitintel ashokei jingxu10 jerryzh168 voznesenskym penguinwu EikanWang Guobing-Chen zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov
[ghstack-poisoned]
…e c++ sym_is_contiguous"
When we compute contiguity for a tensor with dynamic shapes we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.
sym_is_contiguous returns a SymBool that is then either evaluated, or guard_or_false can be called on it to avoid data-dependent errors, e.g.:
bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
In this PR I only handle default contiguity; I will follow up with changes for other formats like channel_last. We use this pattern in several locations in this PR to avoid DDEs.
Differential Revision: [D77183032](https://our.internmc.facebook.com/intern/diff/D77183032)
cc jgong5 mingfeima XiaobingSuper sanchitintel ashokei jingxu10 jerryzh168 voznesenskym penguinwu EikanWang Guobing-Chen zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov
[ghstack-poisoned]
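The three-step strategy can be sketched in miniature. This is a hypothetical Python model; `Sym`, `guard_or_false`, and the helper names are illustrative, not the PyTorch internals:

```python
# Hypothetical miniature of the three-step strategy above. All names are
# illustrative; this is not the PyTorch implementation.

UNKNOWN = object()  # stands in for a symbolic SymBool we could not resolve

class Sym:
    """Toy SymInt: `hint` is the known runtime value for a backed
    symbol, or None for an unbacked one."""
    def __init__(self, hint):
        self.hint = hint

def _hint(v):
    return v if isinstance(v, int) else v.hint

def _check(sizes, strides):
    expected = 1
    for size, stride in reversed(list(zip(sizes, strides))):
        if size != 1 and stride != expected:
            return False
        expected *= size
    return True

def sym_is_contiguous(sizes, strides, guards):
    vals = list(sizes) + list(strides)
    # Step 1: fully concrete inputs are decided without any guard.
    if all(isinstance(v, int) for v in vals):
        return _check(sizes, strides)
    # Step 2: every symbol is hinted, so decide on the hints and record
    # a guard that specializes to this branch.
    if all(_hint(v) is not None for v in vals):
        guards.append("contiguity branch guard")
        return _check([_hint(s) for s in sizes], [_hint(t) for t in strides])
    # Step 3: unbacked symbols remain, so the answer stays symbolic.
    return UNKNOWN

def guard_or_false(sym_bool):
    # Resolve if possible; otherwise answer False instead of raising a DDE.
    return sym_bool if isinstance(sym_bool, bool) else False

guards = []
print(guard_or_false(sym_is_contiguous([2, 3], [3, 1], guards)))          # True, no guard
print(guard_or_false(sym_is_contiguous([Sym(2), 3], [3, 1], guards)))     # True, guard recorded
print(guard_or_false(sym_is_contiguous([Sym(None), 3], [3, 1], guards)))  # False: unbacked
```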
@pytorchbot merge
Merge failed. Reason: This PR has internal changes and must be landed via Phabricator! Please try reimporting/re-exporting the PR!
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -m "Breaking 1000s of internal builds, it can't be properly landed internally, there are no options except revert and codev. D77556361" -c ghfirst
@pytorchbot successfully started a revert job. Check the current status here.
@laithsakka your PR has been successfully reverted.
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: Comment with id 3024775982 not found.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorch revert -m "I was asked by infra team to land this internally first"
@pytorchbot revert -m "was asked to land this internally" -c ghfirst
@pytorchbot successfully started a revert job. Check the current status here.
@laithsakka your PR has been successfully reverted.
Stack from ghstack (oldest at bottom):
When we compute contiguity for a tensor with dynamic shapes we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.
sym_is_contiguous returns a SymBool that is then either evaluated, or guard_or_false can be called
on it to avoid data-dependent errors.
ex:
bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that does that.
In this PR I only handle default contiguity; I will follow up with changes for other formats like channel_last.
We use this pattern in several locations in this PR to avoid DDEs.
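As a concrete illustration of the call-site pattern, here is a hypothetical Python sketch (not the PyTorch code) of a contiguous-style entry point that clones whenever contiguity cannot be proven:

```python
# Hypothetical sketch of the call-site pattern described above, not the
# PyTorch code. None models an unbacked symbolic size/stride.

def definitely_contiguous(sizes, strides):
    # "Or-false" contiguity: unbacked symbols give False, never a DDE.
    expected = 1
    for size, stride in reversed(list(zip(sizes, strides))):
        if size is None or stride is None:
            return False
        if size != 1 and stride != expected:
            return False
        expected *= size
    return True

def contiguous(sizes, strides, storage):
    # Fast path only when contiguity is provable; otherwise take the
    # general path and clone. Cloning an already-contiguous tensor is
    # correct, merely a wasted copy, so "uncertain false" is safe here.
    if definitely_contiguous(sizes, strides):
        return storage
    return list(storage)  # stand-in for materializing a contiguous clone

buf = [1, 2, 3, 4]
print(contiguous([2, 2], [2, 1], buf) is buf)        # True: no copy needed
print(contiguous([2, None], [None, 1], buf) is buf)  # False: cloned to be safe
```

This is why the check only needs to be "or-false": a spurious false costs a copy, while a spurious true would be incorrect.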
Differential Revision: D77183032
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov