[dynamo] Add handler to constant-fold elementwise_dtypes during tracing #177743
ydwu4 wants to merge 3 commits into gh/ydwu4/393/base
Conversation
elementwise_dtypes was registered as TorchInGraphFunctionVariable via torch._higher_order_ops.out_dtype, causing dynamo to try putting it in the FX graph. Since it returns (dtype, dtype) rather than tensors, this failed with "torch.* op returned non-Tensor". The fix adds a handler that evaluates elementwise_dtypes eagerly on fake tensor metadata during compilation and returns the result as constants. [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177743
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 2fc8827 with merge base a345892.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
Fix issue 2 discovered in #177166.
elementwise_dtypes was registered as TorchInGraphFunctionVariable via
torch._higher_order_ops.out_dtype, causing dynamo to try putting it in the
FX graph. Since it returns (dtype, dtype) rather than tensors, this failed
with "torch.* op returned non-Tensor". The fix adds a handler that evaluates
elementwise_dtypes eagerly on fake tensor metadata during compilation and
returns the result as constants.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @chauhang @amjames @Lucaskabela @jataylo @azahed98
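The idea in the fix can be sketched in miniature: instead of emitting a call node into the graph for a function that only computes metadata, a trace-time handler evaluates it eagerly and wraps the result as constants. Note this is a standalone toy illustration, not the actual PyTorch code: `ConstantVariable`, the handler registry, and the simplified `elementwise_dtypes` below are hypothetical stand-ins for the corresponding dynamo concepts.

```python
class ConstantVariable:
    """Stand-in for dynamo's ConstantVariable: a value baked into the trace
    at compile time rather than represented as a graph node."""
    def __init__(self, value):
        self.value = value


def elementwise_dtypes(*arg_dtypes):
    # Toy stand-in for torch._prims_common.elementwise_dtypes: promote the
    # inputs to the "widest" dtype and return (computation_dtype, result_dtype).
    # Crucially, the return value is a tuple of dtypes, not tensors, so it
    # cannot be a node in a tensor graph.
    order = {"bool": 0, "int64": 1, "float32": 2, "float64": 3}
    promoted = max(arg_dtypes, key=lambda d: order[d])
    return promoted, promoted


CONSTANT_FOLD_HANDLERS = {}


def register_constant_fold(fn):
    """Register fn to be evaluated eagerly at trace time instead of being
    put into the graph (which would fail for non-Tensor returns)."""
    def handler(*args):
        # Unwrap traced constants, run the real function now (at "compile
        # time"), and re-wrap each result as a constant for the trace.
        raw = fn(*(a.value if isinstance(a, ConstantVariable) else a
                   for a in args))
        return tuple(ConstantVariable(r) for r in raw)
    CONSTANT_FOLD_HANDLERS[fn] = handler
    return handler


handler = register_constant_fold(elementwise_dtypes)
out = handler(ConstantVariable("float32"), ConstantVariable("int64"))
# out is a tuple of ConstantVariable wrapping ("float32", "float32")
```

In the actual PR, the same shape of logic runs against fake tensor metadata during dynamo tracing, so the (dtype, dtype) result never has to be represented as a graph node.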