various fixes for functionalization <> XLA integration #88506
bdhirsh wants to merge 2 commits into gh/bdhirsh/343/base from gh/bdhirsh/343/head
Conversation
This PR needs a label. If your changes are user facing and intended to be a part of release notes, please use a label starting with `release notes:`. If not, please add the `topic: not user facing` label. For more information, see https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
Moar pr description plz
@bdhirsh, can this be merged?
The fixes in this PR are mostly around:

- More asserts in `FunctionalTensorWrapper`, so bugs show up more cleanly in cases where we e.g. forget to wrap an output.
- Make the `*_scatter` ops `CompositeExplicitAutogradNonFunctional`, so we get a better error message and XLA doesn't accidentally try to use them (see the sketch below).
- Fix LTC/XLA codegen in core to handle multi-tensor `out=` ops with no returns.
- Better erroring: allow XLA to use the CPU fallback from core in a way that always errors on view ops, which XLA should no longer see.
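For context, here is a minimal sketch of the view-mutation rewriting that makes the `*_scatter` ops relevant. This is illustrative rather than code from this PR, and it assumes a build where `torch.func.functionalize` and `make_fx` are available (older builds expose `functorch.functionalize` instead):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.func import functionalize  # functorch.functionalize on older builds

def f(x):
    y = x[0]   # a view (select) into x
    y.add_(1)  # in-place mutation through the view
    return x

# Functionalization removes the mutation: the traced graph rebuilds the
# base tensor with a functional *_scatter op (select_scatter here)
# instead of mutating a view, which is what backends like XLA need.
gm = make_fx(functionalize(f))(torch.zeros(2, 2))
print(gm.code)
```

Marking the `*_scatter` ops `CompositeExplicitAutogradNonFunctional` means their default decompositions (which reintroduce views and mutation) are not used below autograd, so a backend like XLA either lowers them directly or fails with a clear error.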
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as `Stale`.