add API for re-functionalizing an aten op; other functionalization fixes #79420

bdhirsh wants to merge 11 commits into gh/bdhirsh/249/base
Conversation
❌ 2 New Failures as of commit a9a2430 (more details on the Dr. CI page). 2 new failures were recognized by patterns; the following CI failures do not appear to be due to upstream breakages.
ezyang left a comment:

I've already reviewed this right

yup!
Updated "add API for re-functionalizing an aten op; other functionalization fixes":

I moved the changes to `FunctionalTensorWrapper.h` out of the LTC <> functionalization PR into a separate PR here, so dealing with XLA failures will be a bit easier.

Specifically, the LTC PR will make a few operators like `pixel_shuffle`, which are functional but decompose into view ops, require re-functionalization once they hit the XLA backend. This PR exposes a helper utility to do that through `functionalize_aten_op`.

This PR also contains changes to:

- fix `detach()` for `FunctionalTensorWrapper`
- fix some undefined tensor handling cases

I have an XLA patch here to do the re-functionalizing: pytorch/xla#3646
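For readers unfamiliar with the term, the core idea can be illustrated with a toy sketch. This is NOT PyTorch's actual `functionalize_aten_op` implementation (that helper operates on dispatcher ops via `FunctionalTensorWrapper`); it is a self-contained illustration of what "functionalizing" a program means: rewriting mutating ops into purely functional ones, which is what functional backends like XLA require.

```python
# Toy sketch of the functionalization idea (NOT PyTorch's actual
# functionalize_aten_op): rewrite a straight-line program so that every
# mutating op (named with a trailing "_", following the aten convention)
# becomes an out-of-place op whose result is bound to a fresh name.

def functionalize(program):
    """program: list of (result_name, op_name, arg_names) tuples.
    An op ending in "_" mutates its result_name in place."""
    latest = {}    # original name -> current functional name
    out = []
    version = 0
    for result, op, args in program:
        args = tuple(latest.get(a, a) for a in args)
        if op.endswith("_"):
            version += 1
            new_name = f"{result}_v{version}"
            out.append((new_name, op.rstrip("_"), args))
            latest[result] = new_name   # later reads see the new value
        else:
            out.append((result, op, args))
    return out

prog = [
    ("y", "view", ("x",)),
    ("y", "add_", ("y", "one")),   # in-place mutation of y
    ("z", "mul",  ("y", "two")),   # reads the mutated y
]
# add_ becomes a functional add producing y_v1; the later mul reads y_v1
```

A real pass also has to propagate mutations back through aliasing view ops, which is the part `FunctionalTensorWrapper` is responsible for; this sketch only covers the rename-on-mutation step.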
Rebased
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as stale.
Closing; a version of this was already landed.