various fixes for functionalization <> XLA integration#88506

Closed
bdhirsh wants to merge 2 commits into gh/bdhirsh/343/base from gh/bdhirsh/343/head
Conversation

@bdhirsh
Collaborator

@bdhirsh bdhirsh commented Nov 4, 2022

The fixes in this PR are mostly around:

  • Add more asserts in `FunctionalTensorWrapper`, so bugs show up more cleanly in cases where we e.g. forget to wrap an output
  • Make the `*_scatter` ops `CompositeExplicitAutogradNonFunctional`, so we get a better error message and XLA doesn't accidentally try to use them
  • Fix LTC/XLA codegen in core to handle multi-tensor `out=` ops with no returns
  • Better erroring: allow XLA to use the CPU fallback from core in a way that always errors on view ops, which XLA should no longer see

Stack from ghstack (oldest at bottom):
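
As context for the `*_scatter` bullet above: functionalization rewrites view-plus-mutation patterns into their functional `*_scatter` equivalents, which is why eager backends like XLA should never be asked to implement them directly. A minimal eager-mode sketch of the rewrite (assuming a torch build that ships `torch.select_scatter`; this illustrates the op's semantics, not this PR's internals):

```python
import torch

# In-place pattern that functionalization eliminates:
# a view (row 0) is mutated, which also mutates the base tensor.
base = torch.zeros(2, 2)
base[0] = torch.ones(2)  # select + in-place copy_

# Functional equivalent: select_scatter returns a NEW tensor with
# the source embedded at (dim=0, index=0), leaving its input untouched.
orig = torch.zeros(2, 2)
result = torch.select_scatter(orig, torch.ones(2), dim=0, index=0)

assert torch.equal(result, base)             # same values as the mutated base
assert torch.equal(orig, torch.zeros(2, 2))  # input was not mutated
```

Marking these ops `CompositeExplicitAutogradNonFunctional` means they decompose for backends that only trace functional graphs, rather than silently becoming ops a backend like XLA would have to lower by hand.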

@pytorch-bot

pytorch-bot Bot commented Nov 4, 2022

bdhirsh added a commit that referenced this pull request Nov 4, 2022
ghstack-source-id: c76b7b1
Pull Request resolved: #88506
@github-actions
Contributor

github-actions Bot commented Nov 4, 2022

This PR needs a label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

For more information, see https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@ezyang
Contributor

ezyang commented Nov 5, 2022

Moar pr description plz

@wonjoo-wj
Collaborator

@bdhirsh, can this be merged?
bdhirsh added a commit that referenced this pull request Nov 10, 2022
ghstack-source-id: 2b7b4cd
Pull Request resolved: #88506
@anjali411 anjali411 removed their request for review November 28, 2022 14:51
@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions Bot added the Stale label Jan 27, 2023
@albanD albanD removed their request for review January 31, 2023 18:47
@github-actions github-actions Bot closed this Mar 2, 2023
@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/343/head branch June 8, 2023 15:44
3 participants