Don't decompose functional ops in predispatch functionalization#116383

Closed
tugsbayasgalan wants to merge 8 commits into gh/tugsbayasgalan/182/base from gh/tugsbayasgalan/182/head

Conversation

@tugsbayasgalan (Contributor) commented Dec 25, 2023

@pytorch-bot (bot) commented Dec 25, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/116383

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 852881b with merge base 36dccc2:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

tugsbayasgalan added a commit that referenced this pull request Dec 25, 2023
tugsbayasgalan added a commit that referenced this pull request Dec 27, 2023
torch.ops.aten.dropout.default, # type: ignore[has-type]
torch.ops.aten.batch_norm.default, # type: ignore[has-type]
torch.ops.aten.native_batch_norm.default, # type: ignore[has-type]
torch.ops.aten._batch_norm_impl_index.default, # type: ignore[has-type]
Collaborator:
Hmm, I think we want every op that is a "maybe-mutating/maybe-aliasing" op to be in this list. That also includes:

aten.reshape
aten.contiguous
aten.cudnn_batch_norm
aten.miopen_batch_norm
...??? (hopefully that's it)
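The distinction being discussed can be sketched as follows. This is a hypothetical stand-in (not the actual PyTorch implementation, and with no torch dependency): ops that merely claim to be functional but may in fact mutate or alias their inputs must still be decomposed under pre-dispatch functionalization, while genuinely functional ops are preserved. The set contents and function name are illustrative, drawn from the list and suggestions in this thread.

```python
# Ops that claim to be functional but may mutate/alias their inputs.
# Contents mirror the discussion above; the set itself is illustrative.
MAYBE_ALIASING_OR_MUTATING = {
    "aten.dropout.default",
    "aten.batch_norm.default",
    "aten.native_batch_norm.default",
    "aten._batch_norm_impl_index.default",
    "aten.cudnn_batch_norm",
    "aten.miopen_batch_norm",
}


def should_decompose(op_name: str) -> bool:
    """Under pre-dispatch functionalization, truly functional ops are
    preserved; maybe-mutating / maybe-aliasing ops must still be
    decomposed so functionalization can handle their side effects."""
    return op_name in MAYBE_ALIASING_OR_MUTATING
```

With this shape, adding an op the reviewers flag (e.g. aten.cudnn_batch_norm) is a one-line change to the set.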

Collaborator:

Can you also add tests, similar to the batch_norm test that you have (proving that batch_norm still decomposes with pre-dispatch functionalization), but also covering:

reshape() -> should decompose into view()

contiguous() -> should be a no-op if the input is already contiguous
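The two expectations can be modeled with a toy stand-in (no torch dependency; the "traces" are hypothetical lists of op names, not real FX graphs): reshape on a contiguous input should lower to a plain view, and contiguous on an already-contiguous input should emit nothing at all.

```python
# Toy model of the two test expectations from the review. The op-name
# lists are illustrative stand-ins for what a real trace would contain.

def trace_reshape(input_is_contiguous: bool) -> list:
    """aten.reshape should decompose into aten.view on a contiguous
    input; otherwise a copy may be needed first (hedged assumption)."""
    if input_is_contiguous:
        return ["aten.view"]
    return ["aten.clone", "aten.view"]


def trace_contiguous(input_is_contiguous: bool) -> list:
    """aten.contiguous should be a no-op when the input is already
    contiguous, and a copy otherwise."""
    if input_is_contiguous:
        return []  # nothing to trace: contiguous() returns its input
    return ["aten.clone"]
```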

Contributor (Author):

I think reshape and contiguous are already marked as view ops, so we should be good without adding them to the list. I wrote some test cases to verify this.

torch.ops.prim.device.default, # type: ignore[has-type]
]

# These are ops that claim to be functional, but actually are not
Collaborator:

Maybe call these "maybe-mutating / maybe-aliasing" ops.

We should probably consider adding a tag for these, to be honest (probably OK not to do in this PR).

# turn off decomp for predispatch right now. Therefore, we do best
# effort by not decomposing ops that are functional in PreDispatch functionalization
# for now.
if self._dispatch_key is not None:
Collaborator:

nit: maybe make this self._dispatch_key == torch._C._DispatchKey.PreDispatch for clarity (it's kind of annoying that _dispatch_key seems like it can be an arbitrary dispatch key here, when we know it's either None or PreDispatch; maybe we should just make it a bool).
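The nit can be illustrated with a small sketch using a stand-in enum (the real code uses torch._C._DispatchKey; the Tracer class here is hypothetical). Comparing against the key explicitly, or just storing a bool, makes the "None or PreDispatch" invariant visible at the call site instead of being implied by `is not None`.

```python
from enum import Enum, auto


class DispatchKey(Enum):
    # Stand-in for torch._C._DispatchKey; only the members needed here.
    PreDispatch = auto()
    Autograd = auto()


class Tracer:
    def __init__(self, dispatch_key=None):
        # Invariant from the discussion: either None or PreDispatch.
        assert dispatch_key in (None, DispatchKey.PreDispatch)
        self._dispatch_key = dispatch_key

    @property
    def is_pre_dispatch(self) -> bool:
        # Explicit comparison, per the reviewer's suggestion, rather
        # than the more opaque `self._dispatch_key is not None`.
        return self._dispatch_key == DispatchKey.PreDispatch
```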

@bdhirsh (Collaborator) left a review comment:

Looks good! Just missing the extra "maybe aliasing" ops + extra tests (giving a preemptive stamp).

We talked about the testing strategy for composite op schemas + pre-dispatch tracing more generally, I think it's fine to not do in this PR (but we should probably do it before turning pre-dispatch on by default in export).

tugsbayasgalan added a commit that referenced this pull request Dec 28, 2023
tugsbayasgalan added a commit that referenced this pull request Dec 28, 2023
@tugsbayasgalan (Contributor, Author):
@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Dec 28, 2023
@pytorchmergebot (Collaborator):

Merge failed

Reason: this PR needs a release notes: label. If your changes are user facing and intended to be part of the release notes, please use a label starting with release notes:. If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Raised by workflow job.

@tugsbayasgalan (Contributor, Author):

@pytorchbot label "topic: not user facing"

@pytorch-bot pytorch-bot bot added the topic: not user facing topic category label Dec 28, 2023
@tugsbayasgalan (Contributor, Author):

@pytorchbot merge

@pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@facebook-github-bot facebook-github-bot deleted the gh/tugsbayasgalan/182/head branch December 31, 2023 15:21

Labels: ciflow/inductor, ciflow/trunk, Merged, topic: not user facing

4 participants