
Use presence of _symint in kernel name to generate symint sig or not #84579

Closed
ezyang wants to merge 14 commits into gh/ezyang/1367/base from gh/ezyang/1367/head

Conversation

@ezyang
Contributor

@ezyang ezyang commented Sep 6, 2022

Stack from ghstack (oldest at bottom):

Something people found confusing was that whether a native::
signature got SymInt in its types depended on the dispatch key.
This changes it so that SymInt in the type is based on whether
the kernel name contains _symint. This means that when we make
operators support SymInt, you no longer have to go and update all
the preexisting definitions; instead, you selectively add _symint
to opt individual kernels into SymInt support.

I then update a number of kernels that don't have proper SymInt
support to make use of this convention. There is some hacking around
for the view generation code.

I also add support for external backends to specify 'symint' operators, for which we generate SymInt signatures instead of regular signatures.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D39310060
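The opt-in rule the description sketches can be illustrated with a short, self-contained Python sketch. All names here are illustrative only; this is not the real torchgen code, just the shape of the decision it makes:

```python
# Minimal sketch of the convention this PR introduces (illustrative only):
# whether a native:: kernel gets SymInt in its C++ signature is decided by
# the presence of "_symint" in the kernel name, not by the dispatch key it
# is registered under.

def kernel_uses_symint(kernel_name: str) -> bool:
    # Opt-in rule: only kernels explicitly named *_symint get SymInt types.
    return "_symint" in kernel_name

def cpp_argument_type(kernel_name: str, schema_type: str) -> str:
    # Hypothetical type mapping: SymInt-carrying C++ types are emitted only
    # for kernels that opted in via their name.
    if schema_type == "SymInt[]":
        return ("c10::SymIntArrayRef" if kernel_uses_symint(kernel_name)
                else "at::IntArrayRef")
    return schema_type

# A kernel named view_symint opts into the SymInt signature; plain view
# keeps the classic IntArrayRef signature.
print(cpp_argument_type("view_symint", "SymInt[]"))  # c10::SymIntArrayRef
print(cpp_argument_type("view", "SymInt[]"))         # at::IntArrayRef
```

The point of the lexical rule is that existing kernels are untouched when an operator's schema gains SymInt; only kernels renamed with the `_symint` suffix see the new types.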

@facebook-github-bot
Contributor

facebook-github-bot commented Sep 6, 2022

🔗 Helpful links

✅ No Failures (24 Pending)

As of commit 3d31ef2 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

ezyang added a commit that referenced this pull request Sep 6, 2022
ghstack-source-id: 2e883d5
Pull Request resolved: #84579
@wconstab
Contributor

wconstab commented Sep 6, 2022

Something people found confusing was that whether or not a native::
signature would get SymInt or not in its type was based on the dispatch
key

Before this PR, was the idea that 'meta' kernels were getting C++ SymInt signatures but real kernels weren't, and now this becomes more explicit, because for either meta or composite kernels you just write _symint in the name and take out the guesswork?

@ezyang
Contributor Author

ezyang commented Sep 6, 2022

Before this PR, was the idea that 'meta' kernels were getting c++ symint signatures but real kernels weren't, and now this becomes more explicit because for either meta or composite kernels you just have to write _symint in the name and take out the guesswork?

Yep

Comment thread torchgen/api/types.py Outdated
    return NativeSignature(f.func, prefix=prefix, symint=backend_index.symint)
    meta = backend_index.get_kernel(f)
    return NativeSignature(
        f.func, prefix=prefix, symint=meta is not None and meta.supports_symint()
    )
Contributor

does having a python decomp count here in favor of being symint compatible, or is a 'meta' kernel always strictly necessary?

Contributor Author

Python decomp suffices, meta is not required

      return self.reshape(size);
    } else {
-     auto output = at::_ops::view::call(self, size);
+     auto output = at::_ops::view::call(self, c10::SymIntArrayRef::fromIntArrayRef(size));
Contributor

ok, what's going on in this kernel? the view_copy name above didn't get _symint tacked on, and some of the symint-related stuff got removed, but then we are constructing SymInts and invoking view.
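For readers following the thread, the wrapping in the diff above can be sketched in a few lines of Python. All names here are invented (this is not PyTorch code); the sketch only shows the pattern of a plain-int entry point forwarding to a SymInt overload, mirroring c10::SymIntArrayRef::fromIntArrayRef:

```python
# Toy sketch (invented names): a kernel that takes plain ints forwards to
# the SymInt overload by wrapping each size first.

class SymInt:
    def __init__(self, value: int):
        self.value = value  # a concrete int carried as a "symbolic" int

def view_symint(tensor, sym_sizes):
    # Stand-in for the SymInt overload (think at::_ops::view::call taking a
    # SymIntArrayRef); here we just unwrap to show the round trip.
    return [s.value for s in sym_sizes]

def view(tensor, sizes):
    # Non-symint entry point: wrap plain ints, then call the SymInt overload.
    return view_symint(tensor, [SymInt(s) for s in sizes])

print(view(None, [2, 3]))  # [2, 3]
```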

Comment thread torchgen/native_function_generation.py Outdated
            if func.kind() == SchemaKind.out
            else cpp.name(func)
        )
    ) + "_symint"  # TODO: hmm
Contributor

why is _symint assumed here? shouldn't this be driven by the yaml?

Contributor

@wconstab wconstab Sep 6, 2022

# Details are in the function, but we only generate composite kernels (in some cases) today.

Are we assuming all composite kernels are supposed to be _symint?

Contributor Author

Sooo I am not entirely sure here, but the idea is that these kernels are autogenerated, and they handle SymInts, so we should give them symint names so they get the right signature. It's a bit dodgy though... I got here by just tweaking until it compiled.

Contributor

that sounds reasonable

clarification: these 'generated' kernels are going to call into handwritten kernels (adapting from functional to out or vice versa), right? if so, would we potentially have a situation like this?
foo_out <-- handwritten, no symint support
foo_symint <-- generated, just trivially unwraps symints and expects ints, calls foo_out

if so, I still don't think that's a problem, other than that we should probably make it clearer what's going on.

Contributor Author

Err, not really? The generated kernels typically call into other APIs as defined by the codegen; e.g., a generated out kernel will generate a call to functional and then copy_ (but the functional call is not a direct native:: call; it'll just be to some dispatched API).
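The pattern described here can be sketched as a toy Python example. All names are invented and the real codegen emits C++, but the control flow is the interesting part: the generated out= kernel calls the dispatched functional op and then copies into `out`, rather than calling a handwritten native:: kernel directly:

```python
# Toy sketch (all names invented) of a generated out= kernel: call the
# dispatched functional op, then copy_ the result into the out tensor.

class Tensor:
    def __init__(self, data):
        self.data = list(data)
    def copy_(self, other):
        self.data = list(other.data)
        return self

def dispatched_functional_op(t: Tensor) -> Tensor:
    # Stand-in for a dispatched call such as at::_ops::foo::call(...);
    # the real codegen goes through the dispatcher, not native::.
    return Tensor(x * 2 for x in t.data)

def generated_out_kernel(t: Tensor, out: Tensor) -> Tensor:
    result = dispatched_functional_op(t)  # functional call via dispatcher
    out.copy_(result)                     # then copy into the out tensor
    return out

out = Tensor([0, 0, 0])
generated_out_kernel(Tensor([1, 2, 3]), out)
print(out.data)  # [2, 4, 6]
```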

Contributor

@wconstab wconstab left a comment

This sounds like a better policy to me, and it looks correct outside of the minor questions I raised

@ezyang
Contributor Author

ezyang commented Sep 7, 2022

I'm going to squish this with the earlier diff, as it won't be possible to get XLA working without both sides of the coin. (I'm wrong; they are separable.)

ezyang added a commit to pytorch/xla that referenced this pull request Sep 7, 2022
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
@ezyang ezyang requested review from a team and albanD as code owners September 7, 2022 04:09
ezyang added a commit to pytorch/xla that referenced this pull request Sep 7, 2022
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
ezyang added a commit that referenced this pull request Sep 7, 2022
ghstack-source-id: 7a70312
Pull Request resolved: #84579
@ezyang
Contributor Author

ezyang commented Sep 8, 2022

@ezyang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@ezyang
Contributor Author

ezyang commented Sep 8, 2022

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR has internal changes and must be landed via Phabricator


@ezyang
Contributor Author

ezyang commented Sep 8, 2022

@ezyang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit to pytorch/functorch that referenced this pull request Sep 9, 2022
…(#84579)

Summary:
X-link: pytorch/pytorch#84579


Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Reviewed By: wconstab

Differential Revision: D39310060

Pulled By: ezyang

fbshipit-source-id: ee29f3a2d7aed9f32bb6e62d8da427507733482c
@facebook-github-bot
Contributor

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered without a flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot
Collaborator

Merge failed

Reason: Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x 00d0da0f39fa6373ee5bb33e7ecf33c7c6144168 returned non-zero exit code 1

Auto-merging .github/ci_commit_pins/xla.txt
CONFLICT (content): Merge conflict in .github/ci_commit_pins/xla.txt
Auto-merging aten/src/ATen/native/TensorShape.cpp
Auto-merging torchgen/gen_vmap_plumbing.py
error: could not apply 00d0da0f39... Use presence of _symint in kernel name to generate symint sig or not
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".

@izaitsevfb
Contributor

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Rebase failed due to Command git -C /home/runner/work/pytorch/pytorch rebase refs/remotes/origin/master gh/ezyang/1367/orig returned non-zero exit code 1

Rebasing (1/1)
Auto-merging .github/ci_commit_pins/xla.txt
CONFLICT (content): Merge conflict in .github/ci_commit_pins/xla.txt
Auto-merging aten/src/ATen/native/TensorShape.cpp
Auto-merging torchgen/gen_vmap_plumbing.py
error: could not apply 00d0da0f39... Use presence of _symint in kernel name to generate symint sig or not
hint: Resolve all conflicts manually, mark them as resolved with
hint: "git add/rm <conflicted_files>", then run "git rebase --continue".
hint: You can instead skip this commit: run "git rebase --skip".
hint: To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply 00d0da0f39... Use presence of _symint in kernel name to generate symint sig or not

Raised by https://github.com/pytorch/pytorch/actions/runs/3024376973

@pytorch-bot

pytorch-bot Bot commented Sep 9, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/84579

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures, 1 Pending

As of commit fcb056c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

seemethere added a commit that referenced this pull request Sep 9, 2022

ghstack-source-id: c42301c
Pull Request resolved: #84579
@seemethere
Member

Rebased to include new xla hash from @zou3519, f00dd2f35ecf6455d97237d63c70c9c8ec190940

@seemethere
Member

@pytorchbot merge -f "already passed CI, merged internally, need to get in ASAP"

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the force (-f) flag. This means your change will be merged immediately, bypassing any CI checks (ETA: 1-5 minutes). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions
Contributor

github-actions Bot commented Sep 9, 2022

Hey @ezyang.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Sep 9, 2022
…84579)

Summary:
Pull Request resolved: #84579


Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan:
Imported from OSS

Static Docs Preview: executorch
Full Site: https://our.intern.facebook.com/intern/staticdocs/eph/D39310060/V6/executorch/

Reviewed By: wconstab

Differential Revision: D39310060

Pulled By: ezyang

fbshipit-source-id: ee29f3a2d7aed9f32bb6e62d8da427507733482c
ezyang added a commit to pytorch/xla that referenced this pull request Sep 9, 2022
…ng the tests or ops explicitly (#3978)

* Companion PR for pytorch/pytorch#84579

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* pin

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* modify expect counters

* undo counter fix

* Delete .torch_pin

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Co-authored-by: Edward Z. Yang <ezyang@fb.com>
@facebook-github-bot facebook-github-bot deleted the gh/ezyang/1367/head branch September 13, 2022 14:20
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 25, 2026
…ytorch#84579)


Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39310060](https://our.internmc.facebook.com/intern/diff/D39310060)
Pull Request resolved: pytorch#84579
Approved by: https://github.com/wconstab

6 participants