
Improve docs around Math/DefaultBackend & add PythonDispatcher class.#50854

Closed
ailzhang wants to merge 6 commits into gh/ailzhang/43/base from gh/ailzhang/43/head

Conversation


@ailzhang ailzhang commented Jan 20, 2021

Stack from ghstack:

Differential Revision: D26008542

ailzhang pushed a commit that referenced this pull request Jan 20, 2021
@ailzhang (Contributor, Author) commented:

Example output of python_dispatcher.py

# toy.register(["CPU", "Math", "AutogradCPU"])
 λ ~/dev-pytorch/tools toydispatcher python python_dispatcher.py

Computed Dispatch Table
key             kernel
---------------------------
CPU             fn_CPU [kernel]
XLA             fn_Math [math kernel]
QuantizedCPU    fn_Math [math kernel]
AutogradOther   fn_Math [math kernel]
AutogradCPU     fn_AutogradCPU [kernel]
AutogradXLA     fn_Math [math kernel]

# toy.register(["CPU", "QuantizedCPU", "Math", "AutogradCPU"])
 λ ~/dev-pytorch/tools toydispatcher python python_dispatcher.py

Computed Dispatch Table
key             kernel
---------------------------
CPU             fn_CPU [kernel]
XLA             fn_Math [math kernel]
QuantizedCPU    fn_QuantizedCPU [kernel]
AutogradOther   ambiguous_autogradother [ambiguous autogradother]
AutogradCPU     fn_AutogradCPU [kernel]
AutogradXLA     fn_Math [math kernel]
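The resolution rules behind these two tables can be sketched in plain Python. The snippet below is a hypothetical toy reimplementation for illustration only (the key names and the `fn_*` / `ambiguous_autogradother` labels mirror the output above, and `OTHER_BACKENDS` is trimmed to just `QuantizedCPU`); the real `PythonDispatcher` added in this PR queries the live C++ dispatcher rather than simulating it.

```python
# Toy simulation of how the computed dispatch table is derived from the
# registered keys. Not the real dispatcher -- illustration only.

RUNTIME_KEYS = ["CPU", "XLA", "QuantizedCPU",
                "AutogradOther", "AutogradCPU", "AutogradXLA"]
# Backends whose autograd handling is lumped under AutogradOther
# (trimmed to one entry for this sketch).
OTHER_BACKENDS = {"QuantizedCPU"}

def computed_table(registered):
    registered = set(registered)
    table = {}
    for key in RUNTIME_KEYS:
        if key in registered:
            # A kernel registered directly to this key always wins.
            table[key] = f"fn_{key} [kernel]"
        elif (key == "AutogradOther" and "Math" in registered
              and registered & OTHER_BACKENDS):
            # Both the Math kernel and a backend kernel covered by
            # AutogradOther claim this key -> flagged as ambiguous.
            table[key] = "ambiguous_autogradother [ambiguous autogradother]"
        elif "Math" in registered:
            # Everything else falls back to the Math kernel.
            table[key] = "fn_Math [math kernel]"
    return table

for regs in (["CPU", "Math", "AutogradCPU"],
             ["CPU", "QuantizedCPU", "Math", "AutogradCPU"]):
    print(f"# toy.register({regs})")
    for key, kernel in computed_table(regs).items():
        print(f"{key:<15} {kernel}")
    print()
```

Running this reproduces both tables above, including the `ambiguous_autogradother` entry that appears once `QuantizedCPU` is registered alongside `Math`.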

Comment thread tools/python_dispatcher.py

@ezyang ezyang left a comment


Cool! "Toy" is a little bit of a misnomer, because you are actually using the real live production dispatcher (similarly, there's no risk of the implementation getting out of date).

@ailzhang ailzhang requested a review from smessmer January 21, 2021 22:20
ailzhang pushed a commit that referenced this pull request Jan 21, 2021
ailzhang pushed a commit that referenced this pull request Jan 22, 2021

@bhosmer bhosmer left a comment


Nice work - it's especially cool to have the python thing running off the actual dispatcher!

Got a decent pile of suggested edits in here but nothing huge. Definitely ping if any of them don't make sense or seem like I've missed the point 😁

Comment thread aten/src/ATen/native/README.md Outdated
Comment thread aten/src/ATen/native/README.md
you can put them in alias `Autograd` or specific keys like `AutogradCPU`.

If you add `dispatch` section to any API that didn't have it before, you **have to** move
the old implementation to `Math` field so that it's still available for other backends to use.

@bhosmer bhosmer Jan 25, 2021


I think we need to clarify what we're telling them to do here. "move the old implementation to the Math field" means specifying a kernel name that wasn't there before, right? The name we generate is mostly identical to the op name, but e.g. we need to tell what to do about overload names (I think we just always put an underscore between the base name and the overload name, but I'm not sure if there are exceptions, and also I'm not 100% sure we just pass through the casing) and whatever other name mangling we do.

Aside: I notice there are some ops in native_functions.yaml that only specify Math kernels right now, with the same name as the op - random examples: broadcast_to, row_stack. Are those entries necessary?


@ailzhang ailzhang Jan 25, 2021


No, they're not necessary. Having no dispatch section is equivalent to having only a Math kernel. I've updated the PR to describe it more precisely here, let me know if this works! :D
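For illustration, that equivalence looks like this in `native_functions.yaml` (hypothetical op name): an entry with no dispatch section registers its kernel exactly as if it were listed under `Math`.

```yaml
# Hypothetical entries -- these two registrations are equivalent.

# No dispatch section: the generated kernel is implicitly a Math kernel.
- func: my_op(Tensor self) -> Tensor

# Explicit Math-only dispatch section: same behavior.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    Math: my_op
```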

Comment thread aten/src/ATen/native/README.md Outdated
Comment thread torch/_python_dispatcher.py Outdated
ailzhang pushed a commit that referenced this pull request Jan 25, 2021

@bhosmer bhosmer left a comment


Awesome. A couple of follow-up edits, the main one being due to a confusing suggestion of mine in the earlier review, sorry about that 😛

Comment thread aten/src/ATen/native/README.md Outdated

If you add a backend-specific kernel to any operator that only had a `Math` kernel before (no dispatch section,
or only a `Math` entry in the dispatch section), you **have to** move the old implementation to a `<op>_math`
kernel and put it in the `Math` field so that it's still available for other backends to use.
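As a hypothetical example of the rule in the paragraph above (op and kernel names invented for illustration), the `native_functions.yaml` change would look like:

```yaml
# Before: only an implicit Math kernel.
- func: my_op(Tensor self) -> Tensor

# After adding a CPU-specific kernel: the old implementation stays
# reachable for other backends via the Math entry.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    CPU: my_op_cpu
    Math: my_op_math
```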


I don't think we actually require a name change though, right? Sorry about my confusing earlier comment btw - I was thinking about telling them how to reconstruct the name of the old implementation, but of course they can just look it up. How about this for the whole paragraph:


Important: because a Math kernel is implicitly registered for ops with no dispatch: section, when you add a backend-specific kernel (and hence a dispatch: section) to one of these, you must also
add a Math: entry that names the old kernel implementation (it's named after the op, with _<overload name> added if applicable), so that it's still available for other backends to use.


(I was going to put something about adding a `_math` suffix for clarity, but I just checked native_functions and we don't actually do that! Most `Math:` kernels are just the op name, with a couple that put `math_` in front. So maybe we shouldn't suggest anything in the docs, and just let people take their cues from what they find in native_functions. If you think it's important for clarity we could say something here, but then we should codemod to make native_functions match, I'd say.)

@ailzhang (Contributor, Author) commented:


SGTM! Thanks for the suggestion!

ailzhang pushed a commit that referenced this pull request Jan 25, 2021

@bhosmer bhosmer left a comment


😍


codecov Bot commented Jan 26, 2021

Codecov Report

Merging #50854 (fe9ddd4) into gh/ailzhang/43/base (28869d5) will decrease coverage by 0.00%.
The diff coverage is 96.36%.

@@                   Coverage Diff                   @@
##           gh/ailzhang/43/base   #50854      +/-   ##
=======================================================
- Coverage                80.92%   80.92%   -0.01%     
=======================================================
  Files                     1926     1927       +1     
  Lines                   210030   210085      +55     
=======================================================
+ Hits                    169967   170010      +43     
- Misses                   40063    40075      +12     

@facebook-github-bot

@ailzhang merged this pull request in a51b9a8.

@facebook-github-bot facebook-github-bot deleted the gh/ailzhang/43/head branch January 29, 2021 15:21
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…pytorch#50854)

Summary: Pull Request resolved: pytorch#50854

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D26008542

Pulled By: ailzhang

fbshipit-source-id: e9c0aa97ac2537ff612f5faf348fcb613da09479