Add API for open registration between operators and subclasses (and modes) #130064

zou3519 wants to merge 7 commits into gh/zou3519/1016/base
Conversation
…odes)

We add torch.library.Library._register_torch_dispatch_rule. Here, a user can provide us a specific rule to run for a specific (torch_dispatch_class, operator) pair. The motivation is that a user might want to extend a subclass/mode but may not have access to the source code of the subclass/mode.

I'll make this public in a follow-up PR if we think the approach and API is good.

Keep in mind that many subclasses will likely deliver their own open registration solution (DTensor has register_sharding_prop_rule and NJT has register_jagged_op); _register_torch_dispatch_rule is meant as a catch-all open registration mechanism for when the subclass hasn't provided anything more specific.

Test Plan:
- new tests

[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/130064
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 73bc42c with merge base 9c1ba5a.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 Hours).
Learn more about merging in the wiki.
Questions? Feedback? Please reach out to the PyTorch DevX Team
@pytorchbot revert -m 'Sorry for reverting your change but test_profiler_tree is failing in trunk after this lands https://hud.pytorch.org/pytorch/pytorch/commit/922d2737d5e0ad22ee1dcf91c48ab09d641de840, maybe a landrace' -c landrace
@pytorchbot successfully started a revert job. Check the current status here.
@zou3519 your PR has been successfully reverted.
…s (and modes) (#130064)" This reverts commit 922d273. Reverted #130064 on behalf of https://github.com/huydhn due to Sorry for reverting your change but test_profiler_tree is failing in trunk after this lands https://hud.pytorch.org/pytorch/pytorch/commit/922d2737d5e0ad22ee1dcf91c48ab09d641de840, maybe a landrace ([comment](#130064 (comment)))
@huydhn afaict that test was added in 2022; is something else wrong with CI? I did confirm that the test failed locally.
This is the API for defining the interaction between a torch_dispatch class and a custom op. Taking API bikeshedding. Test Plan: - new tests Pull Request resolved: #130261 Approved by: https://github.com/albanD ghstack dependencies: #130064
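For context, a minimal sketch of how the public API from the follow-up (#130261) is described to work. The decorator form, the operator string, and the rule signature below are assumptions based on the commit message above, not code quoted from this thread:

```python
import torch

class MySubclass(torch.Tensor):
    # Hypothetical subclass standing in for one whose source we can't edit.
    pass

# Assumed decorator form: register a rule for the (subclass, operator) pair
# without touching MySubclass.__torch_dispatch__. "mylib::foo" is a
# hypothetical custom op assumed to be defined elsewhere.
@torch.library.register_torch_dispatch("mylib::foo", MySubclass)
def foo_rule(cls, func, types, args, kwargs):
    # Called when mylib::foo is dispatched with a MySubclass argument;
    # the body is application-specific.
    ...
```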
This reverts commit bef085b. Reverted #130079 on behalf of https://github.com/izaitsevfb due to depends on #130064 which needs to be reverted ([comment](#130079 (comment)))
Fixes #129617 Pull Request resolved: #130079 Approved by: https://github.com/zou3519
…30261)" This reverts commit bb9a73f. Reverted #130261 on behalf of https://github.com/izaitsevfb due to depends on #130064 which needs to be reverted ([comment](#130261 (comment)))
@pytorchbot revert -m "fails internal builds, see D59553526" -c ghfirst |
@pytorchbot successfully started a revert job. Check the current status here.
@zou3519 your PR has been successfully reverted.
…s (and modes) (#130064)" This reverts commit c23d103. Reverted #130064 on behalf of https://github.com/izaitsevfb due to fails internal builds, see [D59553526](https://www.internalfb.com/diff/D59553526) ([comment](#130064 (comment)))
```cpp
static auto find_torch_dispatch_rule =
    py::module_::import("torch._library.simple_registry")
        .attr("find_torch_dispatch_rule");
```
Suggested change:

```cpp
static const py::handle find_torch_dispatch_rule =
    py::module_::import("torch._library.simple_registry")
        .attr("find_torch_dispatch_rule").release();
```
Use py::handle instead of py::object.
What's the difference?
Also, I can fix this up (go back to the original approach you suggested) when we update pybind11 internally (which will hopefully be soon); I want to merge this PR sooner rather than later because it may have performance implications.
py::handle is equivalent to PyObject *. It will not do Py_DECREF(obj.m_ptr) on C++ instance destruction, whereas py::object will. handle = object.release() intentionally leaks the instance to keep it alive on the Python side.
Stack from ghstack (oldest at bottom):
We add torch.library.Library._register_torch_dispatch_rule. Here, a user
can provide us a specific rule to run for a specific
(torch_dispatch_class, operator) pair. The motivation is that a user
might want to extend a subclass/mode but may not have access to the
source code of the subclass/mode.
I'll make this public in a follow-up PR if we think the approach and API
is good.
Keep in mind that many subclasses will likely deliver their own open
registration solution (DTensor has register_sharding_prop_rule and NJT
has register_jagged_op); _register_torch_dispatch_rule is meant as a
catch-all open registration mechanism for when the subclass hasn't
provided anything more specific.
Test Plan:
- new tests
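A minimal sketch of the registration flow described above. Everything here other than the hook this PR adds is hypothetical scaffolding: the mode, the "mylib" namespace, the op, and the rule's exact signature are assumptions for illustration, not code taken from this PR.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class ThirdPartyMode(TorchDispatchMode):
    # Stand-in for a mode whose source we cannot edit; by default it
    # simply forwards every op it intercepts.
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        return func(*args, **(kwargs or {}))

# Hypothetical custom op in a hypothetical "mylib" namespace.
lib = torch.library.Library("mylib", "FRAGMENT")
lib.define("double(Tensor x) -> Tensor")

def double_under_mode(mode, func, types, args, kwargs):
    # Rule assumed to receive the active mode plus the usual
    # __torch_dispatch__ arguments; it runs instead of
    # ThirdPartyMode.__torch_dispatch__ for this one operator.
    return args[0] * 2

# The catch-all hook this PR adds: attach a rule to the
# (torch_dispatch_class, operator) pair without the mode's cooperation.
lib._register_torch_dispatch_rule("double", ThirdPartyMode, double_under_mode)

with ThirdPartyMode():
    y = torch.ops.mylib.double(torch.ones(3))  # routed to double_under_mode
```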