Add sum and sum_out to stable ops #168062

mikaylagawarecki wants to merge 4 commits into gh/mikaylagawarecki/387/base from
Conversation
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/168062

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit e50299e with merge base 5d1459a:

FLAKY - The following job failed but was likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
self.assertEqual(result_dtype, expected_dtype)

# Test sum without dim (sum all elements) - pass empty list
result_all = libtorch_agnostic.ops.my_sum(t, [])
Suggested change:
- result_all = libtorch_agnostic.ops.my_sum(t, [])
+ result_all = libtorch_agnostic.ops.my_sum(t)
It would be best not to have to pass in an empty list at all.
I'm noticing you probably don't include this because TORCH_BOX doesn't support `std::optional<HeaderOnlyArrayRef>`. I think it's ok to fall back to writing a boxing function for this case in order to show that calling just sum(t) would work.
Fixed by making TORCH_BOX support `std::optional<HeaderOnlyArrayRef>` instead.
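The ergonomics point in this thread can be sketched in plain Python (`my_sum` here is a hypothetical stand-in, not the real extension op): making the `dim` argument optional lets callers drop the empty list entirely.

```python
from typing import Optional, Sequence

def my_sum(values: Sequence[float], dim: Optional[Sequence[int]] = None) -> float:
    # Hypothetical stand-in for the extension op: with dim omitted
    # (or an empty list), reduce over all elements.
    return float(sum(values))

# Both spellings reduce over everything, but the first reads better:
assert my_sum([1.0, 2.0, 3.0]) == 6.0
assert my_sum([1.0, 2.0, 3.0], []) == 6.0
```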
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#168062
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#169703, pytorch#169709, pytorch#169711
…ch_call_dispatcher) (pytorch#169872)

Scalar's StableIValue conversion is not supported yet. If we don't land this we can recommend `fill_(empty(...))` instead.

Pull Request resolved: pytorch#169872
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#169703, pytorch#169709, pytorch#169711, pytorch#168062
From https://github.com/pytorch/audio/blob/main/src/libtorchaudio/stable/ops.h

Technically it should have been ok not to port these, but looking at these carefully I realized the subtract ported to audio ~~would have undefined behavior :/~~ is broken

```
inline Tensor subtract(const Tensor& self, const Tensor& other) {
  const auto num_args = 2;
  std::array<StableIValue, num_args> stack{
      torch::stable::detail::from(self),
      torch::stable::detail::from(other)};
  TORCH_ERROR_CODE_CHECK(torch_call_dispatcher(
      "aten::subtract", "Tensor", stack.data(), TORCH_ABI_VERSION));
  return torch::stable::detail::to<torch::stable::Tensor>(stack[0]);
}
```

as it missed `alpha`: the signature for `subtract.Tensor` is `func: subtract.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor`.

~~This is also our bad: although out-of-bounds reads on the StableIValue stack would be caught by asan, without asan they are silent correctness issues (PR coming to fix).~~

Use the old path to support this, as we don't support StableIValue conversion for Scalar yet.

Pull Request resolved: #169880
Approved by: https://github.com/albanD
ghstack dependencies: #169703, #169709, #169711, #168062, #169872
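The bug described above can be modeled in plain Python (this is a hypothetical illustration, not the real dispatcher): a boxed call must push one StableIValue slot per schema argument, and `subtract.Tensor`'s schema has three (`self`, `other`, `alpha=1`), so the two-slot stack in the broken port under-fills the call.

```python
def boxed_subtract(stack):
    # Models the shape of aten::subtract.Tensor(Tensor self, Tensor other,
    # *, Scalar alpha=1): the callee expects exactly three slots.
    self_, other, alpha = stack  # fails if the caller pushed only two slots
    return self_ - alpha * other

# A correctly sized three-slot stack works:
assert boxed_subtract([10.0, 3.0, 1.0]) == 7.0

# The broken port's two-slot stack cannot satisfy the schema:
try:
    boxed_subtract([10.0, 3.0])
    raised = False
except ValueError:
    raised = True
assert raised
```

In the real C++ dispatcher there is no such unpacking check, which is why (as the commit message notes) the missing slot shows up as an out-of-bounds read rather than a clean error.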
Stack from ghstack (oldest at bottom):