
Add sum and sum_out to stable ops#168062

Closed
mikaylagawarecki wants to merge 4 commits into gh/mikaylagawarecki/387/base from gh/mikaylagawarecki/387/head

Conversation

@pytorch-bot

pytorch-bot bot commented Nov 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/168062

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit e50299e with merge base 5d1459a:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

mikaylagawarecki added a commit that referenced this pull request Nov 18, 2025
ghstack-source-id: e5fa5bb
Pull Request resolved: #168062
mikaylagawarecki added a commit that referenced this pull request Nov 18, 2025
ghstack-source-id: dab72ec
Pull Request resolved: #168062
```python
self.assertEqual(result_dtype, expected_dtype)

# Test sum without dim (sum all elements) - pass empty list
result_all = libtorch_agnostic.ops.my_sum(t, [])
```
Contributor


Suggested change

```diff
- result_all = libtorch_agnostic.ops.my_sum(t, [])
+ result_all = libtorch_agnostic.ops.my_sum(t)
```

It would be best not to have to pass in an empty list at all.
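The ergonomics the reviewer is asking for can be sketched in plain Python (this is a hypothetical illustration, not the actual `libtorch_agnostic` extension API): an optional `dim` parameter defaulting to `None` lets callers write `my_sum(t)` instead of `my_sum(t, [])`.

```python
# Hypothetical sketch of the optional-dim ergonomics under discussion;
# operates on nested lists, not real tensors.
def my_sum(values, dim=None):
    """Sum a 2-D nested list. dim=None sums all elements; dim=0 sums columns."""
    if dim is None:
        # No dim given: reduce over everything, no empty list required.
        return sum(x for row in values for x in row)
    if dim == 0:
        # Reduce over rows, producing one sum per column.
        return [sum(col) for col in zip(*values)]
    raise ValueError("only dim=None or dim=0 supported in this sketch")

t = [[1, 2], [3, 4]]
print(my_sum(t))     # sum of all elements
print(my_sum(t, 0))  # per-column sums
```

In the real extension, supporting this call shape is what requires boxing an absent `dim` argument, which is where the `TORCH_BOX` discussion below comes in.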

Contributor


I'm guessing you didn't include this because TORCH_BOX doesn't support std::optional&lt;HeaderOnlyArrayRef&gt;. I think it's OK to fall back to writing a boxing function for this case, in order to show that calling just sum(t) would work.

Contributor Author


Fixed by making TORCH_BOX support std::optional&lt;HeaderOnlyArrayRef&gt; instead.

mikaylagawarecki added a commit that referenced this pull request Dec 8, 2025
ghstack-source-id: ace8cfd
Pull Request resolved: #168062
@mikaylagawarecki mikaylagawarecki added ciflow/trunk Trigger trunk jobs on your pull request release notes: cpp release notes category topic: new features topic category labels Dec 8, 2025
@mikaylagawarecki mikaylagawarecki marked this pull request as ready for review December 8, 2025 19:40
Collaborator

@albanD albanD left a comment


SGTM

@mikaylagawarecki
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Dec 9, 2025
…ch_call_dispatcher) (#169872)

Scalar's StableIValue conversion is not supported yet.

If we don't land this we can recommend `fill_(empty(...))` instead

Pull Request resolved: #169872
Approved by: https://github.com/albanD
ghstack dependencies: #169703, #169709, #169711, #168062
liangxs pushed a commit to liangxs/pytorch that referenced this pull request Dec 9, 2025
pytorchmergebot pushed a commit that referenced this pull request Dec 9, 2025
From https://github.com/pytorch/audio/blob/main/src/libtorchaudio/stable/ops.h

Technically it should have been OK not to port these, but looking at them carefully I realized that the `subtract` ported to audio ~~would have undefined behavior :/~~ is broken:

```
inline Tensor subtract(const Tensor& self, const Tensor& other) {
  const auto num_args = 2;
  std::array<StableIValue, num_args> stack{
      torch::stable::detail::from(self), torch::stable::detail::from(other)};
  TORCH_ERROR_CODE_CHECK(torch_call_dispatcher(
      "aten::subtract", "Tensor", stack.data(), TORCH_ABI_VERSION));
  return torch::stable::detail::to<torch::stable::Tensor>(stack[0]);
}
```

It misses `alpha`; the signature for `subtract.Tensor` is `func: subtract.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor`. ~~This is also our bad: although out-of-bounds reads on the StableIValue stack would be caught by ASan, without ASan they are silent correctness issues (PR coming to fix).~~

Use the old path to support this, as we don't support StableIValue conversion for Scalar yet.

Pull Request resolved: #169880
Approved by: https://github.com/albanD
ghstack dependencies: #169703, #169709, #169711, #168062, #169872
skpark-rh pushed a commit to skpark-rh/pytorch that referenced this pull request Dec 10, 2025
skpark-rh pushed a commit to skpark-rh/pytorch that referenced this pull request Dec 10, 2025
@github-actions github-actions bot deleted the gh/mikaylagawarecki/387/head branch January 8, 2026 02:21