
[ATen] Add reduction tag to reduction operators #165155

Closed

eellison wants to merge 3 commits into gh/eellison/848/base from gh/eellison/848/head

Conversation

@eellison
Contributor

@eellison eellison commented Oct 10, 2025

Stack from ghstack (oldest at bottom):

Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s).

Based on PR #153342 - co-written with @AlonSardas.

Just as we have the pointwise tag, this can be useful for compiler passes or for opting into sharding rules.
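The change can be sketched as follows. The field layout mirrors the existing conventions in tags.yaml and native_functions.yaml; the description text and the exact schema string shown here are illustrative, not copied from the PR diff:

```yaml
# tags.yaml -- sketch of the new tag entry (description text is illustrative)
- tag: reduction
  desc: |
    This tag indicates that an operator performs a reduction, computing an
    aggregate value across one or more dimensions of its input tensor(s).

# native_functions.yaml -- sketch of applying the tag to one dim-reducing variant
- func: sum.dim_IntList(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
  tags: [reduction]
```

Once landed, the tag is queryable from Python on each operator overload (e.g. checking `torch.Tag.reduction` against an overload's `.tags`), the same way the existing `pointwise` tag is.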

cc @mlazos

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Oct 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165155

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ 1 Pending, 1 Unrelated Failure

As of commit 96c4da9 with merge base ab82456:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

eellison added a commit that referenced this pull request Oct 10, 2025
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s). This categorization can be useful for analysis, optimization,
and compilation tasks.

Note: Only dimensional reduction variants (e.g., min.dim, max.dim) are
tagged. Simple unary aggregations (min(Tensor), max(Tensor)) and binary/
elementwise operations (min.other, max.other) are excluded.

Based on PR #153342

ghstack-source-id: c0e25de
Pull Request resolved: #165155
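The tag lets a compiler pass or sharding rule select reduction ops without maintaining a hand-written allowlist. A minimal self-contained sketch of that filtering step, using a plain dict as a stand-in for the real ATen operator metadata so it runs without PyTorch (the op names mirror the PR; the data structure is illustrative):

```python
# Stand-in for operator metadata: overload name -> set of tags.
# Note the distinction from the commit message: dim-reducing variants
# carry the tag, while the elementwise binary variant does not.
OP_TAGS = {
    "aten.sum.dim_IntList": {"reduction"},
    "aten.max.dim": {"reduction"},
    "aten.min.other": set(),          # elementwise binary min: not tagged
    "aten.add.Tensor": {"pointwise"},
}

def reductions(ops):
    """Return the op names carrying the 'reduction' tag."""
    return [name for name in ops if "reduction" in OP_TAGS.get(name, set())]

print(reductions(OP_TAGS))  # only the two dim-reducing variants
```

In a real pass the dict lookup would be replaced by checking an overload's tag set directly; the point is that the selection logic stays a one-line tag test rather than a per-operator list.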
@eellison eellison requested review from bdhirsh and zou3519 October 10, 2025 16:28
@github-actions
Contributor

Attention! native_functions.yaml was changed

If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs, one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.


Caused by:

[ghstack-poisoned]
eellison added a commit that referenced this pull request Oct 10, 2025
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s). This categorization can be useful for analysis, optimization,
and compilation tasks.

Note: Only dimensional reduction variants (e.g., min.dim, max.dim) are
tagged. Simple unary aggregations (min(Tensor), max(Tensor)) and binary/
elementwise operations (min.other, max.other) are excluded.

Based on PR #153342

ghstack-source-id: cbb7930
Pull Request resolved: #165155
@eellison
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) Oct 20, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 job has failed: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build

Details for Dev Infra team Raised by workflow job

@eellison
Contributor Author

@pytorchbot merge -i

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 1 checks: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x d9a9fedf32f303fcbb20fb258287aa4eaeefb991 returned non-zero exit code 1

Auto-merging aten/src/ATen/native/native_functions.yaml
CONFLICT (content): Merge conflict in aten/src/ATen/native/native_functions.yaml
error: could not apply d9a9fedf32f... [ATen] Add reduction tag to reduction operators
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Details for Dev Infra team Raised by workflow job

[ghstack-poisoned]
eellison added a commit that referenced this pull request Oct 21, 2025
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s). This categorization can be useful for analysis, optimization,
and compilation tasks.

Note: Only dimensional reduction variants (e.g., min.dim, max.dim) are
tagged. Simple unary aggregations (min(Tensor), max(Tensor)) and binary/
elementwise operations (min.other, max.other) are excluded.

Based on PR #153342

ghstack-source-id: 2f58603
Pull Request resolved: #165155
@eellison
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 job has failed: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build

Details for Dev Infra team Raised by workflow job

@eellison
Contributor Author

@pytorchbot merge -i

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 1 checks: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s).

Based on PR pytorch#153342 - co-written with @AlonSardas.

Just as we have the pointwise tag, this can be useful for compiler passes or for opting into sharding rules.

Pull Request resolved: pytorch#165155
Approved by: https://github.com/ezyang, https://github.com/zou3519, https://github.com/mlazos
zhudada0120 pushed a commit to zhudada0120/pytorch that referenced this pull request Oct 22, 2025
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s).

Based on PR pytorch#153342 - co-written with @AlonSardas.

Just as we have the pointwise tag, this can be useful for compiler passes or for opting into sharding rules.

Pull Request resolved: pytorch#165155
Approved by: https://github.com/ezyang, https://github.com/zou3519, https://github.com/mlazos
@github-actions github-actions bot deleted the gh/eellison/848/head branch November 21, 2025 02:15

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, release notes: inductor


5 participants