[ATen] Add reduction tag to reduction operators #165155
eellison wants to merge 3 commits into gh/eellison/848/base
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165155
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
⏳ 1 Pending, 1 Unrelated Failure as of commit 96c4da9 with merge base ab82456. FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction operator variants across 21 operator families (sum, mean, min, max, argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std, std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations, computing aggregate values across one or more dimensions of the input tensor(s). This categorization can be useful for analysis, optimization, and compilation tasks.

Note: Only dimensional reduction variants (e.g., min.dim, max.dim) are tagged. Simple unary aggregations (min(Tensor), max(Tensor)) and binary/elementwise operations (min.other, max.other) are excluded.

Based on PR #153342

ghstack-source-id: c0e25de
Pull Request resolved: #165155
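As a rough sketch, the change takes this shape in the two YAML files. The exact description text and surrounding entries below are illustrative, not the literal diff from the PR:

```yaml
# tags.yaml (illustrative entry; real wording may differ)
- tag: reduction
  desc: |
    This tag indicates that an operator performs a reduction,
    computing aggregate values across one or more dimensions
    of its input tensor(s).

# native_functions.yaml (illustrative; only the dimensional
# variant is tagged, per the note above)
- func: sum.dim_IntList(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
  tags: [reduction, core]
```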
Attention! native_functions.yaml was changed.
If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.
Caused by:
(Commit message identical to the push above.)
ghstack-source-id: cbb7930
Pull Request resolved: #165155
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed
Reason: 1 job has failed: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build
Details for Dev Infra team: Raised by workflow job
@pytorchbot merge -i
Merge started
Your change will be merged while ignoring the following check: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build
Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed
Reason: Command
Details for Dev Infra team: Raised by workflow job
(Commit message identical to the push above.)
ghstack-source-id: 2f58603
Pull Request resolved: #165155
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed
Reason: 1 job has failed: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build
Details for Dev Infra team: Raised by workflow job
@pytorchbot merge -i
Merge started
Your change will be merged while ignoring the following check: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build
Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction operator variants across 21 operator families (sum, mean, min, max, argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std, std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).

This tag categorizes operators that perform reduction operations, computing aggregate values across one or more dimensions of the input tensor(s). Just as we have the pointwise tag, this can be useful for compiler passes, or for opting into sharding rules.

Based on PR pytorch#153342 - co-written with @AlonSardas.

Pull Request resolved: pytorch#165155
Approved by: https://github.com/ezyang, https://github.com/zou3519, https://github.com/mlazos
Stack from ghstack (oldest at bottom):
Add a new 'reduction' tag to tags.yaml and apply it to 98 reduction
operator variants across 21 operator families (sum, mean, min, max,
argmin, argmax, amin, amax, aminmax, prod, all, any, norm, var, std,
std_mean, var_mean, nansum, logsumexp, count_nonzero, linalg_vector_norm).
This tag categorizes operators that perform reduction operations,
computing aggregate values across one or more dimensions of input
tensor(s).
Based on PR #153342 - co-written with @AlonSardas.
Just as we have the pointwise tag, this can be useful for compiler passes, or for opting into sharding rules.
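To illustrate the kind of use mentioned above, here is a minimal, self-contained sketch of how a compiler pass might group operators by tag. The `OP_TAGS` registry below is hand-written stand-in data mirroring the tags applied in native_functions.yaml; it is hypothetical and is not queried from PyTorch itself:

```python
# Minimal sketch of tag-based op grouping, e.g. for a compiler pass.
# OP_TAGS is hypothetical stand-in data, not the real ATen schema.
OP_TAGS = {
    "aten::sum.dim_IntList": {"reduction", "core"},
    "aten::amax": {"reduction"},
    "aten::add.Tensor": {"pointwise", "core"},
    "aten::min.other": {"pointwise"},  # binary elementwise min: not a reduction
}

def ops_with_tag(tag: str) -> list[str]:
    """Return the names of all registered ops carrying the given tag."""
    return sorted(op for op, tags in OP_TAGS.items() if tag in tags)

print(ops_with_tag("reduction"))  # ['aten::amax', 'aten::sum.dim_IntList']
```

In a real pass one would read tags from the operator schema itself (for instance, via the `tags` attribute on an op overload in a sufficiently recent PyTorch) rather than from a hand-written table.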
cc @mlazos