Ignore reductions of zero-dim tensors per PyTorch conventions #1771

Merged
naoyam merged 2 commits into devel from zero_dim_reduction on Jun 24, 2022

Conversation

naoyam (Collaborator) commented on Jun 23, 2022:

Fixes #1768

We should also add a Python test, but I'm having a build problem right now.
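For reference, a minimal sketch of the eager-mode convention this PR matches, using plain PyTorch (the fuser-side test harness is assumed and not shown):

```
import torch

# In eager PyTorch, reducing a 0-dim tensor is accepted and is
# effectively a no-op: the result is another 0-dim tensor holding the
# same value, rather than an error.
x = torch.tensor(3.0)
assert x.dim() == 0
s = torch.sum(x)
assert s.dim() == 0 and s.item() == 3.0
assert torch.max(x).item() == 3.0  # max/min follow the same convention
```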

naoyam force-pushed the zero_dim_reduction branch from 52e8806 to 6cffe01 on June 24, 2022 04:05
naoyam changed the title from "[WIP] Ignore reducitons of zero-dim tensors per PyTorch conventions" to "Ignore reductions of zero-dim tensors per PyTorch conventions" on Jun 24, 2022
naoyam requested a review from jjsjann123 on June 24, 2022 04:06

```
// PyTorch allows reduction of 0-dim tensors
if (tv->domain()->noReductions().size() == 0) {
  return reductionOpZeroDimTensor(tv);
}
```
jjsjann123 (Collaborator) commented on the diff:

Nitpick: do we need to explicitly construct `out` here? Wondering if this gives us anything beyond a simple `return set(inp);`.

jjsjann123 (Collaborator) left a review:

LGTM, minor comment/question.

naoyam merged commit 452c773 into devel on Jun 24, 2022
naoyam deleted the zero_dim_reduction branch on June 24, 2022 16:07
shmsong pushed a commit to shmsong/pytorch that referenced this pull request Jul 24, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- TransformPropagator refactor: switched to Dijkstra instead of exhaustive enumeration of all possible paths to reduce compilation time of transform propagation (see the generic sketch after this commit message);
- Indexing refactor: remove reference tensor creation in all tensor indexing logic (csarofeen#1690)
- (more) generic grouped grid reduction kernel;
- Minor parser/fuser patches:
  1. zero-dim tensor reduction support
  2. no-op binary removal within fused graph
  3. expand supported in fusion

Squashed commits to WAR the GitHub API.
Commits that are actually in this PR from the devel branch:

```
a054b3e Refactor TransormPropagator to allow specifying a position and propagating to part of the DAG (csarofeen#1775)
d67e1cd Indexing refactor stage 1: remove reference tensor creation in all tensor indexing logic (csarofeen#1690)
1b65299 Issue 1770 (csarofeen#1774)
35b0427 Avoid compilation errors like below: (csarofeen#1773)
452c773 Ignore reductions of zero-dim tensors per PyTorch conventions (csarofeen#1771)
31d6c56 TransformPropagator refactor (csarofeen#1769)
570c5a8 Merge pull request csarofeen#1767 from csarofeen/upstream_merge_0621
9d6c3d8 merging upstream 61305cd
0ed815f New TransformPropagator algorithm (csarofeen#1763)
6c19520 no-op binary removal (csarofeen#1764)
ec7fa41 Proper propagation of IterType (csarofeen#1762)
b263562 Fix dimensionality check (csarofeen#1759)
2d6343f More generic grouped grid reduction kernel (csarofeen#1740)
64e2b56 [nvfuser] prevent spamming warning message (pytorch#77777) (csarofeen#1758)
0c43162 [nvFuser] Improving bitwise ops support (pytorch#77158) (csarofeen#1757)
b93a147 Parser expand (csarofeen#1754)
```

RUN_TORCHBENCH: nvfuser
Pull Request resolved: pytorch#80355
Approved by: https://github.com/davidberard98
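The TransformPropagator item above swaps exhaustive enumeration of all paths for a Dijkstra-style search. The following is a generic, hypothetical sketch of that idea; the graph shape, costs, and names are assumptions, not the actual nvfuser implementation:

```
import heapq

def propagation_order(graph, start):
    """Visit each node once via the cheapest known path, instead of
    enumerating every path through the DAG.
    graph: {node: [(neighbor, edge_cost), ...]} -- hypothetical shape."""
    best = {start: 0}
    heap = [(0, start)]
    order = []
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best[node]:
            continue  # stale heap entry; a cheaper path was already found
        order.append(node)
        for nxt, weight in graph.get(node, []):
            if cost + weight < best.get(nxt, float("inf")):
                best[nxt] = cost + weight
                heapq.heappush(heap, (cost + weight, nxt))
    return order
```

Exhaustive enumeration grows with the number of paths, which can be exponential in a DAG, while a Dijkstra-style search touches each edge only a bounded number of times; that difference is the compilation-time win the commit message describes.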
jjsjann123 added a commit to NVIDIA/Fuser that referenced this pull request Jul 25, 2025
Reducing 0-dimensional inputs is allowed since
csarofeen/pytorch#1771.
In this PR I added a fallback path for the `var_mean` op, so that it
doesn't hit this check
https://github.com/NVIDIA/Fuser/blob/4483f51a396a8c594caae8078e3cb82c7d3caa44/csrc/ops/arith.cpp#L1724
The change in the `numFeatures` function is required to avoid segfaults.

---------

Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
Co-authored-by: jjsjann123 <jiej@nvidia.com>
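For context, a minimal sketch of the eager-mode behavior a 0-dim `var_mean` fallback would presumably need to match (plain PyTorch shown; the fuser path itself is not):

```
import torch

# Eager PyTorch accepts var_mean on a 0-dim tensor: the mean is the
# value itself, while the unbiased variance of a single element is NaN
# (zero degrees of freedom; PyTorch warns but does not error).
x = torch.tensor(3.0)
var, mean = torch.var_mean(x)
assert mean.item() == 3.0
assert torch.isnan(var)
```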
nsarka pushed a commit to nsarka/Fuser that referenced this pull request Jul 28, 2025

Development

Successfully merging this pull request may close these issues.

Support 0d tensors in reductionOp (#1768)
