[nvfuser] prevent spamming warning message#77777

Closed
jjsjann123 wants to merge 2 commits into pytorch:master from jjsjann123:nvfuser_warning_spam_patch

Conversation

@jjsjann123
Collaborator

updating TORCH_WARN to TORCH_WARN_ONCE to prevent spamming the log

@jjsjann123 jjsjann123 requested a review from davidberard98 May 18, 2022 19:30
@jjsjann123 jjsjann123 marked this pull request as ready for review May 18, 2022 19:30
@facebook-github-bot
Contributor

facebook-github-bot commented May 18, 2022

🔗 Helpful links

✅ No Failures (0 Pending)

As of commit be9cfff (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) May 18, 2022
@jjsjann123
Collaborator Author

@pytorchbot merge this

@github-actions
Contributor

Hey @jjsjann123.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request May 20, 2022
Summary:
updating TORCH_WARN to TORCH_WARN_ONCE to prevent spamming the log

Pull Request resolved: #77777
Approved by: https://github.com/davidberard98

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/17fbb857346e99bbba660f87d4caebafc6d145de

Reviewed By: seemethere

Differential Revision: D36537716

Pulled By: seemethere

fbshipit-source-id: 2028860df4b28a701cdde6340af5e82d1c91c7ac
jjsjann123 added a commit to csarofeen/pytorch that referenced this pull request Jun 8, 2022
updating TORCH_WARN to TORCH_WARN_ONCE to prevent spamming the log
Pull Request resolved: pytorch#77777
Approved by: https://github.com/davidberard98
pytorchmergebot pushed a commit that referenced this pull request Jul 13, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- TransformPropagator refactor: switched to Dijkstra instead of exhaustive enumeration on all possible paths to reduce compilation time on transform propagation;
- Indexing refactor: remove reference tensor creation in all tensor indexing logic (#1690)
- (more) generic grouped grid reduction kernel;
- Minor parser/fuser patches:
  1. zero-dim tensor reduction support
  2. no-op binary removal within fused graph
  3. expand supported in fusion

Commits were squashed to work around the GitHub API.
Commits that are actually in this PR from the devel branch:

```
a054b3e Refactor TransormPropagator to allow specifying a position and propagating to part of the DAG (#1775)
d67e1cd Indexing refactor stage 1: remove reference tensor creation in all tensor indexing logic (#1690)
1b65299 Issue 1770 (#1774)
35b0427 Avoid compilation errors like below: (#1773)
452c773 Ignore reductions of zero-dim tensors per PyTorch conventions (#1771)
31d6c56 TransformPropagator refactor (#1769)
570c5a8 Merge pull request #1767 from csarofeen/upstream_merge_0621
9d6c3d8 merging upstream 61305cd
0ed815f New TransformPropagator algorithm (#1763)
6c19520 no-op binary removal (#1764)
ec7fa41 Proper propagation of IterType (#1762)
b263562 Fix dimensionality check (#1759)
2d6343f More generic grouped grid reduction kernel (#1740)
64e2b56 [nvfuser] prevent spamming warning message (#77777) (#1758)
0c43162 [nvFuser] Improving bitwise ops support (#77158) (#1757)
b93a147 Parser expand (#1754)
```

RUN_TORCHBENCH: nvfuser
Pull Request resolved: #80355
Approved by: https://github.com/davidberard98
facebook-github-bot pushed a commit that referenced this pull request Jul 13, 2022
Summary:
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- TransformPropagator refactor: switched to Dijkstra instead of exhaustive enumeration on all possible paths to reduce compilation time on transform propagation;
- Indexing refactor: remove reference tensor creation in all tensor indexing logic (#1690)
- (more) generic grouped grid reduction kernel;
- Minor parser/fuser patches:
  1. zero-dim tensor reduction support
  2. no-op binary removal within fused graph
  3. expand supported in fusion

Commits were squashed to work around the GitHub API.
Commits that are actually in this PR from the devel branch:

```
a054b3e Refactor TransormPropagator to allow specifying a position and propagating to part of the DAG (#1775)
d67e1cd Indexing refactor stage 1: remove reference tensor creation in all tensor indexing logic (#1690)
1b65299 Issue 1770 (#1774)
35b0427 Avoid compilation errors like below: (#1773)
452c773 Ignore reductions of zero-dim tensors per PyTorch conventions (#1771)
31d6c56 TransformPropagator refactor (#1769)
570c5a8 Merge pull request #1767 from csarofeen/upstream_merge_0621
9d6c3d8 merging upstream 61305cd
0ed815f New TransformPropagator algorithm (#1763)
6c19520 no-op binary removal (#1764)
ec7fa41 Proper propagation of IterType (#1762)
b263562 Fix dimensionality check (#1759)
2d6343f More generic grouped grid reduction kernel (#1740)
64e2b56 [nvfuser] prevent spamming warning message (#77777) (#1758)
0c43162 [nvFuser] Improving bitwise ops support (#77158) (#1757)
b93a147 Parser expand (#1754)
```

RUN_TORCHBENCH: nvfuser

Pull Request resolved: #80355

Reviewed By: qihqi

Differential Revision: D37573400

Pulled By: davidberard98

fbshipit-source-id: 52ab68d89ec01ef61f69f5abeb18c9d3a312aa64
jjsjann123 added a commit to jjsjann123/nvfuser that referenced this pull request Oct 29, 2022
updating TORCH_WARN to TORCH_WARN_ONCE to prevent spamming the log
Pull Request resolved: pytorch/pytorch#77777
Approved by: https://github.com/davidberard98
jjsjann123 added a commit to jjsjann123/nvfuser that referenced this pull request Nov 10, 2022
updating TORCH_WARN to TORCH_WARN_ONCE to prevent spamming the log
Pull Request resolved: pytorch/pytorch#77777
Approved by: https://github.com/davidberard98

Labels

cla signed · Merged · oncall: jit (Add this issue/PR to JIT oncall triage queue) · open source


5 participants