[nvFuser] Improving bitwise ops support#77158

Closed
zasdfgbnm wants to merge 6 commits into master from nvfuser-frontend-bitwise

Conversation


@zasdfgbnm zasdfgbnm commented May 10, 2022

  • Some renaming to better match PyTorch API:
    • lshift -> bitwise_left_shift
    • rshift -> bitwise_right_shift
    • andOp -> bitwise_and
    • orOp -> bitwise_or
    • xorOp -> bitwise_xor
    • notOp -> bitwise_not
  • Fix type inference and type checking for these ops
  • Add bitwise_* to the parser and Python frontend
  • Improve test coverage
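The renamed ops mirror PyTorch's eager-mode `torch.bitwise_*` API. As an illustrative sketch (not part of this PR), the same elementwise semantics can be shown with plain Python integers, where each renamed op corresponds to a native bitwise operator:

```python
# Old nvFuser name -> new name, demonstrated with Python's integer
# bitwise operators (torch.bitwise_* applies the same semantics
# elementwise to integer/bool tensors).
a, b = 0b1100, 0b1010  # 12, 10

renamed = {
    "bitwise_left_shift":  a << 2,  # was lshift  -> 48
    "bitwise_right_shift": a >> 2,  # was rshift  -> 3
    "bitwise_and":         a & b,   # was andOp   -> 8
    "bitwise_or":          a | b,   # was orOp    -> 14
    "bitwise_xor":         a ^ b,   # was xorOp   -> 6
    "bitwise_not":         ~a,      # was notOp   -> -13 (two's complement)
}
print(renamed)
```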


facebook-github-bot commented May 10, 2022


❌ 2 New Failures

As of commit c4b065b (more details on the Dr. CI page):

  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/2)

Step: "Test"

2022-05-17T01:42:05.0673606Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-05-17T01:42:05.0675168Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-17T01:42:05.0676204Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-17T01:42:05.0677450Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-05-17T01:42:05.0679557Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-05-17T01:42:05.0680298Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-17T01:42:05.0682679Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-17T01:42:05.0683394Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-17T01:42:05.0684964Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-05-17T01:42:05.0686246Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-05-17T01:42:05.0687053Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-05-17T01:42:05.0687359Z 
2022-05-17T01:42:05.0687418Z Broken ops: [
2022-05-17T01:42:05.0687666Z 	aten::lift(Tensor self) -> (Tensor)
2022-05-17T01:42:05.0687947Z 	aten::ccol_indices(Tensor(a) self) -> (Tensor(a))
2022-05-17T01:42:05.0688239Z 	aten::ccol_indices_copy(Tensor self) -> (Tensor)
2022-05-17T01:42:05.0688637Z 	aten::index_reduce(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor)
2022-05-17T01:42:05.0689150Z 	aten::index_reduce.out(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True, Tensor(a!) out) -> (Tensor(a!))
2022-05-17T01:42:05.0689658Z 	aten::index_reduce_(Tensor(a!) self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> (Tensor(a!))
2022-05-17T01:42:05.0690050Z 	aten::glu_jvp(Tensor glu, Tensor x, Tensor dx, int dim) -> (Tensor)
2022-05-17T01:42:05.0690437Z 	aten::_sparse_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> (Tensor)

See GitHub Actions build pull / pytorch-xla-linux-bionic-py3.7-clang8 / test (xla, 1, 1, linux.2xlarge) (2/2)

Step: "Test"

2022-05-17T01:48:16.4415776Z + python setup.py install
2022-05-17T01:48:17.3548963Z No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2022-05-17T01:48:17.3653713Z Building torch_xla version: 1.12
2022-05-17T01:48:17.3654192Z XLA Commit ID: 90a525b5333b1a67ae22fc2df5fe01ceceb30a5e
2022-05-17T01:48:17.3654630Z PyTorch Commit ID: c4b065b3d0608708dce580abcb616a97a8049567
2022-05-17T01:48:17.3715019Z /var/lib/jenkins/workspace /var/lib/jenkins/workspace/xla
2022-05-17T01:48:18.4048165Z /var/lib/jenkins/workspace/xla
2022-05-17T01:48:18.4962304Z Traceback (most recent call last):
2022-05-17T01:48:18.4962876Z   File "/var/lib/jenkins/workspace/xla/scripts/gen_lazy_tensor.py", line 84, in <module>
2022-05-17T01:48:18.4963314Z     get_device_fn="torch_xla::bridge::GetXlaDevice")
2022-05-17T01:48:18.4963839Z TypeError: run_gen_lazy_tensor() got an unexpected keyword argument 'get_device_fn'
2022-05-17T01:48:18.5037181Z Failed to generate lazy files: ['python', '/var/lib/jenkins/workspace/xla/scripts/gen_lazy_tensor.py']
2022-05-17T01:48:18.6592827Z + cleanup
2022-05-17T01:48:18.6593097Z + retcode=1
2022-05-17T01:48:18.6593323Z + set +x
2022-05-17T01:48:18.6625182Z ##[error]Process completed with exit code 1.
2022-05-17T01:48:18.6732506Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-05-17T01:48:18.6732743Z with:
2022-05-17T01:48:18.6733344Z   github-token: ***
2022-05-17T01:48:18.6733523Z env:
2022-05-17T01:48:18.6733865Z   IN_CI: 1

This comment was automatically generated by Dr. CI.

@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label May 10, 2022
@zasdfgbnm zasdfgbnm changed the title Improving bitwise ops support [nvFuser] Improving bitwise ops support May 10, 2022
@zasdfgbnm

Looks like GitHub does not allow me to request review from @csarofeen either

@samdow samdow added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 10, 2022
@zasdfgbnm

@pytorchbot retest this please


@kevinstephano kevinstephano left a comment


Looks good!

Copy link
Collaborator

@jjsjann123 jjsjann123 left a comment


Stamping, but feel free to update the tests with the cache configuration if you want.

@zasdfgbnm

@ngimel I have updated this PR to match the updated eager-mode behavior in #77621

@zasdfgbnm zasdfgbnm requested a review from ngimel May 17, 2022 01:50
@zasdfgbnm

@pytorchbot merge this please

@github-actions

Hey @zasdfgbnm.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@zasdfgbnm zasdfgbnm deleted the nvfuser-frontend-bitwise branch May 18, 2022 17:28
facebook-github-bot pushed a commit that referenced this pull request May 20, 2022
Summary:
- Some renaming to better match PyTorch API:
  - `lshift` -> `bitwise_left_shift`
  - `rshift` -> `bitwise_right_shift`
  - `andOp` -> `bitwise_and`
  - `orOp` -> `bitwise_or`
  - `xorOp` -> `bitwise_xor`
  - `notOp` -> `bitwise_not`
- Fix type inferences and type checking of these ops
- Add `bitwise_*` to parser and python frontend
- Improve test coverage

Pull Request resolved: #77158
Approved by: https://github.com/kevinstephano, https://github.com/jjsjann123

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/4eec865f5802f2c34eb34f9f89165f0aa4b5502f

Reviewed By: seemethere

Differential Revision: D36494215

Pulled By: seemethere

fbshipit-source-id: 4985890b93046c53f597ccd3fce2bebcdf07b19c
jjsjann123 pushed a commit to csarofeen/pytorch that referenced this pull request Jun 8, 2022
jjsjann123 added a commit to csarofeen/pytorch that referenced this pull request Jun 8, 2022

Co-authored-by: Xiang Gao <qasdfgtyuiop@gmail.com>
pytorchmergebot pushed a commit that referenced this pull request Jul 13, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes includes:

- TransformPropagator refactor: switched to Dijkstra instead of exhaustive enumeration on all possible paths to reduce compilation time on transform propagation;
- Indexing refactor: remove reference tensor creation in all tensor indexing logic (#1690)
- (more) generic grouped grid reduction kernel;
- Minor parser/fuser patches:
  1. zero-dim tensor reduction support
  2. no-op binary removal within fused graph
  3. expand supported in fusion

Squashed commits to work around the GitHub API.
Commits actually in this PR from the devel branch:

```
a054b3e Refactor TransormPropagator to allow specifying a position and propagating to part of the DAG (#1775)
d67e1cd Indexing refactor stage 1: remove reference tensor creation in all tensor indexing logic (#1690)
1b65299 Issue 1770 (#1774)
35b0427 Avoid compilation errors like below: (#1773)
452c773 Ignore reductions of zero-dim tensors per PyTorch conventions (#1771)
31d6c56 TransformPropagator refactor (#1769)
570c5a8 Merge pull request #1767 from csarofeen/upstream_merge_0621
9d6c3d8 merging upstream 61305cd
0ed815f New TransformPropagator algorithm (#1763)
6c19520 no-op binary removal (#1764)
ec7fa41 Proper propagation of IterType (#1762)
b263562 Fix dimensionality check (#1759)
2d6343f More generic grouped grid reduction kernel (#1740)
64e2b56 [nvfuser] prevent spamming warning message (#77777) (#1758)
0c43162 [nvFuser] Improving bitwise ops support (#77158) (#1757)
b93a147 Parser expand (#1754)
```

RUN_TORCHBENCH: nvfuser
Pull Request resolved: #80355
Approved by: https://github.com/davidberard98
facebook-github-bot pushed a commit that referenced this pull request Jul 13, 2022

Pull Request resolved: #80355

Reviewed By: qihqi

Differential Revision: D37573400

Pulled By: davidberard98

fbshipit-source-id: 52ab68d89ec01ef61f69f5abeb18c9d3a312aa64
jjsjann123 pushed a commit to jjsjann123/nvfuser that referenced this pull request Oct 29, 2022
jjsjann123 pushed a commit to jjsjann123/nvfuser that referenced this pull request Nov 10, 2022
jjsjann123 added a commit to jjsjann123/nvfuser that referenced this pull request Nov 10, 2022

Co-authored-by: Xiang Gao <qasdfgtyuiop@gmail.com>