
Expand the test of torch.addbmm and torch.baddbmm #47079

Closed

zasdfgbnm wants to merge 19 commits into master from addbmm-tests

Conversation

@zasdfgbnm
Collaborator

@zasdfgbnm zasdfgbnm commented Oct 29, 2020

This is to satisfy the request at #42553 (comment). See also #47124

@albanD albanD requested a review from ngimel October 29, 2020 21:38
@albanD albanD added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Oct 29, 2020
@codecov

codecov Bot commented Oct 29, 2020

Codecov Report

Merging #47079 into master will increase coverage by 24.79%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##           master   #47079       +/-   ##
===========================================
+ Coverage   36.03%   60.82%   +24.79%     
===========================================
  Files         437     2748     +2311     
  Lines       55239   254096   +198857     
===========================================
+ Hits        19906   154565   +134659     
- Misses      35333    99531    +64198     

@facebook-github-bot
Contributor

Hi @zasdfgbnm!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

Comment thread test/test_torch.py
@onlyOnCPUAndCUDA
@dtypesIfCUDA(*(torch.testing.get_all_fp_dtypes(include_bfloat16=AMPERE_OR_ROCM)))
@dtypes(*(torch.testing.get_all_complex_dtypes() + [torch.float, torch.double]))
@tf32_on_and_off(0.05)
Collaborator

floating_and_complex_types() on the previous line

Collaborator Author

fixed

Collaborator

Is this commit missing?

Collaborator Author

It's not missing. After offline discussion, we decided that instead of testing on floating_and_complex_types(), we test on all floating-point and complex dtypes and assertRaises for the unsupported ones, so this comment no longer applies.
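
The resulting pattern looks roughly like this (a minimal sketch; the shapes and the (dtype, supported) pairs are illustrative, while the real test derives support from device type, dtype, and CUDA/ROCm version):

    import unittest
    import torch

    class TestAddbmmDtypes(unittest.TestCase):
        def test_addbmm_dtypes(self):
            b1 = torch.ones(2, 3, 4)
            b2 = torch.ones(2, 4, 5)
            t = torch.zeros(3, 5)
            # Illustrative support matrix only; e.g. half on CPU may raise.
            for dtype, is_supported in [(torch.float32, True), (torch.float64, True)]:
                args = (t.to(dtype), b1.to(dtype), b2.to(dtype))
                if is_supported:
                    res = torch.addbmm(*args)
                    self.assertEqual(res.shape, (3, 5))
                else:
                    with self.assertRaises(RuntimeError):
                        torch.addbmm(*args)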

Comment thread test/test_torch.py Outdated

for perm1, perm2 in product(permutations((0, 1, 2)), repeat=2):
    for perm3 in permutations((0, 1)):
        b1 = torch.randn(num_batches, M, N, dtype=dtype, device=device)
Collaborator

same as for baddbmm, please compare with numpy instead of this convoluted test logic. Also, if it makes sense to unify this test with baddbmm (at first glance it looks similar), please do so.

Collaborator Author

@zasdfgbnm zasdfgbnm Oct 30, 2020

I think it's better to test them separately. The reason is that the output of addbmm is 2D but the output of baddbmm is 3D, so it is

        for perm1, perm2 in product(permutations((0, 1, 2)), repeat=2):
            for perm3 in permutations((0, 1)):

vs

for perm1, perm2, perm3 in product(permutations((0, 1, 2)), repeat=3):

I don't know how these two can be combined easily.

Edit: maybe possible, I think I found a solution
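
One way the two loops could be folded together (a guess at the solution mentioned above, not necessarily what landed in the PR): parametrize the rank of the output, since b1 and b2 are always 3D and only the output rank differs between the two ops.

    from itertools import permutations, product

    def layout_perms(out_rank):
        # perm1/perm2 permute the 3D b1 and b2; perm3 permutes the output,
        # whose rank is 2 for addbmm and 3 for baddbmm.
        for perm1, perm2 in product(permutations(range(3)), repeat=2):
            for perm3 in permutations(range(out_rank)):
                yield perm1, perm2, perm3

    print(len(list(layout_perms(2))), len(list(layout_perms(3))))  # 72 216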

Collaborator Author

Also, I don't think numpy has anything like addbmm; the difference is only torch.bmm(b1, b2).sum(0, False) vs numpy.matmul(b1, b2).sum(0, False). In that case, does it still make sense to change to numpy?
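
For reference, the numpy-based check for addbmm would only differ by that sum over the batch dimension (a sketch with illustrative shapes; note that numpy's keepdims flag is keyword-only, so .sum(0, False) does not translate literally, and torch.testing.assert_close postdates this PR):

    import numpy as np
    import torch

    torch.manual_seed(0)
    b1 = torch.randn(2, 3, 4)
    b2 = torch.randn(2, 4, 5)
    t = torch.randn(3, 5)
    alpha, beta = 0.6, 0.3

    res = torch.addbmm(t, b1, b2, beta=beta, alpha=alpha)
    # Reference: batched matmul, summed over the batch dimension.
    ref = beta * t.numpy() + alpha * np.matmul(b1.numpy(), b2.numpy()).sum(axis=0)
    torch.testing.assert_close(res, torch.from_numpy(ref), rtol=1e-4, atol=1e-5)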

@dr-ci

dr-ci Bot commented Oct 30, 2020

💊 CI failures summary and remediations

As of commit 781f4af (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_test1 (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

[ FAILED ] TestQTensor.QuantizePerChannel4d
    Which is: 93 
[  FAILED  ] TestQTensor.QuantizePerChannel4d (0 ms) 
[ RUN      ] TestQTensor.QuantizePerChannel4dChannelsLast 
[       OK ] TestQTensor.QuantizePerChannel4dChannelsLast (14 ms) 
[----------] 7 tests from TestQTensor (21 ms total) 
 
[----------] Global test environment tear-down 
[==========] 7 tests from 1 test case ran. (21 ms total) 
[  PASSED  ] 6 tests. 
[  FAILED  ] 1 test, listed below: 
[  FAILED  ] TestQTensor.QuantizePerChannel4d 
 
 1 FAILED TEST 
"quantized_test" failed with exit code 1 
+ cleanup
+ retcode=1
+ set +x

🚧 1 ongoing upstream failure:

These were probably caused by upstream breakages that are not fixed yet:


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 17 times.

@zasdfgbnm
Collaborator Author

Broadcasting test added
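
Such a check might look like this (a sketch with illustrative shapes, not the exact test that was added): the self argument broadcasts against the 2D result of the summed batched matmul.

    import torch

    b1 = torch.randn(2, 3, 4)
    b2 = torch.randn(2, 4, 5)

    # 1D, row-, column-, and full-shaped self tensors all broadcast to (3, 5):
    for t in (torch.randn(5), torch.randn(1, 5), torch.randn(3, 1), torch.randn(3, 5)):
        res = torch.addbmm(t, b1, b2)
        ref = t + torch.bmm(b1, b2).sum(0)
        torch.testing.assert_close(res, ref, rtol=1e-5, atol=1e-5)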

@zasdfgbnm zasdfgbnm changed the title from "Expand the test of torch.addbmm" to "Expand the test of torch.addbmm and torch.baddbmm" on Oct 30, 2020
@zasdfgbnm
Collaborator Author

zasdfgbnm commented Oct 30, 2020

@ngimel I have resolved most of your comments:

  1. Added broadcasting tests
  2. Unified the addbmm and baddbmm tests
  3. Moved the baddbmm test to compare against numpy

except that

  1. I didn't move the addbmm test to numpy, because we have to do the .sum anyway

@zasdfgbnm
Collaborator Author

@ngimel My CLA is resolved

Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
@zasdfgbnm
Collaborator Author

this should be ready

Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated

res2.addbmm_(b1, b2)
self.assertEqual(res2, res.sum(0, False))
getattr(res2, func + "_")(b1, b2)
Collaborator

Consider explicitly passing the variants you want (i.e. passing torch.foo and torch.Tensor.foo and torch.Tensor.foo_)

Collaborator Author

I cannot, because I need a string with the name of the operator to assert on warning messages.
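
In other words, something like the following (a sketch with illustrative shapes; the same string selects each variant and can be interpolated into a warning regex):

    import torch

    func = "addbmm"  # the string is reused in warning-message regexes
    b1 = torch.randn(2, 3, 4)
    b2 = torch.randn(2, 4, 5)
    res = torch.zeros(3, 5)

    fn = getattr(torch, func)            # torch.addbmm
    method = getattr(res, func)          # res.addbmm
    inplace = getattr(res, func + "_")   # res.addbmm_

    out1 = fn(res, b1, b2)
    out2 = method(b1, b2)
    inplace(b1, b2)  # mutates res in place
    # e.g. self.assertWarnsRegex(UserWarning, f"{func}.*deprecated", ...)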

Comment thread test/test_torch.py
self.assertEqual(res2, ref)
res3.copy_(res2)

with self.maybeWarnsRegex(
Collaborator

Reviewer's note (not for this PR): we need a way of always triggering these warnings. "maybeWarnsRegex" doesn't do anything, currently.

Comment thread test/test_torch.py
if self.device_type == 'cpu':
    is_supported = True
    if dtype == torch.bfloat16:
        self.precision = 1  # 43 vs 43.75
Collaborator

Reviewer's note: It'd be nicer if we supported per-dtype precision overrides (also atol and rtol overrides)
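
The test framework's precisionOverride decorator (an internal API in torch.testing._internal.common_device_type; exact availability and semantics may vary by version) is one shape such an override could take:

    import torch
    from torch.testing._internal.common_device_type import precisionOverride

    # Sketch: declare the looser bfloat16/half tolerances up front instead
    # of mutating self.precision inside the test body.
    @precisionOverride({torch.bfloat16: 1.0, torch.half: 1e-1})
    def test_addbmm(self, device, dtype):
        ...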

Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
Contributor

@facebook-github-bot facebook-github-bot left a comment

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ngimel merged this pull request in f19637e.

@zasdfgbnm zasdfgbnm deleted the addbmm-tests branch November 5, 2020 05:16
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
This is to satisfy the request at pytorch#42553 (comment). See also pytorch#47124

Pull Request resolved: pytorch#47079

Reviewed By: ejguan

Differential Revision: D24735356

Pulled By: ngimel

fbshipit-source-id: 122fceb4902658f350c2fd6f92455adadd0ec2a4

Labels

cla signed · Merged · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants