
Move std and var tests to OpInfos #50901

Closed
peterbell10 wants to merge 9 commits into gh/peterbell10/45/base from gh/peterbell10/45/head

Conversation

peterbell10 (Collaborator) commented Jan 21, 2021

Stack from ghstack:

Differential Revision: D26083289

facebook-github-bot (Contributor) commented Jan 21, 2021

💊 CI failures summary and remediations

As of commit 3f36143 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@peterbell10 peterbell10 requested a review from mruberry January 21, 2021 21:03
Comment thread: test/test_ops.py
test_backward = (
    (dtype.is_complex and op.test_complex_grad) or
    (dtype.is_floating_point and
     (not op.skip_bfloat16_grad or dtype != torch.bfloat16)))
samples = op.sample_inputs(device, dtype, requires_grad=test_backward)
peterbell10 (Collaborator, Author) commented Jan 21, 2021

std and var aren't whitelisted for complex autograd, so with test_complex_grad=False the test needs to never call backward() at all. Why wasn't this here already? It looks like sigmoid is the only other user of test_complex_grad, and it explicitly skips these two JIT tests:

# RuntimeError: sigmoid does not support automatic differentiation for outputs with complex dtype.
SkipInfo('TestCommon', 'test_variant_consistency_jit',
         dtypes=[torch.complex64, torch.complex128]),
SkipInfo('TestCommon', 'test_variant_consistency_eager',
         dtypes=[torch.complex64, torch.complex128]),),

Reply:

These changes to the JIT testing look good to me, it was a bad oversight. Thanks for catching it

codecov (Bot) commented Jan 22, 2021

Codecov Report

Merging #50901 (ac0a3cc) into gh/peterbell10/45/base (186c3da) will increase coverage by 0.00%.
The diff coverage is 97.32%.

@@                   Coverage Diff                   @@
##           gh/peterbell10/45/base   #50901   +/-   ##
=======================================================
  Coverage                   80.91%   80.92%           
=======================================================
  Files                        1924     1926    +2     
  Lines                      210008   210019   +11     
=======================================================
+ Hits                       169923   169950   +27     
+ Misses                      40085    40069   -16     

Comment thread: test/test_ops.py
samples = op.sample_inputs(device, dtype, requires_grad=True)
test_backward = (
    (dtype.is_complex and op.test_complex_grad) or
    (dtype.is_floating_point and
     (not op.skip_bfloat16_grad or dtype != torch.bfloat16)))
Collaborator:

cc @Lilyjjo

supports_tensor_out=False,
test_complex_grad=False,
test_inplace_grad=False,
skips=(SkipInfo('TestGradients', 'test_fn_gradgrad'),
Collaborator:

Would you elaborate on these two skips?

peterbell10 (Collaborator, Author):

I removed the first skip; after reviewing it, I realised the tests pass as-is (probably fixed by the test_complex_grad changes here). As for the second skip, the CPU versions of std and var have only partial support for half and complex dtypes because they sometimes call into the old TH implementation for all-reductions:

// NOTE: CPU performance significantly regressed when attempting to port to ATen,
// so this dispatches differently based on device type.
// See https://github.com/pytorch/pytorch/pull/43858.
if (self.device().type() == kCPU) {
  return at::_var(self, unbiased);
}

So for the OpInfo, Half, ComplexFloat and ComplexDouble are declared as unsupported on CPU, even though not all sample inputs will actually raise.
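
To illustrate, an OpInfo entry could express that split roughly as follows. This is a sketch only: the dtype lists and the dtypesIfCPU value are assumptions based on this thread, not the merged diff (the supports_tensor_out/test_complex_grad/test_inplace_grad kwargs are the ones quoted above).

import torch
from torch.testing import floating_types, floating_and_complex_types_and
from torch.testing._internal.common_methods_invocations import OpInfo

# Sketch only -- dtype lists are assumptions, not the merged diff.
OpInfo('var',
       dtypes=floating_and_complex_types_and(torch.half),
       # Half/ComplexFloat/ComplexDouble are declared unsupported on CPU
       # because all-reductions still dispatch to the legacy TH kernel
       # (see pytorch#43858), even though some sample inputs would pass.
       dtypesIfCPU=floating_types(),
       sample_inputs_func=sample_inputs_std_var,
       supports_tensor_out=False,
       test_complex_grad=False,
       test_inplace_grad=False),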

Collaborator:

That makes sense. Would you file an issue and add a comment citing it?

peterbell10 (Collaborator, Author):

Opened #51127
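
The citation in the OpInfo could then read roughly like this (a sketch; the test class and name here are placeholders, not necessarily the exact skip in the final diff):

# See https://github.com/pytorch/pytorch/issues/51127: std and var only
# partially support half and complex on CPU because all-reductions still
# dispatch to the legacy TH kernels.
skips=(SkipInfo('TestOpInfo', 'test_unsupported_dtypes',
                dtypes=[torch.float16, torch.complex64, torch.complex128]),),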



def sample_inputs_std_var(op_info, device, dtype, requires_grad):
    tensor_nd = torch.rand(S, S, S, device=device, dtype=dtype, requires_grad=requires_grad)
Collaborator:

Why rand and not make_tensor?

peterbell10 (Collaborator, Author):

Changed to make_tensor now.
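
That is, roughly the following (a sketch assuming the import locations of that era; the real diff covers more dim/unbiased/keepdim combinations than shown here):

from torch.testing._internal.common_utils import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput, S

def sample_inputs_std_var(op_info, device, dtype, requires_grad):
    # make_tensor instead of torch.rand, so values come from a controlled
    # range and non-float dtypes are handled uniformly.
    tensor_nd = make_tensor((S, S, S), device=device, dtype=dtype,
                            requires_grad=requires_grad)
    tensor_1d = make_tensor((S,), device=device, dtype=dtype,
                            requires_grad=requires_grad)
    return [
        SampleInput(tensor_nd),                  # full reduction
        SampleInput(tensor_nd, args=(1,)),       # reduce over dim 1
        SampleInput(tensor_1d, args=(0, True)),  # dim=0, unbiased=True
    ]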

mruberry (Collaborator) left a comment

Hey @peterbell10!

The cleanup in test_ops.py looks great. I have a suggestion and a couple questions about the OpInfos, though. Should be simple to resolve.

peterbell10 (Collaborator, Author):

@mruberry PTAL.

@mruberry mruberry self-requested a review January 26, 2021 18:14
mruberry (Collaborator) left a comment

Cool!

Would you just add a comment with an issue for the skips and also remove the std and var entries here:

('std', '', _small_3d, lambda t, d: [], 1e-3, 1e-5, 1e-5, _float_types, _cpu_types, False),

Then ping me and I'll merge this.

peterbell10 (Collaborator, Author):

@mruberry

facebook-github-bot (Contributor):

@mruberry merged this pull request in 9b6d463.

peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Jan 29, 2021
ghstack-source-id: 8a65fb5
Pull Request resolved: pytorch#50901
@facebook-github-bot facebook-github-bot deleted the gh/peterbell10/45/head branch January 31, 2021 15:18
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#50901

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26083289

Pulled By: mruberry

fbshipit-source-id: 7e14ff37bba46dd456e0bc0aa9c4e0a632d0734c