Add tan_cuda for complex dtypes #38400

Closed

anjali411 wants to merge 8 commits into gh/anjali411/23/base from gh/anjali411/23/head

Conversation

anjali411 (Contributor) commented May 13, 2020

Stack from ghstack:

Differential Revision: D21572209
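
As a hedged illustration of what this change enables (not part of the original PR description; assumes a CUDA build with complex-dtype support):

```python
import torch

# With this PR, tan is implemented for complex dtypes on CUDA:
z = torch.tensor([1 + 1j, 0.5 - 0.2j], dtype=torch.complex64, device='cuda')
print(torch.tan(z))  # elementwise complex tangent, computed on the GPU
```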

anjali411 requested a review from albanD May 13, 2020 15:42
dr-ci (Bot) commented May 13, 2020

💊 CI failures summary and remediations

As of commit d912829 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests"

May 15 13:03:35 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
May 15 13:03:35 processing existing schema:  aten::std.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 15 13:03:35 processing existing schema:  aten::std.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 15 13:03:35 processing existing schema:  aten::std.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 15 13:03:35 processing existing schema:  aten::std.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 15 13:03:35 processing existing schema:  aten::std_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 
May 15 13:03:35 processing existing schema:  aten::std_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 15 13:03:35 processing existing schema:  aten::std_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 15 13:03:35 processing existing schema:  aten::t(Tensor(a) self) -> (Tensor(a)) 
May 15 13:03:35 processing existing schema:  aten::t_(Tensor(a!) self) -> (Tensor(a!)) 
May 15 13:03:35 processing existing schema:  aten::tan_(Tensor(a!) self) -> (Tensor(a!)) 
May 15 13:03:35 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
May 15 13:03:35  
May 15 13:03:35 Broken ops: [ 
May 15 13:03:35 	profiler::_call_end_callbacks_on_jit_fut(Tensor x, Future(t) y) -> (Future(t)) 
May 15 13:03:35 	aten::_sparse_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_softmax.Dimname(Tensor self, str dim, *, int? dtype=None) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, Tensor self) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_log_softmax(Tensor self, int dim, bool half_to_float) -> (Tensor) 
May 15 13:03:35 	aten::_sparse_log_softmax.int(Tensor self, int dim, int? dtype=None) -> (Tensor) 


Comment thread: test/test_torch.py (outdated)
('rsqrt', '', lambda t, d: _small_3d(t, d) + 1, lambda t, d: [], 1e-2, 1e-5, 1e-4, _float_types_no_half),
('sinh', '', lambda t, d: _small_3d(t, d).clamp(-1, 1), lambda t, d: [], 1e-3, 1e-5, 1e-5, _float_types),
('tan', '', lambda t, d: _small_3d(t, d).clamp(-1, 1), lambda t, d: [], 1e-3, 1e-5, 1e-5, _float_types),
('tan', 'complex', lambda t, d: _small_3d(t, d), lambda t, d: [], 1e-3, 1e-5, 1e-5, _complex_types)
Collaborator

I am surprised that _complex_types did not exist yet here. How do we test the other methods that already support complex?

Contributor Author

Unfortunately there are no existing tests for these ops for complex dtypes. We should add them soon, though.
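
For context, a check along these lines would exercise the new kernel (a hypothetical sketch, not the PR's actual test code; it assumes CUDA is available and the build supports complex tensors, and uses NumPy as the reference):

```python
import numpy as np
import torch

# Hypothetical complex-dtype correctness check for torch.tan.
x = torch.randn(3, 3, 3, dtype=torch.complex64, device='cuda')
expected = np.tan(x.cpu().numpy())   # NumPy reference result
actual = torch.tan(x).cpu().numpy()  # exercises the new tan_cuda kernel
np.testing.assert_allclose(actual, expected, rtol=1e-3, atol=1e-5)
```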

Comment thread: test/test_autograd.py
test_case.assertEqual(self_variable.size(), self_variable.grad.size())

- separate_complex_tests = ['log', 'log10', 'log1p', 'log2', 'reciprocal']
+ separate_complex_tests = ['log', 'log10', 'log1p', 'log2', 'reciprocal', 'tan']
Collaborator

Why is this one in the separate list? It should work for real numbers as well, no?

Contributor Author

The test for floating-point dtypes clamps the input, and clamp does not work for complex tensors.
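
A minimal sketch of that constraint (illustrative only, not code from the PR): clamp keeps the float inputs away from tan's poles at pi/2 + k*pi, but raises for complex tensors, so the complex variant needs its own unclamped input and hence a separate list.

```python
import torch

# clamp works for float dtypes...
torch.randn(4).clamp(-1, 1)

# ...but raises for complex tensors (the exact message depends on
# the PyTorch version):
try:
    torch.randn(4, dtype=torch.complex64).clamp(-1, 1)
except RuntimeError as e:
    print(e)
```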

anjali411 added a commit that referenced this pull request May 13, 2020
ghstack-source-id: f2bada5
Pull Request resolved: #38400
albanD (Collaborator) left a comment

LGTM

anjali411 added a commit that referenced this pull request May 13, 2020
ghstack-source-id: 9f3a926
Pull Request resolved: #38400
anjali411 added a commit that referenced this pull request May 14, 2020
ghstack-source-id: 6d1d74a
Pull Request resolved: #38400
* #38399 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request May 14, 2020
ghstack-source-id: 647e636
Pull Request resolved: #38400
anjali411 added a commit that referenced this pull request May 15, 2020
ghstack-source-id: dce14f0
Pull Request resolved: #38400
facebook-github-bot (Contributor) commented:
@anjali411 merged this pull request in 242af6c.

krshrimali added a commit to krshrimali/pytorch that referenced this pull request May 19, 2020
facebook-github-bot deleted the gh/anjali411/23/head branch May 19, 2020 14:16
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#38400

* pytorch#38399 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only

Test Plan: Imported from OSS

Differential Revision: D21572209

Pulled By: anjali411

fbshipit-source-id: 7036029e9f8336139f5d54e0dfff9759f3bf8376