
Migrate AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 to c10::complex#37977

Closed
zasdfgbnm wants to merge 2 commits into master from remove-AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND3

Conversation

@zasdfgbnm
Collaborator

`AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND3` is removed.
`AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3` now uses `c10::complex`.

@zasdfgbnm zasdfgbnm added the module: complex Related to complex number support in PyTorch label May 6, 2020
@zasdfgbnm zasdfgbnm requested a review from anjali411 May 6, 2020 23:27
@zasdfgbnm zasdfgbnm mentioned this pull request May 6, 2020
@dr-ci

dr-ci Bot commented May 7, 2020

💊 CI failures summary and remediations

As of commit ee80476 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

May 07 03:56:22 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
May 07 03:56:22 processing existing schema:  aten::var.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 07 03:56:22 processing existing schema:  aten::var.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 07 03:56:22 processing existing schema:  aten::var.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 07 03:56:22 processing existing schema:  aten::var_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 
May 07 03:56:22 processing existing schema:  aten::var_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 07 03:56:22 processing existing schema:  aten::var_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 07 03:56:22 processing existing schema:  aten::view_as(Tensor self, Tensor other) -> (Tensor) 
May 07 03:56:22 processing existing schema:  aten::where.self(Tensor condition, Tensor self, Tensor other) -> (Tensor) 
May 07 03:56:22 processing existing schema:  aten::where(Tensor condition) -> (Tensor[]) 
May 07 03:56:22 processing existing schema:  aten::_s_where(Tensor condition, Tensor self, Tensor other) -> (Tensor) 
May 07 03:56:22 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
May 07 03:56:22  
May 07 03:56:22 Broken ops: [ 
May 07 03:56:22 	aten::list_with_default(int[] list, int[] defaults) -> (int[]) 
May 07 03:56:22 ] 
May 07 03:56:22 + cleanup 
May 07 03:56:22 + retcode=1 
May 07 03:56:22 + set +x 
May 07 03:56:22 =================== sccache compilation log =================== 
May 07 03:56:22 =========== If your build fails, please take a look at the log above for possible reasons =========== 
May 07 03:56:22 Compile requests                 0 


@zasdfgbnm
Collaborator Author

Strange, the failure is real

@zasdfgbnm
Collaborator Author

@anjali411 The test failure was caused by the wrong implementation of std::abs. It should be fixed now.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411
Contributor

> @anjali411 The test failure was caused by the wrong implementation of std::abs. It should be fixed now.

which test was failing?

@anjali411
Contributor

Oh I see, the is_close test was failing in test_torch.py.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zasdfgbnm zasdfgbnm deleted the remove-AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND3 branch May 8, 2020 05:23
@facebook-github-bot
Contributor

@anjali411 merged this pull request in f4d9713.

@mruberry
Collaborator

mruberry commented May 8, 2020

Unlanding. Unfortunately, this appears to have broken `pytorch_linux_xenial_cuda9_2_cudnn7_py3_gcc7_build`. Copy of the relevant portion of the log:

May 08 06:43:58 [ 80%] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_ConvolutionMM2d.cu.o
May 08 06:44:00 /usr/local/cuda/include/thrust/detail/complex/complex.inl(64): error: no suitable conversion function from "const c10::complex<float>" to "float" exists
May 08 06:44:00           detected during:
May 08 06:44:00             instantiation of "thrust::complex<T>::complex(const R &) [with T=float, R=c10::complex<float>]" 
May 08 06:44:00 /var/lib/jenkins/workspace/c10/util/complex_type.h(441): here
May 08 06:44:00             instantiation of "T std::abs(const c10::complex<T> &) [with T=float]" 
May 08 06:44:00 /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/UnarySignKernels.cu(16): here
May 08 06:44:00             instantiation of "scalar_t at::native::abs_wrapper(scalar_t) [with scalar_t=c10::complex<float>]" 
May 08 06:44:00 /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/UnarySignKernels.cu(28): here
May 08 06:44:00 
May 08 06:44:00 /usr/local/cuda/include/thrust/detail/complex/complex.inl(64): error: no suitable conversion function from "const c10::complex<double>" to "double" exists
May 08 06:44:00           detected during:
May 08 06:44:00             instantiation of "thrust::complex<T>::complex(const R &) [with T=double, R=c10::complex<double>]" 
May 08 06:44:00 /var/lib/jenkins/workspace/c10/util/complex_type.h(441): here
May 08 06:44:00             instantiation of "T std::abs(const c10::complex<T> &) [with T=double]" 
May 08 06:44:00 /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/UnarySignKernels.cu(16): here
May 08 06:44:00             instantiation of "scalar_t at::native::abs_wrapper(scalar_t) [with scalar_t=c10::complex<double>]" 
May 08 06:44:00 /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/UnarySignKernels.cu(28): here
May 08 06:44:00 
May 08 06:44:00 2 errors detected in the compilation of "/tmp/tmpxft_00003ebf_00000000-6_UnarySignKernels.cpp1.ii".
May 08 06:44:00 CMake Error at torch_cuda_generated_UnarySignKernels.cu.o.Release.cmake:281 (message):
May 08 06:44:00   Error generating file
May 08 06:44:00   /var/lib/jenkins/workspace/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_UnarySignKernels.cu.o
May 08 06:44:00 
May 08 06:44:00 

@anjali411 anjali411 removed the merged label May 8, 2020
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…ch#37977)

Summary:
`AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND3` is removed
`AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3` is now using `c10::complex`
Pull Request resolved: pytorch#37977

Differential Revision: D21449612

Pulled By: anjali411

fbshipit-source-id: 236070946b9d6fc89533d196f17fa9c7275d83b5

Labels

Merged module: complex Related to complex number support in PyTorch open source


5 participants