
Added type promotion logic for complex numbers #34093

Closed
anjali411 wants to merge 10 commits into pytorch:master from anjali411:complex_type_promotion

Conversation


@anjali411 anjali411 commented Mar 2, 2020

Issue: #33780
After this PR:

  1. dtype promotion logic works correctly for ops involving complex scalars
  2. aliases are added for complex64 (cfloat) and complex128 (cdouble)
  3. an internal function get_complex_default_dtype is added (deliberately not exposed in the public API)
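For illustration, the mapping behind item 3 can be sketched in plain Python (a hypothetical stand-in, not the actual PyTorch internals): the complex default dtype simply mirrors the component width of the default float dtype.

```python
# Illustrative sketch only, not the PyTorch internals: the complex
# "default dtype" mirrors the default float dtype, so float32 maps to
# complex64 (two 32-bit components) and float64 to complex128.

DEFAULT_TO_COMPLEX = {
    "float32": "complex64",
    "float64": "complex128",
}

def get_complex_default_dtype(default_float_dtype: str) -> str:
    """Return the complex dtype whose components match the float default."""
    return DEFAULT_TO_COMPLEX[default_float_dtype]
```

This is why, in the examples below, setting the default dtype to float64 makes complex results come out as complex128.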

```python
>>> 1j * torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex64)

>>> torch.set_default_dtype(torch.float64)
>>> 1j * torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex128)

>>> 1j + torch.ones(2)
tensor([(1.0000 + 1.0000j), (1.0000 + 1.0000j)], dtype=torch.complex128)

>>> torch.tensor(1j) + torch.ones(2, 2)
tensor([[(1.0000 + 1.0000j), (1.0000 + 1.0000j)],
        [(1.0000 + 1.0000j), (1.0000 + 1.0000j)]], dtype=torch.complex128)
```
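The examples above follow the usual promotion lattice, in which complex sits above floating point, which sits above integral and bool. A minimal sketch of that ordering (hypothetical helper names, not the real C++ implementation):

```python
# Minimal sketch of the dtype-category ordering used by type promotion
# (hypothetical helper, not the real C++ implementation):
# bool < integral < floating < complex.  Any operand in a higher
# category lifts the result into that category.

KIND_ORDER = {"bool": 0, "int": 1, "float": 2, "complex": 3}

def promote_kind(a: str, b: str) -> str:
    """Return the higher of two dtype categories in the lattice."""
    return a if KIND_ORDER[a] >= KIND_ORDER[b] else b
```

So a complex scalar meeting a real tensor yields a complex result, as in the REPL session above.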


dr-ci bot commented Mar 2, 2020

💊 CircleCI build failures summary and remediations

As of commit 9f86e56 (more details on the Dr. CI page):


  • 2/3 failures introduced in this PR

  • 1/3 broken upstream at merge base 36e3c00 from Mar 23 until Mar 24 (47 commits; 77ccb5c - 3f896ef)

    Please rebase on the viable/strict branch:

    If your commit is newer than viable/strict, you can try basing on an older, stable commit:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase --onto FETCH_HEAD $(git merge-base origin/master HEAD)
    

    If your commit is older than viable/strict:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase FETCH_HEAD
    

    Check out the recency history of this "viable master" tracking branch.


🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages (reran 1 job to discount flakiness):

See CircleCI build caffe2_onnx_main_py3_6_clang7_ubuntu16_04_build (1/1)

Step: "Build" — confirmed not flaky by 2 failures

```
Mar 24 17:49:11 fatal: The remote end hung up unexpectedly
DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/caffe2/py3.6-clang7-ubuntu16.04:345

real	0m26.179s
user	0m0.060s
sys	0m0.064s
Mar 24 17:49:00 ++ export BUILD_ENVIRONMENT=caffe2-onnx-main-py3.6-clang7-ubuntu16.04-build
Mar 24 17:49:00 ++ BUILD_ENVIRONMENT=caffe2-onnx-main-py3.6-clang7-ubuntu16.04-build
Mar 24 17:49:00 ++ git submodule sync
Mar 24 17:49:00 ++ git submodule update -q --init --recursive
Mar 24 17:49:11 error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function.
Mar 24 17:49:11 fatal: The remote end hung up unexpectedly
Mar 24 17:49:11 fatal: early EOF
Mar 24 17:49:11 fatal: index-pack failed
Mar 24 17:49:11 fatal: clone of 'https://github.com/eigenteam/eigen-git-mirror.git' into submodule path 'third_party/eigen' failed
```

2 jobs timed out:

  • pytorch_linux_xenial_py3_clang5_asan_test
  • pytorch_macos_10_13_py3_test


This comment has been revised 71 times.

@anjali411 anjali411 requested a review from nairbv March 3, 2020 15:41
@anjali411 anjali411 force-pushed the complex_type_promotion branch from fbea963 to 2317698 March 3, 2020 15:41
@anjali411 anjali411 changed the title from "[WIP] added type promotion logic for complex" to "Added type promotion logic for complex numbers" Mar 3, 2020
@anjali411 anjali411 added the "module: complex" label (Related to complex number support in PyTorch) Mar 3, 2020
@anjali411 anjali411 requested a review from ezyang March 3, 2020 16:04
@anjali411 anjali411 force-pushed the complex_type_promotion branch from 6835e9d to 960af5c March 4, 2020 20:24
@ezyang ezyang removed their request for review March 4, 2020 23:16

ezyang commented Mar 4, 2020

I'm going to let @nairbv take this one in its entirety

Collaborator:

question: is it a conscious decision that a complex type is not also considered a floating type? I could see considering complex to be a special case of floating types.

Contributor:

I also think complex should be considered a floating point type

Contributor Author:

I will make a follow-up PR for this.

Collaborator:

Let's update this to be `else if` anyway, though, so it's future-proof against that change.
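The reviewer's point can be sketched like this (a hypothetical Python stand-in for the C++ dispatch under discussion): with an if/else-if chain the branches stay mutually exclusive, so the code remains correct even if complex dtypes are later classified as floating point.

```python
# Hypothetical sketch of the review suggestion.  If complex dtypes are
# later also treated as floating point, independent `if` statements
# would take both branches; `elif` keeps the cases mutually exclusive.

def classify(dtype: str) -> str:
    is_complex = dtype in ("complex64", "complex128")
    # Possible future change: complex counts as floating point too.
    is_floating = dtype in ("float16", "float32", "float64") or is_complex
    if is_complex:
        return "complex"
    elif is_floating:  # reached only for real floating dtypes
        return "floating"
    else:
        return "integral"
```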

@anjali411 anjali411 force-pushed the complex_type_promotion branch from 960af5c to 12ddcfd March 5, 2020 20:04
@anjali411 anjali411 force-pushed the complex_type_promotion branch 2 times, most recently from c5e6ec3 to 51389dc March 6, 2020 18:57
anjali411 (Contributor Author) commented Mar 6, 2020:

Using randn because we don't have rand for complex dtypes right now (#34380).

@anjali411 anjali411 force-pushed the complex_type_promotion branch from b2ea48f to c550bfe March 6, 2020 20:07
anjali411 (Contributor Author):

Change `isFloatingPoint || isComplex` to `isFloatingPoint` after #34392.

facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

anjali411 (Contributor Author):

cc @ailzhang for the XLA test failure

Contributor:

@mruberry Can you suggest how to skip complex dtypes entirely for XLA? ;) Like the half types?

Contributor:

Hmm, that only works when the dtype appears in the decorator and is skipped at test instantiation. But here we explicitly call with complex dtypes, so it doesn't work (please correct me if I'm wrong).
@anjali411 would you mind refactoring these tests to keep float and complex in separate tests? They can call into common helper functions. Since type promotion is pretty major behavior, I do want to keep XLA running the part it is able to run. Thanks a lot!

Contributor:

And this test too. Thanks!!

Contributor:

I thought we weren't going to add e.g. torch.*ComplexFloatTensor? IMO we should change the "Tensor types" header to "Legacy constructor"

Contributor Author:

I made these changes before our discussion; I will remove them and update the header title.

mruberry (Collaborator):

Unlanding. This is causing failures on master:

```
Mar 19 12:40:54 ERROR [0.003s]: test_many_promotions_cpu (__main__.TestTypePromotionCPU)
Mar 19 12:40:54 ----------------------------------------------------------------------
Mar 19 12:40:54 Traceback (most recent call last):
Mar 19 12:40:54   File "/opt/python/nightly/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 207, in instantiated_test
Mar 19 12:40:54     return test(self, device_arg)
Mar 19 12:40:54   File "test_type_promotion.py", line 23, in wrapped_fn
Mar 19 12:40:54     fn(*args, **kwargs)
Mar 19 12:40:54   File "test_type_promotion.py", line 231, in test_many_promotions
Mar 19 12:40:54     complex_dtypes = get_all_complex_dtypes()
Mar 19 12:40:54 NameError: name 'get_all_complex_dtypes' is not defined
```

Maybe a missing import?

@mruberry mruberry reopened this Mar 19, 2020
@anjali411 anjali411 force-pushed the complex_type_promotion branch from 91c6a50 to 3d99fc8 March 19, 2020 16:18
anjali411 (Contributor Author):

> Unlanding. This is causing failures on master:
>
> NameError: name 'get_all_complex_dtypes' is not defined
>
> Maybe a missing import?

Yeah, my bad: I needed to call torch.testing.get_all_complex_dtypes() instead of the bare get_all_complex_dtypes(). Thanks!!
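The underlying failure mode is an ordinary Python NameError: a bare name is visible only if it was imported into the module, while the module-qualified form always resolves. A minimal standard-library reproduction (math stands in for torch.testing here):

```python
import math

# Calling through the module works: `math` is bound in this namespace.
root = math.sqrt(4.0)

# Referring to the bare name fails, because only `math` was imported,
# not `sqrt` itself -- the same shape as the master breakage above.
try:
    sqrt(4.0)  # noqa: F821
except NameError as err:
    message = str(err)

# The fix is either to qualify the call (math.sqrt, analogous to
# torch.testing.get_all_complex_dtypes) or to import the name directly
# (from math import sqrt).
```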

facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411 anjali411 removed the merged label Mar 19, 2020
@anjali411 anjali411 force-pushed the complex_type_promotion branch from 4c01324 to 72d7df4 March 20, 2020 22:39
facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411 anjali411 force-pushed the complex_type_promotion branch from d19a6ff to 9f86e56 March 24, 2020 17:44
facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot (Contributor):

@anjali411 merged this pull request in c73e970.

facebook-github-bot pushed a commit that referenced this pull request Mar 27, 2020 (#35528)

Summary:
The copy_() launch failure on CUDA for complex (#35344) is fixed, so this removes the workaround added in PR #34093.
Pull Request resolved: #35528

Differential Revision: D20693228

Pulled By: anjali411

fbshipit-source-id: dbb6369aa5a21574a0a4fe878ca10e4ecc605f6b

Labels

Merged · module: complex (Related to complex number support in PyTorch)


7 participants