
Padding: support complex dtypes #50594

Closed
peterbell10 wants to merge 10 commits into gh/peterbell10/42/base from gh/peterbell10/42/head

Conversation

@peterbell10
Collaborator

@peterbell10 peterbell10 commented Jan 15, 2021

Stack from ghstack:

Fixes #50234

Differential Revision: D25987316

peterbell10 added a commit that referenced this pull request Jan 15, 2021
ghstack-source-id: dc42eb6
Pull Request resolved: #50594
@peterbell10 peterbell10 requested a review from mruberry January 15, 2021 18:43
@anjali411 anjali411 added the module: complex (Related to complex number support in PyTorch) label Jan 15, 2021
peterbell10 added a commit that referenced this pull request Jan 16, 2021
ghstack-source-id: fea094d
Pull Request resolved: #50594
peterbell10 added a commit that referenced this pull request Jan 16, 2021
ghstack-source-id: 36b3856
Pull Request resolved: #50594
peterbell10 added a commit that referenced this pull request Jan 18, 2021
ghstack-source-id: a282912
Pull Request resolved: #50594
@mruberry mruberry requested a review from anjali411 January 19, 2021 14:10
Comment thread test/test_spectral_ops.py
torch.randn(807, device=device, dtype=dtype),
torch.randn(12, 14, device=device, dtype=dtype),
torch.randn(9, 6, device=device, dtype=dtype)),
torch.randn(12, 60, device=device, dtype=dtype)),
Collaborator

Why change these inputs?

Collaborator Author

reflect mode needs enough data to reflect; otherwise you get an error:

RuntimeError: Argument #4: Padding size should be less than the corresponding input dimension, but got: [...]
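The constraint can be illustrated with a minimal pure-Python sketch (this is not the PR's C++ kernel; `reflect_pad_1d` is a hypothetical helper). Reflection padding only copies elements, so the same logic works unchanged for complex values, and each pad amount must be strictly less than the input length:

```python
def reflect_pad_1d(xs, pad_left, pad_right):
    """Reflection-pad a sequence without repeating the edge elements."""
    n = len(xs)
    # Reflection mirrors around the edges, so it needs enough interior data.
    if pad_left >= n or pad_right >= n:
        raise ValueError(
            "Padding size should be less than the corresponding input dimension"
        )
    # Mirror of xs[1:pad_left+1], reversed, goes on the left.
    left = [xs[i] for i in range(pad_left, 0, -1)]
    # Mirror of the tail (excluding the last element) goes on the right.
    right = [xs[n - 2 - i] for i in range(pad_right)]
    return left + list(xs) + right

# Works identically for real and complex values:
print(reflect_pad_1d([1.0, 2.0, 3.0], 2, 2))        # -> [3.0, 2.0, 1.0, 2.0, 3.0, 2.0, 1.0]
print(reflect_pad_1d([1 + 1j, 2 + 2j, 3 + 3j], 1, 1))
```

Asking for `pad_left=3` on a length-3 input would raise, which is the same condition behind the RuntimeError above.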

Comment thread test/test_spectral_ops.py Outdated
Collaborator

@mruberry mruberry left a comment

Hey @peterbell10! Overall this looks great and very thorough. I've asked @anjali411 to take a look, too. I have two questions inline that I'm curious to hear your thinking on.

peterbell10 added a commit that referenced this pull request Jan 19, 2021
ghstack-source-id: a432890
Pull Request resolved: #50594
peterbell10 added a commit that referenced this pull request Jan 19, 2021
ghstack-source-id: bcd6374
Pull Request resolved: #50594
@peterbell10
Collaborator Author

peterbell10 commented Jan 19, 2021

@mruberry PTAL, added the padding mode that was missing from the tests. Also fixed merge conflicts with #50632.

As an aside, is there anything we can do to reduce merge conflicts in the GRADIENT_IMPLEMENTED_FOR_COMPLEX list? I've had merge conflicts on every PR that adds complex gradient support. Perhaps one entry per line and alpha-sorting would help.

@mruberry
Collaborator

I'll take a look ASAP.

As an aside, is there anything we can do to reduce merge conflicts in the GRADIENT_IMPLEMENTED_FOR_COMPLEX list? I've had merge conflicts on every PR that adds complex gradient support. Perhaps one entry per line and alpha-sorting would help.

Good point. What do you think, @anjali411?

Contributor

@anjali411 anjali411 left a comment

LGTM!

As an aside, is there anything we can do to reduce merge conflicts in the GRADIENT_IMPLEMENTED_FOR_COMPLEX list? I've had merge conflicts on every PR that adds complex gradient support. Perhaps one entry per line and alpha-sorting would help.

@peterbell10 yeah, I agree that merge conflicts are pretty annoying; however, wouldn't there still be merge conflicts with those solutions?

@peterbell10
Collaborator Author

@anjali411 two PRs appending to the same list are guaranteed to create merge conflicts. Inserting into the middle of an alpha-sorted list only creates conflicts if the operators are next to each other in the alphabet. So yes there can still be merge conflicts, but at least some of the time it can be merged automatically.
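The one-entry-per-line, alpha-sorted suggestion can be sketched in a few lines of Python (a hypothetical illustration; `ops` and `register_complex_grad` are made-up names, not PyTorch code). Because two PRs usually insert at different positions in a sorted list, their diffs touch different lines and git can often merge them cleanly:

```python
import bisect

# Hypothetical stand-in for an alpha-sorted, one-entry-per-line allowlist.
ops = ["abs", "add", "mul", "sin"]

def register_complex_grad(op_name):
    """Insert op_name while preserving alphabetical order (no-op if present)."""
    i = bisect.bisect_left(ops, op_name)
    if i == len(ops) or ops[i] != op_name:
        ops.insert(i, op_name)

# Two independent "PRs" land in different places, so their diffs don't collide:
register_complex_grad("constant_pad_nd")
register_complex_grad("cos")
print(ops)  # -> ['abs', 'add', 'constant_pad_nd', 'cos', 'mul', 'sin']
```

Appending to the end, by contrast, makes every pair of PRs edit the same final line, which is exactly the guaranteed-conflict case described above.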

@anjali411
Contributor

@anjali411 two PRs appending to the same list are guaranteed to create merge conflicts. Inserting into the middle of an alpha-sorted list only creates conflicts if the operators are next to each other in the alphabet. So yes there can still be merge conflicts, but at least some of the time it can be merged automatically.

Okay, makes sense!

anjali411 added a commit that referenced this pull request Jan 21, 2021
ghstack-source-id: e64d52c
Pull Request resolved: #50594
@anjali411
Contributor

anjali411 commented Jan 21, 2021

[Internal test failure on phabricator] One of the caffe2 tests fail with the following error:

    ✗ Fail: caffe2/test:jit - test_nn_module_tests (jit.test_complexity.TestComplexity) (6.023)
Test output:
> RuntimeError: ('User provided tensor is real for a test that runs with complex dtype, ', 'which is not supported for now')
  File "/usr/local/fbcode/platform007/lib/python3.7/unittest/case.py", line 59, in testPartExecutor
    yield
  File "/usr/local/fbcode/platform007/lib/python3.7/unittest/case.py", line 628, in run
    testMethod()
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/torch/testing/_internal/common_utils.py", line 595, in wrapper
    fn(*args, **kwargs)
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/jit/test_complexity.py", line 73, in test_nn_module_tests
    out = try_get_nn_module_compiled_mod_and_inputs(**test)
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/torch/testing/_internal/jit_metaprogramming_utils.py", line 546, in try_get_nn_module_compiled_mod_and_inputs
    args_variable, kwargs_variable = create_input(input)
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/torch/testing/_internal/common_methods_invocations.py", line 2692, in create_input
    args_out = tuple(map_arg(arg) for arg in call_args)
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/torch/testing/_internal/common_methods_invocations.py", line 2692, in <genexpr>
    args_out = tuple(map_arg(arg) for arg in call_args)
  File "/data/users/chourdiaanjali/fbsource/fbcode/buck-out/opt/gen/caffe2/test/jit#binary,link-tree/torch/testing/_internal/common_methods_invocations.py", line 2683, in map_arg
    "which is not supported for now")
stdout:
Couldn't download test skip set, leaving all tests enabled...

stderr:


Summary
  Fail: 1
    ✗ caffe2/test:jit - test_nn_module_tests (jit.test_complexity.TestComplexity)

peterbell10 pushed a commit to peterbell10/pytorch that referenced this pull request Jan 21, 2021
ghstack-source-id: e64d52c
Pull Request resolved: pytorch#50594
peterbell10 added a commit that referenced this pull request Jan 21, 2021
ghstack-source-id: 8a14203
Pull Request resolved: #50594
@facebook-github-bot
Contributor

@anjali411 merged this pull request in db079a9.

@facebook-github-bot facebook-github-bot deleted the gh/peterbell10/42/head branch January 26, 2021 15:21
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#50594

Fixes pytorch#50234

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25987316

Pulled By: anjali411

fbshipit-source-id: c298b771fe52b267a86938e886ea402badecfe3e

Labels

cla signed, Merged, module: complex (Related to complex number support in PyTorch), open source

5 participants