
An implementation of torch.tile as requested in pytorch/pytorch#38349 #47974

Closed

arbfay wants to merge 11 commits into pytorch:master from arbfay:torchTile

Conversation

@arbfay
Contributor

@arbfay arbfay commented Nov 14, 2020

The approach is to simply reuse torch.repeat, but adding one more piece of functionality to tile: prepending 1's to the reps array if the tensor has more dimensions than the reps given in input. Thus, for a tensor of shape (64, 3, 24, 24), reps of (2, 2) will become (1, 1, 2, 2), which is what NumPy does.
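The NumPy behaviour being matched here can be verified directly. This sketch uses NumPy only, to show that a reps tuple shorter than the array's rank is left-padded with 1's:

```python
import numpy as np

a = np.zeros((64, 3, 24, 24))

# reps (2, 2) is treated as (1, 1, 2, 2): only the last two
# dimensions are tiled, so 24 -> 48 in each.
tiled = np.tile(a, (2, 2))
print(tiled.shape)  # (64, 3, 48, 48)

# Explicitly prepending the 1's gives the same shape.
assert tiled.shape == np.tile(a, (1, 1, 2, 2)).shape
```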

I've encountered some instability with the test on my end, where I could get a random failure of the test (due to, sometimes, a random value of self.dim(), and sometimes, segfaults). I'd appreciate any feedback on the test or an explanation for this instability so I can fix it.

@facebook-github-bot
Contributor

Hi @arbfay!

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@dr-ci

dr-ci Bot commented Nov 15, 2020

💊 CI failures summary and remediations

As of commit e253f12 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 68 times.

@zou3519 zou3519 requested a review from mruberry November 16, 2020 16:35
@zou3519
Contributor

zou3519 commented Nov 16, 2020

cc @mruberry for numpy compatibility - could you take a look at this please?

@zou3519 zou3519 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Nov 16, 2020
Comment thread aten/src/ATen/native/TensorShape.cpp Outdated
Comment thread aten/src/ATen/native/TensorShape.cpp Outdated
Comment thread aten/src/ATen/native/TensorShape.cpp Outdated
Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
Comment thread test/test_torch.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_tensor_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread aten/src/ATen/native/TensorShape.cpp Outdated
Collaborator

@mruberry mruberry left a comment


Hi @arbfay, thanks for this contribution!

Overall this PR looks pretty good. I'm not totally sure why you're getting the error you are. It may be as simple as calling repeat instead of at::repeat. You might have to look at the intermediate values and debug the issue, though.

I've left some comments that are mostly style/docs suggestions. The tests also need to be more thorough to account for more possible cases.

Looking forward to reviewing the follow-up!

@arbfay
Contributor Author

arbfay commented Nov 18, 2020

Hi @mruberry,

Thanks for this review! I expected lots of comments for my first PR to PyTorch, and I got served 😄
For the failures, I've increased stability as much as I could with the last two commits. Hopefully, it will be enough now for all the pipelines. (Okay I will say it: it works perfectly in my environment)

Waiting now for your next review

@arbfay arbfay requested a review from mruberry November 18, 2020 08:53
@mruberry
Collaborator

This failure on ROCm is real:

10:24:18 ======================================================================
10:24:18 ERROR [0.051s]: test_tile_cuda (__main__.TestTorchDeviceTypeCUDA)
10:24:18 ----------------------------------------------------------------------
10:24:18 Traceback (most recent call last):
10:24:18   File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 834, in wrapper
10:24:18     method(*args, **kwargs)
10:24:18   File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 272, in instantiated_test
10:24:18     result = test_fn(self, *args)
10:24:18   File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 667, in only_fn
10:24:18     return fn(self, device, *args, **kwargs)
10:24:18   File "test_torch.py", line 6071, in test_tile
10:24:18     expected = np.tile(t.numpy(), dims)
10:24:18 TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
10:24:18 
10:24:20 ----------------------------------------------------------------------

You can always use the pattern t.cpu().numpy(). This will convert CUDA tensors to CPU tensors (and then convert those to NumPy) and just directly convert CPU tensors to NumPy (the .cpu() will be a no-op for them).
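The suggested pattern can be wrapped in a small helper. This is just a sketch; `to_numpy` is a hypothetical name for illustration, not part of the PR:

```python
import torch

def to_numpy(t):
    # .cpu() copies a CUDA tensor to host memory and is a
    # no-op for a tensor already on the CPU, so this handles
    # both cases uniformly before converting to NumPy.
    return t.cpu().numpy()

print(to_numpy(torch.tensor([1, 2, 3])))  # [1 2 3]
```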

Would you correct this test and I'll take a look after we get an updated test signal?

@arbfay
Contributor Author

arbfay commented Nov 18, 2020

@mruberry I think it's all ok with the tests (the one failing is because the pipeline is broken I think)

Comment thread aten/src/ATen/native/TensorShape.cpp Outdated
Comment thread test/test_torch.py Outdated
Comment thread torch/_torch_docs.py Outdated
Collaborator

@mruberry mruberry left a comment


Hey @arbfay!

Just a couple minor implementation issues and a small doc extension + test extension needed. Otherwise this PR looks good.

For the tests, this is currently missing autograd tests. That's understandable since we don't do a good job explaining how to add them.

You can automatically generate autograd tests by adding an entry to

#46041 recently did this, too, so that might be a useful reference. Let me know if you have any questions.
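Independently of the generated tests, the backward pass of an op like this can be sanity-checked by hand with `torch.autograd.gradcheck`. This is a sketch and not part of the PR's test suite; it assumes the merged `torch.tile` is available:

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares analytical gradients against numerical
# finite differences; it needs double-precision inputs with
# requires_grad set.
x = torch.randn(2, 3, dtype=torch.double, requires_grad=True)
assert gradcheck(lambda t: torch.tile(t, (2, 2)), (x,))
```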

@arbfay
Contributor Author

arbfay commented Nov 19, 2020

Thanks @mruberry !

I've been struggling a bit with the autograd tests. I've added more doc as asked, and used self.repeat instead of at::native::repeat for the return statements.

@arbfay arbfay requested a review from mruberry November 19, 2020 16:10
@mruberry
Collaborator

I hope you don't mind, but I tweaked the docs slightly and tried to fix the common_methods() tests. If my fix for the tests didn't work properly then we can remove them for now to unblock this PR. Let's just wait for the CI to run.

@arbfay
Contributor Author

arbfay commented Nov 20, 2020

Thanks for your commits @mruberry, they are more than welcome!

I've tried with your modification in common_methods and only one test is still a problem (no_reps_dims; I pushed a commit to remove it). This is great; I'd been struggling to understand what was causing those failed assertions.

Let's wait for the pipelines.

@codecov

codecov Bot commented Nov 20, 2020

Codecov Report

Merging #47974 (e253f12) into master (6da26fe) will decrease coverage by 0.00%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #47974      +/-   ##
==========================================
- Coverage   81.15%   81.14%   -0.01%     
==========================================
  Files        1838     1838              
  Lines      198653   198663      +10     
==========================================
- Hits       161214   161211       -3     
- Misses      37439    37452      +13     

@arbfay
Contributor Author

arbfay commented Nov 20, 2020

@mruberry is the PR good to go?

Collaborator

@mruberry mruberry left a comment


Tests look good. Nice work, @arbfay!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mruberry merged this pull request in 2e0a8b7.

@mruberry
Collaborator

Congrats @arbfay! Thanks for implementing torch.tile!

@arbfay
Contributor Author

arbfay commented Nov 25, 2020

Thanks @mruberry, and thanks for the help! Been a nice ride, and learned a lot!

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…h#47974)

Summary:
The approach is to simply reuse `torch.repeat`, but adding one more piece of functionality to tile: prepending 1's to the reps array if the tensor has more dimensions than the reps given in input. Thus, for a tensor of shape (64, 3, 24, 24), reps of (2, 2) will become (1, 1, 2, 2), which is what NumPy does.

I've encountered some instability with the test on my end, where I could get a random failure of the test (due to, sometimes, a random value of `self.dim()`, and sometimes, segfaults). I'd appreciate any feedback on the test or an explanation for this instability so I can fix it.

Pull Request resolved: pytorch#47974

Reviewed By: ngimel

Differential Revision: D25148963

Pulled By: mruberry

fbshipit-source-id: bf63b72c6fe3d3998a682822e669666f7cc97c58

Labels

cla signed Merged open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants