[fix] torch.cat: Don't resize out if it is already of the correct size. #49937
kshitij12345 wants to merge 6 commits into pytorch:master from
Conversation
💊 CI failures summary (Dr. CI, as of commit 2337b9c): 💚 Looks good so far! There are no failures yet.
Codecov Report
@@ Coverage Diff @@
## master #49937 +/- ##
==========================================
+ Coverage 80.49% 80.68% +0.19%
==========================================
Files 1902 1902
Lines 206356 206357 +1
==========================================
+ Hits 166099 166494 +395
+ Misses 40257 39863 -394
Force-pushed: 52fa618 to a34a1d4
The diff under discussion:

-      result.resize_(result_size, first_tensor_mem_format);
+      // skip resizing if size of result is same as expected
+      if ((result.sizes() != result_size) || result.suggest_memory_format() != first_tensor_mem_format) {
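The decision this condition encodes can be modelled in plain Python. This is a hedged sketch of the diff as shown, not the real ATen API; the function name and the string-valued memory formats are illustrative only.

```python
# Hypothetical model of the skip-resize check in the diff above.
# `result_*` describes the user-supplied `out` tensor; `expected_*`
# describes what torch.cat would naturally produce.

def needs_resize(result_sizes, result_format, expected_sizes, expected_format):
    """Resize `out` only if its shape or its memory format differs
    from what the operation would produce."""
    return result_sizes != expected_sizes or result_format != expected_format

# Same shape and same memory format: reuse `out` as-is.
print(needs_resize((4, 3), "contiguous", (4, 3), "contiguous"))
# Same shape but different memory format: strides differ, so resize.
print(needs_resize((4, 3, 2, 5), "contiguous", (4, 3, 2, 5), "channels_last"))
```

The second call is exactly the case the reviewer asks about below: equal shapes are not enough on their own, because the two layouts stride the same storage differently.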
Why do you need the second condition here? Wouldn't it lead to restriding of result instead of respecting its strides?
The reason for the second condition is that if result has the standard contiguous layout and first_tensor_mem_format is, say, channels_last, then even if they have the same shape their strides would be different. In that case we would want to resize the result.
Sample:
>>> torch.ones((4, 3, 2, 5)).stride()
(30, 10, 5, 1)
>>> torch.ones((4, 3, 2, 5)).contiguous(memory_format=torch.channels_last).stride()
(30, 1, 15, 3)
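The two stride tuples in the sample can be reproduced with plain arithmetic, which makes the point concrete. This is a minimal pure-Python sketch of the stride rules for a 4-D NCHW shape, not PyTorch internals:

```python
# Strides for an NCHW shape under the two memory formats discussed above.

def contiguous_strides(shape):
    """Row-major (torch.contiguous_format) strides: memory order NCHW."""
    n, c, h, w = shape
    return (c * h * w, h * w, w, 1)

def channels_last_strides(shape):
    """torch.channels_last strides: memory order NHWC, so the channel
    dimension has stride 1 and each spatial step jumps over C elements."""
    n, c, h, w = shape
    return (h * w * c, 1, w * c, c)

shape = (4, 3, 2, 5)
print(contiguous_strides(shape))     # (30, 10, 5, 1)
print(channels_last_strides(shape))  # (30, 1, 15, 3)
```

Same shape, same element count, different strides: this is exactly why the shape comparison alone cannot tell the two layouts apart.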
According to https://github.com/pytorch/pytorch/wiki/Developer-FAQ#how-does-out-work-in-pytorch, if out has the correct size it has to be used directly, even if its strides are not what the operation would naturally produce.
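The rule quoted from the Developer FAQ can be sketched as follows. The function and its arguments are illustrative stand-ins, not the real ATen code paths:

```python
# Hedged sketch of the out= contract: an `out` tensor of the correct
# size is written into directly, keeping its existing strides; only a
# wrongly-sized `out` gets resized (which also picks natural strides).

def strides_after_write(out_sizes, out_strides, expected_sizes, natural_strides):
    """Return the strides the result tensor ends up with."""
    if out_sizes == expected_sizes:
        # Correct size: use `out` as-is, even with unusual strides.
        return out_strides
    # Wrong size: resize first, which installs the natural strides.
    return natural_strides

print(strides_after_write((4, 3), (1, 4), (4, 3), (3, 1)))  # (1, 4) -- kept
print(strides_after_write((2, 2), (2, 1), (4, 3), (3, 1)))  # (3, 1) -- resized
```

Under this reading, the memory-format check belongs only on the resize path, which is what the follow-up commit below adjusts.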
Force-pushed: ccb669c to 7f1f9dc
* update condition to conform to out= behaviour
There looks to be a merge conflict in CI. Pushing a new commit.
facebook-github-bot
left a comment
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…e. (pytorch#49937)
Summary: Fixes pytorch#49878
Pull Request resolved: pytorch#49937
Reviewed By: mruberry
Differential Revision: D25851564
Pulled By: ngimel
fbshipit-source-id: 9a78922642d5bace70d887a88fa9e92d88038120
Fixes #49878