[nnc] Strides to Tensor #72962
IvanKobzarev wants to merge 14 commits into gh/ivankobzarev/114/base
Conversation
💊 CI failures summary and remediations: As of commit d35ce16 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
@IvanKobzarev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Differential Revision: [D34589306](https://our.internmc.facebook.com/intern/diff/D34589306)
navahgar left a comment:
The changes in the PR look good to me. But I have a more high-level question: what is the use case we are trying to target here?
The goal here is to remove the assumptions about output memory format in our lowerings, which are based on knowledge of the internal implementation of the ops, and instead take the output format from the outputStrides in the result of tracing the graph for shapes (which also traces strides). For that, all the lowerings need outputStrides added wherever they previously had only outputShapes.
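For intuition, strides are what distinguish an NCHW-contiguous layout from channels-last (NHWC) memory for the same logical shape, which is why shapes alone are not enough for the lowerings. A minimal sketch, not taken from this PR (both helper names are hypothetical):

```python
# Hypothetical helpers: compute element strides for a logical NCHW shape
# in the two layouts relevant to NHWC support.

def contiguous_strides(shape):
    # Row-major (NCHW-contiguous): stride of dim i is the product of all
    # later dim sizes; the innermost dim has stride 1.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def channels_last_strides(shape):
    # Channels-last (NHWC physical order) for a logical (N, C, H, W) shape:
    # C is innermost (stride 1), W steps by C, H by C*W, N by C*H*W.
    n, c, h, w = shape
    return [c * h * w, 1, w * c, c]

shape = (4, 3, 23, 34)
print(contiguous_strides(list(shape)))   # [2346, 782, 34, 1]
print(channels_last_strides(shape))      # [2346, 1, 102, 3]
```

Same shape, two different stride tuples: a lowering that only knows outputShapes cannot tell these apart.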
@ZolotukhinM @navahgar this PR is a part of NHWC support.
navahgar left a comment:
Sorry, I missed this earlier. LGTM. Can you please check the failures?
Summary: Pull Request resolved: #72962
Test Plan: Imported from OSS
Reviewed By: ZolotukhinM, cpuhrsch
Differential Revision: D34589306
Pulled By: IvanKobzarev
fbshipit-source-id: ecee5249760ecc0c8b2edb1842b90218899bc944
Hey @IvanKobzarev. This PR is likely to have broken TorchVision. See pytorch/vision#5873 for details.
Minimal code to reproduce the issue:

```python
import torch
from torchvision.transforms import ColorJitter

meth_kwargs = {"brightness": 0.2, "contrast": 0.2, "saturation": 0.2, "hue": 0.2}

# The failure depends on the order of configurations: channels 3, 1, 3, 1.
for channels, device in [(3, "cpu"), (1, "cpu"), (3, "cuda"), (1, "cuda")]:
    print(f"channels={channels}, device={device}")
    torch.random.manual_seed(0)
    t = ColorJitter(**meth_kwargs)
    scripted_fn = torch.jit.script(t)
    batch_tensor = torch.randint(
        0, 256, (4, channels, 23, 34), dtype=torch.uint8, device=device
    )
    torch.manual_seed(12)
    scripted_fn(batch_tensor)
```

Surprisingly, if the order of tested channel counts is changed from 3, 1, 3, 1 to 1, 3, 1, 3, there is no error. More precisely, it seems to fail on these lines:
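One way to see why the order of the tested configurations could matter is through kernel caching. This is a hypothetical sketch, not the actual NNC fuser code (`get_kernel` and `kernel_cache` are made-up names): if compiled kernels were cached by shape alone, a kernel specialized for the strides of the first input seen would be wrongly reused for a later input with the same shape but different strides; keying the cache on (shape, strides) avoids that.

```python
# Hypothetical sketch: cache compiled kernels by (shape, strides) rather
# than by shape alone. Names are illustrative, not from PyTorch.

kernel_cache = {}

def get_kernel(shape, strides):
    """Return a (stand-in) kernel specialized for this layout."""
    key = (tuple(shape), tuple(strides))
    if key not in kernel_cache:
        # Stand-in for real codegen: record the layout we compiled for.
        kernel_cache[key] = f"kernel{key}"
    return kernel_cache[key]

shape = (4, 3, 23, 34)
nchw = get_kernel(shape, (2346, 782, 34, 1))  # contiguous strides
nhwc = get_kernel(shape, (2346, 1, 102, 3))   # channels-last strides
print(nchw == nhwc)  # False: same shape, different layouts, different kernels
```

Under a shape-only key, the second call would silently return the first kernel, producing results that depend on which layout happened to be compiled first.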
@pytorchbot revert this. @IvanKobzarev @datumbox we believe this is causing errors in torchaudio, so we are going to try to revert. If this is wrongfully reverted, please just reopen and re-land.
Reverting PR 72962 failed due to: "Can't revert PR that was landed via phabricator as D34589306".
@zengk95 It is causing errors on Torch Video as well, so I'm fine with having it reverted. I don't think that worked, though, judging from the bot's message.
This reverts commit 9390609. Fixes pytorch/vision#5873. Pull Request resolved: #76332. Approved by: https://github.com/seemethere
@vfdev-5
Summary: This reverts commit 9390609. Fixes pytorch/vision#5873. Pull Request resolved: #76332. Approved by: https://github.com/seemethere
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/1d55518198de14a011e7acd6c604afe512ee28b4
Reviewed By: osalpekar
Differential Revision: D35938180
Pulled By: zengk95
fbshipit-source-id: 15f48235b3b11ce9ca2379781f9e75cd1af39aed
Stack from ghstack (oldest at bottom):
Differential Revision: D34589306