Switch default memory format of `to` (and similar) operators to Preserve #30088
VitalyFedyunin wants to merge 8 commits into gh/VitalyFedyunin/88/base
Conversation
CircleCI build failures summary
As of commit 2c146b9:
Detailed failure analysis
One may explore the probable reasons each build failed interactively on the Dr. CI website. 1 new failure recognized by patterns. The following build failures don't appear to be due to upstream breakage:
| Job | Step | Status |
|---|---|---|
| Build | 🛑 Broken upstream | |
| Build | 🛑 Broken upstream | |
This comment was automatically generated by Dr. CI.
```diff
 static inline Tensor to_impl(const Tensor& self, const TensorOptions& options, bool non_blocking, bool copy, c10::optional<c10::MemoryFormat> optional_memory_format) {
   auto memory_format =
-      optional_memory_format.value_or(MemoryFormat::Contiguous);
+      optional_memory_format.value_or(MemoryFormat::Preserve);
```
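The one-line change above only swaps the default handed to `value_or`. A minimal Python sketch of the resulting dispatch logic (hypothetical names; this is not the actual ATen implementation) shows why the default matters:

```python
from enum import Enum, auto
from typing import Optional

class MemoryFormat(Enum):
    Contiguous = auto()
    ChannelsLast = auto()
    Preserve = auto()

def pick_output_format(self_format: MemoryFormat,
                       optional_memory_format: Optional[MemoryFormat]) -> MemoryFormat:
    """Sketch of how `to_impl` chooses the output layout.

    Mirrors `optional_memory_format.value_or(default)`: an explicitly
    requested format wins; otherwise the default applies. With the old
    default (Contiguous) a channels-last input was silently converted;
    with Preserve the input's own layout is kept.
    """
    memory_format = (optional_memory_format
                     if optional_memory_format is not None
                     else MemoryFormat.Preserve)  # was Contiguous before this PR
    if memory_format is MemoryFormat.Preserve:
        return self_format   # keep the input's strides
    return memory_format     # honor the explicit request

# Channels-last input, no explicit format: layout is now preserved.
assert pick_output_format(MemoryFormat.ChannelsLast, None) is MemoryFormat.ChannelsLast
# An explicit request still wins over the default.
assert pick_output_format(MemoryFormat.ChannelsLast, MemoryFormat.Contiguous) is MemoryFormat.Contiguous
```

With the old default, the first assertion would have produced `Contiguous`, which is exactly the surprising copy the reviewers discuss below.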
Wasn't this broken before? If I didn't pass in a memory format, `memory_format` would be set to Contiguous.
But then `suggest_memory_format` wouldn't be Contiguous, so a new tensor would be created and copied. That doesn't seem like the correct semantics for the function.
Indeed, it looks like (in master) if we have a channels-last tensor and call `.to` with the same dtype etc., we get a contiguous output instead of `self`.
This PR solves the issue.
Is that behavior in 1.4? Should we introduce a patch to fix it?
It would probably be good to fix that in a separate patch; in case this gets reverted (which happens with BC changes), we can have the bug fix in without this.
And it seems worthy of a test :).
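A test along the lines the reviewer suggests (a hypothetical sketch using the public PyTorch API, not the test actually added to the repository) could check that a no-op `.to` keeps channels-last strides and returns `self`:

```python
import torch

def test_to_preserves_channels_last():
    # Channels-last NCHW tensor.
    x = torch.randn(2, 3, 4, 4).contiguous(memory_format=torch.channels_last)
    # Same dtype, no explicit memory_format: should be a no-op returning self.
    y = x.to(torch.float32)
    # With MemoryFormat::Preserve as the default, the layout survives...
    assert y.is_contiguous(memory_format=torch.channels_last)
    # ...and no copy is made.
    assert y.data_ptr() == x.data_ptr()

test_to_preserves_channels_last()
```

Under the old Contiguous default, the first assertion would fail because `.to` would return a freshly copied contiguous tensor.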
@VitalyFedyunin merged this pull request in fde3d70.
ghstack-source-id: 73c809d
Pull Request resolved: pytorch/pytorch#30088

Summary: Pull Request resolved: pytorch#30088
Test Plan: Imported from OSS
Differential Revision: D18624984
Pulled By: VitalyFedyunin
fbshipit-source-id: 54901786d7496c7dce785140b0585ac9093b1d86
Stack from ghstack:

- #31227 Fix `.to` operator to preserve strides when `self` should be returned
- #31131 Add memory_format support to `zeros`, `ones`, `full` (still need tests)

Differential Revision: D18624984