Unify TensorOptions signatures #39611
Closed
smessmer wants to merge 37 commits into gh/smessmer/225/base from
Conversation
A few ops have been taking non-optional ScalarType, Device and Layout. That isn't supported by the hacky wrapper that makes those kernels work with the c10 operator library. This PR unifies the signatures and makes those ops c10-full. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/) [ghstack-poisoned]
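As a rough illustration, this is the kind of signature change involved; the operator name and exact argument list below are hypothetical, not copied from this PR's diff:

```cpp
#include <ATen/core/Tensor.h>
#include <c10/core/Device.h>
#include <c10/core/Layout.h>
#include <c10/core/ScalarType.h>
#include <c10/util/Optional.h>

// Before: the kernel takes ScalarType/Layout/Device non-optionally, which the
// hacky wrapper bridging it to the boxed c10 calling convention can't handle.
at::Tensor example_op_old(const at::Tensor& self,
                          at::ScalarType dtype,
                          at::Layout layout,
                          at::Device device,
                          bool pin_memory);

// After: the kernel takes the unified optional arguments directly, so it can
// be registered as a c10-full operator without the wrapper.
at::Tensor example_op_new(const at::Tensor& self,
                          c10::optional<at::ScalarType> dtype,
                          c10::optional<at::Layout> layout,
                          c10::optional<at::Device> device,
                          c10::optional<bool> pin_memory);
```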
This was referenced Jun 5, 2020
smessmer added a commit that referenced this pull request on Jun 5, 2020
Pull Request resolved: #39611. ghstack-source-id: 105381798. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
💊 CI failures summary and remediations. As of commit fb61874 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
smessmer added a commit that referenced this pull request on Jun 8, 2020
Pull Request resolved: #39611. ghstack-source-id: 105490308. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 9, 2020
Pull Request resolved: #39611. ghstack-source-id: 105560051. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 10, 2020
Pull Request resolved: #39611. ghstack-source-id: 105656331. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 11, 2020
Pull Request resolved: #39611. ghstack-source-id: 105733177. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 23, 2020
Pull Request resolved: #39611. ghstack-source-id: 106426628. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 23, 2020
Pull Request resolved: #39611. ghstack-source-id: 106453931. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 23, 2020
Pull Request resolved: #39611. ghstack-source-id: 106459644. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
smessmer added a commit that referenced this pull request on Jun 26, 2020
Pull Request resolved: #39611. ghstack-source-id: 106709972. Differential Revision: [D21915788](https://our.internmc.facebook.com/intern/diff/D21915788/)
ezyang reviewed on Jun 29, 2020
ezyang approved these changes on Jun 29, 2020
Contributor
NB: this breaks forward compatibility
Contributor
This pull request has been merged in b8d2ccf.
smessmer added a commit that referenced this pull request on Jul 9, 2020
#39611 unified the signatures of some ops taking TensorOptions arguments by making those arguments optional. That has FC implications, but only for models written with a PyTorch version after that change (see the explanation in that PR's description). However, it also changed the default from `pin_memory=False` to `pin_memory=None`, which actually breaks FC for pre-existing models too if they are re-exported with a newer PyTorch, because we materialize default values when exporting. This is bad. This PR reverts that particular part of #39611 to undo the FC breakage. Differential Revision: [D22461661](https://our.internmc.facebook.com/intern/diff/D22461661/) [ghstack-poisoned]
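A toy, stand-alone sketch (not the real serializer) of why materializing defaults at export time turns this default change into an FC break even for unchanged models:

```cpp
#include <iostream>
#include <optional>
#include <string>

// A model that never passes pin_memory explicitly relies on the default.
// Export materializes that default into the serialized call.

// Old schema: the default is False, so re-export emits a concrete bool.
std::string export_with_old_default(std::optional<bool> pin_memory) {
  bool materialized = pin_memory.value_or(false);
  return std::string("empty(..., pin_memory=") +
         (materialized ? "True" : "False") + ")";
}

// New schema: the default is None, so the same unchanged model now emits
// "pin_memory=None", which an old runtime expecting a bool cannot accept.
std::string export_with_new_default(std::optional<bool> pin_memory) {
  if (!pin_memory.has_value()) {
    return "empty(..., pin_memory=None)";
  }
  return std::string("empty(..., pin_memory=") +
         (*pin_memory ? "True" : "False") + ")";
}

int main() {
  std::cout << export_with_old_default(std::nullopt) << "\n";  // old runtimes: fine
  std::cout << export_with_new_default(std::nullopt) << "\n";  // old runtimes: break
}
```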
smessmer added a commit that referenced this pull request on Jul 9, 2020
Pull Request resolved: #41198. ghstack-source-id: 107456654. Differential Revision: [D22461661](https://our.internmc.facebook.com/intern/diff/D22461661/)
smessmer added two further commits that referenced this pull request on Jul 9, 2020, each carrying the same message (Differential Revision: [D22461661](https://our.internmc.facebook.com/intern/diff/D22461661/))
smessmer added a commit that referenced this pull request on Jul 9, 2020
Pull Request resolved: #41198. ghstack-source-id: 107475024. Differential Revision: [D22461661](https://our.internmc.facebook.com/intern/diff/D22461661/)
facebook-github-bot pushed a commit that referenced this pull request on Jul 14, 2020
Summary: Pull Request resolved: #41198. Reverts the change of the `pin_memory` default from `False` to `None` made in #39611, to undo the FC breakage (see the commit message above). ghstack-source-id: 107475024. Test Plan: waitforsandcastle. Reviewed By: bhosmer. Differential Revision: D22461661. fbshipit-source-id: ba2776267c3bba97439df66ecb50be7c1971d20d
Stack from ghstack:
A few ops have been taking non-optional ScalarType, Device and Layout. That isn't supported by the hacky wrapper that makes those
kernels work with the c10 operator library. This PR unifies the signatures and makes those ops c10-full.
This should be backwards compatible because non-optional things are a subtype of optional things in the binary representation on the stack, i.e. if you read an optional from the stack but there was a non-optional, that's fine, so unboxing is BC. Boxing is not BC because it writes to the stack and that's contravariant, but for loading mobile models only the BC of unboxing is important.
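A self-contained toy sketch of that argument (stand-in types, not the real IValue/stack API): a defined optional is laid out like a plain value, so the new optional-reading unboxing accepts anything an old caller could have written, while an old non-optional reader has no valid interpretation for a None that only a new caller could write.

```cpp
#include <cassert>
#include <optional>
#include <variant>

// Toy stand-ins: a boxed argument slot is either None or a concrete payload,
// and a *defined* optional is stored exactly like the plain value.
enum class ScalarType { Float, Double };
struct None {};
using Boxed = std::variant<None, ScalarType>;

// What an old (non-optional) caller writes onto the stack.
Boxed box_nonoptional(ScalarType st) { return st; }

// What the new kernel's unboxing reads: an optional. It handles everything an
// old caller could have written, so unboxing (reading) is backwards compatible.
std::optional<ScalarType> unbox_optional(const Boxed& b) {
  if (std::holds_alternative<None>(b)) return std::nullopt;
  return std::get<ScalarType>(b);
}

int main() {
  // BC direction: an old model's non-optional value is read fine by the
  // optional-reading unboxing.
  Boxed from_old_model = box_nonoptional(ScalarType::Float);
  assert(unbox_optional(from_old_model) == ScalarType::Float);

  // FC direction: only a new model can put None into this slot; an old,
  // non-optional reader would have no way to handle it.
  Boxed from_new_model = None{};
  assert(!unbox_optional(from_new_model).has_value());
}
```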
This PR will break forward compatibility but only in the sense that newly written models might use the new feature of calling the ops with optional values and those new models will not be runnable on old PyTorch versions. Any model existing today will not be FC broken and can be serialized with a new PyTorch to run on older PyTorch versions.
Detailed reasoning:
This doesn't break BC because an IValue holding an optional<ScalarType> that is defined looks exactly the same as an IValue holding a non-optional ScalarType. So any old model that got serialized and expects to call the non-optional operators will be able to call the optional operators, and PyTorch won't be able to tell the difference.
This also doesn't immediately break FC, since any model written today will only call those ops with defined values. So even if a pre-existing model gets re-serialized after my change, the values will be serialized as defined values, and an old PyTorch will be able to call the non-optional version of the operator with them.
However, after this change, users will be able to write new models that call these operators with None values. And if somebody writes such a model, then it will not be loadable on older PyTorch. So in a sense, it is FC breaking.
Differential Revision: D21915788