Merge Tensor and Variable types. #28287
Closed
ezyang wants to merge 27 commits into gh/ezyang/480/base
Conversation
This PR eliminates the static (but not dynamic) distinction between Tensor and Variable. Every Variable is a Tensor; there is no need to static_cast or call the Variable constructor. The dynamic distinction will be eliminated in a later diff.

To do this, I need Tensor to have API parity with Variable. Thanks to the efforts of Will Feng and others, most of the hard work has already been done; I just dump all public methods on Variable into Tensor. After doing this, there are a few places the implementations migrate:

- Some previously inline implementations only reference TensorImpl. These can be placed inline in TensorBody.h.
- Some previously inline implementations reference AutogradMeta. For the time being, AutogradMeta continues to live in variable.h; thus, these implementations must move out-of-line, into Tensor.cpp.
- However, there are also some template methods. Those methods are retained in variable.h.
- Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this naively would have deleted their documentation; instead, that documentation was moved to native_functions.yaml.
- All out-of-line implementations that don't fall under the previous categories get put in Tensor.cpp.
- Private inline methods got turned into non-method helper functions. There was only one of these, _create_cpp_hook.

I had to add a number of new forward declarations (and sometimes non-forward declarations) to Tensor.h.

One API difference is that all Variable methods are now const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story).

I would have preferred to eliminate the dynamic distinction first, but I wanted inline access to AutogradMeta in Tensor, and the AutogradMeta struct references Variable (furthermore, I cannot make it reference Tensor, as we return Variable by mutable reference from grad() to support the "x.grad() = ..." idiom).
Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
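The "faux const-correctness" point can be illustrated with a toy handle type (`ToyImpl`/`ToyTensor` are hypothetical names, not the actual ATen classes): because Tensor has pointer semantics — it is a shared handle to an impl object — a `const` method can still mutate the underlying impl, so marking every method `const` gives up nothing real.

```cpp
#include <memory>

// Toy model of a pointer-semantics handle; illustrative only.
struct ToyImpl {
  bool requires_grad = false;
};

class ToyTensor {
 public:
  ToyTensor() : impl_(std::make_shared<ToyImpl>()) {}

  // `const` here only promises not to reseat the handle; the pointed-to
  // impl remains mutable, which is why handle-level const is "faux".
  void set_requires_grad(bool b) const { impl_->requires_grad = b; }
  bool requires_grad() const { return impl_->requires_grad; }

 private:
  std::shared_ptr<ToyImpl> impl_;
};
```

Even a `const ToyTensor` can have its impl mutated through these methods, mirroring why uniformly-const Variable methods on Tensor are harmless.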
This was referenced Oct 18, 2019
ezyang added a commit that referenced this pull request on Oct 18, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 699c091 Pull Request resolved: #28287
albanD reviewed on Oct 18, 2019
    return Tensor(self_impl_copy);
  }

  /// NOTE: `var.variable_data()` in C++ has the same semantics as `tensor.data`
Collaborator
Does that mean that var.variable_data() is the same as var.detach()?
Contributor
I think the only difference between var.variable_data() (aka tensor.data in Python) and var.detach() (aka tensor.detach() in Python) is that the former doesn't share the version counter, but the latter does.
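That version-counter difference can be sketched with a toy model (hypothetical names, not the real Variable machinery): a detach-like copy shares the source's counter, while a variable_data-like copy gets a fresh one, so later in-place mutations of the source are only visible through the shared counter.

```cpp
#include <memory>

// Toy stand-in for a Variable with a shared version counter.
struct ToyVar {
  std::shared_ptr<int> version = std::make_shared<int>(0);
  void bump() { ++*version; }  // stand-in for an in-place mutation
};

// Like var.detach(): the result shares the source's version counter.
ToyVar detach_like(const ToyVar& v) {
  ToyVar out;
  out.version = v.version;
  return out;
}

// Like var.variable_data() (tensor.data in Python): fresh, independent counter.
ToyVar variable_data_like(const ToyVar& v) {
  (void)v;
  return ToyVar{};
}
```

After an in-place bump on the source, only the detach-like copy observes the new version.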
ezyang added a commit that referenced this pull request on Oct 18, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: f8c9827 Pull Request resolved: #28287
This diff is BC breaking in a few ways:

- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd`.
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization.

Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
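The ADL breakage can be seen in a small sketch (with a hypothetical `torch_autograd` namespace standing in for `torch::autograd`): a `using` alias does not add the aliasing namespace to argument-dependent lookup — only the type's real namespace is searched — so free functions living next to the alias must now be called fully qualified.

```cpp
namespace at {
struct Tensor {};
}

namespace torch_autograd {  // hypothetical stand-in for torch::autograd
using Variable = at::Tensor;  // an alias, not a distinct type

// Declared here, but ADL on a Variable argument only searches at::
// (the type's true namespace), never this one.
inline int backward_count(const Variable&) { return 1; }
}

inline int demo() {
  torch_autograd::Variable v;
  // backward_count(v);                      // would NOT compile: ADL fails
  return torch_autograd::backward_count(v);  // must qualify explicitly
}
```

Before the merge, Variable was a real type in its own namespace, so the unqualified call compiled via ADL; with the alias, only the qualified form works.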
ezyang added a commit that referenced this pull request on Oct 21, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 7ea0c4c Pull Request resolved: #28287
This diff is BC breaking in a few ways:

- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd` (example: `torch/nn/parallel/data_parallel.h`).
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`).

Some other notes:

- I'm not sure what was going on with the old template implementation of `extract_vars`, but I couldn't get the SFINAE version to work. Replacing it with an overloading-based version made it work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
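The overloading-based approach to `extract_vars` can be sketched like this (toy `Tensor` type, hypothetical signatures — not the actual torch implementation): one overload matches a leading Tensor argument and collects it, a catch-all template skips anything else, and recursion peels off one argument at a time. Partial ordering of function templates prefers the more specialized Tensor overload, so no enable_if is needed.

```cpp
#include <utility>
#include <vector>

struct Tensor {};  // toy stand-in for at::Tensor

// Base case: no arguments left to inspect.
inline void extract_vars(std::vector<Tensor>&) {}

// Leading argument is a Tensor: collect it, then recurse on the rest.
template <typename... Rest>
void extract_vars(std::vector<Tensor>& out, const Tensor& t, Rest&&... rest) {
  out.push_back(t);
  extract_vars(out, std::forward<Rest>(rest)...);
}

// Leading argument is anything else: skip it, then recurse on the rest.
template <typename Other, typename... Rest>
void extract_vars(std::vector<Tensor>& out, const Other&, Rest&&... rest) {
  extract_vars(out, std::forward<Rest>(rest)...);
}
```

With a SFINAE formulation, both the "is a Tensor" and "is not a Tensor" cases must be constrained to be mutually exclusive; with overloads, ordinary overload resolution does that selection for free.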
Contributor
Author
This diff is now rebased past my other changes!
Member
CircleCI build failures summary, as of commit 658d692:
This PR eliminates the static distinction between Tensor and Variable. Every Variable is a Tensor; there is no need to static_cast or call the Variable constructor.

To do this, I need Tensor to have API parity with Variable. I have already moved most of the methods I don't want in Tensor off Variable. These implementations are all placed in Tensor.cpp.

One API difference is that all Variable methods are now const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story).

This diff is BC breaking in a few ways:

- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd` (example: `torch/nn/parallel/data_parallel.h`).
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`).

Some other notes:

- I'm not sure what was going on with the old template implementation of `extract_vars`, but I couldn't get the SFINAE version to work. Replacing it with an overloading-based version made it work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
ezyang added a commit that referenced this pull request on Nov 14, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 0b361ad Pull Request resolved: #28287
This PR eliminates the static distinction between Tensor and Variable. Every Variable is a Tensor; there is no need to static_cast or call the Variable constructor.

To do this, I need Tensor to have API parity with Variable. I have already moved most of the methods I don't want in Tensor off Variable. These implementations are all placed in Tensor.cpp.

One API difference is that all Variable methods are now const, so we no longer have faux const-correctness (see zdevito/ATen#27 for the back story).

This diff is BC breaking in a few ways:

- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd` (examples: `torch/nn/parallel/data_parallel.h`)
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`)

Some other notes:

- I'm not sure what was going on with the old template implementation of `extract_vars`, but I couldn't get the SFINAE version to work. Replacing it with an overloading-based version made it work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
ezyang
added a commit
that referenced
this pull request
Nov 15, 2019
…t on Tensor." Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this would have deleted their documentation, so that documentation was moved to native_functions.yaml. This also replaces `current_version` with just `_version`. This is a carved-out portion of #28287, rebased past the Tensor-Variable merge. Signed-off-by: Edward Z. Yang <ezyang@fb.com> Differential Revision: [D18504934](https://our.internmc.facebook.com/intern/diff/D18504934) [ghstack-poisoned]
facebook-github-bot
pushed a commit
that referenced
this pull request
Nov 18, 2019
#29667) Summary: Pull Request resolved: #29667 Some previous implementations are defined in native_functions.yaml. In this case, I don't define them explicitly in Tensor; instead they are placed in VariableTypeManual.cpp. Doing this would have deleted their documentation, so that documentation was moved to native_functions.yaml. This also replaces `current_version` with just `_version`. This is a carved-out portion of #28287, rebased past the Tensor-Variable merge. Signed-off-by: Edward Z. Yang <ezyang@fb.com> Test Plan: Imported from OSS Differential Revision: D18504934 Pulled By: ezyang fbshipit-source-id: be7adf45b637daffe2b0b1631eb31d967525fc31
ezyang
added a commit
to ezyang/pytorch
that referenced
this pull request
Nov 19, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 0d2141e Pull Request resolved: pytorch#29667
ezyang
added a commit
to ezyang/pytorch
that referenced
this pull request
Nov 19, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: f17eaa6 Pull Request resolved: pytorch#28287
ezyang
added a commit
that referenced
this pull request
Nov 20, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 0b43237 Pull Request resolved: #28287
ezyang
added a commit
that referenced
this pull request
Nov 20, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 79ab8d7 Pull Request resolved: #28287
ezyang
added a commit
that referenced
this pull request
Nov 20, 2019
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 67b7804 Pull Request resolved: #28287
xxtEchjovs44
pushed a commit
to xxtEchjovs44/pytorch
that referenced
this pull request
Jan 29, 2020
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: ebf5f22 Pull Request resolved: pytorch/pytorch#28287
This was referenced Feb 13, 2020
Stack from ghstack:
This PR eliminates the static distinction between
Tensor and Variable. Every Variable is a Tensor, no need to static_cast
or call the Variable constructor.
To do this, I need Tensor to have API parity with Variable. I have already
moved most of the methods I don't want in Tensor off Variable.
These implementations are all placed in Tensor.cpp.
One API difference is that all Variable methods now have const, so we no longer
have faux const-correctness (see zdevito/ATen#27 for
back story)
This diff is BC breaking in a few ways:
- Because `torch::autograd::Variable` is now just an alias of `at::Tensor`, ADL for `torch::autograd` functions no longer works; you have to explicitly qualify them with `torch::autograd` (examples: `torch/nn/parallel/data_parallel.h`)
- Because Variable and Tensor are now the same type, code which assumes that they are different types (e.g., for the purposes of templating, or enable_if checks) will not work until you delete the (now) redundant overload/specialization (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`)

Some other notes:

- I'm not sure what was going on with the old template implementation of `extract_vars`, but I couldn't get the SFINAE version to work. Replacing it with an overloading-based version made it work.
Signed-off-by: Edward Z. Yang ezyang@fb.com
Differential Revision: D18571426