Closed
Summary:

1. Reserve

Currently, Reserve allocates new memory and also preserves the old data in the tensor, and Resize relies on this behavior at some call sites, e.g. https://github.com/pytorch/pytorch/blob/master/caffe2/operators/reservoir_sampling.cc#L103, where we should be using Extend. We want to bring the semantics of Reserve more in line with std::vector, i.e. we want it to be a memory-allocation optimization and remove the data-preservation semantics. We'll remove the guarantee that data is preserved after Reserve, and Extend will be the only API that preserves old data when we do an in-place extension of memory. This also helps with the later refactoring to split Storage from Tensor. Also, we'll only pass the outer dimension to Reserve, which means the later dimensions should be set before we call Reserve.

2. Extend/Shrink

Previously, Extend actually meant ExtendBy and Shrink meant ShrinkTo. I would like to add an ExtendTo for convenience and change Shrink to ShrinkTo. Existing calls to Extend are still there; although it actually means "extend by", I think it still makes sense to keep it.

3. Usage Patterns

The expected usage pattern right now is:

```
t->Resize({0, 32, 32, 32});
t->template mutable_data<T>();  // set meta_
t->Reserve(100);
auto* t_data = t->template mutable_data<T>();
// feed data to the tensor using t_data
for (int i = 0; i < 100; ++i) {
  t->Extend(1, 50, &context_);
  // You can continue to use t_data if you have reserved enough space;
  // otherwise, you should call t->template mutable_data<T>() again to
  // get the new data pointer, since Extend will allocate new memory
  // (even though the original data is preserved).
}
```

Reviewed By: ezyang

Differential Revision: D9128147

fbshipit-source-id: 6163fe35fe729e71e1c10ec255011c92b70c60a7
rebuild succeeded.
ezyang added a commit to ezyang/pytorch that referenced this pull request on Aug 6, 2018:
This reverts commit e0d4357.
goodlux pushed a commit to goodlux/pytorch that referenced this pull request on Aug 15, 2018:
Pull Request resolved: pytorch#10261

Reviewed By: ezyang

Differential Revision: D9128147

fbshipit-source-id: e765f6566d73deafe2abeef0b2cc0ebcbfebd096