Conversation
Differential Revision: D9995559 — Differential Versions 58616425 through 58650202
class CAFFE2_API UndefinedTensorImpl final : public TensorImpl {
  UndefinedTensorImpl() : TensorImpl(at::Storage()){};
  UndefinedTensorImpl() : TensorImpl(CPU){};
Differential Revision: D9995559 — Differential Versions 58689897 through 58739233
dzhulgakov
left a comment
wow, we're very close!
auto newCapacity = sizes_;
newCapacity[0] = std::max<size_t>(
    newDims[0], std::ceil(sizes_[0] * (growthPct + 100) / 100));
auto oldData = std::move(storage_.data_ptr());
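The hunk above implements geometric capacity growth: when dim 0 is extended, the tensor reserves at least the requested size, but grows the old size by `growthPct` percent so repeated extends amortize reallocations. A standalone sketch of that policy (hypothetical function name; plain integers in place of the real `sizes_`/`newDims` members) might look like:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Sketch of Caffe2-style geometric growth for Extend(): reserve at least
// newDim0 rows, but never less than the old dim 0 grown by growthPct percent.
size_t grownCapacity(size_t oldDim0, size_t newDim0, double growthPct) {
  return std::max<size_t>(
      newDim0,
      static_cast<size_t>(std::ceil(oldDim0 * (growthPct + 100) / 100)));
}
```

With `growthPct = 40`, extending a 100-row tensor by one row reserves 140 rows, so the next 39 extends need no copy.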
Differential Revision: D9995559 Differential Version: 58826237
template <typename T>
inline T* data() const {
  // TODO: This is bad: it means storage.data<T>() calls only work on
// A global boolean variable to control whether we free memory when a Tensor
// is shrinked to a smaller size. As a result, a Tensor is always going to
// keep the memory allocated for its maximum capacity reshaped to so far.
CAFFE2_DECLARE_bool(caffe2_keep_on_shrink);
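The flag declared above gates whether a shrinking resize releases memory. A sketch of the decision it controls (hypothetical helper name and signature; the real logic lives inside the tensor's resize path) could be:

```cpp
#include <cstddef>

// Sketch of the caffe2_keep_on_shrink policy: when the flag is set, a tensor
// resized below its current capacity keeps the existing allocation, so
// growing back up to the old size later needs no reallocation.
bool needsRealloc(size_t capacityNumel, size_t newNumel, bool keepOnShrink) {
  if (newNumel <= capacityNumel) {
    // Shrinking (or exactly fitting): reallocate only when we are asked to
    // release the surplus memory.
    return !keepOnShrink && newNumel < capacityNumel;
  }
  return true;  // Growing past capacity always reallocates.
}
```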
/**
 * A utility function to convert vector<int> to vector<int64_t>.
 */
inline std::vector<int64_t> ToVectorint64_t(const std::vector<int>& src) {
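The utility quoted above is an element-wise widening copy. One plausible self-contained implementation (the body is an assumption; only the signature appears in the hunk) is:

```cpp
#include <cstdint>
#include <vector>

// Convert vector<int> to vector<int64_t>; the iterator-range constructor
// copies each element, widening int to int64_t implicitly.
inline std::vector<int64_t> ToVectorint64_t(const std::vector<int>& src) {
  return std::vector<int64_t>(src.begin(), src.end());
}
```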
aten/src/ATen/core/TensorImpl.h
Outdated
 * in Caffe2 that stores a contiguous memory with its shape information.
 *
 * The TensorImpl class is essentially a wrapper around a device-specific memory
 * (the device is specified by the Context template argument), and deals with
aten/src/ATen/core/TensorImpl.h
Outdated
 *
 * The TensorImpl class is essentially a wrapper around a device-specific memory
 * (the device is specified by the Context template argument), and deals with
 * the allocation and de-allocation of such memory. We make a simplified
TensorImpl(TensorTypeId type_id, const caffe2::TypeMeta& data_type, Allocator *allocator, bool is_variable);
TensorImpl(Storage&& storage, TensorTypeId type_id, bool is_variable);

explicit TensorImpl(at::Storage storage) : storage_(std::move(storage)), storage_offset_(0) {
aten/src/ATen/core/TensorImpl.h
Outdated
/*
 * Since we removed template from tensor, we now store a static
 * context pointer in tensor, which indicates the type of the tensor.
  storage_ = at::Storage(GetDeviceType(), src.meta());
  data_type_ = src.meta();
}
if (src.size() == -1) {
}
if (src.size() == -1) {
  sizes_.clear();
  numel_ = -1;
 * items is the same, the underlying storage is kept.
 */
template <typename... Ts>
void Resize(Ts... dim_source) {
Differential Revision: D9995559 — Differential Versions 58921808 through 59091385
  return storage_.unsafe_data<T>() + storage_offset_;
}

inline void* data() const {
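The hunk above shows the workaround described in the PR summary: `TensorImpl::data<T>()` delegates to `Storage::unsafe_data<T>()` plus an element offset, sidestepping the `Storage::data<T>()` bug that restricts `T` to registered ScalarTypes. A minimal sketch of the pattern, with a hypothetical `SimpleStorage` standing in for `at::Storage`, is:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for at::Storage: owns a raw byte buffer and hands
// out typed pointers without checking that T is a registered ScalarType
// (mirroring unsafe_data<T>() as opposed to the stricter data<T>()).
struct SimpleStorage {
  std::vector<char> bytes;
  template <typename T>
  T* unsafe_data() { return reinterpret_cast<T*>(bytes.data()); }
};

// Sketch of TensorImpl::data<T>(): typed base pointer plus element offset,
// where storage_offset is counted in elements of T, not bytes.
template <typename T>
T* tensor_data(SimpleStorage& storage, int64_t storage_offset) {
  return storage.unsafe_data<T>() + storage_offset;
}
```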
jerryzh168
left a comment
Approved via ghapprove -s D10051424
Differential Revision: D9995559 Differential Version: 59124130
Summary: Pull Request resolved: pytorch/pytorch#11971
- Switched TensorImpl::data<T>() to use Storage::unsafe_data<T>() to work around an outstanding bug in the Storage::data<T>() implementation, where it only works on Ts which are valid ScalarTypes
- Qualify a bunch of identifiers which still live in the caffe2:: namespace
- strides returns an IntList now
- s/update_strides/update_to_contiguous_strides/
- Correctly compute type_id_ for the Storage-only constructor from Caffe2; this is special-cased to only work for CPU and CUDA dense tensors
- Fix some signed-unsigned comparisons in Caffe2 code (the OSS build for ATen/core has more restrictive warning tests)

Reviewed By: jerryzh168
Differential Revision: D9995559
fbshipit-source-id: 9c74032e011189e1c7e9a98d20f2bd1e25ad2e5c
Stack:
:white_circle: #11970 Make ATen-core and caffe2 mutually recursive / merge template data() 💚
:black_circle: #11971 Merge TensorImpl. 💚