Unify isViewable, handle n-dimensional empty tensors. #8883
gchanan merged 4 commits into pytorch:master
Conversation
1) Unify the two isViewable functions in ATen and TH. 2) Handle n-dimensional empty tensors in the implementation. 3) Clarify some comments. This requires an extra copy in the TH case, but that will go away.
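As a quick illustration of the empty-tensor case, here is a minimal sketch using the public ATen C++ API (the calls are illustrative, not part of this diff):

```cpp
#include <ATen/ATen.h>

int main() {
  // An n-dimensional empty tensor: a non-trivial shape with zero elements.
  at::Tensor t = at::empty({2, 0, 3}, at::kFloat);
  // Viewing must still succeed even though numel() == 0; isViewable has to
  // treat the size-0 dimension consistently instead of rejecting the reshape.
  at::Tensor v = t.view({0, 6});
  return static_cast<int>(v.numel());  // 0
}
```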
Can we unify the THC version somehow too? https://github.com/pytorch/pytorch/blob/master/aten/src/THC/generic/THCTensor.cpp#L242
Why is there a THC version? :(
Yeah, sorry, I forgot about that one earlier. :( We basically duplicated everything for TH & THC, so...
@pytorchbot retest this please.
We should probably unify THTensor/THCTensor double-time, but there seem to be some yaks in the way...
aten/src/TH/THTensor.cpp
```cpp
// On a high level,
// 1. separate tensor->size into chunks of dimensions, where the dimensions are
//    ``contiguous'' in each chunk, i.e., stride[i] = size[i+1] * stride[i+1]
// 2. view_size must be able to be separated into the same number of chunks as
//    tensor->size was separated into, where each chunk of view_size has a
//    matching ``numel'', i.e., number of subspaces, as the corresponding chunk
//    of tensor->size.
at::optional<std::vector<int64_t>>
```
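To make the chunk-matching idea concrete, here is a self-contained sketch of the algorithm the comment describes, written against plain std::vector with a made-up name (compute_view_strides); the real unified function in THTensor.cpp differs in details such as the stride choice for empty tensors:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <numeric>
#include <optional>
#include <vector>

// Returns strides that let a tensor with (sizes, strides) be viewed as
// view_sizes, or std::nullopt if no such view exists without a copy.
std::optional<std::vector<int64_t>> compute_view_strides(
    const std::vector<int64_t>& sizes,      // tensor->size
    const std::vector<int64_t>& strides,    // tensor->stride
    const std::vector<int64_t>& view_sizes) {
  if (sizes.empty()) {
    // Scalar (0-dim) tensor: any one-element view works; use all-1 strides.
    return std::vector<int64_t>(view_sizes.size(), 1);
  }
  std::vector<int64_t> view_strides(view_sizes.size());
  const int64_t numel = std::accumulate(
      sizes.begin(), sizes.end(), int64_t{1}, std::multiplies<int64_t>());
  if (numel == 0) {
    // Simplification: with zero elements any strides index nothing, so hand
    // back contiguous strides for view_sizes (size-0 dims counted as 1).
    int64_t stride = 1;
    for (int64_t d = (int64_t)view_sizes.size() - 1; d >= 0; d--) {
      view_strides[d] = stride;
      stride *= std::max<int64_t>(view_sizes[d], 1);
    }
    return view_strides;
  }
  int64_t view_d = (int64_t)view_sizes.size() - 1;
  int64_t chunk_base_stride = strides.back();  // stride of one subspace of the chunk
  int64_t tensor_numel = 1;  // elements accumulated in the current chunk of sizes
  int64_t view_numel = 1;    // elements matched so far against view_sizes
  for (int64_t tensor_d = (int64_t)sizes.size() - 1; tensor_d >= 0; tensor_d--) {
    tensor_numel *= sizes[tensor_d];
    // A chunk ends when there is no next-outer dim, or when the next-outer dim
    // is not ``contiguous'' with it, i.e. stride[d-1] != size[d] * stride[d].
    const bool chunk_ends = (tensor_d == 0) ||
        (sizes[tensor_d - 1] != 1 &&
         strides[tensor_d - 1] != tensor_numel * chunk_base_stride);
    if (!chunk_ends) continue;
    // Consume view dims until their numel matches this chunk's numel.
    while (view_d >= 0 && (view_numel < tensor_numel || view_sizes[view_d] == 1)) {
      view_strides[view_d] = view_numel * chunk_base_stride;
      view_numel *= view_sizes[view_d];
      view_d--;
    }
    if (view_numel != tensor_numel) {
      return std::nullopt;  // chunk numels disagree: the view needs a copy
    }
    if (tensor_d > 0) {  // start the next chunk
      chunk_base_stride = strides[tensor_d - 1];
      tensor_numel = 1;
      view_numel = 1;
    }
  }
  if (view_d != -1) return std::nullopt;  // not every view dim was matched
  return view_strides;
}
```

For example, sizes {2, 3, 4} with strides {12, 4, 1} viewed as {6, 4} yields strides {4, 1}, while a transposed {2, 3} tensor with strides {1, 2} viewed as {6} returns std::nullopt, since the flattened chunk is not contiguous in memory.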
ezyang left a comment:
Always hard to tell what has changed when you move code around and make changes at the same time :)
@pytorchbot retest this please.
* upstream/master: (42 commits)
  [c10d] No default device for ProcessGroupGloo (pytorch#8888)
  Fix default values for affine= in the docstrings of InstanceNormXd (pytorch#8895)
  Stop making dynamic allocations of PinnedMemoryAllocator. (pytorch#8896)
  [C++ API] Rework optimization package (pytorch#8815)
  Mention MPICH_MAX_THREAD_SAFETY=multiple. (pytorch#8580)
  Unify isViewable, handle n-dimensional empty tensors. (pytorch#8883)
  Add pos_weight argument to nn.BCEWithLogitsLoss (pytorch#5660) (pytorch#6856)
  [build] Enable clang-specific warnings only when using clang (pytorch#8869)
  Fix cmake cudnn autodetection (pytorch#8891)
  [c10d] Fix link order for building C++ tests (pytorch#8889)
  directly add_subdirectory(nanopb) from torch CMakeLists (pytorch#8870)
  [C++ API] Bag of fixes (pytorch#8843)
  [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (pytorch#8863)
  Create avg_pool1d in ATen (pytorch#8880)
  throw error when grid_sample is passed unsupported mode (pytorch#8884)
  Allow autograd to work even when the shape of values cannot be determined (pytorch#8641)
  Make at::Tensor::to() const (pytorch#8839)
  [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) onnx/onnx@458c521
  [Caffe2] Fix gradient_check on in-place ops (pytorch#8828)
  Fix as_strided_backward (pytorch#8721)
  ...