Conversation
Force-pushed from 2d1bd9c to c25be47.
Force-pushed from 22ebcf9 to c1d0f50.
@fehiepsi this will fix the gradcheck :)
* check for input overlapping too
* [doc] clarify gradcheck behavior when input is overlapping
* longer note
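For anyone following along, here is a minimal sketch of the kind of check this fixes: gradcheck over an `as_strided` output that overlaps itself (the function `fn` and the shapes are illustrative, not taken from the PR):

```python
import torch
from torch.autograd import gradcheck

# Dense double-precision input, as gradcheck expects.
x = torch.randn(6, dtype=torch.float64, requires_grad=True)

def fn(t):
    # Overlapping view: rows cover storage indices (0,1,2) and (2,3,4),
    # so element 2 of the base appears in both rows.
    return t.as_strided((2, 3), (2, 1))

assert gradcheck(fn, (x,))
```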
```yaml
# The actual implementations live in Declarations.cwrap. These are just to
# provide default values for storage_offset=self.storage_offset()
- func: as_strided(Tensor self, IntList size, IntList stride) -> Tensor
```
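An illustrative sketch of the default described in that comment (values made up): omitting `storage_offset` keeps `self.storage_offset()` rather than resetting it to zero:

```python
import torch

base = torch.arange(10.)
view = base[2:]                       # view.storage_offset() == 2
d = view.as_strided((4,), (2,))       # offset defaults to view.storage_offset()
e = view.as_strided((4,), (2,), 2)    # the same thing, spelled explicitly
assert torch.equal(d, e)              # both read storage indices 2, 4, 6, 8
```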
```yaml
self: grad
src: grad.gather(dim, index)
```
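A quick sketch checking that derivative entry (shapes and values are illustrative): for `out = self.scatter(dim, index, src)` with non-repeating indices per row, the gradient flowing back to `src` is `grad.gather(dim, index)`:

```python
import torch

self_t = torch.zeros(2, 4, dtype=torch.float64)
src = torch.randn(2, 3, dtype=torch.float64, requires_grad=True)
index = torch.tensor([[0, 1, 2], [3, 2, 1]])

out = self_t.scatter(1, index, src)   # out[i][index[i][j]] = src[i][j]
grad = torch.randn_like(out)
out.backward(grad)
assert torch.allclose(src.grad, grad.gather(1, index))
```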
```yaml
- name: select(Tensor self, int64_t dim, int64_t index)
```
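For context, a small sketch of what the `select` entry differentiates (example values made up): the backward embeds the selected slice's gradient into a zero tensor shaped like the input:

```python
import torch

x = torch.randn(3, 4, dtype=torch.float64, requires_grad=True)
x.select(0, 1).sum().backward()   # take row 1, then reduce
print(x.grad)                     # row 1 is all ones, everything else zero
```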
```cpp
src.as_strided(sizes, strides, storage_offset - base.storage_offset()).copy_(grad);
return src;
```
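A Python rendition of the snippet above, as a sketch only (the helper name `as_strided_backward_simple` is mine, not the PR's): write `grad` through the same strided window of a zero-filled tensor. This is the simple path; it is only valid when the window does not overlap itself, since `copy_` through an overlapping view would not accumulate:

```python
import torch

def as_strided_backward_simple(grad, base, sizes, strides, storage_offset):
    # Simplified: a zero tensor of the base's shape, then copy grad
    # into the strided window the forward view read from.
    src = torch.zeros(base.shape, dtype=grad.dtype)
    src.as_strided(sizes, strides, storage_offset - base.storage_offset()).copy_(grad)
    return src

base = torch.arange(6.)
print(as_strided_backward_simple(torch.ones(2, 2), base, (2, 2), (3, 1), 0))
# tensor([1., 1., 0., 1., 1., 0.])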
```rst
This check will likely fail if :attr:`input` is of less precision, e.g.,
``FloatTensor``.

.. warning::
```
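A sketch of that caveat (illustrative, not from the docs): the same gradcheck that is reliable in `float64` can fail in `float32` purely from finite-difference error:

```python
import torch
from torch.autograd import gradcheck

x64 = torch.randn(5, dtype=torch.float64, requires_grad=True)
assert gradcheck(torch.tanh, (x64,))   # double precision: reliable

x32 = torch.randn(5, dtype=torch.float32, requires_grad=True)
# gradcheck(torch.tanh, (x32,))        # may fail purely from numerical noise
```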
Thanks for notifying me @ssnl!
```cpp
// NOTE [ as_strided Backward ]
//
// `storage_offset` is ignored for simplicity in this note. If you just want the
```
Since all references of these notes are within this file, I'll just keep them here. When we actually pull in
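A small example of the behavior the note documents (values are illustrative): when an `as_strided` view visits the same storage location twice, its backward must accumulate both gradient contributions there:

```python
import torch

x = torch.arange(4., dtype=torch.float64, requires_grad=True)
y = x.as_strided((2, 2), (1, 1))   # rows (x0, x1) and (x1, x2): x1 appears twice
y.sum().backward()
print(x.grad)                      # [1., 2., 1., 0.] -- x1 counted twice
```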
* upstream/master: (42 commits)
  * [c10d] No default device for ProcessGroupGloo (pytorch#8888)
  * Fix default values for affine= in the docstrings of InstanceNormXd (pytorch#8895)
  * Stop making dynamic allocations of PinnedMemoryAllocator. (pytorch#8896)
  * [C++ API] Rework optimization package (pytorch#8815)
  * Mention MPICH_MAX_THREAD_SAFETY=multiple. (pytorch#8580)
  * Unify isViewable, handle n-dimensional empty tensors. (pytorch#8883)
  * Add pos_weight argument to nn.BCEWithLogitsLoss (pytorch#5660) (pytorch#6856)
  * [build] Enable clang-specific warnings only when using clang (pytorch#8869)
  * Fix cmake cudnn autodetection (pytorch#8891)
  * [c10d] Fix link order for building C++ tests (pytorch#8889)
  * directly add_subdirectory(nanopb) from torch CMakeLists (pytorch#8870)
  * [C++ API] Bag of fixes (pytorch#8843)
  * [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (pytorch#8863)
  * Create avg_pool1d in ATen (pytorch#8880)
  * throw error when grid_sample is passed unsupported mode (pytorch#8884)
  * Allow autograd to work even when the shape of values cannot be determined (pytorch#8641)
  * Make at::Tensor::to() const (pytorch#8839)
  * [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) onnx/onnx@458c521
  * [Caffe2] Fix gradient_check on in-place ops (pytorch#8828)
  * Fix as_strided_backward (pytorch#8721)
  * ...
```cpp
Tensor slice_backward(Tensor grad, IntList input_sizes, int64_t dim, int64_t start, int64_t end, int64_t step) {
  auto grad_input = at::zeros(input_sizes, grad.type());
```
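A Python sketch of `slice_backward` (my translation, not the actual ATen code): embed `grad` into a zero tensor of the original input's size at the sliced positions:

```python
import torch

def slice_backward(grad, input_sizes, dim, start, end, step):
    grad_input = torch.zeros(input_sizes, dtype=grad.dtype)
    idx = [slice(None)] * grad_input.dim()
    idx[dim] = slice(start, end, step)   # same positions the forward slice read
    grad_input[tuple(idx)] = grad
    return grad_input

print(slice_backward(torch.ones(2), (5,), 0, 1, 4, 2))
# tensor([0., 1., 0., 1., 0.])
```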
* make as_strided safer
* patching as_strided; and stop using it in backward
* Test a simple case in as_strided_backward
* a long note
* remove boundary checks of as_strided; implement slow path
* wip
* fix as_strided backward when input is overlapping
* check for input overlapping too
* [doc] clarify gradcheck behavior when input is overlapping
* longer note
* fix a deprecation warning in test_autograd
* nits
Partially fixes #8626
fixes #8649
fixes the immediate issue in #8577, but we should make these things a hard error. @ezyang commented that this is doable in `setUp` and `tearDown`. @colesbury