Implements Cumprod function for autograd (#1439)
apaszke merged 14 commits into pytorch:master from martinarjovsky:master
Conversation
torch/autograd/_functions/tensor.py (Outdated)

```python
if (input == 0).any():
    ones = input.index_select(self.dim,
                              LT(range(1))).fill_(1).clone()
```
torch/autograd/_functions/tensor.py (Outdated)

```python
LT = torch.LongTensor

if (input == 0).any():
    ones = input.index_select(self.dim,
```
torch/autograd/_functions/tensor.py (Outdated)

```python
# At this point ommitted_products is the same size
# as input, except on the dimension dim where it's
# dim_size - k
assert ommitted_products.size()[self.dim] == dim_size - k, \
```
```python
    torch.sum(grad_output * ommitted_products, dim=self.dim))

else:
    output = torch.cumprod(input, dim=self.dim)
```
```python
class Cumprod(Function):

    def __init__(self, dim):
```
torch/autograd/_functions/tensor.py (Outdated)

```python
    LT(range(k))), dim=self.dim)

prods_from_k_plus_1 = torch.cumprod(input.index_select(
    self.dim, LT(range(k+1, dim_size))), dim=self.dim)
```
torch/autograd/_functions/tensor.py (Outdated)

```python
    LT(range(k))), dim=self.dim)

ommitted_products = prods_until_k
else:  # k == 0
```
torch/autograd/_functions/tensor.py (Outdated)

```python
else:
    LT = torch.LongTensor

if (input == 0).any():
```
torch/autograd/_functions/tensor.py (Outdated)

```python
prods_from_k_plus_1 = torch.cumprod(input.index_select(
    self.dim, LT(range(k+1, dim_size))), dim=self.dim)

ommitted_products = prods_until_k.expand_as(
```
torch/autograd/_functions/tensor.py (Outdated)

```python
# as input, except on the dimension dim where it's
# dim_size - k
assert ommitted_products.size()[self.dim] == dim_size - k, \
    "Dimension error"
```
torch/autograd/_functions/tensor.py (Outdated)

```python
    "Dimension error"

if k != 0:
    size_to_expand = [l for l in input.size()]
```
torch/autograd/_functions/tensor.py (Outdated)

```python
size_to_expand = [l for l in input.size()]
size_to_expand[self.dim] = k  # Adding zeros to missing dimensions
size_to_expand = torch.Size(size_to_expand)
expanded_zeros = zeros.expand(size_to_expand)
```
Did we consider using the implementation of

```python
def cumprod(x):
    return exp(cumsum(log(x)))
```

Maybe it's very unstable computing it like that?
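A quick sketch of that suggestion (the name `cumprod_via_logs` is an assumption, not code from this PR): the log-space trick reproduces `torch.cumprod` for strictly positive inputs, but `log` is undefined at zero and NaN for negative entries, so it cannot cover the general case.

```python
import torch

# Hypothetical log-space cumprod from the suggestion above.
# Only valid for strictly positive inputs: log(0) = -inf and
# log of a negative number is NaN.
def cumprod_via_logs(x, dim=0):
    return torch.exp(torch.cumsum(torch.log(x), dim=dim))

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(cumprod_via_logs(x))  # matches torch.cumprod(x, 0) up to float error
print(cumprod_via_logs(torch.tensor([-1.0, 2.0])))  # NaNs: log(-1) is undefined
```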
@fmassa I'd expect it to be very very unstable for x close to 0.
@apaszke I'm not so sure that it would be very very unstable for x close to 0; the forward pass doesn't seem to be that sensitive.
@fmassa This would have two problems. The case with nonzero inputs (including x < 0) is handled in two lines already by doing

```python
output = torch.cumprod(input, dim=self.dim)
return reverse_cumsum(output * grad_output, dim=self.dim) / input
```

It's the case with zeros that's hard :)
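For context, the zero-free backward above can be sketched end to end for the 1-D case. This is a minimal sketch, not the PR's code: `reverse_cumsum` here is built from `torch.flip`, which exists in modern PyTorch, rather than the helper defined in this PR.

```python
import torch

# Sketch of the zero-free backward path quoted above.
def reverse_cumsum(x, dim):
    # cumulative sum taken from the end of the dimension
    return x.flip(dim).cumsum(dim).flip(dim)

def cumprod_backward_no_zeros(input, grad_output, dim):
    output = torch.cumprod(input, dim=dim)
    # d output_j / d x_k = output_j / x_k for j >= k (valid only if x_k != 0)
    return reverse_cumsum(output * grad_output, dim=dim) / input

x = torch.tensor([2.0, 3.0, 4.0], requires_grad=True)
torch.cumprod(x, dim=0).backward(torch.ones(3))
manual = cumprod_backward_no_zeros(x.detach(), torch.ones(3), dim=0)
print(manual)  # tensor([16., 10.,  6.]), same as x.grad
```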
Just fixed (or at least attempted to fix) the issues. I added a detailed comment describing the algorithm for the backward pass, but it's likely way too much. Should I take it out and put it in the PR description instead?
torch/autograd/_functions/tensor.py (Outdated)

```python
idx = torch.LongTensor(range(dim_size))

ones = input.select(self.dim, 0).unsqueeze(self.dim).clone().fill_(1)
zeros = ones.clone().fill_(0)
```
torch/autograd/_functions/tensor.py (Outdated)

```python
ret = torch.cumsum(-x, dim=dim)

end_idx = ret.size(dim) - 1
ret_sum = ret.narrow(dim, end_idx, 1)
```
torch/autograd/_functions/tensor.py (Outdated)

```python
return grad_input


def reverse_cumsum(x, dim):
```
torch/autograd/_functions/tensor.py (Outdated)

```python
    self.dim
)

grad_input.index_copy_(
```
torch/autograd/_functions/tensor.py (Outdated)

```python
else:
    idx = torch.LongTensor(range(dim_size))

ones = input.select(self.dim, 0).unsqueeze(self.dim).clone().fill_(1)
```
torch/autograd/_functions/tensor.py (Outdated)

```python
if input.is_cuda:
    idx = torch.cuda.LongTensor(range(dim_size))
else:
    idx = torch.LongTensor(range(dim_size))
```
torch/autograd/_functions/tensor.py (Outdated)

```python
ommitted_products = torch.cat(
    (zeros.expand(size_to_expand), ommitted_products),
    self.dim
)
```
@pytorchbot add to whitelist
@apaszke Anything else I should do? :)
Sorry, we're all busy with v0.2 features. I'll try to review it soon and let you know, but it should be good.
torch/autograd/_functions/tensor.py (Outdated)

```python
input, = self.saved_tensors
dim_size = input.size(self.dim)
if dim_size == 1:
    return grad_output.clone()
```
torch/autograd/_functions/tensor.py (Outdated)

```python
ones_size = list(input.size())
ones_size[self.dim] = 1
ones = input.new().resize_(ones_size).fill_(1)
```
torch/autograd/_functions/tensor.py (Outdated)

```python
ones_size = list(input.size())
ones_size[self.dim] = 1
ones = input.new().resize_(ones_size).fill_(1)
zeros = ones * 0
```
torch/autograd/_functions/tensor.py (Outdated)

```python
grad_input.select(self.dim, k).copy_(torch.sum(
    grad_output[dim_padding + (slice(k, None),)] * omitted_products,
    dim=self.dim).squeeze())
```
Thank you!
In reference to #1095 :)

The algorithm for nonzero elements is straightforward and pretty fast, but when zero elements are present it is pretty hard to vectorize (and O(n^2) with n elements along the dimension of the input where the cumprod is done), hence the code is a bit complex.

A set of extra tests that pass is available here: https://gist.github.com/martinarjovsky/1d4679c54b2fc5cddf73cdd45b359c9f
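The quantity built per index k in the diff fragments above is "the cumulative products with the k-th factor omitted", which stays finite even when the input contains zeros. A naive 1-D reference (even slower than the vectorized O(n^2) version in the PR, and purely illustrative) makes the zero case concrete:

```python
import torch

# Naive 1-D reference for the cumprod backward that also handles zeros:
# grad_k = sum_{j >= k} grad_output_j * prod_{i <= j, i != k} x_i
def cumprod_backward_ref(x, grad_output):
    n = x.numel()
    grad = torch.zeros_like(x)
    for k in range(n):
        for j in range(k, n):
            prod = 1.0
            for i in range(j + 1):
                if i != k:  # omit the k-th factor
                    prod *= x[i].item()
            grad[k] += grad_output[j] * prod
    return grad

x = torch.tensor([2.0, 0.0, 4.0])
g = cumprod_backward_ref(x, torch.ones(3))
print(g)  # tensor([ 1., 10.,  0.]) -- finite even with a zero in the input
```

Dividing the forward output by the input, as in the zero-free path, would instead produce NaN at the zero entry, which is exactly why the separate branch exists.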