Implement nn.functional.interpolate based on upsample #8591
ailzhang wants to merge 24 commits into pytorch:master from
Conversation
does it enable bilinear interpolation for ByteTensors as well? (#5580)
ssnl
left a comment
Thanks for doing this. The kernel code mostly looks fine, but I didn't spend too much time on it. Here are some general comments:
- The kernel names are wrong: the NN kernels shouldn't have `interp` in them, and all kernels should carry the correct dimension number.
- For the special copying case, can we use `THTensor_(copy)` instead of a nested loop?
@vadimkantorov Actually, all methods like pooling/interpolation don't support byte/half/long tensors. I would prefer to keep that in a separate diff, or even do it along with the TH→ATen refactor. cc: @soumith @ssnl @fmassa What do you think?

Yeah, I think you should keep those out of this diff.
f216d18 to d0f1935 (force push)
fmassa
left a comment
Hi,
I've had a quick look and I think there are a few memory leaks that should be fixed.
Also, given that we support 1d, 2d and 3d data, I wonder if we should name it resize_images, or maybe something a bit more generic?
Not for this PR
For the future, we might want to specify which dimensions we want to interpolate, so that this is not specific to a given data format.
For example
a = torch.rand(1, 3, 100, 100)
A = torch.nn.functional.interpolate(a, axis=[-2, -1], mode='nearest')
# NHWC format
a = torch.rand(1, 100, 100, 3)
A = torch.nn.functional.interpolate(a, axis=[1, 2])
# non-batch mode, for single image
a = torch.rand(3, 100, 100)
A = torch.nn.functional.interpolate(a, axis=[-2, -1])
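The `axis` argument above is a proposal for the future, not something this PR implements. As an illustration only, here is a dependency-free sketch of nearest-neighbor resizing along a chosen axis; the function names and the nested-list "tensor" representation are hypothetical stand-ins, not PyTorch code:

```python
import math

def nearest_resize_1d(row, out_size):
    """Nearest-neighbor resize of a flat list to out_size elements,
    using the scale-based mapping src = floor(dst * in_size / out_size)."""
    in_size = len(row)
    return [row[int(math.floor(i * in_size / out_size))] for i in range(out_size)]

def nearest_resize_axis(data, out_size, axis):
    """Recursively descend a nested list and resize along `axis`
    (non-negative axes only, for simplicity of the sketch)."""
    if axis == 0:
        # At the target level, pick whole sub-elements by nearest index.
        return nearest_resize_1d(data, out_size)
    return [nearest_resize_axis(sub, out_size, axis - 1) for sub in data]

# Resize a 2x2 "image" to 2x4 along the last axis:
wide = nearest_resize_axis([[1, 2], [3, 4]], 4, 1)  # [[1, 1, 2, 2], [3, 3, 4, 4]]
```

An axis-generic API would make the op independent of the NCHW/NHWC data-format question raised above, since the caller names the spatial dimensions explicitly.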
Another naming suggestion is

@pytorchbot retest this please
fmassa
left a comment
I think this looks good to merge, thanks Ailing!
I have a bunch of minor nits that are not merge blockers, so feel free to postpone addressing them in a follow-up PR.
I have the impression that the ROCm build failure is unrelated (https://ci.pytorch.org/jenkins/job/pytorch-builds/job/py2-clang3.8-rocmnightly-ubuntu16.04-build/1826//console), but it might be better for others to have a look as well.
@pytorchbot retest this please (let's see if it works with me :-) )

@pytorchbot retest this please
facebook-github-bot
left a comment
@ailzhang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
e43fe9e to f99c543 (force push)
facebook-github-bot
left a comment
@ailzhang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: This PR addresses #5823.
* Fix docstring: upsample doesn't support LongTensor.
* Enable float-scale up- and downsampling for linear/bilinear/trilinear modes (following SsnL's commit).
* Enable float-scale up- and downsampling for nearest mode. Note that our implementation differs slightly from TF in that there is no "align_corners" concept in this mode.
* Add a new interpolate function API to replace upsample, and add a deprecation warning for upsample.
* Add an "area" mode, which is essentially adaptive average pooling, to resize_image.
* Add test cases for interpolate in test_nn.py.
* Add a few comments to help understand the *linear interpolation code.
* Only the *cubic mode is missing from the resize_images API, which is pretty useful in practice; it's labeled as a hackamonth task in #1552. I discussed with SsnL that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.
* After this change, the files named *Upsampling*.c work for both up- and downsampling. I could rename the files if needed.

Differential Revision: D8729635
Pulled By: ailzhang
fbshipit-source-id: a98dc5e1f587fce17606b5764db695366a6bb56b
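The summary mentions adding a deprecation warning for `upsample` while introducing `interpolate`. A minimal sketch of that wrapper pattern, with stub functions standing in for the real PyTorch code (the actual `torch.nn.functional.interpolate` signature is much richer):

```python
import warnings

def interpolate(input, scale_factor=None, mode='nearest'):
    # Stub standing in for the new API; the real implementation also
    # accepts size, align_corners, and additional modes.
    return input

def upsample(input, scale_factor=None, mode='nearest'):
    # Deprecated alias: emit a warning pointing callers at the caller's
    # line (stacklevel=2), then forward to the new function.
    warnings.warn("nn.functional.upsample is deprecated; "
                  "use nn.functional.interpolate instead.",
                  DeprecationWarning, stacklevel=2)
    return interpolate(input, scale_factor=scale_factor, mode=mode)
```

This keeps old call sites working while steering users toward the new name.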
This was merged by facebook-github-bot but somehow didn't close automatically. Closing.
This PR addresses #5823.
* Fix docstring: upsample doesn't support LongTensor.
* Enable float-scale up- and downsampling for linear/bilinear/trilinear modes (following @ssnl's commit).
* Enable float-scale up- and downsampling for nearest mode. Note that our implementation differs slightly from TF in that there is no "align_corners" concept in this mode.
* Add a new interpolate function API to replace upsample, and add a deprecation warning for upsample.
* Add an "area" mode, which is essentially adaptive average pooling, to resize_image.
* Add test cases for interpolate in test_nn.py.
* Add a few comments to help understand the *linear interpolation code.
* Only the *cubic mode is missing from the resize_images API, which is pretty useful in practice; it's labeled as a hackamonth task in [Feature Request] Unstructured Interpolation for PyTorch Tensor #1552. I discussed with @ssnl that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.
* After this change, the files named *Upsampling*.c work for both up- and downsampling. I could rename the files if needed.

cc: @ssnl @fmassa
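On the nearest-mode point above: with no align_corners concept, a scale-based source-index mapping is all that is needed. The following is an illustration of that mapping, not necessarily the exact kernel code from this PR:

```python
import math

def nearest_src_index(dst_index, in_size, out_size):
    # Scale-based nearest mapping: src = floor(dst * in_size / out_size),
    # clamped to the valid range. Unlike the *linear modes, there is no
    # align_corners variant here, which is where this differs from TF.
    return min(int(math.floor(dst_index * in_size / out_size)), in_size - 1)

# Upsampling a length-3 row to length 6 repeats each source element twice:
up = [nearest_src_index(d, 3, 6) for d in range(6)]    # [0, 0, 1, 1, 2, 2]
# Downsampling length 6 to length 3 keeps every other element:
down = [nearest_src_index(d, 6, 3) for d in range(3)]  # [0, 2, 4]
```

Because the same formula handles in_size < out_size and in_size > out_size, one kernel covers both up- and downsampling, matching the note that the *Upsampling*.c files now serve both directions.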