
caffe2: UpsampleBilinear CUDA implementation #12843

Closed

d4l3k wants to merge 1 commit into pytorch:master from d4l3k:export-D10453776

Conversation

d4l3k (Member) commented Oct 18, 2018

Summary:
This adds a CUDA implementation of the UpsampleBilinearOp and UpsampleBilinearGradientOp operators.

The CUDA code is based on the corresponding ResizeNearest operators, with the bilinear interpolation logic taken from the CPU implementation.

Differential Revision: D10453776
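
The patch body isn't shown in this conversation, so as a rough illustration of the approach described above, here is a minimal sketch of a forward bilinear upsampling kernel in that style. This is a hypothetical reconstruction, not the PR's code: the kernel name, parameter list, and grid-stride loop are assumptions, and it assumes the align-corners convention (ratio r = (in_dim - 1) / (out_dim - 1) when out_dim > 1, else 0) used by the caffe2 CPU bilinear logic.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch: one thread per output element, NCHW layout.
// rheight/rwidth are the align-corners ratios (in_dim - 1) / (out_dim - 1),
// computed on the host and passed in.
__global__ void UpsampleBilinearKernel(
    const int size, // N * C * output_height * output_width
    const int input_height,
    const int input_width,
    const int output_height,
    const int output_width,
    const float rheight,
    const float rwidth,
    const float* X,
    float* Y) {
  for (int index = blockIdx.x * blockDim.x + threadIdx.x; index < size;
       index += blockDim.x * gridDim.x) {
    const int w2 = index % output_width;                   // output x
    const int h2 = (index / output_width) % output_height; // output y
    const int nc = index / (output_width * output_height); // fused N*C

    // Top-left corner of the 2x2 input neighborhood plus interpolation
    // weights; h1p/w1p clamp the neighborhood at the bottom/right edges.
    const float h1r = rheight * h2;
    const int h1 = static_cast<int>(h1r);
    const int h1p = (h1 < input_height - 1) ? 1 : 0;
    const float h1lambda = h1r - h1;
    const float h0lambda = 1.0f - h1lambda;

    const float w1r = rwidth * w2;
    const int w1 = static_cast<int>(w1r);
    const int w1p = (w1 < input_width - 1) ? 1 : 0;
    const float w1lambda = w1r - w1;
    const float w0lambda = 1.0f - w1lambda;

    // Weighted average of the four neighboring input pixels.
    const float* Xp = X + (nc * input_height + h1) * input_width + w1;
    Y[index] = h0lambda * (w0lambda * Xp[0] + w1lambda * Xp[w1p]) +
               h1lambda * (w0lambda * Xp[h1p * input_width] +
                           w1lambda * Xp[h1p * input_width + w1p]);
  }
}
```

Every output element is computed independently, so the forward pass needs no atomics; structuring it as a gather (one thread per output pixel) matches how the ResizeNearest CUDA operators the PR is modeled on are typically organized.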

d4l3k requested a review from houseroad on October 19, 2018 at 20:23

houseroad (Member) left a comment

LGTM, thanks!

Summary:
Pull Request resolved: pytorch#12843

This adds a CUDA implementation of the UpsampleBilinearOp and UpsampleBilinearGradientOp operators.

The CUDA code is based on the corresponding ResizeNearest operators, with the bilinear interpolation logic taken from the CPU implementation.

Reviewed By: houseroad

Differential Revision: D10453776

fbshipit-source-id: 51343794b63e0df9d8ad7feb443162f86241ede0
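
For the UpsampleBilinearGradientOp mentioned in the commit message above, the natural counterpart is a scatter kernel: each thread reads one element of dY and accumulates its four weighted contributions into dX. The sketch below is again a hypothetical reconstruction under the same assumptions (names and launch pattern invented for illustration); dX must be zero-initialized before launch, and atomicAdd is required because neighboring dY elements write to overlapping dX cells.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch: one thread per dY element, NCHW layout.
// dY has the upsampled (forward-output) shape, dX the original input shape,
// so rheight/rwidth are the same ratios as in the forward kernel:
// (dX_dim - 1) / (dY_dim - 1). dX must be zero-filled before launch.
__global__ void UpsampleBilinearGradientKernel(
    const int size, // N * C * dY_height * dY_width
    const int dY_height,
    const int dY_width,
    const int dX_height,
    const int dX_width,
    const float rheight,
    const float rwidth,
    const float* dY,
    float* dX) {
  for (int index = blockIdx.x * blockDim.x + threadIdx.x; index < size;
       index += blockDim.x * gridDim.x) {
    const int w2 = index % dY_width;
    const int h2 = (index / dY_width) % dY_height;
    const int nc = index / (dY_width * dY_height);

    // Same neighborhood and weight computation as the forward kernel,
    // but locating the 2x2 patch on the coarse dX grid.
    const float h1r = rheight * h2;
    const int h1 = static_cast<int>(h1r);
    const int h1p = (h1 < dX_height - 1) ? 1 : 0;
    const float h1lambda = h1r - h1;
    const float h0lambda = 1.0f - h1lambda;

    const float w1r = rwidth * w2;
    const int w1 = static_cast<int>(w1r);
    const int w1p = (w1 < dX_width - 1) ? 1 : 0;
    const float w1lambda = w1r - w1;
    const float w0lambda = 1.0f - w1lambda;

    // Scatter the incoming gradient into the four source cells; atomics
    // are needed since adjacent dY elements hit overlapping dX locations.
    float* dXp = dX + (nc * dX_height + h1) * dX_width + w1;
    const float g = dY[index];
    atomicAdd(dXp, h0lambda * w0lambda * g);
    atomicAdd(dXp + w1p, h0lambda * w1lambda * g);
    atomicAdd(dXp + h1p * dX_width, h1lambda * w0lambda * g);
    atomicAdd(dXp + h1p * dX_width + w1p, h1lambda * w1lambda * g);
  }
}
```

The scatter-with-atomics pattern trades some write contention for simplicity; a gather formulation over dX would avoid atomics but requires inverting the source-pixel mapping, which is awkward for fractional scale factors.
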
ezyang added the merged label Jun 25, 2019