Add flexible bilinear upsampling aspect ratio redux #1317
Merged
soumith merged 10 commits into pytorch:master from andrewgiessel:add-flexible-bilinear-upsampling-aspect-ratio-redux on May 3, 2017
Conversation
apaszke
reviewed
Apr 21, 2017
```diff
-        self.size = size
+        if scale_factor is not None and not isinstance(scale_factor, (Integral, tuple)):
+            raise ValueError('scale_factor must be of integer type or tuple of integer types')
+        self.size = _pair(size)
```
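The validation above accepts either a single integer or a tuple for `scale_factor`. A minimal sketch of that check as a standalone function (the name `check_scale_factor` is hypothetical; the PR performs this inside the module's `__init__`):

```python
from numbers import Integral

def check_scale_factor(scale_factor):
    # Accept None, a single integer, or a tuple; reject anything else.
    # Element types inside a tuple are validated later, once the value
    # has been normalized to a pair.
    if scale_factor is not None and not isinstance(scale_factor, (Integral, tuple)):
        raise ValueError(
            'scale_factor must be of integer type or tuple of integer types')
    return scale_factor
```

Note that a float such as `2.5` is rejected here, which matches the error message in the diff.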
```python
# we have to be a tuple at this point
try:
    assert len(self.scale_factor) == 2
    for i in self.scale_factor:
```
```diff
         self.output_size = (
-            input.size(2) * self.scale_factor,
-            input.size(3) * self.scale_factor,
+            input.size(2) * self.scale_factor[0],
```
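The hunk above is the core of the change: instead of multiplying both spatial dimensions by one scalar, each axis gets its own factor. A shape-only sketch under assumed `(N, C, H, W)` layout (the function name is illustrative, not the PR's API):

```python
def bilinear_output_size(input_size, scale_factor):
    # With a per-axis (sh, sw) scale factor, height and width scale
    # independently, so the output aspect ratio can differ from the input.
    n, c, h, w = input_size
    sh, sw = scale_factor
    return (h * sh, w * sw)
```

For example, a `(1, 3, 4, 6)` input with `scale_factor=(2, 3)` yields a spatial size of `(8, 18)`.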
| """ | ||
|
|
||
| def __init__(self, size=None, scale_factor=None): | ||
| super(UpsamplingBilinear2d, self).__init__(size, scale_factor) |
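The subclass above just forwards its arguments to the shared upsampling base class. A hypothetical minimal stand-in showing that structure (names and bodies are illustrative, not the actual pytorch classes):

```python
class _UpsamplingBase:
    # Sketch of a dimension-agnostic base class: it only stores size and
    # scale_factor; concrete subclasses interpret them for their
    # dimensionality (2d, and potentially others).
    def __init__(self, size=None, scale_factor=None):
        self.size = size
        self.scale_factor = scale_factor

class UpsamplingBilinear2d(_UpsamplingBase):
    # Mirrors the diff above: the subclass adds no state of its own.
    def __init__(self, size=None, scale_factor=None):
        super(UpsamplingBilinear2d, self).__init__(size, scale_factor)
```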
torch/nn/modules/upsampling.py (outdated)
```diff
-            raise ValueError('scale_factor must be of integer type')
+        if scale_factor is not None and not isinstance(scale_factor, (Integral, tuple)):
+            raise ValueError('scale_factor must be of integer type or tuple of integer types')
+        self.size = _pair(size)
```
```python
if self.scale_factor is not None:
    self.scale_factor = _pair(self.scale_factor)
    # we have to be a tuple at this point
```
added 3 commits on April 21, 2017
Contributor (Author)

Thanks for the comments @apaszke! I think I made all the changes you asked for. Please let me know if I didn't understand something (in particular, I just c/p your …).
This allows the base class to be used for upsampling routines other than 2d. I also renamed _check_bilinear_2d_scale_factor().
soumith approved these changes on May 3, 2017

Collaborator

thanks Andrew!
Jiaming-Liu pushed a commit to Jiaming-Liu/pytorch that referenced this pull request on May 18, 2017
houseroad added a commit to houseroad/pytorch that referenced this pull request on Sep 6, 2018

…8ffb52 (pytorch#11346)

Summary: Pull Request resolved: pytorch#11346
Previous import was 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c

Included changes:
- **[bff0b88](onnx/onnx@bff0b88)**: Add DynamicSlice experimental op (pytorch#1377) <James Reed>
- **[91a7b8e](onnx/onnx@91a7b8e)**: statCoverage(model) (pytorch#1246) <Akshay Chalana>
- **[36643c6](onnx/onnx@36643c6)**: fix the doc for softmax (pytorch#1374) <Lu Fang>
- **[8c64acd](onnx/onnx@8c64acd)**: Silence unused result warning in ONNXIFI wrapper cleanup. Fix pytorch#1344 (pytorch#1371) <Marat Dukhan>
- **[53b20f6](onnx/onnx@53b20f6)**: Add the ability to deprecate an OpSchema (pytorch#1317) <Ryan Hill>
- **[8aec4e2](onnx/onnx@8aec4e2)**: [Anderspapitto patch] fix the shape inference for broadcasting (pytorch#1368) <Lu Fang>

Reviewed By: jamesr66a
Differential Revision: D9691533
fbshipit-source-id: 1a8c22262ae4946897e4be030d3f1cf3a3ad58b6
facebook-github-bot pushed a commit that referenced this pull request on Sep 7, 2018 (the same ONNX submodule update, #11346)
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request on Sep 11, 2018 (the same ONNX submodule update, pytorch#11346)
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request on Nov 1, 2022

… and `fused_weight_gradient_mlp_cuda` is missing (pytorch#1317)
This PR addresses issue #1257, which asks for non-coupled (per-axis) scaling factors for bilinear 2d upsampling. I've added two new tests, and currently everything passes. This is a relatively simple change that only touches Python code.
Furthermore, this is a re-do of PR #1279, wherein I messed up a rebase. There are some in-line comments there that might be worth skimming through as well, but I tried to address most of the concerns raised there. In particular, I maintained the base class of all the upsampling methods.
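To make the "non-coupled" idea concrete, here is a shape-only illustration (names are hypothetical, not the PR's API): a single int scales height and width together, while a tuple scales them independently, changing the aspect ratio.

```python
def upsampled_shape(nchw, scale_factor):
    # An int means coupled scaling of H and W; a (sh, sw) tuple means
    # each spatial axis gets its own factor.
    sf = (scale_factor, scale_factor) if isinstance(scale_factor, int) else tuple(scale_factor)
    n, c, h, w = nchw
    return (n, c, h * sf[0], w * sf[1])
```

For example, `(1, 3, 10, 10)` with factor `2` becomes `(1, 3, 20, 20)`, while factor `(2, 3)` gives `(1, 3, 20, 30)`.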
Tests pass on GPU and CPU.
Tagging @apaszke, @fmassa, @soumith. Thanks in advance, everyone.
ps: I'll probably need a pointer on how to best rebase to master, eventually ;)