add input dim check for conv2d #1695

Merged: soumith merged 4 commits into pytorch:master on Jun 2, 2017
Conversation
Contributor (Author):

I had to add the check below:

```python
if input is not None and input.dim() != 4:
    raise ValueError("Expected 4D tensor as input, got {}D tensor instead.".format(input.dim()))
```

as the usual check for None happens in the call to ConvNd, where it's too late to assert that the input should be 4D. Not super happy with this, but can't think of another clean way to do this.
Contributor:

Can you add dim checks to other conv functions too?

Contributor (Author):

Sure thing.

Contributor (Author):

Done for convxD and conv_transposedxD.
soumith approved these changes on Jun 2, 2017
houseroad added a commit to houseroad/pytorch that referenced this pull request on Jan 4, 2019

…b18ba1 (pytorch#15739)

Summary: Pull Request resolved: pytorch#15739. Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8.

Included changes:
- **[8384c78](onnx/onnx@8384c78)**: add constantofshape (pytorch#1582) <Rui Zhu>
- **[9afc06c](onnx/onnx@9afc06c)**: Set symbol visibility to hidden for non-Windows (pytorch#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](onnx/onnx@6f8a9f0)**: Revert "Add NonMaxSupression operator (pytorch#1695)" (pytorch#1702) <Lu Fang>
- **[8b89544](onnx/onnx@8b89544)**: Add NonMaxSupression operator (pytorch#1695) <Hector Li>
- **[0a7cc48](onnx/onnx@0a7cc48)**: Add bfloat16 support. (pytorch#1699) <Dmitri Smirnov>
- **[da7c50c](onnx/onnx@da7c50c)**: ONNX does not maintain versions for experimental ops (pytorch#1696) <Ke Zhang>
- **[0c8d857](onnx/onnx@0c8d857)**: Correct type of value_info in Graph (pytorch#1694) <Maik Riechert>
- **[f612532](onnx/onnx@f612532)**: Fix typos (pytorch#1686) <Eundoo Song>

Reviewed By: zrphercule
Differential Revision: D13581674
fbshipit-source-id: a961667184b09d2822815ba5d3fa4198a4c57e88
facebook-github-bot pushed a commit that referenced this pull request on Jan 4, 2019

…b18ba1 (#15739): same ONNX submodule update as the commit above (fbshipit-source-id: 8f8ee86a05a86fe99bf94509148c559ea3df1464)
mrshenli pushed a commit to mrshenli/pytorch that referenced this pull request on Jan 6, 2019

…b18ba1 (pytorch#15739): same ONNX submodule update as the commit above (fbshipit-source-id: 8f8ee86a05a86fe99bf94509148c559ea3df1464)
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request on Jan 14, 2025

…torch#137717) (pytorch#1695)

The logsumexp tensor was considered for internal use only, but is apparently exposed to unit tests and inductor. The stream should be selected after picking the current device; otherwise the code is checking the default device's architecture.

Fixes pytorch#131316, pytorch#137414

Pull Request resolved: pytorch#137717
Approved by: https://github.com/drisspg
Co-authored-by: Jack Taylor <108682042+jataylo@users.noreply.github.com>
(cherry picked from commit 770fcaf)
Co-authored-by: Xinya Zhang <Xinya.Zhang@amd.com>
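The ordering bug this commit message describes (querying the stream before selecting the device, so the default device's state is consulted) can be illustrated with a pure-Python sketch. The `Runtime` class below is a stand-in for illustration only, not a PyTorch or HIP API:

```python
class Runtime:
    """Toy model of a per-device runtime: each device has its own stream."""
    def __init__(self, num_devices):
        self.current_device = 0  # device 0 plays the role of the default device
        self.streams = {d: "stream-for-device-{}".format(d) for d in range(num_devices)}

    def set_device(self, d):
        self.current_device = d

    def current_stream(self):
        # Returns the stream of whichever device is current *right now*.
        return self.streams[self.current_device]

def stream_selected_too_early(rt, target_device):
    # Buggy order: the stream is queried while the default device is still
    # current, so we get the wrong device's stream.
    stream = rt.current_stream()
    rt.set_device(target_device)
    return stream

def stream_selected_after_device(rt, target_device):
    # Fixed order: pick the device first, then query its stream.
    rt.set_device(target_device)
    return rt.current_stream()
```

The same reasoning applies to any per-device query (stream, architecture, properties): it must run after the intended device has been made current.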
Fix for #1606.