Honor existing dim_param in shape inference (#3896)
Conversation
Before this change, existing dim_param was overwritten by generated symbols like `unk__`. `NonZero` is used to test the behavior. Signed-off-by: Shinichiro Hamaji <shinichiro.hamaji@gmail.com>
make_tensor_value_info("E", TensorProto.FLOAT, (2, -1))])
# the symbolic shape of E and output should be the same
assert self._get_shape_from_name(inferred_model, 'E') == self._get_shape_from_name(inferred_model, 'output')
make_tensor_value_info("E", TensorProto.FLOAT, (2, -1)),
Maybe it would be better to change line 100 above to replace 'M' with None, and leave the current check?
self._assert_valueinfo_shape(inferred_model, [make_tensor_value_info("C", TensorProto.FLOAT, (2, -1))])
# the symbolic shape of C and output should be the same
assert self._get_shape_from_name(inferred_model, 'C') == self._get_shape_from_name(inferred_model, 'output')
self._assert_valueinfo_shape(inferred_model, [
Should we change line 79 to specify the output shape as [2, None] or [None, None], and leave the current check as-is?
auto* existingDim = existingType->mutable_shape()->mutable_dim(i);
if (!existingDim->has_dim_value()) {
if ((!existingDim->has_dim_value() && !existingDim->has_dim_param()) ||
    inferredDim.has_dim_value()) {
The condition
`(!existingDim->has_dim_value() && !existingDim->has_dim_param())`
can be folded into the for loop on line 179. That way, when the existing shape is empty, we don't have to run through two for loops (one on line 179 and one on line 184).
Is this what you meant? be56060
Note: I think we still need `(!existingDim->has_dim_value() && !existingDim->has_dim_param())` here for the case where existingType has a known rank (i.e., `existingType->has_shape()` is true) with unknown dimensions.
Yes, thanks for adding line 175. And yes, I agree we still need this check here for the case where existingType has a known rank.
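The merge rule this thread converges on can be summarized in a pure-Python sketch. The `Dim` class here is a hypothetical stand-in for the protobuf dimension message, not the actual C++ code: an existing dimension is only overwritten when it carries no information at all (no value, no symbolic name), or when inference produced a concrete value.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Dim:
    dim_value: Optional[int] = None   # concrete size, if known
    dim_param: Optional[str] = None   # symbolic name, if any


def merge_dim(existing: Dim, inferred: Dim) -> Dim:
    """Sketch of the updated mergeShapesAndTypes dimension rule:
    keep an existing dim_param instead of replacing it with a
    generated symbol, but still accept a concrete inferred value."""
    if (existing.dim_value is None and existing.dim_param is None) or \
            inferred.dim_value is not None:
        return inferred
    return existing
```

Under this rule a pre-set symbolic name such as `"N"` survives inference unless the inferred dimension is an actual integer, which matches the intent of the PR.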
(#3902) Merge commit, including:
* Fix typos (#3894)
* Honor existing dim_param in shape inference (#3896)
* Confirm that type check is performed during the checker.check_model call (issue #3849)
* Fix path to InferenceError
* Replace whitelist by safelist (#3900)
* Add checker test case for input type mismatch conflict
* Remove old numpy in ipynb (#3916)
* Shape inference for model local functions (#3722)
* Fix old ConvTranspose shape inference and softmax upgrader (#3893)
* Fix Linux i686 Release CI failure due to the latest NumPy (#3918)
* Remind release manager to remove old onnx-weekly packages after release (#3923)
* Simplify function definition of context-dependent functions (#3882)
* Migration to using main branch (#3925)
* Fix the bug of shape in docs (#3927)
* Append dim even when both dim value and param are not set (#3828)
* Bump to 10.15 in azp (#3941)
* Add explanation to run gtest (#3933)
* Update TreeEnsembleClassifier and TreeEnsembleRegressor to support tensor as attributes (#3897)
Description
Before this change, existing `dim_param` was overwritten by generated symbols like `unk__`. `NonZero` is used to test the behavior.

Motivation and Context

We'd like to annotate output values of ops which produce unknown dims (e.g., `NonZero`) by pre-setting the `dim_param`s of such values. However, current ONNX overwrites them.

Reproduce script:
Previous behavior:
This PR:
I guess no one would object to the change for (2) in the above table, which is why I created this PR. I'm not sure about (1). To me, it would be more natural to check whether the existing `dim_param` is equal to the inferred one, like what we do for `dim_value`, but that would break backward compatibility. What do you think?