Conversation
bowang007
approved these changes
May 12, 2023
core/conversion/evaluators/prim.cpp
Outdated
const torch::jit::IValue* outputs = args.at(n->input()).IValue();
auto outputVec = outputs->toList().vec();
LOG_DEBUG("==== OUTPUT VEC: " << outputVec);
TORCHTRT_THROW_ERROR("========= FORCED ERROR ");
Collaborator
These 2 lines need to be removed
core/conversion/evaluators/prim.cpp
Outdated
// Outputs is an IValue which has list of tensors which can be found in ctx->evaluated_value_map
const torch::jit::IValue* outputs = args.at(n->input()).IValue();
auto outputVec = outputs->toList().vec();
LOG_DEBUG("==== OUTPUT VEC: " << outputVec);
…en::size
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: Add allow-shape-tensors option to torchtrtc, preserve static dimensions in the aten::size layer
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: Linter fixes
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

chore: remove debug throw error
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
narendasan
requested changes
May 12, 2023
core/conversion/evaluators/aten.cpp
Outdated
if (ctx->settings.allow_shape_tensors) {
  return dynamic_size_layer(ctx, n, args);
} else {
  LOG_WARNING("There may be undefined behavior using dynamic shape and aten::size ");
Collaborator
Is this undefined behavior due to using aten::size with dynamic shape without shape tensors, or the other way around? Might need rewording.
auto tensor = tensor_var.ITensor();
if (ctx->input_is_dynamic) {
  return dynamic_size_layer(ctx, n, args);
if (ctx->settings.allow_shape_tensors) {
Collaborator
We should be checking that the input is dynamic, shape tensors are enabled, and the input contains a placeholder dimension before using the shape-tensor code path. Otherwise static aten::size is sufficient and will cause fewer errors.
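The gating the reviewer asks for could look roughly like this standalone sketch. The struct and function names here are hypothetical stand-ins for the real Torch-TensorRT conversion context, and the sketch assumes a placeholder (dynamic) dimension is encoded as -1, as is conventional for TensorRT dims:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the real conversion context and settings.
struct Settings {
  bool allow_shape_tensors;
};
struct Ctx {
  bool input_is_dynamic;
  Settings settings;
};

// A placeholder (dynamic) dimension is conventionally encoded as -1.
bool has_placeholder_dim(const std::vector<int64_t>& dims) {
  for (auto d : dims) {
    if (d < 0) {
      return true;
    }
  }
  return false;
}

// Use the shape-tensor code path only when the input is dynamic, the
// allow_shape_tensors flag is set, AND the input actually contains a
// placeholder dimension; otherwise the static aten::size path suffices.
bool use_dynamic_size_layer(const Ctx& ctx, const std::vector<int64_t>& dims) {
  return ctx.input_is_dynamic && ctx.settings.allow_shape_tensors && has_placeholder_dim(dims);
}
```

With this predicate, a fully static shape such as {2, 3} falls through to the static aten::size evaluator even when the flag is on, which is the lower-risk behavior the review suggests.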
if (tensor_var.isITensor()) {
  if (ctx->input_is_dynamic) {
    return dynamic_size_layer(ctx, n, args);
if (ctx->settings.allow_shape_tensors) {
Collaborator
Same comments here from above about conditions and warnings
bowang007
added a commit
that referenced
this pull request
May 15, 2023
feat: Wrap dynamic size handling in a compilation flag
Description
This PR does the following:
- The aten::size dynamic shape handling is now disabled by default; allow_shape_tensors=True will enable it.
- Modifications to aten::size include preserving the static dimensions in the input, thereby not converting any known types (e.g. Int into ITensors).
Type of change
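The "preserve static dimensions" behavior described above can be illustrated with a small sketch. The names here are illustrative, not the actual Torch-TensorRT implementation: each output of aten::size is modeled as either a plain static Int (when the extent is known at compile time) or a marker meaning the value must be resolved through the shape-tensor path, with -1 assumed as the placeholder encoding:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative only: model each output of aten::size as either a known
// static Int or a marker that must come from a runtime shape tensor.
struct SizeEntry {
  bool is_static;
  int64_t value; // valid only when is_static is true
};

std::vector<SizeEntry> size_outputs(const std::vector<int64_t>& input_dims) {
  std::vector<SizeEntry> out;
  for (auto d : input_dims) {
    if (d >= 0) {
      out.push_back({true, d}); // known extent: keep it as a plain Int
    } else {
      out.push_back({false, -1}); // placeholder: resolve via shape tensor
    }
  }
  return out;
}
```

For an input shape like {2, -1, 4}, only the middle dimension needs the shape-tensor path; dimensions 0 and 2 stay as plain Ints, which is the "not converting any known types" behavior the description refers to.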
Please delete options that are not relevant and/or add your own.
Checklist: