Don't optimize slicing dispatch when we are tracing #11156
Closed
jamesr66a wants to merge 1 commit into pytorch:master from
Conversation
zdevito
approved these changes
Aug 31, 2018
Contributor
zdevito
left a comment
Seems fine. It demonstrates why it is hard to get tracing right. The API surface has these random places where they skip calling the code that gets recorded.
apaszke
approved these changes
Aug 31, 2018
Contributor
facebook-github-bot
left a comment
jamesr66a is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
force-pushed from 49dba30 to c91746d
Contributor
facebook-github-bot
left a comment
jamesr66a has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
PenghuiCheng
pushed a commit
to PenghuiCheng/pytorch
that referenced
this pull request
Sep 11, 2018
Summary: Previously, when we had a slicing expression like `x[0:5, 0]`, where the sliced tensor was of size `5` in dimension 0, we would skip dispatching the actual slice call as an optimization. This caused incorrect behavior under tracing, as we would not record the slice op, and thus if we encountered an input with a different shape while running the trace, we would get incorrect results.

Pull Request resolved: pytorch#11156
Differential Revision: D9622252
Pulled By: jamesr66a
fbshipit-source-id: 822f2e8f01504e131f53bd9ef51c171c7913a7cc
Previously, when we had a slicing expression like `x[0:5, 0]`, where the sliced tensor was of size `5` in dimension 0, we would skip dispatching the actual slice call as an optimization. This caused incorrect behavior under tracing, as we would not record the slice op, and thus if we encountered an input with a different shape while running the trace, we would get incorrect results.
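The failure mode can be sketched without PyTorch at all. Below is a minimal, hypothetical tracer (not PyTorch's actual implementation) where a slice that happens to be a no-op for the traced input is skipped as an "optimization", so it never gets recorded; replaying the trace on an input of a different size then silently returns the wrong result:

```python
# Minimal sketch of the bug: a slice that is a no-op for the traced input
# is skipped as an optimization, so the trace never records it.
# All names here are illustrative, not PyTorch internals.

trace = []  # recorded ops, each as ("slice", start, stop)

def sliced(data, start, stop, tracing, optimize):
    """Slice a list; optionally skip dispatch when the slice is a no-op."""
    if optimize and start == 0 and stop == len(data):
        # "Optimization": slice covers the whole list for THIS input,
        # so skip the call entirely. Under tracing this is the bug:
        # the op is never recorded.
        return data
    if tracing:
        trace.append(("slice", start, stop))
    return data[start:stop]

def replay(data):
    """Re-run the recorded trace on a new input."""
    out = data
    for _op, start, stop in trace:
        out = out[start:stop]
    return out

x = [0, 1, 2, 3, 4]  # slice [0:5] is a no-op for this traced input

trace.clear()
sliced(x, 0, 5, tracing=True, optimize=True)   # op skipped, trace stays empty
buggy = replay([0, 1, 2, 3, 4, 5, 6])          # wrong: all 7 elements survive

trace.clear()
sliced(x, 0, 5, tracing=True, optimize=False)  # fix: always record under tracing
fixed = replay([0, 1, 2, 3, 4, 5, 6])          # correct: first 5 elements
```

This mirrors the fix in the PR: when tracing, the slice must always be dispatched (and recorded), even if it would be a no-op for the concrete input being traced.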