If none of the schema match, add ImplicitTensorToNum conversions where needed. #10180
eellison wants to merge 6 commits into pytorch:master
Conversation
Force-pushed 62dc1ba to f38910e.
Force-pushed f38910e to c465c4c.
zdevito left a comment:
Structure looks good. A few small comments.
(Hidden off-topic review comments on torch/csrc/jit/script/compiler.cpp, torch/csrc/jit/script/init.cpp, and torch/csrc/jit/script/module.cpp.)
Force-pushed ed5bb93 to ef06dfa.
Hmmm, do we really want this patch? IIRC the rules for those casts in the Python frontend are quite complex at the moment. In particular, a tensor can be converted to a scalar argument if and only if it doesn't require grad, but we don't have this information statically (except perhaps when we know it's integral, which is not the usual case in our frontend anyway).
@apaszke I think we should add this (otherwise we require a bunch of extra casts). However, we should keep the dynamic behavior the same. That is, TensorToNum should check requires_grad and throw.
zdevito left a comment:
Some comments inline. We should strive to match the behavior of the Python frontend for all num<->tensor conversions. In the case of TensorToNum, we should throw an exception at runtime if we attempt to convert a requires_grad tensor.
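The dynamic behavior requested here can be sketched in plain Python. Note this is a hypothetical illustration: `FakeTensor` and `tensor_to_num` are stand-ins for the real C++ `prim` operator, not PyTorch API.

```python
from dataclasses import dataclass

@dataclass
class FakeTensor:
    """Minimal stand-in for a tensor; only the fields the check inspects."""
    value: float
    ndim: int
    requires_grad: bool

def tensor_to_num(t: FakeTensor) -> float:
    # Dynamic rule from the review: converting a tensor that requires
    # grad must fail at runtime, since gradients cannot flow through a
    # plain Python number. Only 0-dim tensors are convertible at all.
    if t.requires_grad:
        raise RuntimeError("Cannot convert a tensor that requires grad to a number")
    if t.ndim != 0:
        raise RuntimeError("Only 0-dim tensors can be converted to a number")
    return t.value
```

Throwing here (rather than silently detaching) keeps the scripted behavior aligned with what the Python frontend would do.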
(Hidden off-topic review comments on torch/csrc/jit/script/compiler.cpp.)
Force-pushed 7e0554a to 0fa8d2f.
zdevito left a comment:
I think we should make a few changes here before merging:
- We should only have one TensorToNum that always does checking.
- Shape analysis should not throw exceptions.
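The second point can be illustrated with a non-throwing query: instead of raising when a conversion is impossible or undecidable, shape analysis can call a function that returns `None` and simply leave the output type unrefined. This is a pure-Python sketch with hypothetical names, not the actual compiler.cpp API.

```python
from typing import Optional

def try_scalar_type(tensor_ndim: Optional[int],
                    requires_grad: Optional[bool]) -> Optional[str]:
    """Return the scalar type a tensor would convert to, or None.

    Shape analysis runs at compile time, where ndim and requires_grad
    may be unknown (None). Returning None instead of raising lets the
    analysis continue with an unrefined type.
    """
    if tensor_ndim is None or requires_grad is None:
        return None  # not enough static information to decide
    if tensor_ndim != 0 or requires_grad:
        return None  # the conversion would fail at runtime
    return "number"
```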
(Hidden off-topic review comments on torch/csrc/jit/register_prim_ops.cpp, torch/csrc/jit/type.cpp, and torch/csrc/jit/script/compiler.cpp.)
Force-pushed 2ae764f to e7b68d2.
facebook-github-bot left a comment:
eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed e7b68d2 to caeb1d9.
Force-pushed caeb1d9 to c0e86b6.
Force-pushed c0e86b6 to 85876cf.
…ed to ints/floats as arguments if they are 0 dim & no grad is set. A new operator has been added to implement this, since these conditions are not all known at compile time.
…s in the compiler & shape analysis
Force-pushed 85876cf to b9a33ed.
facebook-github-bot left a comment:
eellison is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…e needed. (pytorch#10180)

Summary: When matching schema, first try to match without adding TensorToNum conversions. Then make another pass where TensorToNum conversions are allowed.

Pull Request resolved: pytorch#10180
Differential Revision: D9438153
Pulled By: eellison
fbshipit-source-id: 80541b5abd06e9d4187e89dda751f44dab6f58c5
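The two-pass strategy in the commit summary can be sketched in pure Python. This is a deliberate simplification: `match_schema`, `matches`, and the string type names are hypothetical, and the real matcher in compiler.cpp handles much more than arity and type equality.

```python
def matches(schema_types, arg_types, allow_conversions):
    """Return True if every argument type fits its schema slot."""
    if len(schema_types) != len(arg_types):
        return False
    for want, have in zip(schema_types, arg_types):
        if want == have:
            continue
        # Second pass only: a Tensor argument may fill an int/float
        # slot via an inserted ImplicitTensorToNum conversion.
        if allow_conversions and have == "Tensor" and want in ("int", "float"):
            continue
        return False
    return True

def match_schema(overloads, arg_types):
    # Pass 1: exact matches only, so an overload taking a Tensor wins
    # over one that would need a conversion inserted.
    for schema in overloads:
        if matches(schema, arg_types, allow_conversions=False):
            return schema, False
    # Pass 2: allow ImplicitTensorToNum conversions where needed.
    for schema in overloads:
        if matches(schema, arg_types, allow_conversions=True):
            return schema, True
    raise RuntimeError("no schema matched the provided arguments")
```

Running the exact pass first ensures conversions are only inserted when no overload matches the arguments as-is, which is the behavior the summary describes.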