[Resubmission] Create at::linear #10799
Conversation
@jamesr66a @zdevito @ezyang looks like I didn't break anything by disabling explicit tracing in
jamesr66a left a comment
So this degenerate case still exists:

In [3]: @torch.jit.script
   ...: def foo(x, y, z):
   ...:     return x.linear(y, z)
   ...:

In [4]: print(foo.graph)
graph(%x : Dynamic
      %y : Dynamic
      %z : Dynamic) {
  %3 : Dynamic = aten::linear(%x, %y, %z)
  return (%3);
}
But I think this is more of a script bug and not related to your PR, so LGTM.
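For context on what the new `at::linear` operator (and the `aten::linear` node in the graph above) computes: y = x·Wᵀ + b. A minimal pure-Python sketch of that math, purely illustrative and not the ATen implementation, which dispatches to optimized matmul/addmm kernels:

```python
def linear(x, weight, bias=None):
    """Compute y = x @ weight.T + bias for 2-D lists of floats.

    Illustrative sketch of the math performed by at::linear;
    weight has shape (out_features, in_features).
    """
    rows, in_features = len(x), len(x[0])
    out_features = len(weight)
    y = []
    for i in range(rows):
        row = []
        for j in range(out_features):
            # dot product of input row i with weight row j
            acc = sum(x[i][k] * weight[j][k] for k in range(in_features))
            if bias is not None:
                acc += bias[j]
            row.append(acc)
        y.append(row)
    return y

# 1x2 input, identity-like 2x2 weight, bias [0.5, -0.5]
print(linear([[1.0, 2.0]], [[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5]))  # [[1.5, 1.5]]
```

In PyTorch itself this corresponds to `torch.nn.functional.linear(x, weight, bias)`.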
facebook-github-bot left a comment
goldsborough has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Symbolic for ONNX
Fix up ONNX symbolic
Add linear to DONT_RECORD_TRACE
e2da001 to 5602f73
facebook-github-bot left a comment
goldsborough is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Resubmission of pytorch/pytorch#10755 with fix for ONNX
ezyang jamesr66a
Pull Request resolved: pytorch/pytorch#10799
Differential Revision: D9482168
Pulled By: goldsborough
fbshipit-source-id: 85d4bdfcf0d451f2e7a1c83c5f5415cdd6caacdc
* upstream/master: (89 commits)
  move HeatmapMaxKeypointOp unittest to oss
  fix xfails involving literals (pytorch#10905)
  Bag of Distributions doc fixes (pytorch#10894)
  Remove FIXME_zerol() from test_jit.py (pytorch#10900)
  Increase BC for PackedSequence ctor (pytorch#9864)
  Remove ability of Scalars to hold Tensors.
  Begin a bestiary of MSVC/NVCC bugs. (pytorch#10883)
  Prevent JIT from overspecializing to every single size configuration (pytorch#10844)
  Handling failing test on ROCm.
  Update mobile predictor caller's interface
  Cache isContiguous and numel
  Create class constant for string literal 'blob_names'
  Conv BN fusion for 3D conv (pytorch#10239)
  Stop using symbolic override for tracing RNNs (pytorch#10638)
  Add registry to pybind_state (pytorch#10759)
  Remove the nanopb submodule
  Create at::linear (pytorch#10799)
  Refactor THCNumerics and add common math functions for at::Half (pytorch#10301)
  Remove Tensor constructor of Scalar. (pytorch#10852)
  Revert D9492561: [pytorch][PR] Moving the operator argument to the front for kernelPointwiseApply.
  ...
Resubmission of #10755 with fix for ONNX
@ezyang @jamesr66a