[JIT] Support custom ops in ScriptModule and tidy up test files #10610
goldsborough wants to merge 4 commits into pytorch:master
Conversation
facebook-github-bot
left a comment
goldsborough has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
dzhulgakov
left a comment
Looks good to me, but @zdevito is the one to make a call here
Force-pushed ede4c21 to ceca7b4
zdevito
left a comment
Looks good. Comments are minor.
torch/csrc/jit/script/compiler.h (Outdated): review comments marked as off-topic.
torch/csrc/jit/script/module.h (Outdated): review comments marked as off-topic.
Force-pushed ceca7b4 to ec2fb5a
facebook-github-bot
left a comment
goldsborough is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
```diff
@@ -1,4 +1,4 @@
 graph(%x : Dynamic) {
-  %1 : Dynamic = ^aten::relu()(%x)
+  %1 : Dynamic = aten::relu(%x)
```
Summary: This PR is stacked on #10610, and only adds changes in one file, `.jenkins/pytorch/test.sh`, where we now build the custom op tests and run them. I'd also like to use this PR to discuss whether the [`TorchConfig.cmake`](https://github.com/pytorch/pytorch/blob/master/cmake/TorchConfig.cmake.in) I made is robust enough (we will also see in the CI). orionr, Yangqing, dzhulgakov, what do you think? Also ezyang for the CI changes.

Pull Request resolved: #10611
Differential Revision: D9597627
Pulled By: goldsborough
fbshipit-source-id: f5af8164c076894f448cef7e5b356a6b3159f8b3
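For context on the `TorchConfig.cmake` question, this is roughly what a downstream consumer build would look like. A minimal sketch, assuming the config file is discoverable via `CMAKE_PREFIX_PATH`; the project, target, and source file names here are illustrative, not taken from the PR:

```cmake
# Hypothetical consumer of TorchConfig.cmake (names are illustrative).
cmake_minimum_required(VERSION 3.1)
project(custom_ops)

# Resolves TorchConfig.cmake from CMAKE_PREFIX_PATH and defines
# TORCH_LIBRARIES / include directories for the imported targets.
find_package(Torch REQUIRED)

# Build the custom op as a shared library that torch.ops.load_library can open.
add_library(custom_ops SHARED op.cpp)
target_link_libraries(custom_ops "${TORCH_LIBRARIES}")
set_property(TARGET custom_ops PROPERTY CXX_STANDARD 11)
```

Whether `find_package(Torch)` resolves correctly across the CI images is exactly the robustness question raised above.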
Summary: This PR adds support for using custom ops in ScriptModules, the last step for our custom op strategy. You can now write

```python
import torch

torch.ops.load_library('libcustom_ops.so')

class Model(torch.jit.ScriptModule):
    def __init__(self):
        super(Model, self).__init__()

    @torch.jit.script_method
    def forward(self, input):
        return torch.ops.custom.op(input) + 1

model = Model()
model.forward(torch.ones(5))       # Works
model.save("model.pt")             # Works
model = torch.jit.load("model.pt") # Works
```

You can then load the `model.pt` in C++ and execute its `forward` method!

What was missing was that the script compiler didn't know to convert `ops.custom.op` into a `BuiltinFunction`, which then emits a function call. For this I came up with the following strategy inside `torch/csrc/jit/script/init.cpp`:

1. When we access `torch.ops`, we return a `CustomOpValue` (subclass of `PythonValue`), whose only purpose is to return a `CustomOpNamespaceValue` (subclass of `PythonValue`) whenever something under it is accessed.
2. `CustomOpNamespaceValue` will then, for each field accessed on it, return a `BuiltinFunction`.

This doesn't reduce performance for any calls that are not to `torch.ops` (as opposed to inspecting every function call's name at the call site, for example). I also had to fix `BuiltinFunction` to not assume the namespace is always `aten::`.

A lot of other changes are just tidying up the Python and C++ test harness before I integrate it in CI.

@zdevito @dzhulgakov

Pull Request resolved: pytorch#10610
Differential Revision: D9387832
Pulled By: goldsborough
fbshipit-source-id: c00f431db56c7502a66fe1f813fe78067f428ecb
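The two-level lookup described in the summary can be sketched in plain Python, without the JIT machinery. This is a hypothetical illustration of the dispatch pattern only; the class names mimic the ones in the PR but are not the actual `SugaredValue` implementations:

```python
# Sketch of the attribute-dispatch strategy: accessing `ops` yields a
# namespace object, and accessing a field on that namespace yields a
# callable resolved to a fully qualified "namespace::op" name.

class BuiltinFunction:
    """Stands in for the JIT's BuiltinFunction: a resolved, callable op."""
    def __init__(self, qualified_name):
        self.qualified_name = qualified_name

    def __call__(self, *args):
        # The real class emits a graph node; here we just report the call.
        return f"emit call to {self.qualified_name} with {len(args)} arg(s)"

class OpNamespace:
    """Plays the role of CustomOpNamespaceValue: each attribute access
    resolves to a BuiltinFunction with a fully qualified name."""
    def __init__(self, namespace):
        self.namespace = namespace

    def __getattr__(self, op_name):
        return BuiltinFunction(f"{self.namespace}::{op_name}")

class Ops:
    """Plays the role of CustomOpValue: attribute access returns a namespace,
    so only code that touches `ops` pays for this lookup."""
    def __getattr__(self, namespace):
        return OpNamespace(namespace)

ops = Ops()
print(ops.custom.op("input"))  # emit call to custom::op with 1 arg(s)
```

Because resolution happens lazily on attribute access under `ops`, calls that never go through `torch.ops` are untouched, which is the performance point made above.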