Add peephole optimization for type_as operators. #9316
runtian-zhou wants to merge 5 commits into pytorch:master from
Conversation
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
  }
} break;
case aten::type_as: {
  if (n->inputs().size() != 2) {
```
```
@@ -0,0 +1,5 @@
graph(%0 : Double(1)
      %1 : Double(2)) {
  %2 : Double(1) = aten::type_as(%0, %1)
```
zdevito
left a comment
The tests look good. We should make sure type_as is removed even if sizes are different.
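The point of this review note can be illustrated with a toy sketch (the names below are hypothetical stand-ins, not the JIT's actual C++ types): the peephole should only compare the statically known scalar type and device of the two inputs, so `type_as` is removable even when the tensor sizes differ.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified stand-in for a statically known tensor type:
# only the fields the check compares (scalar type, device), plus sizes
# to show they are deliberately ignored.
@dataclass
class ToyTensorType:
    scalar_type: str
    device: int          # -1 for CPU, >= 0 for a CUDA device
    sizes: tuple

def type_as_is_redundant(lhs: Optional[ToyTensorType],
                         rhs: Optional[ToyTensorType]) -> bool:
    """True when `lhs.type_as(rhs)` cannot change anything: both types are
    statically known and agree on scalar type and device. Sizes may differ."""
    if lhs is None or rhs is None:   # dynamic/unknown type: keep the op
        return False
    return lhs.scalar_type == rhs.scalar_type and lhs.device == rhs.device

# Same dtype and device but different sizes: still removable.
a = ToyTensorType("Double", -1, (1,))
b = ToyTensorType("Double", -1, (2,))
print(type_as_is_redundant(a, b))    # True

# Different dtype: the type_as must stay.
c = ToyTensorType("Float", -1, (1,))
print(type_as_is_redundant(a, c))    # False
```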
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
Value* LHS(n->input(0));
Value* RHS(n->input(1));
// If LHS and RHS have the same static type, remove the type_as operator.
if ((RHS->type()->kind() != TypeKind::DynamicType &&
```
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
} break;
case aten::type_as: {
  JIT_ASSERT(n->inputs().size() == 2);
  Value* LHS(n->input(0));
```
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
Value* LHS(n->input(0));
Value* RHS(n->input(1));
// If LHS and RHS have the same static type, remove the type_as operator.
if ((RHS->type()->kind() != TypeKind::DynamicType &&
```
@pytorchbot retest this please
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
Value* RHS(n->input(1));
// If LHS and RHS have the same static type, remove the type_as operator.
if (RHS->type()->kind() == TypeKind::TensorType) {
  auto LType = (*LHS->type()).cast<TensorType>();
```
torch/csrc/jit/passes/peephole.cpp
Outdated
```cpp
} break;
case aten::type_as: {
  JIT_ASSERT(n->inputs().size() == 2);
  Value* LHS(n->input(0));
```
test/test_jit.py
Outdated
```python
self.assertExpectedGraph(trace, subname="same_size")
trace, z = torch.jit.get_trace_graph(f, (a, c))
self.run_pass('peephole', trace)
self.assertExpectedGraph(trace, subname="different_size")
```
test/test_jit.py
Outdated
```python
fn = torch.jit.script(f)
torch._C._jit_pass_peephole(fn.graph)
self.assertExpectedGraph(fn.graph)
```
test/test_jit.py
Outdated
```python
trace, z = torch.jit.get_trace_graph(f, (a, c))
self.run_pass('peephole', trace)
self.assertExpectedGraph(trace, subname="different_device")
```
test/test_jit.py
Outdated
```python
self.assertExpectedGraph(trace, subname="different_size")
trace, z = torch.jit.get_trace_graph(f, (a, d))
self.run_pass('peephole', trace)
self.assertExpectedGraph(trace, subname="different_type")
```
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
facebook-github-bot
left a comment
@zdevito has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot retest this please
* upstream/master: (24 commits)
  - Implement tensor weak references (pytorch#9363)
  - Nuke TestCollectEnv (pytorch#9459)
  - Add test case for segmentation fault fix in grad_fn (pytorch#9457)
  - Add peephole optimization for type_as operators. (pytorch#9316)
  - Fix out-of-range error for test_neg (pytorch#9431)
  - add depthwise conv support for mkldnn (pytorch#8782)
  - Refactor `_log_sum_exp` (pytorch#9173)
  - Add ModuleDict and ParameterDict containers (pytorch#8463)
  - Introduce SupervisedPtr, delete THAllocator and THCDeviceAllocator (pytorch#9358)
  - Introducing IsInf (pytorch#9169)
  - add device to CUDAEvent (pytorch#9415)
  - Make localScalar error message more intuitive (pytorch#9443)
  - Only accept contiguous tensors in TopK for cuda (pytorch#9441)
  - Add support for .norm() pytorch onnx export and ReduceL1/ReduceL2 caffe2 operators (pytorch#9299)
  - Only view() rhs of index_put if we need to (pytorch#9424)
  - Add BatchBucketizeOp in caffe2 (pytorch#9385)
  - Implementation of Wngrad optimizer caffe2 python wrapper and unit test on least square regression (pytorch#9001)
  - Implementation and operator test for Wngrad optimizer (pytorch#8999)
  - Fix segmentation fault in grad_fn (pytorch#9292)
  - update docs (pytorch#9423)
  - ...
Summary: If the type_as operator takes in two values with the same type, remove that operator. Pull Request resolved: pytorch#9316 Reviewed By: zdevito Differential Revision: D8808355 fbshipit-source-id: 2d5710a6380b22f4568fc38a439061b5340c4eb1
If the type_as operator takes in two values with the same type, remove that operator.
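The removal itself amounts to rerouting every use of the redundant node's output to its first input and dropping the node. A minimal sketch of that rewrite on a hypothetical toy IR (not the JIT's actual data structures), assuming the type check from the pass is supplied as a predicate:

```python
class Node:
    """Toy IR node: an operator kind plus a list of input values/nodes."""
    def __init__(self, kind, inputs):
        self.kind = kind
        self.inputs = inputs

def eliminate_type_as(nodes, same_static_type):
    """Drop every aten::type_as node for which `same_static_type(n)` holds,
    rerouting its uses to its first input. `nodes` is in topological order."""
    replacements = {}   # removed node -> value that replaces its output
    kept = []
    for n in nodes:
        # Route this node's inputs through any earlier replacements.
        n.inputs = [replacements.get(i, i) for i in n.inputs]
        if n.kind == "aten::type_as" and same_static_type(n):
            replacements[n] = n.inputs[0]   # output := first input
        else:
            kept.append(n)
    return kept

# x and y already have the same static type, so t is redundant.
x, y = "x", "y"
t = Node("aten::type_as", [x, y])
add = Node("aten::add", [t, x])
kept = eliminate_type_as([t, add], lambda n: True)
print([n.kind for n in kept])   # ['aten::add']
print(add.inputs)               # ['x', 'x']
```

After the pass, `add` consumes `x` directly and the `type_as` node is gone; any later dead-code elimination would have nothing left to clean up for it.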