Add a tagged union type that replaces tensor in the interpreter. #9368
zdevito wants to merge 2 commits into pytorch:master
Conversation
Force-pushed c1d2ce4 to 6ed3af6
facebook-github-bot left a comment:
@zdevito has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
IValue is short for interpreter value. It is used frequently, so a short name is important. It will allow us to implement more non-tensor types in an efficient way and remove many hacks from the compiler.

This PR is deliberately limited: it only introduces IValue and changes the interpreter to use it. Follow-up PRs will:
* Change the way aten_ops consume non-tensor types, so that integer lists are no longer represented as Tensors.
* Introduce TensorList as a fundamental type and remove all vararg handling in gen_jit_dispatch.
* Change the compiler to implement math on primitive numbers rather than converting them to tensors.
Force-pushed 6ed3af6 to 569e13f
facebook-github-bot left a comment:
@zdevito has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
apaszke left a comment:
LGTM. Some suggestions that might make the code nicer, but nothing big.
```cpp
std::vector<at::Tensor> toTensors(at::ArrayRef<IValue> ivalues) {
  return fmap(ivalues, [](const IValue& v) {
    return v.toTensor();
  });
}
```
```cpp
// helper to run interpreter on variables until we switch
// everything to IValue
inline variable_tensor_list runOneStage(const Code & code, variable_tensor_list inputs) {
```
torch/csrc/jit/graph_executor.cpp (outdated):
```cpp
inline variable_tensor_list runOneStage(const Code & code, variable_tensor_list inputs) {
  std::vector<IValue> stack(inputs.begin(), inputs.end());
  InterpreterState(code).runOneStage(stack);
  // note: we never unwrapped inputs, because we want autograd to record the trace
```
```cpp
std::vector<IValue> unwrapVariables(variable_tensor_list && list) const {
  return fmap(list, [](const Variable& v) -> IValue {
    return v.defined() ? autograd::as_variable_ref(v).detach() : at::Tensor();
  });
```
```cpp
  }
private:
  PointerType * pImpl;
};
```
torch/csrc/jit/register_prim_ops.cpp (outdated):
```diff
 return [num_inputs](Stack& stack) {
   bool first = true;
-  for (at::Tensor i : last(stack, num_inputs)) {
+  for (IValue i_ : last(stack, num_inputs)) {
```
torch/csrc/jit/script/compiler.cpp (outdated):
```diff
-  auto list = constant_as<at::IntList>(input);
+  auto list = constant_as<std::vector<int64_t>>(input);
   if(list)
     return std::vector<int64_t>(*list);
```
torch/csrc/jit/interpreter.h (outdated):
```diff
 // outputs for that stage, suspending the computation.
 // Call this function again continues computation where it left off.
-void runOneStage(std::vector<at::Tensor> & stack);
+void runOneStage(std::vector<IValue> & stack);
```
torch/csrc/jit/test_jit.cpp (outdated):
```cpp
auto foo2 = std::move(bar);
JIT_ASSERT(foo->use_count() == 3);
JIT_ASSERT(foo2.isIntList());
JIT_ASSERT(bar.isInt());
```
torch/csrc/jit/test_jit.cpp (outdated):
```cpp
auto move_it = std::move(baz).toIntList();
JIT_ASSERT(foo->use_count() == 2);
JIT_ASSERT(baz.isInt());
```
* Adds a None type for default values
* Other minor fixes.
Force-pushed e0570f9 to c1d2413
facebook-github-bot left a comment:
@zdevito has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Add a tagged union type that replaces tensor in the interpreter (pytorch#9368)

Summary:
IValue is short for interpreter value. It is used frequently, so a short name is important. This will allow us to implement more non-tensor types in an efficient way and remove many hacks from the compiler.

This PR is limited: it only introduces IValue and changes the interpreter to use it. Follow-up PRs will:
* Change the way aten_ops consume non-tensor types, so that integer lists are no longer represented as Tensors.
* Introduce TensorList as a fundamental type and remove all vararg handling in gen_jit_dispatch.
* Change the compiler to implement math on primitive numbers rather than converting to tensors.

cc jamesr66a apaszke

Pull Request resolved: pytorch#9368
Reviewed By: ezyang
Differential Revision: D8817598
Pulled By: zdevito
fbshipit-source-id: 29dce80611ce5f6384234de9d12a67861d2b112f