Conversation
@apaszke when you get a chance can you take a look?
Sorry I’ve been working on the loop unrolling PR. I’ll try to take a look over the weekend |
apaszke left a comment:
Looks great! I'm a bit concerned about the THPVariable_Wrap call in createPythonOp, so please take a look at that; otherwise it's OK.
py::function func = py::reinterpret_borrow<py::function>(py::handle(op->pyobj.get()));
bool tracing_autograd_python_function = op->tracing_autograd_python_function;
bool has_handle = hasHandleOutput(op);
JIT_ASSERT(!hasHandleOutput(op));
m.def("_tracer_enter", [](variable_list trace_inputs, std::size_t num_backwards) {
  return tracer::enter(std::move(trace_inputs), num_backwards + 1, true);
  return tracer::enter(std::move(trace_inputs), num_backwards + 1);
} else if (arg_type == 't') {
  auto var = peek(stack, next_tensor, num_inputs);
  py_inputs[i] =
      py::reinterpret_steal<py::object>(THPVariable_Wrap(var));
…e2_core_hip

'caffe2_core_hip' of github.com:petrex/pytorch: (40 commits)
* [auto] Update onnx to 52f7528 - add more shape inference tests (onnx/onnx#971) onnx/onnx@52f7528
* JIT cleanup (pytorch#7631)
* fix to build sleef when using cmake 3.11.1 (pytorch#7679)
* Fix typo in document (pytorch#7725)
* [auto] Update onnx to 6f4b1b1 - Tests for Gemm operator (onnx/onnx#885) onnx/onnx@6f4b1b1
* [auto] Update onnx to c6c6aad - Enhance the 1-element broadcast case (onnx/onnx#902) onnx/onnx@c6c6aad
* serialization for torch.device (pytorch#7713)
* Fix compile flags for MSVC (pytorch#7703)
* Fix exporting Sum to onnx (pytorch#7685)
* Renanme ZFNet to ZFNet512 (pytorch#7723)
* Implement __reduce__ for torch.dtype (pytorch#7699)
* Remove unnecessary include in vec256_float.h (pytorch#7711)
* Update from facebook (pytorch#7696)
* fix for cuda 9.2 builds (pytorch#7709)
* make BatchSampler subclass of Sampler, and expose (pytorch#7707)
* Dont emit warning for ABI incompatibility when PyTorch was built from source (pytorch#7681)
* remove index from python bindings (fixes: pytorch#7639) (pytorch#7690)
* Update _torch_docs.py (pytorch#7700)
* Fix the wrong usage of environment variables detection in cmake
* Changes from D7881937 and D7963936 plus an edit (pytorch#7605)
* ...
Cleans up dead code in the JIT:
* Remove interpreter_autograd_function
* Remove Handles
* Remove HandleBuilder
* Remove creates_handles and tracing_autograd_python_function flags
* Remove unused var_args
* Fix submodules
This removes functionality related to @compile that is no longer used:
Known to be still remaining, but now made dead: