
Commit db79cf9

Update on "[compiled autograd] Always proxy autograd.Function nodes; handle AOT backwards"
We now always proxy autograd.Function nodes in compiled autograd's initial graph capture (previously there was an option to either proxy the node or trace into the autograd.Function).

The AOTBackward needs special handling. Compiled autograd runs accumulate-grad reordering passes on the AOTBackward graph directly after the initial graph capture, so we can't just proxy a single node for it. Instead, we:
- proxy the AOTBackward prologue function into the CA graph,
- copy-paste the AOTBackward graph into the CA graph, and
- trace directly through the epilogue (the traced nodes go into the CA graph).

Tracing through the epilogue is safe (assuming no Tensor subclasses) because the only thing the epilogue does is drop some outputs. The Tensor subclass situation was already broken, so this doesn't regress anything, but this PR sets it up to be fixed in a follow-up, where we will proxy "make_subclass" calls into the graph from the epilogue.

Test Plan:
- existing tests

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang aakhundov xmfan

[ghstack-poisoned]
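The three-step splice described above (proxy the prologue, copy the AOT backward graph, trace through the epilogue) can be illustrated with a toy sketch. This is not the real torch internals; `Graph`, `add_node`, and `splice_aot_backward` are hypothetical stand-ins that only model why the middle step must copy nodes in rather than proxy one opaque call: later passes need to see and rewrite the individual nodes.

```python
class Graph:
    """A toy graph: an ordered list of (op_name, inputs) nodes."""

    def __init__(self):
        self.nodes = []

    def add_node(self, op, inputs):
        self.nodes.append((op, list(inputs)))
        return f"%{len(self.nodes) - 1}"  # symbolic handle to the node's result


def splice_aot_backward(ca_graph, aot_bw_graph, grads):
    # Step 1: proxy the AOT backward prologue as a single opaque call.
    out = ca_graph.add_node("aot_bw_prologue", grads)

    # Step 2: copy-paste the AOT backward graph node-by-node into the CA
    # graph, so passes that run right after capture (e.g. accumulate-grad
    # reordering) can rewrite its nodes instead of seeing one black box.
    for op, _ in aot_bw_graph.nodes:
        out = ca_graph.add_node(op, [out])

    # Step 3: trace through the epilogue. Since the epilogue only drops
    # some outputs (assuming no Tensor subclasses), tracing it just
    # records which outputs survive, modeled here as a single select.
    return ca_graph.add_node("select_outputs", [out])
```

Running it on a two-node fake AOT backward graph yields a CA graph whose interior nodes are individually visible (prologue, the two copied nodes, and the epilogue's select), which is the property the reordering passes rely on.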
2 parents 06be922 + 80a4db4 commit db79cf9

1 file changed: torch/csrc/autograd/functions/tensor.cpp (4 additions, 4 deletions)
@@ -74,8 +74,8 @@ variable_list CopyBackwards::apply_with_saved(
   std::vector<at::TypePtr> schema = {
       IValuePacker<std::array<bool, 2>>::packed_type(),
       IValuePacker<c10::TensorOptions>::packed_type()};
-  const auto& interface = torch::dynamo::autograd::getPyCompilerInterface();
-  interface->bind_function(
+  const auto& pyinterface = torch::dynamo::autograd::getPyCompilerInterface();
+  pyinterface->bind_function(
       saved.get_py_compiler(),
       name(),
       CopyBackwards_apply_functional_ivalue,
@@ -91,8 +91,8 @@ variable_list CopyBackwards::apply_with_saved(
   IValuePacker<std::vector<std::optional<InputMetadata>>>::pack(
       torch::dynamo::autograd::get_input_metadata(next_edges()));
 
-  const auto& interface = torch::dynamo::autograd::getPyCompilerInterface();
-  auto result = interface->call_function(
+  const auto& pyinterface = torch::dynamo::autograd::getPyCompilerInterface();
+  auto result = pyinterface->call_function(
       saved.get_py_compiler(),
       "apply_functional",
       name(),
