
Eagerly execute inplace ops if in eager mode #7322

Merged
JackCaoG merged 3 commits into master from JackCaoG/in_place_update_egaer
Jun 21, 2024

Conversation

@JackCaoG
Collaborator

Even with functionalization we still see some in-place ops, such as optimization_barrier_ and all_reduce_; currently in eager mode they won't be executed.

Eager mode today works like this:

XLATensorPtr XLATensor::Create(
    torch::lazy::Value ir_value, const torch::lazy::BackendDevice& device,
    std::optional<at::ScalarType> logical_element_type) {
  XLATensorPtr xtensor = c10::make_intrusive<XLATensor>(
      XLATensor(std::move(ir_value), device, logical_element_type));
  XLAGraphExecutor* graph_executor = XLAGraphExecutor::Get();
  graph_executor->RegisterTensor(xtensor->data());
  if (UseEagerDebugMode() || graph_executor->UseEagerMode()) {
    std::vector<XLATensorPtr> xtensors({xtensor});
    graph_executor->ApplyEagerSync(xtensors);
  }
  return xtensor;
}

When creating a new XLATensor from an IR value, we execute that IR. This doesn't handle the in-place update case, since no new XLATensor is created there.

I also need to handle two special cases in a follow-up PR:

  1. random seed IR
  2. all_reduce_token IR

@JackCaoG added the usability (Bugs/features related to improving the usability of PyTorch/XLA) and eager (PyTorch/XLA eager-mode) labels and removed the egaer label on Jun 20, 2024
@JackCaoG JackCaoG marked this pull request as ready for review June 21, 2024 01:19

Labels

eager PyTorch/XLA eager-mode usability Bugs/features related to improving the usability of PyTorch/XLA
