Enable passing down dynamic dimensions from torch to XLA #5790
Conversation
qihqi left a comment:
Please change the title of PR to something more descriptive: something like "Enable passing down dynamic dimensions from torch to XLA" etc.
Also, change all C++ uses of the macro to an env var, like the Python one (so that there is no need to recompile to enable the feature).
```cpp
    : xla::Reshape(input, shape.dimensions());
}

#if EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM
```
Not a fan of the macro; if you want to guard these functions, you can add a check like `XLA_CHECK(getenv("EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM"))`.
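A runtime env-var guard along the lines of this suggestion could look roughly like the sketch below. `ParseEnvFlag` and `UnboundedDynamismEnabled` are hypothetical names, and the real `runtime::sys_util::GetEnvBool` in torch_xla may accept more spellings than just `"1"`:

```cpp
#include <cstdlib>
#include <cstring>

// Pure helper (hypothetical name): interpret an env-var value as a flag.
// torch_xla's real GetEnvBool may accept more spellings than "1".
bool ParseEnvFlag(const char* value) {
  return value != nullptr && std::strcmp(value, "1") == 0;
}

// Read the variable once and cache the result, so the feature can be
// toggled at run time without recompiling, unlike a compile-time #if.
bool UnboundedDynamismEnabled() {
  static const bool enabled =
      ParseEnvFlag(std::getenv("EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM"));
  return enabled;
}
```

Caching in a function-local `static` keeps the `getenv` cost to a single call while still avoiding the recompile that a macro guard requires.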
```cpp
get_dim_ops.push_back(xla::GetDimensionSize(aux_op, i));

auto s = ShapeHelper::ShapeOfXlaOp(get_dim_ops.back());
std::cout << "implicitB shape: " << xla::ShapeUtil::HumanString(s)
```
```cpp
xla::internal::XlaBuilderFriend builder_friend;
auto* inst = builder_friend.GetInstruction(result_ops[0]);
auto* mutable_dynamic =
    inst->mutable_shape()->mutable_is_dynamic_dimension();
if (mutable_dynamic->empty()) {
  for (int i = 0; i < inst->dimensions_size(); i++) {
    mutable_dynamic->Add(false);
  }
}
auto* mutable_dims = inst->mutable_shape()->mutable_dimensions();
for (const auto dim : casted->dynamic_dims()) {
  mutable_dynamic->Set(dim, true);
  mutable_dims->Set(dim, kUnboundedSize);
}
```
Do we always need to run these, or should we guard this in a conditional?

Let's guard it by checking whether the XlaNode has a dynamic dim.
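The marking logic under discussion can be mimicked in isolation, including the guard the reviewers ask for. Here `Shape` and `MarkUnbounded` are simplified stand-ins for the XLA instruction's mutable shape proto, and the `kUnboundedSize` sentinel value is an assumption:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Assumed sentinel for an unbounded dynamic dimension size.
constexpr int64_t kUnboundedSize = std::numeric_limits<int64_t>::min();

// Simplified stand-in for the instruction's mutable shape proto fields.
struct Shape {
  std::vector<int64_t> dimensions;
  std::vector<bool> is_dynamic_dimension;
};

// Mirrors the snippet: skip entirely when the node has no dynamic dims
// (the suggested guard), default every dimension to static if the dynamic
// bitmap is empty, then mark the requested dims dynamic and unbounded.
void MarkUnbounded(Shape& shape, const std::vector<int64_t>& dynamic_dims) {
  if (dynamic_dims.empty()) {
    return;  // nothing to do for fully static nodes
  }
  if (shape.is_dynamic_dimension.empty()) {
    shape.is_dynamic_dimension.assign(shape.dimensions.size(), false);
  }
  for (int64_t dim : dynamic_dims) {
    shape.is_dynamic_dimension[dim] = true;
    shape.dimensions[dim] = kUnboundedSize;
  }
}
```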
```cpp
    0, input_shape.element_type(), input.builder()));
xla::XlaOp scalar = XlaHelpers::ScalarValue<float>(
    0, input_shape.element_type(), input.builder());
if (experimental_unbounded_dynamism) {
```
Can this path execute in the non-export path? What do you think about limiting this experimental condition to the torch.export path?
Per the discussion below, it seems there is no better solution than using an env variable to enable the unbounded dynamism lowering. But we can put more thought into this one.
SG. This issue doesn't block this PR. Let's prepare a proposal in parallel.
```cpp
                   [](int64_t size) { return size == kUnboundedSize; });
}

xla::XlaOp XlaHelpers::DynamicUnboundedReshape(
```
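For context, the predicate in the hunk above is the kind of check used to detect unbounded dims. A self-contained sketch follows; the real helper in the PR may be shaped differently, and the `kUnboundedSize` sentinel value is an assumption:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Assumed sentinel for an unbounded dynamic dimension size.
constexpr int64_t kUnboundedSize = std::numeric_limits<int64_t>::min();

// Sketch of the surrounding helper: report whether any dimension of a
// shape is unbounded-dynamic, using the same predicate as the snippet.
bool HasUnboundedDynamicDimension(const std::vector<int64_t>& dimensions) {
  return std::any_of(dimensions.begin(), dimensions.end(),
                     [](int64_t size) { return size == kUnboundedSize; });
}
```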
Can you please prepare an opset issue to track supporting this mode of dynamism?
Similar work example: #5764
Sure, I think we have a list internally
Does it work to maintain the list on GH?
```cpp
};

static const bool experimental_unbounded_dynamism =
    runtime::sys_util::GetEnvBool("EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM", false);
```
Also, when adding env variables, please keep this file up to date:
https://github.com/pytorch/xla/blob/master/configuration.yaml
The current thought is to use an env variable to limit the code path to "export only", as you've pointed out in many places. If there are better mechanisms to accomplish that, we can also use something different.
JackCaoG left a comment:
mostly lgtm, minor nits
* port sandeep unbounded dynamism change
* Enable unbounded dynamism using env var, add more guards for unbounded dynamism code path

Co-authored-by: Siyuan Liu <lsiyuan@google.coim>
This is based on #5778. This PR focuses on supporting unbounded dynamism in the LTC infra. The e2e flow will work once the unbounded dynamism support on the XLA side is merged and the required op lowering logic is updated in torch_xla.
The feature is enabled with `EXPERIMENTAL_XLA_UNBOUNDED_DYNAMISM=1`.

Test:
Tested with @sdasgup3's XLA change to propagate unbounded dynamism branch. To make the patch work with torch_xla HEAD, the branch needs to be rebased onto the pinned XLA version. Rebased branch
The unbounded dynamism can be propagated in the following example:
which gives