lower replication_pad3d and replication_pad3d_backward#6566
Conversation
CI failed due to
wonjoo-wj left a comment
Thanks, @ManfeiBai! I didn't know we can directly re-use the existing replication pad lowering logic, but it seems like we can. Nice!
Also, why did we lower the _backward variant? Is it required?
XLATensorPtr replication_pad3d(const XLATensorPtr& input,
                               std::vector<int64_t> padding);
XLATensorPtr replication_pad3d_backward(const XLATensorPtr& grad_output,
nit: should add empty newline here
Thanks, Wonjoo! According to https://github.com/pytorch/xla/blob/0192ff75324d51d748d76f7717bbccabc15d1db8/torch_xla/csrc/tensor_methods.h#L729C1-L745C71, should we keep the same style as 1d and 2d, without an empty newline here?
Ah, then let's just keep it as is. Thanks!
Yes, we don't have a test in
wonjoo-wj left a comment
SGTM, thanks! Feel free to include the newline change in another PR; I don't want to block this PR over a nit comment.
Follow-up: re-enable PRs #6537 and #6554.
Passed local tests. Metrics printed locally: https://gist.github.com/ManfeiBai/e661eab6fae8a10a1828369a2a016b8e