[JIT] add support for lu_unpack #33736
Conversation
💊 CircleCI build failures summary and remediations
As of commit 0f20546: 1 new failure recognized by patterns. The following build failures do not appear to be due to upstream breakage. A detailed failure analysis is available on the Dr. CI website.
    return torch.lu(x, get_infos=True)

    self.checkScript(lu_infos, (torch.randn(2, 3, 3),))
    self.checkScript(lu, (torch.randn(2, 3, 3),))
I believe this got changed back erroneously.
    return tensor.split(split_size_or_sections, dim)

    # equivalent to itertools.product(indices)
    def indices_product(indices):
Is there a way to bypass this when JIT isn't on?
It's possible, but why exactly? There should be a high bar for introducing divergent behavior between JIT and Python.
This function produces the exact same values as itertools.product. When I ran

    import time
    import torch

    # dtype, device, and pivot are defined by the surrounding test harness
    now = time.perf_counter()
    for i in range(100):
        for shape in ((3, 3), (5, 3, 3), (7, 3, 5, 5), (7, 5, 3, 3, 3)):
            a = torch.randn(*shape, dtype=dtype, device=device)
            a_lu, p = torch.lu(a, pivot=pivot)
            p_ref, l_ref, u_ref = torch.lu_unpack(a_lu, p)
    print(time.perf_counter() - now)
I consistently got 6.5 seconds for this implementation, and 7 seconds for the itertools.product implementation. So it also seems to be (somewhat) faster.
I thought there might be a performance issue, but evidently not. Thank you for clarifying.
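For reference, a loop-based indices_product along these lines can be sketched as follows. This is a minimal illustration of the approach being discussed, not necessarily the PR's exact code; it builds the Cartesian product iteratively because TorchScript supports neither itertools nor generators:

```python
from itertools import product
from typing import List


def indices_product(indices):
    # type: (List[int]) -> List[List[int]]
    # Equivalent to list(itertools.product(*(range(i) for i in indices))),
    # but written with plain loops so it is TorchScript-compatible.
    result = [[]]  # type: List[List[int]]
    for idx in indices:
        result_temp = []  # type: List[List[int]]
        for prefix in result:
            for i in range(idx):
                result_temp.append(prefix + [i])
        result = result_temp
    return result
```

For example, indices_product([2, 3]) yields the same six index tuples (as lists) as itertools.product(range(2), range(3)).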
driazati left a comment
looks good after a couple small fixes
    # equivalent to itertools.product(indices)
    def indices_product(indices):
        # type: (List[int]) -> (List[List[int]])
        empty_list : List[int] = []
We still have to support Python 2, so I think you have to use torch.jit.annotate.
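To illustrate the suggestion: Python 3 variable annotations like `empty_list : List[int] = []` are a syntax error under Python 2, so TorchScript offers torch.jit.annotate to attach the type instead. A minimal sketch (the function name is illustrative; at eager, non-scripted runtime torch.jit.annotate simply returns its second argument):

```python
import torch
from typing import List


@torch.jit.script
def make_empty_int_list():
    # type: () -> List[int]
    # Python-2-compatible way to tell TorchScript the element type
    # of an empty list, instead of `empty_list : List[int] = []`.
    empty_list = torch.jit.annotate(List[int], [])
    return empty_list
```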
    for k, j in enumerate(index_tensor_with_indices_list(LU_pivots_zero_idx, idx)):
        final_order[k], final_order[j] = final_order[j], final_order[k]
    P[idx] = P[idx].index_select(1, torch.as_tensor(final_order, device=LU_pivots.device))
    # TODO: remove index_tensor_with_indices_list when TorchScript supports indexing Tensor with list
Can you file an issue for this and link it in the comment?
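For context, the swap loop in the hunk above implements the standard conversion from LAPACK-style pivot indices to a permutation. A plain-Python sketch of that step (0-based indices here for illustration; LAPACK getrf pivots are 1-based, and the function name is hypothetical):

```python
from typing import List


def pivots_to_permutation(pivots, n):
    # type: (List[int], int) -> List[int]
    # pivots[k] = j records that rows k and j were swapped at
    # elimination step k; replaying the swaps in order recovers
    # the row permutation applied by the LU factorization.
    final_order = list(range(n))
    for k, j in enumerate(pivots):
        final_order[k], final_order[j] = final_order[j], final_order[k]
    return final_order
```

With no swaps recorded (pivots[k] == k everywhere) this returns the identity permutation.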
    for k, j in enumerate(LU_pivots_zero_idx[idx]):

    # TODO: rewrite when TorchScript supports product and map as
    # product(*map(lambda x: list(range(x)), shape[:-2]))
Same here about issue link
Differential Revision: [D20121914](https://our.internmc.facebook.com/intern/diff/D20121914)
Summary: Pull Request resolved: #33736
Test Plan: Imported from OSS
Differential Revision: D20121914
Pulled By: eellison
fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353