[JIT] add support for lu_unpack #33736

Closed

eellison wants to merge 5 commits into gh/eellison/56/base from gh/eellison/56/head

Conversation

@eellison
Contributor

@eellison eellison commented Feb 25, 2020

Stack from ghstack:

Differential Revision: D20121914

[ghstack-poisoned]
@eellison eellison requested a review from apaszke as a code owner February 25, 2020 00:50
eellison pushed a commit that referenced this pull request Feb 25, 2020
ghstack-source-id: c034432
Pull Request resolved: #33736
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Feb 25, 2020
@dr-ci

dr-ci Bot commented Feb 25, 2020

💊 CircleCI build failures summary and remediations

As of commit 0f20546:

  • 1/1 failures introduced in this PR

Detailed failure analysis

One may explore the probable reasons each build failed interactively on the Dr. CI website.

🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakage:

See CircleCI build binary_linux_manywheel_2_7mu_cpu_devtoolset7_test (1/1)

Step: "Run in docker" (full log | pattern match details)

Feb 26 23:50:34 ImportError: No module named typing
Feb 26 23:50:34 	librt.so.1 => /lib64/librt.so.1 (0x00007fb9bbd6a000) 
Feb 26 23:50:34 	libdl.so.2 => /lib64/libdl.so.2 (0x00007fb9bbb66000) with runpath  
Feb 26 23:50:34 + [[ manywheel == \l\i\b\t\o\r\c\h ]] 
Feb 26 23:50:34 + python -c 'import torch' 
Feb 26 23:50:34 Traceback (most recent call last): 
Feb 26 23:50:34   File "<string>", line 1, in <module> 
Feb 26 23:50:34   File "/opt/python/cp27-cp27mu/lib/python2.7/site-packages/torch/__init__.py", line 320, in <module> 
Feb 26 23:50:34     from .functional import * 
Feb 26 23:50:34   File "/opt/python/cp27-cp27mu/lib/python2.7/site-packages/torch/functional.py", line 6, in <module> 
Feb 26 23:50:34     from typing import List 
Feb 26 23:50:34 ImportError: No module named typing 

Comment thread test/test_jit.py Outdated
return torch.lu(x, get_infos=True)

self.checkScript(lu_infos, (torch.randn(2, 3, 3),))
self.checkScript(lu, (torch.randn(2, 3, 3),))
Contributor

I believe this got changed back erroneously.

Comment thread torch/functional.py Outdated
return tensor.split(split_size_or_sections, dim)

# equivalent to itertools.product(indices)
def indices_product(indices):
Contributor

Is there a way to bypass this when JIT isn't on?

Contributor Author

It's possible, but why exactly? There should be a high bar for introducing divergent behavior between JIT & Python.
This function produces the exact same values as itertools.product. When I ran:

import time
import torch

# dtype, device and pivot came from the surrounding test harness;
# the values below are stand-ins so the snippet runs on its own.
dtype, device, pivot = torch.float64, 'cpu', True

now = time.perf_counter()
for i in range(100):
    for shape in ((3, 3), (5, 3, 3), (7, 3, 5, 5), (7, 5, 3, 3, 3)):
        a = torch.randn(*shape, dtype=dtype, device=device)
        a_lu, p = torch.lu(a, pivot=pivot)
        p_ref, l_ref, u_ref = torch.lu_unpack(a_lu, p)
print(time.perf_counter() - now)

I consistently got 6.5 seconds for this implementation, and 7 seconds for the itertools.product implementation. So it also seems to be (somewhat) faster.
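
For readers following the thread, a minimal pure-Python sketch of such an itertools-free `indices_product` (my reconstruction from the diff context, not the PR's exact code) could look like:

```python
from typing import List

def indices_product(indices):
    # type: (List[int]) -> List[List[int]]
    """Enumerate every index tuple for the given dimension sizes,
    matching itertools.product(*[range(n) for n in indices]) but
    using only loops and list appends that TorchScript can compile."""
    result = [[]]  # type: List[List[int]]
    for dim_size in indices:
        grown = []  # type: List[List[int]]
        for prefix in result:
            for i in range(dim_size):
                grown.append(prefix + [i])
        result = grown
    return result
```

The grow-by-one-dimension loop avoids generators and varargs, which is what makes it scriptable while producing the same values as itertools.product.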

Contributor

I thought there might be a performance issue, but evidently not. Thank you for clarifying.

Contributor

@driazati driazati left a comment

looks good after a couple small fixes

Comment thread torch/functional.py Outdated
# equivalent to itertools.product(indices)
def indices_product(indices):
# type: (List[int]) -> (List[List[int]])
empty_list : List[int] = []
Contributor

We still have to support Python 2, so I think you have to use torch.jit.annotate
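
For context, a hedged sketch of the Python-2-compatible spelling the reviewer is suggesting (function name is illustrative, not from the PR):

```python
from typing import List
import torch

def make_empty_list():
    # type: () -> List[int]
    # 'empty_list: List[int] = []' is a SyntaxError on Python 2;
    # torch.jit.annotate carries the same element-type information
    # for TorchScript and is an eager no-op that returns the value.
    empty_list = torch.jit.annotate(List[int], [])
    return empty_list
```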

Comment thread torch/functional.py Outdated
for k, j in enumerate(index_tensor_with_indices_list(LU_pivots_zero_idx, idx)):
final_order[k], final_order[j] = final_order[j], final_order[k]
P[idx] = P[idx].index_select(1, torch.as_tensor(final_order, device=LU_pivots.device))
# TODO: remove index_tensor_with_indices_list when TorchScript supports indexing Tensor with list
Contributor

Can you file an issue for this and link it in the comment?

Contributor Author

When we had similar issues we just closed them: #32246, #27493

Comment thread torch/functional.py Outdated
for k, j in enumerate(LU_pivots_zero_idx[idx]):

# TODO: rewrite when TorchScript supports product and map as
# product(*map(lambda x: list(range(x)), shape[:-2]))
Contributor

Same here about issue link
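
For reference, the itertools form named in the TODO behaves like this in plain Python: it enumerates one index tuple per batch matrix, over all but the last two dimensions (the shape here is the (7, 3, 5, 5) case from the timing snippet above).

```python
import itertools

shape = (7, 3, 5, 5)
# The expression from the TODO comment: one tuple per batch entry,
# iterating over every dimension except the trailing matrix dims.
batch_indices = list(itertools.product(*map(lambda x: list(range(x)), shape[:-2])))
# 7 * 3 = 21 index tuples, from (0, 0) through (6, 2).
```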

@facebook-github-bot
Contributor

@eellison merged this pull request in f31b1d3.

hczhu pushed a commit that referenced this pull request Feb 28, 2020
Summary: Pull Request resolved: #33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
@facebook-github-bot facebook-github-bot deleted the gh/eellison/56/head branch March 1, 2020 15:17
ttumiel pushed a commit to ttumiel/pytorch that referenced this pull request Mar 4, 2020
Summary: Pull Request resolved: pytorch#33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353