[Ready] Make potrs batched#13453

Closed
vishwakftw wants to merge 7 commits into pytorch:master from vishwakftw:batch_potrs
Conversation

@vishwakftw
Contributor

@vishwakftw vishwakftw commented Nov 1, 2018

  • This is a straightforward PR, building on the batch inverse PR, with one change:
    • The GENERATE_LINALG_HELPER_n_ARGS macro has been removed, since it is not very general, and the resulting code is not very copy-pasty without it.

Billing of changes:

  • Add batching for potrs
  • Add relevant tests
  • Modify doc string

Minor changes:

  • Remove _gesv_single, _getri_single from aten_interned_strings.h.
  • Add test for CUDA potrs (2D Tensor op)
  • Move the batched shape checking to LinearAlgebraUtils.h
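What "batching" means for `potrs` can be sketched outside PyTorch: for each stacked positive-definite matrix, solve the system via its Cholesky factor. Below is a minimal NumPy illustration of those semantics; the shapes and names are illustrative and are not the ATen internals touched by this PR.

```python
import numpy as np

# Batched Cholesky solve: for each SPD A[i], solve A[i] @ x[i] = b[i]
# using the Cholesky factor L[i]. This mirrors what a batched potrs
# computes; this sketch is NOT the ATen implementation.
rng = np.random.default_rng(0)
batch, n, k = 3, 4, 2
M = rng.standard_normal((batch, n, n))
A = M @ M.transpose(0, 2, 1) + n * np.eye(n)   # batch of SPD matrices
b = rng.standard_normal((batch, n, k))

L = np.linalg.cholesky(A)                      # batched lower-triangular factors
y = np.linalg.solve(L, b)                      # forward solve L y = b, per batch
x = np.linalg.solve(L.transpose(0, 2, 1), y)   # back solve L^T x = y, per batch

assert np.allclose(A @ x, b)
```

Each batch element is an independent solve; the batched kernel simply loops (or dispatches in parallel) over the leading dimensions.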

Comment thread test/test_cuda.py Outdated

@vishwakftw vishwakftw changed the title from [WIP] Make potrs batched to [Ready] Make potrs batched Nov 3, 2018
@vishwakftw
Contributor Author

@zou3519 This is ready for review, just for your information.

@zou3519
Contributor

zou3519 commented Nov 5, 2018

@vishwakftw I'll take a look later today or tomorrow

@zou3519 zou3519 self-assigned this Nov 5, 2018
@vishwakftw
Contributor Author

Thank you, appreciate it. :-)

@zou3519
Contributor

zou3519 commented Nov 6, 2018

No, thank you for your contribution :)

@zou3519 zou3519 self-requested a review November 6, 2018 16:29
@zou3519 zou3519 removed their assignment Nov 6, 2018
Comment thread aten/src/ATen/Declarations.cwrap Outdated

 template<class scalar_t>
-void lapackGetri(int n, scalar_t *a, int lda, int *ipiv, scalar_t *work, int lwork, int *info) {
+void lapackGetri(int n, scalar_t* a, int lda, int* ipiv, scalar_t* work, int lwork, int* info) {


Contributor

@facebook-github-bot facebook-github-bot left a comment


@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Comment thread aten/src/ATen/native/cuda/BatchLinearAlgebra.cu Outdated
 '_cumsum.*', '_cumprod.*', '_sum.*', '_prod.*', '_th_.*',
-'arange.*', 'range.*', '_gesv.*', '_getri.*', '_inverse.*', 'slice',
 'randint(_out)?',
+'arange.*', 'range.*', '_gesv.*', '_getri.*', '_inverse.*', '_potrs.*',


Comment thread torch/_torch_docs.py
Comment thread tools/autograd/derivatives.yaml
Comment thread test/common_utils.py Outdated
Comment thread test/test_torch.py Outdated
b = cast(torch.randn(2, 1, 3, 4, 6))
L = get_cholesky(A, upper)
x = torch.potrs(b, L, upper=upper)
x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))


Comment thread test/test_torch.py Outdated
self.assertEqual(x.data, cast(x_exp))

# broadcasting A
A = cast(random_symmetric_pd_matrix(4))
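The broadcasting case exercised here, a single matrix shared across a batch of right-hand sides, can be sketched in NumPy as follows; the shapes and names are illustrative and are not the actual test helpers.

```python
import numpy as np

# Broadcasting sketch: one SPD matrix A shared across a batch of
# right-hand sides b. This is NOT the PyTorch test code, just an
# illustration of the expected broadcasting behavior.
rng = np.random.default_rng(1)
n, batch, k = 4, 5, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # single SPD matrix
b = rng.standard_normal((batch, n, k))

L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)          # L broadcasts over the batch dimension
x = np.linalg.solve(L.T, y)

assert np.allclose(A @ x, b)
```

The single factor broadcasts against the leading batch dimension of `b`, which is the shape-checking behavior the PR moves into `LinearAlgebraUtils.h`.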


Comment thread test/test_torch.py Outdated
- minor refactor for potrs tests
Contributor

@zou3519 zou3519 left a comment


lgtm, thank you @vishwakftw!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@vishwakftw
Contributor Author

@zou3519 is there anything that I need to do?

Contributor

@facebook-github-bot facebook-github-bot left a comment


@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zou3519
Contributor

zou3519 commented Nov 9, 2018

@vishwakftw I think it should be good, I'll let you know if any action is required

@vishwakftw
Contributor Author

@zou3519 just a notification: there were merge conflicts after the recent changes to Tensor.h/TensorMethods.h/Type.h. I fixed them.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Nov 9, 2018
Summary:
- This is a straightforward PR, building up on the batch inverse PR, except for one change:
  - The GENERATE_LINALG_HELPER_n_ARGS macro has been removed, since it is not very general and the resulting code is actually not very copy-pasty.

Billing of changes:
- Add batching for `potrs`
- Add relevant tests
- Modify doc string

Minor changes:
- Remove `_gesv_single`, `_getri_single` from `aten_interned_strings.h`.
- Add test for CUDA `potrs` (2D Tensor op)
- Move the batched shape checking to `LinearAlgebraUtils.h`
Pull Request resolved: pytorch/pytorch#13453

Reviewed By: soumith

Differential Revision: D12942039

Pulled By: zou3519

fbshipit-source-id: 1b8007f00218e61593fc415865b51c1dac0b6a35
@vishwakftw vishwakftw deleted the batch_potrs branch November 10, 2018 03:42
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026