fix a bug where an uninitialized at::Tensor was passed to createPyObject #2239

Merged

soumith merged 1 commit into pytorch:master on Jul 29, 2017
Conversation
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request on Jul 31, 2017

* commit '8262920b72374b1d9643f35057663ab02ab20330':
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request on Jul 31, 2017

* commit '8262920b72374b1d9643f35057663ab02ab20330': (272 commits)
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
  Increase tol. for float tensor qr big test.
  Improve Variable.retain_grad
  add `retain_grad` method to Variable, so the gradient gets stored during backprop on non-user variables
  Implement BatchNorm double backwards (pytorch#2207)
  [bugfix] in bce_with_logits logsumexp calculation (pytorch#2221)
  fix for ATen API change
  Opt into Trusty builds. (pytorch#2214)
  allow retain to be specified for unsafeTensorFromTH
  Deduplicate THPUtils_checkLong/THPUtils_unpackLong (pytorch#2218)
  fix osx build errors related to long/int64_t
  Note [Undefined-dim versus 0-dim]
  Remove __func__ hack in auto nn.
  Enable Conv groups gradgradchecks. (pytorch#2216)
  fix a bug where some scalars were getting truncated to integers incorrectly.
  ...
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request on Jul 9, 2025

…cm7.0/7.1 (pytorch#2239)

Revamped version of pytorch#2108 PR to:
- enable complex data types for sparse matmul on ROCm
- fix sparse addmm/baddbmm on ROCm
- fix sparse hipification for ROCm
- fix/enable sparse tests on ROCm (~50 tests total for non-fp16/bf16):
  - enable fp16/bf16 sparse path for rocm7.0
  - enable fp16/bf16 sparse tests for rocm7.0/7.1

```
test_sparse_csr.py::TestSparseCSRCUDA::test_bmm_cuda_*
test_sparse.py::TestSparseCUDA::test_sparse_matmul_cuda_*
test_sparse_csr.py::TestSparseCSRCUDA::test_mm_cuda_float64
test_sparse_csr.py::TestSparseCSRCUDA::test_addmm_all_sparse_csr_SparseCS*
test_sparse_csr.py::TestSparseCSRCUDA::test_addmm_sizes_all_sparse_csr_*
test_sparse_csr.py::TestSparseCSRCUDA::test_sparse_addmm_cuda_float16
```
leonling-ll pushed a commit to leonling-ll/pytorch that referenced this pull request on Jan 23, 2026

…cm7.0/7.1 (pytorch#2239)

Revamped version of pytorch#2108 PR to:
- enable complex data types for sparse matmul on ROCm
- fix sparse addmm/baddbmm on ROCm
- fix sparse hipification for ROCm
- fix/enable sparse tests on ROCm (~50 tests total for non-fp16/bf16):
  - enable fp16/bf16 sparse path for rocm7.0
  - enable fp16/bf16 sparse tests for rocm7.0/7.1

```
test_sparse_csr.py::TestSparseCSRCUDA::test_bmm_cuda_*
test_sparse.py::TestSparseCUDA::test_sparse_matmul_cuda_*
test_sparse_csr.py::TestSparseCSRCUDA::test_mm_cuda_float64
test_sparse_csr.py::TestSparseCSRCUDA::test_addmm_all_sparse_csr_SparseCS*
test_sparse_csr.py::TestSparseCSRCUDA::test_addmm_sizes_all_sparse_csr_*
test_sparse_csr.py::TestSparseCSRCUDA::test_sparse_addmm_cuda_float16
```

(cherry picked from commit cc2a69c)
No description provided.