Delete denseTypeIdWithDefault and toDense #54016
Closed
ezyang wants to merge 5 commits into gh/ezyang/948/base from
Conversation
This was referenced Mar 15, 2021
Contributor
💊 CI failures summary and remediations. As of commit 9970b6b (more details on the Dr. CI page):
4 failures not recognized by patterns:
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Contributor
Author
cc @gchanan
ezyang added a commit that referenced this pull request on Mar 15, 2021
ghstack-source-id: 3793ebf
Pull Request resolved: #54016
ezyang added a commit that referenced this pull request on Mar 15, 2021
ghstack-source-id: ee0d5bd
Pull Request resolved: #54016
This was referenced Mar 15, 2021
This was referenced Mar 17, 2021
bhosmer approved these changes on Mar 17, 2021
bhosmer left a comment
Would it be worth adding an assertion that the passed dispatch_key isn't sparse? It seems as though the code must have been written with that possibility in mind, so it would be nice to fail fast in that case.
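A minimal sketch of the kind of fail-fast guard being suggested here, using a hypothetical stand-in enum and helper name rather than the real c10::DispatchKey or PyTorch declarations:

```cpp
// Hypothetical illustration only; the stand-in enum and helper name below are
// not the real c10/torch declarations.
#include <iostream>
#include <stdexcept>

enum class DispatchKey { CPU, CUDA, SparseCPU, SparseCUDA };

bool isSparseKey(DispatchKey k) {
  return k == DispatchKey::SparseCPU || k == DispatchKey::SparseCUDA;
}

// Fail fast if a sparse key ever shows up on the path that now assumes the
// key is dense, instead of relying silently on that assumption.
DispatchKey checkedDenseKey(DispatchKey k) {
  if (isSparseKey(k)) {
    throw std::runtime_error("expected a dense dispatch key");
  }
  return k;
}

int main() {
  checkedDenseKey(DispatchKey::CPU);  // fine
  try {
    checkedDenseKey(DispatchKey::SparseCPU);  // triggers the guard
  } catch (const std::runtime_error& e) {
    std::cerr << e.what() << "\n";
  }
  return 0;
}
```

In the actual PyTorch sources this would presumably be expressed as a TORCH_CHECK (or TORCH_INTERNAL_ASSERT) on the incoming c10::DispatchKey.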
Contributor
Author
Differential Revision: [D27109511](https://our.internmc.facebook.com/intern/diff/D27109511)
Contributor
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026
Summary: Pull Request resolved: pytorch#54016
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D27109511
Pulled By: ezyang
fbshipit-source-id: c698eff0ab54c0c101fe9f55be3b7657584c4372
Stack from ghstack:
I managed to convince myself that typeIdWithDefault was sufficient for
the sparse constructor case. Here is the reasoning.
The surface reading of the use site of denseTypeIdWithDefault is
to convert what could be a sparse dispatch key into the dense version
so we can properly allocate underlying dense tensors for the sparse
constructor call. But WHERE does this dispatch key come from?
Inspection of call sites reveals that the dispatch key is provided by
torch::tensors::get_default_dispatch_key(). This key is NEVER
sparse, as that would correspond to setting sparse tensors to be
the default tensor type via torch.set_default_tensor_type() (which is
forbidden, and even if it worked, most of PyTorch would
break). That means that typeIdWithDefault is a sufficient replacement.
With denseTypeIdWithDefault removed, we can also delete toDense
as this was the sole use of that function.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D27109511
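As a standalone illustration of the reasoning above, here is a small hypothetical sketch (stand-in enum and simplified signatures, not the real torch declarations): the point is that toDense is the identity on the only keys that ever reach this code path.

```cpp
// Standalone sketch with hypothetical stand-in types; the real code uses
// c10::DispatchKey and torch::tensors::get_default_dispatch_key().
#include <cassert>

enum class DispatchKey { CPU, CUDA, SparseCPU, SparseCUDA };

// Stand-in for the default dispatch key: per the reasoning above it can never
// be a sparse key, because torch.set_default_tensor_type() refuses sparse
// tensor types.
DispatchKey get_default_dispatch_key() {
  return DispatchKey::CPU;
}

// The deleted helper: map a possibly-sparse key to its dense counterpart.
DispatchKey toDense(DispatchKey k) {
  switch (k) {
    case DispatchKey::SparseCPU:  return DispatchKey::CPU;
    case DispatchKey::SparseCUDA: return DispatchKey::CUDA;
    default:                      return k;
  }
}

int main() {
  // The only key that reaches the sparse-constructor path is the default
  // dispatch key, and toDense is the identity on dense keys, so dropping the
  // toDense step (i.e. using typeIdWithDefault directly) changes nothing.
  DispatchKey k = get_default_dispatch_key();
  assert(toDense(k) == k);
  return 0;
}
```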