
Delete denseTypeIdWithDefault and toDense#54016

Closed
ezyang wants to merge 5 commits into gh/ezyang/948/base from gh/ezyang/948/head
Conversation

@ezyang
Contributor

@ezyang ezyang commented Mar 15, 2021

Stack from ghstack:

I managed to convince myself that typeIdWithDefault was sufficient for
the sparse constructor case. Here is the reasoning.

The surface reading of the use site of denseTypeIdWithDefault is
to convert what could be a sparse dispatch key into the dense version
so we can properly allocate underlying dense tensors for the sparse
constructor call. But WHERE does this dispatch key come from?
Inspection of call sites reveals that the dispatch key is provided by
torch::tensors::get_default_dispatch_key(). This key is NEVER
sparse, as that would correspond to setting sparse tensors to be
the default tensor via torch.set_default_tensor_type() (which is
forbidden, and even if it worked most of PyTorch would
break). That means that typeIdWithDefault is a sufficient replacement.

With denseTypeIdWithDefault removed, we can also delete toDense
as this was the sole use of that function.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D27109511
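The argument above can be modeled with a toy sketch. Everything here is illustrative: the real DispatchKey enum lives in c10/core/DispatchKey.h, and get_default_dispatch_key() is only mimicked by a stub that, like the real one, can never return a sparse key.

```cpp
#include <cassert>

// Toy stand-in for c10::DispatchKey; names and layout are illustrative only.
enum class Key { CPU, CUDA, SparseCPU, SparseCUDA };

// What toDense did: map a possibly-sparse key to its dense counterpart,
// leaving dense keys untouched.
Key toDense(Key k) {
  switch (k) {
    case Key::SparseCPU:  return Key::CPU;
    case Key::SparseCUDA: return Key::CUDA;
    default:              return k;
  }
}

// Stub for torch::tensors::get_default_dispatch_key(): because
// torch.set_default_tensor_type() rejects sparse types, the real function
// can likewise only ever return a dense key.
Key get_default_dispatch_key() { return Key::CPU; }

// On a dense key, toDense is the identity. So wrapping the default key in
// toDense (which is effectively all denseTypeIdWithDefault added) is a
// no-op at this call site, and plain typeIdWithDefault suffices.
bool toDenseIsNoopOnDefault() {
  Key d = get_default_dispatch_key();
  return toDense(d) == d;
}
```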

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 15, 2021

💊 CI failures summary and remediations

As of commit 9970b6b (more details on the Dr. CI page):


  • 5/5 failures possibly* introduced in this PR
    • 1/5 non-scanned failure(s)

4 failures not recognized by patterns:

Job Step Action
CircleCI pytorch_linux_xenial_py3_clang5_asan_test2 Run tests 🔁 rerun
CircleCI pytorch_linux_bionic_py3_8_gcc9_coverage_build Build 🔁 rerun
CircleCI pytorch_windows_vs2019_py36_cuda10.1_test1 Install Cuda 🔁 rerun
CircleCI pytorch_windows_vs2019_py36_cuda10.1_test2 Test 🔁 rerun

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt-out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@ezyang ezyang requested a review from bdhirsh March 15, 2021 18:23
@ezyang
Contributor Author

ezyang commented Mar 15, 2021

cc @gchanan

ezyang added a commit that referenced this pull request Mar 15, 2021
ghstack-source-id: 3793ebf
Pull Request resolved: #54016
@ezyang ezyang requested review from ailzhang and bhosmer March 15, 2021 18:23
ezyang added a commit that referenced this pull request Mar 15, 2021
ghstack-source-id: ee0d5bd
Pull Request resolved: #54016
Contributor

@ailzhang ailzhang left a comment

Nice!


@bhosmer bhosmer left a comment


Would it be worth adding an assertion that the passed dispatch_key isn't sparse? It seems as though the code must have been written with that possibility in mind, so would be nice to fail fast in that case.

@ezyang
Contributor Author

ezyang commented Mar 17, 2021

@bhosmer ~~Due to some diffs stacked on top, this is a little difficult to do in this PR directly; I will add the asserts in #54034~~ Actually I was wrong, this is fine.
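A fail-fast check in the spirit of bhosmer's suggestion could look like the sketch below. The key enum and `isSparse` helper are toy stand-ins for the real c10 types; in PyTorch itself this would be a TORCH_INTERNAL_ASSERT rather than a plain assert.

```cpp
#include <cassert>

// Toy key set standing in for c10::DispatchKey; illustrative only.
enum class Key { CPU, CUDA, SparseCPU, SparseCUDA };

// Hypothetical helper: does this key denote a sparse backend?
bool isSparse(Key k) {
  return k == Key::SparseCPU || k == Key::SparseCUDA;
}

// Fail-fast guard: the sparse constructor path assumes its dispatch key is
// dense, so reject sparse keys eagerly instead of silently misbehaving.
Key checkedDenseKey(Key k) {
  assert(!isSparse(k) && "sparse constructor expected a dense dispatch key");
  return k;
}
```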

@facebook-github-bot
Contributor

@ezyang merged this pull request in 7e7533b.

@facebook-github-bot facebook-github-bot deleted the gh/ezyang/948/head branch March 21, 2021 14:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#54016


Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27109511

Pulled By: ezyang

fbshipit-source-id: c698eff0ab54c0c101fe9f55be3b7657584c4372
