One of the things @gchanan keeps telling me we should do (and we still haven't done, all these years later) is get rid of torch.set_default_tensor_type. Why is it bad? Let me count the ways:
Unfortunately, it is also widely used (FB only: https://fburl.com/codesearch/b3bmi3lo ) (GitHub: https://github.com/search?q=torch.set_default_tensor_type&type=code)
Let's get rid of it. Because it is widely used, we will have to hard deprecate some aspects of the functionality, and soft deprecate others. Here is how I propose we go about doing it.
Deprecation cycle one. We narrow the API so that it supports exactly six call patterns:
torch.set_default_tensor_type(torch.HalfTensor)
torch.set_default_tensor_type(torch.FloatTensor)
torch.set_default_tensor_type(torch.DoubleTensor)
torch.set_default_tensor_type(torch.cuda.HalfTensor)
torch.set_default_tensor_type(torch.cuda.FloatTensor)
torch.set_default_tensor_type(torch.cuda.DoubleTensor)
All other call patterns are deprecated and slated for removal. This list is effectively every call that actually works in set_default_tensor_type today; however, the purpose of hardcoding only six known patterns is to make sure we don't "accidentally" add more API surface (as happened with HalfTensor) and to give us more leeway in refactoring our code (we no longer have to assume, e.g., that set_default_tensor_type works with XLA tensors).
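A minimal sketch of what the narrowed entry point could look like. This is not the real implementation: the allow-list structure and the string stand-ins for the tensor type objects are hypothetical, chosen just to illustrate rejecting everything outside the six patterns with a deprecation warning.

```python
import warnings

# Hypothetical allow-list: the only six call patterns that stay supported.
# Strings stand in for the real torch.*Tensor type objects; values stand in
# for the (device, dtype) the real code would resolve them to.
_ALLOWED_TYPES = {
    "torch.HalfTensor": ("cpu", "float16"),
    "torch.FloatTensor": ("cpu", "float32"),
    "torch.DoubleTensor": ("cpu", "float64"),
    "torch.cuda.HalfTensor": ("cuda", "float16"),
    "torch.cuda.FloatTensor": ("cuda", "float32"),
    "torch.cuda.DoubleTensor": ("cuda", "float64"),
}

def set_default_tensor_type(type_name):
    """Sketch: accept exactly six patterns; warn on and reject the rest."""
    if type_name not in _ALLOWED_TYPES:
        warnings.warn(
            f"set_default_tensor_type({type_name!r}) is deprecated and will "
            "be removed; only the six Half/Float/Double CPU/CUDA tensor "
            "types are supported",
            DeprecationWarning,
        )
        return None
    return _ALLOWED_TYPES[type_name]
```

The point of the explicit table is exactly what the paragraph above argues: any new pattern has to be added deliberately, so support can't grow by accident.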
Once the other call patterns are removed, we can then refactor set_default_tensor_type into two pieces of functionality: setting the default dtype (which exists today as set_default_dtype) and _bc_only_set_default_cuda, a BC-only knob for "turning on CUDA" as the torch.cuda invocations did before. set_default_tensor_type is updated to call these two pieces of functionality and is no longer a direct part of the C bindings; furthermore, we can refactor tensor_new.cpp to consult only these two settings.
Deprecation cycle two. Ideally, we come up with some resolution to the #27878 cluster of issues. Now we can deprecate and remove set_default_tensor_type entirely, telling users to use set_default_dtype or some hypothetical #27878 replacement. Defaulting can be removed entirely from tensor_new.cpp.