Use c10::variant for enums in C++ API #26837
Conversation
@ShahriarSS Would you like to review this PR?
@ShahriarSS Yes, those are skipped intentionally because we likely need to rewrite how RNNs are implemented in the C++ API. I will fix the enum issue when I work on parity for the RNN layers.
@yf225 Then this looks good to me. Just one thing: is it a bad idea to have a macro that defines the enum and its extern const, like TORCH_ARG or TORCH_MODULE (is it even possible)?
@ShahriarSS Using a macro is a great idea - I will add it in this PR.
Can you give some background on why we want to do this?
@smessmer I think the discussion in #15149 captures it the best, and I think there are three main benefits of using `c10::variant`. I will update the PR description to reflect this rationale.
torch/csrc/api/include/torch/enum.h
namespace torch {
/// Variable of `Nonlinearity` type can take one of the following values:
/// - `torch::kLinear`
@yf225 Aren't these comments redundant? The accepted values are defined in the body of the declaration itself.
I am a bit worried that people won't directly see the mapping from enumtype::Linear to torch::kLinear, and this comment should help them discover that.
@ShahriarSS I changed the enum type from enumtype::Linear to enumtype::kLinear, so that people can more easily figure out that they should use torch::kLinear.
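To make the mapping concrete, here is a minimal illustration (my own sketch, using only names that appear in this thread; the include path is assumed from the file shown above):

```cpp
#include <torch/enum.h>  // assumed include path for the enum declarations
#include <type_traits>

// torch::kLinear is the value users write; its type is the tag struct
// torch::enumtype::kLinear, which is what the library dispatches on.
static_assert(
    std::is_same<
        std::decay_t<decltype(torch::kLinear)>,
        torch::enumtype::kLinear>::value,
    "torch::kLinear is an instance of torch::enumtype::kLinear");
```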
torch/csrc/api/include/torch/enum.h
>;
/// Variable of `FanMode` type can take one of the following values:
/// - `torch::kFanIn`
Same thing here about being redundant.
c10/util/variant.h
#ifdef MPARK_VARIABLE_TEMPLATES
constexpr variant_in_place_t in_place{};
constexpr variant_in_place_t variant_in_place{};
It's for avoiding conflict with the same variable name in c10/util/Optional.h. I added a comment in the c10-specific change log for better clarity.
We should just make variant.h use the optional.h definition. in_place is part of the standard.
I moved the common definition for in_place into c10/util/in_place.h, so that both variant.h and optional.h can use it.
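For readers who want a picture of that refactor, a minimal sketch of what such a shared header could look like (the actual contents of c10/util/in_place.h may differ):

```cpp
// Sketch of a shared in-place tag header; the real c10/util/in_place.h may differ.
#pragma once

namespace c10 {

// Tag type selecting in-place construction overloads, mirroring std::in_place_t.
struct in_place_t {
  explicit in_place_t() = default;
};

// One shared constant that both variant.h and Optional.h can reference,
// avoiding the name conflict discussed above.
constexpr in_place_t in_place{};

} // namespace c10
```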
#define TORCH_ENUM_DECLARE(name) \
  namespace torch { \
  namespace enumtype { \
How'd you decide on this name? Is it just an internal name or is it visible to users?
It is only an internal name - the type of the enum is e.g. torch::enumtype::kFanIn, and the variable that has this type is torch::kFanIn, which is what's used by the end user.
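A rough sketch of how such a macro could expand, based on the naming scheme described above (the exact TORCH_ENUM_DECLARE in enum.h may differ; the _SKETCH suffix is mine):

```cpp
// Sketch only: declares a tag struct in torch::enumtype plus the user-facing
// constant torch::k<name>, so call sites simply write torch::kFanIn.
#define TORCH_ENUM_DECLARE_SKETCH(name)   \
  namespace torch {                       \
  namespace enumtype {                    \
  struct k##name {};                      \
  }                                       \
  extern const enumtype::k##name k##name; \
  }

// The matching definition would live in exactly one translation unit.
#define TORCH_ENUM_DEFINE_SKETCH(name) \
  namespace torch {                    \
  const enumtype::k##name k##name{};   \
  }

TORCH_ENUM_DECLARE_SKETCH(FanIn)  // usage example: declares torch::kFanIn
```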
The way of referencing the enums got a lot wordier in this PR. Was this intentional?
Yes, it was intentional - they will be changed to the short form in #27933.
@pytorchbot rebase this please
Summary:
This PR adds temporary declarations for `torch::k{name}` enums, so that we can submit a PR to rename the enum usage in torchvision. Then, after the changes to torchvision are done, we can remove the temporary declarations in pytorch#26837 to officially move over to using `c10::variant` for enums.
Pull Request resolved: pytorch#27051
Differential Revision: D17672220
Pulled By: yf225
fbshipit-source-id: 4ae77634e8c7efa3404698f7c1a69177cbb5dab3
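For context, the temporary declarations presumably just invoke the declare macro for each enum name so that `torch::k{name}` is already spellable downstream; an illustrative sketch using names from this thread:

```cpp
// Illustrative only: temporary declarations so downstream code (e.g. torchvision)
// can already write torch::kLinear / torch::kFanIn before the full
// c10::variant migration lands.
TORCH_ENUM_DECLARE(Linear)
TORCH_ENUM_DECLARE(FanIn)
```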
Summary: Pull Request resolved: pytorch#26837
Test Plan: Imported from OSS
Differential Revision: D17579438
Pulled By: yf225
fbshipit-source-id: 9ac59df28a317fdb3be2cc02c65962ad99117127
Stack from ghstack:
The discussion in #15149 captures it the best, and there are two main benefits of using `c10::variant`:
- With `c10::variant` we can constrain the range of acceptable enums for a particular function.
- Compared to using `std::string` as enum, using `c10::variant` we can save the string comparison overhead, and also constrain the range of acceptable enums for a particular function.

Differential Revision: D17579438
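As a hedged illustration of the constraint point (the function, its return values, and the kFanOut tag are hypothetical; header paths are taken from this thread): a parameter typed as a `c10::variant` over specific tag types accepts only those tags, so passing an unrelated enum fails to compile.

```cpp
#include <c10/util/variant.h>
#include <torch/enum.h>  // assumed include path for the enum declarations

// Hypothetical helper, not part of the PR: the accepted fan modes are
// constrained at compile time by the variant's alternatives.
using FanModeType =
    c10::variant<torch::enumtype::kFanIn, torch::enumtype::kFanOut>;

double gain_for(FanModeType mode) {
  if (c10::holds_alternative<torch::enumtype::kFanIn>(mode)) {
    return 1.0;  // placeholder value
  }
  return 2.0;  // placeholder value
}

// gain_for(torch::kFanIn);   // OK
// gain_for(torch::kLinear);  // compile error: kLinear is not an alternative
```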