Make at::numeric_limits non CUDA-specific #50902
peterbell10 wants to merge 13 commits into gh/peterbell10/46/base
Conversation
This allows numeric_limits to be used in code shared between CPU and CUDA implementations. [ghstack-poisoned]
💊 CI failures summary and remediations
As of commit c75c44b (more details on the Dr. CI page):
🕵️ 3 new failures recognized by patterns
The following CI failures do not appear to be due to upstream breakages.
#include <float.h>

// NumericLimits.h is a holder for numeric limits definitions of commonly used
// types. This header is very specific to ROCm HIP and may be removed in the
cc @ngimel
If this file is really intended only to support ROCm and std::numeric_limits is preferred, should we really make this change?
Follow-up question: has ROCm fixed its std::numeric_limits handling since this was written? Is it still necessary?
cc @jeffdaily for the ROCm question. ROCm has improved a lot since this was written; it's very possible that this is no longer needed.
std and var are implemented in SharedReduceOps.h which gets included in both CPU and CUDA code. So, this is useful, assuming the file is still necessary for ROCm compilation.
I'll need a bit of time to confirm the ROCm question.
Hey @jeffdaily - do you have an idea for how long this will take to review?
I would like confirmation: is the lack of std::numeric_limits support limited to ROCm, or does it affect CUDA as well? I can't tell whether CUDA supports all features of numeric_limits.
Everything in std::numeric_limits is constexpr, so in theory it should work in CUDA device code when the nvcc flag --expt-relaxed-constexpr is used (which PyTorch does use).
cc @ptrblck and @zasdfgbnm
Does CUDA support std::numeric_limits? @colesbury's comment in this file says it does.
I don't want to take much more time on this issue, however. @peterbell10 can we just run the CI with this file removed and std::numeric_limits used and see what happens?
It seems to compile fine when I use std::numeric_limits inside of at::numeric_limits. However, it occurs to me that the warning is meaningless anyway, since the functions in at::numeric_limits have different names and meanings from the standard std::numeric_limits functions.
Either way, that should mean I no longer need the changes in this PR.
This allows numeric_limits to be used in code shared between CPU and CUDA implementations.
ghstack-source-id: e682efb
Pull Request resolved: pytorch#50902
Stack from ghstack:
This allows numeric_limits to be used in code shared between CPU and CUDA
implementations.