[FEA] RMM should not pad allocations #865
Closed
Labels: 0 - Backlog (in queue waiting for assignment), feature request (new feature or request), tech debt (internal clean up and improvements to reduce maintenance and technical debt in general)
Description
Is your feature request related to a problem? Please describe.
Currently, RMM pads every allocation that goes through the rmm::device_memory_resource interface to a multiple of 8 bytes in size.
rmm/include/rmm/mr/device/device_memory_resource.hpp
Lines 102 to 105 in 8527317
```cpp
void* allocate(std::size_t bytes, cuda_stream_view stream = cuda_stream_view{})
{
  return do_allocate(rmm::detail::align_up(bytes, 8), stream);
}
```
This was originally added in #165, which doesn't explain why. I believe it was to allow cuIO to access structures through aliased pointers in a way that would otherwise be undefined behavior.
Describe the solution you'd like
We should not pad allocations unnecessarily. RMM allocations should not have surprising behavior like this.