Remove padding from device_memory_resource #1278
Merged: rapids-bot[bot] merged 2 commits into rapidsai:branch-23.08 on Jun 2, 2023
Conversation
Contributor (Author)
See #865 (comment) for more of a summary of the decision here.
rongou approved these changes on May 30, 2023
Contributor (Author)
Thanks for the reminder, Mark. I've created test PRs:
Member
I created an issue for cuSpatial as well. Assigning you, @vyasr. rapidsai/cuspatial#1164
Contributor (Author)
Updated with a cuSpatial PR.
Contributor (Author)
OK, we now have tests passing on all the testing PRs (cugraph's has an unrelated error blocking wheel builds). I think this is good to go now.
Contributor (Author)
/merge
Sorry for jumping in late. What about padding when using the arena allocator? It seems that we need to modify this too: rmm/include/rmm/mr/device/arena_memory_resource.hpp, lines 137 to 144 in 0c08dd5
Member
Could you please raise an issue?
Done. |
rapids-bot[bot] pushed a commit to rapidsai/cudf that referenced this pull request on Jun 23, 2023:
After `rmm` removed memory padding (rapidsai/rmm#1278), some cuIO code started to have out-of-bounds access issues, because many of its compute kernels shift the input pointers back and forth to satisfy alignment requirements. This adds padding back to various memory buffers, so the buffers now have enough extra space for such shifting. With this fix, the reported issues (#13567, #13571, #13570) no longer show up.

Closes:
* #13567
* #13571
* #13570

Authors:
- Nghia Truong (https://github.com/ttnghia)

Approvers:
- David Wendt (https://github.com/davidwendt)
- Bradley Dice (https://github.com/bdice)
- Vyas Ramasubramani (https://github.com/vyasr)
- Vukasin Milovanovic (https://github.com/vuule)

URL: #13586
Description
This PR removes the `align_up` call that enforces 8B padding on all RMM device memory allocations. Padding, where needed, should be the responsibility of callers, possibly with the help of a padding adapter. It should not be enforced by RMM.

I looked at all other calls to `align_up` in the library but didn't find any others that needed removing. The other calls all seemed to be used to guarantee specific, intentional alignment requirements of other memory resources or adapters. I would appreciate verification from reviewers, though.

I've labeled this PR as breaking because it could break consumers that were implicitly relying on the current padding behavior.
Resolves #865
Resolves #1156
Checklist