Restore AcceleratorAllocatorConfig to avoid potential regression #165129
guangyey wants to merge 2 commits into gh/guangyey/212/base
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165129
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit 7c5ed43 with merge base ca96c67.
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
    auto env##_name = c10::utils::get_env(#env);                              \
    if (env##_name.has_value()) {                                             \
      if (deprecated) {                                                       \
        TORCH_WARN_ONCE(#env " is deprecated, use PYTORCH_ALLOC_CONF instead"); \
TORCH_WARN would introduce overhead; I removed it in case it is the root cause of the regression.
        roundup_power2_divisions_.begin(),
            static_cast<std::vector<size_t>::difference_type>(
                last_index + 1)),
        static_cast<std::vector<size_t>::difference_type>(last_index)),
Dropped this bug fix to stay consistent with CUDAAllocatorConfig.
@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Starting merge as part of PR stack under #165281
Pull Request resolved: #165131 Approved by: https://github.com/Skylion007 ghstack dependencies: #165129
Pull Request resolved: #165135 Approved by: https://github.com/Skylion007 ghstack dependencies: #165129, #165131
Pull Request resolved: #165136 Approved by: https://github.com/Skylion007 ghstack dependencies: #165129, #165131, #165135
Restore AcceleratorAllocatorConfig to avoid potential regression (pytorch#165129) # Motivation This PR aims to restore `AcceleratorAllocatorConfig` to avoid the potential regression mentioned in pytorch#160666 (comment). These code changes will be reverted in the follow-up PR pytorch#165304. Pull Request resolved: pytorch#165129 Approved by: https://github.com/albanD
* pytorch#165129 Pull Request resolved: pytorch#165281 Approved by: https://github.com/albanD ghstack dependencies: pytorch#165129, pytorch#165131, pytorch#165135, pytorch#165136
Stack from ghstack (oldest at bottom):
Motivation

This PR aims to restore `AcceleratorAllocatorConfig` to avoid the potential regression mentioned in #160666 (comment). These code changes will be reverted in the follow-up PR #165304.