Decrease number of bytes used by uninitialized `tokens_` in KernelFunction (#160764)
mikaylagawarecki wants to merge 3 commits into gh/mikaylagawarecki/335/base
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/160764
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit b468461 with merge base a9fabeb. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
…KernelFunction"
@pytorchbot merge |
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 3 jobs have failed; the first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / build, trunk / linux-jammy-rocm-py3.10 / build, trunk / verify-cachebench-cpu-build / build. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge |
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…nelFunction (pytorch#160764)" This reverts commit 30384ab.
…ion (pytorch#160764): use std::unique_ptr to decrease the size from 24 bytes to 8. Since std::unique_ptr is not copyable, this required defining the copy constructor and copy-assignment operator, which made me realize we shouldn't be copying `tokens_` in those. Pull Request resolved: pytorch#160764. Approved by: https://github.com/albanD
…nelFunction (pytorch#160764)" This reverts commit 30384ab. (cherry picked from commit cd45fe7)
Use std::unique_ptr to decrease the size from 24 bytes to 8.

Since std::unique_ptr is not copyable, this required defining the copy constructor and copy-assignment operator, which made me realize we shouldn't be copying `tokens_` in those.

Stack from ghstack (oldest at bottom):