[nativert] hook up memory planning to execution frame #157053
dolpm wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157053
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit 6ac419a with merge base 61712e6.
NEW FAILURE: the following job has failed.
UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D73635809
Summary: Pull Request resolved: pytorch#157053. Pretty simple: if a planner exists, which implies that planning is enabled, create a manager for each execution frame. The associated serial executor will use the withMemoryPlanner fn to ensure that deallocation is done after execution completes.

Test Plan: CI

Rollback Plan:

Reviewed By: henryoier

Differential Revision: D73635809
@pytorchbot merge -i (Initiating merge automatically since the Phabricator diff has merged; merging with -i because OSS signals were bypassed internally)
Merge started. Your change will be merged while ignoring the following 2 checks: pull / cuda12.8-py3.10-gcc9-sm75 / test (pr_time_benchmarks, 1, 1, linux.g4dn.metal.nvidia.gpu, unstable), Lint / lintrunner-noclang / linux-job. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.