Mark buffers that reuse other buffers #93329

Closed

wconstab wants to merge 9 commits into gh/wconstab/85/base from gh/wconstab/85/head
Conversation

@wconstab
Contributor

@wconstab wconstab commented Jan 31, 2023

Provides a way at codegen time to emit code conditioned on whether a buffer is a fresh allocation or reuses an input.

- For collective ops, if the output reuses an input, the copy can be skipped

[ghstack-poisoned]
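As a rough illustration of the idea (hypothetical names only, not the actual Inductor scheduler or codegen API), conditioning the emitted code on whether a buffer reuses an input might look like:

```python
class Buffer:
    """Hypothetical buffer record: `reuses` names the input it aliases, if any."""

    def __init__(self, name, reuses=None):
        self.name = name
        self.reuses = reuses

    @property
    def is_fresh(self):
        # A buffer is "fresh" when it does not alias any existing input.
        return self.reuses is None


def codegen_allreduce(buf, inp="arg0"):
    """Emit pseudo-code lines for a collective; skip the defensive copy
    when the buffer already reuses the input."""
    lines = []
    if buf.is_fresh:
        # Fresh allocation: materialize the buffer, then copy the input in.
        lines.append(f"{buf.name} = empty_like({inp})")
        lines.append(f"{buf.name}.copy_({inp})")
    # Either way, the collective runs in place on the buffer.
    lines.append(f"all_reduce({buf.name})")
    return lines


print(codegen_allreduce(Buffer("buf0")))                  # fresh: copy emitted
print(codegen_allreduce(Buffer("buf1", reuses="arg0")))   # reused: copy skipped
```

Here the reused buffer produces only the `all_reduce` call, while the fresh one also emits the allocation and copy.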
@pytorch-bot

pytorch-bot bot commented Jan 31, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/93329

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ee866d5:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wconstab added a commit that referenced this pull request Jan 31, 2023
ghstack-source-id: 128472a
Pull Request resolved: #93329
@wconstab wconstab added the topic: not user facing label Jan 31, 2023

cc mlazos soumith voznesenskym yanboliang penguinwu anijain2305 EikanWang jgong5 Guobing-Chen chunyuan-w XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 desertfire
wconstab added a commit that referenced this pull request Jan 31, 2023
ghstack-source-id: 5348c69
Pull Request resolved: #93329
wconstab added a commit that referenced this pull request Feb 1, 2023
ghstack-source-id: 3659a00
Pull Request resolved: #93329
@wconstab wconstab requested a review from jansel February 1, 2023 01:32
@wconstab
Contributor Author

wconstab commented Feb 1, 2023

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 1, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

Merge failed

Reason: New commits were pushed while merging. Please rerun the merge command.

Details for Dev Infra team: raised by workflow job

@wconstab
Contributor Author

wconstab commented Feb 1, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed (Rule superuser). The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team: raised by workflow job

@wconstab
Contributor Author

wconstab commented Feb 2, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed (Rule superuser). The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team: raised by workflow job

@wconstab
Contributor Author

wconstab commented Feb 2, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

ragulpr added a commit to ragulpr/pytorch that referenced this pull request Feb 2, 2023
…n-dev-setup

* origin: (898 commits)
  Move dynamo.optimizations.distributed to backends (pytorch#93408)
  Remove cuda 11.6 from nightly (pytorch#93979)
  Refactor dynamo register_backend/BACKENDS (pytorch#93389)
  Remove cuda 11.6 from CI replace with 11.7 (pytorch#93406)
  [Dynamo] Rename `GuardBuilder.guarded_code` -> `check_fn_manager` (pytorch#93934)
  Revert "Remove CUDA 11.6 from nightly builds (pytorch#93404)"
  Revert "[inductor] fix crash issue when input is a view tensor (pytorch#90150)"
  Basic Validation for FSDP `state_dict` transformations of modules with persistent buffers (pytorch#93396)
  Merge Inductor perf smoke test with other inductor CI tests (pytorch#93395)
  [inductor] Don't import torchvision (pytorch#93027)
  [FSDP][3/N] Refactor `summon_full_params` unit tests (pytorch#92298)
  [FSDP][2/N] `_summon_full_params` -> `_unshard_params` (pytorch#92297)
  Remove CUDA 11.6 from nightly builds (pytorch#93404)
  Mark buffers that reuse other buffers (pytorch#93329)
  Refactor to allow reuse of SchedulerNode.allocate (pytorch#93328)
  retire sparse_mask_helper (pytorch#91714)
  update fbgemm third party (pytorch#93907)
  [inductor] fix crash issue when input is a view tensor (pytorch#90150)
  [Inductor] add config for weight prepacking (pytorch#93811)
  Check for none for NNModuleVariable.__module__ (pytorch#93326)
  ...
@facebook-github-bot facebook-github-bot deleted the gh/wconstab/85/head branch June 8, 2023 19:19