Avoid writing temporary modules to disk #157713
apmorton wants to merge 4 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157713
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 7350062 with merge base a2b6afe.
This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorchbot label "topic: not user facing"
This seems morally OK, but @xush6528 should really take a look.

@apmorton Can you check whether the test failures are relevant?

@xush6528 CI is green
```python
    return generated_module

module = importlib.util.module_from_spec(spec)
sys.modules[generated_module_name] = module
spec.loader.exec_module(module)
```
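The lines under review follow the standard `importlib` pattern for executing a module without a backing file. A minimal, self-contained sketch of that pattern (the loader class, module name, and source string here are illustrative, not PyTorch's actual code):

```python
import importlib.abc
import importlib.util
import sys


class InMemoryLoader(importlib.abc.Loader):
    """Hypothetical loader: executes generated source without touching disk."""

    def __init__(self, source):
        self.source = source

    def exec_module(self, module):
        # Compile against a synthetic filename so tracebacks stay readable.
        exec(compile(self.source, f"<generated {module.__name__}>", "exec"),
             module.__dict__)


def load_in_memory(name, source):
    # spec_from_loader builds a ModuleSpec with no file-system origin.
    spec = importlib.util.spec_from_loader(name, InMemoryLoader(source))
    module = importlib.util.module_from_spec(spec)
    # Register in sys.modules *before* executing, as in the diff above,
    # so code inside the module can resolve imports of itself.
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module


mod = load_in_memory("demo_generated", "VALUE = 6 * 7")
print(mod.VALUE)  # 42
```

Registering the module in `sys.modules` before calling `exec_module` matters: it mirrors what the import machinery does and prevents re-entrant imports from re-executing the module.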
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as

@xush6528 can we merge this?

I got this message: @ezyang Do you know how to operate on this?

I guess apmorton didn't allow maintainer pushes. Can you just make a new PR with this change, crediting apmorton, and I will stamp it? Thanks.
@pytorchbot merge -r

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 642ee7e to f1b402e.
@pytorchbot merge -r

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased f1b402e to 7350062.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 3 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
|
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
@pytorchbot merge -f "all green"

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
In some cases the warning from #147744 still gets emitted because [atexit hooks aren't called](python/cpython#114279). Even in those cases, if the atexit hooks _were_ called you could end up with issues due to the directory being deleted in one process, but still being used elsewhere. It's better all round to load these modules entirely in-memory. Pull Request resolved: #157713 Approved by: https://github.com/xush6528
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
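The linked CPython issue is about `atexit` hooks being skipped on certain shutdown paths, which is exactly why disk-backed temporary modules can leak. A minimal, hypothetical repro (not PyTorch code; the child script and temp file are illustrative) showing a cleanup hook that never runs because the child exits via `os._exit()`:

```python
import os
import subprocess
import sys
import tempfile

# Child process: registers an atexit hook to delete a file, then exits via
# os._exit(), which skips interpreter shutdown entirely -- the hook never
# runs, so the "temporary" file is left behind on disk.
child_script = """
import atexit, os, sys
atexit.register(lambda: os.unlink(sys.argv[1]))
os._exit(0)
"""

with tempfile.NamedTemporaryFile(delete=False) as f:
    leftover = f.name

subprocess.run([sys.executable, "-c", child_script, leftover], check=True)
leaked = os.path.exists(leftover)
print(leaked)  # True: the atexit cleanup never ran
os.unlink(leftover)  # tidy up manually
```

Loading the generated modules entirely in memory sidesteps this class of problem: there is no on-disk directory to clean up, so neither skipped atexit hooks nor one process deleting a directory another process is still using can cause trouble.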