
Cleanup since FEATURE_TORCH_MOBILE is always true.#55835

Closed
ailzhang wants to merge 1 commit into gh/ailzhang/64/base from gh/ailzhang/64/head

Conversation

@ailzhang
Contributor

@ailzhang ailzhang commented Apr 12, 2021

Stack from ghstack:

Now that #55424 has been landed for a week with no complaints, it seems safe to say FEATURE_TORCH_MOBILE is always true and we can do some cleanup.

Differential Revision: D27721284

Now that #55238 has been landed for a week with no complaints, it seems safe to say FEATURE_TORCH_MOBILE is always true and we can do some cleanup.

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Apr 12, 2021

💊 CI failures summary and remediations

As of commit fb92b19 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_bionic_py3_8_gcc9_coverage_test1 (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun) ❄️

Apr 13 00:06:29 RuntimeError: Process 1 terminated or timed out after 100.09255838394165 seconds
Apr 13 00:06:29 ======================================================================
Apr 13 00:06:29 ERROR [100.161s]: test_py_tensors_multi_async_call (__main__.TensorPipeRpcTestWithSpawn)
Apr 13 00:06:29 ----------------------------------------------------------------------
Apr 13 00:06:29 Traceback (most recent call last):
Apr 13 00:06:29   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 322, in wrapper
Apr 13 00:06:29     self._join_processes(fn)
Apr 13 00:06:29   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 515, in _join_processes
Apr 13 00:06:29     self._check_return_codes(elapsed_time)
Apr 13 00:06:29   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 563, in _check_return_codes
Apr 13 00:06:29     raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time))
Apr 13 00:06:29 RuntimeError: Process 1 terminated or timed out after 100.09255838394165 seconds
Apr 13 00:06:29 
Apr 13 00:06:29 ----------------------------------------------------------------------
Apr 13 00:06:29 Ran 356 tests in 1302.965s
Apr 13 00:06:29 
Apr 13 00:06:29 FAILED (errors=1, skipped=6)
Apr 13 00:06:29 
Apr 13 00:06:29 Generating XML reports...
Apr 13 00:06:29 Generated XML report: test-reports/python-unittest/distributed.rpc.test_tensorpipe_agent/TEST-TensorPipeDdpComparisonTestWithSpawn-20210412234446.xml
Apr 13 00:06:29 Generated XML report: test-reports/python-unittest/distributed.rpc.test_tensorpipe_agent/TEST-TensorPipeDdpUnderDistAutogradTestWithSpawn-20210412234446.xml
Apr 13 00:06:29 Generated XML report: test-reports/python-unittest/distributed.rpc.test_tensorpipe_agent/TEST-TensorPipeDistAutogradTestWithSpawn-20210412234446.xml

🚧 3 fixed upstream failures:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

ailzhang pushed a commit that referenced this pull request Apr 12, 2021
Now that #55238 has been landed for a week with no complaints, it seems safe to say FEATURE_TORCH_MOBILE is always true and we can do some cleanup.

ghstack-source-id: 43cbb36
Pull Request resolved: #55835
@ailzhang ailzhang requested review from bhosmer, ezyang, ljk53 and malfet April 12, 2021 23:53

@bhosmer bhosmer left a comment


Just to make sure I understand, the reason we think it's ok is that we've seen no build errors from the thread_local in InferenceMode.cpp that's been sitting there for a week ungated by FEATURE_TORCH_MOBILE?

BTW I think the description is pointing to the wrong PR - 55238 still has the FEATURE_TORCH_MOBILE guard in InferenceMode.cpp, but I think the one that actually landed is 55424, which doesn't.

@facebook-github-bot
Contributor

@ailzhang merged this pull request in 1688a5d.

@facebook-github-bot facebook-github-bot deleted the gh/ailzhang/64/head branch April 18, 2021 14:14
krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
Summary:
Pull Request resolved: pytorch#55835

Now that pytorch#55238 has been landed for a week with no complaints, it seems safe to say FEATURE_TORCH_MOBILE is always true and we can do some cleanup.

Test Plan: Imported from OSS

Reviewed By: ezyang, walterddr

Differential Revision: D27721284

Pulled By: ailzhang

fbshipit-source-id: 4896bc5f736373d0922cfbe8eed0d16df62f0fa1