
Enable periodic builds for CUDA 11.7 #81688

Closed

ptrblck wants to merge 7 commits into pytorch:master from ptrblck:cu117_builds

Conversation

@ptrblck
Collaborator

@ptrblck ptrblck commented Jul 19, 2022

@ptrblck ptrblck requested a review from a team as a code owner July 19, 2022 07:06
@facebook-github-bot
Contributor

facebook-github-bot commented Jul 19, 2022


❌ 7 New Failures, 2 Base Failures

As of commit 059452d (more details on the Dr. CI page):

  • 7/9 failures introduced in this PR
  • 2/9 broken upstream at merge base 24d3ea6 on Aug 09 from 12:40am to 1:31pm

🕵️ 7 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build trunk / linux-bionic-cuda10.2-py3.9-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (1/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T21:01:18.1047367Z ##[error]The operation was canceled.
2022-08-09T21:01:13.2241994Z   test_qelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.351s)
2022-08-09T21:01:13.4663551Z   test_qgelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.242s)
2022-08-09T21:01:13.6868635Z   test_qhardsigmoid (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.220s)
2022-08-09T21:01:13.8583686Z   test_qlayer_norm (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.171s)
2022-08-09T21:01:14.1225363Z   test_qmatmul (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.264s)
2022-08-09T21:01:14.1255152Z   test_qmul_broadcast (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.003s)
2022-08-09T21:01:14.1347304Z   test_qmul_relu_different_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.009s)
2022-08-09T21:01:14.8497803Z   test_qmul_relu_same_qparams (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.715s)
2022-08-09T21:01:14.9224014Z   test_qprelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (0.072s)
2022-08-09T21:01:17.9875137Z   test_qrelu (quantization.core.test_quantized_op.TestQuantizedOps) ... ok (3.065s)
2022-08-09T21:01:18.1047367Z ##[error]The operation was canceled.
2022-08-09T21:01:18.1108947Z Prepare all required actions
2022-08-09T21:01:18.1109722Z Getting action download info
2022-08-09T21:01:18.2940707Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-08-09T21:01:18.2941398Z with:
2022-08-09T21:01:18.2942325Z   github-token: ***
2022-08-09T21:01:18.2942872Z env:
2022-08-09T21:01:18.2943445Z   GIT_DEFAULT_BRANCH: master
2022-08-09T21:01:18.2944079Z   GPU_FLAG: --gpus all
2022-08-09T21:01:18.2944634Z ##[endgroup]
2022-08-09T21:01:18.3011000Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build windows-binary-wheel / wheel-py3_7-cuda11_3-test (2/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T21:01:04.4599804Z ##[error]The operation was canceled.
2022-08-09T21:00:52.0828931Z Submodule path 'third_party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
2022-08-09T21:00:53.9507274Z From https://github.com/intel/ideep
2022-08-09T21:00:53.9507852Z  * branch            8a114a51c116b55c4ceb689b98746786bd00c29b -> FETCH_HEAD
2022-08-09T21:00:54.0395616Z Submodule path 'third_party/ideep': checked out '8a114a51c116b55c4ceb689b98746786bd00c29b'
2022-08-09T21:00:54.0583112Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn'
2022-08-09T21:00:56.6189818Z Cloning into 'C:/actions-runner/_work/pytorch/pytorch/pytorch/third_party/ideep/mkl-dnn'...
2022-08-09T21:00:58.4538647Z From https://github.com/intel/mkl-dnn
2022-08-09T21:00:58.4539117Z  * branch            888a87a954e4fddb4d81fd10858eb834f2441b46 -> FETCH_HEAD
2022-08-09T21:01:01.4963716Z Submodule path 'third_party/ideep/mkl-dnn': checked out '888a87a954e4fddb4d81fd10858eb834f2441b46'
2022-08-09T21:01:01.5192353Z Submodule 'third_party/oneDNN' (https://github.com/oneapi-src/oneDNN.git) registered for path 'third_party/ideep/mkl-dnn/third_party/oneDNN'
2022-08-09T21:01:04.4599804Z ##[error]The operation was canceled.
2022-08-09T21:01:04.5878174Z ##[group]Run .github\scripts\wait_for_ssh_to_drain.ps1
2022-08-09T21:01:04.5879055Z .github\scripts\wait_for_ssh_to_drain.ps1
2022-08-09T21:01:04.5916029Z shell: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE -command ". '{0}'"
2022-08-09T21:01:04.5916417Z env:
2022-08-09T21:01:04.5916787Z   ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
2022-08-09T21:01:04.5917118Z   ANACONDA_USER: pytorch
2022-08-09T21:01:04.5917411Z   AWS_DEFAULT_REGION: us-east-1
2022-08-09T21:01:04.5917732Z   BUILD_ENVIRONMENT: windows-binary-wheel
2022-08-09T21:01:04.5918306Z   GITHUB_TOKEN: ***
2022-08-09T21:01:04.5918584Z   PR_NUMBER: 

See GitHub Actions build trunk / win-vs2019-cuda11.6-py3 / test (default, 5, 5, windows.8xlarge.nvidia.gpu) (3/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T21:01:08.1875802Z RuntimeError: test_cpp_api_parity failed!
2022-08-09T21:01:07.7295124Z   File "C:\Jenkins\Miniconda3\lib\subprocess.py", line 495, in run
2022-08-09T21:01:07.7295513Z     stdout, stderr = process.communicate(input, timeout=timeout)
2022-08-09T21:01:07.7295948Z   File "C:\Jenkins\Miniconda3\lib\subprocess.py", line 1015, in communicate
2022-08-09T21:01:07.7296294Z     stdout = self.stdout.read()
2022-08-09T21:01:07.7296551Z KeyboardInterrupt
2022-08-09T21:01:08.1874091Z Traceback (most recent call last):
2022-08-09T21:01:08.1874622Z   File "run_test.py", line 974, in <module>
2022-08-09T21:01:08.1874886Z     main()
2022-08-09T21:01:08.1875172Z   File "run_test.py", line 952, in main
2022-08-09T21:01:08.1875517Z     raise RuntimeError(err_message)
2022-08-09T21:01:08.1875802Z RuntimeError: test_cpp_api_parity failed!
2022-08-09T21:01:08.5816456Z Terminate batch job (Y/N)? 
2022-08-09T21:01:08.5818077Z 
2022-08-09T21:01:08.5818649Z (base) C:\actions-runner\_work\pytorch\pytorch\test>if ERRORLEVEL 1 goto fail 
2022-08-09T21:01:08.5822068Z 
2022-08-09T21:01:08.5822384Z (base) C:\actions-runner\_work\pytorch\pytorch\test>exit /b 1 
2022-08-09T21:01:08.8657701Z ##[error]The operation was canceled.
2022-08-09T21:01:08.9175620Z Prepare all required actions
2022-08-09T21:01:08.9176224Z Getting action download info
2022-08-09T21:01:09.1123656Z Download action repository 'nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a' (SHA:71062288b76e2b6214ebde0e673ce0de1755740a)
2022-08-09T21:01:09.3646422Z ##[group]Run ./.github/actions/get-workflow-job-id

See GitHub Actions build trunk / linux-focal-rocm5.2-py3.7 / test (default, 1, 2, linux.rocm.gpu) (4/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T20:48:20.1169359Z test_cast (__mai...Error: VariableType::ID() not implemented (0.001s)
2022-08-09T20:48:20.0375186Z   test_call_python_mod_from_tracing_fn (__main__.TestScript) ... ok (0.009s)
2022-08-09T20:48:20.0437923Z   test_call_script_fn_from_script_fn (__main__.TestScript) ... ok (0.006s)
2022-08-09T20:48:20.0528910Z   test_call_script_fn_from_script_module (__main__.TestScript) ... ok (0.009s)
2022-08-09T20:48:20.0631399Z   test_call_script_fn_from_tracing_fn (__main__.TestScript) ... ok (0.010s)
2022-08-09T20:48:20.0703705Z   test_call_script_mod_from_script_fn (__main__.TestScript) ... ok (0.007s)
2022-08-09T20:48:20.0832770Z   test_call_script_mod_from_script_module (__main__.TestScript) ... ok (0.013s)
2022-08-09T20:48:20.0846940Z   test_call_script_mod_from_tracing_fn (__main__.TestScript) ... skip: error in first class mode (0.001s)
2022-08-09T20:48:20.0965187Z   test_call_traced_fn_from_tracing_fn (__main__.TestScript) ... ok (0.012s)
2022-08-09T20:48:20.0977714Z   test_call_traced_mod_from_tracing_fn (__main__.TestScript) ... skip: error in first class mode (0.001s)
2022-08-09T20:48:20.1163118Z   test_canonicalize_control_outputs (__main__.TestScript) ... ok (0.018s)
2022-08-09T20:48:20.1169359Z   test_cast (__main__.TestScript) ... skip: RuntimeError: VariableType::ID() not implemented (0.001s)
2022-08-09T20:48:20.1362709Z   test_cat (__main__.TestScript) ... ok (0.019s)
2022-08-09T20:48:20.1455823Z   test_cat_lifts (__main__.TestScript) ... ok (0.009s)
2022-08-09T20:48:20.1514607Z   test_chr (__main__.TestScript) ... ok (0.006s)
2022-08-09T20:48:20.1532799Z   test_circular_dependency (__main__.TestScript)
2022-08-09T20:48:20.1894068Z https://github.com/pytorch/pytorch/issues/25871 ... ok (0.038s)
2022-08-09T20:48:20.2085793Z   test_class_as_attribute (__main__.TestScript) ... ok (0.019s)
2022-08-09T20:48:20.2126357Z   test_class_attribute (__main__.TestScript) ... ok (0.004s)
2022-08-09T20:48:20.2164159Z   test_class_attribute_in_script (__main__.TestScript) ... ok (0.004s)
2022-08-09T20:48:20.2234312Z   test_class_with_comment_at_lower_indentation (__main__.TestScript) ... ok (0.007s)
2022-08-09T20:48:20.2242982Z   test_code_with_constants (__main__.TestScript)

See GitHub Actions build trunk / linux-bionic-cuda10.2-py3.9-gcc7 / test (default, 1, 2, linux.4xlarge.nvidia.gpu) (5/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T19:33:51.4571346Z RuntimeError: CUDA error: device-side assert triggered
2022-08-09T19:33:48.7170462Z   File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 500, in synchronize
2022-08-09T19:33:48.7172480Z     return torch._C._cuda_synchronize()
2022-08-09T19:33:48.7173147Z RuntimeError: CUDA error: device-side assert triggered
2022-08-09T19:33:48.7173976Z CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2022-08-09T19:33:48.7174446Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2022-08-09T19:33:51.4562637Z /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/TensorCompare.cu:110: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `input[0] != c10::complex<float>(0, 0)` failed.
2022-08-09T19:33:51.4566922Z Traceback (most recent call last):
2022-08-09T19:33:51.4567704Z   File "<string>", line 4, in <module>
2022-08-09T19:33:51.4568579Z   File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 500, in synchronize
2022-08-09T19:33:51.4570892Z     return torch._C._cuda_synchronize()
2022-08-09T19:33:51.4571346Z RuntimeError: CUDA error: device-side assert triggered
2022-08-09T19:33:51.4571813Z CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2022-08-09T19:33:51.4572294Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2022-08-09T19:33:51.6760443Z ok (11.058s)
2022-08-09T19:33:51.7658964Z   test_float32_matmul_precision_get_set (__main__.TestCuda) ... ok (0.090s)
2022-08-09T19:33:51.8260716Z   test_gather_bool (__main__.TestCuda) ... ok (0.060s)
2022-08-09T19:33:51.8869511Z   test_get_device_index (__main__.TestCuda) ... ok (0.061s)
2022-08-09T19:33:51.8879749Z   test_get_set_rng_state_all (__main__.TestCuda) ... skip: only one GPU detected (0.001s)
2022-08-09T19:33:51.9665405Z   test_grad_scaling_accumulation (__main__.TestCuda) ... ok (0.078s)
2022-08-09T19:33:52.0686812Z   test_grad_scaling_autocast (__main__.TestCuda) ... ok (0.102s)
2022-08-09T19:33:52.1555566Z   test_grad_scaling_clipping (__main__.TestCuda) ... ok (0.087s)
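
As the log suggests, re-running the failing test with CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous so the stack trace points at the op that actually asserted. A hypothetical reproduction command (the test selector is illustrative, not taken from this run):

```
# Hypothetical command -- the test selector is illustrative only.
CUDA_LAUNCH_BLOCKING=1 python test/test_cuda.py -k test_assert_async -v
```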

See GitHub Actions build trunk / linux-bionic-cuda10.2-py3.9-gcc7 / test (distributed, 1, 2, linux.8xlarge.nvidia.gpu) (6/7)

Step: "Unknown" (full log | diagnosis details)

2022-08-09T21:01:19.2510439Z ##[error]The operation was canceled.
2022-08-09T21:01:18.0512841Z 
2022-08-09T21:01:18.0513250Z ----------------------------------------------------------------------
2022-08-09T21:01:18.0513581Z Ran 14 tests in 50.249s
2022-08-09T21:01:18.0513748Z 
2022-08-09T21:01:18.0516987Z OK
2022-08-09T21:01:18.0517257Z 
2022-08-09T21:01:18.0517401Z Generating XML reports...
2022-08-09T21:01:18.0641510Z Generated XML report: test-reports/python-unittest/distributed.fsdp.test_fsdp_misc/TEST-TestFSDPMisc-20220809210027.xml
2022-08-09T21:01:18.4190070Z Running distributed/test_c10d_spawn_nccl ... [2022-08-09 21:01:18.418513]
2022-08-09T21:01:18.4191061Z Executing ['/opt/conda/bin/python', '-bb', 'distributed/test_c10d_spawn_nccl.py', '-v', '--subprocess', '--import-slow-tests', '--import-disabled-tests'] ... [2022-08-09 21:01:18.418622]
2022-08-09T21:01:19.2510439Z ##[error]The operation was canceled.
2022-08-09T21:01:19.2543825Z Prepare all required actions
2022-08-09T21:01:19.2544246Z Getting action download info
2022-08-09T21:01:19.4083904Z ##[group]Run ./.github/actions/get-workflow-job-id
2022-08-09T21:01:19.4084210Z with:
2022-08-09T21:01:19.4084647Z   github-token: ***
2022-08-09T21:01:19.4084872Z env:
2022-08-09T21:01:19.4085113Z   GIT_DEFAULT_BRANCH: master
2022-08-09T21:01:19.4085391Z   GPU_FLAG: --gpus all
2022-08-09T21:01:19.4085619Z ##[endgroup]
2022-08-09T21:01:19.4118209Z ##[group]Run nick-fields/retry@71062288b76e2b6214ebde0e673ce0de1755740a

See GitHub Actions build trunk / win-vs2019-cuda11.6-py3 / test (default, 4, 5, windows.8xlarge.nvidia.gpu) (7/7)

Step: "Stop monitoring script" (full log | diagnosis details)

2022-08-09T19:50:43.6321704Z ls: cannot access ...d/win_tmp/ci_scripts/*': No such file or directory
2022-08-09T19:50:43.4998077Z + export TEST_DIR_WIN
2022-08-09T19:50:43.4998620Z + export PYTORCH_FINAL_PACKAGE_DIR=/c/2827634488/build-results/
2022-08-09T19:50:43.4999247Z + PYTORCH_FINAL_PACKAGE_DIR=/c/2827634488/build-results/
2022-08-09T19:50:43.5090297Z ++ cygpath -w /c/2827634488/build-results/
2022-08-09T19:50:43.5248158Z + PYTORCH_FINAL_PACKAGE_DIR_WIN='C:\2827634488\build-results\'
2022-08-09T19:50:43.5248892Z + export PYTORCH_FINAL_PACKAGE_DIR_WIN
2022-08-09T19:50:43.5249599Z + mkdir -p /c/actions-runner/_work/pytorch/pytorch/build/win_tmp/build/torch
2022-08-09T19:50:43.5642810Z + CI_SCRIPTS_DIR=/c/actions-runner/_work/pytorch/pytorch/build/win_tmp/ci_scripts
2022-08-09T19:50:43.5643672Z + mkdir -p /c/actions-runner/_work/pytorch/pytorch/build/win_tmp/ci_scripts
2022-08-09T19:50:43.5950842Z ++ ls '/c/actions-runner/_work/pytorch/pytorch/build/win_tmp/ci_scripts/*'
2022-08-09T19:50:43.6321704Z ls: cannot access '/c/actions-runner/_work/pytorch/pytorch/build/win_tmp/ci_scripts/*': No such file or directory
2022-08-09T19:50:43.6326350Z + '[' -n '' ']'
2022-08-09T19:50:43.6327195Z + export SCRIPT_HELPERS_DIR=/c/actions-runner/_work/pytorch/pytorch/.jenkins/pytorch/win-test-helpers
2022-08-09T19:50:43.6329049Z + SCRIPT_HELPERS_DIR=/c/actions-runner/_work/pytorch/pytorch/.jenkins/pytorch/win-test-helpers
2022-08-09T19:50:43.6329876Z + [[ win-vs2019-cuda11.6-py3 == *cuda11* ]]
2022-08-09T19:50:43.6330420Z + export BUILD_SPLIT_CUDA=ON
2022-08-09T19:50:43.6330855Z + BUILD_SPLIT_CUDA=ON
2022-08-09T19:50:43.6331373Z + [[ default = \f\o\r\c\e\_\o\n\_\c\p\u ]]
2022-08-09T19:50:43.6331903Z + [[ win-vs2019-cuda11.6-py3 == *cuda* ]]
2022-08-09T19:50:43.6332475Z + export PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda
2022-08-09T19:50:43.6333045Z + PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda

🚧 2 fixed upstream failures:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@bdhirsh bdhirsh added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jul 20, 2022
@atalman atalman added the ciflow/periodic Trigger jobs ran periodically on master (periodic.yml) on the PR label Jul 22, 2022
pytorchmergebot pushed a commit that referenced this pull request Aug 2, 2022

### Description
CUDA 11.7 requires a driver update: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
This is a Linux-only update; a follow-up PR will be required for Windows.

This should resolve this error:
https://github.com/pytorch/pytorch/runs/7478454141?check_suite_focus=true

On this PR: #81688
```
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.7, please update your driver to a newer version, or use an earlier cuda container: unknown.
```
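
That error indicates the runner's driver is older than what a CUDA 11.7 container expects. A quick illustrative check (not a script from this PR; the 515.x minimum comes from the NVIDIA release notes linked above):

```
# Illustrative check only -- not part of this PR's workflow.
# Print the driver version currently installed on the runner.
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# CUDA 11.7 containers expect a driver from roughly the 515.x branch; anything
# older triggers the nvidia-container-cli "unsatisfied condition: cuda>=11.7"
# error quoted above.
```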

### Testing
In CI

Pull Request resolved: #82472
Approved by: https://github.com/DanilBaibak, https://github.com/malfet
@atalman
Contributor

atalman commented Aug 2, 2022

@pytorchmergebot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cu117_builds onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout cu117_builds && git pull --rebase)

"102",
"113",
"116",
"117",

This file doesn't seem like it's used anymore.


I think you are right; we should generally clean the .circleci folder to remove all unused files. Multiple files there are still in use, though. Perhaps we should create a bootcamper task to perform this cleanup.

"102",
"113",
"116",
"117",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think you are right, we should generally clean .circleci folder to remove all unused files. Multiple files are still used there. Perhaps we should create a bootcamper task to perform this cleanup

facebook-github-bot pushed a commit that referenced this pull request Aug 3, 2022
(#82472)

Summary:
### Description
CUDA 11.7 requires a driver update: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
This is a Linux-only update; a follow-up PR will be required for Windows.

This should resolve this error:
https://github.com/pytorch/pytorch/runs/7478454141?check_suite_focus=true

On this PR: #81688
```
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.7, please update your driver to a newer version, or use an earlier cuda container: unknown.
```

### Testing
In CI

Pull Request resolved: #82472
Approved by: https://github.com/DanilBaibak, https://github.com/malfet

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/5aef03513f2ebcc56bd0687d99da04cc073af15c

Reviewed By: kit1980

Differential Revision: D38359595

Pulled By: atalman

fbshipit-source-id: 60eaf00f4dbe9ab456beef3cc14274d7845b71e2
Comment on lines 6 to +7

 DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID) \
-DRIVER_FN="NVIDIA-Linux-x86_64-510.60.02.run"
+DRIVER_FN="NVIDIA-Linux-x86_64-515.57.run"

Why are we updating the driver here? Doesn't the forward-compatibility guarantee for CUDA 11.x apply to 11.7?
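
For context, a pinned driver file like the one updated above is typically fetched and installed on a Linux runner roughly as sketched below. This is an illustration under assumptions (the download URL pattern and installer flags are not taken from this PR's scripts):

```
#!/usr/bin/env bash
set -euo pipefail

# Pin from the diff above; the URL pattern below is an assumption.
DRIVER_FN="NVIDIA-Linux-x86_64-515.57.run"
DRIVER_URL="https://us.download.nvidia.com/XFree86/Linux-x86_64/515.57/${DRIVER_FN}"

curl -fsSL -O "${DRIVER_URL}"
# -s / --no-drm keep the installer non-interactive on a headless CI runner.
sudo bash "${DRIVER_FN}" -s --no-drm
nvidia-smi   # sanity-check the freshly installed driver
```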

@atalman
Contributor

atalman commented Aug 8, 2022

@pytorchmergebot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cu117_builds onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout cu117_builds && git pull --rebase)

@atalman
Contributor

atalman commented Aug 9, 2022

@pytorchmergebot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cu117_builds onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout cu117_builds && git pull --rebase)

@atalman
Contributor

atalman commented Aug 9, 2022

@pytorchmergebot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge and created land time checks. See merge status here and land check progress here.

@pytorchmergebot
Collaborator

Merge failed due to 2 additional jobs have failed, first few of them are: periodic ,periodic / ios-12-5-1-arm64-metal / build
Raised by https://github.com/pytorch/pytorch/actions/runs/2827542659 If you believe this is an error, you can use the old behavior with @pytorchbot merge -g (optionally with the "ciflow/trunk" to get land signals) or use @pytorchbot merge -f "some reason here". For more information, see the bot wiki.

pytorchmergebot pushed a commit that referenced this pull request Aug 9, 2022
@atalman atalman added ciflow/binaries_wheel Trigger binary build and upload jobs for wheel on the PR ciflow/trunk Trigger trunk jobs on your pull request labels Aug 9, 2022
@atalman
Contributor

atalman commented Aug 9, 2022

@pytorchmergebot rebase

@atalman atalman removed ciflow/periodic Trigger jobs ran periodically on master (periodic.yml) on the PR ciflow/binaries_wheel Trigger binary build and upload jobs for wheel on the PR labels Aug 9, 2022
@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cu117_builds onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout cu117_builds && git pull --rebase)

@atalman
Contributor

atalman commented Aug 9, 2022

@pytorchmergebot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here

@github-actions
Contributor

Hey @ptrblck.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Aug 10, 2022
Summary:
CC atalman

Pull Request resolved: #81688
Approved by: https://github.com/atalman

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/b4f7e2264014d8e6010bad15952ff632607709c7

Reviewed By: seemethere

Differential Revision: D38585742

fbshipit-source-id: 6dd7f38a3ac78d12194a1dc4a53841c9ee495d38

Labels

ciflow/trunk Trigger trunk jobs on your pull request
cla signed
Merged
open source
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
