[wip] Use codegen'ed unboxing wrappers #30370

Closed
smessmer wants to merge 12 commits into gh/smessmer/124/base from gh/smessmer/124/head

Conversation

@smessmer
Contributor

@smessmer smessmer commented Nov 23, 2019

Stack from ghstack:

Not all ops support the templated unboxing wrappers yet. For the ones that don't,
let's use the codegen'ed unboxing wrappers from register_aten_ops.cpp, but register
them with c10 directly instead of JIT.

The use_c10_dispatcher setting in native_functions.yaml now has a new option, 'with_codegenerated_unboxing_wrapper', which means we take the codegen'ed unboxing wrapper from register_aten_ops.cpp and register it with c10. This new option is now the default; 'unboxed_only' is not the default anymore. Ops that don't support boxed dispatch yet (i.e. ops taking TensorOptions arguments) are set to 'unboxed_only' and follow the old behavior of having register_aten_ops.cpp register the JIT op.
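
As an illustrative sketch (the op signatures below are made up, not taken from this PR), a native_functions.yaml entry would select between the two modes like this:

```yaml
# Default: the codegen'ed unboxing wrapper from register_aten_ops.cpp
# is registered with the c10 dispatcher directly.
- func: some_op(Tensor self) -> Tensor
  use_c10_dispatcher: with_codegenerated_unboxing_wrapper

# Ops taking TensorOptions don't support boxed dispatch yet, so they
# keep the old behavior of register_aten_ops.cpp registering the JIT op.
- func: some_factory_op(int[] size, *, ScalarType? dtype=None) -> Tensor
  use_c10_dispatcher: unboxed_only
```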

This temporary hack will allow us to get mobile working much faster, but the plan is
to remove this hack again once all ops support the templated unboxing wrapper.

Differential Revision: D18672405

smessmer added a commit that referenced this pull request Nov 23, 2019
ghstack-source-id: 94481318
Pull Request resolved: #30370
smessmer added a commit that referenced this pull request Nov 24, 2019
Pull Request resolved: #30370

ghstack-source-id: 94499786
smessmer added a commit that referenced this pull request Dec 17, 2019
Pull Request resolved: #30370

ghstack-source-id: 95812247
@kostmo
Member

kostmo commented Dec 17, 2019

💊 CircleCI build failures summary and remediations

As of commit 51c4532:

  • 1/17 broken upstream at merge base ecbf6f9 since Jan 18

    Please rebase on the viable/strict branch:

    Since your merge base is older than viable/strict, run these commands:

    git fetch origin viable/strict
    git rebase viable/strict
    

    Check out the recency history of this "viable master" tracking branch.

  • 12/17 failures introduced in this PR

  • 4/17 recognized as flaky ❄️

Detailed failure analysis

One may explore the probable reasons each build failed interactively on the Dr. CI website.

🕵️ 12 new failures recognized by patterns

The following build failures do not appear to be due to upstream breakage:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_ge_config_simple_test (1/12)

Step: "Test" (full log | pattern match details)

Jan 21 21:07:31 RuntimeError: test_jit_simple failed!
Jan 21 21:07:31 Ran 2346 tests in 46.593s 
Jan 21 21:07:31  
Jan 21 21:07:31 FAILED (failures=4, errors=1024, skipped=79, expected failures=1) 
Jan 21 21:07:31  
Jan 21 21:07:31 Generating XML reports... 
Jan 21 21:07:31 Traceback (most recent call last): 
Jan 21 21:07:31   File "test/run_test.py", line 456, in <module> 
Jan 21 21:07:31     main() 
Jan 21 21:07:31   File "test/run_test.py", line 449, in main 
Jan 21 21:07:31     raise RuntimeError(message) 
Jan 21 21:07:31 RuntimeError: test_jit_simple failed! 
Jan 21 21:07:31 + cleanup 
Jan 21 21:07:31 + retcode=1 
Jan 21 21:07:31 + set +x 
retrieving test reports 

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_ge_config_legacy_test (2/12)

Step: "Test" (full log | pattern match details)

Jan 21 21:05:59 caused by: Connection refused (os error 111)
Jan 21 21:05:59 +++ eval 'extract_trap_cmd ' 
Jan 21 21:05:59 ++++ extract_trap_cmd 
Jan 21 21:05:59 ++++ printf '%s\n' '' 
Jan 21 21:05:59 +++ printf '%s\n' cleanup 
Jan 21 21:05:59 ++ trap -- ' 
Jan 21 21:05:59 cleanup' EXIT 
Jan 21 21:05:59 ++ which sccache 
Jan 21 21:05:59 ++ sccache --stop-server 
Jan 21 21:05:59 Stopping sccache server... 
Jan 21 21:05:59 error: couldn't connect to server 
Jan 21 21:05:59 caused by: Connection refused (os error 111) 
Jan 21 21:05:59 ++ true 
Jan 21 21:05:59 ++ rm /var/lib/jenkins/sccache_error.log 
Jan 21 21:05:59 ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 
Jan 21 21:05:59 ++ SCCACHE_IDLE_TIMEOUT=1200 
Jan 21 21:05:59 ++ RUST_LOG=sccache::server=error 
Jan 21 21:05:59 ++ sccache --start-server 
Jan 21 21:05:59 Starting sccache server... 
Jan 21 21:05:59 ++ sccache --zero-stats 
Jan 21 21:05:59 Compile requests                 0 
Jan 21 21:05:59 Compile requests executed        0 

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_test (3/12)

Step: "Test" (full log | pattern match details)

Jan 21 21:17:26 RuntimeError: test_quantization failed!
Jan 21 21:17:26 Ran 36 tests in 41.920s 
Jan 21 21:17:26  
Jan 21 21:17:26 FAILED (errors=22, skipped=2) 
Jan 21 21:17:26  
Jan 21 21:17:26 Generating XML reports... 
Jan 21 21:17:26 Traceback (most recent call last): 
Jan 21 21:17:26   File "test/run_test.py", line 456, in <module> 
Jan 21 21:17:26     main() 
Jan 21 21:17:26   File "test/run_test.py", line 449, in main 
Jan 21 21:17:26     raise RuntimeError(message) 
Jan 21 21:17:26 RuntimeError: test_quantization failed! 
Jan 21 21:17:26 + cleanup 
Jan 21 21:17:26 + retcode=1 
Jan 21 21:17:26 + set +x 
Jan 21 21:17:26 =================== sccache compilation log =================== 
Jan 21 21:17:26 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 21:17:26 Compile requests                  7 
Jan 21 21:17:26 Compile requests executed         6 
Jan 21 21:17:26 Cache hits                        0 
Jan 21 21:17:26 Cache misses                      6 
Jan 21 21:17:26 Cache timeouts                    0 

See CircleCI build pytorch_windows_test2 (4/12)

Step: "Test" (full log | pattern match details)

RuntimeError: test_distributions failed!
 
---------------------------------------------------------------------- 
Ran 197 tests in 19.552s 
 
FAILED (errors=11) 
Traceback (most recent call last): 
  File "run_test.py", line 456, in <module> 
    main() 
  File "run_test.py", line 449, in main 
    raise RuntimeError(message) 
RuntimeError: test_distributions failed! 
 
(base) circleci@PACKER-5DDC1B75 C:\Users\circleci\project\test>if ERRORLEVEL 1 exit /b 1  
+ cleanup
+ retcode=1
+ set +x

See CircleCI build pytorch_linux_xenial_py2_7_9_test (5/12)

Step: "Test" (full log | pattern match details)

Jan 21 21:36:24 RuntimeError: test_quantization failed!
Jan 21 21:36:24 Ran 36 tests in 43.936s 
Jan 21 21:36:24  
Jan 21 21:36:24 FAILED (errors=22, skipped=2) 
Jan 21 21:36:24  
Jan 21 21:36:24 Generating XML reports... 
Jan 21 21:36:24 Traceback (most recent call last): 
Jan 21 21:36:24   File "test/run_test.py", line 456, in <module> 
Jan 21 21:36:24     main() 
Jan 21 21:36:24   File "test/run_test.py", line 449, in main 
Jan 21 21:36:24     raise RuntimeError(message) 
Jan 21 21:36:24 RuntimeError: test_quantization failed! 
Jan 21 21:36:24 + cleanup 
Jan 21 21:36:24 + retcode=1 
Jan 21 21:36:24 + set +x 
Jan 21 21:36:24 =================== sccache compilation log =================== 
Jan 21 21:36:24 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 21:36:24 Compile requests                 10 
Jan 21 21:36:24 Compile requests executed         8 
Jan 21 21:36:24 Cache hits                        2 
Jan 21 21:36:24 Cache misses                      6 
Jan 21 21:36:24 Cache timeouts                    0 

See CircleCI build pytorch_macos_10_13_py3_test (6/12)

Step: "Test" (full log | pattern match details)

Jan 21 14:10:04 RuntimeError: test_distributions failed!
Jan 21 14:10:04 Ran 197 tests in 23.943s 
Jan 21 14:10:04  
Jan 21 14:10:04 FAILED (errors=11, skipped=6) 
Jan 21 14:10:04  
Jan 21 14:10:04 Generating XML reports... 
Jan 21 14:10:04 Traceback (most recent call last): 
Jan 21 14:10:04   File "test/run_test.py", line 456, in <module> 
Jan 21 14:10:04     main() 
Jan 21 14:10:04   File "test/run_test.py", line 449, in main 
Jan 21 14:10:04     raise RuntimeError(message) 
Jan 21 14:10:04 RuntimeError: test_distributions failed! 
Jan 21 14:10:04 + cleanup 
Jan 21 14:10:04 + retcode=1 
Jan 21 14:10:04 + set +x 

See CircleCI build pytorch_linux_xenial_py3_clang5_asan_test (7/12)

Step: "Test" (full log | pattern match details)

Jan 21 22:55:13 RuntimeError: test_quantization failed!
Jan 21 22:55:13 Ran 36 tests in 321.036s 
Jan 21 22:55:13  
Jan 21 22:55:13 FAILED (errors=22, skipped=2) 
Jan 21 22:55:13  
Jan 21 22:55:13 Generating XML reports... 
Jan 21 22:55:13 Traceback (most recent call last): 
Jan 21 22:55:13   File "test/run_test.py", line 456, in <module> 
Jan 21 22:55:13     main() 
Jan 21 22:55:13   File "test/run_test.py", line 449, in main 
Jan 21 22:55:13     raise RuntimeError(message) 
Jan 21 22:55:13 RuntimeError: test_quantization failed! 
Jan 21 22:55:13 + cleanup 
Jan 21 22:55:13 + retcode=1 
Jan 21 22:55:13 + set +x 
Jan 21 22:55:13 =================== sccache compilation log =================== 
Jan 21 22:55:13 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 22:55:13 Compile requests                 0 
Jan 21 22:55:13 Compile requests executed        0 
Jan 21 22:55:13 Cache hits                       0 
Jan 21 22:55:13 Cache misses                     0 
Jan 21 22:55:13 Cache timeouts                   0 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_nogpu_test (8/12)

Step: "Test" (full log | pattern match details)

Jan 21 23:47:42 RuntimeError: test_quantization failed!
Jan 21 23:47:42 Ran 36 tests in 40.885s 
Jan 21 23:47:42  
Jan 21 23:47:42 FAILED (errors=22, skipped=2) 
Jan 21 23:47:42  
Jan 21 23:47:42 Generating XML reports... 
Jan 21 23:47:42 Traceback (most recent call last): 
Jan 21 23:47:42   File "test/run_test.py", line 456, in <module> 
Jan 21 23:47:42     main() 
Jan 21 23:47:42   File "test/run_test.py", line 449, in main 
Jan 21 23:47:42     raise RuntimeError(message) 
Jan 21 23:47:42 RuntimeError: test_quantization failed! 
Jan 21 23:47:42 + cleanup 
Jan 21 23:47:42 + retcode=1 
Jan 21 23:47:42 + set +x 
Jan 21 23:47:42 =================== sccache compilation log =================== 
Jan 21 23:47:42 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 23:47:42 Compile requests                  7 
Jan 21 23:47:42 Compile requests executed         6 
Jan 21 23:47:42 Cache hits                        0 
Jan 21 23:47:42 Cache misses                      6 
Jan 21 23:47:42 Cache timeouts                    0 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_slow_test (9/12)

Step: "Test" (full log | pattern match details)

Jan 21 23:52:35 RuntimeError: test_jit failed!
Jan 21 23:52:35 Ran 2346 tests in 56.771s 
Jan 21 23:52:35  
Jan 21 23:52:35 FAILED (errors=9, skipped=2336) 
Jan 21 23:52:35  
Jan 21 23:52:35 Generating XML reports... 
Jan 21 23:52:35 Traceback (most recent call last): 
Jan 21 23:52:35   File "test/run_test.py", line 456, in <module> 
Jan 21 23:52:35     main() 
Jan 21 23:52:35   File "test/run_test.py", line 449, in main 
Jan 21 23:52:35     raise RuntimeError(message) 
Jan 21 23:52:35 RuntimeError: test_jit failed! 
Jan 21 23:52:36 + cleanup 
Jan 21 23:52:36 + retcode=1 
Jan 21 23:52:36 + set +x 
Jan 21 23:52:36 =================== sccache compilation log =================== 
Jan 21 23:52:36 ERROR:sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "/tmp/torch_extensions/test_compilation_error_formatting/main.cpp: In function \'int main()\':\n/tmp/torch_extensions/test_compilation_error_formatting/main.cpp:2:23: error: expected \';\' before \'}\' token\n int main() { return 0 }\n                       ^\n" } 
Jan 21 23:52:36  
Jan 21 23:52:36 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 23:52:36 Compile requests                143 
Jan 21 23:52:36 Compile requests executed        51 
Jan 21 23:52:36 Cache hits                        1 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_NO_AVX2_test (10/12)

Step: "Test" (full log | pattern match details)

Jan 21 23:53:28 RuntimeError: test_quantization failed!
Jan 21 23:53:28 Ran 36 tests in 46.130s 
Jan 21 23:53:28  
Jan 21 23:53:28 FAILED (errors=22, skipped=2) 
Jan 21 23:53:28  
Jan 21 23:53:28 Generating XML reports... 
Jan 21 23:53:28 Traceback (most recent call last): 
Jan 21 23:53:28   File "test/run_test.py", line 456, in <module> 
Jan 21 23:53:28     main() 
Jan 21 23:53:28   File "test/run_test.py", line 449, in main 
Jan 21 23:53:28     raise RuntimeError(message) 
Jan 21 23:53:28 RuntimeError: test_quantization failed! 
Jan 21 23:53:29 + cleanup 
Jan 21 23:53:29 + retcode=1 
Jan 21 23:53:29 + set +x 
Jan 21 23:53:29 =================== sccache compilation log =================== 
Jan 21 23:53:29 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 23:53:29 Compile requests                32 
Jan 21 23:53:29 Compile requests executed       11 
Jan 21 23:53:29 Cache hits                       0 
Jan 21 23:53:29 Cache misses                    11 
Jan 21 23:53:29 Cache timeouts                   0 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_gcc7_test (11/12)

Step: "Test" (full log | pattern match details)

Jan 21 23:54:24 RuntimeError: test_quantization failed!
Jan 21 23:54:24 Ran 36 tests in 47.254s 
Jan 21 23:54:24  
Jan 21 23:54:24 FAILED (errors=22, skipped=2) 
Jan 21 23:54:24  
Jan 21 23:54:24 Generating XML reports... 
Jan 21 23:54:24 Traceback (most recent call last): 
Jan 21 23:54:24   File "test/run_test.py", line 456, in <module> 
Jan 21 23:54:24     main() 
Jan 21 23:54:24   File "test/run_test.py", line 449, in main 
Jan 21 23:54:24     raise RuntimeError(message) 
Jan 21 23:54:24 RuntimeError: test_quantization failed! 
Jan 21 23:54:25 + cleanup 
Jan 21 23:54:25 + retcode=1 
Jan 21 23:54:25 + set +x 
Jan 21 23:54:25 =================== sccache compilation log =================== 
Jan 21 23:54:25 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 23:54:25 Compile requests                32 
Jan 21 23:54:25 Compile requests executed       11 
Jan 21 23:54:25 Cache hits                       1 
Jan 21 23:54:25 Cache misses                    10 
Jan 21 23:54:25 Cache timeouts                   0 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_NO_AVX_NO_AVX2_test (12/12)

Step: "Test" (full log | pattern match details)

Jan 21 23:54:06 RuntimeError: test_quantization failed!
Jan 21 23:54:06 Ran 36 tests in 45.838s 
Jan 21 23:54:06  
Jan 21 23:54:06 FAILED (errors=22, skipped=2) 
Jan 21 23:54:06  
Jan 21 23:54:06 Generating XML reports... 
Jan 21 23:54:06 Traceback (most recent call last): 
Jan 21 23:54:06   File "test/run_test.py", line 456, in <module> 
Jan 21 23:54:06     main() 
Jan 21 23:54:06   File "test/run_test.py", line 449, in main 
Jan 21 23:54:06     raise RuntimeError(message) 
Jan 21 23:54:06 RuntimeError: test_quantization failed! 
Jan 21 23:54:07 + cleanup 
Jan 21 23:54:07 + retcode=1 
Jan 21 23:54:07 + set +x 
Jan 21 23:54:07 =================== sccache compilation log =================== 
Jan 21 23:54:07 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Jan 21 23:54:07 Compile requests                32 
Jan 21 23:54:07 Compile requests executed       11 
Jan 21 23:54:07 Cache hits                       0 
Jan 21 23:54:07 Cache misses                    11 
Jan 21 23:54:07 Cache timeouts                   0 

❄️ 4 failures recognized as flaky

The following build failures have been detected as flaky and may not be your fault:

See CircleCI build binary_linux_libtorch_2_7m_cpu_devtoolset7_shared-with-deps_test (1/4)

Step: "Set Up CI Environment After attach_workspace" (full log | pattern match details) ❄️

E: Failed to fetch https://nvidia.github.io/libnvidia-container/ubuntu16.04/amd64/Packages Writing more data than expected (55935 > 51667)
                                                                  96% [39 Packages store 0 B]                             Get:53 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [16.8 kB] 
                                                         96% [39 Packages store 0 B]                             Get:54 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse Translation-en [8,468 B] 
96% [Waiting for headers] 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:55 http://archive.ubuntu.com/ubuntu xenial-backports/main Sources [4,848 B] 
 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:56 http://archive.ubuntu.com/ubuntu xenial-backports/universe Sources [7,120 B] 
 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:57 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [7,280 B] 
 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:58 http://archive.ubuntu.com/ubuntu xenial-backports/main Translation-en [4,456 B] 
 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:59 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [8,064 B] 
 96% [40 Translation-en store 0 B] [Waiting for headers]                                                         Get:60 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [4,328 B] 
100% [60 Translation-en store 0 B]                                4,764 kB/s 0s 100% [Working]                                                    4,764 kB/s 0s                                                                                 Fetched 28.8 MB in 6s (4,586 kB/s) 
Reading package lists... 99%  Reading package lists... Done  
E: Failed to fetch https://nvidia.github.io/libnvidia-container/ubuntu16.04/amd64/Packages  Writing more data than expected (55935 > 51667) 
E: Some index files failed to download. They have been ignored, or old ones used instead. 

See CircleCI build caffe2_onnx_py2_gcc5_ubuntu16_04_test (2/4)

Step: "Test" (full log | pattern match details) ❄️

Jan 21 20:59:49 unknown file: Failure
Jan 21 20:59:49 frame #9: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x448685] 
Jan 21 20:59:49 frame #10: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x4488dd] 
Jan 21 20:59:49 frame #11: testing::internal::UnitTestImpl::RunAllTests() + 0xbc2 (0x4498a2 in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 frame #12: testing::UnitTest::Run() + 0x8b (0x449bbb in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 frame #13: main + 0x3c (0x417a6c in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 frame #14: __libc_start_main + 0xf0 (0x7f6747f7c830 in /lib/x86_64-linux-gnu/libc.so.6) 
Jan 21 20:59:49 frame #15: _start + 0x29 (0x417e19 in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 " thrown in the test body. 
Jan 21 20:59:49 [  FAILED  ] BackendFallbackTest.TestBackendFallbackWithMode (7 ms) 
Jan 21 20:59:49 [ RUN      ] BackendFallbackTest.TestBackendFallbackWithWrapper 
Jan 21 20:59:49 unknown file: Failure 
Jan 21 20:59:49 C++ exception with description "false INTERNAL ASSERT FAILED at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:46, please report a bug to PyTorch. Tried to call KernelFunction::callBoxed() on a KernelFunction that can only be called with KernelFunction::callUnboxed(). (callBoxed at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:46) 
Jan 21 20:59:49 frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7f6748b7428a in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/lib/libc10.so) 
Jan 21 20:59:49 frame #1: <unknown function> + 0x2be73cc (0x7f674b96f3cc in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/lib/libtorch_cpu.so) 
Jan 21 20:59:49 frame #2: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x41abf2] 
Jan 21 20:59:49 frame #3: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x41c077] 
t::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double, bool>(void (*)(c10::OperatorKernel*, c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*), c10::OperatorKernel*, c10::OperatorHandle const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double, bool) + 0x176 (0x426e46 in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 frame #5: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x41bbeb] 
Jan 21 20:59:49 frame #6: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x41d19e] 
Jan 21 20:59:49 frame #7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x43 (0x4525d3 in /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test) 
Jan 21 20:59:49 frame #8: /var/lib/jenkins/.local/lib/python2.7/site-packages/torch/test/backend_fallback_test() [0x44834b] 

See CircleCI build caffe2_onnx_py3_6_clang7_ubuntu16_04_test (3/4)

Step: "Test" (full log | pattern match details) ❄️

Jan 21 21:04:38 unknown file: Failure
Jan 21 21:04:38 frame #12: testing::TestCase::Run() + 0xf7 (0x4347a7 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #13: testing::internal::UnitTestImpl::RunAllTests() + 0x467 (0x442627 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #14: bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x44 (0x455ef4 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #15: testing::UnitTest::Run() + 0x4c (0x44218c in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #16: main + 0x37 (0x427837 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #17: __libc_start_main + 0xf0 (0x7f47d43e3830 in /lib/x86_64-linux-gnu/libc.so.6) 
Jan 21 21:04:38 frame #18: _start + 0x29 (0x415c59 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 " thrown in the test body. 
Jan 21 21:04:38 [  FAILED  ] BackendFallbackTest.TestBackendFallbackWithMode (8 ms) 
Jan 21 21:04:38 [ RUN      ] BackendFallbackTest.TestBackendFallbackWithWrapper 
Jan 21 21:04:38 unknown file: Failure 
Jan 21 21:04:38 C++ exception with description "false INTERNAL ASSERT FAILED at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:46, please report a bug to PyTorch. Tried to call KernelFunction::callBoxed() on a KernelFunction that can only be called with KernelFunction::callUnboxed(). (callBoxed at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:46) 
Jan 21 21:04:38 frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x4a (0x7f47d5986dea in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/lib/libc10.so) 
Jan 21 21:04:38 frame #1: <unknown function> + 0x7e21de (0x7f47d63781de in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so) 
Jan 21 21:04:38 frame #2: <unknown function> + 0x30d90d2 (0x7f47d8c6f0d2 in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so) 
Jan 21 21:04:38 frame #3: /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test() [0x417866] 
Jan 21 21:04:38 frame #4: /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test() [0x416eae] 
Jan 21 21:04:38 frame #5: _ZN3c104impl19boxAndCallBoxedFuncIN2at6TensorEJRKS3_S5_S5_S5_S5_bddbEEENSt9enable_ifIXaasr15supports_boxingIT_DpT0_EE5valuentsr3std7is_sameIvS7_EE5valueES7_E4typeEPFvPNS_14OperatorKernelERKNS_14OperatorHandleEPSt6vectorINS_6IValueESaISI_EEESD_SG_S9_ + 0x33a (0x426b7a in /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test) 
Jan 21 21:04:38 frame #6: /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test() [0x426635] 
Jan 21 21:04:38 frame #7: /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test() [0x42629e] 
Jan 21 21:04:38 frame #8: /var/lib/jenkins/.local/lib/python3.6/site-packages/torch/test/backend_fallback_test() [0x41879d] 

See CircleCI build pytorch_linux_xenial_cuda10_1_cudnn7_py3_multigpu_test (4/4)

Step: "Test" (full log | pattern match details) ❄️

Jan 21 23:36:41 unknown file: Failure
Jan 21 23:36:41 Warning: The enum value `torch::nn::init::Nonlinearity::LeakyReLU` is deprecated and will be removed in 1.5. Please use `torch::kLeakyReLU` instead. (_compute_nonlinearity_type at /var/lib/jenkins/workspace/torch/csrc/api/src/nn/init.cpp:64) 
Jan 21 23:36:41 [       OK ] InitTest.CanInitializeCnnWithOrthogonal (1 ms) 
Jan 21 23:36:41 [ RUN      ] InitTest.NonlinearityLegacyEnum 
Jan 21 23:36:41 [       OK ] InitTest.NonlinearityLegacyEnum (0 ms) 
Jan 21 23:36:41 [ RUN      ] InitTest.FanModeLegacyEnum 
Jan 21 23:36:41 [       OK ] InitTest.FanModeLegacyEnum (0 ms) 
Jan 21 23:36:41 [----------] 11 tests from InitTest (22 ms total) 
Jan 21 23:36:41  
Jan 21 23:36:41 [----------] 6 tests from TorchScriptTest 
Jan 21 23:36:41 [ RUN      ] TorchScriptTest.CanCompileMultipleFunctions 
Jan 21 23:36:41 unknown file: Failure 
Jan 21 23:36:41 C++ exception with description "false INTERNAL ASSERT FAILED at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:46, please report a bug to PyTorch. Tried to call KernelFunction::callBoxed() on a KernelFunction that can only be called with KernelFunction::callUnboxed(). 
Jan 21 23:36:41 The above operation failed in interpreter. 
Jan 21 23:36:41 Traceback (most recent call last): 
Jan 21 23:36:41   File "<string>", line 8 
Jan 21 23:36:41       def test_while(a, i): 
Jan 21 23:36:41         while bool(i < 10): 
Jan 21 23:36:41           a += a 
Jan 21 23:36:41           ~~~~~~ <--- HERE 
Jan 21 23:36:41           i += 1 
Jan 21 23:36:41         return a 

🚧 1 upstream failure recognized by patterns:

These builds matched patterns, but were probably caused by upstream breakages:


This comment was automatically generated by Dr. CI. Follow this link to opt-out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker.

This comment has been revised 80 times.

@smessmer smessmer requested a review from apaszke as a code owner December 18, 2019 03:46
smessmer added a commit that referenced this pull request Dec 18, 2019
Pull Request resolved: #30370

ghstack-source-id: 95878675

Differential Revision: [D18672405](https://our.internmc.facebook.com/intern/diff/D18672405/)
smessmer added a commit that referenced this pull request Jan 7, 2020
Pull Request resolved: #30370

ghstack-source-id: 96369463

Differential Revision: [D18672405](https://our.internmc.facebook.com/intern/diff/D18672405/)
Not all ops support the templated unboxing wrappers yet. For the ones that don't,
let's use the codegen'ed unboxing wrappers from register_aten_ops.cpp, but register
them with c10 directly instead of JIT.

The `use_c10_dispatcher` setting in `native_functions.yaml` now has a new option, 'with_codegenerated_unboxing_wrapper', which means we take the codegen'ed unboxing wrapper from register_aten_ops.cpp and register it with c10. This new option is now the default; 'unboxed_only' no longer is. Ops that don't support boxed dispatch yet (i.e. ops taking TensorOptions arguments) are set to 'unboxed_only' and follow the old behavior of having register_aten_ops.cpp register the JIT op.

This temporary hack will allow us to get mobile working much faster, but the plan is
to remove this hack again once all ops support the templated unboxing wrapper.

Differential Revision: [D18672405](https://our.internmc.facebook.com/intern/diff/D18672405/)

[ghstack-poisoned]
smessmer added a commit that referenced this pull request Jan 16, 2020
Pull Request resolved: #30370


ghstack-source-id: 96798469

Differential Revision: [D18672405](https://our.internmc.facebook.com/intern/diff/D18672405/)
@smessmer smessmer closed this Jan 23, 2020
@smessmer smessmer deleted the gh/smessmer/124/head branch January 23, 2020 02:50
xxtEchjovs44 pushed a commit to xxtEchjovs44/pytorch that referenced this pull request Jan 29, 2020
Pull Request resolved: pytorch/pytorch#30370


ghstack-source-id: 96979487

Differential Revision: [D18672405](https://our.internmc.facebook.com/intern/diff/D18672405/)