@snnn (Contributor) commented on Jan 31, 2020:

Description:

  1. Add support for vstest.
  2. Add support for vcpkg. To use it:

     ```
     vcpkg install zlib:x64-windows benchmark:x64-windows gtest:x64-windows protobuf:x64-windows pybind11:x64-windows re2:x64-windows
     mkdir build
     cd build
     cmake ..\cmake -DCMAKE_BUILD_TYPE=Debug -A x64 -T host=x64 -DCMAKE_TOOLCHAIN_FILE=C:\vcpkg\scripts\buildsystems\vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows -Donnxruntime_PREFER_SYSTEM_LIB=ON
     ```

  3. New CMake option onnxruntime_PREFER_SYSTEM_LIB, which lets users build against preinstalled libraries instead of the copies bundled as onnxruntime submodules.
  4. New CMake option onnxruntime_ENABLE_MEMLEAK_CHECKER, which lets users turn the memory leak checker (by @RyanUnderhill) on or off in Windows debug builds. The checker does not work with vstest.
  5. Fix the post-merge pipeline (mainly the test coverage report).
  6. Ignore the compile warnings from the Featurizer library code.
  7. Apply the "/utf-8" VC++ compiler flag to our code. Without it, onnxruntime cannot be built on Chinese Windows.
  8. Remove the SingleUnitTestProject CMake option, which has been deprecated for more than a year and is no longer used by anyone.
  9. Move the opaque API tests into onnxruntime_test_all.
  10. Enable "/W4" on the CUDA EP's C++ code (not the *.cu files), fix some warnings, and add some extra checks.
  11. Delete the onnxruntime::test::TestEnvironment class.
  12. Add a DllMain to onnxruntime.dll.
  13. Allow dynamic linking to libprotobuf.
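An option like onnxruntime_PREFER_SYSTEM_LIB is typically wired up by trying `find_package()` first and falling back to the vendored submodule when no preinstalled copy is found. A minimal sketch of that pattern (illustrative only — the target and directory names here are assumptions, not the actual onnxruntime CMake code):

```cmake
option(onnxruntime_PREFER_SYSTEM_LIB
       "Prefer preinstalled libraries (e.g. from vcpkg) over bundled submodules" OFF)

if(onnxruntime_PREFER_SYSTEM_LIB)
  # With -DCMAKE_TOOLCHAIN_FILE=...\vcpkg.cmake this resolves to the
  # vcpkg-installed package for the active triplet.
  find_package(re2 CONFIG)
endif()

if(NOT TARGET re2::re2)
  # Fall back to the copy vendored as a git submodule.
  add_subdirectory(external/re2 EXCLUDE_FROM_ALL)
endif()
```

With this shape, the same `target_link_libraries(onnxruntime PRIVATE re2::re2)` call works regardless of which copy was selected.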

Motivation and Context

@snnn snnn requested a review from a team as a code owner January 31, 2020 19:55
Changming Sun added 3 commits February 1, 2020 18:51
@snnn snnn requested a review from a team February 3, 2020 18:24
@pranavsharma (Contributor):
Do we need to setup some pipeline or continuous test to ensure vcpkg support always works?

@pranavsharma (Contributor):
Also, these are a lot of changes. Would've been nice to split them.

@snnn snnn merged commit 7ff5c0e into master Feb 4, 2020
@snnn snnn deleted the snnn/testconv branch February 4, 2020 03:33
@snnn (Contributor, Author) commented on Feb 4, 2020:

> Do we need to setup some pipeline or continuous test to ensure vcpkg support always works?

Good suggestion! I will do it.

yan12125 added a commit to archlinuxcn/repo that referenced this pull request Mar 11, 2020
* replace bundled date with chrono-date in repo [1]
* Use new cmake option from upstream [2] for debundling
* refreshes and merges all patches
* Use older GCC only for the CUDA build
* Update comments about obstacles to debundled onnx

[1] https://lists.archlinux.org/pipermail/arch-dev-public/2020-February/029885.html
[2] microsoft/onnxruntime#2961
weixingzhang pushed a commit that referenced this pull request Mar 23, 2020
Merge up to commit 4f4f4bc

There were several very large pull requests in public master:
#2956
#2958
#2961

**BERT-Large, FP16, seq=128:**
Batch = 66
Throughput = 189.049 ex/sec

**BERT-Large, FP16, seq=512:**
Batch = 10
Throughput = 36.6335 ex/sec

**BERT-Large, FP32, seq=128:**
Batch = 33
Throughput = 42.2642 ex/sec

**BERT-Large, FP32, seq=512:**
Batch = 5
Throughput = 9.32792 ex/sec

**BERT-Large LAMB convergence:**
![image.png](https://aiinfra.visualstudio.com/530acbc4-21bc-487d-8cd8-348ff451d2ff/_apis/git/repositories/adc1028e-6f04-44b7-a3cf-cb157be4fb65/pullRequests/5567/attachments/image.png)
`$ python watch_experiment.py --subscription='4aaa645c-5ae2-4ae9-a17a-84b9023bc56a' --resource_group='onnxtraining' --workspace='onnxtraining' --remote_dir='logs/tensorboard/' --local_dir='D:/tensorboard/bert-large/fp16/lamb/seq128/lr3e-3/wr0.2843/master/' --run='BERT-ONNX_1581120364_71872cef'`

**E2E**:  PASSED
https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=117300&view=results