Ort 1 23 ovep fixes #25744
Conversation
Not setting default precision if it is not set via provider option. (#776)
* Fix failing case where input onnx model is used with shared context enabled
* Update onnxruntime/core/providers/openvino/openvino_execution_provider.cc
Co-authored-by: MayureshV1 <47039074+MayureshV1@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
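As a reading aid, here is a minimal sketch of the "do not set a default precision when the user did not pass one" idea from the first commit. The ProviderOptions alias, the ParsePrecision helper, and the "precision" key are illustrative assumptions, not the actual OVEP code.

```cpp
// Hypothetical sketch (not the actual OVEP implementation): leave precision
// unset when the provider option is absent, instead of forcing a default.
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Assumed stand-in for a provider-options map; real type and keys may differ.
using ProviderOptions = std::map<std::string, std::string>;

std::optional<std::string> ParsePrecision(const ProviderOptions& options) {
  auto it = options.find("precision");
  if (it == options.end() || it->second.empty()) {
    // No fabricated default here; let the device/plugin decide later.
    return std::nullopt;
  }
  return it->second;
}

int main() {
  ProviderOptions with_precision{{"precision", "FP32"}};
  ProviderOptions without_precision{};

  std::cout << ParsePrecision(with_precision).value_or("<unset>") << "\n";    // FP32
  std::cout << ParsePrecision(without_precision).value_or("<unset>") << "\n"; // <unset>
  return 0;
}
```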
/azp run Test Linux CUDA x64 Release,Test Linux TensorRT x64 Release,Windows GPU Doc Gen CI Pipeline, windows_x64_debug/build_x64_debug (pull_request),web_Debug/build_onnxruntime_web, web_Debug / build_onnxruntime_web

Azure Pipelines successfully started running 1 pipeline(s).
Some test failures?

That's odd. I see OV failures but I'm not sure where they're coming from; those logs are huge and I can't find the error message. The validation team tested each commit before the PR was raised, so I'm not too concerned about the OV issues. Regarding the macOS failures, the PR only touches OVEP files, so this is likely a false negative.
/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows x64 QNN CI Pipeline

Azure Pipelines successfully started running 5 pipeline(s).
HectorSVC left a comment:

The errors are from onnxruntime_test_all:
1: [ RUN ] GemmOpTest.GemmNoTrans_f16
1: E:\_work\onnxruntime\onnxruntime\onnxruntime\test\providers\checkers.cc(437): error: The difference between f_expected[i] and f_actual[i] is 0.046875, which exceeds tolerance, where
1: f_expected[i] evaluates to 19.796875,
1: f_actual[i] evaluates to 19.75, and
1: tolerance evaluates to 0.024796877056360245.
1: i:0
1: Google Test trace:
1: E:\_work\onnxruntime\onnxruntime\onnxruntime\test\providers\checkers.cc(568): provider type: OpenVINOExecutionProvider
1: E:\_work\onnxruntime\onnxruntime\onnxruntime\test\providers\base_tester.cc(849): registered execution providers: OpenVINOExecutionProvider
1: Stack trace:
@javier-intel any updates?
More or less, here's what I have. We had run …
There is some config related to precision that he changed. Is it resulting in a different default precision being used? The test failure is an output mismatch, but the actual and expected results look close; perhaps the tolerance needs adjusting?
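For reference, the reported tolerance (~0.0248) is consistent with a bound of the form abs_tol + rel_tol * |expected| with abs_tol ≈ 0.005 and rel_tol ≈ 0.001, since 0.005 + 0.001 × 19.796875 = 0.024796875. The sketch below uses that assumed formula (the actual checkers.cc logic and thresholds may differ) to show why the observed 0.046875 difference fails and how a looser relative term would change the outcome.

```cpp
// Hypothetical sketch of an absolute-plus-relative tolerance check; the exact
// formula and thresholds used by the onnxruntime test checkers may differ.
#include <cmath>
#include <iostream>

bool WithinTolerance(float expected, float actual, float abs_tol, float rel_tol) {
  const float tolerance = abs_tol + rel_tol * std::fabs(expected);
  return std::fabs(expected - actual) <= tolerance;
}

int main() {
  const float expected = 19.796875f;  // values taken from the failing log
  const float actual = 19.75f;

  // With abs=0.005, rel=0.001 the bound is ~0.0248, so the 0.046875 diff fails.
  std::cout << std::boolalpha
            << WithinTolerance(expected, actual, 0.005f, 0.001f) << "\n";   // false

  // Loosening the relative term (e.g. rel=0.0025) would let this case pass.
  std::cout << WithinTolerance(expected, actual, 0.005f, 0.0025f) << "\n";  // true
  return 0;
}
```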
We were able to reproduce the regression. Given there are other changes for 1.23, I'm closing this PR and will raise a new one once our full-scope validation completes.
Description
This PR fixes additional load_config handling logic issues and a failure that occurs when (ep.share_ep_contexts && !ep.context_enable) is true.
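To illustrate the second part of the description, here is a minimal sketch of handling that option combination explicitly. The EpOptions fields mirror the flags named above, but ResolveContextMode, ContextMode, and the chosen behavior are assumptions for illustration, not the actual fix.

```cpp
// Hypothetical sketch (names are assumptions, not the actual OVEP code):
// handle the share_ep_contexts-without-context_enable combination explicitly
// instead of letting it fall through to a failure path.
#include <iostream>

struct EpOptions {
  bool share_ep_contexts = false;  // reuse a shared EP context across sessions
  bool context_enable = false;     // produce/dump an EP context model
};

enum class ContextMode { kNone, kProduceContext, kConsumeSharedContext };

ContextMode ResolveContextMode(const EpOptions& ep) {
  if (ep.context_enable) {
    return ContextMode::kProduceContext;
  }
  if (ep.share_ep_contexts) {
    // Previously this combination could hit a failure; treat it as
    // "consume the shared context" rather than an error.
    return ContextMode::kConsumeSharedContext;
  }
  return ContextMode::kNone;
}

int main() {
  EpOptions ep{/*share_ep_contexts=*/true, /*context_enable=*/false};
  std::cout << static_cast<int>(ResolveContextMode(ep)) << "\n";  // 2
  return 0;
}
```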