
OpenCL: use half when specified#18465

Closed
tomoaki0705 wants to merge 1 commit into opencv:3.4 from tomoaki0705:useHalfDnnOpenCL

Conversation

@tomoaki0705
Contributor

relates #18457

Pull Request Readiness Checklist

This PR doesn't fix #18457, but while debugging it I found that weights_half is never used, even though it has been converted from weight.
I confirmed that this PR doesn't break the tests: perf_dnn fails the same as before, and some tests were already failing before this PR, so nothing gets worse.

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is reference to original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@asmorkalov
Contributor

@sl-sergei Could you look at the CI logs and test results?

@asmorkalov asmorkalov requested a review from sl-sergei November 19, 2020 12:13
@sl-sergei
Contributor

@sl-sergei Could you look at the CI logs and test results?

Sure

Contributor

sl-sergei left a comment


It seems that the function convertFp16 provides incorrect weights/biases; all other networks without Conv layers work fine in tests with FP16. More investigation is required.

     if (bestKernelConfig.empty())
         return false;
-    return convolve(bottom, top, weight, (use_half_) ? bias_half : bias, numImages, bestKernelConfig);
+    return convolve(bottom, top, (use_half_) ? weights_half : weight, (use_half_) ? bias_half : bias, numImages, bestKernelConfig);
Member


Looks like OpenCL kernels are still called properly (without this patch).

There are several other places handling this in this file: 1 2 3

IMHO, it is a coding issue.
There is no reason to pass FP32 weight/bias to this implementation.
FP16 should be passed instead by caller (convolution_layer.cpp).

It should also avoid double-conversion FP16->FP32->FP16 hacks like this.
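The point about double conversion can be illustrated with plain NumPy (a sketch, not OpenCV code): FP32 -> FP16 is the only lossy step, while FP16 -> FP32 is exact, so once the weights are in FP16 there is nothing to gain from bouncing them through FP32 again.

```python
import numpy as np

w32 = np.array([0.1, 1.0 / 3.0, 6543.21], dtype=np.float32)

# The one lossy step: FP32 -> FP16 (should happen once, at load time).
w16 = w32.astype(np.float16)

# FP16 -> FP32 is exact, so FP16 -> FP32 -> FP16 reproduces the same bits:
# the extra round-trip buys nothing and only costs time and memory.
round_trip = w16.astype(np.float32).astype(np.float16)
assert np.array_equal(round_trip.view(np.uint16), w16.view(np.uint16))

# But the initial FP32 -> FP16 conversion did lose precision.
assert not np.array_equal(w16.astype(np.float32), w32)
```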

@sl-sergei
Contributor

sl-sergei commented Nov 27, 2020

@alalek Can you please provide details about the testing machine configuration for test_dnn-ippicv-opencl? I can't check it on https://pullrequest.opencv.org/ ; it seems to be broken somehow: https://pullrequest.opencv.org/buildbot/builders/precommit_opencl_linux/builds/27021/steps/test_dnn-ippicv-opencl/logs/stdio
I tried to test these changes on my local machine with an Nvidia 1080Ti, and it passes all the tests with FP16 OCL.

@asmorkalov
Contributor

@sl-sergei I restarted the CI builds; let's check again.

@asmorkalov
Contributor

Replaced by an accurate fix in #19115.

@asmorkalov asmorkalov closed this Dec 15, 2020
@JulienMaille
Contributor

Does this mean we can load a "normal" (FP32) ONNX model and run FP16 inference?
