
dnn(perf): fix and merge Convolution tests#12142

Merged
opencv-pushbot merged 2 commits into opencv:3.4 from alalek:dnn_ocl_fix_convolution_perf_tests
Aug 31, 2018
Conversation

@alalek alalek commented Aug 3, 2018

  • OpenCL tests didn't run any OpenCL kernels
  • use real configurations from existing models
  • changed dump format for DNN Backend/Target in tests

The configuration list was prepared using this patch: alalek@dnn_dump_conv_kernels

update: 4.5.3 - https://github.com/alalek/opencv/commit/dnn_dump_conv_kernels_4.5.3

resolves #10238

allow_multiple_commits=1
docker_image:Custom=ubuntu-openvino:16.04
buildworker:Custom=linux-2

static const tuple<Backend, Target> testBackendsAndTargets[] = {
    tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_CPU),
    tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL),
    tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL_FP16)
};
Member

I think we can skip the OpenCL targets when OpenCL is not available, to reduce duplication:

static testing::internal::ParamGenerator<tuple<Backend, Target> > testBackendsAndTargets()
{
    static std::vector<tuple<Backend, Target> > targets;
    if (targets.empty())
    {
        targets.push_back(tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_CPU));
#ifdef HAVE_OPENCL
        if (cv::ocl::useOpenCL())
        {
            targets.push_back(tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL));
            targets.push_back(tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL_FP16));
        }
#endif
    }
    return testing::ValuesIn(targets);
}

INSTANTIATE_TEST_CASE_P(/**/, Conv, Combine(
    ConvParamID::all(),
    testBackendsAndTargets()
));

Member Author

Done.

return testing::ValuesIn(targets);
}

static testing::internal::ParamGenerator<tuple<Backend, Target> > dnnBackendsAndTargets()
Member Author

Moved into test_common.hpp (the file is shared between accuracy and perf tests).

Member

@alalek, Could you please also replace DNNBackend and DNNTarget at https://github.com/opencv/opencv/blob/3.4/modules/dnn/perf/perf_net.cpp?

Member Author

Done, these changes are in a separate commit.


using namespace cv::dnn;

static testing::internal::ParamGenerator<tuple<Backend, Target> > dnnBackendsAndTargets(bool withInferenceEngine = true, bool withHalide = true)
Member

Maybe make withHalide false by default? There are only two places where it's tested:

static testing::internal::ParamGenerator<tuple<Backend, Target> > dnnBackendsAndTargetsWithHalide()
{
    static const tuple<Backend, Target> testCases[] = {
#ifdef HAVE_HALIDE
        tuple<Backend, Target>(DNN_BACKEND_HALIDE, DNN_TARGET_CPU),
        tuple<Backend, Target>(DNN_BACKEND_HALIDE, DNN_TARGET_OPENCL),
#endif
#ifdef HAVE_INF_ENGINE
        tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_CPU),
        tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_OPENCL),
        tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_OPENCL_FP16),
        tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_MYRIAD),
#endif
        tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL),
        tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL_FP16)
    };
    return testing::ValuesIn(testCases);
}

const tuple<Backend, Target> testCases[] = {
#ifdef HAVE_HALIDE
    tuple<Backend, Target>(DNN_BACKEND_HALIDE, DNN_TARGET_CPU),
    tuple<Backend, Target>(DNN_BACKEND_HALIDE, DNN_TARGET_OPENCL),
#endif
#ifdef HAVE_INF_ENGINE
    tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_CPU),
    tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_OPENCL),
    tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_OPENCL_FP16),
    tuple<Backend, Target>(DNN_BACKEND_INFERENCE_ENGINE, DNN_TARGET_MYRIAD),
#endif
    tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL),
    tuple<Backend, Target>(DNN_BACKEND_OPENCV, DNN_TARGET_OPENCL_FP16)
};

Neither list includes DNN_BACKEND_OPENCV, DNN_TARGET_CPU because it is used as the reference for outputs, so I think we can apply this method with a signature like:

dnnBackendsAndTargets(bool withInferenceEngine = true, bool withHalide = false, bool withCpuOCV = true)

Member Author

Done.

alalek added 2 commits August 31, 2018 15:02
- OpenCL tests didn't run any OpenCL kernels
- use real configurations from existing models (the first 100 cases)
- batch size = 1
Member

@dkurt dkurt left a comment

👍
