Added a helper function that will list all available Intel devices #16184
JulienMaille wants to merge 2 commits into opencv:master from
Conversation
Thanks! Can this method test the devices? I mean, what will it list in the following scenarios:
Can this or a similar method distinguish Myriad 2 from Myriad X? Can you also experiment with replacing checkIETarget in dnn.cpp using this new function? Thanks!
Well, I did not know I could call cv::dnn::getAvailableBackends() to do this.
@Nefast, may I ask whether getAvailableTargets(DNN_BACKEND_INFERENCE_ENGINE) matches your expectations? This way we could add a wrapper to it instead. If yes, you can modify this PR in the following way (it will also require reopening the PR with the 3.4 branch as a target; we will port the changes to master after that):

```diff
diff --git a/modules/dnn/include/opencv2/dnn/dnn.hpp b/modules/dnn/include/opencv2/dnn/dnn.hpp
index 94e2ada..c5d3975 100644
--- a/modules/dnn/include/opencv2/dnn/dnn.hpp
+++ b/modules/dnn/include/opencv2/dnn/dnn.hpp
@@ -94,7 +94,7 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
      */
     enum Target
     {
-        DNN_TARGET_CPU,
+        DNN_TARGET_CPU = 0,
         DNN_TARGET_OPENCL,
         DNN_TARGET_OPENCL_FP16,
         DNN_TARGET_MYRIAD,
@@ -102,7 +102,7 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
     };

     CV_EXPORTS std::vector< std::pair<Backend, Target> > getAvailableBackends();
-    CV_EXPORTS std::vector<Target> getAvailableTargets(Backend be);
+    CV_EXPORTS_W std::vector<Target> getAvailableTargets(Backend be);

     /** @brief This class provides all data needed to initialize layer.
      *
diff --git a/modules/dnn/misc/python/pyopencv_dnn.hpp b/modules/dnn/misc/python/pyopencv_dnn.hpp
index 34aeacb..69c1424 100644
--- a/modules/dnn/misc/python/pyopencv_dnn.hpp
+++ b/modules/dnn/misc/python/pyopencv_dnn.hpp
@@ -71,6 +71,12 @@ PyObject* pyopencv_from(const dnn::LayerParams& lp)
     return dict;
 }

+template<>
+PyObject* pyopencv_from(const std::vector<dnn::Target> &t)
+{
+    return pyopencv_from(std::vector<int>(t.begin(), t.end()));
+}
+
 class pycvLayer CV_FINAL : public dnn::Layer
 {
 public:
```
|
@dkurt Do you mean I should replace calls to
@Nefast, you can still help us by adding a wrapper for
Sorry to be slow. You want me to reopen a PR on 3.4 with nothing more than the diff you posted just above?
@Nefast, yes, if you don't mind.
OK, I am closing this PR as a duplicate of #16232.
compatible with the OpenVINO inference engine (CPU, GPUs, Movidius, etc.)
This pull request changes:
new cv::dnn::listInferenceEngineDevices(), a wrapper around InferenceEngine::Core::GetAvailableDevices()