Conversation
```diff
  DNN_TARGET_MYRIAD,
- DNN_TARGET_FPGA //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
+ DNN_TARGET_FPGA, //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
+ DNN_TARGET_ARM,
```
Comment: I believe DNN_TARGET_CPU should be reused instead. Can we collect some extra information through the InferenceEngine Query API to detect the ARM CPU plugin?

Comment: For the ARM plugin, we need to skip some tests and layers. If we are going to use DNN_TARGET_CPU, we need to add extra conditions to do this.

Comment: We have such conditions for NCS / NCS2: getInferenceEngineVPUType().

Comment: Added getInferenceEngineCPUType().
modules/dnn/src/op_inf_engine.cpp (outdated)

```cpp
AutoLock lock(getInitializationMutex());
InferenceEngine::Core& ie = getCore("CPU");
const std::vector<std::string> devices = ie.GetAvailableDevices();
return std::find(devices.begin(), devices.end(), std::string("CPU")) != devices.end();
```
Comment: @ilya-lavrenov Is there a robust approach to detect the plugin type through the InferenceEngine Query API? AFAIK, GNA, for example, has both software and hardware-accelerated implementations.

Comment: @l-bat Please use FULL_DEVICE_NAME and check for "arm_compute::NEON".

Comment: FULL_DEVICE_NAME for the ARM plugin contains "arm_compute::NEON". Maybe we can rely on it.
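The check described above could look roughly like the sketch below. The full device name would be obtained via the Query API (e.g. `ie.GetMetric("CPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>()`); the helper here only performs the substring match and is a stand-in for illustration, not the actual patch code.

```cpp
#include <string>

// Hedged sketch: the ARM CPU plugin reports a FULL_DEVICE_NAME containing
// "arm_compute::NEON"; checking for that substring identifies the plugin.
// The real code would obtain fullDeviceName from InferenceEngine::Core.
static bool isArmComputeDeviceName(const std::string& fullDeviceName)
{
    return fullDeviceName.find("arm_compute::NEON") != std::string::npos;
}
```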
modules/dnn/src/op_inf_engine.cpp (outdated)

```cpp
bool isArmPlugin()
{
    static bool armPlugin = getInferenceEngineCPUType() == "ARM";
```
Comment: Let's make this check robust. We do not really need to check for the "ARM" CPU type here (we know that at compile time); we should check for the "ARM COMPUTE" plugin instead:
- isArmPlugin => isArmComputePlugin
- "ARM" => "ARM_COMPUTE"
```cpp
        backend == DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019) && target == DNN_TARGET_MYRIAD)
    applyTestTag(CV_TEST_TAG_DNN_SKIP_IE_MYRIAD, CV_TEST_TAG_DNN_SKIP_IE_NN_BUILDER, CV_TEST_TAG_DNN_SKIP_IE_NGRAPH);

if (backend == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH && target == DNN_TARGET_CPU && getInferenceEngineCPUType() == "ARM")
```
Comment (on `== "ARM"`): Please use macro defines instead of raw strings:
- it is very easy to make a typo in a string, and such typos are hard to find;
- compilers raise an error for typos in macro definition names.

Comment: Is it enough to add the macro CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE, or should we also add a macro for the CPU plugin to use in getInferenceEngineCPUType()? Do we need a public getInferenceEngineCPUType(), or can we create a public isArmComputePlugin()?

Comment: Similar to the VPU part: isArmComputePlugin(). Let's keep it private; getInferenceEngineCPUType() is enough.
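Putting the suggestions together, the renamed helper might look like the sketch below. The macro name and the stubbed query function are stand-ins taken from this discussion, not the merged code; the real getInferenceEngineCPUType() would query the Inference Engine.

```cpp
#include <string>

// Raw string promoted to a macro, as the review suggests: a typo in the
// macro name fails at compile time, while a typo in a string does not.
#define CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE "ARM_COMPUTE"

// Stand-in for the private query helper discussed above; the real
// implementation would detect the plugin through the InferenceEngine API.
static std::string getInferenceEngineCPUType()
{
    return CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE;  // placeholder value
}

// Cached plugin check, renamed from isArmPlugin() per the review.
static bool isArmComputePlugin()
{
    static const bool armPlugin =
        getInferenceEngineCPUType() == CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE;
    return armPlugin;
}
```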
```diff
  auto& ieInpNode = nodes[0].dynamicCast<InfEngineNgraphNode>()->node;
  std::vector<size_t> dims = ieInpNode->get_shape();
- CV_Assert(dims.size() == 4 || dims.size() == 5);
+ CV_Assert(dims.size() >= 3 && dims.size() <= 5);
```
Comment: Let's emit a clear diagnostic message on error:
CV_Check(dims.size(), dims.size() >= 3 && dims.size() <= 5, "");
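The point of CV_Check over CV_Assert is that the failure message reports the actual value that was checked. A minimal self-contained stand-in (not OpenCV's real implementation) illustrating the behavior:

```cpp
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

// Minimal stand-in for OpenCV's CV_Check: on failure, the thrown message
// includes the failed expression, the checked value, and an optional note.
#define MY_CHECK(v, expr, msg)                                                  \
    do {                                                                        \
        if (!(expr)) {                                                          \
            std::ostringstream oss;                                             \
            oss << "Check failed: " #expr " (" #v " = " << (v) << ") " << msg;  \
            throw std::runtime_error(oss.str());                                \
        }                                                                       \
    } while (0)

std::string validateDims(const std::vector<size_t>& dims)
{
    // Mirrors the review suggestion: accept 3D..5D inputs and report the
    // offending rank in the diagnostic instead of a bare assertion failure.
    MY_CHECK(dims.size(), dims.size() >= 3 && dims.size() <= 5, "expected 3..5 dims");
    return "ok";
}
```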
```diff
  #if INF_ENGINE_VER_MAJOR_GT(INF_ENGINE_RELEASE_2021_2)
      auto mul = std::make_shared<ngraph::op::v1::Multiply>(norm, weight, ngraph::op::AutoBroadcastType::NUMPY);
  #else
      auto mul = std::make_shared<ngraph::op::v0::Multiply>(norm, weight, ngraph::op::AutoBroadcastType::NUMPY);
-     return Ptr<BackendNode>(new InfEngineNgraphNode(mul));
  #endif
+     return Ptr<BackendNode>(new InfEngineNgraphNode(mul));
```
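The fix in this hunk moves the return out of the #else branch so that both preprocessor branches share a single return after #endif; with the return inside #else, the newer-IE build path would fall off the end of the function. A self-contained sketch of the pattern with stand-in types and a dummy version macro (the real code uses INF_ENGINE_VER_MAJOR_GT and ngraph Multiply ops):

```cpp
#include <memory>
#include <string>

// Dummy version constant for illustration only.
#define IE_VERSION 2021030

struct Node
{
    std::string opset;
    explicit Node(std::string s) : opset(std::move(s)) {}
};

// Both branches only construct the node; the single return after #endif
// guarantees that every build configuration returns a value.
std::shared_ptr<Node> makeMultiplyNode()
{
#if IE_VERSION > 2021020
    auto mul = std::make_shared<Node>("v1::Multiply");
#else
    auto mul = std::make_shared<Node>("v0::Multiply");
#endif
    return mul;
}
```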