
Added OpenVINO ARM target #19632

Merged
alalek merged 6 commits into opencv:3.4 from l-bat:lb/ie_arm_target
Mar 20, 2021

Conversation

@l-bat (Contributor) commented Feb 26, 2021

force_builders=Custom,Custom Win,Custom Mac
build_image:Custom=ubuntu-openvino-2021.2.0:20.04
build_image:Custom Win=openvino-2021.1.0
build_image:Custom Mac=openvino-2021.2.0

test_modules:Custom=dnn,python2,python3,java
test_modules:Custom Win=dnn,python2,python3,java
test_modules:Custom Mac=dnn,python2,python3,java

buildworker:Custom=linux-1
# disabled due to high memory usage: test_opencl:Custom=ON
test_opencl:Custom=OFF
test_bigdata:Custom=1
test_filter:Custom=*

@l-bat l-bat added this to the 3.4.14 milestone Feb 26, 2021
     DNN_TARGET_MYRIAD,
-    DNN_TARGET_FPGA //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
+    DNN_TARGET_FPGA, //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
+    DNN_TARGET_ARM,
Member:

I believe DNN_TARGET_CPU should be reused instead.

Can we collect some extra information through InferenceEngine Query API to detect ARM CPU plugin?

Contributor Author:

For the ARM plugin, we need to skip some tests and layers. If we are going to use DNN_TARGET_CPU, we need to add extra information (conditions) to do this.

Member:

We have such conditions for NCS / NCS2: getInferenceEngineVPUType()

Contributor Author:

Added getInferenceEngineCPUType()

@l-bat l-bat force-pushed the lb/ie_arm_target branch 3 times, most recently from b544505 to a71bda4 on March 9, 2021 12:56
@l-bat l-bat marked this pull request as ready for review March 9, 2021 12:57
@l-bat l-bat force-pushed the lb/ie_arm_target branch 5 times, most recently from d392728 to 3ab0685 on March 10, 2021 09:10
@l-bat l-bat force-pushed the lb/ie_arm_target branch 4 times, most recently from f25a3bb to ff77c65 on March 16, 2021 08:47
@l-bat l-bat requested a review from alalek March 17, 2021 13:24
AutoLock lock(getInitializationMutex());
InferenceEngine::Core& ie = getCore("CPU");
const std::vector<std::string> devices = ie.GetAvailableDevices();
return std::find(devices.begin(), devices.end(), std::string("CPU")) != devices.end();
Member:

@ilya-lavrenov Is there some robust approach to detect plugin type through InferenceEngine QueryAPI?

AFAIK, for example, GNA has software and hardware-accelerated implementations.


Contributor:

FULL_DEVICE_NAME for the ARM plugin contains "arm_compute::NEON". Maybe we can rely on it.
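A minimal stand-in for the check discussed here. In the real code the device name would come from the InferenceEngine Query API (e.g. via the FULL_DEVICE_NAME metric); the helper name below is hypothetical and only sketches the string test:

```cpp
#include <string>

// Hypothetical helper: decide whether a FULL_DEVICE_NAME string reported by
// the CPU plugin indicates the ARM Compute based plugin, matching on the
// "arm_compute::NEON" substring mentioned above.
static bool looksLikeArmComputePlugin(const std::string& fullDeviceName)
{
    return fullDeviceName.find("arm_compute") != std::string::npos;
}
```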


bool isArmPlugin()
{
static bool armPlugin = getInferenceEngineCPUType() == "ARM";
Member:

Let's make this check robust.
We do not really need to check for the "ARM" CPU type here (we know that at compile time).
We should check for the "ARM COMPUTE" plugin instead.

isArmPlugin => isArmComputePlugin
"ARM" => "ARM_COMPUTE"

backend == DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019) && target == DNN_TARGET_MYRIAD)
applyTestTag(CV_TEST_TAG_DNN_SKIP_IE_MYRIAD, CV_TEST_TAG_DNN_SKIP_IE_NN_BUILDER, CV_TEST_TAG_DNN_SKIP_IE_NGRAPH);

if (backend == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH && target == DNN_TARGET_CPU && getInferenceEngineCPUType() == "ARM")
Member:

== "ARM"

Please use macro defines instead of raw strings:

  • it is very easy to make a typo in a string, and such typos are hard to find
  • compilers raise errors for typos in macro definition names
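A small sketch of the macro-based comparison; the macro name follows the one proposed later in this thread and is an assumption here:

```cpp
#include <string>

// Define the expected CPU type string once. A typo in the macro name fails
// at compile time, whereas a mistyped raw string like "ARM_COMPUT" would
// silently compare false at run time.
#define CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE "ARM_COMPUTE"

// Hypothetical check using the macro instead of a raw string literal.
static bool isArmComputeType(const std::string& cpuType)
{
    return cpuType == CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE;
}
```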

Contributor Author:

Is it enough to add the macro CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE, or should we also add a macro for the CPU plugin to use in getInferenceEngineCPUType()? Do we need a public getInferenceEngineCPUType(), or can we create a public isArmComputePlugin()?

Member:

Similar to the VPU part.

isArmComputePlugin()

Let's keep it private. getInferenceEngineCPUType() is enough.

     auto& ieInpNode = nodes[0].dynamicCast<InfEngineNgraphNode>()->node;
     std::vector<size_t> dims = ieInpNode->get_shape();
-    CV_Assert(dims.size() == 4 || dims.size() == 5);
+    CV_Assert(dims.size() >= 3 && dims.size() <= 5);
Member:

Let's emit a clear error diagnostic message:

CV_Check(dims.size(), dims.size() >= 3 && dims.size() <= 5, "");
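The point of CV_Check is that the failing value appears in the diagnostic. A self-contained stand-in illustrating that benefit, without depending on OpenCV (the helper name is illustrative):

```cpp
#include <cstddef>
#include <sstream>
#include <stdexcept>

// Stand-in for a value-reporting range check in the spirit of CV_Check:
// the failing value is embedded in the error message, unlike a bare assert.
static void checkDimsRange(std::size_t n)
{
    if (!(n >= 3 && n <= 5))
    {
        std::ostringstream oss;
        oss << "dims.size()=" << n << " must be in the range [3, 5]";
        throw std::runtime_error(oss.str());
    }
}
```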

Comment on lines +341 to +346
#if INF_ENGINE_VER_MAJOR_GT(INF_ENGINE_RELEASE_2021_2)
auto mul = std::make_shared<ngraph::op::v1::Multiply>(norm, weight, ngraph::op::AutoBroadcastType::NUMPY);
#else
auto mul = std::make_shared<ngraph::op::v0::Multiply>(norm, weight, ngraph::op::AutoBroadcastType::NUMPY);
return Ptr<BackendNode>(new InfEngineNgraphNode(mul));
#endif
return Ptr<BackendNode>(new InfEngineNgraphNode(mul));
Member:

Please fix indentation

@alalek (Member) left a comment:

Well done! Thank you 👍

@alalek alalek merged commit c0dd82f into opencv:3.4 Mar 20, 2021
@alalek alalek mentioned this pull request Mar 22, 2021
@alalek alalek mentioned this pull request Apr 9, 2021


3 participants