
Feature/azaytsev/from 2021 4 #9247

Merged
andrew-zaytsev merged 70 commits into openvinotoolkit:master from andrew-zaytsev:feature/azaytsev/from-2021-4
Dec 21, 2021

Conversation

@andrew-zaytsev (Contributor)

Sphinx stuff, Python versions of docs and other updates

andrew-zaytsev and others added 30 commits October 30, 2020 16:50
* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (openvinotoolkit#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
@openvino-pushbot added labels on Dec 15, 2021: category: inference (OpenVINO Runtime library - Inference), category: Python API (OpenVINO Python bindings), category: Core (OpenVINO Core, aka ngraph)

You can activate Dynamic Batching by setting the "DYN_BATCH_ENABLED" flag to "YES" in the configuration map passed to the plugin while loading a network. This configuration creates an `ExecutableNetwork` object that allows setting the batch size dynamically in all of its infer requests via the [ie_api.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) method. The batch size set in the passed CNNNetwork object is used as the maximum batch size limit.
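A hedged sketch of how this could look end to end, assuming the pre-2022 `openvino.inference_engine` Python API (`IECore`, `read_network`, `load_network`, and `InferRequest.set_batch`); the model path, input blob name, and device are placeholders, and the import is deferred so the sketch can be read without the package installed:

```python
# Sketch only: assumes the pre-2022 openvino.inference_engine Python API.
DYN_BATCH_CONFIG = {"DYN_BATCH_ENABLED": "YES"}
MAX_BATCH = 32

def infer_with_dynamic_batch(model_xml, input_blob, data, device="CPU", batch=5):
    from openvino.inference_engine import IECore  # deferred: sketch only
    ie = IECore()
    net = ie.read_network(model=model_xml)
    net.batch_size = MAX_BATCH  # upper limit for dynamic batching
    exec_net = ie.load_network(network=net, device_name=device,
                               config=DYN_BATCH_CONFIG)
    request = exec_net.requests[0]
    request.set_batch(batch)  # process only the first `batch` elements
    request.infer({input_blob: data})  # `data` padded to MAX_BATCH on axis 0
    return request.output_blobs
```

The input array still has the maximum batch size along axis 0; `set_batch` tells the plugin how many of those elements to actually process.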

@helena-intel (Contributor), Dec 16, 2021

This example is incomplete. It shows how to load the model with dynamic batching enabled, but not how to run inference on such a model with a dynamic batch size. It is not intuitive how this works, so an example that shows how to propagate a batch of 5 elements through a network with dynamic batching and a maximum batch size of 32 would be useful. @jiwaszki can you help with this? [edit] @nosovmik @AlexeyLebedev1 @akuporos

```python
all_metrics = ie.get_metric(device_name=device, metric_name="SUPPORTED_METRICS")
# Find the 'IMPORT_EXPORT_SUPPORT' metric in supported metrics
allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics
```

Contributor:

Suggested change:

Before: allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics
After: allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics and ie.get_metric(device_name=device, metric_name="IMPORT_EXPORT_SUPPORT")

Contributor:

It is not clear to me why that addition is required (so maybe also not for some users). Could you add a comment that explains that?

Contributor:

The existence of this metric in the 'SUPPORTED_METRICS' list does not necessarily guarantee that the device supports caching; the plugin also needs to return 'True' for this metric. Even though existing plugins that declare 'IMPORT_EXPORT_SUPPORT' in their supported metrics always return 'true', a plugin is allowed to return 'False' if it decides so.
Also, if you check the C++ snippet, there are two checks for this:

// Find the 'IMPORT_EXPORT_SUPPORT' metric in supported metrics
auto it = std::find(keys.begin(), keys.end(), METRIC_KEY(IMPORT_EXPORT_SUPPORT));

// If the 'IMPORT_EXPORT_SUPPORT' metric exists, check its value
auto cachingSupported = (it != keys.end()) && ie.GetMetric(deviceName, METRIC_KEY(IMPORT_EXPORT_SUPPORT)).as<bool>();

It may be confusing for the user that the C++ code needs a value check but the Python code does not.
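The two-step check described above can be mirrored in Python. The sketch below is a standalone illustration: it takes a `get_metric` callable (a stand-in for `IECore.get_metric`, whose keyword arguments it mimics) so the control flow can run without OpenVINO installed; the stub metric values are invented for illustration:

```python
def supports_caching(get_metric, device):
    """Two-step check: the metric must be declared AND evaluate to True."""
    metrics = get_metric(device_name=device, metric_name="SUPPORTED_METRICS")
    if "IMPORT_EXPORT_SUPPORT" not in metrics:
        return False
    return bool(get_metric(device_name=device, metric_name="IMPORT_EXPORT_SUPPORT"))

# Stand-in for IECore.get_metric: declares the metric but returns False for it,
# the corner case discussed above. Values are illustrative only.
def fake_get_metric(device_name, metric_name):
    table = {
        "SUPPORTED_METRICS": ["IMPORT_EXPORT_SUPPORT", "SUPPORTED_CONFIG_KEYS"],
        "IMPORT_EXPORT_SUPPORT": False,
    }
    return table[metric_name]

print(supports_caching(fake_get_metric, "CPU"))  # False: declared, but value is False
```

With a real `IECore`, `supports_caching(ie.get_metric, device)` would apply the same two checks as the C++ snippet.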

Contributor:

Thanks @nosovmik. I agree that similarity between C++ and Python is good, but to me it is now confusing that "SUPPORTED_METRICS" does not necessarily mean that the metric is supported. What would it mean if a plugin declares IMPORT_EXPORT_SUPPORT but still returns False from get_metric? Why would it then declare IMPORT_EXPORT_SUPPORT at all?

Contributor (author):

@nosovmik could you please add your comments?

Contributor:

Hi @helena-intel,
Sorry for the delayed reply. I agree that the extra check looks redundant, but there is a kind of API limitation: if a plugin declares some metric, that metric must return some value. So we agreed on a boolean metric that returns 'true'; this also gives a plugin the possibility to return 'false', and that case must be handled correctly.
My suggestion is to create an issue to simplify the API, so that we can revisit the current implementation and find the best solution. On the one hand, the extra check looks redundant for the end user. On the other hand, this is an advanced API, so it should not be a problem for advanced users to add the extra check if their application needs this functionality.


> **NOTE**: The OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO. OpenCL support is provided by ComputeAorta*, and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
Contributor:

Is that OpenCL compiler redistributed with all OpenVINO distributions? If not, it would be good to be more specific, and possibly add instructions on how to get it.

@andrew-zaytsev andrew-zaytsev enabled auto-merge (squash) December 21, 2021 16:10
@andrew-zaytsev andrew-zaytsev merged commit 4ae6258 into openvinotoolkit:master Dec 21, 2021
"@OpenVINO_SOURCE_DIR@/src/core/shape_inference/include" \
"@OpenVINO_SOURCE_DIR@/src/frontends/common/include" \
"@OpenVINO_SOURCE_DIR@/src/inference/dev_api" \
"@OpenVINO_SOURCE_DIR@/src/inference/include"
Contributor:

From these header files we need only the public API ones:

"@OpenVINO_SOURCE_DIR@/src/frontends/common/include"
"@OpenVINO_SOURCE_DIR@/src/core/include"
"@OpenVINO_SOURCE_DIR@/src/inference/include"

ie_complete_call_back \
IEStatusCode \
input_shape \
struct_desc
Contributor:

Why do we need all these EXCLUDE_SYMBOLS?


Labels

category: CI (OpenVINO public CI), category: Core (OpenVINO Core, aka ngraph), category: docs (OpenVINO documentation), category: inference (OpenVINO Runtime library - Inference), category: Python API (OpenVINO Python bindings)

Projects

None yet

Development


8 participants