Feature/azaytsev/from-2021-4 #9247
Conversation
* Changes according to feedback comments
* Replaced @ref's with HTML links
* Fixed links, added a title page for installing from repos and images, fixed formatting issues
* Added links
* Minor fix
* Added DL Streamer to the list of components installed by default
* Link fixes
* Link fixes
* ovms doc fix (openvinotoolkit#2988)
* Added OpenVINO Model Server
* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
This reverts commit 706dac5.
You can activate Dynamic Batching by setting the "DYN_BATCH_ENABLED" flag to "YES" in the configuration map passed to the plugin while loading a network. This configuration creates an `ExecutableNetwork` object that allows setting the batch size dynamically in all of its infer requests using the [ie_api.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) method. The batch size set in the passed CNNNetwork object is used as the maximum batch size limit.
This example is incomplete: it shows how to load the model with dynamic batching enabled, but not how to run inference on such a model with a dynamic batch size. Since this is not intuitive, an example that shows how to propagate a batch of 5 elements through a network with dynamic batching and a maximum batch size of 32 would be useful. @jiwaszki can you help with this? [edit] @nosovmik @AlexeyLebedev1 @akuporos
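A sketch of what the missing inference part might look like, assuming the 2021.x `openvino.inference_engine` Python API; the model paths, device name, and input shape below are placeholders, not part of the original snippet:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder model paths
net = ie.read_network(model="model.xml", weights="model.bin")
net.batch_size = 32  # maximum batch size limit

# DYN_BATCH_ENABLED makes the batch size adjustable per infer request
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"DYN_BATCH_ENABLED": "YES"})

input_name = next(iter(net.input_info))
# Buffer sized for the maximum batch; only the first 5 slots hold real data
data = np.zeros((32, 3, 224, 224), dtype=np.float32)

request = exec_net.requests[0]
request.set_batch(5)               # process only the first 5 elements
request.infer({input_name: data})
```

The key call is `set_batch` on the infer request: the input buffer stays at the maximum size, but only the first `set_batch(n)` elements are propagated through the network.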
```python
all_metrics = ie.get_metric(device_name=device, metric_name="SUPPORTED_METRICS")
# Find the 'IMPORT_EXPORT_SUPPORT' metric in supported metrics
allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics
```
```diff
- allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics
+ allows_caching = "IMPORT_EXPORT_SUPPORT" in all_metrics and ie.get_metric(device_name=device, metric_name="IMPORT_EXPORT_SUPPORT")
```
It is not clear to me why that addition is required (and it may not be clear to some users either). Could you add a comment that explains it?
The existence of this metric in the 'SUPPORTED_METRICS' list doesn't necessarily guarantee that the device supports caching; the plugin also needs to return 'True' for this metric.
Even though existing plugins that declare 'IMPORT_EXPORT_SUPPORT' in supported metrics always return 'true', a plugin is allowed to return 'False' if it decides so.
Also, if you check the C++ snippet, there are two checks for this:

```cpp
// Find 'IMPORT_EXPORT_SUPPORT' metric in supported metrics
auto it = std::find(keys.begin(), keys.end(), METRIC_KEY(IMPORT_EXPORT_SUPPORT));
// If the 'IMPORT_EXPORT_SUPPORT' metric exists, check its value
auto cachingSupported = (it != keys.end()) && ie.GetMetric(deviceName, METRIC_KEY(IMPORT_EXPORT_SUPPORT)).as<bool>();
```
It may be confusing for the user that the C++ code needs a value check but the Python code does not.
Thanks @nosovmik. I agree that similarity between C++ and Python is good, but to me it is now confusing that "SUPPORTED_METRICS" does not necessarily mean that the metric is supported. What would it mean if a plugin declares IMPORT_EXPORT_SUPPORT but still returns False from get_metric? Why would it then declare IMPORT_EXPORT_SUPPORT at all?
@nosovmik could you please add your comments?
Hi @helena-intel
Sorry for the delay in replying. I agree that the extra check looks redundant; however, there is a kind of API limitation: if a plugin declares some metric, that metric shall return some value. So we agreed to have a boolean metric that returns 'true', and in this case the plugin now has the possibility to return 'false', which shall be treated correctly.
My suggestion is that you create an issue to simplify the API so that we can revisit the current implementation and find the best solution. On the one hand, the extra check looks redundant for the end user. On the other hand, this is an advanced API, so it should not be a problem for advanced users to add the extra check if their application needs this functionality.
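Putting the thread together, the Python check can mirror the two C++ steps. Below is a sketch; the helper name `device_supports_caching` is hypothetical, and `ie` is assumed to be an `IECore`-like object exposing `get_metric(device_name=..., metric_name=...)`:

```python
def device_supports_caching(ie, device):
    """Two-step check mirroring the C++ snippet: 'IMPORT_EXPORT_SUPPORT'
    must be listed in SUPPORTED_METRICS, AND the plugin must actually
    return True when the metric itself is queried."""
    all_metrics = ie.get_metric(device_name=device, metric_name="SUPPORTED_METRICS")
    if "IMPORT_EXPORT_SUPPORT" not in all_metrics:
        return False
    # A plugin may declare the metric yet still return False for it
    return bool(ie.get_metric(device_name=device, metric_name="IMPORT_EXPORT_SUPPORT"))
```

With a real `IECore` instance this would be called as `device_supports_caching(IECore(), "CPU")`; the second query is what guards against a plugin that declares the metric but returns False.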
> **NOTE**: The OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO. OpenCL support is provided by ComputeAorta*, and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
Is that OpenCL compiler redistributed with all OpenVINO distributions? Otherwise it would be good to be more specific, and possibly add instructions on how to get it.
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
…rew-zaytsev/openvino into feature/azaytsev/from-2021-4
…nto feature/azaytsev/from-2021-4
```
"@OpenVINO_SOURCE_DIR@/src/core/shape_inference/include" \
"@OpenVINO_SOURCE_DIR@/src/frontends/common/include" \
"@OpenVINO_SOURCE_DIR@/src/inference/dev_api" \
"@OpenVINO_SOURCE_DIR@/src/inference/include"
```
From these header files we need only the public API ones:

```
"@OpenVINO_SOURCE_DIR@/src/frontends/common/include"
"@OpenVINO_SOURCE_DIR@/src/core/include"
"@OpenVINO_SOURCE_DIR@/src/inference/include"
```
```
ie_complete_call_back \
IEStatusCode \
input_shape \
struct_desc
```
Why do we need all these EXCLUDE_SYMBOLS?
Sphinx stuff, Python versions of docs and other updates