System information (version)
- OpenVINO => 2021.2
- Operating System / Platform => Ubuntu 18.04
- Compiler => g++
- Problem classification => Inference Engine
Detailed description
The ExecutableNetwork object created by InferenceEngine::ImportNetwork does not seem to work. Attempting to create an inference request from it results in:

```
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what(): Can not create infer request: there is no available devices with platform 2480
```
Steps to reproduce
For our application, we need to minimize system boot time. Compiling the MYRIAD model takes ~30 s on this platform, so I would like to cache the compiled model to reduce load time. The application exports a blob file from the XML model if the blob does not yet exist; on the next run the cached blob should be used, but importing it does not yield a usable network:
```cpp
InferenceEngine::Core core;
std::string model = "/path/to/model.xml";
auto cached_network_path = fs::strip_extension(model) + ".blob";

if (!fs::exists(cached_network_path)) {
    // If the .blob does not exist, use LoadNetwork and export the blob
    // (this network is usable).
    InferenceEngine::CNNNetwork network{core.ReadNetwork(model)};
    // ... various network setup ...
    executable_network_ = core.LoadNetwork(network, "MYRIAD");
    LOG(DEBUG) << "Caching compiled network to: " << cached_network_path;
    std::ofstream file{cached_network_path, std::ios::binary};
    executable_network_.Export(file);
    file.close();
} else {
    // If the .blob exists, use ImportNetwork.
    LOG(DEBUG) << "Using cached compiled network: " << cached_network_path;
    std::ifstream file{cached_network_path, std::ios::binary};
    executable_network_ = core.ImportNetwork(file, "MYRIAD", {});
    file.close();
}
// Error here if ImportNetwork was used:
return executable_network_.CreateInferRequest();
```
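To rule out a trivially bad cache file on the import path, I also checked that the blob on disk is present and non-empty before importing. A minimal standalone sketch of that check (the helper name `blob_looks_valid` is mine, standard library only, no Inference Engine calls):

```cpp
#include <fstream>
#include <string>

// Hypothetical helper: returns true if the cached blob file exists and
// is non-empty. A zero-byte or missing blob (e.g. from an interrupted
// Export) would make ImportNetwork fail for an unrelated reason.
bool blob_looks_valid(const std::string& path) {
    // Open at the end so tellg() immediately reports the file size.
    std::ifstream file{path, std::ios::binary | std::ios::ate};
    return file.good() && file.tellg() > 0;
}
```

The blob passes this check in my case, so the failure does not come from a truncated or missing file.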
Issue submission checklist
- [x] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, Stack Overflow, etc. and have not found a solution
- [x] There is reproducer code and related data files: images, videos, models, etc.