{"id":56543,"date":"2024-07-02T06:00:00","date_gmt":"2024-07-02T13:00:00","guid":{"rendered":"https:\/\/learnopencv.com\/?p=56543"},"modified":"2025-02-26T22:22:08","modified_gmt":"2025-02-27T06:22:08","slug":"object-detection-on-edge-device","status":"publish","type":"post","link":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/","title":{"rendered":"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset"},"content":{"rendered":"\n<p>Performing<strong> Object Detection on edge device<\/strong> is an exciting area for tech enthusiasts where we can implement powerful computer vision applications in compact, efficient packages. Here we show&nbsp;one such interesting embedded computer vision application by deploying models on a popular edge AI device like <strong>OAK-D-Lite<\/strong>.<\/p>\n\n\n<figure class=\"aligncenter wp-block-post-featured-image\"><img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif\" class=\"attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"object-detection-edge-device-oakd\" style=\"object-fit:cover;\" \/><\/figure>\n\n\n<p>According to the <strong>World Health Organization&#8217;s <\/strong>2023 report, road accidents claim 1.19 million lives annually and cause non-fatal injuries to 20 to 50 million people. A major contributor to these unsafe driving conditions is road potholes. To address this issue, we demonstrate the procedure for running a fine-tuned pothole detection model (<strong>YOLOv8<\/strong>) on the OAK-D-Lite device. In a broader perspective, this deployment can serve as a baseline reference, which can be extended to assist drivers by providing real-time monitoring and warning indicators, thereby enhancing road safety.<\/p>\n\n\n\n<p>To learn more about OAK-D and its essential applications, we recommend that you bookmark our series on <a href=\"https:\/\/learnopencv.com\/?s=stereo+depth\" target=\"_blank\" rel=\"noopener\" title=\"\">Stereo <\/a><a href=\"https:\/\/learnopencv.com\/depth-perception-using-stereo-camera-python-c\/\" target=\"_blank\" rel=\"noopener\" title=\"Depth\">Depth<\/a> and <a href=\"https:\/\/learnopencv.com\/page\/2\/?s=stereo+depth\" target=\"_blank\" rel=\"noopener\" title=\"\">OAK-D<\/a>.&nbsp;<\/p>\n\n\n\n<p>By this article&#8217;s end, you can<strong> convert and deploy recent lightweight YOLO models on an OAK-D-Lite<\/strong> device.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-yolo-master-post-every-model-explained\"><strong>YOLO Master Post &#8211; &nbsp;Every Model Explained<\/strong><\/h2>\n\n\n\n<div style=\"background-color:#f2f2f2;color:#32373c\" class=\"wp-block-genesis-blocks-gb-testimonial left-aligned gb-has-avatar gb-font-size-18 gb-block-testimonial\"><div class=\"gb-testimonial-text\">Unlock the full story behind all the YOLO models&#8217; evolutionary journey: Dive into our extensive pillar post, where we unravel the evolution from YOLOv1 to YOLOv10. 
This essential guide is packed with insights, comparisons, and a deeper understanding that you won&#8217;t find anywhere else.<br>Don&#8217;t miss out on this comprehensive resource, Mastering <a href=\"https:\/\/learnopencv.com\/mastering-all-yolo-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">All Yolo Models<\/a> for a richer, more informed perspective on the YOLO series.<\/div><div class=\"gb-testimonial-info\"><div class=\"gb-testimonial-avatar-wrap\"><div class=\"gb-testimonial-image-wrap\"><img decoding=\"async\" class=\"gb-testimonial-avatar\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2023\/12\/mastering-all-yolo-models-feature-image-150x150.gif\"\/><\/div><\/div><h2 class=\"gb-testimonial-name\" style=\"color:#32373c\"><a href=\"https:\/\/learnopencv.com\/mastering-all-yolo-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mastering All YOLO Models from YOLOv1 to YOLOv10: Papers Explained (2024)<\/a><\/h2><small class=\"gb-testimonial-title\" style=\"color:#32373c\"><\/small><\/div><\/div>\n\n\n\n<ol class=\"wp-block-list toc-gutenberg\">\n<li><a href=\"#aioseo-minimal-intro-to-oak-d-lite\" title=\"An Intro to OAK-D-Lite\">An Intro to OAK-D-Lite<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-depthai-oak-d-uses-a-depthai-pipeline-to-do-inference-and-establishes-connection-between-host-raspberry-pi-or-jetson-nano-or-a-laptop-with-oak-d-lite-device-via-xlinkin-and-xlinkout\" title=\"DepthAI Pipeline with OAK-D-Lite\">DepthAI Pipeline for OAK-D-Lite<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-why-openvino-ir\" title=\"What is OpenVINO IR Format ?\">How Does OpenVINO IR Format Optimizes OAK-D-Lite?<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-understanding-the-dataset\" title=\"Understanding the Pothole Dataset\">Understanding the Pothole Dataset<\/a> <\/li>\n\n\n\n<li><a href=\"#aioseo-code-walkthrough\" title=\"Code Walkthrough\">Code Walkthrough<\/a>\n<ol class=\"wp-block-list\">\n<li> <a href=\"#aioseo-model-conversion-yolo-pytorch-to-myriadx-blob-format\" title=\"Model Conversion: YOLO Pytorch to MyriadX Blob Format\">Model Conversion: YOLO Pytorch to MyriadX Blob Format<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-oak-d-lite-deployment\" title=\"OAK-D-Lite Deployment\">OAK-D-Lite Deployment<\/a><\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><a href=\"#aioseo-inference-results\" title=\"Video Inference Results on OAK-D-Lite\">Video Inference Results on OAK-D-Lite<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-key-takeaways\" title=\"Key Takeaways of OAK-D Deployment\">Key Takeaways of OAK-D Deployment<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-conclusion\" title=\"Conclusion\">Conclusion<\/a><\/li>\n\n\n\n<li><a href=\"#aioseo-references\" title=\"References\">References<\/a><\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-minimal-intro-to-oak-d-lite\">An Intro to OAK-D-Lite<\/h2>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"DepthAI  - Step By Step tutorial For Using OAK-D\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/e_uPEE_zlDo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p><a href=\"https:\/\/learnopencv.com\/introduction-to-opencv-ai-kit-and-depthai\/\" target=\"_blank\" rel=\"noopener\" 
title=\"\">OAK-D-Lite<\/a> by Luxonis has an<strong> Intel Myriad chip<\/strong> or <strong>VPU<\/strong>(Vision Processing Unit) that can process 4 Trillion Neural Operations per second. As of 2024, It is priced at $149 USD, which makes it affordable for computer vision engineers to experiment with spatial AI.<\/p>\n\n\n\n<p>OAK-D-Lite is called the Swiss Army Knife of Computer Vision, which is true because of its form factor and impressive performance.<\/p>\n\n\n\n<p><strong>Hardware Specs<\/strong><br>OAK-D-Lite has:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A <strong>13 MP<\/strong> Color Camera<\/li>\n\n\n\n<li>Depth FOV\/ Horizontal FOV \/ Vertical FOV of <strong>81\u00b0 \/ 69\u00b0 \/ 54\u00b0<\/strong>&nbsp;<\/li>\n\n\n\n<li>Focus (AF: 8cm &#8211; \u221e <strong>OR<\/strong> FF: 50cm &#8211; \u221e )<\/li>\n\n\n\n<li>Power Consumption: USB 3 C-Type Cable with 900mA at <strong>5V<\/strong><\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/Oak-devices-scaled.jpg\"><img decoding=\"async\" width=\"1365\" height=\"1024\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/Oak-devices-edited-scaled.jpg\" alt=\"OAK-D-Lite Device - OAK-D AI Kit\" class=\"wp-image-56556\" style=\"width:496px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 1: <\/strong>OAK-D and <strong>OAK-D-Lite<\/strong> Device<\/figcaption><\/figure>\n\n\n\n<p>OAK-D-Lite has IMU Sensors (BMI270) that can be used for robotics perception, navigation, localization, and motion tracking.&nbsp;<\/p>\n\n\n\n<p>Are you interested in reading further about <strong>robotics<\/strong> perception? Starting with our guide on <a href=\"https:\/\/learnopencv.com\/monocular-slam-in-python\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Monocular SLAM with Python<\/a> can give you good hands-on experience with <strong>SLAM<\/strong> and perception.<\/p>\n\n\n\n\t\t<div data-elementor-type=\"section\" data-elementor-id=\"33489\" class=\"elementor elementor-33489\" data-elementor-post-type=\"elementor_library\">\n\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-aeedff9 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"aeedff9\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-a4845f2\" data-id=\"a4845f2\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-873d2bf elementor-widget elementor-widget-elementskit-testimonial\" data-id=\"873d2bf\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"elementskit-testimonial.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<div class=\"ekit-wid-con\" ><div class=\"elementskit-testimonial-slider arrow_inside slider-dotted\" 
data-config=\"{&quot;rtl&quot;:false,&quot;arrows&quot;:false,&quot;dots&quot;:true,&quot;pauseOnHover&quot;:true,&quot;autoplay&quot;:true,&quot;speed&quot;:1500,&quot;slidesPerGroup&quot;:1,&quot;slidesPerView&quot;:2,&quot;loop&quot;:true,&quot;spaceBetween&quot;:15,&quot;breakpoints&quot;:{&quot;320&quot;:{&quot;slidesPerView&quot;:1,&quot;slidesPerGroup&quot;:1,&quot;spaceBetween&quot;:10},&quot;768&quot;:{&quot;slidesPerView&quot;:2,&quot;slidesPerGroup&quot;:1,&quot;spaceBetween&quot;:10},&quot;1024&quot;:{&quot;slidesPerView&quot;:2,&quot;slidesPerGroup&quot;:1,&quot;spaceBetween&quot;:15}}}\">\n\t<div class=\"ekit-main-swiper swiper\">\n\t\t<div class=\"swiper-wrapper\">\n\t\t\t\t\t\t<div class=\"swiper-slide\">\n\t\t\t\t<div class=\"swiper-slide-inner\">\n\t\t\t\t\t<a class=\"elementskit-testimonial-inner\" href=\"https:\/\/opencv.org\/university\/free-opencv-course\/?utm_source=locv&#038;utm_medium=midblog&#038;utm_campaign=object-detection-on-edge-device-deploying-yolov8-on-luxonis-oak-d-lite-pothole-datset\" target=\"_blank\">\n\t\t\t\t\t\t<div class=\"elementskit-single-testimonial-slider  ekit_testimonial_style_2\">\n\t\t\t\t\t\t\t<div class=\"elementskit-commentor-content\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"elementskit-client_logo\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/opencv.org\/university\/wp-content\/uploads\/sites\/4\/2023\/05\/All-CV-Courses-Thumbnails-3.jpg\" title=\"\" alt=\"\" class=\"elementskit-testimonial-client-logo\" \/>\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<p><i class=\"fa fa-group\"><\/i> 100K+ Learners<br>\n<i class=\"fa fa-clock\"><\/i>  3 Hours of Learning<\/p>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementskit-profile-info\">\n\t\t\t\t\t\t\t\t\t<strong class=\"elementskit-author-name\">Join Free OpenCV Bootcamp<\/strong>\n\t\t\t\t\t\t\t\t\t<span class=\"elementskit-author-des\"><\/span>\n\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t\t\t<div class=\"swiper-slide\">\n\t\t\t\t<div class=\"swiper-slide-inner\">\n\t\t\t\t\t<a class=\"elementskit-testimonial-inner\" href=\"https:\/\/opencv.org\/university\/free-tensorflow-keras-course\/?utm_source=locv&#038;utm_medium=midblog&#038;utm_campaign=object-detection-on-edge-device-deploying-yolov8-on-luxonis-oak-d-lite-pothole-datset\" target=\"_blank\">\n\t\t\t\t\t\t<div class=\"elementskit-single-testimonial-slider  ekit_testimonial_style_2\">\n\t\t\t\t\t\t\t<div class=\"elementskit-commentor-content\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"elementskit-client_logo\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/opencv.org\/university\/wp-content\/uploads\/sites\/4\/2023\/05\/Free-TF-Bootcamp_4.jpg\" title=\"\" alt=\"\" class=\"elementskit-testimonial-client-logo\" \/>\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<p><i class=\"fa fa-group\"><\/i> 15K+ Learners<br>\n<i class=\"fa fa-clock\"><\/i>  3 Hours of Learning<\/p>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementskit-profile-info\">\n\t\t\t\t\t\t\t\t\t<strong class=\"elementskit-author-name\">Join Free TensorFlow Bootcamp<\/strong>\n\t\t\t\t\t\t\t\t\t<span class=\"elementskit-author-des\"><\/span>\n\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t\t\t<div 
class=\"swiper-slide\">\n\t\t\t\t<div class=\"swiper-slide-inner\">\n\t\t\t\t\t<a class=\"elementskit-testimonial-inner\" href=\"https:\/\/opencv.org\/university\/free-pytorch-course\/?utm_source=locv&#038;utm_medium=midblog&#038;utm_campaign=object-detection-on-edge-device-deploying-yolov8-on-luxonis-oak-d-lite-pothole-datset\" target=\"_blank\">\n\t\t\t\t\t\t<div class=\"elementskit-single-testimonial-slider  ekit_testimonial_style_2\">\n\t\t\t\t\t\t\t<div class=\"elementskit-commentor-content\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"elementskit-client_logo\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/opencv.org\/university\/wp-content\/uploads\/sites\/4\/2025\/02\/PyTorch_Bootcamp.jpg\" title=\"\" alt=\"\" class=\"elementskit-testimonial-client-logo\" \/>\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<p><i class=\"fa fa-group\"><\/i> 10K+ Learners<br>\n<i class=\"fa fa-clock\"><\/i>  8 Hours of Learning<\/p>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementskit-profile-info\">\n\t\t\t\t\t\t\t\t\t<strong class=\"elementskit-author-name\">Join Free  PyTorch Bootcamp<\/strong>\n\t\t\t\t\t\t\t\t\t<span class=\"elementskit-author-des\"><\/span>\n\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t<div class=\"swiper-pagination\"><\/div>\n\t\t\n\t\t\t<\/div>\n<\/div>\n<\/div>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-756ac89 elementor-align-center elementor-widget elementor-widget-button\" data-id=\"756ac89\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-button-wrapper\">\n\t\t\t\t\t<a class=\"elementor-button elementor-button-link elementor-size-sm\" href=\"https:\/\/opencv.org\/university\/free-courses\/?utm_source=lopcv&#038;utm_medium=blog\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-button-content-wrapper\">\n\t\t\t\t\t\t<span class=\"elementor-button-icon\">\n\t\t\t\t<i aria-hidden=\"true\" class=\"fas fa-chevron-right\"><\/i>\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t<span class=\"elementor-button-text\">View all AI Free Courses<\/span>\n\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t \n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-depthai-oak-d-uses-a-depthai-pipeline-to-do-inference-and-establishes-connection-between-host-raspberry-pi-or-jetson-nano-or-a-laptop-with-oak-d-lite-device-via-xlinkin-and-xlinkout\"><strong>DepthAI<\/strong> Pipeline for OAK-D-Lite<\/h2>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/depthai-pipeline-transformed.png\"><img decoding=\"async\" width=\"1024\" height=\"472\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/depthai-pipeline-transformed-1024x472.png\" alt=\" DepthAI  Pipeline - Deploy Object Detection Model on OAK-D\" class=\"wp-image-56557\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 2<\/strong>: DepthAI  Pipeline<\/figcaption><\/figure>\n\n\n\n<p>OAK-D uses a <a href=\"https:\/\/learnopencv.com\/depthai-pipeline-overview-creating-a-complex-pipeline\/\" target=\"_blank\" rel=\"noopener\" title=\"\">depthai 
pipeline<\/a> to run inference and establishes a connection between the <strong>host (a Raspberry Pi, Jetson Nano, or a laptop)<\/strong> and the<strong> OAK-D-Lite (device)<\/strong> via the <strong>XLinkIn <\/strong>and <strong>XLinkOut<\/strong> nodes.<\/p>\n\n\n\n<p><strong>Note<\/strong>: The hardware configuration of the host device can greatly impact the inference results. The inference results shown in this article were obtained with a <strong>host<\/strong> machine running an <strong>Intel i5 13th-gen<\/strong> CPU.<\/p>\n\n\n\n<p>The<strong> <code>dai.node.YoloDetectionNetwork<\/code><\/strong> node expects the model in <code>.blob<\/code> format and the model config with hyperparameters in a <code>.json<\/code> file. The <code>.blob<\/code> file is compiled from the <strong>OpenVINO<\/strong> <code>.xml<\/code> file containing the model architecture and the <code>.bin<\/code> file containing the model weights.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2022\/04\/model-compile.png\" alt=\"Convert any Model to OpenVINO Blob Format - Edge AI Device Deployment\" style=\"width:660px;height:auto\"\/><figcaption class=\"wp-element-caption\"><strong>FIG 3<\/strong>: Convert any Model to OpenVINO Blob Format<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-why-openvino-ir\"><strong>How Does the OpenVINO IR Format Optimize OAK-D-Lite?<\/strong><\/h2>\n\n\n\n<p><a href=\"https:\/\/learnopencv.com\/running-openvino-models-on-intel-integrated-gpu\/\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO by Intel \">OpenVINO by Intel <\/a>is an impressive library that can port almost any deep learning model to run in a highly <strong>optimized<\/strong>, efficient, and performant manner. As discussed earlier, OAK-D-Lite has an Intel VPU, so we can deploy and run our model at a higher FPS.<\/p>
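<p>For instance, once we have an IR model (we will generate one in the model conversion section below), it can be loaded and inspected with the OpenVINO runtime as a quick sanity check. This is a minimal sketch, assuming the IR files are named <code>best.xml<\/code> and <code>best.bin<\/code>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport openvino as ov\n\ncore = ov.Core()\n# Reading best.xml automatically picks up the matching best.bin weights file\nmodel = core.read_model(&quot;best.xml&quot;)\nprint(model.inputs)  # verify the input name, shape, and element type\n\n# Optionally compile for the local CPU to sanity-check the IR before blob conversion\ncompiled_model = core.compile_model(model, &quot;CPU&quot;)\n<\/pre><\/div>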
<p>OpenVINO supports a wide range of model formats like TensorFlow, PyTorch, MXNet, Caffe, Kaldi, ONNX, etc.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/ov_chart.png\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/openvino-model-conversion.png\" alt=\"OpenVINO Model Conversion - How to deploy an object detection model on OAK-D\" style=\"width:549px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 4<\/strong>: OpenVINO Model Conversion<\/figcaption><\/figure>\n\n\n\n<p>If you are interested in learning more about the <a href=\"https:\/\/learnopencv.com\/introduction-to-intel-openvino-toolkit\/\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO Toolkit\">OpenVINO Toolkit<\/a> and <a href=\"https:\/\/learnopencv.com\/introduction-to-openvino-deep-learning-workbench\/\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO Toolkit\">Workbench<\/a>, take a detour to our earlier posts.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-understanding-the-dataset\">Understanding the Pothole Dataset<\/h2>\n\n\n\n<p>The pothole dataset contains a single class, <strong>0 : Pothole<\/strong>, with<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>6962<\/strong> training images<\/li>\n\n\n\n<li><strong>271<\/strong> validation images<\/li>\n<\/ul>\n\n\n\n<p>The images cover varying driving scenes captured from car dash cams or street-level POV.<\/p>\n\n\n\n<p>Common image dimensions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>512 x 512<\/li>\n\n\n\n<li>600 x 600<\/li>\n\n\n\n<li>1920 x 1080<\/li>\n\n\n\n<li>1024 x 1024<\/li>\n\n\n\n<li>720 x 720<\/li>\n\n\n\n<li>1280 x 720, and so on<\/li>\n<\/ul>\n\n\n\n<p>Let&#8217;s jump into the code; we will start with the standard approach of <a href=\"https:\/\/learnopencv.com\/train-yolov8-on-custom-dataset\/\" target=\"_blank\" rel=\"noopener\" title=\"fine-tuning YOLOv8 on Pothole dataset \">fine-tuning YOLOv8 on the pothole dataset<\/a> with Ultralytics.<\/p>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"YOLOV8: Train a Custom YOLOv8 Object Detector\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/ZUhRZ9UTkIM?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>You can access the code featured here by pressing the <strong>Download Code<\/strong> button in the banner.<\/p>\n\n\n\n<div class=\"download-code\">\r\n<strong> Download Code<\/strong>\r\nTo easily follow along with this tutorial, please download the code by clicking on the button below. 
It's FREE!\r\n\r\n<div style=\"text-align: left;\"><a class=\"elementorDownloadPopup subscribe-btn middle-download-code\"  href=\"javascript:void(0)\">Download Code<\/a><\/div>\r\n\r\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-code-walkthrough\"><strong>Code Walkthrough<\/strong><\/h2>\n\n\n\n<p>Install Dependencies<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!pip install ultralytics\n<\/pre><\/div>\n\n\n<p>Import Dependencies<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport zipfile\nimport requests\nimport cv2\nimport matplotlib.pyplot as plt\nimport glob\nimport random\nimport os\nimport json\nimport time\nfrom pathlib import Path\nfrom ultralytics import YOLO\n<\/pre><\/div>\n\n\n<p>Next, download the dataset from the provided Dropbox link and unzip it in the <code>datasets<\/code> directory.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nos.makedirs(&#039;datasets&#039;, exist_ok=True)\n\n# Download the zipped dataset if it is not already present locally\ndef download_file(url, save_name):\n   if not os.path.exists(save_name):\n       file = requests.get(url)\n       open(save_name, &#039;wb&#039;).write(file.content)
\ndownload_file(\n   &#039;https:\/\/www.dropbox.com\/s\/qvglw8pqo16769f\/pothole_dataset_v8.zip?dl=1&#039;,\n   &#039;pothole_dataset_v8.zip&#039;\n)\n\n# Unzip the data file\ndef unzip(zip_file=None):\n   try:\n       with zipfile.ZipFile(zip_file) as z:\n           z.extractall(&quot;.\/&quot;)\n           print(&quot;Extracted all&quot;)\n   except:\n       print(&quot;Invalid file&quot;)\n\nunzip(&#039;pothole_dataset_v8.zip&#039;)\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n%cd ..\n<\/pre><\/div>\n\n\n<p>Creating a <code><strong>.yaml<\/strong><\/code> file for YOLO training.<\/p>\n\n\n\n<p>Ultralytics YOLO training expects a YAML file that contains the dataset root directory, the paths to the train and val images, and the class-to-index mapping (names).<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n%%writefile pothole_v8.yaml\npath: &#039;pothole_dataset_v8\/&#039;\ntrain: &#039;train\/images&#039;\nval: &#039;valid\/images&#039;\n# class names\nnames:\n 0: &#039;pothole&#039;\n<\/pre><\/div>\n\n\n<p>Let\u2019s run our training for <code>50 epochs<\/code> with the default model configuration (all layers unfrozen) and monitor the training logs.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Training for 50 epochs.\nEPOCHS = 50\n!yolo task=detect mode=train model=yolov8n.pt imgsz=960 data=pothole_v8.yaml epochs={EPOCHS} batch=32 name=yolov8n_v8_50e\n<\/pre><\/div>\n\n\n<p>Finally, the fine-tuned model produces the following training logs:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/results.png\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/YOLOv8-training-chart-results.png\" alt=\"YOLOv8n Training Logs - Deploy Object Detection Model on OAK-D\" style=\"width:736px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 5<\/strong>: YOLOv8n &#8211; img_sz = 960 &#8211; Training Logs<\/figcaption><\/figure>\n\n\n\n<p>We have carried out a set of three experiments on the YOLOv8 nano model and achieved the highest <strong>mAP@0.5<\/strong> of <strong>0.44<\/strong> with<strong> img_sz = 960<\/strong> and<strong> batch = 8<\/strong>.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>MODEL<\/strong><\/td><td><strong>TRAIN IMAGE SIZE<\/strong><\/td><td><strong>mAP@0.5<\/strong><\/td><\/tr><tr><td>YOLOv8n<\/td><td>( 640, 640 )<\/td><td>0.332<\/td><\/tr><tr><td>YOLOv8n<\/td><td><strong>( 960, 960 )<\/strong><\/td><td><strong>0.44<\/strong><\/td><\/tr><tr><td>YOLOv8n<\/td><td>( 1280, 1280 )<\/td><td>0.40<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><strong>TABLE 1:<\/strong> mAP@0.5 of the fine-tuned YOLOv8n<\/figcaption><\/figure>\n\n\n\n<p>In the experiment notebooks attached to this article, you can find these metric logs.<\/p>\n\n\n\n<p>Next, we will run predictions on the entire validation set as a video inference to understand the robustness of our fine-tuned YOLOv8n model on the pothole dataset.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!yolo task=detect \\\nmode=predict 
\\\nmodel=runs\/detect\/yolov8n_v8_50e3\/weights\/best.pt \\\nsource=datasets\/pothole_dataset_v8\/valid\/images \\\nimgsz=960 \\\nname=yolov8n_v8_50e_infer640 \\\nhide_labels=True\n<\/pre><\/div>\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/direct-pred-val_video-e.gif\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/OAK_D-Pothole-inference-0.gif\" alt=\"Validation Set Inference with Fine-tuned YOLOv8n - Object Detection on Edge Device\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 6<\/strong>: Validation Set Inference with Fine-tuned YOLOv8n<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-model-conversion-yolo-pytorch-to-myriadx-blob-format\"><strong>Model Conversion: YOLO Pytorch to MyriadX Blob Format<\/strong><\/h2>\n\n\n\n<p>Let&#8217;s now discuss the interesting part of converting Yolo models to depthai (OAK-D-Lite) compatible formats. For this, we have several options:<\/p>\n\n\n\n<p><strong>a)<\/strong> An easy approach is to use the <a href=\"https:\/\/tools.luxonis.com\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Luxonis toolkit<\/a>, which allows us to&nbsp; convert YOLOv5, YOLOv6, <strong>YOLOv8<\/strong> to <strong><code>blob<\/code><\/strong> format by uploading the fine-tuned model weights and specifying the trained input image size.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/Yolo-conversion.png\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Yolo-to-blob-model-conversion-luxonis-toolkit.png\" alt=\"Luxonis YOLO Model Convertor Toolkit - Step-by-step guide to convert Pytorch model to OpenVINO\" style=\"width:591px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 7<\/strong>: Luxonis YOLO Model Converter Toolkit<\/figcaption><\/figure>\n\n\n\n<p><strong>b) <\/strong>The second option would be to convert the YOLOv8 pytorch model to OpenVINO IR format directly using Ultralytics <strong>model.export(\u2018openvino\u2019, <\/strong>imgsz<strong>)<\/strong> or to any other format.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nUltralytics supported export formats are (&#039;torchscript&#039;, &#039;onnx&#039;, &#039;openvino&#039;, &#039;engine&#039;, &#039;coreml&#039;, &#039;saved_model&#039;, &#039;pb&#039;, &#039;tflite&#039;, &#039;edgetpu&#039;, &#039;tfjs&#039;, &#039;paddle&#039;, &#039;ncnn&#039;).   
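\n<\/pre><\/div>\n\n\n<p>For example, exporting our fine-tuned checkpoint to OpenVINO IR can be as short as the following minimal sketch (the weights path matches the training run above; <code>half=True<\/code> for FP16 is an optional choice on our part):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nfrom ultralytics import YOLO\n\n# Load the fine-tuned checkpoint from the training run above\nmodel = YOLO(&quot;runs\/detect\/yolov8n_v8_50e3\/weights\/best.pt&quot;)\n\n# Export to OpenVINO IR at the training resolution; half=True requests FP16 weights\nmodel.export(format=&quot;openvino&quot;, imgsz=960, half=True)\n<\/pre><\/div>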
\n\n\n<p><strong>c)<\/strong> <strong>A step-by-step guide to convert the PyTorch model to OpenVINO<\/strong><\/p>\n\n\n\n<p>A final, more generic approach is to convert our model (either custom or pretrained weights) to the <strong>ONNX<\/strong> (Open Neural Network Exchange) format and then to OpenVINO IR.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Define a dummy input tensor with the same size as the input the model expects, on the same device as the model\n\nimport torch\ndummy_input = torch.randn(1, 3, 960, 960, device=DEVICE)\nonnx_path = Path(root) \/ &quot;yolov8n-pothole.onnx&quot;\n\ntorch.onnx.export(\n           model,\n           dummy_input,\n           str(onnx_path),  # Convert Path to str\n           opset_version=11)\n<\/pre><\/div>\n\n\n<p>While performing the <strong>ONNX conversion<\/strong>, we have to make sure all the model layers are compatible with the <a href=\"https:\/\/docs.openvino.ai\/2023.3\/openvino_docs_ops_opset.html\" target=\"_blank\" rel=\"noopener\" title=\"\">opset<\/a> version supported by the OpenVINO tool version in use. Otherwise, the conversion will fail or end with undesirable errors and warnings.<\/p>\n\n\n\n<p>Then, using the OpenVINO library, we can convert the <strong>ONNX model to OpenVINO IR format<\/strong>.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!pip install -q openvino onnx\nimport openvino as ov\n\nir_path = onnx_path.with_suffix(&quot;.xml&quot;)\n\nif not ir_path.exists():\n   print(&quot;Exporting ONNX model to IR... This may take a few minutes.&quot;)\n   ov_model = ov.convert_model(onnx_path)\n   ov.save_model(ov_model, ir_path)\nelse:\n   print(f&quot;IR model {ir_path} already exists.&quot;)\n<\/pre><\/div>\n\n\n<p>Alternatively, we can also convert our ONNX model using the OpenVINO model optimizer by specifying the <strong>input shape<\/strong>, data type, and target layout.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!pip install -q openvino-dev==2022.3.0\n\n!mo --input_model yolov8n-pothole.onnx --input_shape &#x5B;1,3,960,960] --data_type FP16 --target_layout nchw\n<\/pre><\/div>\n\n\n<p>Finally, the OpenVINO IR model can be compiled using the <a href=\"https:\/\/blobconverter.luxonis.com\/\" target=\"_blank\" rel=\"noopener\" title=\"\">BlobConverter toolkit<\/a> provided by Luxonis.<\/p>
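\n\n\n\n<p>If you prefer to stay in a notebook instead of the web UI, Luxonis also ships a <code>blobconverter<\/code> Python package that wraps the same service. The following is a minimal sketch; the exact call and the <code>shaves=6<\/code> setting are our assumptions based on the Luxonis tooling, so double-check them against the current docs:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!pip install -q blobconverter\n\nimport blobconverter\n\n# Compile the OpenVINO IR (best.xml \/ best.bin) into a MyriadX .blob for 6 SHAVE cores\nblob_path = blobconverter.from_openvino(\n    xml=&quot;best.xml&quot;,\n    bin=&quot;best.bin&quot;,\n    data_type=&quot;FP16&quot;,\n    shaves=6,\n)\nprint(blob_path)\n<\/pre><\/div>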
\n\n\n\n<p>Under the hood, the blob model is a wrapper built around the OpenVINO toolkit by the Luxonis team.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/BlobConverter-Interface.png\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Luxonis-Blob-Converter-01.png\" alt=\"Luxonis Blob Converter - Step-by-step guide to convert Pytorch model to OpenVINO\" style=\"width:710px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 8<\/strong>: Luxonis Blob Converter<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/BlobConverter-actual.png\"><img decoding=\"async\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Luxonis-Blob-Converter-02.png\" alt=\"OpenVINO IR to Blob Converter - object detection model on edge devices\" style=\"width:840px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 9:<\/strong> OpenVINO IR to Blob Converter<\/figcaption><\/figure>\n\n\n\n<p><strong>Shaves<\/strong><\/p>\n\n\n\n<p>SHAVEs are specialized processing units inside the Myriad VPU that accelerate neural network inference, especially vector-based operations. Usually, there are 16 SHAVEs available on OAK-D devices. They help offload computationally expensive tasks from the main CPU and boost performance on inference workloads.<\/p>\n\n\n\n<p><strong>Note<\/strong>: By default, the blob model compiles the output layers in the <code>FP16<\/code> data type.<\/p>\n\n\n\n<p>Guess what? The final step is to start our OAK-D-Lite deployment. Hold your excitement!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-oak-d-lite-deployment\"><strong>OAK-D-Lite Deployment<\/strong><\/h2>\n\n\n\n<p>Our entire <a href=\"https:\/\/learnopencv.com\/object-detection-with-depth-measurement-with-oak-d\/\" target=\"_blank\" rel=\"noopener\" title=\"\">object detection model deployment on OAK-D<\/a> with the DepthAI pipeline looks similar to this:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><a href=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/YOLO-Detection-Network-Pipeline.png\"><img decoding=\"async\" width=\"1024\" height=\"611\" src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/06\/YOLO-Detection-Network-Pipeline-1024x611.png\" alt=\"DepthAI with YOLO Detection Network - Object Detection with OAK-D\" class=\"wp-image-56570\" style=\"width:652px;height:auto\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>FIG 10<\/strong>: DepthAI with YOLO Detection Network<\/figcaption><\/figure>\n\n\n\n<p>Install and import the DepthAI dependencies.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n!python3 -m pip install depthai\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport depthai as dai\nimport numpy as np  # used by the pre-processing utilities later in this script\n<\/pre><\/div>
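\n\n\n<p>Before building the pipeline, it can be handy to confirm that the host actually detects the OAK-D-Lite over USB. This optional sanity check is a minimal sketch, assuming a recent <code>depthai<\/code> release:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# List the OAK devices visible to this host (an empty list usually points to a cable or USB issue)\nfor device_info in dai.Device.getAllAvailableDevices():\n    print(device_info.getMxId(), device_info.state)\n<\/pre><\/div>\n\n\n<p>Let\u2019s 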
specify the path to the model, hyperparameters config, input video, etc.<\/p>\n\n\n\n<p>The model configuration .json file contains:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n{\n   &quot;model&quot;: {\n       &quot;xml&quot;: &quot;best.xml&quot;,\n       &quot;bin&quot;: &quot;best.bin&quot;\n},\n   &quot;nn_config&quot;: {\n       &quot;output_format&quot;: &quot;detection&quot;,\n       &quot;NN_family&quot;: &quot;YOLO&quot;,\n       &quot;input_size&quot;: &quot;960x960&quot;,\n       &quot;NN_specific_metadata&quot;: {\n           &quot;classes&quot;: 1,\n           &quot;coordinates&quot;: 4,\n           &quot;anchors&quot;: &#x5B;], # As YOLOv8 has anchor free detections\n           &quot;anchor_masks&quot;: {},\n           &quot;iou_threshold&quot;: 0.5,\n           &quot;confidence_threshold&quot;: 0.5 }},\n   &quot;mappings&quot;: {\n       &quot;labels&quot;: &#x5B;\n           &quot;Class_0&quot;\n       ] }, &quot;version&quot;: 1}\n<\/pre><\/div>\n\n\n<p>Then, the camera resolution is set according to our trained model\u2019s <strong>img_sz = (960,960)<\/strong>.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Define path to the model, test data directory, and results\nYOLOV8N_MODEL = &quot;yolov8_320_fps_check\/result\/best_openvino_2022.1_6shave.blob&quot;\nYOLOV8N_CONFIG = &quot;yolov8_320_fps_check\/result\/best.json&quot;\n\nINPUT_VIDEO = &quot;potholes-in-a-rural.mp4&quot;\nOUTPUT_VIDEO = &quot;vid_result\/potholes-in-a-rural_output_video.mp4&quot;\n\nCAMERA_PREVIEW_DIM = (960, 960)\nLABELS = &#x5B;&quot;Pot-hole&quot;]\n\ndef load_config(config_path):\n   with open(config_path) as f:\n       return json.load(f)\n<\/pre><\/div>\n\n\n<p>Initially, we will start by creating an image pipeline utility with <code>dai.Pipeline()<\/code>. 
In the configuration JSON file, we will get the neural network-specific metadata like <code>classes<\/code>, <code>coordinates<\/code>, <code>iou_threshold<\/code>, <code>confidence_threshold<\/code>, etc.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndef create_image_pipeline(config_path, model_path):\n   pipeline = dai.Pipeline()\n   model_config = load_config(config_path)\n   nnConfig = model_config.get(&quot;nn_config&quot;, {})\n   metadata = nnConfig.get(&quot;NN_specific_metadata&quot;, {})\n   classes = metadata.get(&quot;classes&quot;, {})\n   coordinates = metadata.get(&quot;coordinates&quot;, {})\n   anchors = metadata.get(&quot;anchors&quot;, {})\n   anchorMasks = metadata.get(&quot;anchor_masks&quot;, {})\n   iouThreshold = metadata.get(&quot;iou_threshold&quot;, {})\n   confidenceThreshold = metadata.get(&quot;confidence_threshold&quot;, {})\n   ...\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li>The input to the nn model running on OAK-D-Lite (device) is sent using the <code>XLinkIn<\/code> node.<\/li>\n\n\n\n<li>An instance of detection network is created for <code>YolodetectionNetwork<\/code> using <code>pipeline.create()<\/code>&nbsp;<\/li>\n\n\n\n<li>Output from the model after inference is accessed via the <code>XLinkOut<\/code> node.<\/li>\n\n\n\n<li>To access the input and output stream to the device, we will set a StreamName, which will set a unique name to each stream.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndef create_image_pipeline(config_path, model_path):\n   ...\n  detectionIN = pipeline.create(dai.node.XLinkIn)\n   detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)\n   nnOut = pipeline.create(dai.node.XLinkOut)\n\n   nnOut.setStreamName(&quot;nn&quot;)\n   detectionIN.setStreamName(&quot;detection_in&quot;)\n<\/pre><\/div>\n\n\n<p>Then the YoloDetectionNetwork initialized is configured with the metadata&nbsp; we&nbsp; got from our fine-tuned converted model. 
The Model is compiled using <code>detectionNetwork.setBlobPath()<\/code>.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndef create_image_pipeline(config_path, model_path):\n   ...\n   detectionNetwork.setConfidenceThreshold(confidenceThreshold)\n   detectionNetwork.setNumClasses(classes)\n   detectionNetwork.setCoordinateSize(coordinates)\n   detectionNetwork.setAnchors(anchors)\n   detectionNetwork.setAnchorMasks(anchorMasks)\n   detectionNetwork.setIouThreshold(iouThreshold)\n   detectionNetwork.setBlobPath(model_path)\n   detectionNetwork.setNumInferenceThreads(2)\n   detectionNetwork.input.setBlocking(False)\n\n   detectionIN.out.link(detectionNetwork.input)\n   detectionNetwork.out.link(nnOut.input)\n\n   return pipeline\n<\/pre><\/div>\n\n\n<p>Next, a pipeline instance for our image pipeline is created for communication, flow of data etc., and the video is read using <code>cv2.VideoCapture()<\/code>.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Create pipeline\npipeline = create_image_pipeline(YOLOV8N_CONFIG, YOLOV8N_MODEL)\n\n# Ensure output directory exists\nos.makedirs(os.path.dirname(OUTPUT_VIDEO), exist_ok=True)\n\ncap = cv2.VideoCapture(INPUT_VIDEO)\nframe_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\nframe_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\nfps = cap.get(cv2.CAP_PROP_FPS)\nout = cv2.VideoWriter(OUTPUT_VIDEO, cv2.VideoWriter_fourcc(*&#039;mp4v&#039;), fps, (frame_width, frame_height))\n<\/pre><\/div>\n\n\n<p><strong>Pre-processing Utility:<\/strong><\/p>\n\n\n\n<p>We will define two important pre-processing steps:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>to_planar()<\/code>: We ensure the input frame is resized to the inference image size or camera resolution we have set up earlier using <code>CAMERA_PREVIEW_DIM<\/code>. 
We also ensure the proper model input format, as the DepthAI detection network expects the input frame to be in <code>CHW<\/code> format.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>frame_norm()<\/code>: This utility converts the normalized (0 to 1) bounding-box coordinates returned by the model into pixel coordinates, clipped to the frame dimensions.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndef to_planar(arr: np.ndarray, shape: tuple) -&gt; np.ndarray:\n   # Resize to the model input size and convert HWC to CHW\n   resized = cv2.resize(arr, shape)\n   return resized.transpose(2, 0, 1)\n\ndef frame_norm(frame, bbox):\n   # Map normalized (0-1) bbox coordinates to pixel coordinates of the frame\n   norm_vals = np.full(len(bbox), frame.shape&#x5B;0])\n   norm_vals&#x5B;::2] = frame.shape&#x5B;1]\n   return (np.clip(np.array(bbox), 0, 1) * norm_vals).astype(int)\n<\/pre><\/div>\n\n\n<p><strong>Post-processing Utility:<\/strong><br>This is a simple utility for drawing the bounding boxes, annotating the inference frames, and overlaying the real-time FPS.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\ndef annotate_frame(frame, detections, fps):\n   color = (0, 0, 255)\n   for detection in detections:\n       bbox = frame_norm(frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))\n       cv2.putText(frame, LABELS&#x5B;detection.label], (bbox&#x5B;0] + 10, bbox&#x5B;1] + 25), cv2.FONT_HERSHEY_TRIPLEX, 1, color)\n       cv2.putText(frame, f&quot;{int(detection.confidence * 100)}%&quot;, (bbox&#x5B;0] + 10, bbox&#x5B;1] + 60), cv2.FONT_HERSHEY_TRIPLEX, 1, color)\n       cv2.rectangle(frame, (bbox&#x5B;0], bbox&#x5B;1]), (bbox&#x5B;2], bbox&#x5B;3]), color, 2)\n\n   # Annotate the frame with the FPS\n   cv2.putText(frame, f&quot;FPS: {fps:.2f}&quot;, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n   return frame\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li>The following code sets up the connection to the OAK-D-Lite device.<\/li>\n\n\n\n<li>The <code>detectionIN<\/code> queue is responsible for sending the input to the inference model, and <code>detectionNN<\/code> receives the output from the neural network.<\/li>\n\n\n\n<li>The video frames are processed continuously until the last frame of the video; each frame is reshaped using the <code>to_planar()<\/code> pre-processing utility.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# Connect to device and start pipeline\nwith dai.Device(pipeline) as device:\n   # Define the queues that will be used in order to communicate with depthai\n   detectionIN = device.getInputQueue(&quot;detection_in&quot;)\n   detectionNN = device.getOutputQueue(&quot;nn&quot;)\n\n   start_time = time.time()\n   frame_count = 0\n\n   while cap.isOpened():\n       ret, frame = cap.read()\n       if not ret:\n           break\n\n       frame_count += 1\n       # Initialize the depthai NNData() class, which is fed with the image data resized and transposed to the model input shape\n       nn_data = dai.NNData()\n       nn_data.setLayer(&quot;input&quot;, to_planar(frame, CAMERA_PREVIEW_DIM))\n<\/pre><\/div>\n\n\n<ul class=\"wp-block-list\">\n<li>After pre-processing, the prepared input frame is wrapped in an <code>nn_data<\/code> (<code>dai.NNData<\/code>) instance and sent to the detection model for inference through the input queue.<\/li>\n\n\n\n<li>The detection output from the model is fetched using <code>detectionNN.get()<\/code>. 
If detections are found in a frame, they are stored as a list.<\/li>\n<\/ul>\n\n\n\n<p>If detections are found, we will see DepthAI image detection objects in our CLI:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nDetections &#x5B;&lt;depthai.ImgDetection object at 0x71adcafacd70&gt;]\n<\/pre><\/div>\n\n\n<p>Then, the <code>annotate_frame()<\/code> utility is called, and the output video with pothole detections is saved locally as a .mp4 file using <code>cv2.VideoWriter()<\/code>.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n       # Send the image to the detectionIN queue; it is passed to the detection network for inference as defined in the pipeline\n       detectionIN.send(nn_data)\n\n       # Fetch the neural network output\n       inDet = detectionNN.get()\n       detections = &#x5B;]\n       if inDet is not None:\n           detections = inDet.detections\n           print(&quot;Detections&quot;, detections)\n\n       # Calculate the FPS\n       elapsed_time = time.time() - start_time\n       fps = frame_count \/ elapsed_time if elapsed_time &gt; 0 else 0\n\n       # Annotate the frame with detections and FPS\n       frame = annotate_frame(frame, detections, fps)\n\n       out.write(frame)\n\ncap.release()\nout.release()\nprint(f&quot;&#x5B;INFO] Processed video {INPUT_VIDEO} and saved to {OUTPUT_VIDEO}&quot;)\n<\/pre><\/div>\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Image Size (OAK-D-Lite)<\/strong><\/td><td><strong>Video Inference (FPS)<\/strong><\/td><\/tr><tr><td>320&#215;320<\/td><td>14.1<\/td><\/tr><tr><td>640&#215;640<\/td><td>6.7<\/td><\/tr><tr><td>960&#215;960<\/td><td>3.3<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><strong>TABLE 2: <\/strong>Comparing FPS of OAK-D-Lite with varying image resolutions<\/figcaption><\/figure>\n\n\n\n<p><strong>Note<\/strong>: The deployment pipeline should work with any OAK-D variant, with differences in inference speed and supported image resolution.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-inference-results\">Video Inference Results on OAK-D-Lite<\/h2>\n\n\n\n<p><strong>Result 1<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video autoplay controls loop muted src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Yolov8-oak-d-pothole-inference-3.mp4\"><\/video><figcaption class=\"wp-element-caption\"><strong>VIDEO 1: <\/strong>YOLOv8n &#8211; 960 x 960 &#8211; Pothole Detection<\/figcaption><\/figure>\n\n\n\n<p><strong>Result 2<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video autoplay controls loop muted src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Yolov8-oak-d-pothole-inference-1.mp4\"><\/video><figcaption class=\"wp-element-caption\"><strong>VIDEO 2: <\/strong>YOLOv8n &#8211; 960 x 960 &#8211; Pothole Detection<\/figcaption><\/figure>\n\n\n\n<p><strong>Result 3<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video autoplay controls loop muted src=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/07\/Yolov8-oak-d-pothole-inference-2.mp4\"><\/video><figcaption class=\"wp-element-caption\"><strong>VIDEO 3: <\/strong>YOLOv8n &#8211; 960 x 960 &#8211; Pothole Detection<\/figcaption><\/figure>\n\n\n\n<p>This application is quite impressive. 
<strong>SCROLL UP<\/strong> to check out the implementation details &#8211; you might find them surprisingly interesting!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-using-oak-d-lite-camera-stream\"><strong>Using OAK-D-Lite<\/strong> <strong>Camera Stream<\/strong> <\/h2>\n\n\n\n<p>To infer using OAK-D <strong>RGB Camera Stream<\/strong>, simply change the image pipeline to create a camera pipeline. Additionally, we should also take into account how the frame is retrieved from the RGB camera stream and how it is sent to the detection network&nbsp;<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nCAMERA_PREVIEW_DIM = (960, 960)\nLABELS = &#x5B;&quot;Pot-hole&quot;]\n\ndef create_camera_pipeline(config_path, model_path):\n    pipeline = dai.Pipeline()\n    model_config = load_config(config_path)\n    nnConfig = model_config.get(&quot;nn_config&quot;, {})\n    metadata = nnConfig.get(&quot;NN_specific_metadata&quot;, {})\n    classes = metadata.get(&quot;classes&quot;, {})\n    coordinates = metadata.get(&quot;coordinates&quot;, {})\n    anchors = metadata.get(&quot;anchors&quot;, {})\n    anchorMasks = metadata.get(&quot;anchor_masks&quot;, {})\n    iouThreshold = metadata.get(&quot;iou_threshold&quot;, {})\n    confidenceThreshold = metadata.get(&quot;confidence_threshold&quot;, {})\n\n    # Create camera node\n    camRgb = pipeline.create(dai.node.ColorCamera)\n    camRgb.setPreviewSize(CAMERA_PREVIEW_DIM&#x5B;0], CAMERA_PREVIEW_DIM&#x5B;1])\n    camRgb.setInterleaved(False)\n    camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)\n    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)\n\n    detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)\n    nnOut = pipeline.create(dai.node.XLinkOut)\n\n    nnOut.setStreamName(&quot;nn&quot;)\n\n    detectionNetwork.setConfidenceThreshold(confidenceThreshold)\n    detectionNetwork.setNumClasses(classes)\n    detectionNetwork.setCoordinateSize(coordinates)\n    detectionNetwork.setAnchors(anchors)\n    detectionNetwork.setAnchorMasks(anchorMasks)\n    detectionNetwork.setIouThreshold(iouThreshold)\n    detectionNetwork.setBlobPath(model_path)\n    detectionNetwork.setNumInferenceThreads(2)\n    detectionNetwork.input.setBlocking(False)\n\n    # Linking\n    camRgb.preview.link(detectionNetwork.input)\n    detectionNetwork.out.link(nnOut.input)\n\n    return pipeline\n\n# Create pipeline\npipeline = create_camera_pipeline(YOLOV8N_CONFIG, YOLOV8N_MODEL)\n\n# Connect to device and start pipeline\nwith dai.Device(pipeline) as device:\n    # Define the queue that will be used to receive the neural network output\n    detectionNN = device.getOutputQueue(&quot;nn&quot;, maxSize=4, blocking=False)\n\n    # Video writer to save the output video\n    fps = 30  # Assuming 30 FPS for the OAK-D camera\n    frame_width, frame_height = CAMERA_PREVIEW_DIM\n    out = cv2.VideoWriter(OUTPUT_VIDEO, cv2.VideoWriter_fourcc(*&#039;mp4v&#039;), fps, (frame_width, frame_height))\n\n    start_time = time.time()\n    frame_count = 0\n\n    while True:\n        inDet = detectionNN.get()\n        detections = &#x5B;]\n        if inDet is not None:\n            detections = inDet.detections\n            print(&quot;Detections&quot;, detections)\n        \n        # Retrieve the frame from the camera preview\n        frame = inDet.getFrame()\n\n#Other steps are similar to the previous code we have a discussed earlier.\n\n<\/pre><\/div>\n\n\n<p><strong>Congrats 
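on <\/strong><a href=\"https:\/\/learnopencv.com\/deploy-deep-learning-model-huggingface-spaces\/\" target=\"_blank\" rel=\"noopener\" title=\"\"><strong>deploying your fine-tuned model<\/strong><\/a><strong> on an OAK-D device for the first time.<\/strong><\/p>\n\n\n\n<p>One detail worth spelling out for the camera-stream variant: the detection output queue carries only the detections, not the image itself. A common DepthAI pattern is to also stream the camera preview to the host through a second <code>XLinkOut<\/code> and read frames from that queue. The snippet below is a minimal sketch of that pattern; the <code>&quot;rgb&quot;<\/code> stream name and queue sizes are our own choices rather than part of the original script.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n# 1) Inside create_camera_pipeline(), after creating camRgb, add a preview output\nxoutRgb = pipeline.create(dai.node.XLinkOut)\nxoutRgb.setStreamName(&quot;rgb&quot;)\ncamRgb.preview.link(xoutRgb.input)\n\n# 2) Inside the device context, read the frame from the &quot;rgb&quot; queue and detections from &quot;nn&quot;\nqRgb = device.getOutputQueue(&quot;rgb&quot;, maxSize=4, blocking=False)\nwhile True:\n    frame = qRgb.get().getCvFrame()   # BGR preview frame at CAMERA_PREVIEW_DIM\n    inDet = detectionNN.get()\n    detections = inDet.detections if inDet is not None else &#x5B;]\n    frame = annotate_frame(frame, detections, fps=0.0)  # reuse the earlier utility\n<\/pre><\/div>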
\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-key-takeaways\">Key Takeaways of OAK-D Deployment<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Running optimized models on edge devices like OAK-D-Lite using the OpenVINO toolkit showed promising results and was computationally efficient.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Future work can include exploring <a href=\"https:\/\/learnopencv.com\/post-training-quantization-with-openvino-toolkit\/\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO post training quantization\">OpenVINO post-training quantization<\/a> and quantization-aware training, carried out along with the OpenVINO model conversion, to further optimize the FPS and latency of our OAK-D-Lite device.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>To enhance <a href=\"https:\/\/learnopencv.com\/driver-drowsiness-detection-using-mediapipe-in-python\/\" target=\"_blank\" rel=\"noopener\" title=\"\">driver safety<\/a>, we can extend this experiment, keeping it as a baseline, by installing the OAK-D-Lite with a Raspberry Pi or Jetson Nano host near a vehicle\u2019s driver seat as a monitoring system that warns of upcoming potholes in the detected <a href=\"https:\/\/learnopencv.com\/segformer-fine-tuning-for-lane-detection\/\" target=\"_blank\" rel=\"noopener\" title=\"\">lane<\/a>.<\/li>\n<\/ul>\n\n\n\n<p>Having a career in robotics is a \u201cPursuit of Happiness\u201d. For a <strong>foundational understanding<\/strong>, explore our Getting Started with <a href=\"https:\/\/learnopencv.com\/category\/robotics\/\" target=\"_blank\" rel=\"noopener\" title=\"Robotics Series\">Robotics Series<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-conclusion\">Conclusion<\/h2>\n\n\n\n<p>Our objective of deploying an object detection model on OAK-D has been achieved. Now, you will be able to deploy the latest YOLO models on OAK-D devices using the DepthAI pipeline. For models other than YOLO, you can use the third approach discussed in the Model Conversion section. 
By developing hands-on experience with deep neural models and edge AI devices, you can build robust real-world applications with a wide range of possibilities.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"aioseo-references\">References<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.luxonis.com\/software\/depthai\/manual-install\/#Test%20Installation\" target=\"_blank\" rel=\"noopener\" title=\"Luxonis Docs\">Luxonis Docs<\/a><\/li>\n\n\n\n<li>Object Detection on Edge Device &#8211; <a href=\"https:\/\/github.com\/luxonis\/depthai\" target=\"_blank\" rel=\"noopener\" title=\"DepthAI Repository\">DepthAI Repository<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/docs.openvino.ai\/2024\/index.html\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO Documentation\">OpenVINO Documentation<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/docs.ultralytics.com\/modes\/export\/\" target=\"_blank\" rel=\"noopener\" title=\"Ultralytics Model Export\">Ultralytics Model Export<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/docs.openvino.ai\/2023.3\/ptq_introduction.html\" target=\"_blank\" rel=\"noopener\" title=\"OpenVINO Post Training Quantization\">OpenVINO Post Training Quantization<\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>This article discusses how to use any fine-tuned YOLOv8 PyTorch model on an OAK-D-Lite device with the OpenVINO IR format.<\/p>\n","protected":false},"author":120,"featured_media":58015,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1910,590,1067,649],"tags":[1912,1117,1119,2769,2768,1075,1217,2045,1116,1115,2046,139,648,378,2767,650,1769],"coauthors":[2637],"class_list":["post-56543","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-deployment","category-edge-devices","category-intel-openvino-toolkit","category-oak","tag-deep-learning-model-deployment","tag-depthai","tag-edge-ai","tag-luxonis","tag-model-deployment","tag-model-optimization","tag-oak","tag-oak-d-camera","tag-oak-d-lite","tag-oak-d-oak-d-lite-spatial-ai-depthai-stereo-vision-edge-ai-pipeline-node","tag-oak-d-projects","tag-object-detection","tag-opencv-ai-kit","tag-openvino","tag-real-time-inference","tag-spatial-ai","tag-yolov8"],"acf":[],"aioseo_notices":[],"featured_image_src":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated-600x400.gif","featured_image_src_square":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated-600x432.gif","author_info":{"display_name":"Jaykumaran","author_link":"https:\/\/learnopencv.com\/author\/jayakumaran\/"}}
projects,object detection,opencv ai kit,openvino,real-time inference,spatial ai,yolov8\" \/>\n\t<link rel=\"canonical\" href=\"https:\/\/learnopencv.com\/object-detection-on-edge-device\/\" \/>\n\t<meta name=\"generator\" content=\"All in One SEO Pro (AIOSEO) 4.9.5.2\" \/>\n\t\t<meta property=\"og:locale\" content=\"en_US\" \/>\n\t\t<meta property=\"og:site_name\" content=\"LearnOpenCV \u2013 Learn OpenCV, PyTorch, Keras, Tensorflow with code, &amp; tutorials\" \/>\n\t\t<meta property=\"og:type\" content=\"article\" \/>\n\t\t<meta property=\"og:title\" content=\"Object Detection on Edge Device - OAK-D\" \/>\n\t\t<meta property=\"og:description\" content=\"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.\" \/>\n\t\t<meta property=\"og:url\" content=\"https:\/\/learnopencv.com\/object-detection-on-edge-device\/\" \/>\n\t\t<meta property=\"og:image\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif\" \/>\n\t\t<meta property=\"og:image:secure_url\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif\" \/>\n\t\t<meta property=\"og:image:width\" content=\"768\" \/>\n\t\t<meta property=\"og:image:height\" content=\"432\" \/>\n\t\t<meta property=\"article:published_time\" content=\"2024-07-02T13:00:00+00:00\" \/>\n\t\t<meta property=\"article:modified_time\" content=\"2025-02-27T06:22:08+00:00\" \/>\n\t\t<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Learnopencv-277284889389059\" \/>\n\t\t<meta name=\"twitter:card\" content=\"summary\" \/>\n\t\t<meta name=\"twitter:site\" content=\"@AiOpencv\" \/>\n\t\t<meta name=\"twitter:title\" content=\"Object Detection on Edge Device - OAK-D\" \/>\n\t\t<meta name=\"twitter:description\" content=\"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.\" \/>\n\t\t<meta name=\"twitter:creator\" content=\"@AiOpencv\" \/>\n\t\t<meta name=\"twitter:image\" content=\"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif\" \/>\n\t\t<script type=\"application\/ld+json\" class=\"aioseo-schema\">\n\t\t\t{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#article\",\"name\":\"Object Detection on Edge Device - OAK-D\",\"headline\":\"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset\",\"author\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/jayakumaran\\\/#author\"},\"publisher\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/uploads\\\/2024\\\/08\\\/object-detection-edge-device-oakd-updated.gif\",\"width\":768,\"height\":432,\"caption\":\"object-detection-edge-device-oakd-updated\"},\"datePublished\":\"2024-07-02T06:00:00-07:00\",\"dateModified\":\"2025-02-26T22:22:08-08:00\",\"inLanguage\":\"en-US\",\"commentCount\":9,\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#webpage\"},\"isPartOf\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#webpage\"},\"articleSection\":\"Deployment, Edge Devices, Intel OpenVINO Toolkit, OAK, deep learning 
model deployment, DepthAI, EDGE AI, Luxonis, model deployment, Model Optimization, OAK, oak-d camera, OAK-D Lite, OAK-D OAK-D Lite Spatial AI DepthAI Stereo Vision EDGE AI Pipeline Node, oak-d projects, Object Detection, OpenCV AI Kit, OpenVINO, real-time inference, Spatial AI, YOLOv8, jayakumaran\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#breadcrumblist\",\"itemListElement\":[{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com#listItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/learnopencv.com\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/edge-devices\\\/#listItem\",\"name\":\"Edge Devices\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/edge-devices\\\/#listItem\",\"position\":2,\"name\":\"Edge Devices\",\"item\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/edge-devices\\\/\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#listItem\",\"name\":\"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset\"},\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com#listItem\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#listItem\",\"position\":3,\"name\":\"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset\",\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/category\\\/edge-devices\\\/#listItem\",\"name\":\"Edge Devices\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\",\"name\":\"Satya Mallick\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#personImage\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/litespeed\\\/avatar\\\/483395fab515fdb59dbd986fdcd73e10.jpg?ver=1776255091\",\"width\":96,\"height\":96,\"caption\":\"Satya Mallick\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/jayakumaran\\\/#author\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/jayakumaran\\\/\",\"name\":\"Jaykumaran\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/1687253077471-e1715696465480-150x150.jpg\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#webpage\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/\",\"name\":\"Object Detection on Edge Device - OAK-D\",\"description\":\"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole 
Detection.\",\"inLanguage\":\"en-US\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#website\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#breadcrumblist\"},\"author\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/jayakumaran\\\/#author\"},\"creator\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/author\\\/jayakumaran\\\/#author\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/wp-content\\\/uploads\\\/2024\\\/08\\\/object-detection-edge-device-oakd-updated.gif\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#mainImage\",\"width\":768,\"height\":432,\"caption\":\"object-detection-edge-device-oakd-updated\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/object-detection-on-edge-device\\\/#mainImage\"},\"datePublished\":\"2024-07-02T06:00:00-07:00\",\"dateModified\":\"2025-02-26T22:22:08-08:00\"},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#website\",\"url\":\"https:\\\/\\\/learnopencv.com\\\/\",\"name\":\"LearnOpenCV\",\"description\":\"Learn OpenCV, PyTorch, Keras, Tensorflow with code, & tutorials\",\"inLanguage\":\"en-US\",\"publisher\":{\"@id\":\"https:\\\/\\\/learnopencv.com\\\/#person\"}}]}\n\t\t<\/script>\n\t\t<!-- All in One SEO Pro -->\r\n\t\t<title>Object Detection on Edge Device - OAK-D<\/title>\n\n","aioseo_head_json":{"title":"Object Detection on Edge Device - OAK-D","description":"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.","canonical_url":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/","robots":"max-image-preview:large","keywords":"deep learning model deployment,depthai,edge ai,luxonis,model deployment,model optimization,oak,oak-d camera,oak-d lite,oak-d oak-d lite spatial ai depthai stereo vision edge ai pipeline node,oak-d projects,object detection,opencv ai kit,openvino,real-time inference,spatial ai,yolov8","webmasterTools":{"google-site-verification":"k5TzNXM2eW8MZp1slBPbu88XEZ7mPURnk897kaccqiI","miscellaneous":""},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#article","name":"Object Detection on Edge Device - OAK-D","headline":"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset","author":{"@id":"https:\/\/learnopencv.com\/author\/jayakumaran\/#author"},"publisher":{"@id":"https:\/\/learnopencv.com\/#person"},"image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif","width":768,"height":432,"caption":"object-detection-edge-device-oakd-updated"},"datePublished":"2024-07-02T06:00:00-07:00","dateModified":"2025-02-26T22:22:08-08:00","inLanguage":"en-US","commentCount":9,"mainEntityOfPage":{"@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#webpage"},"isPartOf":{"@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#webpage"},"articleSection":"Deployment, Edge Devices, Intel OpenVINO Toolkit, OAK, deep learning model deployment, DepthAI, EDGE AI, Luxonis, model deployment, Model Optimization, OAK, oak-d camera, OAK-D Lite, OAK-D OAK-D Lite Spatial AI DepthAI Stereo Vision EDGE AI Pipeline Node, oak-d projects, Object Detection, OpenCV AI Kit, OpenVINO, real-time inference, Spatial AI, YOLOv8, 
jayakumaran"},{"@type":"BreadcrumbList","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#breadcrumblist","itemListElement":[{"@type":"ListItem","@id":"https:\/\/learnopencv.com#listItem","position":1,"name":"Home","item":"https:\/\/learnopencv.com","nextItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/edge-devices\/#listItem","name":"Edge Devices"}},{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/edge-devices\/#listItem","position":2,"name":"Edge Devices","item":"https:\/\/learnopencv.com\/category\/edge-devices\/","nextItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#listItem","name":"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset"},"previousItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com#listItem","name":"Home"}},{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#listItem","position":3,"name":"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole Datset","previousItem":{"@type":"ListItem","@id":"https:\/\/learnopencv.com\/category\/edge-devices\/#listItem","name":"Edge Devices"}}]},{"@type":"Person","@id":"https:\/\/learnopencv.com\/#person","name":"Satya Mallick","image":{"@type":"ImageObject","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#personImage","url":"https:\/\/learnopencv.com\/wp-content\/litespeed\/avatar\/483395fab515fdb59dbd986fdcd73e10.jpg?ver=1776255091","width":96,"height":96,"caption":"Satya Mallick"}},{"@type":"Person","@id":"https:\/\/learnopencv.com\/author\/jayakumaran\/#author","url":"https:\/\/learnopencv.com\/author\/jayakumaran\/","name":"Jaykumaran","image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/05\/1687253077471-e1715696465480-150x150.jpg"}},{"@type":"WebPage","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#webpage","url":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/","name":"Object Detection on Edge Device - OAK-D","description":"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.","inLanguage":"en-US","isPartOf":{"@id":"https:\/\/learnopencv.com\/#website"},"breadcrumb":{"@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#breadcrumblist"},"author":{"@id":"https:\/\/learnopencv.com\/author\/jayakumaran\/#author"},"creator":{"@id":"https:\/\/learnopencv.com\/author\/jayakumaran\/#author"},"image":{"@type":"ImageObject","url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif","@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#mainImage","width":768,"height":432,"caption":"object-detection-edge-device-oakd-updated"},"primaryImageOfPage":{"@id":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/#mainImage"},"datePublished":"2024-07-02T06:00:00-07:00","dateModified":"2025-02-26T22:22:08-08:00"},{"@type":"WebSite","@id":"https:\/\/learnopencv.com\/#website","url":"https:\/\/learnopencv.com\/","name":"LearnOpenCV","description":"Learn OpenCV, PyTorch, Keras, Tensorflow with code, & tutorials","inLanguage":"en-US","publisher":{"@id":"https:\/\/learnopencv.com\/#person"}}]},"og:locale":"en_US","og:site_name":"LearnOpenCV \u2013 Learn OpenCV, PyTorch, Keras, Tensorflow with code, &amp; tutorials","og:type":"article","og:title":"Object Detection on 
Edge Device - OAK-D","og:description":"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.","og:url":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/","og:image":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif","og:image:secure_url":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif","og:image:width":768,"og:image:height":432,"article:published_time":"2024-07-02T13:00:00+00:00","article:modified_time":"2025-02-27T06:22:08+00:00","article:publisher":"https:\/\/www.facebook.com\/Learnopencv-277284889389059","twitter:card":"summary","twitter:site":"@AiOpencv","twitter:title":"Object Detection on Edge Device - OAK-D","twitter:description":"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.","twitter:creator":"@AiOpencv","twitter:image":"https:\/\/learnopencv.com\/wp-content\/uploads\/2024\/08\/object-detection-edge-device-oakd-updated.gif"},"aioseo_meta_data":{"post_id":"56543","title":"Object Detection on Edge Device - OAK-D","description":"Object Detection on Edge Device - Deployment tutorial for running fine-tuned YOLOv8 on OAK-D-Lite device with DepthAI pipeline for Pothole Detection.","keywords":null,"keyphrases":{"focus":{"keyphrase":"Object Detection on Edge Device","score":74,"analysis":{"keyphraseInTitle":{"score":9,"maxScore":9,"error":0},"keyphraseInDescription":{"score":9,"maxScore":9,"error":0},"keyphraseLength":{"score":6,"maxScore":9,"error":1,"length":5},"keyphraseInURL":{"score":5,"maxScore":5,"error":0},"keyphraseInIntroduction":{"score":9,"maxScore":9,"error":0},"keyphraseInSubHeadings":{"score":3,"maxScore":9,"error":1},"keyphraseInImageAlt":{"score":9,"maxScore":9,"error":0},"keywordDensity":{"score":0,"type":"low","maxScore":9,"error":1}}},"additional":[{"keyphrase":" Object Detection with 
OAK-D","score":53,"analysis":{"keyphraseInDescription":{"score":3,"maxScore":9,"error":1},"keyphraseLength":{"score":9,"maxScore":9,"error":0,"length":4},"keyphraseInIntroduction":{"score":3,"maxScore":9,"error":1},"keyphraseInImageAlt":{"score":9,"maxScore":9,"error":0},"keywordDensity":{"score":0,"type":"low","maxScore":9,"error":1}}}]},"primary_term":{"category":590},"canonical_url":null,"og_title":null,"og_description":null,"og_object_type":"default","og_image_type":"default","og_image_url":null,"og_image_width":null,"og_image_height":null,"og_image_custom_url":null,"og_image_custom_fields":null,"og_video":"","og_custom_url":null,"og_article_section":null,"og_article_tags":null,"twitter_use_og":false,"twitter_card":"default","twitter_image_type":"default","twitter_image_url":null,"twitter_image_custom_url":null,"twitter_image_custom_fields":null,"twitter_title":null,"twitter_description":null,"schema":{"blockGraphs":[],"customGraphs":[],"default":{"data":{"Article":[],"Course":[],"Dataset":[],"FAQPage":[],"Movie":[],"Person":[],"Product":[],"ProductReview":[],"Car":[],"Recipe":[],"Service":[],"SoftwareApplication":[],"WebPage":[]},"graphName":"Article","isEnabled":true},"graphs":[]},"schema_type":"default","schema_type_options":null,"pillar_content":false,"robots_default":true,"robots_noindex":false,"robots_noarchive":false,"robots_nosnippet":false,"robots_nofollow":false,"robots_noimageindex":false,"robots_noodp":false,"robots_notranslate":false,"robots_max_snippet":"-1","robots_max_videopreview":"-1","robots_max_imagepreview":"large","priority":null,"frequency":"default","location":null,"local_seo":null,"breadcrumb_settings":null,"limit_modified_date":false,"reviewed_by":null,"open_ai":"{\"title\":{\"suggestions\":[],\"usage\":0},\"description\":{\"suggestions\":[],\"usage\":0}}","ai":null,"created":"2024-06-26 12:06:21","updated":"2025-02-27 06:22:08"},"aioseo_breadcrumb":"<div class=\"aioseo-breadcrumbs\"><span class=\"aioseo-breadcrumb\">\n\t<a href=\"https:\/\/learnopencv.com\" title=\"Home\">Home<\/a>\n<\/span><span class=\"aioseo-breadcrumb-separator\">\u00bb<\/span><span class=\"aioseo-breadcrumb\">\n\t<a href=\"https:\/\/learnopencv.com\/category\/edge-devices\/\" title=\"Edge Devices\">Edge Devices<\/a>\n<\/span><span class=\"aioseo-breadcrumb-separator\">\u00bb<\/span><span class=\"aioseo-breadcrumb\">\n\tObject Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite \u2013 Pothole Datset\n<\/span><\/div>","aioseo_breadcrumb_json":[{"label":"Home","link":"https:\/\/learnopencv.com"},{"label":"Edge Devices","link":"https:\/\/learnopencv.com\/category\/edge-devices\/"},{"label":"Object Detection on Edge Device: Deploying YOLOv8 on Luxonis OAK-D-Lite &#8211; Pothole 
Datset","link":"https:\/\/learnopencv.com\/object-detection-on-edge-device\/"}],"_links":{"self":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts\/56543","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/users\/120"}],"replies":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/comments?post=56543"}],"version-history":[{"count":0,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/posts\/56543\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/media\/58015"}],"wp:attachment":[{"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/media?parent=56543"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/categories?post=56543"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/tags?post=56543"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/learnopencv.com\/wp-json\/wp\/v2\/coauthors?post=56543"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}