This repository contains code for various Yolo examples:
| Directory | Decoding | Version | Description |
|---|---|---|---|
| main.py | device | From https://tools.luxonis.com | Run your custom trained YOLO model that was converted using tools.luxonis.com. Uses DepthAI-SDK. |
| device-decoding | device | V3, V3-tiny, V4, V4-tiny, V5 | General object detection using any of the versions for which we support on-device decoding. Uses DepthAI-API. |
| car-detection | device | V3-tiny, V4-tiny | Car detection using YoloV3-tiny and YoloV4-tiny with on-device decoding (DepthAI-SDK). |
| host-decoding | host | V5 | Object detection using YoloV5 and on-host decoding. |
| yolox | host | X | Object detection without anchors using YOLOX-tiny with on-host decoding. |
| yolop | host | P | Vehicle detection, road segmentation, and lane segmentation using YOLOP on OAK with on-host decoding. |
| yolo-segmentation | host | V5, V8, V9, 11 | Object segmentation using YOLOv5, YOLOv8, YOLOv9, and YOLO11 on OAK with on-host decoding. |
DepthAI allows execution of certain Yolo object detection models fully on a device, including decoding. Currently, the supported models are:
- YoloV3 & YoloV3-tiny,
- YoloV4 & YoloV4-tiny,
- YoloV5.
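
For the on-device variants, decoding is handled by the `YoloDetectionNetwork` node, so the host receives ready-made detections. A minimal pipeline sketch, assuming the DepthAI v2 Python API; the blob path is a placeholder for your own converted model, and the anchors/masks shown are the usual YoloV4-tiny values — they must match your model's config:

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)        # must match the model's input size
cam.setInterleaved(False)

yolo = pipeline.create(dai.node.YoloDetectionNetwork)
yolo.setBlobPath("yolov4_tiny.blob")  # placeholder: path to your .blob
yolo.setNumClasses(80)
yolo.setCoordinateSize(4)
yolo.setConfidenceThreshold(0.5)
yolo.setIouThreshold(0.5)
# Anchors and masks come from the model's training config
yolo.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
yolo.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")

cam.preview.link(yolo.input)
yolo.out.link(xout.input)

# with dai.Device(pipeline) as device:
#     q = device.getOutputQueue("detections")
#     detections = q.get().detections  # boxes already decoded on-device
```

See the device-decoding example for a complete, runnable version.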
Non-supported Yolo models usually require on-host decoding. We provide on-device decoding pipeline examples in device-decoding (similar code is used in car-detection); the remaining directories use on-host decoding.
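
For the on-host examples, "decoding" means converting the network's raw grid outputs into boxes on the host. A minimal sketch of the standard anchor-based YOLO box transform for a single grid cell (illustrative only; the actual examples vectorize this with NumPy and apply confidence filtering and NMS afterwards):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def decode_cell(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, grid, input_size):
    """Decode one raw YOLO prediction (tx, ty, tw, th) at grid cell (cx, cy).

    Returns an (x_center, y_center, width, height) box in input-image pixels.
    Anchor sizes are in pixels; `grid` is the feature-map side length.
    """
    stride = input_size / grid        # pixels per grid cell
    x = (sigmoid(tx) + cx) * stride   # cell offset -> pixel center
    y = (sigmoid(ty) + cy) * stride
    w = anchor_w * math.exp(tw)       # anchor scaled by exp of raw size
    h = anchor_h * math.exp(th)
    return x, y, w, h

# A raw prediction of all zeros lands at the cell center, at anchor size:
x, y, w, h = decode_cell(0, 0, 0, 0, cx=6, cy=6,
                         anchor_w=116, anchor_h=90,
                         grid=13, input_size=416)
# → (208.0, 208.0, 116.0, 90.0)
```

Anchor-free models such as YOLOX predict offsets and sizes directly per cell, so their host-side decoding skips the anchor multiplication.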
DepthAI enables you to take advantage of depth information and get the X, Y, and Z coordinates of detected objects. The examples in this directory do not use depth information. If you are interested in using depth information with Yolo detectors, please check our documentation.
Open the directory and follow the instructions.

