Support for YOLOv7 ONNX (not simplified) #22290
Conversation
Kindly fix this issue as soon as possible.
All is ready for review. Let me test the other linked issues as well.
zihaomu
left a comment
Thanks for your contribution!
Please check the comments below; the rest looks fine to me.
rogday
left a comment
Thank you for your contribution! Looks good, I just have a few questions.
modules/dnn/test/test_model.cpp
Outdated
```cpp
Rect2d(109.99532f, 246.13585f, 259.82266f, 600.7827f),
Rect2d(386.13947f, 83.04132f, 575.3909f, 189.83206f)};

Mat img = imread(imgPath);
```
Can testDetectModel be used here? (instead of some/all of the code below)
I just recalled a reason why I did not use testDetectModel. For YOLOv7, the final confidence for a box is calculated by multiplying the class confidence and the object confidence in line 855 (conf *= obj_conf;). I don't see this in testDetectModel.detect(), so I did not use testDetectModel.
I will try to manually upload the model to each CI server tomorrow.
modules/dnn/test/test_model.cpp
Outdated
```cpp
// filter out non objects
float obj_conf = preds.row(i).at<float>(4);
if (obj_conf < conf_threshold)
    continue;

// get class id and conf
Mat scores = preds.row(i).colRange(5, preds.cols);
double conf;
Point maxLoc;
minMaxLoc(scores, 0, &conf, 0, &maxLoc);
conf *= obj_conf;
if (conf < conf_threshold)
    continue;
```
To my understanding, recent object detection models (e.g. YOLOX in opencv_zoo) are trained to predict the IoU between anchors and ground-truth boxes (see the related lines in the YOLOv7 repo). In this case, the final confidence should be calculated by multiplying the class confidence and the predicted IoU (which is named obj_conf in my code).
In short, models without such a training scheme should not do this multiplication. So I do not think it makes sense to do this in the region/detection_output layer, considering compatibility with old object detection models.
By the way, we should consider supporting next-gen NMS algorithms (CIoU, DIoU, ...), since they are quite popular among recent object detection models.
Done. All GitHub Actions tests have passed.
Update: The model works fine with this patch. The following is the old comment.
Hello @alalek, could you kindly update the dnn model in default? A link to the model is attached to the first comment.
There is some problem with the script: for Google documents we should use …
Update: the proper way is using …
Update2: the file was downloaded manually.
There are 10 commits, and some of them are not needed at all in the git history.
So please squash the commits into one (1 PR - 1 commit rule).
6c30664 to 2a88597
@alalek Thank you for the feedback and suggestions. The commits are squashed into one and the model is now downloaded with GDrive() in opencv_extra.
Thank you for the information! I thought I was using GDrive() the wrong way and had been testing it until now.
There are test accuracy check failures on CI (i5-6600 through KVM):
@alalek I cannot reproduce this issue on my side, and we do not have these failures in the GHA checks. I looked into the error message; the problem was that the differences between refScores and testScores (0.000111818, 0.00028795, 0.00987768) exceed the threshold (1e-5). These differences break the condition in line 124, so all IoU calculations are skipped, leading to all boxes being reported as mismatched: opencv/modules/dnn/test/test_common.impl.hpp, lines 121 to 136 in 3456d28. I even tried manually updating the base of my branch and re-compiling everything, but the check still passed for me and the issue could not be reproduced.
It makes sense to add a performance test too.
2a88597 to e03afb0
It is indeed a good place for the test, but I think we keep it in …
Hello @asenyaev, I observed that on Windows10-x64, the model that I manually uploaded to $dnn_models/dnn/onnx/models/yolov7-not-simplified.onnx gets deleted from time to time. This issue did not happen on other hosts. Is it because of the scheduled update script or something like that?
Adding a performance test doesn't mean dropping the accuracy test.
Hello @fengyuentau! I checked the logs in different workflows and I have to say it is strange behavior: the model was copied in the latest pipeline, and the DNN models weren't updated that day. Moreover, when the models are updated, self-downloaded ones are not deleted. I have seen your model right there since September 6.
Thank you @asenyaev. I guess I should just store the model somewhere other than Google Drive, somewhere our CI servers can download from, to avoid this kind of issue.
@fengyuentau Please squash the commits.
2f4defa to 4aef9b1
Fixes #22215 .
Test data: opencv/opencv_extra#998
Link to the model: https://drive.google.com/u/0/uc?id=1rm3mIqjJNu0xPTCjMKnXccspazV1B2zv&export=download
Things that have to be done for the support:
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.