Implement v2 Camera Focus block with Tenengrad measure and improved visualization #1857
Conversation
⚡️ Codeflash found optimizations for this PR 📄 56% (0.56x) speedup for inference/core/workflows/core_steps/classical_cv/camera_focus/v2.py
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
This PR is now faster! 🚀 Shantanu Bala accepted my code suggestion above.
I closed this PR - one of the suggestions was good (I applied it), but some of the other suggestions didn't make sense (e.g. a module-level cache for the values wouldn't improve performance, since the images running through the pipeline are different each time).
Description
This PR implements a new version of the camera focus block to provide an improved focus measure across a wider range of images and use cases, including helping to adjust exposure and aperture (focus metrics are also affected by under- and over-exposed images).
This PR contains a V2 workflow block that computes a Tenengrad (Sobel-based) focus measure and improves the focus visualization; a sketch of the measure is shown below. Since this is a fairly significant change in behavior, it ships as a V2 block with a new implementation rather than a modification of V1.
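For reference, here is a minimal sketch of a Tenengrad-style focus measure built on OpenCV's Sobel operator; the exact kernel size, normalization, and any thresholding used by the v2 block may differ from this illustration.

```python
import cv2
import numpy as np


def tenengrad_focus_measure(image: np.ndarray, ksize: int = 3) -> float:
    """Tenengrad focus measure: mean squared Sobel gradient magnitude.

    Higher values indicate sharper (better focused) content. The kernel size
    and the use of the mean (rather than a thresholded sum) are illustrative
    choices and may not match the v2 block exactly.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)
    return float(np.mean(gx ** 2 + gy ** 2))
```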
Type of change
New feature (non-breaking change which adds functionality)
How has this change been tested, please provide a testcase or example of how you tested the change?
Tested this locally using a USB webcam below 👇
TestCameraFocusBlock.mp4
Tested using bounding box and label visualizations containing the focus measure of the boxes:
TestFocusBoxVis.mp4
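To illustrate that second test, here is a rough stand-in for the per-box annotation using plain OpenCV; the real PR uses the bounding box and label visualization workflow blocks, and `tenengrad_focus_measure` refers to the hypothetical helper sketched earlier.

```python
import cv2
import numpy as np


def draw_focus_for_boxes(frame: np.ndarray, boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Draw each box and annotate it with the focus measure of its crop.

    `boxes` holds (x1, y1, x2, y2) pixel coordinates. This is only a stand-in
    for the visualization blocks used in the PR, and `tenengrad_focus_measure`
    is the hypothetical helper defined in the earlier sketch.
    """
    for x1, y1, x2, y2 in boxes:
        crop = frame[y1:y2, x1:x2]
        if crop.size == 0:
            continue
        focus = tenengrad_focus_measure(crop)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(
            frame,
            f"focus: {focus:.1f}",
            (x1, max(y1 - 8, 12)),
            cv2.FONT_HERSHEY_SIMPLEX,
            0.5,
            (0, 255, 0),
            1,
        )
    return frame
```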
Any specific deployment considerations
Since this is a classical CV block that uses a standard OpenCV Sobel operator on CPU, it can be deployed anywhere inference can run.
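As a usage sketch only: a hypothetical workflow specification referencing the new block. The block type identifier `roboflow_core/camera_focus@v2` and the output field names are assumptions modeled on existing block naming conventions, not confirmed by this PR.

```python
# Hypothetical workflow definition - the block type identifier and output
# field names below are assumptions; consult the generated block docs for
# the real names.
WORKFLOW_DEFINITION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/camera_focus@v2",  # assumed identifier
            "name": "focus",
            "image": "$inputs.image",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "focus_measure",  # assumed output field name
            "selector": "$steps.focus.focus_measure",
        }
    ],
}
```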
Docs
Currently, all documentation for this block is generated from code - I will be writing a follow-up blog post as part of Shipmas to update the information from this previous post: https://blog.roboflow.com/computer-vision-camera-focus/