
Tracking the blob of light from a flashlight can be useful. It certainly was for my Google Science Fair project, and it may be useful for projects of your own too. So without further ado, here’s the flashlight blob tracker.
Prerequisites
You should be able to understand the code in my previous post, and you should also have a strong foundation in Python.
Create new thresholded frame
So far, we’ve been using the cv2.cvtColor() function to convert from the BGR colorspace to the HSV colorspace. The HSV colorspace is useful for thresholding specific colors or color ranges, but it isn’t well suited to thresholding on brightness alone. We’ll need a different colorspace for this.
#import libs
import cv2
import numpy as np

#begin streaming from the default camera
cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    #convert frame to monochrome and blur it to suppress noise
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (9, 9), 0)
    #identify the extreme pixel intensities and their locations
    (minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(blur)
    #threshold the blurred frame just below the brightest intensity
    hi, threshold = cv2.threshold(blur, maxVal - 20, 255, cv2.THRESH_BINARY)
    thr = threshold.copy()
    #resize a copy for ease of viewing (cv2.resize returns a new image)
    thr = cv2.resize(thr, (300, 300))
cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY). The unfamiliar parameter, cv2.COLOR_BGR2GRAY, converts a frame from the BGR colorspace to grayscale (monochrome). The reason this is done is intuitive: in a black-and-white image, the brightest area generally also appears to be the whitest area. (In a photo of the sky, for instance, the sun is both the whitest and the brightest region.)
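OpenCV’s BGR-to-grayscale conversion is just a weighted sum of the three channels, using the ITU-R BT.601 luma weights. Here is a pure-Python sketch of the per-pixel arithmetic (cv2 does this over the whole array at once, with its own rounding):

```python
#BT.601 luma weights used by cv2.COLOR_BGR2GRAY: Y = 0.299 R + 0.587 G + 0.114 B
def bgr_to_gray(pixel):
    """Convert one BGR pixel to a grayscale intensity."""
    b, g, r = pixel
    return round(0.114 * b + 0.587 * g + 0.299 * r)

#a pure white pixel stays at maximum intensity
print(bgr_to_gray((255, 255, 255)))  # 255
#a dark bluish pixel maps to a low gray value
print(bgr_to_gray((100, 20, 20)))
```

Note how green contributes the most weight: the conversion models perceived brightness, not a plain average of the channels.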

cv2.minMaxLoc() is also fairly self-explanatory. It searches the frame and returns the intensities of the brightest and darkest pixels, along with their respective positions. The brightest intensity is then used to set the threshold in cv2.threshold(): every pixel within 20 intensity levels of the maximum becomes white, everything else black. The thresholded frame is called threshold; thr is a resized copy kept for display.
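To make the semantics concrete, here is a pure-Python sketch of what cv2.minMaxLoc() returns for a single-channel image, with a nested list standing in for the NumPy array (locations are (x, y) tuples, matching OpenCV’s convention):

```python
def min_max_loc(img):
    """Return (minVal, maxVal, minLoc, maxLoc) for a 2-D list of intensities."""
    min_val = max_val = img[0][0]
    min_loc = max_loc = (0, 0)
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v < min_val:
                min_val, min_loc = v, (x, y)
            if v > max_val:
                max_val, max_loc = v, (x, y)
    return min_val, max_val, min_loc, max_loc

frame_like = [[10, 20, 30],
              [40, 250, 60],
              [5, 80, 90]]
print(min_max_loc(frame_like))  # (5, 250, (0, 2), (1, 1))
```

The brightest pixel (250) sits at column 1, row 1, so maxLoc is (1, 1) — exactly the value the tracker uses to pick its threshold.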
Identify light blob in thresholded frame
    #find contours in the thresholded frame
    edged = cv2.Canny(threshold, 50, 150)
    lightcontours, hierarchy = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    #attempt to find the circle created by the flashlight's illumination on the wall
    circles = cv2.HoughCircles(threshold, cv2.HOUGH_GRADIENT, 1.0, 20,
                               param1=10,
                               param2=15,
                               minRadius=20,
                               maxRadius=100)
We use the cv2.HoughCircles() function because a blob of light cast by a flashlight on a wall is, intuitively, roughly circular. NOTE: you must experiment with the HoughCircles parameters until false detections are minimized.
Track the light blob
All that’s left is to make sure the detected blob matches what we’re tracking and then to draw a marker around it.
    #check that contours were found and at least one circle was detected
    if len(lightcontours) > 0 and circles is not None:
        #find the maximum contour; this is assumed to be the light beam
        maxcontour = max(lightcontours, key=cv2.contourArea)
        #avoid random spots of brightness by making sure the contour is reasonably sized
        if cv2.contourArea(maxcontour) > 2000:
            (x, final_y), radius = cv2.minEnclosingCircle(maxcontour)
            cv2.circle(frame, (int(x), int(final_y)), int(radius), (0, 255, 0), 4)
            cv2.rectangle(frame, (int(x) - 5, int(final_y) - 5), (int(x) + 5, int(final_y) + 5), (0, 128, 255), -1)
    #display frames and exit on 'q'
    cv2.imshow('light', thr)
    cv2.imshow('frame', frame)
    key = cv2.waitKey(5) & 0xFF
    if key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
The if len(lightcontours) > 0 and circles is not None check avoids an error that would occur if no contours or circular light blobs are detected; when nothing is found, the loop simply moves on to the next iteration.
The max(lightcontours, key=cv2.contourArea) call assumes the light blob is the largest circular blob in the entire frame, so maxcontour is the blob itself.
The cv2.contourArea(maxcontour) > 2000 check makes sure tiny blobs or random bright pixels caused by glare or other environmental interference are not detected; the area must exceed a certain value to be considered legitimate.
The remaining lines just draw the trackers and display the frames.
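The size check can be made concrete: cv2.contourArea() computes the polygon area of a contour via Green’s theorem, which for a simple polygon is equivalent to the shoelace formula. Here is a pure-Python sketch of the same filter, with contour points simplified to (x, y) tuples:

```python
def contour_area(points):
    """Shoelace formula: area of a simple polygon given its vertices in order."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

#a 100x100 square (area 10000) passes the > 2000 filter, like a real beam blob
beam_like = [(0, 0), (100, 0), (100, 100), (0, 100)]
#a 10x10 square (area 100) is rejected as a stray bright speck
speck = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area(beam_like) > 2000)  # True
print(contour_area(speck) > 2000)      # False
```

The 2000-pixel cutoff is arbitrary; scale it with your camera resolution and how far the flashlight sits from the wall.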

