I am trying to track a basketball through a short clip using OpenCV. I am using code to help me find the correct upper and lower bounds for the color mask, but the ball is very similar in color to the game clock near the bottom of the video. How can I cut this region off in my object-tracking code, so that the program does not simply track the clock? I am using code from this tutorial: https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
The code where I think this change would be made is in the following block:
# find contours in the mask and initialize the current
# (x, y) center of the ball
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
center = None

# only proceed if at least one contour was found
if len(cnts) > 0:
    # find the largest contour in the mask, then use
    # it to compute the minimum enclosing circle and
    # centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

    # only proceed if the radius meets a minimum size
    if radius > 10:
        # draw the circle and centroid on the frame,
        # then update the list of tracked points
        cv2.circle(frame, (int(x), int(y)), int(radius),
            (0, 255, 255), 2)
        cv2.circle(frame, center, 5, (0, 0, 255), -1)
I know I haven't provided an MWE, but I'm not sure how to do that in this case. I think my question is at least straightforward, if not simple.
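For reference, the kind of fix I'm imagining (a rough sketch; the frame size and clock region are made-up numbers I would tune to my video) is to zero out the clock rows of the mask before findContours ever sees them, so the clock can never become a contour:

```python
import numpy as np

# Stand-in for the tutorial's color mask (in the real code,
# `mask` comes from cv2.inRange on the HSV frame).
mask = np.full((480, 640), 255, dtype=np.uint8)

# Hypothetical clock region: the bottom 80 rows of the frame.
# Zeroing these rows removes the clock from contour detection entirely.
clock_height = 80
mask[-clock_height:, :] = 0

print(mask[0, 0], mask[-1, 0])  # 255 0  (top untouched, bottom zeroed)
```

The same trick works on a rectangular region anywhere in the frame with `mask[y1:y2, x1:x2] = 0`.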
I have some code that takes a video as input and uses background subtraction via cv2 to produce contours above a certain threshold, drawing a bounding box over them. On its own this simply identifies objects/motion in the video. I then track the change in the x and y coordinates of the contours to print on screen which direction the object is moving, and draw a motion trail behind the object on the frame.
However, the code cannot distinguish the different contours as separate objects. When only one object is present, it works great. When it detects more than one object/contour, the motion is all over the place as you would expect.
I have been doing some research and it seems optical flow might be the best solution, but I'm not really sure if it applies to this situation, nor how to integrate it into the code I already have.
Is optical flow the best solution, and how can I implement it into my code? I have read this page, but it doesn't follow the background subtraction or contour finding that my code currently uses. The result I want is objects/contours tracked separately, so I can filter out any moving in a direction I don't want. Below are some small bits from my code to demonstrate my method. I follow these outlines: https://pyimagesearch.com/2015/09/21/opencv-track-object-movement/
Any help appreciated! Thanks!
# example of bg subtraction
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (21, 21), 0)
frameDelta = cv2.absdiff(firstFrame, gray)
(_, thresh) = cv2.threshold(frameDelta, desired_th, 255, cv2.THRESH_BINARY)
thresh = cv2.dilate(thresh, kernel, iterations=2)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)  # cnts, i.e. contours: getting the outline

# motion
c = max(cnts, key=cv2.contourArea)  # find the largest contour (based on its area)
((x, y), radius) = cv2.minEnclosingCircle(c)  # compute the minimum enclosing circle and the centroid of the object
M = cv2.moments(c)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
if radius > 10:  # minimum pixel radius
    cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)  # draw the minimum enclosing circle surrounding the object
    cv2.circle(frame, center, 5, (0, 0, 255), -1)  # draw the centroid
pts.appendleft(center)  # update the list of pts containing the center (x, y)-coordinates of the object
for i in np.arange(1, len(pts)):  # loop over the set of tracked points
    if pts[i - 1] is None or pts[i] is None:
        continue
    if counter >= 5 and i == 10 and pts[i - 10] is not None:  # check if enough points have accumulated in the buffer
        dX = pts[i - 10][0] - pts[i][0]  # compute the difference between the x and y coordinates
        dY = pts[i - 10][1] - pts[i][1]
        (dirX, dirY) = ("", "")  # reinitialise the direction text variables
        if np.abs(dX) > 10:  # ensure significant movement in the x direction
            dirX = "East" if np.sign(dX) == 1 else "West"
        if np.abs(dY) > 10:  # e.g. a 20-pixel difference; make smaller to detect smaller movements, and vice versa
            dirY = "South" if np.sign(dY) == 1 else "North"  # image y is inverted: the top of the frame is pixel 0, the bottom is e.g. 780
        if dirX != "" and dirY != "":  # handles when both directions are non-empty
            direction = "{}-{}".format(dirY, dirX)  # i.e. North-West
        else:  # otherwise, only one direction is non-empty
            direction = dirX if dirX != "" else dirY

    # compute the thickness of the line and draw the connecting lines
    thickness = int(np.sqrt(buffer / float(i + 1)) * 2.5)
    cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
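For context, the kind of separation I'm after might look like this nearest-centroid matching between frames (a rough sketch of what I'm considering, not my working code; max_dist is a guessed threshold):

```python
import math

def match_centroids(prev, curr, max_dist=50):
    """Greedily pair each current centroid with the nearest previous
    centroid within max_dist; unmatched centroids start new tracks."""
    pairs = []
    used = set()
    for c in curr:
        best, best_d = None, max_dist
        for i, p in enumerate(prev):
            if i in used:
                continue
            d = math.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
        pairs.append((c, best))  # best is None => treat as a new object
    return pairs

prev = [(100, 100), (300, 200)]
curr = [(105, 98), (400, 400)]
print(match_centroids(prev, curr))  # [((105, 98), 0), ((400, 400), None)]
```

With each contour assigned a stable track index like this, the dX/dY direction logic above could be run per track instead of on one merged point list.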
I found this code on GitHub; it is a function that finds circles in camera frames. My problem is that I would like to modify it so that I can choose which shape it has to find. I tried to do it for a square, but unfortunately I couldn't.
Is there someone who could possibly help me?
Thanks a lot.
This is the code of the function that finds circles:
def find_circles(frame, mask):
    contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)
    center = None
    if len(contours) > 0:
        c = max(contours, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)  # finds the center point
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
        if radius > 10:
            cv2.circle(frame, (int(x), int(y)), int(radius),
                (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 0), -1)
    return center
Firstly, I am new to Python and OpenCV, so my understanding is limited, but I am trying to learn as best I can.
I am currently struggling to locate contours (bullet holes) on a simple ringed target so that I can eventually score each hole. I managed to solve a similar problem on a different image, and I am wondering what I can do to get the same method to work on the new one.
Successful attempt at scoring a target
This is the target that my problem concerns
When I use these HSV values I am presented with only the bullet holes. My limited knowledge tells me that these HSV values are probably useful for thresholding, but I cannot seem to work out the execution.
The method used to locate the contours in the example target is shown below:
imgREDUCED = cv2.inRange(image, (60, 60, 60), (150, 150, 150))
kernel = np.ones((10, 10), np.uint8)
opening = cv2.morphologyEx(imgREDUCED, cv2.MORPH_OPEN, kernel)
thresh = cv2.threshold(opening, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

for c in cnts:
    if cv2.contourArea(c) > 1:
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
What steps can I take to locate the contours on this new target image?
All help is appreciated and I will try my best to answer any questions
By finding the bullseye and the outer ring we can calculate the score of each hole based on its distance from the center. Here are the steps I took to try and solve this.
First, I converted to HSV and took a look at the different channels:
Hue
Value
From the hue channel we can clearly see the holes in the target so it's a good candidate for thresholding for those. On the value channel we can clearly see the rings and the center so we'll use that channel to detect those.
Hue Mask (0, 30)
Value Mask (0, 155)
We can use findContours to outline the white parts of the mask. From that outline we can get the center of the contour and the area of the contour. Using this on the hue mask we get the center of each hole and using this on the value mask we can get the biggest ring by looking for the contour with the largest area. With the area of the biggest ring, we can estimate the radius via the circle's area formula.
To find the bullseye I thresholded the value mask again, this time using the range (215, 255) to search for high values. This perfectly captures just the center, but you might not always get that lucky with your pictures. Using the findContours function again, I get the center and radius of the bullseye.
Now I can score each of the holes. I get the distance from the hole to the center and figure out where on the scoresheet it should land.
Marked the outer ring, the center of each hole, the score of each hole, and the bullseye:
Here's the code:
import cv2
import math
import numpy as np

# get center of contour
def centroid(contour):
    M = cv2.moments(contour)
    cx = int(round(M['m10'] / M['m00']))
    cy = int(round(M['m01'] / M['m00']))
    center = (cx, cy)
    return center

# load image
img = cv2.imread("target.png")
img = img[:, :-1, :]  # there's a bit of wall or something on the right

# hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# find the outer ring
v_mask = cv2.inRange(v, 0, 155)

# contours (OpenCV 3.x returns three values here)
_, contours, _ = cv2.findContours(v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# find the biggest contour
biggest_cntr = None
biggest_area = 0
marked = img.copy()
for contour in contours:
    area = cv2.contourArea(contour)
    if area > biggest_area:
        biggest_area = area
        biggest_cntr = contour
cv2.drawContours(marked, [biggest_cntr], -1, (0, 255, 0), 3)

# find biggest radius
big_radius = math.sqrt(biggest_area / math.pi)

# find center
center_v_mask = cv2.inRange(v, 215, 255)
_, contours, _ = cv2.findContours(center_v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
center = centroid(contours[0])

# use this to calculate the middle radius
area = cv2.contourArea(contours[0])
little_radius = math.sqrt(area / math.pi)

# draw center
marked = cv2.circle(marked, center, 2, (155, 155, 0), -1)

# mask holes
h_mask = cv2.inRange(h, 0, 30)
h_mask = cv2.medianBlur(h_mask, 11)

# draw contour centers
_, contours, _ = cv2.findContours(h_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
holes = []
for contour in contours:
    c = centroid(contour)
    holes.append(c)
    marked = cv2.circle(marked, c, 2, (0, 0, 155), -1)

# calculate approximate size of each ring
# (with foreknowledge that there are 9 + bullseye)
remaining_radius = big_radius - little_radius
slices = remaining_radius / 9

# calculate scores
scores = []
for hole in holes:
    # get distance from center
    dx = hole[0] - center[0]
    dy = hole[1] - center[1]
    dist = math.sqrt(dx * dx + dy * dy)

    # check score
    dist -= little_radius
    if dist < 0:
        scores.append(10)
    else:
        scores.append(9 - int(dist / slices))

# draw the scores
font = cv2.FONT_HERSHEY_SIMPLEX
for a in range(len(holes)):
    tup = (holes[a][0], holes[a][1])
    marked = cv2.putText(marked, str(scores[a]), tup, font, 1, (0, 0, 155), 2, cv2.LINE_AA)

# show
cv2.imshow("marked", marked)
cv2.waitKey(0)
I want to draw the contour with the maximum area alongside that contour's centroid. To do this I have stored the contour area values in a list and selected the largest area as well as its position in the list.
area_vals = []
for i in range(len(contours)):
    area = cv2.contourArea(contours[i])
    area_vals.append(area)
maxarea = max(area_vals)  # maximum value from the list of contour areas
pos = area_vals.index(maxarea)  # index position of the largest contour in the list
I have then used these to find the contour with the maximum area and its moments.
maincontour = contours[pos] #contour that gave the largest contour area
M = cv2.moments(maincontour) #moments of that contour
cv2.circle(img, (cX, cY), 6, (0, 0, 255), -2) #drawing the centroid circle on the image
image = cv2.drawContours(img, maincontour, -1, (0, 255, 0), 5) #drawing main contour on the image
Am I right in using this method? Would this produce the desired outcome? From my results it looks correct but I wanted to double check the method and logic behind the approach.
I do not see where you use the moments to get the centroid. Please always show a reproducible set of code.
In Python/OpenCV, to get the largest contour, an efficient method is:
# get largest contour
contours = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
To get the centroid,
M = cv2.moments(big_contour)
cx = int(M["m10"] / M["m00"])
cy = int(M["m01"] / M["m00"])
I'm using OpenCV 2.4 to do some tracking, and I can get a contour of the shape I want, which is a T.
Input image:
I can use cv2.minAreaRect(my_t_contour) and get the angle of that rect, but that only gives me 0-180 degrees. Since this is a T shape, I want to be able to tell 0-360. My plan was:
1. Split the contour into two rects.
2. Get a line through each rect (e.g. by skeletonizing and then running HoughLinesP).
3. Determine which line is which, work out their gradients (from the coordinates HoughLinesP returns), and from those determine the direction of the T.
But I'm stuck at step 1: how can I split a contour into two shapes?
Method 1: draw center of contour and center of minAreaRect of contour
dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY_INV)
cnts, _ = cv2.findContours(dst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    # get minAreaRect around contour and draw its center in red
    rect = cv2.minAreaRect(c)
    cv2.circle(r_target, (int(rect[0][0]), int(rect[0][1])), 7, (0, 0, 255), -1)

    # get moments of contour to get center and draw it in white
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    cv2.circle(r_target, (cX, cY), 7, (255, 255, 255), -1)
The next step would probably be to calculate a simple gradient between the two centers to determine the angle.
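That gradient step could be sketched with atan2 (a hedged illustration; the coordinates are made up): the vector from the minAreaRect center to the contour centroid gives a full 0-360 angle, because the T's crossbar pulls the centroid off the rect center in a direction that distinguishes up from down.

```python
import math

def full_angle(rect_center, centroid):
    """Angle of the vector from the minAreaRect center to the contour
    centroid, in degrees measured from 'east' in image coordinates
    (y grows downward), normalized to [0, 360)."""
    dx = centroid[0] - rect_center[0]
    dy = centroid[1] - rect_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360

print(full_angle((100, 100), (120, 100)))  # 0.0   (centroid to the right)
print(full_angle((100, 100), (100, 80)))   # 270.0 (centroid above, since y is inverted)
```

Unlike minAreaRect's angle, this does not collapse opposite orientations of the T onto the same value.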
Method 2: skeletonize the image and get lines using HoughLinesP.
dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY)
dst = 1 - dst / 255
dst = skimage.morphology.skeletonize(dst).astype(np.uint8)

rho = 1
theta = np.pi / 180
threshold = 1
minLineLength = 30
maxLineGap = 15
lines = cv2.HoughLinesP(dst, rho, theta, threshold, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines[0]:
    cv2.line(r_target, (line[0], line[1]), (line[2], line[3]), (0, 255, 0), 1, 8)
But the lines don't come out nicely. This is what the skeleton looks like:
I'm still experimenting with the variables, but is there a specific thought process around using HoughLinesP?
As a variant you can use PCA: find the first principal component's direction and use it as the searched angle. You can check here for an example: http://docs.opencv.org/trunk/d1/dee/tutorial_introduction_to_pca.html
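A minimal sketch of that idea, implemented here with NumPy's eigendecomposition rather than the tutorial's cv2.PCACompute (same math; the synthetic point cloud is an assumption standing in for your contour points):

```python
import numpy as np

def pca_angle(points):
    """Angle in degrees of the first principal component of a set of
    2D points, i.e. the direction of greatest variance of the shape."""
    pts = np.asarray(points, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # axis of most variance
    # An axis has no sign, so the result is only meaningful modulo 180.
    return np.degrees(np.arctan2(major[1], major[0])) % 180

# Synthetic elongated blob stretched along the x axis:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(0, 10, 500), rng.normal(0, 1, 500)])
print(round(pca_angle(pts), 1))  # close to 0 (or equivalently 180)
```

Note the axis is only defined modulo 180 degrees, so for the full 0-360 orientation of the T you would still combine it with an asymmetry cue such as the centroid offset described above.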