I'm using OpenCV 2.4 to do some tracking, and I can get a contour of the shape I want, which is a T.
Input image:
I can use cv2.minAreaRect(my_t_contour) and get the angle of that rect, but that only gives me 0-180 degrees. Since this is a T shape, I want to be able to tell 0-360. I was thinking of:
Split the contour into two rects
Get a line through each rect (e.g. by skeletonizing and then running HoughLinesP)
Determine which line is which, determine their gradient (using the coordinates I get from HoughLinesP) and then determine the direction of the T.
But I'm stuck at step 1: how can I split a contour into two shapes?
Method 1: draw center of contour and center of minAreaRect of contour
import cv2

# r_target is the input BGR frame containing the T
dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY_INV)
cnts, _ = cv2.findContours(dst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    # get minAreaRect around contour and draw its center in red
    rect = cv2.minAreaRect(c)
    cv2.circle(r_target, (int(rect[0][0]), int(rect[0][1])), 7, (0, 0, 255), -1)
    # get moments of contour to get center and draw it in white
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    cv2.circle(r_target, (cX, cY), 7, (255, 255, 255), -1)
The next step would probably be to calculate a simple gradient between the centers to determine the angle.
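A rough sketch of that step, assuming the rect and (cX, cY) values from the loop above belong to the same contour; the centroid of a T is pulled toward the crossbar relative to the minAreaRect center, so the vector between the two points toward the top of the T and atan2 on it gives a full 0-360 heading:

import math

# vector from the minAreaRect centre to the contour centroid
dx = cX - rect[0][0]
dy = cY - rect[0][1]

# atan2 gives the full range; negate dy because image y grows downward
angle = math.degrees(math.atan2(-dy, dx)) % 360
print("T is pointing at {:.1f} degrees".format(angle))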
Method 2: skeletonize the image and get lines using HoughLinesP.
import cv2
import numpy as np
import skimage.morphology

dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY)
dst = 1 - dst / 255
dst = skimage.morphology.skeletonize(dst).astype(np.uint8)

rho = 1
theta = np.pi / 180
threshold = 1
minLineLength = 30
maxLineGap = 15
lines = cv2.HoughLinesP(dst, rho, theta, threshold, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines[0]:
    cv2.line(r_target, (line[0], line[1]), (line[2], line[3]), (0, 255, 0), 1, 8)
But the lines don't come out nicely. This is what the skeleton looks like:
I'm still experimenting with the variables, but is there a specific thought process for using HoughLinesP?
As a variant, you can use PCA: find the first principal component's direction and use it as the searched-for angle. You can check here for an example: http://docs.opencv.org/trunk/d1/dee/tutorial_introduction_to_pca.html
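A minimal sketch of that idea using plain NumPy on the contour points (my_t_contour is assumed to be the contour from the question). Note that PCA alone only gives an axis, so the result is still ambiguous by 180 degrees; you would need an extra cue, such as the centroid offset from Method 1, to resolve the full 0-360 direction:

import numpy as np

pts = my_t_contour.reshape(-1, 2).astype(np.float64)  # contour as N x 2 points
mean = pts.mean(axis=0)
cov = np.cov((pts - mean).T)                          # 2x2 covariance of the points
eigvals, eigvecs = np.linalg.eigh(cov)                # eigh returns eigenvalues in ascending order
major = eigvecs[:, np.argmax(eigvals)]                # first principal component

# axis angle in degrees, 0-180 (the sign of the eigenvector is arbitrary)
axis_angle = np.degrees(np.arctan2(major[1], major[0])) % 180
print("principal axis at {:.1f} degrees".format(axis_angle))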
Related
I want to write a tool for finding the number of angles, curves and straight lines within each bounded object in an image.
All input images will be black on white background and all will represent characters.
As illustrated in the image, for each bounded region, each shape occurrence is noted. It would be preferable to have a threshold for how curvy a curve must be to count as a curve rather than an angle, and the same for straight lines versus angles.
I have used the Hough Line Transform for detecting straight lines on other images, and I thought it might work in combination with something here.
I am open to other libraries than OpenCV - this is just what I have some experience with.
Thanks in advance
EDIT:
So, based on the answer from Markus, I made a program using findContours() with CHAIN_APPROX_SIMPLE.
It produces a somewhat weird result when given a 'k': it correctly identifies some points around the angles, but the 'leg' (the lower diagonal part) has many, many points on it. I am unsure how to go about segmenting this to group the points into straights, angles and curves.
Code:
import cv2
import numpy as np

img = cv2.imread('Helvetica-K.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
edges = cv2.Canny(blurred, 50, 150, apertureSize=3)

ret, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#cv2.drawContours(img, contours, 0, (0,255,0), 1)

# Coordinates of each point on the first contour
for i in range(len(contours[0])):
    print(contours[0][i][0][0])
    print(contours[0][i][0][1])
    cv2.circle(img, (contours[0][i][0][0], contours[0][i][0][1]), 2, (0, 0, 255), -1)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Img example:
You can use findContours with option CHAIN_APPROX_SIMPLE.
A point with an angle less than some threshold is a corner.
A point with an angle more than some threshold is on a straight line and should be removed.
Two adjacent points with a distance of more than some threshold are the ends of a straight line.
Two adjacent points that are identified to be corners are the ends of a straight line.
All other points belong to some curvy detail.
Update:
Here is some code you can start with. It shows how to smoothen the straight lines, how you can merge several corner points into one, and how to calculate distances and angles at each point. There is still some work to be done for you to achieve the required result but I hope it leads in the right direction.
import numpy as np
import numpy.linalg as la
import cv2


def get_angle(p1, p2, p3):
    v1 = np.subtract(p2, p1)
    v2 = np.subtract(p2, p3)
    cos = np.inner(v1, v2) / la.norm(v1) / la.norm(v2)
    rad = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.rad2deg(rad)


def get_angles(p, d):
    n = len(p)
    return [(p[i], get_angle(p[(i-d) % n], p[i], p[(i+d) % n])) for i in range(n)]


def remove_straight(p):
    angles = get_angles(p, 2)                   # approximate angles at points (two steps in path)
    return [p for (p, a) in angles if a < 170]  # remove points with almost straight angles


def max_corner(p):
    angles = get_angles(p, 1)                   # get angles at points
    j = 0
    while j < len(angles):                      # for each point
        k = (j + 1) % len(angles)               # and its successor
        (pj, aj) = angles[j]
        (pk, ak) = angles[k]
        if la.norm(np.subtract(pj, pk)) <= 4:   # if points are close
            if aj > ak:                         # remove point with greater angle
                angles.pop(j)
            else:
                angles.pop(k)
        else:
            j += 1
    return [p for (p, a) in angles]


def main():
    img = cv2.imread('abc.png')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:               # for each contour
        pts = [v[0] for v in c]      # get pts from contour
        pts = remove_straight(pts)   # remove almost straight angles
        pts = max_corner(pts)        # remove nearby points with greater angle
        angles = get_angles(pts, 1)  # get angles at points

        # draw result: red = corner, green = curvy detail
        for (p, a) in angles:
            if a < 120:
                cv2.circle(img, tuple(p), 3, (0, 0, 255), -1)
            else:
                cv2.circle(img, tuple(p), 3, (0, 255, 0), -1)

    cv2.imwrite('out.png', img)
    cv2.destroyAllWindows()


main()
Firstly, I am new to Python and OpenCV, so my understanding is limited; however, I am trying to learn as best I can.
I am currently struggling to locate contours (bullet holes) on a simple ringed target so that I can eventually score each hole. I have managed to solve a similar problem on a different image and I am wondering what I can do to get the same method to work on the new one.
Successful attempt at scoring a target
This is the target that my problem concerns
When I use these HSV values I am presented with only the bullet holes. My limited knowledge tells me that these HSV values should be useful for thresholding, but I cannot seem to work out how to apply them.
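A minimal sketch of what I think the thresholding would look like, with hypothetical HSV bounds as placeholders that would need replacing with the values found above:

import cv2
import numpy as np

image = cv2.imread("target.jpg")            # hypothetical input file
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

lower = np.array([0, 0, 0])                 # placeholder lower HSV bound
upper = np.array([30, 255, 120])            # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)       # white where pixels fall inside the bounds

cv2.imshow("holes mask", mask)
cv2.waitKey(0)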
The method used to locate the contours in the example target is shown below:
import cv2
import numpy as np
import imutils

# image is the loaded example target (BGR)
imgREDUCED = cv2.inRange(image, (60, 60, 60), (150, 150, 150))
kernel = np.ones((10, 10), np.uint8)
opening = cv2.morphologyEx(imgREDUCED, cv2.MORPH_OPEN, kernel)
thresh = cv2.threshold(opening, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
for c in cnts:
    if cv2.contourArea(c) > 1:
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
What steps can I take to locate the contours on this new target image?
All help is appreciated and I will try my best to answer any questions
By finding the bullseye and the outer ring we can calculate the score of each hole based on its distance from the center. Here are the steps I took to try and solve this.
First, I converted to HSV and took a look at the different channels:
Hue
Value
From the hue channel we can clearly see the holes in the target so it's a good candidate for thresholding for those. On the value channel we can clearly see the rings and the center so we'll use that channel to detect those.
Hue Mask (0, 30)
Value Mask (0, 155)
We can use findContours to outline the white parts of the mask. From that outline we can get the center of the contour and the area of the contour. Using this on the hue mask we get the center of each hole and using this on the value mask we can get the biggest ring by looking for the contour with the largest area. With the area of the biggest ring, we can estimate the radius via the circle's area formula.
To find the bullseye I thresholded the value mask again, but using the range (215, 255) to search for high values. This perfectly captures just the center, but you might not always get that lucky with your pictures. Using the findContours function again I get the center and radius of the bullseye.
Now I can score each of the holes. I get the distance from the hole to the center and figure out where on the scoresheet it lands.
Here is the result with the outer ring, the center of each hole, the score of each hole, and the bullseye marked:
Here's the code:
import cv2
import math
import numpy as np

# get center of contour
def centroid(contour):
    M = cv2.moments(contour)
    cx = int(round(M['m10'] / M['m00']))
    cy = int(round(M['m01'] / M['m00']))
    center = (cx, cy)
    return center

# load image
img = cv2.imread("target.png")
img = img[:, :-1, :]  # there's a bit of wall or something on the right

# hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# find the outer ring
v_mask = cv2.inRange(v, 0, 155)

# contours
_, contours, _ = cv2.findContours(v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# find the biggest contour
biggest_cntr = None
biggest_area = 0
marked = img.copy()
for contour in contours:
    area = cv2.contourArea(contour)
    if area > biggest_area:
        biggest_area = area
        biggest_cntr = contour
cv2.drawContours(marked, [biggest_cntr], -1, (0, 255, 0), 3)

# find biggest radius
big_radius = math.sqrt(biggest_area / math.pi)

# find center
center_v_mask = cv2.inRange(v, 215, 255)
_, contours, _ = cv2.findContours(center_v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
center = centroid(contours[0])

# use this to calculate the middle radius
area = cv2.contourArea(contours[0])
little_radius = math.sqrt(area / math.pi)

# draw center
marked = cv2.circle(marked, center, 2, (155, 155, 0), -1)

# mask holes
h_mask = cv2.inRange(h, 0, 30)
h_mask = cv2.medianBlur(h_mask, 11)

# draw contour centers
_, contours, _ = cv2.findContours(h_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
holes = []
for contour in contours:
    c = centroid(contour)
    holes.append(c)
    marked = cv2.circle(marked, c, 2, (0, 0, 155), -1)

# calculate approximate size of each ring
# (with foreknowledge that there are 9 + bullseye)
remaining_radius = big_radius - little_radius
slices = remaining_radius / 9

# calculate scores
scores = []
for hole in holes:
    # get distance from center
    dx = hole[0] - center[0]
    dy = hole[1] - center[1]
    dist = math.sqrt(dx*dx + dy*dy)
    # check score
    dist -= little_radius
    if dist < 0:
        scores.append(10)
    else:
        scores.append(9 - int(dist / slices))

# draw the scores
font = cv2.FONT_HERSHEY_SIMPLEX
for a in range(len(holes)):
    tup = (holes[a][0], holes[a][1])
    marked = cv2.putText(marked, str(scores[a]), tup, font, 1, (0, 0, 155), 2, cv2.LINE_AA)

# show
cv2.imshow("marked", marked)
cv2.waitKey(0)
I'm fairly new to programming and need to write a program that will find the center of 4 discs (circles) that are located in each corner of a square image.
I don't know the exact coordinates of the discs but have a very good approximation of them. How can I go about finding the location of the centers of each of the 4 discs?
The window will most likely draw images using the top left corner as X = 0, Y = 0. Circles tend to be drawn from the centre outwards, so the centre of the circle is most likely the (X, Y) coordinate used to draw the given circle. You can probably find a way to request the circles' coordinates, depending on how they are being drawn and what you are using to draw them. There are a number of ways of drawing to the screen with Python, so it depends on how you are doing it. Can you provide a little more detail?
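Since you already have good approximations of the four centres, one approach (a sketch under that assumption; the file name and coordinates below are hypothetical placeholders) is to crop a small region of interest around each approximate position and refine the centre there with image moments:

import cv2

img = cv2.imread("four_discs.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

approx_centers = [(50, 50), (450, 50), (50, 450), (450, 450)]  # rough guesses (placeholders)
half = 40  # half-size of the search window around each guess

refined = []
for (ax, ay) in approx_centers:
    roi = img[ay - half:ay + half, ax - half:ax + half]
    # assumes bright discs on a dark background; use THRESH_BINARY_INV otherwise
    _, mask = cv2.threshold(roi, 127, 255, cv2.THRESH_BINARY)
    M = cv2.moments(mask, binaryImage=True)
    if M["m00"] > 0:
        # centroid inside the ROI, shifted back to full-image coordinates
        refined.append((ax - half + M["m10"] / M["m00"], ay - half + M["m01"] / M["m00"]))

print(refined)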
You can look for the centroid of shapes using moments in OpenCV:
import cv2

# read image through command line
img = cv2.imread(args["ipimage"])
# or load it from a path
#img = cv2.imread(R"/usr/home/dinges/4c.png")

# convert the image to grayscale
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# convert the grayscale image to binary image
ret, thresh = cv2.threshold(gray_image, 127, 255, 0)

# find contours in the binary image
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # calculate moments for each contour
    M = cv2.moments(c)
    # calculate x,y coordinate of center
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    print('centroid: X:{}, Y:{}'.format(cX, cY))

    cv2.circle(img, (cX, cY), 5, (255, 255, 255), -1)
    cv2.putText(img, "centroid", (cX - 25, cY - 25), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)

# display the image
cv2.imshow("Image", img)
cv2.waitKey(0)
Reference: Find the Center of a Blob (Centroid) using OpenCV
I would like to convert this radar graph into numeric values for each component (dotted lines), using openCV for python. How would I best do this? I've been thinking about detecting the center of the graph as well as the intersections of the dotted lines with the gray area. By doing this I could measure the distance between the center and each of these intersections to determine a value for each component. However this seems rather complex, and I have no idea how to start. Could someone help me out here?
EDIT: The purpose is to write a piece of software that can extract data from 250 of these graphs. (I have better quality scans)
I would do something like this (sorry, it's pseudo-code; if you think the idea is good enough I'll try to write some real code):
1. Find the circle center (maybe using the HoughCircles function).
2. Inverse-threshold to highlight your dark grey area.
3. Call the OpenCV approxPolyDP function to get the polygon representing this area.
4. For each vertex, measure its distance from the center and convert it to your desired scale.
I think it should work.
hough circle tutorial
https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
approx poly tutorial
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
EDIT:
I've had some spare time, so I wrote an initial piece of code to extract the circle, the radius and the poly; hope it helps you.
import cv2 as cv
import numpy as np
import math

img = cv.imread("c:\\temp\\test.jpg", cv.IMREAD_COLOR)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

#thresholds to isolate circle and poly
ret, thres_poly_temp = cv.threshold(gray, 90, 255, cv.THRESH_BINARY_INV)
ret, thres_circle = cv.threshold(gray, 110, 255, cv.THRESH_BINARY_INV)

#cleanup image to detect poly
thres_poly = thres_poly_temp.copy()
thres_poly = cv.GaussianBlur(thres_poly_temp, (3, 3), 0)
thres_poly = cv.medianBlur(thres_poly, 5)

#bitwise_and to keep just the poly, removing the circle
cv.bitwise_and(thres_poly_temp, thres_circle, thres_poly)

kernel = np.ones((3, 3), np.uint8)
thres_poly = cv.morphologyEx(thres_poly, cv.MORPH_CLOSE, kernel)
kernel = np.ones((3, 3), np.uint8)
thres_poly = cv.morphologyEx(thres_poly, cv.MORPH_OPEN, kernel)

#find circle
circle = cv.HoughCircles(thres_circle, 3, 2, 800, minRadius=100, maxRadius=500, param1=80, param2=100)

radius_list = []
if circle is not None:
    circle = np.round(circle[0, :]).astype("int")
    for (x, y, r) in circle:
        cv.circle(gray, (x, y), r, (255, 255, 0), 3)
        cv.circle(gray, (x, y), 3, (255, 255, 0), 3)
        radius_list.append((x + r, y))

#find the radii (one every 18 degrees)
a = 0
while a < 360:
    rad = math.radians(a)
    x2 = int((radius_list[0][0] - x) * math.cos(rad)) - int((radius_list[0][1] - y) * math.sin(rad)) + x
    y2 = int((radius_list[0][0] - x) * math.sin(rad)) + int((radius_list[0][1] - y) * math.cos(rad)) + y
    radius_list.append((x2, y2))
    a = a + 18
    cv.line(gray, (x, y), (x2, y2), (255, 255, 0), 2)

#find poly contour
contours, hierarchy = cv.findContours(thres_poly, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

#extract contour with max area
maxArea = -1
maxAreaContour = contours[0]
for contour in contours:
    area = abs(cv.contourArea(contour))
    if area > maxArea:
        maxArea = area
        maxAreaContour = contour

#approx poly to get contours
epsilon = 0.1 * cv.arcLength(maxAreaContour, True)
approx = cv.approxPolyDP(maxAreaContour, 5, True)
cv.drawContours(gray, [approx], -1, (0, 255, 0), 2)

cv.imshow("1", gray)

#now just iterate all the radii with the contour to find the intersections
#it should be pretty straightforward
output sample
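The last step the code above leaves open is intersecting each radius with the polygon. A rough sketch of how that could look, assuming the center (x, y), radius_list and approx from the code above; each polygon edge is tested against each radius segment with a standard parametric segment intersection, and the hit distance is normalised by the outer radius:

def seg_intersection(p1, p2, p3, p4):
    # returns the intersection point of segments p1-p2 and p3-p4, or None
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if denom == 0:
        return None  # parallel segments
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    return None

center = (x, y)                          # circle centre found above
poly = [tuple(pt[0]) for pt in approx]   # polygon vertices from approxPolyDP
big_r = math.hypot(radius_list[0][0] - x, radius_list[0][1] - y)

values = []
for rim in radius_list:                  # one radius per component
    for i in range(len(poly)):
        hit = seg_intersection(center, rim, poly[i], poly[(i + 1) % len(poly)])
        if hit is not None:
            dist = math.hypot(hit[0] - center[0], hit[1] - center[1])
            values.append(dist / big_r)  # 0..1, rescale to the graph's own axis
            break

print(values)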
EDIT 2: Still missing: aligning the image. As I said, you can use the horizontal and vertical dotted lines to do that (use Hough lines to get them, then determine m and q and do the inverse roto-translation).
I am new here and a bit of a newbie in programming.
I have one question. I have a picture of the Sun as a 16-bit bmp file. The picture looks like a white circle on a black background.
I want to find a circle and identify its center in x,y coordinates.
I have this script
import cv
import numpy as np

orig = cv.LoadImage('sun0016.bmp')
grey_scale = cv.CreateImage(cv.GetSize(orig), 8, 1)
processed = cv.CreateImage(cv.GetSize(orig), 8, 1)

cv.Smooth(orig, orig, cv.CV_GAUSSIAN, 5, 5)
cv.CvtColor(orig, grey_scale, cv.CV_RGB2GRAY)
cv.Erode(grey_scale, processed, None, 10)
cv.Dilate(processed, processed, None, 10)
cv.Canny(processed, processed, 5, 70, 3)
cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 15, 15)

storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)
cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 1, 16.0, 10, 140)

for i in range(0, len(np.asarray(storage))):
    print "circle #%d" % i
    Radius = int(np.asarray(storage)[i][0][2])
    x = int(np.asarray(storage)[i][0][0])
    y = int(np.asarray(storage)[i][0][1])
    center = (x, y)
    print x, y
    cv.Circle(orig, center, 1, cv.CV_RGB(0, 255, 0), 1, 8, 0)
    cv.Circle(orig, center, Radius, cv.CV_RGB(255, 0, 0), 1, 8, 0)
    cv.Circle(processed, center, 1, cv.CV_RGB(0, 0, 0), -1, 8, 0)
    cv.Circle(processed, center, Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

cv.ShowImage("sun0016", orig)
cv.ShowImage("processed", processed)
cv_key = cv.WaitKey(0)
When I run this, it finds the edge of the Sun, i.e. a circle and its center, but very inaccurately.
Do you know how to set the parameters of the HoughCircles function to find the circle precisely?
Thanks
The main problem here is finding a good range for your radius.
You may have a look at your picture and guess the radius.
From the picture you have given, I would guess 180 - 220 would be a good range.
Your code would look like:
cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 1, 16.0, 10, 140, 180, 220)
Just try to find good values for the min and max radius (the last two arguments) and this should work fine.
Here is the solution to my problem:
import numpy as np
import cv2
im = cv2.imread('sun0016.bmp')
height, width, depth = im.shape
print height, width, depth
thresh = 132
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray,(5,5),0)
edges = cv2.Canny(blur,thresh,thresh*2)
contours, hierarchy = cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]
cv2.drawContours(im,contours,-1,(0,255,0),-1)
#centroid_x = M10/M00 and centroid_y = M01/M00
M = cv2.moments(cnt)
x = int(M['m10']/M['m00'])
y = int(M['m01']/M['m00'])
print x,y
print width/2.0,height/2.0
print width/2-x,height/2-y
cv2.circle(im,(x,y),1,(0,0,255),2)
cv2.putText(im,"center of Sun contour", (x,y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255))
cv2.circle(im,(width/2,height/2),1,(255,0,0),2)
cv2.putText(im,"center of image", (width/2,height/2), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0))
cv2.imshow('contour',im)
cv2.waitKey(0)
Thought I would chime in with an alternative solution in case anyone stumbles upon this question in the future.
The following function uses cv2.inRange instead of cv2.Canny, and cv2.minEnclosingCircle instead of cv2.moments. It selects the largest contour found by cv2.findContours by measuring the radius of candidates' minimum enclosing circle. This filtering helps reject false positives from e.g. watermarks or dust, but depending on your requirements you might want to go about this step differently or omit it entirely.
The function returns both the x,y coordinates as well as the radius of the detected disk, a requirement for the project I was working on.
import cv2


def find_disk(img, threshold=10):
    """Finds the center and radius of a single solar disk present in the supplied image.

    Uses cv2.inRange, cv2.findContours and cv2.minEnclosingCircle to determine the centre and
    radius of the solar disk present in the supplied image.

    Args:
        img (numpy.ndarray): greyscale image containing a solar disk against a background that is below `threshold`.
        threshold (int): threshold of min pixel value to consider as part of the solar disk

    Returns:
        tuple: center coordinates in x,y form (int)
        int: radius
    """
    if img is None:
        raise TypeError("img argument is None - check that the path of the loaded image is correct.")
    if len(img.shape) > 2:
        raise TypeError("Expected single channel (grayscale) image.")

    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    mask = cv2.inRange(blurred, threshold, 255)
    img_mod, contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Find and use the biggest contour
    x = y = None
    r = 0
    for cnt in contours:
        (c_x, c_y), c_r = cv2.minEnclosingCircle(cnt)
        # cv2.circle(img, (round(c_x), round(c_y)), round(c_r), (255, 255, 255), 2)
        if c_r > r:
            x = c_x
            y = c_y
            r = c_r

    # print("Number of contours found: {}".format(len(contours)))
    # cv2.imwrite("mask.jpg", mask)
    # cv2.imwrite("circled_contours.jpg", img)

    if x is None:
        raise RuntimeError("No disks detected in the image.")

    return (round(x), round(y)), round(r)


if __name__ == "__main__":
    image = cv2.imread("path/to/your/image.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    center, radius = find_disk(img=gray, threshold=20)

    print("circle x,y: {},{}".format(center[0], center[1]))
    print("circle radius: {}".format(radius))

    # Output the original image with the detected disk superimposed
    cv2.circle(image, center, radius, (0, 0, 255), 1)
    cv2.rectangle(image, (center[0] - 2, center[1] - 2), (center[0] + 2, center[1] + 2), (0, 0, 255), -1)
    cv2.imwrite("disk_superimposed.jpg", image)
I have left in some commented debug statements that may come in handy if you find the need to tinker with this further.
You might want to use a higher threshold if your images include a lot of glare.