Finding distinct centers of 2 objects - Python

I am trying to find centers of 2 squares in the same image which looks as follows:
I am able to detect the lines that make up the square. My output looks as follows:
As documented here, to find the center of a polygon I used moments. Here is what I did.
import cv2
import numpy as np
img = cv2.imread('images/sq.png')               # read in colour so coloured overlays can be drawn later
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # grayscale copy for thresholding
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
ret,thresh = cv2.threshold(blur_gray,100,255,0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(thresh, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 3 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),2)
        print("x1 {} y1 {} x2 {} y2 {}".format(x1,y1,x2,y2))
lines_edges = cv2.addWeighted(img, 0.5, line_image, 1, 0)
line_image_gray = cv2.cvtColor(line_image, cv2.COLOR_RGB2GRAY)
M = cv2.moments(line_image_gray)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
cv2.circle(lines_edges, (cx, cy), 5, (0, 0, 255), 1)
cv2.imshow("res", lines_edges)
cv2.imshow("line_image", line_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
But this finds the center between 2 detected squares. How could I find the centers of each square while only using Hough methods?

Given that you have a requirement to use the Hough transform, I suggest you prepare the image better for it. The Canny edge detector will detect the inner and outer edges of the black line here, leading to two pairs of lines detected by Hough.
Instead, follow a procedure like this:
Find all black (or nearly-black) pixels. For example pixels where all three RGB components are below 50. This will return the squares by themselves.
Apply a morphological thinning (or a skeleton) to turn this into a 1-pixel thick outline of the squares.
Apply the Hough transform on the result, and detect line segments.
Proper pre-processing makes the Hough transform easier to set up, as there will be a larger range of parameters that yields the correct results.
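A rough sketch of those three steps might look like this (the threshold and Hough parameters are assumptions; cv2.ximgproc.thinning requires the opencv-contrib-python package, and a scikit-image skeletonization would work as well):
import cv2
import numpy as np

img = cv2.imread('images/sq.png')  # path taken from the question

# 1. Mask of nearly-black pixels (all three channels below 50)
black_mask = cv2.inRange(img, (0, 0, 0), (50, 50, 50))

# 2. Thin the mask down to a 1-pixel-wide outline of the squares
skeleton = cv2.ximgproc.thinning(black_mask)

# 3. Hough transform on the thinned outline
lines = cv2.HoughLinesP(skeleton, 1, np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=10)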
Next, find segments that start or end at the same pixel, with a little bit of tolerance (i.e. start or end points are within a few pixels of each other), to determine which of the lines belong together in the same shape.

You could use this method combined with the following code to find which lines are part of the same square:
How can I check if two segments intersect?
Here, 'lines' is a list of the recognized lines, and intersects(line1, line2) is a function implementing the approach from the link above:
squares = [[lines[0]]]
for line1 in lines[1:]:
    placed = False
    for square in squares:
        if any(intersects(line1, line2) for line2 in square):
            square.append(line1)
            placed = True
            break
    if not placed:
        squares.append([line1])
This gives you 'squares', each containing the lines that are part of it. You could then apply the moments function to each square individually.
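A minimal sketch of intersects(), based on the orientation test described in the linked answer (it ignores collinear edge cases; each line is assumed to be a flat (x1, y1, x2, y2) tuple as unpacked in the question's drawing loop):
def intersects(line1, line2):
    # Returns True if segments (x1, y1, x2, y2) cross each other
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    p1, p2 = (line1[0], line1[1]), (line1[2], line1[3])
    p3, p4 = (line2[0], line2[1]), (line2[2], line2[3])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)) and (ccw(p1, p2, p3) != ccw(p1, p2, p4))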

Related

How do I use OpenCV to detect exclusively almost straight edges?

I'm trying to detect straight edges in a basketball card, and what I have so far does a good job of detecting all edges. However, I would like this piece of code to detect exclusively straight edges (the outline of the card).
import cv2
import numpy as np
import imutils
img = cv2.imread('edgedetection/cardgiannis.jpeg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5)
# Draw the lines on the image
lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
cv2.imshow('lines', lines_edges)
cv2.waitKey()
Here is what I think.
If you can somehow get the highest and lowest coordinate values among the pixels that form the lines, you can use those pixels to form a rectangle that consists of the straight edges.
Using the pixels to (roughly) form a rectangle could be the solution!
To find the pixels in a line, you can take a look here, here and here.
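As a rough sketch of that idea (reusing 'lines' and 'line_image' from the code in the question), the extreme endpoint coordinates can be collected and used to draw an approximate card rectangle:
# Gather all segment endpoints detected by HoughLinesP
pts = lines.reshape(-1, 4)
xs = np.concatenate([pts[:, 0], pts[:, 2]])
ys = np.concatenate([pts[:, 1], pts[:, 3]])

# The highest and lowest coordinates roughly define the card outline
top_left = (int(xs.min()), int(ys.min()))
bottom_right = (int(xs.max()), int(ys.max()))
cv2.rectangle(line_image, top_left, bottom_right, (0, 255, 0), 3)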
You may use cv2.approxPolyDP to extract dominant points from the edges and keep only the edges that have 2 dominant points.
PS: apply approxPolyDP to the edges after collecting them with cv2.findContours.
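A minimal sketch of that suggestion (the epsilon factor is an assumption; 'edges' and 'line_image' come from the code in the question):
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # Approximate the contour; the epsilon factor controls how aggressively
    # points are discarded (2% of the arc length is an assumption)
    epsilon = 0.02 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    if len(approx) == 2:  # contour reduced to two dominant points: a straight edge
        p1, p2 = approx[0][0], approx[1][0]
        cv2.line(line_image, (int(p1[0]), int(p1[1])), (int(p2[0]), int(p2[1])), (0, 255, 0), 3)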

Finding the diameter and area of overlapping ellipses (OpenCV, Python)

This is my first question on Stack Overflow. I'm a little excited, so forgive me if I make a mistake. I have an image of ellipses, some overlapping and some not, drawn randomly in Paint. I'm sharing the image I'm working on and my code. I am not a professional in the OpenCV module; I wrote my code as a result of research, inspired by various sources.
The purpose of my code is to detect the randomly drawn ellipses, overlapping or not, using the cv2.fitEllipse method, and then to find the major axis, minor axis and area of each detected ellipse.
The problem with my code is this: for overlapping ellipses, 2 ellipses should be fitted, but about 6-7 ellipses are fitted instead, and I cannot obtain the values I want to calculate.
I'm open to your help, thank you in advance.
Example image:
import cv2
import numpy as np
import random as rng
import math
img = cv2.imread('overlapping_ellipses.png', 1)
imge= cv2.cvtColor(img,cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(imge, cv2.COLOR_BGR2GRAY)
blur = cv2.blur(gray, (2,2), 3)
edged = cv2.Canny(blur, 50, 100)
kernel= np.ones((2,2))
edged1 = cv2.dilate(edged, kernel, iterations=2)
edged2 = cv2.erode(edged1, kernel, iterations=2)
def thresh_callback(val):
    threshold = val
    canny_output = cv2.Canny(edged2, threshold, threshold * 4)
    contours, _ = cv2.findContours(canny_output, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    minRect = [None]*len(contours)
    minEllipse = [None]*len(contours)
    for i, c in enumerate(contours):
        minRect[i] = cv2.minAreaRect(c)
        if c.shape[0] > 5:
            minEllipse[i] = cv2.fitEllipse(c)
            (x1,y1),(d1,d2),angle = minEllipse[i]
            print('\nX1: ', round(x1,4), '\nY1: ', round(y1,4), '\nD1:',round(d1,4), '\nD2',round(d2,4), '\nAngle:', round(angle,4))
            long= x1-d2
            small= y1-d1
            major= long/2
            minor= small/2
            pixel= 37.795275591
            major1= major/pixel
            minor1= minor/pixel
            print('--------------------------------')
            print('Major axis is: ', abs(round(major1,4)), 'cm')
            print('Minor axis is: ', abs(round(minor1,4)), 'cm')
            print('--------------------------------')
    drawing = np.zeros((canny_output.shape[1], canny_output.shape[1], 3), dtype=np.uint8)
    for i, c in enumerate(contours):
        color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
        cv2.drawContours(drawing, contours, i, color)
        if c.shape[0] > 5:
            cv2.ellipse(drawing, minEllipse[i], color, 1)
    cv2.imshow('Fitting Ellips', drawing)
source_window = 'Source'
cv2.namedWindow(source_window)
cv2.imshow(source_window, img)
max_thresh = 255
thresh = 100
cv2.createTrackbar('Canny Thresh:', source_window, thresh, max_thresh, thresh_callback)
thresh_callback(thresh)
cv2.waitKey()
Step 1: Identify and separate the blobs in the input image.
Since we don't care about colour information here, we can directly load the image as grayscale.
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
The input image contains black ellipses on white background.
We only need the external contours of the blobs, and cv2.findContours expects white blobs on black background.
Therefore we need to invert the image. At the same time we need a binary image. We can use cv2.threshold to accomplish both tasks.
Once we detect the blob contours, we can collect some useful information for each blob into a simple map-based data structure.
def detect_blobs(image):
    _,img_binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for i, contour in enumerate(contours):
        orig_x, orig_y, width, height = cv2.boundingRect(contour)
        roi_image = image[orig_y:orig_y+height,orig_x:orig_x+width]
        blobs.append({
            "i" : i
            , "contour" : contour
            , "origin" : (orig_x, orig_y)
            , "size" : (width, height)
            , "roi_image" : roi_image
        })
    return blobs
Step 2: Process each blob
First we need to determine whether the blob is a single ellipse, or whether it is a pair of intersecting ellipses.
One way to do this is by looking for convexity defects.
Since the coordinates of our contour are represented by integers, even the single-ellipse scenario will exhibit some convexity defects.
However, their magnitude (the distance between the furthest point on the contour from the enclosing convex hull segment) will be very small, generally below 1 pixel.
On the other hand, the contour of a pair of intersecting ellipses will have large convexity defects, one for each of the four points where the curves intersect.
This distinction can be seen on the following two images (contour is blue, convex hull red, identified intersection points/locations of large convexity defects are orange circles):
Single ellipse
Two intersecting ellipses
We therefore filter out any small convexity defects, and note the locations of the large ones. Now we're left with 3 possible scenarios.
Scenario A: No intersection points detected
Only small convexity defects were identified, which means this is very likely a single ellipse. We simply fit an ellipse to the contour and move on.
Scenario B: Exactly 4 intersection points detected
In this case we have 2 intersecting ellipses. We use the intersection points to split the contour into 4 segments, one for each "lobe" of the blob. Each of the segments should include the two intersection points that delimit it.
In the following picture, the segments are shown in green, yellow, cyan and magenta, while the intersection points are orange circles:
Now, we can combine the pairs of segments that lie opposite each other (i.e. green+cyan and yellow+magenta) to get two lists of points, one for each ellipse. Again, we simply fit an ellipse to each list of points.
Scenario C: Some other number of intersection points detected
This is considered an invalid situation.
def process_blob(blob):
    MAJOR_DEFECT_THRESHOLD = 2.0
    contour = blob["contour"]
    blob["hull"] = cv2.convexHull(contour)
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    intersections = []
    for i,defect in enumerate(np.squeeze(defects, 1)):
        _, _, far_idx, far_dist = defect
        real_far_dist = far_dist / 256.0
        if real_far_dist >= MAJOR_DEFECT_THRESHOLD:
            intersections.append(far_idx)
    if len(intersections) == 0:
        print("One ellipse")
        blob["ellipses"] = [cv2.fitEllipse(contour)]
    elif len(intersections) == 4:
        print("Two ellipses")
        blob["segments"] = [
            contour[intersections[0]:intersections[1]+1]
            , contour[intersections[1]:intersections[2]+1]
            , contour[intersections[2]:intersections[3]+1]
            , np.vstack([contour[intersections[3]:],contour[:intersections[0]+1]])
        ]
        split_contours = [
            np.vstack([blob["segments"][0], blob["segments"][2]])
            , np.vstack([blob["segments"][1], blob["segments"][3]])
        ]
        blob["ellipses"] = [cv2.fitEllipse(c) for c in split_contours]
    else:
        print("Invalid scenario")
        blob["ellipses"] = []
    return blob["ellipses"]
At this point, it's trivial to calculate the parameters you need -- I'll leave this as an exercise to the reader.
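For instance, a minimal sketch (the pixels-per-centimetre constant mirrors the one used in the question and is an assumption):
import math

def ellipse_parameters(ellipse, px_per_cm=37.795275591):
    # ellipse is ((cx, cy), (d1, d2), angle) as returned by cv2.fitEllipse,
    # where d1 and d2 are the full axis lengths in pixels
    (cx, cy), (d1, d2), angle = ellipse
    major = max(d1, d2) / 2.0  # semi-major axis in pixels
    minor = min(d1, d2) / 2.0  # semi-minor axis in pixels
    area = math.pi * major * minor
    return major / px_per_cm, minor / px_per_cm, area / px_per_cm ** 2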
As a bonus, here's some simple visualization for debugging purposes:
def visualize_blob(blob):
    PADDING = 20
    orig_x, orig_y = blob["origin"]
    offset = (orig_x - PADDING, orig_y - PADDING)
    input_img = cv2.copyMakeBorder(blob["roi_image"]
        , PADDING, PADDING, PADDING, PADDING
        , cv2.BORDER_CONSTANT, None, 255)
    adjusted_img = cv2.add(input_img, 127) - 63
    output_img_ch = cv2.cvtColor(adjusted_img, cv2.COLOR_GRAY2BGR)
    output_img_seg = output_img_ch.copy()
    output_img_el = output_img_ch.copy()
    cv2.drawContours(output_img_ch, [blob["hull"] - offset], 0, (127,127,255), 4)
    cv2.drawContours(output_img_ch, [blob["contour"] - offset], 0, (255,127,127), 2)
    SEGMENT_COLORS = [(0,255,0),(0,255,255),(255,255,0),(255,0,255)]
    if "segments" in blob:
        for i in range(4):
            cv2.polylines(output_img_seg, [blob["segments"][i] - offset], False, SEGMENT_COLORS[i], 4)
        for i in range(4):
            center = (blob["segments"][i] - offset)[0][0]
            cv2.circle(output_img_ch, center, 4, (0,191,255), -1)
            cv2.circle(output_img_seg, center, 4, (0,191,255), -1)
    for ellipse in blob["ellipses"]:
        offset_ellipse = ((ellipse[0][0] - offset[0], ellipse[0][1] - offset[1]), ellipse[1], ellipse[2])
        cv2.ellipse(output_img_el, offset_ellipse, (0,0,255), 2)
    cv2.imshow('', np.hstack([output_img_ch,output_img_seg, output_img_el]))
    cv2.imwrite('output_%d_ch.png' % blob["i"], output_img_ch)
    cv2.imwrite('output_%d_seg.png' % blob["i"], output_img_seg)
    cv2.imwrite('output_%d_el.png' % blob["i"], output_img_el)
    cv2.waitKey()
Pulling it all together:
import cv2
import numpy as np
## INSERT THE FUNCTIONS LISTED ABOVE IN THE QUESTION ##
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
blobs = detect_blobs(image)
print("Found %d blob(s)." % len(blobs))
for blob in blobs:
    process_blob(blob)
    visualize_blob(blob)

How to connect closest keypoints (blobs) with a line?

I'm trying to draw a (straight) line passing through black spots (on a binary threshold image). I have tried without success and now think that using the keypoints from the blob detector is a good option.
How can you connect the keypoints from detected blobs in a binary threshold image? Maybe by connecting the closest blobs together with a line? And maybe later on by removing the lines that are below a minimum length?
Image with blobs detected:
Expected result:
That might be a good use case for the plain Hough transform, i.e. OpenCV's HoughLines method.
Here's what my approach does (I used a cropped, grayscaled version of your first image):
Inverse binarize the input image to have white points on black background. (The less white, the faster the Hough transform.)
First Hough transform with high accumulator threshold to obtain the main angle of the resulting lines by finding the median.
Second Hough transform with fixed main angle, but lower accumulator threshold to get also the shorter lines (bottom right corner).
Group neighbouring lines.
First, let's have a look at the output. That's after the first Hough transform:
You can see, most lines have the "right" angle, but not all. After the second Hough transform with the fixed angle, but lower accumulator threshold, we get (output not reflected in below code):
All desired lines are detected, but there are sets of neighbouring lines. After grouping them, the final output would look like this:
And, here comes the whole code:
import cv2
import numpy as np
# Draw lines using rho and theta values
def draw_lines(image, rhos, thetas):
    for i in np.arange(thetas.shape[0]):
        a = np.cos(thetas[i])
        b = np.sin(thetas[i])
        x0 = a * rhos[i]
        y0 = b * rhos[i]
        pt1 = (np.round(x0 + 1000 * -b).astype(np.int32), np.round(y0 + 1000 * a).astype(np.int32))
        pt2 = (np.round(x0 - 1000 * -b).astype(np.int32), np.round(y0 - 1000 * a).astype(np.int32))
        cv2.line(image, pt1, pt2, (0, 255, 0), 2, cv2.LINE_AA)
    return image
# Read image, convert to grayscale and inverse binarize
img = cv2.imread('path/to/your/image.png')
_, img_thr = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 128, 255, cv2.THRESH_BINARY_INV)
# First Hough transform with high accumulator threshold
lines = cv2.HoughLines(img_thr, 1, np.pi / 180, 200)
# Just for visualization: Intermediate output
img_hough1 = draw_lines(img.copy(), lines[:, 0, 0], lines[:, 0, 1])
# Get main angle
main_theta = np.median(lines[:, 0, 1])
# Second Hough transform with mediocre accumulator threshold, and fixed angle
lines = cv2.HoughLines(img_thr, 1, np.pi / 180, 100, min_theta=main_theta - 0.01, max_theta=main_theta + 0.01)
# Group remaining lines
rhos = np.sort(lines[:, 0, 0])
line_idx = np.where(np.diff(rhos) > 100)[0]
new_rhos = []
s = 0
for i in np.arange(line_idx.shape[0]):
    e = line_idx[i]
    new_rhos.append(np.mean(rhos[s:e + 1]))
    s = line_idx[i] + 1
    if i == line_idx.shape[0] - 1:
        new_rhos.append(np.mean(rhos[s:rhos.shape[0] + 1]))
# Final output
img_hough2 = draw_lines(img.copy(), new_rhos, np.ones_like(new_rhos) * main_theta)
# Output visualization
cv2.imshow('img_hough1', img_hough1)
cv2.imshow('img_hough2', img_hough2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Hope that helps!
Disclaimer: The line drawing code was adapted from this OpenCV tutorial.

Gradient of edges Python

I have a dataset with two classes of images: Cityscape and Landscape. What I want to do is calculate the gradient(orientation) of the edges of each image and show that images of cityscapes have more vertical/horizontal edges than landscape images.
What I've done is calculate vertical, horizontal, 45 degree and 135 degree edges. I've applied a Canny filter to the images, calculated the x,y gradients, and also applied a threshold to the images so that they show only edges above that threshold. The result of this thresholding is seen here:
This is my code for this image manipulation as well as calculating the gradients:
def gradient(image):
    # Step 1
    img = image
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Step 2
    bi = cv2.bilateralFilter(gray, 15, 75, 75)
    # Step 3
    dst = cv2.Canny(bi, 100, 200)
    #print(np.count_nonzero(dst)) #--> make sure it's not all zeroes
    # Step 4
    #--- create a black image to see where those edges occur ---
    mask = np.zeros_like(gray)
    #--- applying a threshold and turning those pixels above the threshold to white ---
    mask[dst > 0.1 * dst.max()] = 255
    # Step 5
    img[dst > 0.1 * dst.max()] = [255, 0, 0]  #--- [255, 0, 0] --> Red ---
    Gx = cv2.Sobel(mask,cv2.CV_64F,1,0,ksize=5)
    Gy = cv2.Sobel(mask,cv2.CV_64F,0,1,ksize=5)
    #orientation of the edges
    theta = np.arctan2(Gy, Gx)
    #magnitude
    M = np.sqrt(Gx*Gx + Gy*Gy)
    #Vertical edges:
    v = abs(Gy)
    #Horizontal edges:
    h = abs(Gx)
    #45 Degree edges:
    deg45 = M*abs(np.cos(theta - np.pi/4))
    #135 Degree edges:
    deg135 = M*abs(np.cos(theta - 3*np.pi/4))
    print('Vertical:')
    #print(v)
    print(np.count_nonzero(v))
    print('Horizontal:')
    #print(h)
    print(np.count_nonzero(h))
What I want is to calculate the v,h,deg45,deg135 for the edges shown as red in the image above (Step 5). If that is not possible, then do that for the image with the white edges (Step 4). Can anyone help?
EDIT: So as to avoid confusion, what I want to do is to get the amount of vertical, horizontal etc edges in a given image, so that I can compare those numbers for cityscapes vs landscape images.
If what you want is the total number of pixels comprising horizontal vs vertical edges, I would suggest defining some threshold for horizontal vs vertical (say 15 degrees). So you can count the number of elements of theta for which
abs(theta) < pi/12 (horizontal)
or abs(theta) > pi-pi/12 (horizontal)
or pi/2 - pi/12 < abs(theta) < pi/2+pi/12 (vertical)
What you're storing in v and h are the vertical and horizontal components of the gradient at each point and what you need is to compare the values of v and h to determine for each point if the gradient vector should count as horizontal or vertical. Comparing theta is probably the most intuitive way to do this.
In order to get the number of elements of theta that satisfy a particular condition, I would suggest using a generator expression (flattening theta first, since it is a 2-D array):
sum(1 for i in theta.ravel() if (abs(i) < np.pi/12) or (abs(i) > np.pi - np.pi/12))
would give you the number of horizontal edge pixels, for example.
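Equivalently, a vectorized sketch with NumPy (restricting the count to pixels with non-zero gradient magnitude is an extra assumption, since theta defaults to 0 where there is no edge):
edge_pixels = M > 0  # only consider pixels that actually lie on an edge
horizontal = np.count_nonzero(edge_pixels &
                              ((np.abs(theta) < np.pi / 12) |
                               (np.abs(theta) > np.pi - np.pi / 12)))
vertical = np.count_nonzero(edge_pixels &
                            (np.abs(np.abs(theta) - np.pi / 2) < np.pi / 12))
print('Horizontal edge pixels:', horizontal)
print('Vertical edge pixels:', vertical)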
I think the Hough transform fits your needs better if you want to count or control how many linear features you have in your image; you could also count how many linear features there are for each specific orientation (in Hough space). Since you are using Python, this and this link might be helpful!
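As a rough sketch of that idea (the accumulator threshold is an assumption; 'dst' is the Canny output from the question's gradient() function), you could count detected lines per orientation in Hough space:
lines = cv2.HoughLines(dst, 1, np.pi / 180, 120)  # accumulator threshold is an assumption
if lines is not None:
    thetas = lines[:, 0, 1]  # angle of the normal of each detected line
    # theta ~ 0 (or ~pi) corresponds to a vertical line, theta ~ pi/2 to a horizontal one
    vertical_lines = np.count_nonzero((thetas < np.pi / 12) | (thetas > np.pi - np.pi / 12))
    horizontal_lines = np.count_nonzero(np.abs(thetas - np.pi / 2) < np.pi / 12)
    print('Vertical lines:', vertical_lines, 'Horizontal lines:', horizontal_lines)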

grid detection in opencv python

I have an image that has items inside a grid. One stage of the problem is to detect and build a mask of the full grid, which can be slightly rotated clockwise or anticlockwise. My current CV pipeline extracts (probabilistic) Hough lines from the image and then uses their contours to filter for a set of rectangles (call the actual and the detected sets R' and R respectively). However, due to occlusions and lighting conditions, the Hough lines (and consequently all downstream contours and line segments) are incomplete (R << R'). Schematically, the problem I must solve is to infer the missing grid components (R' - R) given the detected grid cells.
One strategy I am considering is the following. For each detected rectangle contour r in R, do:
1- using fitLine(), find vertical and horizontal lines that pass through the center of r (see code and image below)
rect = cv2.minAreaRect(r)
box = cv2.boxPoints(rect)
box = np.int0(box)
cx = np.int0(rect[0][0])
cy = np.int0(rect[0][1])
w = np.int0(rect[1][0])
h = np.int0(rect[1][1])
cv2.drawContours(img,[box],0,255,1)
[vx,vy,x,y] = cv2.fitLine(box, cv2.DIST_L2,0,0.01,0.01)
lefty = int((-x*vy/vx) + y)
righty = int(((cols-x)*vy/vx)+y)
start = (cols-1,righty)
end = (0,lefty)
cv2.line(img,start,end,255,1)
# HORIZONTAL
nx,ny = 1,-vx/vy
mag = np.sqrt((1+ny**2))
vx,vy = nx/mag,ny/mag
# Now find two extreme points on the line to draw line
lefty = int((-x*vy/vx) + y)
righty = int(((cols-x)*vy/vx)+y)
start = (cols-1,righty)
end = (0,lefty)
cv2.putText(img,'start',(start[0]-60,start[1]),font,0.5,255,1)
cv2.putText(img,'end',end,font,0.5,255,1)
cv2.line(img,start,end,255,1)
2- create a mask of the same size and rotation as the source rectangle and move it along the line defined in (1), drawing the same shape of rectangle as the source wherever there is no previously detected contour in the same area, i.e.:
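A rough sketch of what step (2) could look like (purely illustrative; the step count, the blank 'mask' image and the overlap-check helper are assumptions):
mask = np.zeros(img.shape[:2], dtype=np.uint8)  # blank mask, same size as the source image

(cx0, cy0), (w, h), angle = rect        # from cv2.minAreaRect(r) above
dx, dy = float(vx), float(vy)           # unit direction from cv2.fitLine above
for step in range(1, 10):               # number of steps along the line is an assumption
    cx_new = cx0 + step * w * dx
    cy_new = cy0 + step * w * dy
    new_box = np.int0(cv2.boxPoints(((cx_new, cy_new), (w, h), angle)))
    # hypothetical helper: skip positions already covered by a detected contour
    # if overlaps_existing_contour(new_box): continue
    cv2.drawContours(mask, [new_box], 0, 255, -1)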
My questions are:
is there a more efficient and better way to solve this problem?
What is best way to move the mask along the line that is at an angle
(given by cv2.minAreaRect)?
thank you
