Count the number of blue lines on a white background in an image - Python

I have 1000 images like this.
I tried the cv2 library and the Hough Line Transform following this tutorial, but I don't understand whether it fits my case: I have 1000 images, so I have almost no possibility to enter any data (like width or coordinates) manually.
Logically, I should find every blue pixel in the image and check whether its neighbouring pixels are white.
For that I need to know the pixel format of a PNG image. How should I read the image: with a plain open(path, 'r') as file_object, or with some special method from a library?

You could count the line ends and divide by two...
#!/usr/bin/env python3

import numpy as np
from PIL import Image
from scipy.ndimage import generic_filter

# Line ends filter
def lineEnds(P):
    global ends
    # Central pixel and one other must be 255 for line end
    if (P[4] == 255) and np.sum(P) == 510:
        ends += 1
        return 255
    return 0

# Global count of line ends
ends = 0

# Open image and make into Numpy array
im = Image.open('lines.png').convert('L')
im = np.array(im)

# Invert and threshold for white lines on black
im = 255 - im
im[im > 0] = 255

# Save result, just for debug
Image.fromarray(im).save('intermediate.png')

# Find line ends
result = generic_filter(im, lineEnds, (3, 3))
print(f'Line ends: {ends}')

# Save result, just for debug
Image.fromarray(result).save('result.png')
Output
Line ends: 16
Note this is not production quality code. You should add extra checks, such as the total number of line-ends being even, and adding a 1 pixel wide black border around the edge in case a line touches the edge and so on.
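As a sketch of those checks (the padding step and the even-count assertion are illustrative additions, assuming the same arrays as above):
# Hedged sketch of the suggested robustness checks: pad with a 1 pixel
# black border so a line touching the edge still produces two ends, then
# verify the count is even before dividing.
im = np.pad(im, 1, mode='constant', constant_values=0)
ends = 0  # reset the global counter before re-running the filter
result = generic_filter(im, lineEnds, (3, 3))
assert ends % 2 == 0, 'odd number of line ends - check the image'
print(f'Lines: {ends // 2}')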

At first glance the problem looks simple: convert to a binary image, use the Hough Line Transform, and count the lines. But it's not working...
Note:
The solution I found is based on finding and merging contours, but using Hough Transform may be more robust.
Instead of merging contours, you could find many short lines and merge them into long lines based on close angles and edge proximity.
The solution below uses the following stages:
Convert image to binary image with white lines on black background.
Split intersection points between lines (fill crossing points with black).
Find contours in binary image (and remove small contours).
Merge contours with close angles, and close edges.
Here is a working code sample:
import cv2
import numpy as np
def box2line(box):
    """Convert a rotated rectangle box into an array of two points that defines a line"""
    b = box.copy()
    for i in range(2):
        p0 = b[0]
        dif0 = (b[1:, 0] - p0[0])**2 + (b[1:, 1] - p0[1])**2
        min_idx = np.argmin(dif0, 0)
        b = np.delete(b, min_idx+1, 0)
    return b

def minlinesdist(line, line2):
    """Finds the minimum distance between any two edges of two lines"""
    a0 = line[0, :]
    a1 = line[1, :]
    b0 = line2[0, :]
    b1 = line2[1, :]

    d00 = np.linalg.norm(a0 - b0)
    d01 = np.linalg.norm(a0 - b1)
    d10 = np.linalg.norm(a1 - b0)
    d11 = np.linalg.norm(a1 - b1)

    min_dist = np.min((d00, d01, d10, d11))
    return min_dist

def get_rect_box_line_and_angle(c):
    """Return minAreaRect, boxPoints, line and angle of a contour"""
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    line = box2line(box)
    angle = rect[2]
    return rect, box, line, angle
(cv_major_ver, cv_minor_ver, cv_subminor_ver) = (cv2.__version__).split('.') # Get version of OpenCV
im = cv2.imread('BlueLines.png') # Read input image
# Convert image to binary image with white lines on black background
################################################################################
gray = im[:, :, 1] # Get only the green color channel (the blue lines should be black).
# Apply threshold
ret, thresh_gray = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
# Invert polarity
thresh_gray = 255 - thresh_gray
################################################################################
# Split intersection points between lines (fill crossing points with black).
################################################################################
thresh_float = thresh_gray.astype(float) / 255  # Convert to float with range [0, 1]
thresh_float = cv2.filter2D(thresh_float, -1, np.ones((3, 3)))  # Filter with a 3x3 kernel of ones

# Find pixels with "many" neighbors
thresh_intersect = np.zeros_like(thresh_gray)
thresh_intersect[(thresh_float > 3)] = 255  # Image of intersection points only.
thresh_gray[(thresh_float > 3)] = 0
################################################################################
# Find contours in thresh_gray, and remove small contours.
################################################################################
if int(cv_major_ver) < 4:
    _, contours, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
else:
    contours, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Remove small contours, because their angle is not well defined
fcontours = []
for i in range(len(contours)):
    c = contours[i]
    if c.shape[0] > 6:  # contours with very few points have a poorly defined angle
        fcontours.append(c)
contours = fcontours
# Starting value.
n_lines = len(contours)
################################################################################
# Merge contours with close angles, and close edges
# Loop decreases n_lines when two lines are merged.
# Note: The solution is kind of "brute force" solution, and can be better.
################################################################################
# https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
# Fitting a Line
rows,cols = im.shape[:2]
for i in range(len(contours)):
    c = contours[i]
    rect, box, line, angle = get_rect_box_line_and_angle(c)

    for j in range(i+1, len(contours)):
        c2 = contours[j]
        rect2 = cv2.minAreaRect(c2)
        box2 = cv2.boxPoints(rect2)
        line2 = box2line(box2)
        angle2 = rect2[2]

        angle_diff = (angle - angle2 + 720) % 180  # Angle difference in degrees (forced to be positive, in range [0, 180])
        angle_diff = np.minimum(angle_diff, 180 - angle_diff)

        min_dist = minlinesdist(line, line2)  # Minimum distance between any two edges of line and line2

        if (angle_diff < 3) and (min_dist < 20):
            color = (int((i+3)*100 % 255), int((i+3)*50 % 255), int((i+3)*70 % 255))

            # https://stackoverflow.com/questions/22801545/opencv-merge-contours-together
            # Merge contours together
            tmp = np.vstack((c, c2))
            c = cv2.convexHull(tmp)

            # Draw merged contour (for testing)
            im = cv2.drawContours(im, [c], 0, color, 2)

            # Replace contour with merged one.
            contours[j] = c

            n_lines -= 1  # Decrement the lines counter
            break
################################################################################
print('Number of lines = {}'.format(n_lines))
# Display result (for testing):
cv2.imshow('thresh_gray', thresh_gray)
cv2.imshow('im', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Number of lines = 8
thresh_gray (before splitting):
thresh_gray (after splitting):
im:
Note:
I know the solution is not perfect and is not going to produce perfect results on all of your 1000 images.
I think there is a better chance that using the Hough Transform and merging lines is going to give perfect results.
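For reference, a minimal sketch of that Hough-based alternative (untested here; the HoughLinesP parameters are guesses, and the returned segments would still need the same angle/proximity merging described above):
import cv2
import numpy as np

im = cv2.imread('BlueLines.png')
gray = im[:, :, 1]  # green channel: the blue lines appear dark
thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)[1]

# The probabilistic Hough transform returns individual line segments.
segments = cv2.HoughLinesP(thresh, rho=1, theta=np.pi/180, threshold=20,
                           minLineLength=30, maxLineGap=10)
n_segments = 0 if segments is None else len(segments)
print('Raw segments (before merging):', n_segments)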

Related

Finding the diameter and area of overlapping ellipses (OpenCV, Python)

This is my first question on Stack Overflow. I'm a little excited; forgive me if I'm wrong. We have mixed ellipses, with and without overlapping, drawn randomly in Paint. I'm sharing the image I'm working on and my code. I am not an expert in the OpenCV module; I wrote my code as a result of research, inspired by various sources.
The purpose of my code is:
Detection of randomly drawn ellipses, with and without overlapping, using the cv2.fitEllipse method. Next, find the major axis, minor axis and areas of the detected ellipses.
The problem with my code is this:
For overlapping ellipses, 2 ellipses should be fitted, but about 6-7 ellipses are fitted instead, and I cannot obtain the values I want to calculate.
I'm open to your help, thank you in advance.
Example image:
import cv2
import numpy as np
import random as rng
import math

img = cv2.imread('overlapping_ellipses.png', 1)
imge = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(imge, cv2.COLOR_BGR2GRAY)
blur = cv2.blur(gray, (2, 2), 3)
edged = cv2.Canny(blur, 50, 100)
kernel = np.ones((2, 2))
edged1 = cv2.dilate(edged, kernel, iterations=2)
edged2 = cv2.erode(edged1, kernel, iterations=2)

def thresh_callback(val):
    threshold = val
    canny_output = cv2.Canny(edged2, threshold, threshold * 4)
    contours, _ = cv2.findContours(canny_output, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    minRect = [None]*len(contours)
    minEllipse = [None]*len(contours)
    for i, c in enumerate(contours):
        minRect[i] = cv2.minAreaRect(c)
        if c.shape[0] > 5:
            minEllipse[i] = cv2.fitEllipse(c)
            (x1, y1), (d1, d2), angle = minEllipse[i]
            print('\nX1: ', round(x1, 4), '\nY1: ', round(y1, 4), '\nD1:', round(d1, 4), '\nD2', round(d2, 4), '\nAngle:', round(angle, 4))
            long = x1 - d2
            small = y1 - d1
            major = long/2
            minor = small/2
            pixel = 37.795275591
            major1 = major/pixel
            minor1 = minor/pixel
            print('--------------------------------')
            print('Major axis is: ', abs(round(major1, 4)), 'cm')
            print('Minor axis is: ', abs(round(minor1, 4)), 'cm')
            print('--------------------------------')
    drawing = np.zeros((canny_output.shape[1], canny_output.shape[1], 3), dtype=np.uint8)
    for i, c in enumerate(contours):
        color = (rng.randint(0, 256), rng.randint(0, 256), rng.randint(0, 256))
        cv2.drawContours(drawing, contours, i, color)
        if c.shape[0] > 5:
            cv2.ellipse(drawing, minEllipse[i], color, 1)
    cv2.imshow('Fitting Ellips', drawing)

source_window = 'Source'
cv2.namedWindow(source_window)
cv2.imshow(source_window, img)
max_thresh = 255
thresh = 100
cv2.createTrackbar('Canny Thresh:', source_window, thresh, max_thresh, thresh_callback)
thresh_callback(thresh)
cv2.waitKey()
Step 1: Identify and separate the blobs in the input image.
Since we don't care about colour information here, we can directly load the image as grayscale.
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
The input image contains black ellipses on white background.
We only need the external contours of the blobs, and cv2.findContours expects white blobs on black background.
Therefore we need to invert the image. At the same time we need a binary image. We can use cv2.threshold to accomplish both tasks.
Once we detect the blob contours, we can collect some useful information for each blob into a simple map-based data structure.
def detect_blobs(image):
    _, img_binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for i, contour in enumerate(contours):
        orig_x, orig_y, width, height = cv2.boundingRect(contour)
        roi_image = image[orig_y:orig_y+height, orig_x:orig_x+width]
        blobs.append({
            "i" : i
            , "contour" : contour
            , "origin" : (orig_x, orig_y)
            , "size" : (width, height)
            , "roi_image" : roi_image
        })
    return blobs
Step 2: Process each blob
First we need to determine whether the blob is a single ellipse, or whether it is a pair of intersecting ellipses.
One way to do this is by looking for convexity defects.
Since the coordinates of our contour are represented by integers, even the single-ellipse scenario will exhibit some convexity defects.
However, their magnitude (the distance between the furthest point on the contour from the enclosing convex hull segment) will be very small, generally below 1 pixel.
On the other hand, the contour of a pair of intersecting ellipses will have large convexity defects, one for each of the four points where the curves intersect.
This distinction can be seen on the following two images (contour is blue, convex hull red, identified intersection points/locations of large convexity defects are orange circles):
Single ellipse
Two intersecting ellipses
We therefore filter out any small convexity defects, and note the locations of the large ones. Now we're left with 3 possible scenarios.
Scenario A: No intersection points detected
Only small convexity defects were identified, which means this is very likely a single ellipse. We simply fit an ellipse to the contour and move on.
Scenario B: Exactly 4 intersection points detected
In this case we have 2 intersecting ellipses. We use the intersection points to split the contour into 4 segments, one for each "lobe" of the blob. Each of the segments should include the two intersection points that delimit it.
In the following picture, the segments are shown in green, yellow, cyan and magenta, while the intersection points are orange circles:
Now, we can combine the pairs of segments that lie opposite each other (i.e. green+cyan and yellow+magenta) to get two lists of points, one for each ellipse. Again, we simply fit an ellipse to each list of points.
Scenario C: Some other number of intersection points detected
This is considered an invalid situation.
def process_blob(blob):
    MAJOR_DEFECT_THRESHOLD = 2.0

    contour = blob["contour"]
    blob["hull"] = cv2.convexHull(contour)
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)

    intersections = []
    for i, defect in enumerate(np.squeeze(defects, 1)):
        _, _, far_idx, far_dist = defect
        real_far_dist = far_dist / 256.0
        if real_far_dist >= MAJOR_DEFECT_THRESHOLD:
            intersections.append(far_idx)

    if len(intersections) == 0:
        print("One ellipse")
        blob["ellipses"] = [cv2.fitEllipse(contour)]
    elif len(intersections) == 4:
        print("Two ellipses")
        blob["segments"] = [
            contour[intersections[0]:intersections[1]+1]
            , contour[intersections[1]:intersections[2]+1]
            , contour[intersections[2]:intersections[3]+1]
            , np.vstack([contour[intersections[3]:], contour[:intersections[0]+1]])
        ]
        split_contours = [
            np.vstack([blob["segments"][0], blob["segments"][2]])
            , np.vstack([blob["segments"][1], blob["segments"][3]])
        ]
        blob["ellipses"] = [cv2.fitEllipse(c) for c in split_contours]
    else:
        print("Invalid scenario")
        blob["ellipses"] = []

    return blob["ellipses"]
At this point, it's trivial to calculate the parameters you need -- I'll leave this as an exercise to the reader.
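For completeness, one hedged way to do that exercise (the pixels-per-centimetre constant comes from the question; the helper name is made up):
import math

PX_PER_CM = 37.795275591  # conversion factor used in the question

def ellipse_params(ellipse):
    # Major/minor semi-axes (cm) and area (cm^2) from a cv2.fitEllipse result.
    (cx, cy), (d1, d2), angle = ellipse  # fitEllipse returns full diameters
    major = max(d1, d2) / 2 / PX_PER_CM
    minor = min(d1, d2) / 2 / PX_PER_CM
    area = math.pi * major * minor
    return major, minor, area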
As a bonus, here's some simple visualization for debugging purposes:
def visualize_blob(blob):
    PADDING = 20

    orig_x, orig_y = blob["origin"]
    offset = (orig_x - PADDING, orig_y - PADDING)

    input_img = cv2.copyMakeBorder(blob["roi_image"]
        , PADDING, PADDING, PADDING, PADDING
        , cv2.BORDER_CONSTANT, None, 255)

    adjusted_img = cv2.add(input_img, 127) - 63
    output_img_ch = cv2.cvtColor(adjusted_img, cv2.COLOR_GRAY2BGR)
    output_img_seg = output_img_ch.copy()
    output_img_el = output_img_ch.copy()

    cv2.drawContours(output_img_ch, [blob["hull"] - offset], 0, (127, 127, 255), 4)
    cv2.drawContours(output_img_ch, [blob["contour"] - offset], 0, (255, 127, 127), 2)

    SEGMENT_COLORS = [(0, 255, 0), (0, 255, 255), (255, 255, 0), (255, 0, 255)]
    if "segments" in blob:
        for i in range(4):
            cv2.polylines(output_img_seg, [blob["segments"][i] - offset], False, SEGMENT_COLORS[i], 4)
        for i in range(4):
            center = (blob["segments"][i] - offset)[0][0]
            cv2.circle(output_img_ch, center, 4, (0, 191, 255), -1)
            cv2.circle(output_img_seg, center, 4, (0, 191, 255), -1)

    for ellipse in blob["ellipses"]:
        offset_ellipse = ((ellipse[0][0] - offset[0], ellipse[0][1] - offset[1]), ellipse[1], ellipse[2])
        cv2.ellipse(output_img_el, offset_ellipse, (0, 0, 255), 2)

    cv2.imshow('', np.hstack([output_img_ch, output_img_seg, output_img_el]))
    cv2.imwrite('output_%d_ch.png' % blob["i"], output_img_ch)
    cv2.imwrite('output_%d_seg.png' % blob["i"], output_img_seg)
    cv2.imwrite('output_%d_el.png' % blob["i"], output_img_el)
    cv2.waitKey()
Pulling it all together:
import cv2
import numpy as np
## INSERT THE FUNCTIONS LISTED ABOVE ##
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
blobs = detect_blobs(image)
print("Found %d blob(s)." % len(blobs))
for blob in blobs:
    process_blob(blob)
    visualize_blob(blob)

How to detect the outline of a shape and then crop the shape from image?

I am attempting to only keep the part of the image bounded by the orange/greenish line in lot #17.
As you can see, the shape is fairly non-standard, and since I am new to image processing, my approach thus far has been brute-force and error-prone.
Each image I need to do this for has a black dot (rgb of (77,77,77)) in the center of the shape I want to crop which has been my anchor.
import PIL
import pandas as pd

image = PIL.Image.open(file)
rgb_im = image.convert('RGB')
color = (77, 77, 77)
colorindex = pd.DataFrame(data=None, columns=['X', 'Y'])
for x in range(image.size[0]):
    for y in range(image.size[1]):
        r, g, b = rgb_im.getpixel((x, y))
        if (r, g, b) == color:
            append = [x, y]
            append = pd.Series(append, index=colorindex.columns)
            colorindex = colorindex.append(append, ignore_index=True)

center = [colorindex.mode()['X'][0], colorindex.mode()['Y'][0]]
line = pd.read_excel('C:/Users/lines RGb.xlsx')  ## Prerecorded RGB Values

def findparcelline(CenterX, CenterY, direction):
    if direction == 'left':
        for x in range(CenterX):
            r, g, b = rgb_im.getpixel((CenterX - x, CenterY))
            for i in range(len(line)):
                if (r, g, b) == (line.loc[i][0], line.loc[i][1], line.loc[i][2]):
                    pixelsave = CenterX - x
                    return pixelsave
    elif direction == 'right':
        for x in range(CenterX):
            r, g, b = rgb_im.getpixel((CenterX + x, CenterY))
            for i in range(len(line)):
                if (r, g, b) == (line.loc[i][0], line.loc[i][1], line.loc[i][2]):
                    pixelsave = CenterX + x
                    return pixelsave
    elif direction == 'down':
        for y in range(CenterY):
            r, g, b = rgb_im.getpixel((CenterX, CenterY + y))
            for i in range(len(line)):
                if (r, g, b) == (line.loc[i][0], line.loc[i][1], line.loc[i][2]):
                    pixelsave = CenterY + y
                    return pixelsave
    elif direction == 'up':
        for y in range(CenterY):
            r, g, b = rgb_im.getpixel((CenterX, CenterY - y))
            for i in range(len(line)):
                if (r, g, b) == (line.loc[i][0], line.loc[i][1], line.loc[i][2]):
                    pixelsave = CenterY - y
                    return pixelsave

directions = ['left', 'down', 'right', 'up']
coords = []
for direction in directions:
    coords.append(findparcelline(center[0], center[1], direction))
im1 = image.crop(coords)
My code only works for right-side-up rectangular shapes (which a good portion of them are), but it will fail when it comes to something like the example.
I've thought about using the code written this far to then 'walk the line' from the pixel location provided via a 9x9 array of pixels and only selecting the ones that:
aren't previously selected
match the prerecorded color values
are closest to the anchor pixel location
But in the example there are even more RGB color values, and even some holes in the line I'm interested in.
Is there a way to obtain the coordinates of the line bounding the black dot in the center and subsequently crop the image after having recording all the coordinates?
Thanks in advance.
First of all: if you have access to the generation of these images, save them as lossless PNGs! Those JPG artifacts make it even harder to get proper results. For example, only one pixel of your "black" dot actually has RGB values of (77, 77, 77). Therefore, I omitted programmatically finding the "black" dot, and assumed the image center as the dot location.
Since you have kind of red-ish lines with some kind of yellow-ish dots, I rectified the red channel by subtracting a portion of the green channel to get rid of yellow-ish colors. After some further emphasizing (red-ish lines have high values in the red channel), the new red channel looks like this:
On that new red channel, I use some kind of Laplace operator to detect the (red-ish) lines. After some further processing, that'd be the result:
From there, it's just some thresholding using Otsu's method to get a proper binary image to work on:
Finally, I find all contours, and iterate over them. If I find an inner(!) contour – please see this answer for an extensive explanation on contour hierarchies – which contains the location of the "black" dot, that must be the shape of interest. Since you might get some odd, open contours from the surroundings, you need to stick to inner contours. Also, it's an assumption here that the shape of interest is closed.
After extracting the proper contour, you just need to set up a proper mask, and for example blacken the background, or crop the image using the bounding rectangle of that mask:
Here's the full code:
import cv2
import numpy as np

# Read image, split color channels
img = cv2.imread('5aY7A.jpg')
b, g, r = cv2.split(img)

# Rectify red-ish lines (get rid of yellow-ish dots) by subtracting
# green channel from red channel
r = r - 0.5 * g
r[r < 0] = 0

# Emphasize red-ish lines
r **= 2
r = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Detection of red-ish lines by Laplace operator
r = cv2.Laplacian(r, cv2.CV_64F)
r = cv2.erode(r, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
r = cv2.GaussianBlur(r, (5, 5), 0)
r = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Mask red-ish lines
r = cv2.threshold(r, 10, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
r = cv2.morphologyEx(r, cv2.MORPH_CLOSE,
                     cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

# Detection of "black" dot location omitted here due to JPG artifacts...
dot = (916, 389)

# Find contours from masked red-ish lines
cnts, hier = cv2.findContours(r, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# Find some inner(!) contour containing the "black" dot
cnt = None
for i, c in enumerate(cnts):
    if cv2.pointPolygonTest(c, dot, True) > 0 and hier[0, i, 3] != -1:
        cnt = c
        break

if cnt is None:
    print('Something went wrong, no contour found.')
else:
    mask = cv2.drawContours(np.zeros_like(r), [cnt], -1, 255, cv2.FILLED)
    output = cv2.bitwise_xor(img, np.zeros_like(img), mask=mask)
    cv2.imshow('Output', output)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.19041-SP0
Python: 3.9.1
PyCharm: 2021.1.2
NumPy: 1.20.3
OpenCV: 4.5.2
----------------------------------------

Python - Map coordinates into list of lines and arcs

I'm trying to parse an array of coordinates (which represents a closed shape) into a set of lines and arcs in Python (I'm using OpenCV for edge detection).
What I'm trying to achieve, briefly, is to use the coordinates which draw this example image
Example shape
Into this set of lines and arcs
Set of arcs
Obviously, the arcs are not as well defined as in the image, but are something like "pixelated" arcs.
Is there any utility which can help with this kind of processing?
Let's load the image as grayscale, threshold it to black and white and invert colors, erode it a little, use Canny edge detection, then Hough lines detection (mostly just following this tutorial):
import cv2
import numpy as np
import math
import random
src = cv2.imread("s34I0.png", cv2.IMREAD_GRAYSCALE)
thr, bw = cv2.threshold(src, 128, 255, cv2.THRESH_BINARY_INV)
eroded = cv2.erode(bw, np.ones((5, 5), np.uint8))
canny = cv2.Canny(src, 50, 200, None, 3)
lines = cv2.HoughLines(canny, 1, np.pi / 180, 150, None, 0, 0)
lines = [list(x[0]) for x in lines]
def draw_line(img, line, color, thickness):
    rho, the = line
    a = math.cos(the)
    b = math.sin(the)
    x0 = a * rho
    y0 = b * rho
    pt1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * (a)))
    pt2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * (a)))
    cv2.line(img, pt1, pt2, color, thickness, cv2.LINE_AA)
We have, unfortunately, two parallel lines detected for every straight segment. Let's replace each such pair of close parallel lines with their mid-line:
lines_ = []

def midline(line1, line2):
    return [(x + y) / 2 for x, y in zip(line1, line2)]

used = []
for l1 in lines:
    if l1 in used:
        continue
    for l2 in lines:
        if l2 in used:
            continue
        if l1 is l2:
            continue
        if (abs(l1[0] - l2[0]) < 20) and (abs(l1[1] - l2[1]) < 1):
            lines_.append(midline(l1, l2))
            used.append(l1)
            used.append(l2)
            continue
lines = lines_
Now, let's create binary masks for our straight lines. For every straight line, we create a temporary binary black image (all the pixel values are zeros), then draw the line over it as a thick white line (same or slightly thicker than the lines on the original image). Then we logical-AND the original thresholded image and the temporary line image, so we get the pixels common for both - that is the binary mask for the line.
line_masks = []
for i, line in enumerate(lines):
    line_img = np.zeros(bw.shape)
    draw_line(line_img, line, 255, 10)  # 10 pixel thick white line
    common = np.logical_and((bw != 0), (line_img != 0))
    line_masks.append(common)
Remove the masked pixels from the original black and white image, so only the arcs should remain. Unfortunately, some garbage remains, because the lines in the original image aren't perfect. To get rid of that, we could've drawn our Hough lines thicker (say, 15, or 20 pixels instead of 10), but then they take too much of the arc pixels. Instead, we could erode-dilate the resulting image a little, to get rid of the junk:
for lm in line_masks:
    bw[lm] = 0

bw = cv2.erode(bw, np.ones((5, 5), np.uint8))
bw = cv2.dilate(bw, np.ones((5, 5), np.uint8))
Let's create binary masks for the arcs. There's no function in OpenCV to detect arcs, but for this case we could use detection of connected components:
arc_masks = []
num, labels = cv2.connectedComponents(bw)
for i in range(1, num):
    arc_masks.append(labels == i)
Now that we have the masks, let's visualize them by drawing over the original image. Lines are going to have random shades of green, arcs - of blue:
line_colors = [(0, random.randint(127, 256), 0) for _ in line_masks]
arc_colors = [(random.randint(127, 256), 0, 0) for _ in arc_masks]

dst = cv2.imread("s34I0.png")
for color, mask in zip(line_colors, line_masks):
    dst[mask] = color
for color, mask in zip(arc_colors, arc_masks):
    dst[mask] = color
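If actual arc parameters are needed rather than just masks, a hedged follow-up (not part of the steps above) is to fit a circle to each arc's pixels, e.g. with an algebraic (Kasa) least-squares fit:
def fit_circle(ys, xs):
    # Kasa fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * xs, 2 * ys, np.ones(len(xs))])
    b = xs ** 2 + ys ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), math.sqrt(c + cx ** 2 + cy ** 2)

for mask in arc_masks:
    ys, xs = np.nonzero(mask)
    (cx, cy), r = fit_circle(ys.astype(float), xs.astype(float))
    print('arc: centre=({:.1f}, {:.1f}), radius={:.1f}'.format(cx, cy, r))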

How to detect different types of arrows in image?

Is there a Contour Method to detect arrows in Python CV? Maybe with Contours, Shapes, and Vertices.
# find contours in the thresholded image and initialize the shape detector
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
perimeterValue = cv2.arcLength(cnts , True)
vertices = cv2.approxPolyDP(cnts , 0.04 * perimeterValue, True)
Perhaps we can look at tips of the contours, and also detect triangles?
Hopefully it can detect arrows among different objects, among squares, rectangles, and circles. (otherwise, will have to use machine learning).
Also nice to get these three results if possible (arrow length, thickness, directionAngle)
This question recommends template matching and doesn't specify any code base. I'm looking for something workable that can be created in code:
how to detect arrows using open cv python?
If PythonOpenCV doesn't have capability, open to utilizing another library.
The solution you are asking for is too complex to be solved by one function or particular algorithm. In fact, the problem could be broken down into smaller steps, each with their own algorithms and solutions. Instead of offering you a free, complete, copy-paste solution, I'll give you a general outline of the problem and post part of the solution I'd design. These are the steps I propose:
Identify and extract all the arrow blobs from the image, and process them one by one.
Try to find the end-points of the arrow. That is end and starting point (or "tail" and "tip")
Undo the rotation, so you have straightened arrows always, no matter their angle.
After this, the arrows will always point in one direction. This normalization lends itself easily to classification.
After processing, you can pass the image to a KNN classifier, a Support Vector Machine or even (if you are willing to call in the "big guns" on this problem) a CNN (in which case, you probably won't need to undo the rotation - as long as you have enough training samples). You don't even have to compute features, as passing the raw image to an SVM would probably be enough. However, you need more than one training sample for each arrow class.
Alright, let's see. First, let's extract each arrow from the input. This is done using cv2.findContours; this part is very straightforward:
# Imports:
import cv2
import math
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "arrows.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
grayscaleImage = 255 - grayscaleImage
# Find the big contours/blobs on the binary image:
contours, hierarchy = cv2.findContours(grayscaleImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
Now, let's check out the contours and process them one by one. Let's compute a (non-rotated) bounding box of the arrow and crop that sub-image. Now, note that some noise could come up. In which case, we won't be processing that blob. I apply an area filter to bypass blobs of small area. Like this:
# Process each contour 1-1:
for i, c in enumerate(contours):

    # Approximate the contour to a polygon:
    contoursPoly = cv2.approxPolyDP(c, 3, True)

    # Convert the polygon to a bounding rectangle:
    boundRect = cv2.boundingRect(contoursPoly)

    # Get the bounding rect's data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]

    # Get the rect's area:
    rectArea = rectWidth * rectHeight
    minBlobArea = 100
We set a minBlobArea and process that contour. Crop the image if the contour is above that area threshold value:
    # Check if blob is above min area:
    if rectArea > minBlobArea:
        # Crop the roi:
        croppedImg = grayscaleImage[rectY:rectY + rectHeight, rectX:rectX + rectWidth]

        # Extend the borders for the skeleton:
        borderSize = 5
        croppedImg = cv2.copyMakeBorder(croppedImg, borderSize, borderSize, borderSize, borderSize, cv2.BORDER_CONSTANT)

        # Store a deep copy of the crop for results:
        grayscaleImageCopy = cv2.cvtColor(croppedImg, cv2.COLOR_GRAY2BGR)

        # Compute the skeleton:
        skeleton = cv2.ximgproc.thinning(croppedImg, None, 1)
There are a couple of things going on here. After I crop the ROI of the current arrow, I extend the borders of that image. I store a deep copy of this image for further processing and, lastly, I compute the skeleton. The border-extending is done prior to skeletonizing because the algorithm produces artifacts if the contour is too close to the image limits. Padding the image in all directions prevents these artifacts. The skeleton is needed for the way I'm finding the ending and starting points of the arrow. More on this later; this is the first arrow, cropped and padded:
This is the skeleton:
Note that the "thickness" of the contour is normalized to 1 pixel. That's cool, because that's what I need for the following processing step: finding start/ending points. This is done by applying a convolution with a kernel designed to identify one-pixel wide end-points on a binary image. Refer to this post for the specifics. We will prepare the kernel and use cv2.filter2D to get the convolution:
        # Threshold the image so that skeleton (white) pixels get a value
        # of 10 and background (black) pixels a value of 0:
        _, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)

        # Set the end-points kernel:
        h = np.array([[1, 1, 1],
                      [1, 10, 1],
                      [1, 1, 1]])

        # Convolve the image with the kernel:
        imgFiltered = cv2.filter2D(binaryImage, -1, h)

        # Extract only the end-point pixels, those with
        # an intensity value of 110:
        binaryImage = np.where(imgFiltered == 110, 255, 0)

        # The above operation widened the array's data type,
        # convert back to 8-bit uint:
        binaryImage = binaryImage.astype(np.uint8)
After the convolution, all end-points have a value of 110. Setting these pixels to 255, while the rest are set to black, yields the following image (after proper conversion):
Those tiny pixels correspond to the "tail" and "tip" of the arrow. Notice there's more than one point per arrow section. This is because the end-points of the arrow do not perfectly end in one pixel. In the case of the tip, for example, there will be more end-points than in the tail. This is a characteristic we will exploit later. Now, pay attention to this. There are multiple end-points but we only need a starting point and an ending point. I'm going to use K-Means to group the points in two clusters.
Using K-means will also let me identify which end-points belong to the tail and which to the tip, so I'll always know the direction of the arrow. Let's roll:
        # Find the X, Y location of all the end-points
        # pixels:
        Y, X = binaryImage.nonzero()

        # Check if I got points on my arrays:
        if len(X) > 0 or len(Y) > 0:

            # Reshape the arrays for K-means
            Y = Y.reshape(-1, 1)
            X = X.reshape(-1, 1)
            Z = np.hstack((X, Y))

            # K-means operates on 32-bit float data:
            floatPoints = np.float32(Z)

            # Set the convergence criteria and call K-means:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
            _, label, center = cv2.kmeans(floatPoints, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
Be careful with the data types. If I print the label and center matrices, I get this (for the first arrow):
Center:
[[  6.  102. ]
 [104.   20.5]]

Labels:
[[1]
 [1]
 [0]]
center tells me the center (x, y) of each cluster, that is, the two points I was originally looking for. label tells me in which cluster each original data point falls. As you see, there were originally 3 points. 2 of those points (the points belonging to the tip of the arrow) are assigned to cluster 1, while the remaining end-point (the arrow tail) is assigned to cluster 0. In the centers matrix the centers are ordered by cluster number: the first center is that of cluster 0, while the second is the center of cluster 1. Using this info I can easily look for the cluster that groups the majority of points; that will be the tip of the arrow, while the remaining one will be the tail:
            # Set the cluster count, find the points belonging
            # to cluster 0 and cluster 1:
            cluster1Count = np.count_nonzero(label)
            cluster0Count = np.shape(label)[0] - cluster1Count

            # Look for the cluster of max number of points
            # That cluster will be the tip of the arrow:
            maxCluster = 0
            if cluster1Count > cluster0Count:
                maxCluster = 1

            # Check out the centers of each cluster:
            matRows, matCols = center.shape
            # We need at least 2 points for this operation:
            if matCols >= 2:
                # Store the ordered end-points here:
                orderedPoints = [None] * 2
                # Let's identify and draw the two end-points
                # of the arrow:
                for b in range(matRows):
                    # Get cluster center:
                    pointX = int(center[b][0])
                    pointY = int(center[b][1])
                    # Get the "tip"
                    if b == maxCluster:
                        color = (0, 0, 255)
                        orderedPoints[0] = (pointX, pointY)
                    # Get the "tail"
                    else:
                        color = (255, 0, 0)
                        orderedPoints[1] = (pointX, pointY)
                    # Draw it:
                    cv2.circle(grayscaleImageCopy, (pointX, pointY), 3, color, -1)

                cv2.imshow("End Points", grayscaleImageCopy)
                cv2.waitKey(0)
This is the result; the end-point at the tip of the arrow will always be in red and the end-point at the tail in blue:
Now that we know the direction of the arrow, let's compute the angle. I will measure this angle from 0 to 360. The angle will always be the one between the horizon line and the tip. So, we manually compute the angle:
                # Store the tip and tail points:
                p0x = orderedPoints[1][0]
                p0y = orderedPoints[1][1]
                p1x = orderedPoints[0][0]
                p1y = orderedPoints[0][1]

                # Compute the sides of the triangle:
                adjacentSide = p1x - p0x
                oppositeSide = p0y - p1y

                # Compute the angle alpha:
                alpha = math.degrees(math.atan(oppositeSide / adjacentSide))

                # Adjust angle to be in [0, 360]:
                if adjacentSide < 0 < oppositeSide:
                    alpha = 180 + alpha
                else:
                    if adjacentSide < 0 and oppositeSide < 0:
                        alpha = 270 + alpha
                    else:
                        if adjacentSide > 0 > oppositeSide:
                            alpha = 360 + alpha
Now you have the angle, and this angle is always measured between the same references. That's cool; we can undo the rotation of the original image as follows:
                # Deep copy for rotation (if needed):
                rotatedImg = croppedImg.copy()

                # Undo rotation while padding output image:
                rotatedImg = rotateBound(rotatedImg, alpha)
                cv2.imshow("rotatedImg", rotatedImg)
                cv2.waitKey(0)

            else:
                print("K-Means did not return enough points, skipping...")

        else:
            print("Did not find enough end points on image, skipping...")
This yields the following result:
The arrow will always point to the right regardless of its original angle. Use this as normalization for a batch of training images, if you want to classify each arrow in its own class.
Now, you noticed that I used a function to rotate the image: rotateBound. This function is taken from here. It correctly pads the image after rotation, so you do not end up with a rotated image that is cropped incorrectly.
This is the definition and implementation of rotateBound:
def rotateBound(image, angle):
    # grab the dimensions of the image and then determine the
    # center
    (h, w) = image.shape[:2]
    (cX, cY) = (w // 2, h // 2)

    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))

    # adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY

    # perform the actual rotation and return the image
    return cv2.warpAffine(image, M, (nW, nH))
These are results for the rest of your arrows. The tip (always in red), the tail (always in blue) and their "projective normalization" - always pointing to the right:
What remains is to collect samples of your different arrow classes, set up a classifier, train it with your samples and test it with the straightened image coming from the last processing block we examined.
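As a hedged sketch of that final stage (the 64x64 size, k=3, and the use of OpenCV's built-in KNearest are illustrative choices, not part of the processing above):
def train_arrow_classifier(crops, labels, size=(64, 64)):
    # crops: straightened grayscale arrow images; labels: integer classes
    train = np.float32([cv2.resize(c, size).flatten() for c in crops])
    knn = cv2.ml.KNearest_create()
    knn.train(train, cv2.ml.ROW_SAMPLE, np.int32(labels))
    return knn

def classify_arrow(knn, crop, size=(64, 64)):
    query = np.float32([cv2.resize(crop, size).flatten()])
    _, result, _, _ = knn.findNearest(query, k=3)
    return int(result[0, 0])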
Some remarks: Some arrows, like the one that is not filled, failed the end-point identification part, thus not yielding enough points for clustering. That arrow is bypassed by the algorithm. The problem is tougher than initially thought, right? I recommend doing some research on the topic, because no matter how "easy" the task seems, in the end it will be performed by an automated "smart" system. And those systems aren't really that smart at the end of the day.
Here is the workflow I put together that would make this work:
Import the necessary libraries:
import cv2
import numpy as np
Define a function that will take in an image, and process it into something that can allow python to more easily find the necessary contours of each shape. The values can be adjusted to better suit your needs:
def preprocess(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (5, 5), 1)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((3, 3))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=1)
    return img_erode
Define a function that will take in two lists: an approximate contour of a shape, points, and the indices of the convex hull of that contour, convex_hull. For the below function, you must make sure that the length of the points list is exactly 2 units greater than the length of the convex_hull list before calling the function. The reasoning is that, optimally, the arrow should have exactly 2 more points that aren't present in the convex hull of the arrow.
def find_tip(points, convex_hull):
In the find_tip function, define a list of the indices of the points array where the values are not present in the convex_hull array:
    length = len(points)
    indices = np.setdiff1d(range(length), convex_hull)
In order to find the tip of the arrow, given we have the approximate outline of the arrow as points and the indices of the two points that are concave to the arrow, indices, we can find the tip by either subtracting 2 from the first index in the indices list, or by adding 2 to the first index of the indices list. See the below examples for reference:
In order to know whether you should subtract 2 from the first element of the indices list, or add 2, you'll need to do the exact opposite to the second (which is the last) element of the indices list; if the resulting two indices returns the same value from the points list, then you found the tip of the arrow. I used a for loop that loops through numbers 0 and 1. The first iteration will add 2 to the second element of the indices list: j = indices[i] + 2, and subtract 2 from the first element of the indices list: indices[i - 1] - 2:
    for i in range(2):
        j = indices[i] + 2
        if j > length - 1:
            j = length - j
        if np.all(points[j] == points[indices[i - 1] - 2]):
            return tuple(points[j])
This part:
        if j > length - 1:
            j = length - j
is there for cases like this:
where if you try adding 2 to the index 5, you will get an IndexError. So if, say j becomes 7 from the j = indices[i] + 2, the above condition will convert j to len(points) - j.
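As a concrete illustration, with hypothetical values for a 6-point approximated contour:
# Hypothetical walk-through: a 6-point outline whose concave points
# sit at indices 3 and 5, so length = 6 and indices = [3, 5].
length = 6
indices = [3, 5]

j = indices[1] + 2  # 7, which is off the end of the points array
if j > length - 1:
    j = length - j  # 6 - 7 = -1, i.e. Python wraps around to the last point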
Read the image and get its contours, utilizing the preprocess function defined earlier before passing it into the cv2.findContours method:
img = cv2.imread("arrows.png")
contours, hierarchy = cv2.findContours(preprocess(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
Loop through the contours, and find the approximate contour and convex hull of each shape:
for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.025 * peri, True)
    hull = cv2.convexHull(approx, returnPoints=False)
    sides = len(hull)
If the number of sides of the convex hull is 4 or 5 (the extra side in case the arrow has a flat bottom), and if the shape of the arrow has exactly two more points that are not present in the convex hull, find the tip of the arrow:
    if 6 > sides > 3 and sides + 2 == len(approx):
        arrow_tip = find_tip(approx[:, 0, :], hull.squeeze())
If there is indeed a tip, then congratulations! You found a decent arrow! Now the arrow can be highlighted, and a circle can be drawn at the location of its tip:
        if arrow_tip:
            cv2.drawContours(img, [cnt], -1, (0, 255, 0), 3)
            cv2.circle(img, arrow_tip, 3, (0, 0, 255), cv2.FILLED)
Finally, show the image:
cv2.imshow("Image", img)
cv2.waitKey(0)
Altogether:
import cv2
import numpy as np

def preprocess(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (5, 5), 1)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((3, 3))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=1)
    return img_erode

def find_tip(points, convex_hull):
    length = len(points)
    indices = np.setdiff1d(range(length), convex_hull)
    for i in range(2):
        j = indices[i] + 2
        if j > length - 1:
            j = length - j
        if np.all(points[j] == points[indices[i - 1] - 2]):
            return tuple(points[j])

img = cv2.imread("arrows.png")
contours, hierarchy = cv2.findContours(preprocess(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.025 * peri, True)
    hull = cv2.convexHull(approx, returnPoints=False)
    sides = len(hull)
    if 6 > sides > 3 and sides + 2 == len(approx):
        arrow_tip = find_tip(approx[:, 0, :], hull.squeeze())
        if arrow_tip:
            cv2.drawContours(img, [cnt], -1, (0, 255, 0), 3)
            cv2.circle(img, arrow_tip, 3, (0, 0, 255), cv2.FILLED)

cv2.imshow("Image", img)
cv2.waitKey(0)
Original image:
Python program output:
Here is an approach with cv2.connectedComponentsWithStats. After extracting every arrow individually, I am getting the farthest points on the arrow. The distance between these points gives me (more or less) the length of the arrow. Also, I am calculating the angle of the arrow by using these two points, i.e., the slope between the two points. Finally, in order to find the thickness, I am drawing a straight line between these points, and calculating the shortest distance of each pixel of the arrow to the line. The most repeated distance value should give me the thickness of the arrow.
The algorithm is not perfect as it is, especially if the arrow is tilted. But I feel it is a good starting point and you can improve it.
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import distance
import math

img = cv2.imread('arrows.png', 0)
_, img = cv2.threshold(img, 10, 255, cv2.THRESH_BINARY_INV)
labels, stats = cv2.connectedComponentsWithStats(img, 8)[1:3]

for label in np.unique(labels)[1:]:
    arrow = labels == label
    indices = np.transpose(np.nonzero(arrow))  # y,x
    dist = distance.cdist(indices, indices, 'euclidean')
    far_points_index = np.unravel_index(np.argmax(dist), dist.shape)  # y,x
    far_point_1 = indices[far_points_index[0], :]  # y,x
    far_point_2 = indices[far_points_index[1], :]  # y,x

    ### Slope
    arrow_slope = (far_point_2[0] - far_point_1[0]) / (far_point_2[1] - far_point_1[1])
    arrow_angle = math.degrees(math.atan(arrow_slope))

    ### Length
    arrow_length = distance.cdist(far_point_1.reshape(1, 2), far_point_2.reshape(1, 2), 'euclidean')[0][0]

    ### Thickness
    x = np.linspace(far_point_1[1], far_point_2[1], 20)
    y = np.linspace(far_point_1[0], far_point_2[0], 20)
    line = np.array([[yy, xx] for yy, xx in zip(y, x)])
    thickness_dist = np.amin(distance.cdist(line, indices, 'euclidean'), axis=0).flatten()
    n, bins, patches = plt.hist(thickness_dist, bins=150)
    thickness = 2 * bins[np.argmax(n)]

    print(f"Thickness: {thickness}")
    print(f"Angle: {arrow_angle}")
    print(f"Length: {arrow_length}\n")

    plt.figure()
    plt.imshow(arrow, cmap='gray')
    plt.scatter(far_point_1[1], far_point_1[0], c='r', s=10)
    plt.scatter(far_point_2[1], far_point_2[0], c='r', s=10)
    plt.scatter(line[:, 1], line[:, 0], c='b', s=10)
    plt.show()
Thickness: 4.309328382835436
Angle: 58.94059117029002
Length: 102.7277956543408

Thickness: 7.851144897915465
Angle: -3.366460663429801
Length: 187.32325002519042

Thickness: 2.246710258748367
Angle: 55.51004336926862
Length: 158.93709447451215

Thickness: 25.060450615293227
Angle: -37.184706453233126
Length: 145.60219778561037
As your main concern is to filter out arrows from among different shapes, I have implemented a method using convexityDefects. You can read more about convexity defects here.
Also, I have added more arrows inside other shapes to demonstrate the robustness of the method.
Updated Image
Method to filter arrows from image using convexity defects.
def get_filter_arrow_image(threshold_image):
    blank_image = np.zeros_like(threshold_image)

    # dilate image to remove self-intersection errors
    kernel_dilate = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
    threshold_image = cv2.dilate(threshold_image, kernel_dilate, iterations=1)

    contours, hierarchy = cv2.findContours(threshold_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is not None:
        threshold_distance = 1000
        for cnt in contours:
            hull = cv2.convexHull(cnt, returnPoints=False)
            defects = cv2.convexityDefects(cnt, hull)
            if defects is not None:
                for i in range(defects.shape[0]):
                    start_index, end_index, farthest_index, distance = defects[i, 0]
                    # you can add more filtering based on these start, end and far points
                    # start = tuple(cnt[start_index][0])
                    # end = tuple(cnt[end_index][0])
                    # far = tuple(cnt[farthest_index][0])
                    if distance > threshold_distance:
                        cv2.drawContours(blank_image, [cnt], -1, 255, -1)
        return blank_image
    else:
        return None
filter arrow image
I have added methods for the angle and length of the arrow. If this isn't good enough, let me know; there are more complicated methods for angle detection based on 3 coordinate points.
def get_max_distance_point(cnt):
    max_distance = 0
    max_points = None
    for [[x1, y1]] in cnt:
        for [[x2, y2]] in cnt:
            distance = get_length((x1, y1), (x2, y2))
            if distance > max_distance:
                max_distance = distance
                max_points = [(x1, y1), (x2, y2)]
    return max_points

def angle_between_points(a, b):
    arrow_slope = (a[0] - b[0]) / (a[1] - b[1])
    arrow_angle = math.degrees(math.atan(arrow_slope))
    return arrow_angle

def get_arrow_info(arrow_image):
    arrow_info_image = cv2.cvtColor(arrow_image.copy(), cv2.COLOR_GRAY2BGR)
    contours, hierarchy = cv2.findContours(arrow_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    arrow_info = []
    if hierarchy is not None:
        for cnt in contours:
            # draw a single arrow on a blank image
            blank_image = np.zeros_like(arrow_image)
            cv2.drawContours(blank_image, [cnt], -1, 255, -1)

            point1, point2 = get_max_distance_point(cnt)
            angle = angle_between_points(point1, point2)
            length = get_length(point1, point2)

            cv2.line(arrow_info_image, point1, point2, (0, 255, 255), 1)
            cv2.circle(arrow_info_image, point1, 2, (255, 0, 0), 3)
            cv2.circle(arrow_info_image, point2, 2, (255, 0, 0), 3)
            cv2.putText(arrow_info_image, "angle : {0:0.2f}".format(angle),
                        point2, cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255), 1)
            cv2.putText(arrow_info_image, "length : {0:0.2f}".format(length),
                        (point2[0], point2[1] + 20), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255), 1)
        return arrow_info_image, arrow_info
    else:
        return None, None
else:
return None, None
angle and length image
CODE
import math
import cv2
import numpy as np

def get_filter_arrow_image(threshold_image):
    blank_image = np.zeros_like(threshold_image)

    # dilate image to remove self-intersection errors
    kernel_dilate = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
    threshold_image = cv2.dilate(threshold_image, kernel_dilate, iterations=1)

    contours, hierarchy = cv2.findContours(threshold_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is not None:
        threshold_distance = 1000
        for cnt in contours:
            hull = cv2.convexHull(cnt, returnPoints=False)
            defects = cv2.convexityDefects(cnt, hull)
            if defects is not None:
                for i in range(defects.shape[0]):
                    start_index, end_index, farthest_index, distance = defects[i, 0]
                    # you can add more filtering based on these start, end and far points
                    # start = tuple(cnt[start_index][0])
                    # end = tuple(cnt[end_index][0])
                    # far = tuple(cnt[farthest_index][0])
                    if distance > threshold_distance:
                        cv2.drawContours(blank_image, [cnt], -1, 255, -1)
        return blank_image
    else:
        return None

def get_length(p1, p2):
    line_length = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
    return line_length

def get_max_distance_point(cnt):
    max_distance = 0
    max_points = None
    for [[x1, y1]] in cnt:
        for [[x2, y2]] in cnt:
            distance = get_length((x1, y1), (x2, y2))
            if distance > max_distance:
                max_distance = distance
                max_points = [(x1, y1), (x2, y2)]
    return max_points

def angle_between_points(a, b):
    arrow_slope = (a[0] - b[0]) / (a[1] - b[1])
    arrow_angle = math.degrees(math.atan(arrow_slope))
    return arrow_angle

def get_arrow_info(arrow_image):
    arrow_info_image = cv2.cvtColor(arrow_image.copy(), cv2.COLOR_GRAY2BGR)
    contours, hierarchy = cv2.findContours(arrow_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    arrow_info = []
    if hierarchy is not None:
        for cnt in contours:
            # draw a single arrow on a blank image
            blank_image = np.zeros_like(arrow_image)
            cv2.drawContours(blank_image, [cnt], -1, 255, -1)

            point1, point2 = get_max_distance_point(cnt)
            angle = angle_between_points(point1, point2)
            length = get_length(point1, point2)

            cv2.line(arrow_info_image, point1, point2, (0, 255, 255), 1)
            cv2.circle(arrow_info_image, point1, 2, (255, 0, 0), 3)
            cv2.circle(arrow_info_image, point2, 2, (255, 0, 0), 3)
            cv2.putText(arrow_info_image, "angle : {0:0.2f}".format(angle),
                        point2, cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255), 1)
            cv2.putText(arrow_info_image, "length : {0:0.2f}".format(length),
                        (point2[0], point2[1] + 20), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255), 1)
        return arrow_info_image, arrow_info
    else:
        return None, None

if __name__ == "__main__":
    image = cv2.imread("image2.png")

    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh_image = cv2.threshold(gray_image, 100, 255, cv2.THRESH_BINARY_INV)
    cv2.imshow("thresh_image", thresh_image)

    arrow_image = get_filter_arrow_image(thresh_image)
    if arrow_image is not None:
        cv2.imshow("arrow_image", arrow_image)
        cv2.imwrite("arrow_image.png", arrow_image)

        arrow_info_image, arrow_info = get_arrow_info(arrow_image)
        cv2.imshow("arrow_info_image", arrow_info_image)
        cv2.imwrite("arrow_info_image.png", arrow_info_image)

    cv2.waitKey(0)
    cv2.destroyAllWindows()
Convexity defects on a thin arrow:
Blue point - start point of defect
Green point - far point of defect
Red point - end point of defect
Yellow line - defect line from start point to end point.
image
defect-1
defect-2
and so on...

Extract a fixed number of squares from an image with Python/OpenCV

I have several scanned images I would like to process with Python/OpenCV. Each of these images (see an example below) contains n rows of coloured squares. Each of these squares has the same size. The goal is to crop each of these squares and to extract the data from it.
I found there some code which is able to extract squares from an image.
Here is my code where I have used it :
import numpy as np
import cv2
from matplotlib import pyplot as plt

def angle_cos(p0, p1, p2):
    import numpy as np
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )

def find_squares(img):
    import cv2 as cv
    import numpy as np
    img = cv.GaussianBlur(img, (5, 5), 0)
    squares = []
    for gray in cv.split(img):
        for thrs in range(0, 255, 26):
            if thrs == 0:
                bin = cv.Canny(gray, 0, 50, apertureSize=5)
                bin = cv.dilate(bin, None)
            else:
                _retval, bin = cv.threshold(gray, thrs, 255, cv.THRESH_BINARY)
            contours, _hierarchy = cv.findContours(bin, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
            for cnt in contours:
                cnt_len = cv.arcLength(cnt, True)
                cnt = cv.approxPolyDP(cnt, 0.02*cnt_len, True)
                if len(cnt) == 4 and cv.contourArea(cnt) > 1000 and cv.isContourConvex(cnt):
                    cnt = cnt.reshape(-1, 2)
                    max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] ) for i in range(4)])
                    if max_cos < 0.1:
                        squares.append(cnt)
    print(len(squares))
    return squares

img = cv2.imread("test_squares.jpg", 1)
plt.axis("off")
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()

squares = find_squares(img)
cv2.drawContours(img, squares, -1, (0, 255, 0), 1)

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
However, it finds too many squares (100 instead of 15!). Looking at the image, it seems that OpenCV finds a lot of contours for each square.
I'm pretty sure that it can be optimized, since the squares have more or less the same size and are far from each other. As a very beginner in OpenCV, I haven't yet found a way to give more criteria to the find_squares function in order to get only 15 squares at the end of the routine. Maybe the contour area can be constrained?
I have also found there a more detailed code (very close to the previous one), but it seems to be developed for an old version of OpenCV. I haven't managed to make it work (and so to modify it).
This is another more robust method.
I used this code to find the contours in the image (the full code can be found in this gist):
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Define square size
min_square_size = 987

# Read image
img = cv2.imread('/home/stephen/Desktop/3eY0k.jpg')

# Threshold and find edges
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold the image - segment white background from post-it notes
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY_INV)

# Find the contours
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
I iterated through the contours. I only looked at the contours that were a reasonable size. I found the four corners of each contour.
# Create a list for post-it images
images = []

# Iterate through the contours in the image
for contour in contours:
    area = cv2.contourArea(contour)
    # If the contour is not really small, or really big
    h, w = img.shape[0], img.shape[1]
    if area > min_square_size and area < h*w - (2*(h+w)):
        # Get the four corners of the contour
        epsilon = .1 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        # Draw the points
        for point in approx:
            cv2.circle(img, tuple(point[0]), 2, (255, 0, 0), 2)
        # Warp it to a square
        pts1 = np.float32(approx)
        pts2 = np.float32([[0, 0], [300, 0], [300, 300], [0, 300]])
        M = cv2.getPerspectiveTransform(pts1, pts2)
        dst = cv2.warpPerspective(img, M, (300, 300))
        # Add the square to the list of images
        images.append(dst.copy())
The post-it notes are squares, but because the camera warps the objects in the image they do not appear as squares. I used warpPerspective to make the post-it notes square shapes. Only a few of them are shown in this plot (there are more that didn't fit):
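One hedged refinement (not in the code above): approxPolyDP returns the corners in contour order, which does not necessarily match the order of pts2, so a warp can come out rotated or mirrored. Ordering the corners first keeps the output consistent:
def order_corners(pts):
    # Order as top-left, top-right, bottom-right, bottom-left to match pts2.
    pts = pts.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)       # x + y: min = top-left, max = bottom-right
    d = np.diff(pts, axis=1)  # y - x: min = top-right, max = bottom-left
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

# pts1 = order_corners(approx) instead of pts1 = np.float32(approx)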
If your problem is that too many contours (edges) are found in the image, my suggestion is to modify the edge-finding part first. It'll be by far the easiest modification to make.
In particular, you'll need to change this call:
bin = cv.Canny(gray, 0, 50, apertureSize=5)
The cv.Canny() function takes as arguments two threshold values, the aperture size, and a boolean to indicate whether a precise form of gradient is used. Play with those parameters, and my guess is, you'll get much better results.
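As a hedged starting point (these values are guesses to tune, not a known fix for this particular image):
# Raise the thresholds to suppress weak edges inside the squares, use a
# smaller aperture, and enable the more accurate L2 gradient norm.
bin = cv.Canny(gray, 100, 200, apertureSize=3, L2gradient=True)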
