I'm not very good at image processing so this question may have a very simple answer. I'm not even sure that I use the right words to describe my problem.
I'm trying to rotate images taken by different cameras. For example look at the two images below. I have absolutely no information regarding the camera used. I'm trying to rotate the images so the sharpest point of the leaf is always looking up.
I also need a way to measure to what extent the images were correctly rotated.
I have tried using the code below but I can't seem to get a lot of meaningful information. I think it's because the images are taken with different cameras.
import cv2
import imageio
import matplotlib.pyplot as plt
imgRef = imageio.imread("Ex2/Picture 538.jpg")
imgTest = imageio.imread("Ex2/Capture586_export001_r.JPG")
imgTest_grey = cv2.cvtColor(imgTest, cv2.COLOR_BGR2GRAY)
imgRef_grey = cv2.cvtColor(imgRef, cv2.COLOR_BGR2GRAY)
height, width = imgRef_grey.shape
orb = cv2.ORB_create(1000)
keypoints1, descriptors1 = orb.detectAndCompute(imgRef_grey, None)
keypoints2, descriptors2 = orb.detectAndCompute(imgTest_grey, None)
# Match features.
matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match( descriptors2, descriptors1, None)
# Sort matches by score
matches.sort(key=lambda x: x.distance, reverse=False)
# Remove not so good matches
numGoodMatches = int(len(matches) * 0.3)
matches = matches[:numGoodMatches]
# Draw top matches
imMatches = cv2.drawMatches(imgTest,keypoints2, imgRef, keypoints1, matches, None)
plt.imshow(imMatches)
plt.show()
All help is highly appreciated.
I would say that, based on the shape of the leaf (especially the form of its sharpest point), the current angle and the required rotation angle cannot be detected deterministically.
Anyway, if a tolerance of 5 degrees is OK for you, the following is my suggestion.
Take the original image, convert it to grayscale, blur it, and find the biggest contour (the leaf):
import cv2 as cv
import numpy as np

# img is the input leaf image (BGR)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
blurred = cv.GaussianBlur(gray, (17,17),0)
_, thresh_image = cv.threshold(blurred, 120, 255, cv.THRESH_BINARY_INV)
contours, hierarchy = cv.findContours(thresh_image, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
c_max = max(contours, key = cv.contourArea)
Find the ellipse fitted to the leaf and get its angle. This of course does not consider the shape of the leaf, so something that from your perspective is +90 may actually be -90 (the sharpest point is on the other side).
So, for example, if we detect the angle to be 90 and rotate by -90, it might happen that the sharpest point ends up pointing down. The hard part is detecting that the sharpest point is downward and, if so, rotating by another 180 degrees.
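The ellipse-fitting step itself isn't shown above; a minimal sketch of it, using the contour found earlier (and assuming the leaf contour has at least 5 points), could be:
# fit an ellipse to the leaf contour; the third element of the result is the
# rotation angle of the fitted ellipse in degrees
(center_x, center_y), (axis_1, axis_2), angle = cv.fitEllipse(c_max)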
# remove the background so that the rotation is easier (since the aspect ratio of images are not 1)
img_bg_removed = img.copy()
stencil = np.zeros(img_bg_removed.shape[:-1]).astype(np.uint8)
cv.drawContours(stencil, [c_max], -1, color=(255, 255, 255), thickness=cv.FILLED)
sel = stencil != 255 # select everything outside the leaf mask
img_bg_removed[sel] = (0,0,0)
#rotate the image
img_rotated = rotate_image(img_bg_removed, -1 * angle)
Again find the contour and the fitting rectangle. Cut the fitting rectangle in half and check in which half the proportion of white pixels (leaf) to all pixels is lower. If the lower half has the smaller proportion, the sharp point is down there and the leaf should be rotated by 180 degrees again.
gray = cv.cvtColor(img_rotated, cv.COLOR_BGR2GRAY)
blurred = cv.GaussianBlur(gray, (17,17),0)
_, thresh_image = cv.threshold(blurred, 50, 255, cv.THRESH_BINARY)
contours, hierarchy = cv.findContours(thresh_image, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
c_max = max(contours, key = cv.contourArea)
x_rect,y_rect,w_rect,h_rect = cv.boundingRect(c_max)
up_side = thresh_image[y_rect: y_rect+h_rect//2, :][:, x_rect: x_rect + w_rect]
down_side = thresh_image[y_rect+h_rect//2: y_rect+h_rect, :][:, x_rect: x_rect + w_rect]
up_proportion = (up_side[:,:] == 255).sum() / up_side.size
down_proportion = (down_side[:,:] == 255).sum() / down_side.size
if down_proportion < up_proportion:
    img_rotated = rotate_image(img_rotated, -180)
By the way, the rotation function is borrowed from here.
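The linked helper isn't reproduced in this post; as an assumption, a minimal rotate_image that rotates about the image centre and keeps the original canvas size could look like this:
def rotate_image(image, angle):
    # rotate about the image centre, keeping the original canvas size
    (h, w) = image.shape[:2]
    M = cv.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv.warpAffine(image, M, (w, h))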
PS: this solution can be optimized in a few ways. One of them is that the second thresholding/contour-finding step can be removed: when we find the surrounding ellipse we can also find the surrounding rectangle, and then, as we rotate the leaf, we rotate the rectangle too.
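A sketch of that optimization (assuming rotate_image rotates about the image centre without resizing the canvas, as in the sketch above):
# rotate the corners of the pre-rotation bounding rect with the same matrix,
# instead of thresholding and finding contours a second time
(h_img, w_img) = img.shape[:2]
M = cv.getRotationMatrix2D((w_img / 2, h_img / 2), -1 * angle, 1.0)

x_r, y_r, w_r, h_r = cv.boundingRect(c_max)
corners = np.array([[x_r, y_r],
                    [x_r + w_r, y_r],
                    [x_r + w_r, y_r + h_r],
                    [x_r, y_r + h_r]], dtype=np.float32)

rotated_corners = cv.transform(corners.reshape(-1, 1, 2), M).reshape(-1, 2)
x_rect, y_rect, w_rect, h_rect = cv.boundingRect(rotated_corners.astype(np.int32))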
Hi, I'm trying to create an OCR system where the model should be able to read an uploaded document. However, a lot of the time the uploaded documents are skewed or tilted. I plan to straighten and/or resize the document based on a template.
To achieve this, I intend to use feature mapping and homography. However, whenever I calculate my keypoints and descriptors (using ORB) and try to match them using brute-force matching, none of the features seem to match. Here's the code that I've used so far and the results with it. Can someone point me in the right direction if I'm missing something or doing something incorrectly?
def straighten_image(ORIG_IMG, IMG2):
    # read both the images:
    orig_image = cv2.imread(ORIG_IMG)
    img_input = cv2.imread(IMG2)

    orig_gray_scale = cv2.cvtColor(orig_image, cv2.COLOR_BGR2GRAY)
    gray_scale_img = cv2.cvtColor(img_input, cv2.COLOR_BGR2GRAY)

    #Detect ORB features and compute descriptors
    MAX_NUM_FEATURES = 100
    orb = cv2.ORB_create(MAX_NUM_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(orig_gray_scale, None)
    keypoints2, descriptors2 = orb.detectAndCompute(gray_scale_img, None)

    #display image with keypoints
    orig_wid_decriptors = cv2.drawKeypoints(orig_gray_scale, keypoints1, outImage = np.array([]), color= (255, 0, 0), flags= cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    inp_wid_decriptors = cv2.drawKeypoints(img_input, keypoints2, outImage = np.array([]), color= (255, 0, 0), flags= cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    #Match features
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)
    print(type(matches))

    #sort matches
    # matches.sort(key=lambda x: x.distance, reverse=False)

    #Remove not-so-good matches
    numGoodMatches = int(len(matches)*0.1)
    matches = matches[:numGoodMatches]

    #Draw Top matches
    im_matches = cv2.drawMatches(orig_gray_scale, keypoints1, gray_scale_img, keypoints2, matches, None)
    cv2.imshow("", im_matches)
    cv2.waitKey(0)

    #Homography
    points1 = np.zeros((len(matches), 2), dtype = np.float32)
    points2 = np.zeros((len(matches), 2), dtype = np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    #Find homography:
    h, mask = cv2.findHomography(points2, points1, cv2.RANSAC)

    #Warp image
    # Use homography to warp image
    height, width = orig_gray_scale.shape
    inp_reg = cv2.warpPerspective(gray_scale_img, h, (width, height), borderValue = 255)
    return inp_reg
import cv2
import matplotlib.pyplot as plt
import numpy as np
template = "template_aadhaar.jpg"
test = "test.jpeg"
str_img = straighten_image(template, test)
cv2.imshow("", str_img)
cv2.waitKey(0)
EDIT: If I use my own ID-card (perfectly straight) as the template and try to align the same ID-card that is tilted, it matches the features and re-aligns the tilted image perfectly. However, I need the model to be able to re-align any other ID-card based on the template. By any ID, I mean the details could be different but the location and font would be exactly the same.
EDIT#2: As suggested by @Olli, I tried using a template with only those features that are the same for all Aadhaar cards. Image attached. But the feature matching is still a bit arbitrary.
Feature mapping tries to detect the most significant features on an image and tries to match them. This only works if the features really are the same. If the features are similar but different, it will fail.
If you have some features that are always the same (e.g. the logo on the top left), you could try to create a template with only these features and blank in all other areas, i.e. remove the person and the name and the QR code and...
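A minimal sketch of such a masked template (the region coordinates below are made up and would have to be measured on the real template):
import cv2
import numpy as np

template = cv2.imread("template_aadhaar.jpg")
mask = np.zeros(template.shape[:2], dtype=np.uint8)
# keep only regions that are identical on every card (hypothetical coordinates)
for x, y, w, h in [(20, 20, 120, 120),    # emblem / logo area
                   (20, 400, 560, 60)]:   # footer strip
    mask[y:y+h, x:x+w] = 255
template_masked = cv2.bitwise_and(template, template, mask=mask)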
But because there are more differences ("Government of India" inside the green area on one image and above it on the other, ...) than similarities, I would try to find the rotation based on the corners and/or the edges of the shape.
For example:
convert to grayscale
perform canny edge detection
detect corners, e.g. using cv2.goodFeaturesToTrack. If some corners are hidden, try finding the sides using Hough lines instead.
undistort
If some images are rotated 90, 180 or 270 degrees after undistortion, you could use a filter to find the orange and green areas and rotate so that this area is at the top again.
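A rough sketch of these steps (the thresholds and the corner count are assumptions and will need tuning; the Hough-line fallback and the undistortion step are left out):
import cv2
import numpy as np

img = cv2.imread("test.jpeg")  # path from the question
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # edge map, also usable for cv2.HoughLinesP

# look for up to four strong corners of the card outline
corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01, minDistance=100)
if corners is not None:
    for cx, cy in corners.reshape(-1, 2):
        cv2.circle(img, (int(cx), int(cy)), 8, (0, 0, 255), -1)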
This is my first question on Stackoverflow. I'm a little excited, forgive me if I'm wrong. We have mixed ellipses with and without overlapping drawn randomly from paint. I'm sharing the image I'm working on and my code. I am not a professional in opencv module, I wrote my code as a result of research inspired by sources.
The purpose of my code is,
Detect the randomly drawn ellipses (with and without overlapping) using the cv2.fitEllipse method. Then find the major axis, minor axis and area of each detected ellipse.
The problem with my code is actually this,
For the overlapping ellipses, 2 ellipses should be fitted under normal conditions, but about 6-7 ellipses are fitted instead, and I cannot obtain the values I want to calculate.
I'm open to your help, thank you in advance.
Example image:
import cv2
import numpy as np
import random as rng
import math
img = cv2.imread('overlapping_ellipses.png', 1)
imge= cv2.cvtColor(img,cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(imge, cv2.COLOR_BGR2GRAY)
blur = cv2.blur(gray, (2,2), 3)
edged = cv2.Canny(blur, 50, 100)
kernel= np.ones((2,2))
edged1 = cv2.dilate(edged, kernel, iterations=2)
edged2 = cv2.erode(edged1, kernel, iterations=2)
def thresh_callback(val):
    threshold = val
    canny_output = cv2.Canny(edged2, threshold, threshold * 4)
    contours, _ = cv2.findContours(canny_output, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    minRect = [None]*len(contours)
    minEllipse = [None]*len(contours)
    for i, c in enumerate(contours):
        minRect[i] = cv2.minAreaRect(c)
        if c.shape[0] > 5:
            minEllipse[i] = cv2.fitEllipse(c)
            (x1,y1),(d1,d2),angle = minEllipse[i]
            print('\nX1: ', round(x1,4), '\nY1: ', round(y1,4), '\nD1:',round(d1,4), '\nD2',round(d2,4), '\nAngle:', round(angle,4))
            long= x1-d2
            small= y1-d1
            major= long/2
            minor= small/2
            pixel= 37.795275591
            major1= major/pixel
            minor1= minor/pixel
            print('--------------------------------')
            print('Major axis is: ', abs(round(major1,4)), 'cm')
            print('Minor axis is: ', abs(round(minor1,4)), 'cm')
            print('--------------------------------')
    drawing = np.zeros((canny_output.shape[1], canny_output.shape[1], 3), dtype=np.uint8)
    for i, c in enumerate(contours):
        color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
        cv2.drawContours(drawing, contours, i, color)
        if c.shape[0] > 5:
            cv2.ellipse(drawing, minEllipse[i], color, 1)
    cv2.imshow('Fitting Ellips', drawing)
source_window = 'Source'
cv2.namedWindow(source_window)
cv2.imshow(source_window, img)
max_thresh = 255
thresh = 100
cv2.createTrackbar('Canny Thresh:', source_window, thresh, max_thresh, thresh_callback)
thresh_callback(thresh)
cv2.waitKey()
Step 1: Identify and separate the blobs in the input image.
Since we don't care about colour information here, we can directly load the image as grayscale.
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
The input image contains black ellipses on white background.
We only need the external contours of the blobs, and cv2.findContours expects white blobs on black background.
Therefore we need to invert the image. At the same time we need a binary image. We can use cv2.threshold to accomplish both tasks.
Once we detect the blob contours, we can collect some useful information for each blob into a simple map-based data structure.
def detect_blobs(image):
    _,img_binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for i, contour in enumerate(contours):
        orig_x, orig_y, width, height = cv2.boundingRect(contour)
        roi_image = image[orig_y:orig_y+height,orig_x:orig_x+width]
        blobs.append({
            "i" : i
            , "contour" : contour
            , "origin" : (orig_x, orig_y)
            , "size" : (width, height)
            , "roi_image" : roi_image
        })
    return blobs
Step 2: Process each blob
First we need to determine whether the blob is a single ellipse, or whether it is a pair of intersecting ellipses.
One way to do this is by looking for convexity defects.
Since the coordinates of our contour are represented by integers, even the single-ellipse scenario will exhibit some convexity defects.
However, their magnitude (the distance between the furthest point on the contour from the enclosing convex hull segment) will be very small, generally below 1 pixel.
On the other hand, the contour of a pair of intersecting ellipses will have large convexity defects, one for each of the four points where the curves intersect.
This distinction can be seen on the following two images (contour is blue, convex hull red, identified intersection points/locations of large convexity defects are orange circles):
Single ellipse
Two intersecting ellipses
We therefore filter out any small convexity defects, and note the locations of the large ones. Now we're left with 3 possible scenarios.
Scenario A: No intersection points detected
Only small convexity defects were identified, which means this is very likely a single ellipse. We simply fit an ellipse to the contour and move on.
Scenario B: Exactly 4 intersection points detected
In this case we have 2 intersecting ellipses. We use the intersection points to split the contour into 4 segments, one for each "lobe" of the blob. Each of the segments should include the two intersection points that delimit it.
In the following picture, the segments are shown in green, yellow, cyan and magenta, while the intersection points are orange circles:
Now, we can combine the pairs of segments that lie opposite each other (i.e. green+cyan and yellow+magenta) to get two lists of points, one for each ellipse. Again, we simply fit an ellipse to each list of points.
Scenario C: Some other number of intersection points detected
This is considered an invalid situation.
def process_blob(blob):
    MAJOR_DEFECT_THRESHOLD = 2.0

    contour = blob["contour"]
    blob["hull"] = cv2.convexHull(contour)
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)

    intersections = []
    for i,defect in enumerate(np.squeeze(defects, 1)):
        _, _, far_idx, far_dist = defect
        real_far_dist = far_dist / 256.0
        if real_far_dist >= MAJOR_DEFECT_THRESHOLD:
            intersections.append(far_idx)

    if len(intersections) == 0:
        print("One ellipse")
        blob["ellipses"] = [cv2.fitEllipse(contour)]
    elif len(intersections) == 4:
        print("Two ellipses")
        blob["segments"] = [
            contour[intersections[0]:intersections[1]+1]
            , contour[intersections[1]:intersections[2]+1]
            , contour[intersections[2]:intersections[3]+1]
            , np.vstack([contour[intersections[3]:],contour[:intersections[0]+1]])
        ]
        split_contours = [
            np.vstack([blob["segments"][0], blob["segments"][2]])
            , np.vstack([blob["segments"][1], blob["segments"][3]])
        ]
        blob["ellipses"] = [cv2.fitEllipse(c) for c in split_contours]
    else:
        print("Invalid scenario")
        blob["ellipses"] = []

    return blob["ellipses"]
At this point, it's trivial to calculate the parameters you need -- I'll leave this as an exercise to the reader.
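For completeness, here is a sketch of that calculation (the pixels-per-centimetre factor is taken from the question and is an assumption about the scan resolution):
import math

PIXELS_PER_CM = 37.795275591  # conversion factor used in the question

for ellipse in blob["ellipses"]:
    (cx, cy), (d1, d2), angle = ellipse          # d1, d2 are full axis lengths in pixels
    major, minor = max(d1, d2), min(d1, d2)
    area = math.pi * (major / 2) * (minor / 2)   # ellipse area = pi * a * b
    print("major axis: %.2f cm" % (major / PIXELS_PER_CM))
    print("minor axis: %.2f cm" % (minor / PIXELS_PER_CM))
    print("area: %.2f cm^2" % (area / PIXELS_PER_CM ** 2))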
As a bonus, here's some simple visualization for debugging purposes:
def visualize_blob(blob):
    PADDING = 20

    orig_x, orig_y = blob["origin"]
    offset = (orig_x - PADDING, orig_y - PADDING)

    input_img = cv2.copyMakeBorder(blob["roi_image"]
        , PADDING, PADDING, PADDING, PADDING
        , cv2.BORDER_CONSTANT, None, 255)

    adjusted_img = cv2.add(input_img, 127) - 63

    output_img_ch = cv2.cvtColor(adjusted_img, cv2.COLOR_GRAY2BGR)
    output_img_seg = output_img_ch.copy()
    output_img_el = output_img_ch.copy()

    cv2.drawContours(output_img_ch, [blob["hull"] - offset], 0, (127,127,255), 4)
    cv2.drawContours(output_img_ch, [blob["contour"] - offset], 0, (255,127,127), 2)

    SEGMENT_COLORS = [(0,255,0),(0,255,255),(255,255,0),(255,0,255)]
    if "segments" in blob:
        for i in range(4):
            cv2.polylines(output_img_seg, [blob["segments"][i] - offset], False, SEGMENT_COLORS[i], 4)
        for i in range(4):
            center = (blob["segments"][i] - offset)[0][0]
            cv2.circle(output_img_ch, center, 4, (0,191,255), -1)
            cv2.circle(output_img_seg, center, 4, (0,191,255), -1)

    for ellipse in blob["ellipses"]:
        offset_ellipse = ((ellipse[0][0] - offset[0], ellipse[0][1] - offset[1]), ellipse[1], ellipse[2])
        cv2.ellipse(output_img_el, offset_ellipse, (0,0,255), 2)

    cv2.imshow('', np.hstack([output_img_ch,output_img_seg, output_img_el]))
    cv2.imwrite('output_%d_ch.png' % blob["i"], output_img_ch)
    cv2.imwrite('output_%d_seg.png' % blob["i"], output_img_seg)
    cv2.imwrite('output_%d_el.png' % blob["i"], output_img_el)
    cv2.waitKey()
Pulling it all together:
import cv2
import numpy as np
## INSERT THE FUNCTIONS LISTED ABOVE IN THE QUESTION ##
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
blobs = detect_blobs(image)
print("Found %d blob(s)." % len(blobs))
for blob in blobs:
    process_blob(blob)
    visualize_blob(blob)
As part of a program which contains a series of images to be processed, I first need to detect a green-coloured rectangle. I'm trying to write a program that doesn't use colour masking, since the lighting and glare on the images will make it difficult to find the appropriate HSV ranges.
(p.s. I already have two questions based on this program, but this one is unrelated to those. It's not a follow up, I want to address a separate issue.)
I used the standard rectangle detection technique, making use of the findContours() and approxPolyDP() methods. I added some constraints that got rid of unnecessary rectangles (like aspectRatio > 2.5, since my desired rectangle is clearly the "widest", and area > 1500, to discard random small rectangles).
import numpy as np
import cv2 as cv
img = cv.imread("t19.jpeg")
width=0
height=0
start_x=0
start_y=0
end_x=0
end_y=0
output = img.copy()
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
#threshold
th = cv.adaptiveThreshold(gray,255,cv.ADAPTIVE_THRESH_GAUSSIAN_C,cv.THRESH_BINARY,9,2)
cv.imshow("th",th)
#rectangle detection
contours, _ = cv.findContours(th, cv.RETR_TREE, cv.CHAIN_APPROX_NONE)
for contour in contours:
    approx = cv.approxPolyDP(contour, 0.01* cv.arcLength(contour, True), True)
    cv.drawContours(img, [approx], 0, (0, 0, 0), 5)
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    x1 ,y1, w, h = cv.boundingRect(approx)
    a=w*h
    if len(approx) == 4 and x>15 :
        aspectRatio = float(w)/h
        if aspectRatio >= 2.5 and a>1500:
            print(x1,y1,w,h)
            width=w
            height=h
            start_x=x1
            start_y=y1
            end_x=start_x+width
            end_y=start_y+height
            cv.rectangle(output, (start_x,start_y), (end_x,end_y), (0,0,255),3)
            cv.putText(output, "rectangle "+str(x1)+" , " +str(y1-5), (x1, y1-5), cv.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))
cv.imshow("op",output)
print("start",start_x,start_y)
print("end", end_x,end_y)
print("width",width)
print("height",height)
It is working flawlessly for all the images, except one:
I used adaptive thresholding to create the threshold, which was used by the findContours() method.
I tried displaying the threshold and the output, and it looks like this:
The thresholds for the other images also looked similar...so I can't pinpoint what exactly has gone wrong in the rectangle detection procedure.
Some tweaks I have tried:
Changing the last two parameters of the adaptive threshold method: I tried 11,1 and 9,1, and for both of them the rectangle in the threshold looked more prominent, but in those cases the output detected no rectangles at all.
I have already disregarded Otsu thresholding, as it is not working for about 4 of my test images.
What exactly can I tweak in the rectangle detection procedure for it to detect this rectangle?
I also request, if possible, only slight modifications to this method and not some entirely new method. As I have mentioned, this method is working perfectly for all of my other test images, and if a newly suggested method works for this image but fails for the others, then I'll find myself back here asking why it failed.
Edit: The method that abss suggested worked for this image, but failed for:
image 4
image 1, far off
Other test images:
image 1, normal
image 2
image 3
image 9, part 1
image 9, part 2
You can easily do it by adding these lines of code after your threshold:
kernel = cv.getStructuringElement(cv.MORPH_RECT,(3,3))
th = cv.morphologyEx(th,cv.MORPH_OPEN,kernel)
This will remove noise within the image. You can see this link for a better understanding of morphologyEx: https://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html
The results I got are shown below:
I have made a few modifications to your code so that it works with all of your test images. There are a few false positives that you may have to filter based on the HSV colour range for green (since your target is always a shade of green). Alternatively, you can take into account the fact that one of the child contours in the hierarchy of your ROI contour is going to have an area about 0.4 or more times that of the outer contour. Here are the modifications:
Used DoG for thresholding useful contours
Changed the arcLength multiplier to 0.05 instead of 0.01, as square corners are not smooth
cv2.RETR_CCOMP to get 2 level hierarchy
Moved ApproxPolyDP inside to make it more efficient
Contour filter area changed to 600 to filter ROI for all test images
Removed a little bit of unnecessary code
Check with all the other test images that you may have and modify the parameters accordingly.
img = cv2.imread("/path/to/your_image")
width=0
height=0
start_x=0
start_y=0
end_x=0
end_y=0
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gw, gs, gw1, gs1, gw2, gs2 = (3,1.0,7,3.0, 3, 2.0)
img_blur = cv2.GaussianBlur(gray, (gw, gw), gs)
g1 = cv2.GaussianBlur(img_blur, (gw1, gw1), gs1)
g2 = cv2.GaussianBlur(img_blur, (gw2, gw2), gs2)
ret, thg = cv2.threshold(g2-g1, 127, 255, cv2.THRESH_BINARY)
contours, hier = cv2.findContours(thg, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
img_cpy = img.copy()
width=0
height=0
start_x=0
start_y=0
end_x=0
end_y=0
for i in range(len(contours)):
    if hier[0][i][2] == -1:
        continue
    x ,y, w, h = cv2.boundingRect(contours[i])
    a=w*h
    aspectRatio = float(w)/h
    if aspectRatio >= 2.5 and a>600:
        approx = cv2.approxPolyDP(contours[i], 0.05* cv2.arcLength(contours[i], True), True)
        if len(approx) == 4 and x>15 :
            width=w
            height=h
            start_x=x
            start_y=y
            end_x=start_x+width
            end_y=start_y+height
            cv2.rectangle(img_cpy, (start_x,start_y), (end_x,end_y), (0,0,255),3)
            cv2.putText(img_cpy, "rectangle "+str(x)+" , " +str(y-5), (x, y-5), cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))
plt.imshow(img_cpy)
print("start",start_x,start_y)
print("end", end_x,end_y)
I have several scanned images I would like to process with Python/OpenCV. Each of these images (see an example below) contains n rows of coloured squares. Each of these squares has the same size. The goal is to crop each of these squares and to extract the data from it.
I have found some code which is able to extract squares from an image.
Here is my code where I have used it:
import numpy as np
import cv2
from matplotlib import pyplot as plt
def angle_cos(p0, p1, p2):
    import numpy as np
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )

def find_squares(img):
    import cv2 as cv
    import numpy as np
    img = cv.GaussianBlur(img, (5, 5), 0)
    squares = []
    for gray in cv.split(img):
        for thrs in range(0, 255, 26):
            if thrs == 0:
                bin = cv.Canny(gray, 0, 50, apertureSize=5)
                bin = cv.dilate(bin, None)
            else:
                _retval, bin = cv.threshold(gray, thrs, 255, cv.THRESH_BINARY)
            contours, _hierarchy = cv.findContours(bin, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
            for cnt in contours:
                cnt_len = cv.arcLength(cnt, True)
                cnt = cv.approxPolyDP(cnt, 0.02*cnt_len, True)
                if len(cnt) == 4 and cv.contourArea(cnt) > 1000 and cv.isContourConvex(cnt):
                    cnt = cnt.reshape(-1, 2)
                    max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] ) for i in range(4)])
                    if max_cos < 0.1:
                        squares.append(cnt)
    print(len(squares))
    return squares
img = cv2.imread("test_squares.jpg",1)
plt.axis("off")
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
squares = find_squares(img)
cv2.drawContours( img, squares, -1, (0, 255, 0), 1 )
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
However, it finds too many squares (100 instead of 15!). Looking at the image, it seems that OpenCV finds a lot of contours for each square.
I'm pretty sure that it can be optimized, since the squares have more or less the same size and are far from each other. As a very beginner in OpenCV, I haven't yet found a way to give more criteria to the "find_squares" function in order to get only 15 squares at the end of the routine. Maybe the contour area can be maximized?
I have also found a more detailed code (very close to the previous one), but it seems to have been developed for an old version of OpenCV. I haven't managed to make it work (and so to modify it).
This is another more robust method.
I used this code to find the contours in the image (the full code can be found in this gist):
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Define square size
min_square_size = 987
# Read Image
img = cv2.imread('/home/stephen/Desktop/3eY0k.jpg')
# Threshold and find edges
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Threshold the image - segment white background from post it notes
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY_INV);
# Find the contours
_, contours, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
I iterated through the contours. I only looked at the contours that were a reasonable size. I found the four corners of each contour.
# Create a list for post-it images
images = []
# Iterate through the contours in the image
for contour in contours:
    area = cv2.contourArea(contour)
    # If the contour is not really small, or really big
    h,w = img.shape[0], img.shape[1]
    if area > min_square_size and area < h*w-(2*(h+w)):
        # Get the four corners of the contour
        epsilon = .1 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        # Draw the point
        for point in approx: cv2.circle(img, tuple(point[0]), 2, (255,0,0), 2)
        # Warp it to a square
        pts1 = np.float32(approx)
        pts2 = np.float32([[0,0],[300,0],[300,300],[0,300]])
        M = cv2.getPerspectiveTransform(pts1,pts2)
        dst = cv2.warpPerspective(img,M,(300,300))
        # Add the square to the list of images
        images.append(dst.copy())
The post-it notes are squares, but because the camera warps the objects in the image they do not appear as squares. I used warpPerspective to make the post-it notes square shapes. Only a few of them are shown in this plot (there are more that didn't fit):
If your problem is that too many contours (edges) are found in the image, my suggestion is to modify the edge-finding part first. It'll be by far the easiest modification to make.
In particular, you'll need to change this call:
bin = cv.Canny(gray, 0, 50, apertureSize=5)
The cv.Canny() function takes as arguments two threshold values, the aperture size, and a boolean to indicate whether a precise form of gradient is used. Play with those parameters, and my guess is, you'll get much better results.
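For example, something along these lines might be a starting point (the threshold values are only a guess and need experimentation on your images):
# higher thresholds keep fewer, stronger edges; L2gradient uses the more precise gradient norm
bin = cv.Canny(gray, 100, 300, apertureSize=5, L2gradient=True)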
This is a follow-up to a couple of similar questions regarding square detection, in which karlphillip, mevatron, and abid-rahman-k came up with some cool approaches.
I am trying to design a robust square detection algorithm to help isolate a picture of a receipt from the rest of the image. My code is built off the convex hull approach from the previous questions but it's choking on an image where one of the points isn't in the image and the edges of the receipt have aberrations due to a pen holder on the left side.
How can I detect the corners on this receipt?
Here is the image:
Here is my code:
import cv2
import numpy as np
img = cv2.imread('taco.jpg')
img = cv2.resize(img,(1944,2592))
img = cv2.medianBlur(img,31)
img = cv2.GaussianBlur(img,(0,0),3)
grayscale = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
thresh = cv2.Canny(grayscale, 10, 20)
thresh = cv2.dilate(thresh,None)
contours,hier = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt)>250: # remove small areas like noise etc
        hull = cv2.convexHull(cnt) # find the convex hull of contour
        hull = cv2.approxPolyDP(hull,0.1*cv2.arcLength(hull,True),True)
        if len(hull)==4:
            cv2.drawContours(img,[hull],0,(0,255,0),2)
cv2.namedWindow('output',cv2.cv.CV_WINDOW_NORMAL)
cv2.imshow('output',img)
cv2.cv.ResizeWindow('output',960,640)
cv2.waitKey()
cv2.destroyAllWindows()
Any ideas?
A solution in Mathematica:
Import your image:
i = Import@"http://i.imgur.com/RrYKJ.jpg";
Detect edges at a scale greater than the letters in the receipt (a parameter)
i1 = EdgeDetect[i, 10]
Delete lines smaller than the scale of the perimeter of the receipt (a parameter)
i2 = DeleteSmallComponents[i1, 1000]
Find the morphological components
(mc = MorphologicalComponents[Erosion[ColorNegate@i2, 1]]) // Colorize
Find the morphological component with the most border adjacency (to delete it from the mask)
com = Commonest[Join[mc[[1]], mc[[-1]], Transpose[mc][[1]], Transpose[mc][[-1]]]]
Form the mask
mc1 = Unitize[mc /. com[[1]] -> 0];
Multiply the mask by your original image
ImageMultiply[Image@mc1, i]