Detect the corners of a grid using opencv - python

So I'm currently working on a project (not school related or anything) and part of it involves being able to detect and project a grid in a picture onto a square image, so we can get rid of any skewing the image may have and the like. My problem is that I cannot determine which points within my image are the corners of my grid. I have tried using a Hough transform, but the problem is that many lines are generated, including the grid lines, so it would be hard to determine automatically which of the detected lines are the edges of the grid. I also tried using a contour detector, which gives a similar problem: although it is more accurate in tracing out the edges of the grid, I'm unable to tell which contours belong to the edge of the grid and which are, say, grid lines or just miscellaneous noise.
A screenshot of the results from the Hough transform:
And a screenshot of the result from the contour detection:
Thanks for any help or advice in advance.

You probably need to look through the contours and find the largest 4-sided one to grab the outside of your grid.
You would use something like this helper function (processed is my preprocessed image):
def approx(cnt):
    # Standard polygon approximation; the 2% tolerance is a common choice
    peri = cv2.arcLength(cnt, True)
    return cv2.approxPolyDP(cnt, 0.02 * peri, True)

def largest_4_sided_contour(processed, show_contours=False):
    # OpenCV 3 returns (image, contours, hierarchy) here;
    # in OpenCV 2 and 4 drop the leading underscore
    _, contours, _ = cv2.findContours(
        processed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Sort the contours by area, largest first
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    # Look at the biggest 5 (if there are more than 5, otherwise all of them)
    for cnt in contours[:min(5, len(contours))]:
        # If the approximation has 4 sides, that's the one we are
        # looking for, so we can stop looking and return it
        if len(approx(cnt)) == 4:
            return cnt
    return None
There are some irregularities to your grid so you may have to do some preprocessing or look for a range of the number of sides but generally, by looking at the area of the contour and narrowing down by the number of sides, you should be able to figure something out.
You mentioned getting the corners so this is that step:
def get_rectangle_corners(cnt):
    ''' Gets the four corners from a contour. '''
    pts = cnt.reshape(cnt.shape[0], 2)
    rect = np.zeros((4, 2), dtype="float32")
    # the top-left point has the smallest sum (x + y) whereas the
    # bottom-right has the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    # compute the difference between the coordinates -- the top-right
    # will have the minimum difference and the bottom-left will
    # have the maximum difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect

Related

How to find distance between two contours

I'm working on a project to find defective parts on a metal ring. I successfully find defective parts on the surface, but I cannot detect the protrusions on the inner surface of the metal ring.
I thought to determine the error using the distance between the inner and outer surface, but I don't know how I can calculate the distance between the two contours.
success, frame = capture.read()
kernel = np.ones((1, 1), np.uint8)
blur = cv2.bilateralFilter(frame, 6, 140, 160)
threshold = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 21, 16)
closing = cv2.morphologyEx(threshold, cv2.MORPH_CLOSE, kernel)
erosion = cv2.erode(closing, kernel, iterations=0)
contours, hierarchy = cv2.findContours(erosion, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 72000 and area < 80000:
        cv2.drawContours(frame, cnt, -1, (0, 0, 0), 3)
for cnt2 in contours:
    area2 = cv2.contourArea(cnt2)
    if area2 > 30 and area2 < 200:
        cv2.drawContours(frame, cnt2, -1, (0, 0, 0), 3)
cv2.imshow("frame", frame)
cv2.imshow("Erosion", erosion)
cv2.waitKey(0)
This is my code. The first image is the object I am inspecting; the second image is the output of the erosion.
My main problem is that I am not able to detect any protrusions inside the inner radius.
Any suggestion and help are welcome.
I thought to determine the error using the distance between the inner and outer surface, but I don't know how I can calculate the distance between the two contours.
One method is to take both contours and calculate the centroid, giving you a nominal centre point. Then from this point, cast rays out through 360 degrees, and compute the points of intersection with the inner and outer contours. (Closest point by Euclidean distance for example.) Then you have two corresponding radii on both the inner and outer surface, so you can subtract inner from outer to get the ring thickness. Compute this all the way around, using an angle increment proportional to the accuracy you need. The standard deviation of the thickness all the way around is a measure of protrusions (lower numbers are better!).
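A rough sketch of that ray-casting idea in pure NumPy (the function name is mine, and the nearest-point-by-angle lookup is a simplification of true ray intersection). It assumes inner and outer are contours as returned by cv2.findContours:

```python
import numpy as np

def ring_thickness_profile(inner, outer, steps=360):
    # inner/outer: contours as (N, 1, 2) arrays from cv2.findContours
    pts_i = inner.reshape(-1, 2).astype(float)
    pts_o = outer.reshape(-1, 2).astype(float)
    centre = pts_o.mean(axis=0)   # nominal centre from the outer contour

    def polar(pts):
        d = pts - centre
        return np.arctan2(d[:, 1], d[:, 0]), np.hypot(d[:, 0], d[:, 1])

    ang_i, rad_i = polar(pts_i)
    ang_o, rad_o = polar(pts_o)
    thickness = []
    for theta in np.linspace(-np.pi, np.pi, steps, endpoint=False):
        # nearest contour point by (wrapped) angular distance stands in
        # for the exact ray/contour intersection
        ri = rad_i[np.argmin(np.abs(np.angle(np.exp(1j * (ang_i - theta)))))]
        ro = rad_o[np.argmin(np.abs(np.angle(np.exp(1j * (ang_o - theta)))))]
        thickness.append(ro - ri)
    return np.array(thickness)
```

The standard deviation of the returned profile is then the protrusion measure described above.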
My main problem is that I am not able to detect any protrusions inside the inner radius.
If you are only interested in the inner radius, another way is to take the extracted contour from the inner surface, and again compute the centroid to find a nominal reference point. Take the average distance from this centre to each point on the contour, and that gives you the ideal fitted circle. Compute the distance from this ideal circle to each closest point on the actual contour and that gives you a measure of the perturbations.
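That circle-fit check might look like this (a sketch under the same assumptions; inner is the extracted inner contour and the function name is mine):

```python
import numpy as np

def inner_circle_deviation(inner):
    # inner: contour as an (N, 1, 2) array from cv2.findContours
    pts = inner.reshape(-1, 2).astype(float)
    centre = pts.mean(axis=0)              # nominal reference centre
    radii = np.hypot(*(pts - centre).T)    # distance of every point to the centre
    ideal_r = radii.mean()                 # radius of the ideal fitted circle
    return radii - ideal_r                 # per-point perturbation from the ideal circle
```

Large magnitudes in the returned array mark protrusions, so np.abs(dev).max() or a simple threshold picks them out.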

Edge detection on dim edges using Python

I want to find dim edges using Python.
Input images (100 X 100) :
It consists of several horizontal boards: top, middle, bottom.
I want to find the middle board's bounding box like:
I used several edge detection methods (prewitt_x, sobel_x, cv2.findContours) but cannot detect it well, because the edge between the black region and the board region is dim.
How can I find a bounding box like the red box?
Code below is example using prewitt_x and cv2.findContours:
import cv2
import numpy as np

img = cv2.imread('my_dir/my_img.bmp', 0)  # flag 0 loads the image as grayscale
# prewitt_x
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
img_prewittx = cv2.filter2D(img, -1, kernelx)
cv2.imwrite('my_outdir/my_outimg.bmp', img_prewittx)
# cv2.findContours (the filtered image is already single-channel,
# so no cvtColor conversion is needed)
image, contours, hierarchy = cv2.findContours(img_prewittx, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(cnt) for cnt in contours]
print(rects)
In fact, I don't want to use a slower method like the Canny detector.
Help me :)
My suggestion:
- use a simple edge detection filter such as Prewitt
- project horizontally (sum of the pixels in every row)
- analyze the resulting profile to detect the regions of low/high activity and delimit the desired slabs.
You can also try the maximum along rows instead of the sum.
But don't expect miracles; this is a hard problem.

How can I measure the thickness of a character in an image using Python OpenCV?

I created one task, where I have white background and black digits.
I need to pick out the digit with the greatest stroke thickness. I have binarized my picture and recognized all the symbols, but I don't understand how to measure thickness. I tried arcLength(contours), but that gave me the largest by size. I tried morphological operations, but as I understood they help to remove noise and other defects in the picture, right? I also thought of checking the distance between neighbouring points of the contours, but then I figured that would be hard because the symbols don't have an exact, clean shape (I drew them in Paint). So those are all the ideas I had. Can you help me by naming the topics in computer vision and OpenCV that could help me solve this task? I don't need the exact algorithm of the solution, only the topics. And if this isn't an OpenCV task, which library is it? Should I learn a set of topics and basics before attempting my task?
One possible solution that I can think of is to alternate erosion and contour finding until you have only one contour left (that should be the thickest). This could work if the difference in thickness is large enough, but I can also foresee many particular cases that could prevent a correct identification, so it depends very much on your original image.
Have you thought about drawing a line from a certain point and looking for the points where the line intersects your contour? If you get the coordinates of two such points, you can measure the distance between them. I have made a sample to demonstrate what I mean. Note that this script is meant only as a demonstration of the solution and will not work with pictures other than my sample one. I would give a better one, but I only started programming a few months back.
So the first thing is to extract the contours, which you said you have already done (mind that cv2.findContours finds white values). Then you can get reference coordinates with cv2.boundingRect() -- it returns the x, y coordinates, width and height of a bounding rectangle for your contour (you can of course do something similar by extracting a small portion of your contour on a mask and working from there). In my example I defined the center of the box, moved it slightly downwards, and then made a line to the left (I did this by appending to lists and converting them to arrays; there are probably a million better solutions). Then you look for points that lie on both your contour and your line -- those are the points of intersection. I calculated the distance simply as the difference of the two x coordinates because that works for this demonstration, but the better approach would be sqrt((x2-x1)^2 + (y2-y1)^2). Maybe it will give you an idea. Cheers!
Sample code:
import cv2
import numpy as np

img = cv2.imread('Thickness2.png')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_image, 10, 255, cv2.THRESH_BINARY_INV)
im2, cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
font = cv2.FONT_HERSHEY_TRIPLEX
for c in cnts:
    two_points = []
    coord_x = []
    coord_y = []
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, False)
    if area > 1 and perimeter > 1:
        x, y, w, h = cv2.boundingRect(c)
        cx = int(x + (w / 2)) - 5
        cy = int(y + (h / 2)) + 15
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Build the points of a horizontal ray to the right of (cx, cy)
        for a in range(cx, cx + 70):
            coord_x.append(a)
            coord_y.append(cy)
        coord = list(zip(coord_x, coord_y))
        arrayxy = np.array(coord)
        arraycnt = np.array(c)
        # Points lying on both the contour and the ray are intersections
        for a in arraycnt:
            for b in arrayxy:
                if a[:, 0] == b[0] and a[:, 1] == b[1]:
                    cv2.circle(img, (b[0], b[1]), 2, (0, 255, 255), -1)
                    two_points.append(b)
        pointsarray = np.array(two_points)
        thickness = int(pointsarray[1, 0]) - int(pointsarray[0, 0])
        print(thickness)
        cv2.line(img, (cx, cy), (cx + 50, cy), (0, 0, 255), 1)
        cv2.putText(img, 'Thickness : ' + str(thickness), (x - 20, y - 10),
                    font, 0.4, (0, 0, 0), 1, cv2.LINE_AA)
cv2.imshow('img', img)
Output:

Count protuberances in dendrite with openCV (python)

I'm trying to count dendritic spines (the tiny protuberances) in mouse dendrites obtained by fluorescent microscopy, using Python and OpenCV.
Here is the original image, from which I'm starting:
Raw picture:
After some preprocessing (code below) I've obtained these contours:
Raw picture with contours (White):
What I need to do is to recognize all protuberances, obtaining something like this:
Raw picture with contours in White and expected counts in red:
What I intended to do, after preprocessing the image (binarizing, thresholding and reducing its noise), was to draw the contours and try to find convexity defects in them. The problem arose because some of the "spines" (the technical name for those protuberances) are not recognized: they end up lumped together in the same convexity defect, which underestimates the count. Is there any way to be more "precise" when marking convexity defects?
Raw image with contour marked in White. Red dots mark spines that were identified with my code. Green dots mark spines I still can't recognize:
My Python code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Image loading and preprocessing:
img = cv2.imread('Prueba.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.pyrMeanShiftFiltering(img,5,11)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,thresh1 = cv2.threshold(gray,5,255,0)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
img1 = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)
img1 = cv2.morphologyEx(img1, cv2.MORPH_OPEN, kernel)
img1 = cv2.dilate(img1,kernel,iterations = 5)
#Drawing of contours. Some spines were detached from the main shaft due to
#bad image quality. The main idea of the code below is to identify the shaft
#as the biggest contour, and count any smaller ones as spines too.
_, contours,_ = cv2.findContours(img1,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print("Number of contours detected: "+str(len(contours)))
cv2.drawContours(img,contours,-1,(255,255,255),6)
plt.imshow(img)
plt.show()
lengths = [len(i) for i in contours]
cnt = lengths.index(max(lengths))
#The contour of the main shaft is stored in cnt
cnt = contours.pop(cnt)
#Finding convexity points with hull:
hull = cv2.convexHull(cnt)
#The next lines are just for visualization. All centroids of smaller contours
#are marked as spines.
for i in contours:
    M = cv2.moments(i)
    centroid_x = int(M['m10']/M['m00'])
    centroid_y = int(M['m01']/M['m00'])
    centroid = np.array([[[centroid_x, centroid_y]]])
    print(centroid)
    cv2.drawContours(img,centroid,-1,(0,255,0),25)
    cv2.drawContours(img,centroid,-1,(255,0,0),10)
cv2.drawContours(img,hull,-1,(0,255,0),25)
cv2.drawContours(img,hull,-1,(255,0,0),10)
plt.imshow(img)
plt.show()
#Finally, the number of spines is computed as the sum between smaller contours
#and protuberances in the main shaft.
spines = len(contours)+len(hull)
print("Number of identified spines: " + str(spines))
I know my code has many smaller problems to solve yet, but I think the biggest one is the one presented here.
Thanks for your help, and have a good one.
I would approximate the contour to a polygon as Silencer suggests (don't use the convex hull). Maybe you should simplify the contour just a little bit to keep most of the detail of the shape.
This way, you will have many vertices that you have to filter: looking at the angle of each vertex you can tell if it is concave or convex. Each spine is one or more convex vertices between concave vertices (if you have several consecutive convex vertices, you keep only the sharper one).
EDIT: in order to compute the angle you can do the following. Let's say that a, b and c are three consecutive vertices:
angle1 = atan2(by - ay, bx - ax)
angle2 = atan2(cy - by, cx - bx)
angleDiff = angle2 - angle1
if angleDiff < -PI: angleDiff += 2*PI
if angleDiff > PI: angleDiff -= 2*PI
if angleDiff > 0: concave
else: convex
Or vice versa, depending on whether your contour is clockwise or counterclockwise, black or white. (atan2 is used instead of a plain arctan of the slope so the quadrant of each edge direction is not lost.) If you sum all the angleDiff values of any polygon, the result should be 2*PI; if it is -2*PI, then the last "if" should be swapped.
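An equivalent check that avoids angle wrapping altogether (my substitution, not the answer's exact formula) is the sign of the 2-D cross product of the two edge vectors at each vertex:

```python
import numpy as np

def vertex_turns(poly):
    # poly: (N, 2) array of polygon vertices in order
    prev = np.roll(poly, 1, axis=0)
    nxt = np.roll(poly, -1, axis=0)
    v1 = poly - prev    # edge arriving at each vertex
    v2 = nxt - poly     # edge leaving each vertex
    # z-component of the cross product; its sign gives the turn direction
    return v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
```

For a counter-clockwise polygon, positive values mark convex vertices and negative ones concave (swapped for clockwise winding, matching the "vice versa" caveat above).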

How to draw a line from two points and then let the line complete drawing until reaching a contour point with opencv, python?

I am using OpenCV and Python, and I am trying to draw a line between two points whose coordinates I know, and then let the line continue until it reaches the end of the contour, as shown in the image below. The contour in my case is actually of a face, but I have provided a circle here for explanation. What I am trying to achieve is to get the edge of the head at the point where the line intersects the contour. Is there a way to draw a line from two points and then let the line keep drawing until it reaches the contour?
I can think of one easy method off the top of my head that doesn't involve incrementally updating the image: on one blank image, draw a long line extending from point one in the direction of point two, and then AND the resulting image with an image of the single contour drawn (filled). This will stop the line at the end of the contour. Then you can either use that mask to draw the line, or get the minimum/maximum x, y coords if you want the coordinates of the line.
To walk through an example, first we'll find the contour and draw it on a blank image:
contours = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
contour_img = np.zeros_like(img)
cv2.fillPoly(contour_img, contours, 255)
Then, if we have the points p1 and p2, find the direction they're heading in, pick a point far off in that direction, and draw that line on a new blank image (here I used a distance of 1000 pixels from p1):
p1 = (250, 250)
p2 = (235, 210)
theta = np.arctan2(p1[1]-p2[1], p1[0]-p2[0])
endpt_x = int(p1[0] - 1000*np.cos(theta))
endpt_y = int(p1[1] - 1000*np.sin(theta))
line_img = np.zeros_like(img)
cv2.line(line_img, (p1[0], p1[1]), (endpt_x, endpt_y), 255, 2)
Then simply cv2.bitwise_and() the two images together
contour_line_img = cv2.bitwise_and(line_img, contour_img)
Here is an image showing the points, the line extending past the contour, and the line breaking off at the contour.
Edit: Note that this will only work if your contours are convex. If there is any concavity and the line goes through that concave part, it will continue to draw on the other side of it. For example, in Silencer's answer, if both points were inside one of the ears and pointed towards the other ear, you'd want the contour to stop once it hit an edge, but mine will continue to draw on the other ear. I think an iterative method like Silencer's is best for the general case, but I like the simplicity of this method if you know you have convex contours, or if your points will be placed so that this issue doesn't arise.
Edit2: Someone else on Stack answered their own question about the Line Iterator class in Python by creating one: openCV 3.0 python LineIterator
