How to find distance between two contours - python

I'm working on a project to find defective parts on a metal ring. I can successfully find defects on the surface, but I cannot detect the protrusions on the inner surface of the ring.
I thought I could measure the defects using the distance between the inner and outer surfaces, but I don't know how to calculate the distance between the two contours.
import cv2
import numpy as np

# capture is an already-opened cv2.VideoCapture
success, frame = capture.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # adaptiveThreshold needs a single-channel image
kernel = np.ones((1, 1), np.uint8)
blur = cv2.bilateralFilter(gray, 6, 140, 160)
threshold = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 21, 16)
closing = cv2.morphologyEx(threshold, cv2.MORPH_CLOSE, kernel)
erosion = cv2.erode(closing, kernel, iterations=1)  # iterations=0 would be a no-op
contours, hierarchy = cv2.findContours(erosion, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    if 72000 < area < 80000:  # the ring itself
        cv2.drawContours(frame, [cnt], -1, (0, 0, 0), 3)
for cnt2 in contours:
    area2 = cv2.contourArea(cnt2)
    if 30 < area2 < 200:  # small surface defects
        cv2.drawContours(frame, [cnt2], -1, (0, 0, 0), 3)
cv2.imshow("frame", frame)
cv2.imshow("Erosion", erosion)
cv2.waitKey(0)
This is my code. The first image is the object I am inspecting; the second is the output of the erosion step.
[image: the metal ring being inspected]
[image: erosion output]
My main problem is that I am not able to detect any protrusions on the inner radius.
Any suggestions and help are welcome.

I thought I could measure the defects using the distance between the inner and outer surfaces, but I don't know how to calculate the distance between the two contours.
One method is to take both contours and calculate the centroid, giving you a nominal centre point. From this point, cast rays out through 360 degrees and find where each ray meets the inner and outer contours (the closest contour point by Euclidean distance, for example). That gives you a pair of corresponding radii, one on the inner surface and one on the outer, so subtracting inner from outer gives the ring thickness at that angle. Compute this all the way around, using an angle increment proportional to the accuracy you need. The standard deviation of the thickness all the way around is a measure of protrusions (lower numbers are better!).
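Here is a minimal sketch of that idea in OpenCV/NumPy, where inner_cnt and outer_cnt are hypothetical names for the two contours you have already picked out (e.g. by area, as in your code). Instead of explicit ray casting, it bins the contour points by angle around the centroid, which gives the same radius-per-angle profile:

import cv2
import numpy as np

def radius_profile(cnt, center, n_bins=360):
    """Mean distance from `center` to the contour points, binned by angle."""
    pts = cnt.reshape(-1, 2).astype(np.float64) - center
    angles = (np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) + 360.0) % 360.0
    radii = np.hypot(pts[:, 0], pts[:, 1])
    bins = (angles * n_bins / 360.0).astype(int) % n_bins
    profile = np.full(n_bins, np.nan)  # NaN where no contour point fell in a bin
    for b in np.unique(bins):
        profile[b] = radii[bins == b].mean()
    return profile

# inner_cnt / outer_cnt: assumed to be selected beforehand (e.g. by area)
M = cv2.moments(outer_cnt)
center = np.array([M['m10'] / M['m00'], M['m01'] / M['m00']])
thickness = radius_profile(outer_cnt, center) - radius_profile(inner_cnt, center)
print('mean thickness:', np.nanmean(thickness))
print('thickness std: ', np.nanstd(thickness))  # large std -> protrusions

The nan-aware statistics simply skip angle bins where one of the contours had no sample points.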
My main problem is that I am not able to detect any protrusions on the inner radius.
If you are only interested in the inner radius, another way is to take the extracted contour of the inner surface and again compute its centroid as a nominal reference point. The average distance from this centre to each point on the contour gives you the radius of the ideal fitted circle. The distance from this ideal circle to each point on the actual contour then gives you a measure of the perturbations.
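A sketch of that variant, again assuming inner_cnt is a hypothetical name for the already-extracted inner contour:

import cv2
import numpy as np

M = cv2.moments(inner_cnt)                       # centroid as reference point
cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']
pts = inner_cnt.reshape(-1, 2).astype(np.float64)
radii = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
ideal_radius = radii.mean()                      # radius of the ideal fitted circle
deviation = radii - ideal_radius                 # per-point perturbation
print('largest inward protrusion:', -deviation.min())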

Related

How to find the centroid of multiple rectangles - Python

I have to find the exact centroid of multiple rectangles. The coordinates of each rectangle are as follows:
coord = (0.294792, 0.474537, 0.0989583, 0.347222) ## (xcenter, ycenter, width, height)
I have around 200 rectangles; how can I compute their centroid?
I already tried to implement it, but the code did not work well.
My code:
for i in range(len(xCenter)):
    center = np.array((xCenter[i] + (Width[i] / 2), yCenter[i] + (Height[i] / 2)))
This is a somewhat vague question, but if you mean the centroid of all rectangles weighted by area, then each rectangle's centre is weighted by its area. Think of it as all the mass of each rectangle being compressed into its centre, and then taking the centroid of the resulting weighted points. The formula is the sum, over rectangles 1 to n, of Area(Rec(i)) * vec(center(i)), divided by the total mass of the system (the sum of all the areas).

If you are referring to the centroid of the covered area in general, ignoring rectangle overlap, that is a little more tricky. One thing you could do is check each rectangle against all the others; whenever a pair overlaps, split the pair into a set of non-overlapping rectangles and put those back into the set. Once all rectangles are non-overlapping, find the centroid by mass as above.
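A short sketch of the area-weighted formula, using the (xcenter, ycenter, width, height) format from the question. Note that, per the question's own comment, xCenter and yCenter are already centres, so adding Width/2 as in the attempted code shifts each point to a corner, which is likely why it did not work well:

import numpy as np

# one row per rectangle: (xcenter, ycenter, width, height)
rects = np.array([
    (0.294792, 0.474537, 0.0989583, 0.347222),
    # ... the remaining ~200 rectangles
])
areas = rects[:, 2] * rects[:, 3]                   # mass of each rectangle
centroid = (areas[:, None] * rects[:, :2]).sum(axis=0) / areas.sum()
print(centroid)                                     # (x, y) centroid by mass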

Generating a segmentation mask for circular particles from threshold mask?

I am trying to find all the circular particles in the image attached. This is the only image I have (along with its inverse).
I have read this post, yet I can't use HSV values for thresholding. I have tried using the Hough transform:
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=0.01, minDist=0.1, param1=10, param2=5, minRadius=3,maxRadius=6)
and using the following code to plot
import cv2
import matplotlib.pyplot as plt

names = [circles]
for nums in names:
    color_img = cv2.imread(path)
    blue = (211, 211, 211)
    for x, y, r in nums[0]:
        cv2.circle(color_img, (int(x), int(y)), int(r), blue, 1)  # centre/radius must be ints
    plt.figure(figsize=(15, 15))
    plt.title("Hough")
    plt.imshow(color_img, cmap='gray')
The following code was to plot the mask:
for masks in names:
    black = np.zeros(img_gray.shape)
    for x, y, r in masks[0]:
        cv2.circle(black, (int(x), int(y)), int(r), 255, -1)  # -1 draws filled circles
    plt.imshow(black, cmap='gray')
Yet I am only able to get the following mask, which is fairly poor.
This is an image of what is considered a particle and what is not.
One simple approach involves slightly eroding the image to separate touching circular objects, then doing a connected-component analysis and discarding all objects larger than some chosen threshold, and finally dilating the image back so the circular objects are approximately their original size again. Doing this dilation on the labelled image means the separated objects stay separated.
I'm using DIPlib because I'm most familiar with it (I'm an author).
import diplib as dip
a = dip.ImageRead('6O0Oe.png')
a = a(0) > 127 # the PNG is a color image, but OP's image is binary,
# so we binarize here to simulate OP's condition.
separation = 7 # tweak these two parameters as necessary
size_threshold = 500
b = dip.Erosion(a, dip.SE(separation))
b = dip.Label(b, maxSize=size_threshold)
b = dip.Dilation(b, dip.SE(separation))
Do note that the image we use here seems to be a zoomed-in screen grab rather than the original image OP is dealing with. If so, the parameters must be made smaller to identify the smaller objects in the smaller image.
My approach is based on a simple observation: most of the particles in your image have approximately the same perimeter, and the "not particles" have a greater perimeter than them.
First, have a look at the RANSAC algorithm and how it finds inliers and outliers. It is basically meant for 2D data, but we will apply the same idea to 1D data in our case.
In your case, I am calling the correct particles inliers and the incorrect ones outliers.
The data we will work on is the perimeter of these particles. To get the perimeters, find the contours in this image and take the perimeter of each contour. Refer to this for information about contours.
Now that we have the data, knowledge of the RANSAC idea, and the simple observation above, we have to find the densest and most compact cluster in this data, which will contain all the inliers; everything else is an outlier.
Now let's assume the inliers are in the range 40-60 and the outliers are beyond 60. Define a threshold value T, starting at T = 0. We say that, for each point in the data, the inliers for that point are the values in the range (value - T, value + T).
First iterate over all the points in the data, count the number of inliers for each point at the current T, and record the maximum number of inliers possible for that T. Then increment T by 1 and again find the maximum number of inliers for the new T. Repeat, incrementing T one step at a time.
There will be a range of values of T for which the maximum number of inliers stays the same. Those inliers are the particles in your image, and the contours with perimeters greater than theirs are the outliers, i.e. the "not particles".
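A rough sketch of that search, assuming mask is the binary particle image and that stopping once the inlier count stops growing is an acceptable way to detect the stable range of T:

import cv2
import numpy as np

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
perims = np.array([cv2.arcLength(c, True) for c in contours])  # our 1D data

def best_inliers(data, t):
    """Return the inlier mask of the point with the most neighbours within +/- t."""
    counts = [(np.abs(data - p) <= t).sum() for p in data]
    best = data[int(np.argmax(counts))]
    return np.abs(data - best) <= t

prev_count = -1
for t in range(100):                  # increment T until the count stabilises
    inliers = best_inliers(perims, t)
    if inliers.sum() == prev_count:
        break
    prev_count = inliers.sum()

particles = [c for c, ok in zip(contours, inliers) if ok]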
I have tried this algorithm on test cases similar to yours and it works; I was always able to pick out the outliers. I hope it works for you too.
One last thing: the boundaries of your particles are irregular rather than smooth, so if this algorithm doesn't work on the image as-is, try smoothing the boundaries first (one way is sketched below) and run it again.
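For the smoothing, one common option (an assumption on my part, not something this answer prescribes) is a small morphological open and close on the binary mask before measuring the perimeters:

import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
smooth = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # shave off spikes
smooth = cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, kernel)  # fill small notches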

Python: Return position and size of arbitrary/teeth shapes in image using OpenCV

I'm very new to image processing and object detection. I'd like to extract/identify the position and dimensions of the teeth in the following image:
Here's what I've tried so far using OpenCV:
import cv2
import numpy as np

planets = cv2.imread('model.png', 0)
canny = cv2.Canny(planets, 70, 150)
circles = cv2.HoughCircles(canny, cv2.HOUGH_GRADIENT, 1, 40,
                           param1=10, param2=16, minRadius=10, maxRadius=80)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(planets, (i[0], i[1]), i[2], (255, 0, 0), 2)
    # draw the center of the circle
    cv2.circle(planets, (i[0], i[1]), 2, (255, 0, 0), 3)
cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()
This is what I get after applying the Canny filter:
This is the final result:
I don't know where to go from here. I'd like to get all of the teeth identified. How can I do that?
I'd really appreciate any help.
Note that the teeth structure is more or less an upside-down parabola. If you could somehow guess the parabolic shape that defines the centroids of those blobs (teeth), your problem would be simplified to a reasonable extent. I have shown a red line that passes through the centers of the teeth.
I would suggest you to approach it as follows:
Binarize your image (background=0, else 1). You could use sklearn.preprocessing.binarize.
Calculate the centroid of all the non-zero pixels. This is the central blue circle in the image. Call this structure_centroid. See this: How to center the nonzero values within 2D numpy array?.
Make polar slices of the entire image, centered at the location of the structure_centroid. I have shown a cartoon image of such polar slices (triangular, semi-transparent). Cover the complete 360 degrees. See this: polarTransform library.
Determine the position of the centroid of the non-zero pixels for each of these polar slices. See these:
find the distance between a point and a curve python.
Find the minimum distance from a point to a curve.
The array containing these centroids gives you the locus (path) of the average location of the teeth. Call this centroid_path.
Run an elimination/selection algorithm on the circles you were able to detect, keeping those closest to the centroid_path. Use a threshold distance to drop the outliers.
This should give you a good approximation of the teeth with the circles; a rough sketch of the first few steps is given below.
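A rough sketch of steps 1-4 in plain NumPy, where img is assumed to be the grayscale teeth image; the per-slice centroids form the centroid_path:

import numpy as np

binary = (img > 0).astype(np.uint8)       # step 1: binarize (background = 0)

ys, xs = np.nonzero(binary)               # step 2: structure_centroid
cy, cx = ys.mean(), xs.mean()

# steps 3-4: bin the non-zero pixels into angular (polar) slices around
# the structure_centroid and take the mean position within each slice
angles = np.arctan2(ys - cy, xs - cx)
n_slices = 72
bins = ((angles + np.pi) / (2 * np.pi) * n_slices).astype(int) % n_slices
centroid_path = np.array([
    (xs[bins == b].mean(), ys[bins == b].mean())
    for b in range(n_slices) if np.any(bins == b)
])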
I hope this helps.

Detect the corners of a grid using opencv

So I'm currently working on a project (not school related or anything) and part of it involves being able to detect and project a grid in a picture onto a square image, so we can remove any skew the image may have and the like. My problem is that I cannot determine which points within my image are the corners of the grid. I have tried a Hough transform, but the problem there is that many lines are generated, including the grid lines, so it is hard to automatically determine which of the detected lines are the edges of the grid. I also tried a contour detector, which gives a similar problem: although it is more accurate at tracing out the edges of the grid, I am unable to pick out which contours belong to the edge of the grid and which are, say, grid lines or just miscellaneous.
A screenshot of the results from the Hough transform:
And a screenshot of the result from the contour detection:
Thanks for any help or advice in advance.
You probably need to look through the contours and find the largest 4 sided one to grab the outside of your grid.
You would use something like this helper function (processed is my preprocessed image):
def largest_4_sided_contour(processed, show_contours=False):
    contours, _ = cv2.findContours(
        processed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # sort the contours by area, largest first
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    # look at the biggest 5 (if there are more than 5, otherwise all of them)
    for cnt in contours[:min(5, len(contours))]:
        # approximate the contour with a polygon (the answer's approx() helper
        # presumably did this); if it has 4 sides, that's the one we are
        # looking for, so we can stop looking and return it
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:
            return cnt
    return None
There are some irregularities in your grid, so you may have to do some preprocessing or allow a range of side counts, but generally, by looking at the area of the contour and narrowing down by the number of sides, you should be able to figure something out.
You mentioned getting the corners so this is that step:
def get_rectangle_corners(cnt):
    '''Gets the corners from a contour, ordered top-left,
    top-right, bottom-right, bottom-left.'''
    pts = cnt.reshape(cnt.shape[0], 2)
    rect = np.zeros((4, 2), dtype="float32")
    # the top-left point has the smallest sum whereas the
    # bottom-right has the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    # compute the difference between the points -- the top-right
    # will have the minimum difference and the bottom-left will
    # have the maximum difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect

Is the centroid of a contour always its geometrical centre? (OpenCV, Python)

I am working with OpenCV+Python and I want to find the geometrical centre of the following contour:
The OpenCV documentation suggests the following to find the centroid of a contour:
import numpy as np
import cv2 as cv

img = cv.imread('star.jpg', 0)
ret, thresh = cv.threshold(img, 127, 255, 0)
im2, contours, hierarchy = cv.findContours(thresh, 1, 2)
cnt = contours[0]
M = cv.moments(cnt)
print(M)

# centroid
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
If I am right, according to this formula the centroid is calculated as the mean (or the average?) of all the points of the contour.
However, if for example fewer points are detected at the upper part of the contour than at the lower part of the contour then the centroid will be a bit higher than the actual geometrical centre of the (fully detected) contour.
Am I right?
If so, then is it better to calculate the average of the extreme points of the contour to find the geometrical centre of the contour and in this way to not depend at all on if the points of the contour are uniformly detected?
Am I right?
No. The OpenCV function moments() uses Green's theorem, as mentioned in the OpenCV moments() docs. Green's theorem is indeed a correct way to find the center of mass of a shape; it relates integrals over a shape to integrals along the shape's border. As such, it doesn't matter how many points define the border.
I say center of mass specifically, because you can also pass a single-channel image array into moments() to compute its moments, not just an array of points. For an image array, if the image is binary, then the center of mass is the centroid. And with an array of points (your contour), there is no information about the pixel values inside, so the result is likewise the centroid.
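A small demonstration of the point (my own toy example, not from the original answer): a square contour with very uneven point spacing along its border still yields the true centre from moments(), while the naive mean of the points is dragged toward the densely sampled edge:

import numpy as np
import cv2 as cv

# square with 101 points along one edge and only the corners elsewhere
dense_edge = [(x, 0) for x in range(101)]
corners = [(100, 100), (0, 100)]
cnt = np.array(dense_edge + corners, dtype=np.int32).reshape(-1, 1, 2)

M = cv.moments(cnt)
print(M['m10'] / M['m00'], M['m01'] / M['m00'])  # -> 50.0 50.0 (true centre)

pts = cnt.reshape(-1, 2)
print(pts.mean(axis=0))  # -> roughly (50, 1.9): pulled toward the dense edge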
