Counting the number of circular rods in an image - python

I want to count the number of metal rods in an image using image processing. The images are all similar to this one:
I thought of using the Hough Circle Transform, but the rods aren't precisely circular and there are also imperfections on the faces. Another idea was to use the watershed algorithm. I took the following steps:
1. Grayscale conversion
2. CLAHE enhancement
3. Pyramid mean shift filtering to remove texture and irregularities
4. Gaussian blur
5. Otsu's thresholding
Result:
The upper image is after step 4, the lower after step 5.
Clearly, I cannot use this image for Watershed.
I then tried a simple threshold, which gives better results, but still not accurate enough. I also want the counting algorithm to be independent of the particular image, which means a fixed simple threshold won't do.
I searched the net for algorithms to detect circular objects, but they all seem to depend on segmentation or edge detection. I don't understand how to proceed from this point. Am I missing something basic here?
Edit: The basic preprocessing code:
import cv2

img = cv2.imread('testc.jpg')
# flatten texture and small irregularities
img = cv2.pyrMeanShiftFiltering(img, 21, 51)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# local contrast enhancement
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(7, 7))
cl1 = clahe.apply(img)
cl1 = cv2.GaussianBlur(cl1, (5, 5), 1)
# Otsu's thresholding
ret3, thresh = cv2.threshold(cl1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Watershed:
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed  # skimage.morphology in older versions

D = ndimage.distance_transform_edt(thresh)
# note: indices=False needs an older scikit-image;
# newer versions always return peak coordinates instead
localMax = peak_local_max(D, indices=False, min_distance=35, labels=thresh)
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))
# loop over the unique labels returned by the watershed algorithm
for label in np.unique(labels):
    # label zero is the background, so simply ignore it
    if label == 0:
        continue
    # otherwise, allocate memory for the label region and draw it on the mask
    mask = np.zeros(thresh.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
    c = max(cnts, key=cv2.contourArea)
    # draw a circle enclosing the object
    ((x, y), r) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.putText(img, "#{}".format(label), (int(x) - 10, int(y)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
I took the watershed code mostly from pyimagesearch. I'm sorry for not giving the link; I'm currently only allowed to post two links (and no images).
This is not the only thing that I have tried, but is the best approach currently.

Related

Segmentation of small particles using Watershed and Distance Transform

I'm trying to separate the dust particles in a picture. To do so, I was thinking of using the watershed algorithm (it's my first time using it).
The picture I’m working with is the following:
Following the tutorial on the OpenCV documentation named “Image Segmentation with Distance Transform and Watershed Algorithm” (link), I wrote the following python code:
import cv2
import numpy as np

img = cv2.imread(img_path)
# sharpen the image to accentuate the edges of the foreground objects
kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float32)
imgLaplacian = cv2.filter2D(img, cv2.CV_32F, kernel)
sharp = np.float32(img)
imgResult = sharp - imgLaplacian
# convert back to 8-bit (Otsu thresholding requires an 8-bit image)
imgResult = np.clip(imgResult, 0, 255).astype('uint8')
# binarization
gray = cv2.cvtColor(imgResult, cv2.COLOR_BGR2GRAY)
_, img_bin = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
dist = cv2.distanceTransform(img_bin, cv2.DIST_L2, 5)
# normalize the distance image to the range [0.0, 1.0]
# so we can visualize and threshold it
cv2.normalize(dist, dist, 0, 1.0, cv2.NORM_MINMAX)
# threshold to obtain the peaks;
# these will be the markers for the foreground objects
# (min_norm and max_norm are tuning values; the OpenCV tutorial uses 0.4 and 1.0)
_, dist = cv2.threshold(dist, min_norm, max_norm, cv2.THRESH_BINARY)
# dilate the dist image a bit
dist = cv2.dilate(dist, np.ones((3, 3), dtype=np.uint8))
# create the CV_8U version of the distance image; findContours() needs it
dist_8u = dist.astype('uint8')
# find the total markers
contours, _ = cv2.findContours(dist_8u, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# create the marker image for the watershed algorithm
markers = np.zeros(dist.shape, dtype=np.int32)
# draw the foreground markers
for i in range(len(contours)):
    cv2.drawContours(markers, contours, i, (i + 1), -1)
cv2.watershed(imgResult, markers)
img[markers == -1] = [0, 0, 255]  # draw the contours
imgResult is visualized as follows:
As you can see in the picture, many of the bigger particles are still considered as just one particle, while some of the smaller particles are not even considered.
I’m sure there’s something I’m missing, but I can’t figure out what.
Let me know if something is not clear or if the pictures are not understandable. (I used another function to make the contours more visible; the code posted generates a red border one pixel wide.)
P.S.
I also tried to follow “Image Segmentation with Watershed Algorithm” (link) tutorial from OpenCV docs. The results are a little bit better for the bigger particles (they get separated), but it doesn’t identify many particles (I assume due to the morphological operations). The picture follows.
P.P.S.
In both cases, the algorithm generates a weird border around the whole picture.
Desired result
The desired output is something similar to this:
Drawing function
To draw on the pictures I used:
for label in np.unique(markers):
    if label == 0:
        continue
    mask = np.zeros(image.shape[:2], dtype="uint8")
    mask[markers == label] = 255
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    c = max(cnts, key=cv2.contourArea)
    cv2.drawContours(image, [c], -1, (36, 255, 12), 2)

Recognizing page corners with OpenCV partially fails

I would like to get the 4 corners of a page. The steps I took:
Converted to grayscale
Applied a threshold to the image
Applied Canny to detect edges
Used findContours
Drew the approximated polygon for each contour; my assumption was that the relevant polygon must have 4 vertices.
But along the way I found that my solution sometimes misses; apparently it is not robust enough (probably a bit naive).
I think some of the reasons the paper corner detection fails are:
The thresholds for Canny detection are picked manually (see the auto-Canny sketch below).
The same goes for the epsilon value for approxPolyDP.
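For reference, a common way to avoid hand-picking the Canny thresholds is to derive them from the image median. This is a minimal sketch, not from the original post; the sigma of 0.33 is just a conventional default:
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # derive lower/upper thresholds from the median intensity
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)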
My Code
import cv2
import numpy as np

image = cv2.imread('page1.jpg')
descalingFactor = 3
imgheight, imgwidth = image.shape[:2]
resizedImg = cv2.resize(image, (int(imgwidth / descalingFactor), int(imgheight / descalingFactor)),
                        interpolation=cv2.INTER_AREA)
cv2.imshow(winname="original", mat=resizedImg)
cv2.waitKey()
gray = cv2.cvtColor(resizedImg, cv2.COLOR_BGR2GRAY)
cv2.imshow(winname="gray", mat=gray)
cv2.waitKey()
img_blur = cv2.GaussianBlur(gray, (5, 5), 1)
cv2.imshow(winname="blur", mat=img_blur)
cv2.waitKey()
# note: the blurred image is computed above but Canny is applied to gray
canny = cv2.Canny(gray,
                  threshold1=120,
                  threshold2=255,
                  edges=1)
cv2.imshow(winname="Canny", mat=canny)
cv2.waitKey()
contours, _ = cv2.findContours(image=canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for idx, cnt in enumerate(contours):
    cv2.drawContours(image=resizedImg, contours=[cnt], contourIdx=0, color=(255, 0, 0), thickness=3)
    cv2.imshow(winname="contour" + str(idx), mat=resizedImg)
    conv = cv2.convexHull(cnt)
    epsilon = 0.1 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    cv2.drawContours(resizedImg, [approx], 0, (0, 0, 255), 3)
    cv2.waitKey(0)
    if len(approx) == 4:
        print("found the paper!!")
        break
pts = np.squeeze(approx)
Another approach
I was wondering: wouldn't it be a better approach to fit a quadrilateral (a polygon with 4 vertices) to the contour, and then check whether the area difference between the polygon and the contour is below a specified threshold? Something like the sketch below.
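A hypothetical sketch of that check (the 10% tolerance and the epsilon factor are made-up values):
def looks_like_page(cnt, tol=0.10):
    # fit a quadrilateral to the contour
    epsilon = 0.1 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    if len(approx) != 4:
        return False
    # accept only if the quadrilateral's area is close to the contour's area
    area_cnt = cv2.contourArea(cnt)
    area_quad = cv2.contourArea(approx)
    return area_cnt > 0 and abs(area_quad - area_cnt) / area_cnt < tol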
Can somebody please suggest a more robust solution (demonstrating it with code)? Thank you.
The images:
image1: https://ibb.co/K2SqLwZ
image2: https://ibb.co/mbGFsNp
image3: https://ibb.co/m6QKkzw
image4: https://ibb.co/xh7W41V
As fmw42 suggested, you need to restrict the problem more. There are way too many variables to build a "works under all circumstances" solution. A possible, very basic solution would be to try to get the convex hull of the page.
Another, more robust approach would be to search for the four corner vertices and extrapolate lines to approximate the paper edges. That way you don't need perfect, clean edges, because you would reconstruct them using the four (maybe even three) corners.
To find the vertices you can run a Hough Line detector or a Corner Detector on the edges and get at least four discernible clusters of end/start points. From those you can average each cluster to get one (x, y) point per corner and extrapolate lines using those points.
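A rough sketch of that clustering step, just to illustrate (it assumes cv2.HoughLinesP already found segments along the paper edges in a canny edge image; all parameter values are guesses):
import cv2
import numpy as np

lines = cv2.HoughLinesP(canny, 1, np.pi / 180, threshold=60,
                        minLineLength=50, maxLineGap=10)
# collect both endpoints of every detected segment
pts = np.float32([(x, y) for l in lines[:, 0]
                  for (x, y) in ((l[0], l[1]), (l[2], l[3]))])
# cluster the endpoints into four groups and average each group
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, _, corners = cv2.kmeans(pts, 4, None, criteria, 10, cv2.KMEANS_PP_CENTERS)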
That solution would be hypothetical and pretty laborious for a Stack Overflow question, so let me try the first proposal - detection via convex hull. Here are the steps:
Threshold the input image
Get edges from the input
Get the external contours of the edges using a minimum area filter
Get the convex hull of the filtered image
Get the corners of the convex hull
Let's see the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "img2.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to grayscale:
grayInput = cv2.cvtColor(inputImageCopy, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayInput, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The first step is to get a binary image, very straightforward. This is the result if you threshold via Otsu:
It is never a good idea to try to segment an object from a textured (or high-frequency) background; however, in this case the paper is discernible in the image histogram and the binary image is reasonably good. Let's try to detect edges on this image. I'm applying Canny with the same parameters as your code:
# Get edges:
cannyImage = cv2.Canny(binaryImage, threshold1=120, threshold2=255, edges=1)
Which produces this:
Seems good enough; the target edges are mostly present. Let's detect contours. The idea is to set an area filter, because the target contour is the biggest among the rest. I (heuristically) set a minimum area of 100000 pixels. Once the target contour is found, I get its convex hull, like this:
# Find the EXTERNAL contours on the binary image:
contours, hierarchy = cv2.findContours(cannyImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the corners:
cornerList = []
# Look for the outer bounding boxes (no children):
for i, c in enumerate(contours):
    # Approximate the contour to a polygon:
    contoursPoly = cv2.approxPolyDP(c, 3, True)
    # Convert the polygon to a bounding rectangle:
    boundRect = cv2.boundingRect(contoursPoly)
    # Get the bounding rect's data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]
    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight
    # Set a min area threshold:
    minArea = 100000
    # Filter blobs by area:
    if rectArea > minArea:
        # Get the convex hull for the target contour:
        hull = cv2.convexHull(c)
        # (Optional) Draw the hull:
        color = (0, 0, 255)
        cv2.polylines(inputImageCopy, [hull], True, color, 2)
You'll notice I prepared a list beforehand (cornerList) in which I'll store (hopefully) all the corners. The last two lines of the previous snippet are optional; they draw the convex hull via cv2.polylines. This would be the resulting image:
Still inside the loop, after we compute the convex hull, we will get the corners via cv2.goodFeaturesToTrack, which implements a Corner Detector. The function receives a binary image, so we need to prepare a black image with the convex hull points drawn in white:
# Create image for good features to track:
(height, width) = cannyImage.shape[:2]
# Black image same size as original input:
hullImg = np.zeros((height, width), dtype=np.uint8)
# Draw the points:
cv2.drawContours(hullImg, [hull], 0, 255, 2)
cv2.imshow("hullImg", hullImg)
cv2.waitKey(0)
This is the image:
Now, we must set up the corner detector. It needs the number of corners you are looking for, a minimum "quality" parameter that discards poor points detected as "corners", and a minimum distance between the corners. Check out the documentation for more parameters. Let's set up the detector; it will return an array of points where it detected a corner. After we get this array, we will store each point in our cornerList, like this:
# Set the corner detection:
maxCorners = 4
qualityLevel = 0.01
minDistance = int(max(height, width) / maxCorners)
# Get the corners:
corners = cv2.goodFeaturesToTrack(hullImg, maxCorners, qualityLevel, minDistance)
corners = np.int0(corners)
# Loop through the corner array and store/draw the corners:
for c in corners:
    # Flatten the array of corner points:
    (x, y) = c.ravel()
    # Store the corner point in the list:
    cornerList.append((x, y))
    # (Optional) Draw the corner points:
    cv2.circle(inputImageCopy, (x, y), 5, 255, 5)
cv2.imshow("Corners", inputImageCopy)
cv2.waitKey(0)
Additionally, you can draw the corners as circles, which yields this image:
This is the same algorithm tested on your third image:

Detecting and counting blobs/connected objects with opencv

I want to detect and count the objects inside an image that touch, while ignoring what could be considered a single object. I have the basic image, on which I tried applying cv2.HoughCircles() to identify some circles. I then parsed the returned array and tried using cv2.circle() to draw them on the image.
However, I seem to always get too many circles returned by cv2.HoughCircles() and couldn't figure out how to count only the objects that are touching.
This is the image I was working on:
My code so far:
import numpy
import matplotlib.pyplot as pyp
import cv2

segmented = cv2.imread('photo')
# note: cv2.HoughCircles expects a single-channel 8-bit image
houghCircles = cv2.HoughCircles(segmented, cv2.HOUGH_GRADIENT, 1, 80, param1=450, param2=10, minRadius=30, maxRadius=200)
houghArray = numpy.uint16(houghCircles)[0, :]
for circle in houghArray:
    cv2.circle(segmented, (circle[0], circle[1]), circle[2], (0, 250, 0), 3)
And this is the image I get, which is quite far from what I really want.
How can I properly identify and count said objects?
Here is one way in Python/OpenCV: get each contour's area and the area of its convex hull, then take the ratio (area / convex_hull_area). If the ratio is small enough, the contour is a cluster of blobs; otherwise it is an isolated blob.
Input:
import cv2
import numpy as np

# read input image
img = cv2.imread('blobs_connected.jpg')

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold to binary
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]

# find contours
contour_img = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

index = 1
isolated_count = 0
cluster_count = 0
for cntr in contours:
    area = cv2.contourArea(cntr)
    convex_hull = cv2.convexHull(cntr)
    convex_hull_area = cv2.contourArea(convex_hull)
    ratio = area / convex_hull_area
    if ratio < 0.91:
        # cluster contours in red
        cv2.drawContours(contour_img, [cntr], 0, (0, 0, 255), 2)
        cluster_count = cluster_count + 1
    else:
        # isolated contours in green
        cv2.drawContours(contour_img, [cntr], 0, (0, 255, 0), 2)
        isolated_count = isolated_count + 1
    index = index + 1

print('number_clusters:', cluster_count)
print('number_isolated:', isolated_count)

# save result
cv2.imwrite("blobs_connected_result.jpg", contour_img)

# show images
cv2.imshow("thresh", thresh)
cv2.imshow("contour_img", contour_img)
cv2.waitKey(0)
Clusters in Red, Isolated blobs in Green:
Textual Information:
number_clusters: 4
number_isolated: 81
Approach it in steps.
Label connected components: two touching blobs get the same label because they're connected. So far so good.
Now separate your blobs. Use watershed (see the first comment) or whatever other method gives you results. I can't fully predict the watershed approach: it might deal with touching blobs of dissimilar size, or it might do something silly. The sample/tutorial also assumes a minimum size (0.7 * max peak); maybe plug in something absolute in pixels instead.
Then, for each separated blob, check which label it sits on (take the coordinates of its centroid, to be safe), and note down a +1 for that label (a histogram).
Any label that has more than one separated blob sitting on it is what you are looking for. A sketch of that bookkeeping follows.
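A minimal sketch of the bookkeeping, under the assumption that thresh is your binary image and separated is a label image produced by your blob-splitting step (watershed or otherwise):
import cv2
import numpy as np

# step 1: label connected components; touching blobs share a label
n_comp, comp_labels = cv2.connectedComponents(thresh)

# step 3: histogram of separated blobs per connected component
hist = np.zeros(n_comp, dtype=int)
for blob_label in np.unique(separated):
    if blob_label == 0:
        continue
    # centroid coordinates of this separated blob
    ys, xs = np.nonzero(separated == blob_label)
    cy, cx = int(ys.mean()), int(xs.mean())
    hist[comp_labels[cy, cx]] += 1

# step 4: components carrying more than one separated blob were touching
touching = np.nonzero(hist[1:] > 1)[0] + 1
print('labels with touching blobs:', touching)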

How to find the centre of these sometimes-overlapping circles

As part of a project I'm working on, I need to find the centre-point of some "blobs" in an image using OpenCV with Python.
I'm having a bit of trouble with it, and would truly appreciate any help or insight :)
My current method is to get the contours of the image, overlay ellipses on those, and then use the blob detector to find the centre of each of these.
This works fairly well, but occasionally I have extraneous blobs that I need to ignore, and sometimes the blobs are touching each-other.
Here's an example of when it goes well:
Good source image:
After extracting contours:
With the blobs detected:
And when it goes poorly (you can see that it has incorrectly overlaid an ellipse over three blobs, and detected one that I don't want):
Bad source image:
After extracting contours:
With the blobs detected:
This is the code I currently use. I'm unsure of any other option.
def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 50, 150, 0)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    drawn_img = np.zeros(img.shape, np.uint8)
    min_area = 50
    min_ellipses = []
    for cnt in contours:
        if cv2.contourArea(cnt) >= min_area:
            ellipse = cv2.fitEllipse(cnt)
            cv2.ellipse(drawn_img, ellipse, (0, 255, 0), -1)
    plot_img(drawn_img, size=12)
    # Change thresholds
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255
    params.filterByCircularity = True
    params.minCircularity = 0.75
    params.filterByArea = True
    params.minArea = 150
    # Set up the detector
    detector = cv2.SimpleBlobDetector_create(params)
    # Detect blobs
    keypoints = detector.detect(drawn_img)
    for k in keypoints:
        x = round(k.pt[0])
        y = round(k.pt[1])
        line_length = 20
        cv2.line(img, (x - line_length, y), (x + line_length, y), (255, 0, 0), 2)
        cv2.line(img, (x, y - line_length), (x, y + line_length), (255, 0, 0), 2)
    plot_img(img, size=12)
Thank you so much for reading this far, I sincerely hope someone can help me out, or point me in the right direction. Thanks!
Blob detector
Currently, your implementation is redundant. From the SimpleBlobDetector() docs:
The class implements a simple algorithm for extracting blobs from an image:
Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
Extract connected components from every binary image by findContours() and calculate their centers.
Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter.
From the groups, estimate final centers of blobs and their radiuses and return as locations and sizes of keypoints.
So you're already implementing part of these steps yourself, which might give some unexpected behavior. You could try playing with the parameters to see if you can figure out some that work for you (try creating trackbars to tweak the parameters and get live results of your algorithm with different blob detector settings; a minimal sketch follows).
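For example, a minimal trackbar setup could look like this (the window name, filename, and parameter ranges are all arbitrary choices):
import cv2

def retune(_=None):
    # rebuild the detector with the current slider values and redraw
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = max(1, cv2.getTrackbarPos('minArea', 'tune'))
    params.filterByCircularity = True
    params.minCircularity = cv2.getTrackbarPos('minCirc x100', 'tune') / 100.0
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(img)
    vis = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                            cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow('tune', vis)

img = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
cv2.namedWindow('tune')
cv2.createTrackbar('minArea', 'tune', 150, 2000, retune)
cv2.createTrackbar('minCirc x100', 'tune', 75, 100, retune)
retune()
cv2.waitKey(0)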
Modifying your pipeline
However, you've already got most of your own pipeline written, so you can easily remove the blob detector and implement your own algorithm. If you simply drop your threshold a bit, you can easily get clearly marked circles, and then blob detection is as simple as contour detection. If you have a separate contour for each blob, then you can calculate the centroid of the contour with moments(). For example:
def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 100, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
    min_area = 50  # same value as in your original code
    line_length = 20
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            # centroid from the contour moments
            M = cv2.moments(c)
            x = int(M['m10'] / M['m00'])
            y = int(M['m01'] / M['m00'])
            cv2.line(img, (x - line_length, y), (x + line_length, y), (255, 0, 0), 2)
            cv2.line(img, (x, y - line_length), (x, y + line_length), (255, 0, 0), 2)
Getting more involved
This same pipeline can be used to automatically loop through threshold values so you don't have to guess and hardcode them. Since the blobs all seem roughly the same size, you can loop through thresholds until all contours have roughly the same area. You could do this, for example, by finding the median contour area, defining some percentage above and below the median that you'll allow, and checking whether all the detected contours fit within those bounds.
Here's an animated gif of what I mean. Notice that the gif stops once the contours are separated:
Then you can simply find the centroids of those separated contours. Here's the code:
def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for thresh_val in range(0, 255):
        # threshold and detect contours
        thresh = cv2.threshold(imgray, thresh_val, 255, cv2.THRESH_BINARY)[1]
        contours = cv2.findContours(thresh,
                                    cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
        # filter contours by area
        min_area = 50
        filtered_contours = [c for c in contours
                             if cv2.contourArea(c) >= min_area]
        area_contours = np.array([cv2.contourArea(c) for c in filtered_contours])
        # acceptable deviation from the median contour area
        median_area = np.median(area_contours)
        dev = 0.3
        lowerb = median_area - dev * median_area
        upperb = median_area + dev * median_area
        # break when all contours are within the deviation from the median area
        if ((area_contours > lowerb) & (area_contours < upperb)).all():
            break
    # draw the center location of the blobs
    line_length = 8
    cross_color = (255, 0, 0)
    for c in filtered_contours:
        M = cv2.moments(c)
        x = int(M['m10'] / M['m00'])
        y = int(M['m01'] / M['m00'])
        cv2.line(img, (x - line_length, y), (x + line_length, y), cross_color, 2)
        cv2.line(img, (x, y - line_length), (x, y + line_length), cross_color, 2)
Note that here I looped through all possible threshold values with range(0, 255) to give 0, 1, ..., 254 but really you could start higher and skip through a few values at a time with, say, range(50, 200, 5) to get 50, 55, ..., 195 which would of course be much faster.
The "standard" approach for such blob-splitting problem is by means of the watershed transform. It can be applied on the binary image, using a transform distance, or directly on the grayscale image.
Oversegmentation problems can make it tricky, but it seems that your case will not suffer from that.
To find the center, I would usually recommend a weighted average of the pixel coordinates to get a noise reduction effect, but in this case I would probably go for the location of the maximum intensity, which won't be influenced by the deformation of the shape.
Here is what you get with a grayscale watershed (the region intensity is the average). Contrary to what I initially thought, there is some fragmentation due to irregularities in the blobs.
You can improve this with a little lowpass filtering before segmentation.
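For illustration, a sketch of that grayscale variant with scikit-image (assuming gray holds the grayscale blob image; the smoothing sigma and the minimum peak distance are guesses):
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# lowpass filter first, to reduce fragmentation
smooth = gaussian(gray, sigma=3)
mask = smooth > threshold_otsu(smooth)

# seed markers at local intensity maxima, then flood the inverted image
peaks = peak_local_max(smooth, min_distance=20)
markers = np.zeros(smooth.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-smooth, markers, mask=mask)

# centre of each region: the brightest pixel, robust to shape deformation
for lab in np.unique(labels):
    if lab == 0:
        continue
    masked = np.where(labels == lab, smooth, -np.inf)
    cy, cx = np.unravel_index(np.argmax(masked), smooth.shape)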

Laser curved-line detection using OpenCV and Python

I have extracted the laser curve from this image:
And now I'm trying to obtain a set of points (the more, the better) lying in the middle of this curve.
I tried splitting the image into vertical stripes and then detecting the centroid of each stripe.
But it doesn't produce many points, and it's not satisfactory at all!
img = cv2.Canny(img, 50, 150, apertureSize=3)
sub = 100
step = int(img.shape[1] / sub)
centroid = []
for i in range(sub):
    x0 = i * step
    x1 = (i + 1) * step - 1
    temp = img[:, x0:x1]
    # with OpenCV 3, the contours are the second return value
    _, contours, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if contours != []:
        for cnt in contours:
            M = cv2.moments(cnt)
            if M['m00'] != 0:
                centroid.append((x0 + int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])))
I also tried cv2.fitLine(), but it wasn't satisfactory either.
How could I efficiently detect points in the middle of this curve? Regards.
I think you are getting fewer points for the following two reasons:
using an edge detector: depending on the thresholds, sometimes the edges may not reasonably represent the curve
sampling the image using a large step
Try the following instead.
# threshold the image using a threshold value of 0
ret, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
# find contours of the binarized image
contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# output image for the curves
curves = np.zeros((img.shape[0], img.shape[1], 3), np.uint8)
for i in range(len(contours)):
    # for each contour, draw the filled contour
    draw = np.zeros((img.shape[0], img.shape[1]), np.uint8)
    cv2.drawContours(draw, contours, i, (255, 255, 255), -1)
    # for each column, calculate the centroid
    for col in range(draw.shape[1]):
        M = cv2.moments(draw[:, col])
        if M['m00'] != 0:
            x = col
            y = int(M['m01'] / M['m00'])
            curves[y, x, :] = (0, 0, 255)
I get a curve like this:
You can also use the distance transform and then, for each column of an individual contour, take the row with the maximum distance value; a sketch follows.
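A sketch of that variant, reusing the draw and curves images of a single filled contour from the loop above (the mask size of 5 is just the value used elsewhere on this page):
# distance transform of the filled contour; interior pixels get
# values proportional to their distance from the curve border
dist = cv2.distanceTransform(draw, cv2.DIST_L2, 5)
for col in range(dist.shape[1]):
    column = dist[:, col]
    if column.max() > 0:
        # the row with the largest distance lies on the centreline
        y = int(np.argmax(column))
        curves[y, col, :] = (0, 255, 0)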
