Firstly, I am new to Python and OpenCV, so my understanding is limited; however, I am trying to learn as best I can.
I am currently struggling to locate contours (bullet holes) on a simple ringed target so that I can eventually score each hole. I have managed to solve a similar problem on a different image, and I am wondering what I can do to get the same method to work on the new one.
Successful attempt at scoring a target
This is the target that my problem concerns
When I use these HSV values I am presented with only the bullet holes. My limited knowledge tells me that these HSV values are probably useful for thresholding, but I cannot seem to work out the execution.
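My guess is that it would look something like this (a rough sketch on my part; the filename and HSV bounds are placeholders, not values I've actually tuned):

import cv2

image = cv2.imread("target.jpg")  # placeholder filename
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# placeholder bounds; substitute the HSV values that isolate the bullet holes
mask = cv2.inRange(hsv, (0, 0, 0), (30, 255, 255))
cv2.imshow("mask", mask)
cv2.waitKey(0)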
The method used to locate the contours in the example target is shown below:
import cv2
import numpy as np
import imutils

imgREDUCED = cv2.inRange(image, (60, 60, 60), (150, 150, 150))
kernel = np.ones((10, 10), np.uint8)
opening = cv2.morphologyEx(imgREDUCED, cv2.MORPH_OPEN, kernel)
thresh = cv2.threshold(opening, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
for c in cnts:
    if cv2.contourArea(c) > 1:
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
What steps can I take to locate the contours on this new target image?
All help is appreciated, and I will try my best to answer any questions.
By finding the bullseye and the outer ring we can calculate the score of each hole based on its distance from the center. Here are the steps I took to try and solve this.
First, I converted to HSV and took a look at the different channels:
Hue
Value
From the hue channel we can clearly see the holes in the target so it's a good candidate for thresholding for those. On the value channel we can clearly see the rings and the center so we'll use that channel to detect those.
Hue Mask (0, 30)
Value Mask (0, 155)
We can use findContours to outline the white parts of the mask. From that outline we can get the center of the contour and the area of the contour. Using this on the hue mask we get the center of each hole and using this on the value mask we can get the biggest ring by looking for the contour with the largest area. With the area of the biggest ring, we can estimate the radius via the circle's area formula.
To find the bullseye I thresholded the value mask again, this time using the range (215, 255) to search for high values. This perfectly captures just the center, but you might not always get that lucky with your pictures. Using the findContours function again, I get the center and radius of the bullseye.
Now I can score each of the holes: I get the distance from the hole to the center and figure out where on the scoresheet it lands.
Marked the outer ring, the center of each hole, the score of each hole, and the bullseye:
Here's the code:
import cv2
import math
import numpy as np
# get center of contour
def centroid(contour):
    M = cv2.moments(contour);
    cx = int(round(M['m10']/M['m00']));
    cy = int(round(M['m01']/M['m00']));
    center = (cx, cy);
    return center;
# load image
img = cv2.imread("target.png");
img = img[:,:-1,:]; # there's a bit of wall or something on the right
# hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV);
h,s,v = cv2.split(hsv);
# find the outer ring
v_mask = cv2.inRange(v, 0, 155);
# contours
_, contours, _ = cv2.findContours(v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# find the biggest contour
biggest_cntr = None;
biggest_area = 0;
marked = img.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > biggest_area:
        biggest_area = area;
        biggest_cntr = contour;
cv2.drawContours(marked, [biggest_cntr], -1, (0, 255, 0), 3);
# find biggest radius
big_radius = math.sqrt(biggest_area / math.pi);
# find center
center_v_mask = cv2.inRange(v, 215, 255);
_, contours, _ = cv2.findContours(center_v_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
center = centroid(contours[0]);
# use this to calculate the middle radius
area = cv2.contourArea(contours[0]);
little_radius = math.sqrt(area / math.pi);
# draw center
marked = cv2.circle(marked, center, 2, (155,155,0), -1);
# mask holes
h_mask = cv2.inRange(h, 0, 30);
h_mask = cv2.medianBlur(h_mask, 11);
# draw contour centers
_, contours, _ = cv2.findContours(h_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
holes = [];
for contour in contours:
    c = centroid(contour);
    holes.append(c);
    marked = cv2.circle(marked, c, 2, (0,0,155), -1);
# calculate approximate size of each ring
# (with foreknowledge that there are 9 + bullseye)
remaining_radius = big_radius - little_radius;
slices = remaining_radius / 9;
# calculate scores
scores = [];
for hole in holes:
    # get distance from center
    dx = hole[0] - center[0];
    dy = hole[1] - center[1];
    dist = math.sqrt(dx*dx + dy*dy);

    # check score
    dist -= little_radius;
    if dist < 0:
        scores.append(10);
    else:
        scores.append(9 - int(dist / slices));
# draw the scores
font = cv2.FONT_HERSHEY_SIMPLEX;
for a in range(len(holes)):
    tup = (holes[a][0], holes[a][1]);
    marked = cv2.putText(marked, str(scores[a]), tup, font, 1, (0,0,155), 2, cv2.LINE_AA);
# show
cv2.imshow("marked", marked);
cv2.waitKey(0);
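Note that the unpacking _, contours, _ = cv2.findContours(...) assumes OpenCV 3.x. On OpenCV 2.x and 4.x, findContours returns two values (contours, hierarchy), so adjust the unpacking if you're on a different version.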
I am attempting to find the area inside an arbitrarily-shaped closed curve plotted in python (example image below). So far I have tried both the alphashape and polygon methods to achieve this, but both have failed. I am now attempting to use OpenCV and the flood-fill method to count the number of pixels inside the curve; I will later convert that count to an area, given the area that a single pixel encloses on the plot.
Example image:
testplot.jpg
In order to do this, I am doing the following, which I adapted from another post about OpenCV.
import cv2
import numpy as np
# Input image
img = cv2.imread('testplot.jpg', cv2.IMREAD_GRAYSCALE)
# Dilate to better detect contours
temp = cv2.dilate(img, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Find largest contour
cnts, _ = cv2.findContours(255-temp, cv2.RETR_TREE , cv2.CHAIN_APPROX_NONE) #255-img and cv2.RETR_TREE is to account for how cv2 expects the background to be black, not white, so I convert the background to black.
largestCnt = [] #I expect this to yield the blue contour
for cnt in cnts:
if (len(cnt) > len(largestCnt)):
largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = img2 = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0
# Generate intermediate image, draw largest contour onto it, flood fill this contour
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
area = cv2.countNonZero(temp) #Number of pixels encircled by blue line
I expect this to produce the same image as above, but with the inside of the contour filled in white and the background and original blue contour in black. Instead, I end up with this:
result.jpg
While at first glance this appears to have accurately turned the area inside the contour white, the white area is actually larger than the area inside the contour, so the result overestimates the number of pixels inside it.
Any input on this would be greatly appreciated. I am fairly new to OpenCV so I may have misunderstood something.
EDIT:
Thanks to a comment below, I made some edits and this is now my code, with edits noted:
import cv2
import numpy as np
# EDITED INPUT IMAGE: Input image
img = cv2.imread('testplot2.jpg', cv2.IMREAD_GRAYSCALE)
# EDIT: threshold
_, temp = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY_INV)
# EDIT, REMOVED: Dilate to better detect contours
# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_NONE)
largestCnt = [] #I expect this to yield the blue contour
for cnt in cnts:
if (len(cnt) > len(largestCnt)):
largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = img2 = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0
# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
area = cv2.countNonZero(temp) #Number of pixels encircled by blue line
I input a different image with the axes and the frame that python adds by default removed, for ease. I get what I expect at the second step. However, in the result both the original contour and the area it encircles appear to have been made white, whereas I want the original contour to be black and only the area it encircles to be white. How might I achieve this?
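One idea I had (untested, just a guess on my part) is to redraw the contour in black on top of the flood-filled result, e.g.:

temp = cv2.drawContours(temp, [largestCnt], -1, 0, 3)  # thickness guessed at the curve's width

but I'm not sure that's the right approach.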
The problem is the opening operation at the end of your pipeline. That morphological operation finishes with a dilation that expands the white contour, increasing its area. Let’s try a different approach where no morphology is involved. These are the steps:
Convert your image to grayscale
Apply Otsu’s thresholding to get a binary image; let’s work with black and white pixels only
Apply a first flood-fill operation at image location (0,0) to get rid of the outer white space
Filter small blobs using an area filter
Find the “Curve Canvas” (the white space that encloses the curve) and locate and store its starting point at (targetX, targetY)
Apply a second flood-fill at location (targetX, targetY)
Get the area of the isolated blob with cv2.countNonZero
Let’s take a look at the code:
import cv2
import numpy as np
# Set image path
path = "C:/opencvImages/"
fileName = "cLIjM.jpg"
# Read Input image
inputImage = cv2.imread(path+fileName)
inputCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
This is the binary image you get:
Now, let’s flood-fill at the corner located at (0,0) with a black color to get rid of the first white space. This step is very straightforward:
# Flood-fill background, seed at (0,0) and use black color:
cv2.floodFill(binaryImage, None, (0, 0), 0)
This is the result, note how the first big white area is gone:
Let’s get rid of the small blobs by applying an area filter. Everything below an area of 100 is gonna be deleted:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
    cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 100
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
This is the result of the filter:
Now what remains is the second white area. I need to locate its starting point because I want to apply a second flood-fill operation at that location. I’ll traverse the image to find the first white pixel, like this:
# Get Image dimensions:
height, width = filteredImage.shape
# Store the flood-fill point here:
targetX = -1
targetY = -1
for i in range(0, width):
    for j in range(0, height):
        # Get current binary pixel:
        currentPixel = filteredImage[j, i]
        # Check if it is the first white pixel:
        if targetX == -1 and targetY == -1 and currentPixel == 255:
            targetX = i
            targetY = j

print("Flooding in X = "+str(targetX)+" Y: "+str(targetY))
There’s probably a more elegant, Python-oriented way of doing this, but I’m still learning the language. Feel free to improve the script (and share it here). The loop, however, gets me the location of the first white pixel, so I can now apply a second flood-fill at this exact location:
# Flood-fill background, seed at (targetX, targetY) and use black color:
cv2.floodFill(filteredImage, None, (targetX, targetY), 0)
You end up with this:
As you can see, all that’s left is to count the number of non-zero pixels:
# Get the area of the target curve:
area = cv2.countNonZero(filteredImage)
print("Curve Area is: "+str(area))
The result is:
Curve Area is: 1510
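As an aside, the first-white-pixel loop above can be vectorized with numpy. This is just a sketch (it scans the transposed image so numpy’s row-major order matches the column-by-column traversal of the loop):

import numpy as np

# rows of the transpose are columns of the original image, so the first
# hit here matches the first hit of the column-major loop above
whitePixels = np.argwhere(filteredImage.T == 255)
if len(whitePixels) > 0:
    targetX, targetY = int(whitePixels[0][0]), int(whitePixels[0][1])
    print("Flooding in X = " + str(targetX) + " Y: " + str(targetY))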
Here is another approach using Python/OpenCV.
Read the input
Convert to HSV colorspace
Threshold on the color range of blue
Find the largest contour
Get its area and print it
Draw the contour as a white filled contour on a black background
Save the results
Input:
import cv2
import numpy as np
# read input image
img = cv2.imread('closed_curve.jpg')
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# select blue color range in hsv
lower = (24,128,115)
upper = (164,255,255)
# threshold on blue in hsv
thresh = cv2.inRange(hsv, lower, upper)
# get largest contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
area = cv2.contourArea(big_contour)
print("Area =",area)
# draw filled contour on black background
result = np.zeros_like(thresh)
cv2.drawContours(result, [big_contour], -1, 255, cv2.FILLED)
# save result
cv2.imwrite("closed_curve_thresh.jpg", thresh)
cv2.imwrite("closed_curve_result.jpg", result)
# view result
cv2.imshow("threshold", thresh)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image:
Result Filled Contour On Black Background:
Area Result:
Area = 2347.0
I would like to convert this radar graph into numeric values for each component (dotted lines) using OpenCV for Python. How would I best do this? I've been thinking about detecting the center of the graph as well as the intersections of the dotted lines with the gray area. By doing this I could measure the distance between the center and each of these intersections to determine a value for each component. However, this seems rather complex, and I have no idea how to start. Could someone help me out here?
EDIT: The purpose is to write a piece of software that can extract data from 250 of these graphs. (I have better quality scans)
I would do something like this (sorry, it's pseudo-code; if you think the idea is good enough, I'll try to write some real code):
1. Find the circle center (maybe using the HoughCircles function)
2. Inverse thresholding to highlight your dark grey area
3. Call the OpenCV function approxPolyDP to get the polygon representing this area
4. For each vertex, measure its distance from the center and convert it to your desired scale (see the sketch after the tutorial links below)
I think it should work.
hough circle tutorial
https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
approx poly tutorial
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
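To make step 4 concrete, the vertex-to-value conversion could look something like this sketch (assuming the circle center (cx, cy) and radius r from HoughCircles, the vertices as an N x 2 array, e.g. approx.reshape(-1, 2), and a linear axis running from 0 at the center to max_value at the outer circle):

import math

def vertex_values(vertices, cx, cy, r, max_value=100):
    # distance of each polygon vertex from the center, rescaled to the axis
    values = []
    for (vx, vy) in vertices:
        dist = math.hypot(vx - cx, vy - cy)
        values.append(max_value * dist / r)
    return values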
EDIT:
I've had some spare time, so I wrote an initial piece of code to extract the circle, the radii, and the poly; hope it helps you.
import math
import numpy as np
import cv2 as cv

img = cv.imread("c:\\temp\\test.jpg", cv.IMREAD_COLOR)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
#thresholds for isolate circle and poly
ret, thres_poly_temp = cv.threshold(gray, 90, 255, cv.THRESH_BINARY_INV)
ret, thres_circle = cv.threshold(gray, 110, 255, cv.THRESH_BINARY_INV)
#cleanup image for detect poly
thres_poly = thres_poly_temp.copy()
thres_poly = cv.GaussianBlur(thres_poly_temp,(3,3),0)
thres_poly = cv.medianBlur( thres_poly, 5)
#bitwise_and to keep just the poly, removing the circle
cv.bitwise_and(thres_poly_temp, thres_circle, thres_poly)
kernel = np.ones((3, 3),np.uint8)
thres_poly = cv.morphologyEx(thres_poly, cv.MORPH_CLOSE, kernel)
kernel = np.ones((3, 3),np.uint8)
thres_poly = cv.morphologyEx(thres_poly, cv.MORPH_OPEN, kernel)
#find circle
circle = cv.HoughCircles(thres_circle, 3, 2, 800, minRadius = 100, maxRadius=500, param1=80, param2=100)
radius_list = []
if circle is not None:
    circle = np.round(circle[0, :]).astype("int")
    for (x, y, r) in circle:
        cv.circle(gray, (x, y), r, (255, 255, 0), 3)
        cv.circle(gray, (x, y), 3, (255, 255, 0), 3)
        radius_list.append((x + r, y))
a = 0
#find radius
while a < 360:
    rad = math.radians(a)
    x2 = int((radius_list[0][0] - x) * math.cos(rad)) - int((radius_list[0][1] - y) * math.sin(rad)) + x
    y2 = int((radius_list[0][0] - x) * math.sin(rad)) + int((radius_list[0][1] - y) * math.cos(rad)) + y
    radius_list.append((x2, y2))
    a = a + 18
    cv.line(gray, (x, y), (x2, y2), (255, 255, 0), 2)
#find poly contour
contours,hierarchy = cv.findContours(thres_poly, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
#extract contour with max area
maxArea = -1
maxAreaContour = contours[0]
for contour in contours:
    area = abs(cv.contourArea(contour))
    if area > maxArea:
        maxArea = area
        maxAreaContour = contour
#approx poly to get contours
epsilon = 0.1*cv.arcLength(maxAreaContour, True)
approx = cv.approxPolyDP(maxAreaContour, 5, True)  # note: epsilon is computed above but a fixed value of 5 is used here
cv.drawContours(gray, [approx],-1,(0,255,0),2)
cv.imshow("1", gray)
#now just iterate all the radius with the contour to find the intersection
# it should be pretty straight forward
output sample
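For that last step, one possible radius/contour intersection looks like this (a sketch, untested against the real image; approx, the center (x, y), and the radius endpoints in radius_list come from the code above):

def ray_intersection(approx, center, end_point, steps=200):
    # walk along the radius and keep the farthest point still inside the poly
    (cx, cy) = center
    (ex, ey) = end_point
    last_inside = center
    for s in range(steps + 1):
        t = s / steps
        px = cx + t * (ex - cx)
        py = cy + t * (ey - cy)
        if cv.pointPolygonTest(approx, (px, py), False) >= 0:
            last_inside = (px, py)
    return last_inside

You would call it once per radius, e.g. ray_intersection(approx, (x, y), radius_list[i]).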
EDIT 2: Still missing: aligning the image. As I said, you can use the horizontal and vertical dotted lines to do that (use Hough lines to get them, then determine m and q and do the inverse roto-translation).
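A rough sketch of that alignment (assuming the dotted axis lines produce the strongest peak in the Hough accumulator, which may not hold for every scan):

lines = cv.HoughLines(thres_poly_temp, 1, np.pi / 180, 150)
if lines is not None:
    rho, theta = lines[0][0]
    angle = math.degrees(theta) - 90  # deviation from horizontal
    h, w = gray.shape
    M = cv.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    aligned = cv.warpAffine(gray, M, (w, h))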
As part of a project I'm working on, I need to find the centre-point of some "blobs" in an image using OpenCV with Python.
I'm having a bit of trouble with it, and would truly appreciate any help or insight :)
My current method is to: get the contours of the images, overlay ellipses on those, use the blob detector to find the centre of each of these.
This works fairly well, but occasionally I have extraneous blobs that I need to ignore, and sometimes the blobs are touching each-other.
Here's an example of when it goes well:
Good source image:
After extracting contours:
With the blobs detected:
And when it goes poorly (you can see that it's incorrectly overlayed an ellipse over three blobs, and detected one that I don't want):
Bad source image:
After extracting contours:
With the blobs detected:
This is the code I currently use. I'm unsure of any other option.
import cv2
import numpy as np

def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 50, 150, 0)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    drawn_img = np.zeros(img.shape, np.uint8)
    min_area = 50
    min_ellipses = []
    for cnt in contours:
        if cv2.contourArea(cnt) >= min_area:
            ellipse = cv2.fitEllipse(cnt)
            cv2.ellipse(drawn_img, ellipse, (0, 255, 0), -1)
    plot_img(drawn_img, size=12)

    # Change thresholds
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255
    params.filterByCircularity = True
    params.minCircularity = 0.75
    params.filterByArea = True
    params.minArea = 150
    # Set up the detector
    detector = cv2.SimpleBlobDetector_create(params)
    # Detect blobs.
    keypoints = detector.detect(drawn_img)
    for k in keypoints:
        x = round(k.pt[0])
        y = round(k.pt[1])
        line_length = 20
        cv2.line(img, (x-line_length, y), (x+line_length, y), (255, 0, 0), 2)
        cv2.line(img, (x, y-line_length), (x, y+line_length), (255, 0, 0), 2)
    plot_img(img, size=12)
Thank you so much for reading this far, I sincerely hope someone can help me out, or point me in the right direction. Thanks!
Blob detector
Currently, your implementation is redundant. From the SimpleBlobDetector() docs:
The class implements a simple algorithm for extracting blobs from an image:
Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
Extract connected components from every binary image by findContours() and calculate their centers.
Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter.
From the groups, estimate final centers of blobs and their radiuses and return as locations and sizes of keypoints.
So you're implementing part of the steps already, which might give some unexpected behavior. You could try playing with the parameters to see if you can figure out some that work for you (try creating trackbars to play with the parameters and get live results of your algorithm with different blob detector parameters).
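For example, a minimal trackbar setup might look something like this (a sketch; 'blobs.png' is a placeholder filename, and the two parameters shown are just a starting point):

import cv2

def redetect(_=0):
    # rebuild the detector from the current trackbar positions and redraw
    params = cv2.SimpleBlobDetector_Params()
    params.minThreshold = cv2.getTrackbarPos('minThreshold', 'blobs')
    params.filterByArea = True
    params.minArea = max(1, cv2.getTrackbarPos('minArea', 'blobs'))
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(img)
    vis = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                            cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow('blobs', vis)

img = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
cv2.namedWindow('blobs')
cv2.createTrackbar('minThreshold', 'blobs', 50, 200, redetect)
cv2.createTrackbar('minArea', 'blobs', 150, 1000, redetect)
redetect()
cv2.waitKey(0)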
Modifying your pipeline
However, you've already got most of your own pipeline written, so you can easily remove the blob detector and implement your own algorithm. If you simply drop your threshold a bit, you can easily get clearly marked circles, and then blob detection is as simple as contour detection. If you have a separate contour for each blob, then you can calculate the centroid of the contour with moments(). For example:
def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 100, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
    min_area = 50  # minimum contour area to keep
    line_length = 20
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            M = cv2.moments(c)
            x = int(M['m10']/M['m00'])
            y = int(M['m01']/M['m00'])
            cv2.line(img, (x-line_length, y), (x+line_length, y), (255, 0, 0), 2)
            cv2.line(img, (x, y-line_length), (x, y+line_length), (255, 0, 0), 2)
Getting more involved
This same pipeline can be used to automatically loop through threshold values so you don't have to guess and hardcode them. Since the blobs all seem roughly the same size, you can loop through until all contours have roughly the same area. You could do this, for example, by finding the median contour area, defining some percentage above and below that median that you'll allow, and checking whether all the detected contours fit within those bounds.
Here's an animated gif of what I mean. Notice that the gif stops once the contours are separated:
Then you can simply find the centroids of those separated contours. Here's the code:
def process_and_detect(img_path):
    img = cv2.imread(img_path)
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for thresh_val in range(0, 255):
        # threshold and detect contours
        thresh = cv2.threshold(imgray, thresh_val, 255, cv2.THRESH_BINARY)[1]
        contours = cv2.findContours(thresh,
                                    cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
        # filter contours by area
        min_area = 50
        filtered_contours = [c for c in contours
                             if cv2.contourArea(c) >= min_area]
        area_contours = np.array([cv2.contourArea(c) for c in filtered_contours])
        # acceptable deviation from median contour area
        median_area = np.median(area_contours)
        dev = 0.3
        lowerb = median_area - dev*median_area
        upperb = median_area + dev*median_area
        # break when all contours are within deviation from median area
        if ((area_contours > lowerb) & (area_contours < upperb)).all():
            break
    # draw center location of blobs
    line_length = 8
    cross_color = (255, 0, 0)
    for c in filtered_contours:
        M = cv2.moments(c)
        x = int(M['m10']/M['m00'])
        y = int(M['m01']/M['m00'])
        cv2.line(img, (x-line_length, y), (x+line_length, y), cross_color, 2)
        cv2.line(img, (x, y-line_length), (x, y+line_length), cross_color, 2)
Note that here I looped through all possible threshold values with range(0, 255) to give 0, 1, ..., 254 but really you could start higher and skip through a few values at a time with, say, range(50, 200, 5) to get 50, 55, ..., 195 which would of course be much faster.
The "standard" approach for such blob-splitting problem is by means of the watershed transform. It can be applied on the binary image, using a transform distance, or directly on the grayscale image.
Oversegmentation problems can make it tricky, but it seems that your case will not suffer from that.
To find the center, I would usually recommend a weighted average of the pixel coordinates to get a noise reduction effect, but in this case I would probably go for the location of the maximum intensity, which won't be influenced by the deformation of the shape.
Here is what you get with a grayscale watershed (the region intensity is the average). Contrary to what I initially thought, there is some fragmentation due to irregularities in the blobs.
You can improve this with a little lowpass filtering before segmentation.
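For reference, the standard distance-transform watershed recipe in OpenCV looks roughly like this (a generic sketch with a placeholder filename, not the exact pipeline used for the result above):

import cv2
import numpy as np

img = cv2.imread("blobs.png")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# sure background by dilation, sure foreground from the distance transform
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# label the seeds, shift labels so the unknown region is 0, then flood
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)  # boundaries between blobs end up as -1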
How can I detect optical circles (hollow as well as filled)? Is there an approach that can solve the segmentation issue in a generalized way?
I was not able to detect the optical circles when I applied the following approach:
import numpy as np
import cv2
image= cv2.imread("cropped.jpg")
lower_bound = np.array([0,0,0])
upper_bound = np.array([255,255,195])
blur_factor = (3,3)
image= cv2.blur(image, blur_factor)
mask = cv2.inRange(image, lower_bound, upper_bound)
kernel = np.ones((3,3),np.uint8)
closing = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)[0]
contours.sort(key=lambda x:cv2.boundingRect(x)[0])
array = []
ii = 1
for c in contours:
    (x, y), r = cv2.minEnclosingCircle(c)
    center = (int(x), int(y))
    r = int(r)
    if r >= 12 and r <= 15:
        cv2.circle(image, center, r, (0, 255, 0), 2)
        array.append(center)

for i in array:
    text_color = (0, 0, 255)
    cv2.putText(image, str(ii), i, cv2.FONT_HERSHEY_SIMPLEX, 0.5, text_color, 2)
    ii = ii + 1

cv2.imshow("masked", mask)
cv2.imshow("circled", image)
cv2.waitKey(0)
Your question is not entirely clear, but I'm gonna go ahead and suppose you wanna detect black circles on these images.
I'm not gonna delve into smoothing parameters; I don't think that's the issue here (the image is not very blurry and is easy to segment). Your code is fine for detecting components enclosed in a circle with a certain radius. You're getting a bunch of false positives because an object enclosed in a circle is not necessarily a circle.
Consider the two following pink objects: with your code, both of them are detected with an enclosing circle (in white) of the same radius.
Since we are lucky enough to be detecting full circles, an easily recognizable object, I would suggest checking, for each circle you detect, whether the object inside it occupies a big part of that circle or not. This will let you eliminate false positives such as the pink line in the example above.
So with minimum tweaking of your code, I would suggest something like
import numpy as np
import cv2

image = cv2.imread(your_image)
lower_bound = np.array([0, 0, 0])
upper_bound = np.array([255, 255, 195])
blur_factor = (3, 3)
image = cv2.blur(image, blur_factor)
mask = cv2.inRange(image, lower_bound, upper_bound)
maskg = np.copy(mask)
kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)
contours = contours[1]
array = []
ii = 1
for c in contours:
    #for mask creation
    imgg = np.zeros(image.shape[0:2])
    (x, y), r = cv2.minEnclosingCircle(c)
    center = (int(x), int(y))
    r = int(r)
    if r >= 12 and r <= 18:
        #potential interesting circle. Let's check if it's a full circle. Create a mask with only your full circle
        cv2.circle(imgg, center, r, 255, -1)
        #mask your thresholded image by this mask
        masked = cv2.bitwise_and(maskg.astype(np.uint8), maskg.astype(np.uint8), mask=imgg.astype(np.uint8))
        #and count how many white pixels are in this mask (divided by the mask's area)
        circle_fullness = np.sum(masked) / (np.pi * r**2 * 255)
        #if more than X% of the area is indeed an object, then you've got yourself a full circle
        if circle_fullness >= 0.8:
            #and then you consider it as positive
            array.append(center)
            cv2.circle(image, center, r, (0, 255, 0), 2)

for i in array:
    text_color = (0, 0, 255)
    cv2.putText(image, str(ii), i, cv2.FONT_HERSHEY_SIMPLEX, 0.5, text_color, 2)
    ii = ii + 1

cv2.imshow("masked", mask)
cv2.imshow("circled", image)
cv2.waitKey(0)
Result [deleted on demand]
I have extracted the laser curve from this image:
(source: hostingpics.net)
And now I'm trying to obtain a set of points (the more, the better) lying along the middle of this curve.
I have tried splitting the image into vertical stripes and then detecting the centroid in each stripe.
But that doesn't produce many points, and it's not satisfactory at all!
img = cv2.Canny(img, 50, 150, apertureSize=3)
sub = 100
step = int(img.shape[1]/sub)
centroid = []
for i in range(sub):
    x0 = i*step
    x1 = (i+1)*step - 1
    temp = img[:, x0:x1]
    hierarchy, contours, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if contours != []:
        for i in contours:
            M = cv2.moments(i)
            if M['m00'] != 0:
                centroid.append((x0+int(M['m10']/M['m00']), int(M['m01']/M['m00'])))
I also tried cv2.fitLine(), but it wasn't satisfactory either.
How could I detect points in the middle of this curve efficiently? Regards.
I think you are getting fewer points because of the following two reasons:
using an edge detector: depending on the thresholds, sometimes the edges may not reasonably represent the curve
sampling the image using a large step
Try the following instead.
# threshold the image using a threshold value of 0
ret, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
# find contours of the binarized image
contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# curves
curves = np.zeros((img.shape[0], img.shape[1], 3), np.uint8)
for i in range(len(contours)):
    # for each contour, draw the filled contour
    draw = np.zeros((img.shape[0], img.shape[1]), np.uint8)
    cv2.drawContours(draw, contours, i, (255, 255, 255), -1)
    # for each column, calculate the centroid
    for col in range(draw.shape[1]):
        M = cv2.moments(draw[:, col])
        if M['m00'] != 0:
            x = col
            y = int(M['m01']/M['m00'])
            curves[y, x, :] = (0, 0, 255)
I get a curve like this:
You can also use a distance transform and then, for each column of an individual contour, take the row with the maximum distance value, as sketched below.
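That variant could replace the column-centroid step inside the same per-contour loop, something like this sketch (reusing draw and curves from the code above):

    # distance transform of the filled contour; its per-column maximum
    # lies on the medial axis of the curve
    dist = cv2.distanceTransform(draw, cv2.DIST_L2, 5)
    for col in range(dist.shape[1]):
        if dist[:, col].max() > 0:
            row = int(np.argmax(dist[:, col]))
            curves[row, col] = (0, 0, 255)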