Python - OpenCV to detect circles giving strange results

I'm using the code explained in https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/#comment-480634 and trying to detect the small round profile images (5 of them, to be precise) displayed in the lower half of this sample Instagram page (attached). What I can't figure out is:
1. why only one out of the 5 small round profile circles is captured by the code
2. why a big circle is drawn on the page, which seems quite absurd to me
Here is the code I'm using:
# import the necessary packages
import cv2
import numpy as np

image = cv2.imread("instagram_page.png")
# we create a copy of the original image so we can draw our detected circles
# without destroying the original image
output = image.copy()
# the cv2.HoughCircles function requires an 8-bit, single channel image,
# so we'll convert from the BGR color space to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# detect circles in the image. We pass in the image we want to detect circles in as the first argument,
# the circle detection method as the second argument (currently, cv2.HOUGH_GRADIENT
# is the only circle detection method supported by OpenCV and will likely be the only method for some time),
# an accumulator resolution (dp) of 1.7 as the third argument, and finally a minDist of 1 pixel
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.7, minDist=1, param1=300, param2=100, minRadius=3, maxRadius=150)

# ensure at least some circles were found
if circles is not None:
    # circles has shape (1, N, 3), so count the detections in circles[0]
    print("Circles len -> {}".format(len(circles[0])))
    # convert the circles from floating point (x, y) coordinates to integers,
    # allowing us to draw them on our output image
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        orange = (39, 127, 255)
        cv2.circle(output, (x, y), r, orange, 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

img_name = "Output"
cv2.namedWindow(img_name, cv2.WINDOW_NORMAL)
cv2.resizeWindow(img_name, 800, 800)
cv2.imshow(img_name, output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I use minDist=1 to make sure those closely spaced circles can still be captured. Does anybody see something completely wrong with my parameters?

I played around with the parameters and managed to detect all circles (Ubuntu 16.04 LTS x64, Python 3.7, numpy==1.15.1, python-opencv==3.4.3). The changes that mattered were lowering param1 (the upper Canny edge threshold) from 300 to 48, so the faint profile-picture edges survive, and raising minDist from 1 to 100, which suppresses overlapping detections:
circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    1.7,
    minDist=100,
    param1=48,
    param2=100,
    minRadius=2,
    maxRadius=100
)

Related

Using Python OpenCV to accurately find squares from a processed image for a Rubik's Cube solver

I am in the initial stages of writing a Rubik's cube solver and am stuck at the following challenge:
Using the following image-processing code gives me the following image:
import cv2 as cv
import glob
import numpy as np

for img in glob.glob("captured_images/*.jpg"):
    image = cv.imread(img)
    copy = image.copy()
    grey = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    decrease_noise = cv.fastNlMeansDenoising(grey, 10, 15, 7, 21)
    blurred = cv.GaussianBlur(decrease_noise, (3, 3), 0)
    canny = cv.Canny(blurred, 20, 40)
    thresh = cv.threshold(canny, 0, 255, cv.THRESH_OTSU + cv.THRESH_BINARY)[1]
    contours = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    for c in contours:
        # obtain the bounding rectangle coordinates for each square
        x, y, w, h = cv.boundingRect(c)
        # with the bounding rectangle coordinates we draw the green bounding boxes
        cv.rectangle(copy, (x, y), (x + w, y + h), (36, 255, 12), 2)
    cv.imshow('copy', copy)
    cv.waitKey(0)
    cv.destroyAllWindows()
There are numerous bound rectangles highlighted. Trying to filter out only the squares using this code:
contour_list = []
for contour in contours:
    approx = cv.approxPolyDP(contour, 0.01 * cv.arcLength(contour, True), True)
    area = cv.contourArea(contour)
    if len(approx) == 4:
        (x, y, w, h) = cv.boundingRect(approx)
        if (float(w) / h) == 1:
            cv.rectangle(copy, (x, y), (x + w, y + h), (36, 255, 12), 2)
            contour_list.append(contour)
doesn't work, as the detected squares aren't precise enough to satisfy the definition "all sides of a square are equal".
I thought retaking the images against a white background might make it easier to find the relevant squares; however, switching to an image of the cube on a white background and using the original code causes only the larger cube outline to be recognised as a square:
My question is three-fold:
1a) How can I modify my original code for the original image to accurately measure only the relevant squares by using the following criteria for finding squares:
There must be four corners
All four lines must be roughly the same length
All four corners must be roughly 90 degrees
1b) In the second image with the white background, how can I select everything outside the bound rectangle and convert that white background to black, which helps greatly in correctly detecting the appropriate squares?
1c) In general, why is a black background so much more beneficial than a white background in using the cv2.rectangle() function?
Any help in gaining some clearer understanding is much appreciated! :)
How can I modify my original code for the original image to accurately measure only the relevant squares by using the following criteria for finding squares:
Your code only accepts contours that are exactly square. You need to have a "squaredness" factor and then determine some acceptable threshold.
The "squaredness" factor is h/w if w > h else w/h. The closer that value to one, the more square the rectangle is. Then you can accept only rectangles with a factor of .9 or higher (or whatever works best).
In general, why is a black background so much more beneficial than a white background in using the cv2.rectangle() function?
The contour finding algorithm that OpenCV uses is actually:
Suzuki, S. and Abe, K., Topological Structural Analysis of Digitized Binary Images by Border Following. CVGIP 30 1, pp 32-46 (1985)
In your case, the algorithm might have picked up the contours just fine, but you have set the RETR_EXTERNAL flag, which causes OpenCV to report only the outermost contours. Try changing it to RETR_LIST.
Find the OpenCV docs with regards to contour finding here: https://docs.opencv.org/master/d9/d8b/tutorial_py_contours_hierarchy.html
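In the question's code that is a one-flag change (a sketch; RETR_LIST returns every contour, with no hierarchy):
# report all contours, not just the outermost ones
contours = cv.findContours(thresh, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]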

Rectangle detection inaccuracy using approxPolyDP() in OpenCV

As part of a program which contains a series of images to be processed, I first need to detect a green-coloured rectangle. I'm trying to write a program that doesn't use colour masking, since the lighting and glare on the images will make it difficult to find the appropriate HSV ranges.
(p.s. I already have two questions based on this program, but this one is unrelated to those. It's not a follow up, I want to address a separate issue.)
I used the standard rectangle detection technique, making use of the findContours() and approxPolyDP() methods. I added some constraints to get rid of unnecessary rectangles (like aspectRatio > 2.5, since my desired rectangle is clearly the "widest", and area > 1500, to discard random small rectangles).
import numpy as np
import cv2 as cv

img = cv.imread("t19.jpeg")
width = 0
height = 0
start_x = 0
start_y = 0
end_x = 0
end_y = 0
output = img.copy()
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# threshold
th = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 9, 2)
cv.imshow("th", th)

# rectangle detection
contours, _ = cv.findContours(th, cv.RETR_TREE, cv.CHAIN_APPROX_NONE)
for contour in contours:
    approx = cv.approxPolyDP(contour, 0.01 * cv.arcLength(contour, True), True)
    cv.drawContours(img, [approx], 0, (0, 0, 0), 5)
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    x1, y1, w, h = cv.boundingRect(approx)
    a = w * h
    if len(approx) == 4 and x > 15:
        aspectRatio = float(w) / h
        if aspectRatio >= 2.5 and a > 1500:
            print(x1, y1, w, h)
            width = w
            height = h
            start_x = x1
            start_y = y1
            end_x = start_x + width
            end_y = start_y + height
            cv.rectangle(output, (start_x, start_y), (end_x, end_y), (0, 0, 255), 3)
            cv.putText(output, "rectangle " + str(x1) + " , " + str(y1 - 5), (x1, y1 - 5), cv.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))

cv.imshow("op", output)
print("start", start_x, start_y)
print("end", end_x, end_y)
print("width", width)
print("height", height)
It is working flawlessly for all the images, except one:
I used adaptive thresholding to create the threshold, which was used by the findContours() method.
I tried displaying the threshold and the output, and it looks like this:
The thresholds for the other images also looked similar...so I can't pinpoint what exactly has gone wrong in the rectangle detection procedure.
Some tweaks I have tried:
Changing the last two parameters of the adaptive threshold method: I tried 11,1 and 9,1, and for both of them the rectangle in the threshold looked more prominent, but in those cases the output detected no rectangles at all.
I have already ruled out Otsu thresholding, as it does not work for about 4 of my test images.
What exactly can I tweak in the rectangle detection procedure for it to detect this rectangle?
If possible, I also ask for only slight modifications to this method rather than an entirely new one. As I have mentioned, this method works perfectly for all of my other test images, and if a new suggested method works for this image but fails for the others, I'll find myself back here asking why it failed.
Edit: the method that abss suggested worked for this image; however, it failed for:
image 4
image 1, far off
Other test images:
image 1, normal
image 2
image 3
image 9, part 1
image 9, part 2
You can easily do it by adding these lines of code after your threshold step:
kernel = cv.getStructuringElement(cv.MORPH_RECT,(3,3))
th = cv.morphologyEx(th,cv.MORPH_OPEN,kernel)
This will remove noise within the image. You can see this link for more about morphologyEx: https://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html
The results I got are shown below:
I have made a few modifications to your code so that it works with all of your test images. There are a few false positives that you may have to filter based on the HSV color range for green (since your target is always a shade of green). Alternatively, you can use the fact that one of the child contours of your ROI contour will have an area around 0.4 or more times that of the outer contour. Here are the modifications:
Used DoG (difference of Gaussians) for thresholding useful contours
Changed the arcLength multiplier to 0.05 instead of 0.01, as the rectangle corners are not smooth
Used cv2.RETR_CCOMP to get a 2-level hierarchy
Moved approxPolyDP inside the area/aspect-ratio check to make it more efficient
Changed the contour area filter to 600 to catch the ROI in all test images
Removed a little bit of unnecessary code
Check with all the other test images that you may have and modify the parameters accordingly.
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread("/path/to/your_image")
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# difference of Gaussians (DoG) to isolate useful edges
gw, gs, gw1, gs1, gw2, gs2 = (3, 1.0, 7, 3.0, 3, 2.0)
img_blur = cv2.GaussianBlur(gray, (gw, gw), gs)
g1 = cv2.GaussianBlur(img_blur, (gw1, gw1), gs1)
g2 = cv2.GaussianBlur(img_blur, (gw2, gw2), gs2)
ret, thg = cv2.threshold(g2 - g1, 127, 255, cv2.THRESH_BINARY)

# RETR_CCOMP gives a 2-level hierarchy: outer contours and their children
contours, hier = cv2.findContours(thg, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

img_cpy = img.copy()
width = 0
height = 0
start_x = 0
start_y = 0
end_x = 0
end_y = 0
for i in range(len(contours)):
    # skip contours that have no child contour
    if hier[0][i][2] == -1:
        continue
    x, y, w, h = cv2.boundingRect(contours[i])
    a = w * h
    aspectRatio = float(w) / h
    if aspectRatio >= 2.5 and a > 600:
        approx = cv2.approxPolyDP(contours[i], 0.05 * cv2.arcLength(contours[i], True), True)
        if len(approx) == 4 and x > 15:
            width = w
            height = h
            start_x = x
            start_y = y
            end_x = start_x + width
            end_y = start_y + height
            cv2.rectangle(img_cpy, (start_x, start_y), (end_x, end_y), (0, 0, 255), 3)
            cv2.putText(img_cpy, "rectangle " + str(x) + " , " + str(y - 5), (x, y - 5), cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))

plt.imshow(img_cpy)
plt.show()
print("start", start_x, start_y)
print("end", end_x, end_y)

Crop segmented fruit inside a box to get its diameter in pixels

After having segmented my lemons successfully, I would like to get their size in pixels and then convert this value to millimeters. I'm reading a thesis where some guys did this, but with strawberries. The first step was to crop the segmented strawberries to a rectangle:
Image (b) was called the 'minimum rectangle'. According to the authors, it is built from the extreme values of the region:
- the highest point
- the extreme left point
- the lowest point
- the extreme right point of the region of interest.
Once this is done, the width of the rectangle is measured, which gives the diameter of the strawberry in pixels.
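For illustration, a minimal sketch of that construction, assuming mask is a binary image of the segmented fruit (nonzero inside the region):
import numpy as np

# rows and columns that contain at least one foreground pixel
ys, xs = np.nonzero(mask)
top, bottom = ys.min(), ys.max()   # highest and lowest points
left, right = xs.min(), xs.max()   # extreme left and right points

width_px = right - left + 1        # the diameter measurement in pixels
height_px = bottom - top + 1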
In my case this is my input image:
And this is my desired output:
I'm programming in Python with OpenCV. I would like to crop my input image and then find the minimum rectangle, whose width will give me the diameter of the lemon in pixels.
According to the thesis, to convert the measurement in pixels to a real-world measurement in millimeters, I should take a photograph of a rectangle with a 3 cm side under the same conditions as the lemon images. Then I should segment this rectangle, find its minimum rectangle as in the image above, and get its width in pixels. With a rectangle of known size, they got e.g. 176 pixels of width. This way they obtained:
1 mm = 176/30 = 5.87 pixels
With this information I would like to compute the width of my lemons, first in pixels, then convert it to millimeters. If you can do it, please assume that I took a photograph of a known figure with a 3 cm side, the same as in the thesis. At the moment I can't get the minimum rectangle because I don't know how to obtain it, which is why I'm asking for your help.
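For the conversion itself, a minimal sketch using the scale from the thesis (the 176/30 calibration and the measured width are assumed values; you would recompute them from your own reference photo):
# calibration from the reference figure: 176 px across 30 mm
pixels_per_mm = 176 / 30           # about 5.87 px/mm

lemon_width_px = 412               # hypothetical measured width in pixels
lemon_width_mm = lemon_width_px / pixels_per_mm
print("diameter: {:.1f} mm".format(lemon_width_mm))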
I would appreciate any ideas or suggestions you may have. Thanks so much.
Once you have the thresholded image (mask) of your blob of interest (the lemon), it is very straightforward to get its (rotated) minimum area rectangle or its bounding rectangle. Use the cv2.minAreaRect function to get the former or the cv2.boundingRect function to get the latter. In both cases you need to compute the contours of the binary mask, get the outer and biggest contour, and pass that to either function.
Let's see an example for getting both:
import cv2
import numpy as np

# image path
path = "C://opencvImages//"
fileName = "TAkY2.png"

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Thresholding:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
This is just to get the binary mask, you already have this. This is the result:
Now, get the contours. And just to draw some results, prepare a couple of deep copies of the input that we will use to check out things:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Deep copies of the input image to draw results:
minRectImage = inputImage.copy()
polyRectImage = inputImage.copy()
Now, get the contours and filter them by a minimum area (minArea) value. You want to just keep the biggest contour - that's the lemon perimeter:
# Look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        # Get contour area:
        contourArea = cv2.contourArea(c)
        # Set minimum area threshold:
        minArea = 1000
        # Look for the largest contour:
        if contourArea > minArea:
            # Option 1: Get the minimum area bounding rectangle
            # for this contour:
            boundingRectangle = cv2.minAreaRect(c)
            # Get the rectangle points:
            rectanglePoints = cv2.boxPoints(boundingRectangle)
            # Convert float array to int array:
            rectanglePoints = np.intp(rectanglePoints)
            # Draw the min area rectangle:
            cv2.drawContours(minRectImage, [rectanglePoints], 0, (0, 0, 255), 2)
            cv2.imshow("minAreaRect", minRectImage)
            cv2.waitKey(0)
This portion of code gets you these results. Note that this rectangle is rotated to encompass the minimum area of the contour, just as if you were actually measuring the lemon with a caliper:
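Since the goal is a diameter in pixels, note that cv2.minAreaRect already returns the rectangle's size; a small sketch of reading it out (variable names follow the loop above):
# cv2.minAreaRect returns ((center_x, center_y), (width, height), angle)
center, (rectWidth, rectHeight), angle = boundingRectangle
print("size in pixels: {:.1f} x {:.1f}".format(rectWidth, rectHeight))
# e.g. take the smaller side as an estimate of the fruit's diameter
diameter_px = min(rectWidth, rectHeight)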
You can also get the position of the four corners of this rectangle. Still, inside the loop, we have the following bit of code:
# Draw the corner points:
for p in rectanglePoints:
    cv2.circle(minRectImage, (p[0], p[1]), 3, (0, 255, 0), -1)
cv2.imshow("minAreaRect Points", minRectImage)
cv2.waitKey(0)
These are the corners of the min area rectangle:
You might or might not like this result. You might instead be looking for the bounding rectangle that is not rotated. In that case you can use cv2.boundingRect, but first you need to approximate the contour to a polygon-based set of points. This is the approach, continuing from the last line of code:
# Option 2: Approximate the contour to a polygon:
contoursPoly = cv2.approxPolyDP(c, 3, True)

# Convert the polygon to a bounding rectangle:
boundRect = cv2.boundingRect(contoursPoly)

# Set the rectangle corners (x + width and y + height give the bottom-right corner):
rectangleX = boundRect[0]
rectangleY = boundRect[1]
rectangleWidth = boundRect[0] + boundRect[2]
rectangleHeight = boundRect[1] + boundRect[3]

# Draw the rectangle:
cv2.rectangle(polyRectImage, (int(rectangleX), int(rectangleY)),
              (int(rectangleWidth), int(rectangleHeight)), (0, 255, 0), 2)
cv2.imshow("Poly Rectangle", polyRectImage)
cv2.imwrite(path + "polyRectImage.png", polyRectImage)
cv2.waitKey(0)
This is the result:
Edit:
This is the bit that actually crops the lemon from the last image:
# Crop the ROI:
croppedImg = inputImage[rectangleY:rectangleHeight, rectangleX:rectangleWidth]
This is the final output:

Drawing a true filled circle on an image

I have an image which I define like the following:
img = np.zeros((474, 474))
I would like to draw true filled circles and not polygonal approximations of circles on this image at different coordinates as centre and of a fixed radius. For example, I want to draw two circles with centres (100,200) and (150,372) with radius 2 pixels. What I am expecting is that after plotting the circles, the entries of the original image img should change to all ones where the circle is present.
I tried the OpenCV cv.circle module as well as the skimage.draw.circle module, but they generate a polygonal approximation of a circle.
I was also trying the following in matplotlib but I don't seem to understand how to plot it on my image img.
Any help would be appreciated.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

img = np.zeros((474, 474))
fig = plt.figure()
ax = fig.add_subplot(111)
centers = [(100, 200), (150, 372)]
for i in range(len(centers)):
    # the patch is created here but never added to the axes or rasterized into img
    Circle((centers[i][0], centers[i][1]), radius=2)
You can draw filled circles in the img with cv2.circle, passing -1 as the thickness:
import cv2
import numpy as np
img = np.zeros([474, 474])
cv2.circle(img, (100,100), 5, 255, -1)
cv2.circle(img, (200,200), 30, 255, -1)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
When drawing a circle in OpenCV, there are several parameters that can be chosen; one of them defines the type of the circle boundary. There are 4 types:
Filled
4-connected line
8-connected line
antialiased line
You can see the different effects in the following image (in the same order):
Example code (in C++; pass FILLED as the thickness to fill the circle, or LINE_4/LINE_8/LINE_AA as the lineType for the boundary styles):
circle(src, cv::Point(300, 300), 10, Scalar(0, 0, 255), FILLED, LINE_8);
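In Python the same options apply; a small sketch drawing the four variants side by side (positions and radii here are arbitrary):
import numpy as np
import cv2

img = np.zeros((200, 800, 3), dtype=np.uint8)
red = (0, 0, 255)

cv2.circle(img, (100, 100), 40, red, cv2.FILLED)     # filled
cv2.circle(img, (300, 100), 40, red, 1, cv2.LINE_4)  # 4-connected line
cv2.circle(img, (500, 100), 40, red, 1, cv2.LINE_8)  # 8-connected line
cv2.circle(img, (700, 100), 40, red, 1, cv2.LINE_AA) # antialiased line

cv2.imshow("line types", img)
cv2.waitKey(0)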

Hough transform not detecting correct circle

I'm working on a project in which I have to detect traffic lights (circles, obviously). For now I am working with a sample image I picked up from a spot; however, after all my efforts I can't get the code to detect the proper circle (the light).
Here is the code:
# import the necessary packages
import numpy as np
import cv2

image = cv2.imread('circleTestsmall.png')
output = image.copy()

# Apply Gaussian Blur to smooth the image
blur = cv2.GaussianBlur(image, (9, 9), 0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", output)
cv2.imshow('Blur', blur)
cv2.waitKey(0)
The image in which I want to detect the circle:
This is what the output image looks like:
I tried playing with the Gaussian blur radius values and the minDist parameter in the Hough transform, but didn't have much success.
Can anybody point me in the right direction?
P.S. Some off-topic questions, but crucial ones for my project:
1. My computer takes about 6-7 seconds to show the final image. Is it my code or my computer? My specs are: Intel i3 M350 2.6 GHz (first gen), 6 GB RAM, Intel HD Graphics 1000 1625 MB.
2. Will the Hough transform work on a binary thresholded image directly?
3. Will this code run fast enough on a Raspberry Pi 3 to be realtime? (I have to mount it on a moving autonomous robot.)
Thank you!
First of all you should restrict your parameters a bit.
Please refer to: http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghcircles
At least set reasonable values for the min and max radius. Try to find that one particular circle first. If you succeed, increase your radius tolerance.
The Hough transform is a brute-force method: it will try every possible radius for every edge pixel in the image. That's why it is not very suitable for real-time applications, especially if you do not provide proper parameters and input. You have no radius limits at the moment, so you will calculate hundreds, if not thousands, of circles for every pixel...
In your case the traffic light also is not very round, so the accumulated result won't be very good. Try finding highly saturated, bright, compact blobs of a reasonable size instead. It should be faster and more robust.
You can further reduce processing time if you restrict the image size. I guess you can assume that the traffic light will always be in the upper half of your image. So omit the lower half. Traffic lights will always be green, red or yellow. Remove everything that is not of that color... I think you get what I mean...
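As a sketch of the radius restriction in the question's own call (the 10-50 pixel range and the param values are assumptions to tune against your image):
# limit the search to plausible traffic light sizes
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200,
                           param1=100, param2=30,
                           minRadius=10, maxRadius=50)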
I think that you should first perform a color segmentation based on the stoplight colors. It will tremendously reduce the ROI. Then you can apply the Hough Transform on the ROI edges only (because you want the contour).
Another restriction: only accept circles where the inside color is homogeneous. This would throw out all the false hits in the above example.
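A minimal sketch of that color segmentation step for a red light (the HSV bounds are assumptions to tune; red wraps around the hue axis, hence the two ranges):
import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# assumed HSV bounds for a bright red light
mask1 = cv2.inRange(hsv, (0, 120, 150), (10, 255, 255))
mask2 = cv2.inRange(hsv, (170, 120, 150), (180, 255, 255))
mask = cv2.bitwise_or(mask1, mask2)

# keep only the segmented region, then search for circles there
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
roi = cv2.bitwise_and(gray, gray, mask=mask)
circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, 1.2, 200,
                           minRadius=10, maxRadius=50)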
