I have created a black and white mask using various OpenCV filters. There are four circles that are clearly visible.
I am trying to outline these circles using HoughCircles, but it gives many false positives and generally bad results:
circles = cv2.HoughCircles(combined, cv.CV_HOUGH_GRADIENT, 1, 300, np.array([]), 10, 30, 60, 300)
How can I properly detect the circular shapes in the black and white image?
Here is runnable code that can be used with the black and white image:
import numpy as np
import cv2
import cv
image = cv2.imread("image.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv.CV_HOUGH_GRADIENT, 1, 300, np.array([]), 10, 30, 60, 300)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(image, (i[0], i[1]), i[2], (0, 255, 0), 1)
        cv2.circle(image, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow("thing", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
First of all, if I'm not mistaken, the Hough Transform for circles expects a hollow circle, not a filled one. This means you need to extract only the boundary/perimeter of the circles before applying the Hough Transform. OpenCV's findContours and arcLength functions will help you find the perimeters.
Second, unfortunately, in my experience the Hough Transform is very sensitive to variations in the circle shape, meaning that if the shape you're trying to detect is only "almost" a circle, it might not be able to detect it.
My advice is to make your objects "rounder" by applying the closing morphological operation to your binary image with a disk-shaped structuring element. Then extract the perimeter of the objects in the image, and only then apply the Hough Transform. Hopefully, this will give you good enough results.
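A minimal sketch of that pipeline, assuming a modern OpenCV (3.x/4.x) API; the file name, kernel size, and Hough parameters are placeholders you would need to tune:

import cv2
import numpy as np

# Load the binary mask (file name assumed):
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Round the blobs: closing with a disk-shaped structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep only the boundaries: draw the contours onto a blank image.
# (OpenCV 3.x returns three values from findContours.)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
outlines = np.zeros_like(closed)
cv2.drawContours(outlines, contours, -1, 255, 1)

# Run the Hough transform on the hollow outlines.
circles = cv2.HoughCircles(outlines, cv2.HOUGH_GRADIENT, dp=1, minDist=300,
                           param1=30, param2=30, minRadius=60, maxRadius=300)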
Alternatively, you can try to detect circles using the RANSAC algorithm. Here is an implementation for detecting lines, but you can adjust it for circles - just choose 3 points randomly (instead of 2) and define the circle that goes through them. Next, find all the points that lie near that circle (these are called the inlier points): they are the points that satisfy
|sqrt((x - x0)^2 + (y - y0)^2) - r| < margin
where (x, y) is the point, (x0, y0) is the center of the circle, r is its radius, and margin is a parameter you'll have to tune.
The rest of the algorithm is the same.
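For illustration, a rough sketch of that circle-RANSAC loop, where points is an N x 2 NumPy array of perimeter pixel coordinates and the iteration count and margin are placeholders:

import numpy as np

def circle_from_3_points(p1, p2, p3):
    # Circumcenter of the three points (None if they are nearly collinear).
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), np.hypot(ax - ux, ay - uy)

def ransac_circle(points, iterations=1000, margin=2.0):
    best_circle, best_inliers = None, np.empty((0, 2))
    for _ in range(iterations):
        sample = points[np.random.choice(len(points), 3, replace=False)]
        fit = circle_from_3_points(*sample)
        if fit is None:
            continue
        (x0, y0), r = fit
        # Inliers: points whose distance to the circle is within the margin.
        dist = np.abs(np.hypot(points[:, 0] - x0, points[:, 1] - y0) - r)
        inliers = points[dist < margin]
        if len(inliers) > len(best_inliers):
            best_circle, best_inliers = ((x0, y0), r), inliers
    return best_circle, best_inliers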
Good luck!
Related
I am trying to make a program that will accept an image of a pool table and detect balls with OpenCV and Python. My algorithm right now is basically: 1) grayscaling the image; 2) applying median blur; 3) applying Gaussian blur; 4) applying cv2.HoughCircles() to detect circles.
Unfortunately, cv2.HoughCircles() detects random circles in the image:
When I called cv2.Canny() to see the edges used in cv2.HoughCircles, the results included a lot of noise (due to the color/texture of the carpet):
How should I remove the falsely detected circles? Changing the value of "param2" to increase the accumulator threshold causes the black eight ball to not be detected. I'm thinking of either: 1) applying a mask to filter out everything but the pool table and then determining an ROI from the mask (a rough sketch of this idea follows the code below); 2) applying a rectangle detection algorithm to detect the pool table; or 3) applying fiducials to try to determine an ROI from the image.
Here is the original image.
The code is below:
import cv2
img = cv2.imread("Assets/Setup.jpg", 1)
grayscale_frame = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
median_blur_frame = cv2.medianBlur(grayscale_frame, 5)
gaussian_blur_frame = cv2.GaussianBlur(median_blur_frame, (5, 5), 0)
circles = cv2.HoughCircles(gaussian_blur_frame, cv2.HOUGH_GRADIENT, dp=1, minDist=45, param1=75, param2=14,
minRadius=40, maxRadius=45)
for circle in circles[0]:
    print(circle)
    cv2.circle(img, (int(circle[0]), int(circle[1])), int(circle[2]), (255, 255, 255), 5)
cv2.imshow("Frame", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
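For reference, here is a minimal sketch of the masking idea from option 1; the HSV range for the felt is a guess that would need tuning for the actual table color:

import cv2
import numpy as np

img = cv2.imread("Assets/Setup.jpg")

# Mask the felt by thresholding on hue in HSV space.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
felt = cv2.inRange(hsv, (40, 40, 40), (90, 255, 255))  # greenish hues (guess)

# The largest connected blob of felt should be the table surface.
contours, _ = cv2.findContours(felt, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
table = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(table)

# Restrict the circle search to the table's bounding box.
roi = img[y:y + h, x:x + w]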
Original question
I have about 80-100 images such as (A). Each image is composed of shapes that were filled with black color after marking the outline in ImageJ. I need to extract each individual shape as in (B). I then need to find the widths of the longest axis as in (C).
I then need to calculate all the possible widths at regular points within the black shape and return a .csv file containing these values.
I have been doing this manually, and there must be a quicker way to do it. Any idea how I can go about this?
Partial solution as per Epsi's answer
import matplotlib.pyplot as plt
import cv2
import numpy as np
image = cv2.imread("Desktop/Analysis/NormalCol0.tif")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Find contours, fit a rotated rectangle to each, obtain its four vertices, and draw it
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
lengths = []
for i in contours:
    rect = cv2.minAreaRect(i)
    x_y, width_height, angle_of_rotation = rect
    # record the rectangle's second side as the shape's length
    height = width_height[1]
    print(height)
    lengths.append(height)
    box = np.int0(cv2.boxPoints(rect))
    # draw the rotated rectangle on the image
    cv2.drawContours(image, [box], 0, (36, 255, 12), 3)  # OR
    # cv2.polylines(image, [box], True, (36, 255, 12), 3)
plt.imshow(image)
cv2.imwrite("output.png", image)
Output:
4031.0
20.877727508544922
51.598445892333984
23.852108001708984
21.0
21.21320343017578
19.677398681640625
43.0
I am able to get the length of each shape. Next I need to figure out how to get the width of the shape at regular points along its length.
As I'm not that good with Python, I won't post code, but I will post links to explanations and documentation.
How I would do it is as follows:
Invert the image, so that all the white becomes black and black becomes white: https://www.delftstack.com/howto/python/opencv-invert-image/#invert-images-using-numpy.invert-method-in-python
Use findContours to find all the (now) white contours: https://pythonexamples.org/python-opencv-cv2-find-contours-in-image/
Use minAreaRect to create bounding boxes around the contours, positioned so that the area of each box is as small as possible: https://theailearner.com/tag/cv2-minarearect/
From each bounding box, take the longest side; this will represent the length of your contour.
You can also get the angle of rotation from the bounding boxes.
I would like to get the 4 corners of a page.
The steps I took:
Converted to grayscale
Applied a threshold to the image
Applied Canny for detecting edges
After that I used findContours
Drew the approximated polygon for each contour; my assumption was that the relevant polygon must have 4 vertices.
But along the way I found out that my solution sometimes misses;
apparently it is not robust enough (probably a bit naive).
I think some of the reasons for these paper-corner detection failures are:
The thresholds for Canny detection are picked manually (a common heuristic for choosing them automatically is sketched below).
The same goes for the epsilon value for approxPolyDP.
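One way to avoid hand-picked Canny thresholds is to derive them from the image's median intensity. A sketch of that common heuristic (the 0.33 sigma is a conventional default, not a tuned value):

import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    # Derive lower/upper thresholds from the median intensity.
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)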
My Code
import cv2
import numpy as np
image = cv2.imread('page1.jpg')
descalingFactor = 3
imgheight, imgwidth = image.shape[:2]
resizedImg = cv2.resize(image, (int(imgwidth / descalingFactor), int(imgheight / descalingFactor)),
interpolation=cv2.INTER_AREA)
cv2.imshow(winname="original", mat=resizedImg)
cv2.waitKey()
gray = cv2.cvtColor(resizedImg, cv2.COLOR_BGR2GRAY)
cv2.imshow(winname="gray", mat=gray)
cv2.waitKey()
img_blur = cv2.GaussianBlur(gray, (5, 5), 1)
cv2.imshow(winname="blur", mat=img_blur)
cv2.waitKey()
canny = cv2.Canny(img_blur,
threshold1=120,
threshold2=255,
edges=1)
cv2.imshow(winname="Canny", mat=canny)
cv2.waitKey()
contours, _ = cv2.findContours(image=canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for idx, cnt in enumerate(contours):
    # print("Contour #", idx)
    # print("Contour #", idx, " len(cnt): ", len(cnt))
    cv2.drawContours(image=resizedImg, contours=[cnt], contourIdx=0, color=(255, 0, 0), thickness=3)
    cv2.imshow(winname="contour" + str(idx), mat=resizedImg)
    conv = cv2.convexHull(cnt)
    epsilon = 0.1 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    cv2.drawContours(resizedImg, [approx], 0, (0, 0, 255), 3)
    cv2.waitKey(0)
    if len(approx) == 4:
        print("found the paper!!")
        break
pts = np.squeeze(approx)
Another approach
I was wondering: wouldn't it be a better approach to fit a quadrilateral (a polygon with 4 vertices) to the contour, and then check whether the area difference between the polygon and the contour is below a specified threshold?
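A rough sketch of that check, reusing cnt from the loop above and using approxPolyDP as a stand-in for a true quadrilateral fit (the epsilon factor and the 10% threshold are guesses):

approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
if len(approx) == 4:
    # relative area difference between the fitted quadrilateral and the contour
    area_diff = abs(cv2.contourArea(approx) - cv2.contourArea(cnt))
    if area_diff / max(cv2.contourArea(cnt), 1) < 0.1:
        print("quadrilateral fits the contour well")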
Can somebody please suggest a more robust solution (demonstrating it with code)? Thank you.
The images:
image1: https://ibb.co/K2SqLwZ
image2: https://ibb.co/mbGFsNp
image3: https://ibb.co/m6QKkzw
image4: https://ibb.co/xh7W41V
As fmw42 suggested, you need to restrict the problem more. There are way too many variables to build a "works under all circumstances" solution. A possible, very basic solution would be to try to get the convex hull of the page.
Another, more robust approach would be to search for the four vertices of the corners and extrapolate lines to approximate the paper edges. That way you don't need perfect, clean edges, because you would reconstruct them using the four (maybe even three) corners.
To find the vertices you can run a Hough line detector or a corner detector on the edges and get at least four discernible clusters of end/starting points. From those you can average each cluster to get one (x, y) coordinate pair per corner and extrapolate lines through those points.
That solution would be hypothetical and pretty laborious for a Stack Overflow question, so let me try the first proposal - detection via convex hull. Here are the steps:
Threshold the input image
Get edges from the input
Get the external contours of the edges using a minimum area filter
Get the convex hull of the filtered image
Get the corners of the convex hull
Let's see the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "img2.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to grayscale:
grayInput = cv2.cvtColor(inputImageCopy, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayInput, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The first step is to get a binary image - very straightforward. This is the result if you threshold via Otsu:
It is never a good idea to try to segment an object from a textured (or high-frequency) background; however, in this case the paper is discernible in the image histogram and the binary image is reasonably good. Let's try to detect edges on this image. I'm applying Canny with the same parameters as your code:
# Get edges:
cannyImage = cv2.Canny(binaryImage, threshold1=120, threshold2=255, edges=1)
Which produces this:
Seems good enough; the target edges are mostly present. Let's detect contours. The idea is to set an area filter, because the target contour is the biggest amongst the rest. I (heuristically) set a minimum area of 100000 pixels. Once the target contour is found, I get its convex hull, like this:
# Find the EXTERNAL contours on the binary image:
contours, hierarchy = cv2.findContours(cannyImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the corners:
cornerList = []
# Look for the outer bounding boxes (no children):
for i, c in enumerate(contours):
    # Approximate the contour to a polygon:
    contoursPoly = cv2.approxPolyDP(c, 3, True)
    # Convert the polygon to a bounding rectangle:
    boundRect = cv2.boundingRect(contoursPoly)
    # Get the bounding rect's data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]
    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight
    # Set a min area threshold
    minArea = 100000
    # Filter blobs by area:
    if rectArea > minArea:
        # Get the convex hull for the target contour:
        hull = cv2.convexHull(c)
        # (Optional) Draw the hull:
        color = (0, 0, 255)
        cv2.polylines(inputImageCopy, [hull], True, color, 2)
You'll notice I've prepared a list beforehand (cornerList) in which I'll (hopefully) store all the corners. The last two lines of the previous snippet are optional: they draw the convex hull via cv2.polylines. This would be the resulting image:
Still inside the loop, after we compute the convex hull, we will get the corners via cv2.goodFeaturesToTrack, which implements a Corner Detector. The function receives a binary image, so we need to prepare a black image with the convex hull points drawn in white:
# Create image for good features to track:
(height, width) = cannyImage.shape[:2]
# Black image same size as original input:
hullImg = np.zeros((height, width), dtype=np.uint8)
# Draw the points:
cv2.drawContours(hullImg, [hull], 0, 255, 2)
cv2.imshow("hullImg", hullImg)
cv2.waitKey(0)
This is the image:
Now we must set up the corner detector. It needs the number of corners you are looking for, a minimum "quality" parameter that discards poor points detected as "corners", and a minimum distance between corners. Check out the documentation for more parameters. Let's set up the detector; it will return an array of points where it detected a corner. After we get this array, we will store each point in our cornerList, like this:
# Set the corner detection:
maxCorners = 4
qualityLevel = 0.01
minDistance = int(max(height, width) / maxCorners)
# Get the corners:
corners = cv2.goodFeaturesToTrack(hullImg, maxCorners, qualityLevel, minDistance)
corners = np.int0(corners)
# Loop through the corner array and store/draw the corners:
for c in corners:
    # Flatten the array of corner points:
    (x, y) = c.ravel()
    # Store the corner point in the list:
    cornerList.append((x, y))
    # (Optional) Draw the corner points:
    cv2.circle(inputImageCopy, (x, y), 5, 255, 5)
cv2.imshow("Corners", inputImageCopy)
cv2.waitKey(0)
Additionally, you can draw the corners as circles, which yields this image:
This is the same algorithm tested on your third image:
I want to detect a circle in a given image, but it just doesn't work the way I want it to. I implemented a circle detection algorithm which works on some images with a circle, but not on the one I want. I tweaked the parameters but couldn't get it to work.
import cv2
import numpy as np
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread("damn-circle.png")
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect circles in the image
blur = cv2.GaussianBlur(gray,(5,5),0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 2, 120)
cv2.imshow("output", np.hstack([blur]))
cv2.waitKey(0)
print(circles)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
cv2.imshow("output", np.hstack([output]))
cv2.waitKey(0)
Your code is almost perfect. It's just that the method CV_HOUGH_GRADIENT sits inside a package, cv (at least for OpenCV version 2.4.13). I changed that one line to reference the package and it worked well. If you're still not getting the right result on this simple image, you'll have to state the specific versions of OpenCV and NumPy you're using. Change your line to be like this:
circles = cv2.HoughCircles(blur, cv2.cv.CV_HOUGH_GRADIENT, 2, 120)
You should get a nice result. At least I just did. (Image with the found Hough circle shown.)
Edited:
Ah, I didn't understand which image the question was about. I changed several parameters, particularly the Canny detector param, the radius min/max, and the accumulator resolution. I think these params will find what you want:
circles = cv2.HoughCircles(blur, method = cv2.cv.CV_HOUGH_GRADIENT, minDist = 90 , dp = 1, param1 = 3, param2 = 12 , minRadius = 30, maxRadius = 50)
My found image now looks like this: (another image with the found circle)
I am doing pupil detection for my school project. It's my first time working with OpenCV and Python, using Python version 3.4.2 and OpenCV 3.1.0.
I am using the Raspberry Pi NoIR camera, and I am getting good images.
But I can't detect the pupil nicely (because of glint, lashes, and shadows).
I referred to some code on the web, and the following is part of that code.
...
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    cv2.imshow("image", image)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    retval, thresholded = cv2.threshold(gray, 80, 255, 0)
    cv2.imshow("threshold", thresholded)
    closed = cv2.erode(cv2.dilate(thresholded, kernel, iterations=1), kernel, iterations=1)
    #closed = cv2.morphologyEx(close, cv2.MORPH_CLOSE, kernel)
    cv2.imshow("closed", closed)
    thresholded, contours, hierarchy = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    drawing = np.copy(image)
    cv2.drawContours(drawing, contours, -1, (255, 0, 0), 2)
    for contour in contours:
        area = cv2.contourArea(contour)
        bounding_box = cv2.boundingRect(contour)
        extend = area / (bounding_box[2] * bounding_box[3])
        # reject the contours with a large extent
        if extend > 0.8:
            continue
        # calculate the contour center and draw a dot there
        m = cv2.moments(contour)
        if m['m00'] != 0:
            center = (int(m['m10'] / m['m00']), int(m['m01'] / m['m00']))
            cv2.circle(drawing, center, 3, (0, 255, 0), -1)
        # fit an ellipse around the contour and draw it into the image
        try:
            ellipse = cv2.fitEllipse(contour)
            cv2.ellipse(drawing, box=ellipse, color=(0, 255, 0))
        except:
            pass
    # show the frame
    cv2.imshow("Drawing", drawing)
...
Input image:
Output image:
How can I remove the parts of the image that are not related to the pupil, as shown above?
In addition to answers, any hints are also welcome.
There are several things you can do. How well they work depends on how much variation there is in the images you want to apply the algorithm on. You could make several assumptions and then discard all the candidates that do not meet them.
remove small detections
At first I would consider removing candidates that are too small by adding this check at the beginning of your loop:
if area < 100:
    continue
The threshold was chosen arbitrarily and worked well for this particular image: it removed almost all of the false detections, and only the biggest one remains. But you have to check it against your other images and adapt it to your needs.
remove detections that are not round
Another assumption you can make is that pupils are usually round, so you can remove every detection that is not "round" enough. One simple measure of roundness is the ratio of the squared circumference to the area.
import math

circumference = cv2.arcLength(contour, True)
circularity = circumference ** 2 / (4 * math.pi * area)
The circularity is about 2.72 for the shadow on the right side and 1.31 for the pupil.
improving roundness
You'll notice that the contour of your pupil is not perfectly round because of reflections. You can improve this by computing the convex hull of the contours.
contour = cv2.convexHull(contour)
If you do that before computing the area and circumference, you get circularity values of 1.01 and 1.37. (A perfect circle has a circularity of 1.) This means the defect from the reflection was almost perfectly repaired. This might not be necessary in this case, but it could be useful in cases with more reflections.
Some years ago I worked on a very similar project. To everything added above, I can suggest one little trick.
As you can see, you have two reflections from any light source: one on the surface of the eye, and a second on the surface of the glasses.
If you remove the glasses (if that's possible), you will have a very bright reflection almost at the center of the pupil. In that case you can ignore all objects without this bright reflection. This will also help you find the position of the eye in the space near the camera.
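A minimal sketch of that filter, reusing gray and contour from the loop in the question; the intensity threshold of 220 is a guess to tune for your camera:

# Keep only candidate contours that contain a bright glint.
mask = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(mask, [contour], -1, 255, -1)  # filled contour as a mask
# Find the brightest pixel inside the contour:
_, max_val, _, _ = cv2.minMaxLoc(gray, mask=mask)
if max_val < 220:
    continue  # no glint, so this is probably not the pupil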