I have the following image in which I've detected the borders, representing 7 circles. In my opinion, it is fairly easy to identify the circles in it, but I am having trouble detecting all of them with the OpenCV Hough transform. Here is what I've tried:
import cv2

# Load the image in grayscale
img = cv2.imread('sample.png', 0)
# Detect the circle borders
edges = cv2.Canny(img, 20, 120)
# Hough circle transform on the edge map
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100,
                           param2=40, minRadius=0, maxRadius=250)
I either get the central circle, the outer one, or a lot of circles, depending on the parameters I pass to the function. Do you guys have a set of parameters that would output all the circles?
Thanks in advance
Solved it with this example from scikit-image, adjusting the Canny thresholds and the radii range to match the posted image.
Thanks to @barny
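For reference, the relevant part of that scikit-image example, adapted, looks roughly like this (the thresholds, radii range, and filename are stand-ins for the values I actually used):

import numpy as np
from skimage import io, feature, transform

# Load as grayscale; 'sample.png' stands in for the posted image
image = io.imread('sample.png', as_gray=True)

# Canny thresholds adjusted to the image (values are illustrative)
edges = feature.canny(image, sigma=2, low_threshold=0.1, high_threshold=0.4)

# Search a radii range wide enough to cover all 7 circles (also an assumption)
hough_radii = np.arange(20, 250, 5)
hough_res = transform.hough_circle(edges, hough_radii)

# Keep the 7 strongest accumulator peaks, one per expected circle
accums, cx, cy, radii = transform.hough_circle_peaks(
    hough_res, hough_radii, total_num_peaks=7)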
I'm working on a project where I have to develop an automated microscope. I need an algorithm that can identify circles. I was able to get an algorithm that can deal with noise and find circles, but if the experimental setup changes a bit it no longer works without tweaking the parameters (not what I want).
I have experimented with multiple approaches, and the one that works most of the time is applying CLAHE and then blurring the image. After this I run the result through Hough circles. I will show an example. This is the raw data that doesn't work with the current algorithm:
After applying the CLAHE I get:
The Hough circles can only find one circle:
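For reference, a minimal sketch of the CLAHE-then-blur-then-Hough pipeline I described (the filename and all parameter values here stand in for my actual ones):

import cv2

gray = cv2.imread('raw.png', 0)  # 'raw.png' stands in for the raw data image

# Contrast-limited adaptive histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# Blur to suppress noise before the Hough transform
blur = cv2.GaussianBlur(enhanced, (9, 9), 0)

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)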
Another approach is using histogram equalization:
This gives clearer circles, but Hough circles doesn't work on it at all. Sometimes I do some gain division to remove the background and then apply only the histogram equalization:
Equalized histogram after doing gain division:
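For reference, a minimal sketch of that preprocessing (the blur sigma and filename are stand-ins for my actual values):

import cv2

gray = cv2.imread('raw.png', 0)

# Gain division: divide by a heavily blurred copy to flatten the background
background = cv2.GaussianBlur(gray, (0, 0), sigmaX=51)
divided = cv2.divide(gray, background, scale=255)

# Global histogram equalization on the background-free image
equalized = cv2.equalizeHist(divided)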
Histogram equalization always improves contrast, so my idea was to switch from CLAHE to it, but I haven't been able to make that work.
This site https://fiveko.com/online-tools/hough-circle-detection-demo/ works on all my data after histogram equalization.
Can someone provide a way to detect circles in the equalized-histogram image, without it creating random circles in the background due to the noise?
The best I could do is tweak GaussianBlur and HoughCircles to work for the one equalized-histogram image you provided. I hope it is more general than it seems and will help you somehow.
import cv2

gray = cv2.imread('eqhist.png', 0)  # stand-in name for your eq-hist image
blur = cv2.GaussianBlur(gray, (11, 11), 0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=180, param2=17, minRadius=2, maxRadius=50)
My project is a REM sleep detector, and the provided pictures show the contour of my eyelid. As my eye looks in different directions, this contour moves in a distinct way. For lack of a better solution, my first attempt is to draw a grid of ROIs on my video stream; with that in place, I want to use the countNonZero function or blob detection on the ROIs. Depending on which ROIs in the grid change values, movement and direction are detected. (I am sure there is a better way.)
Problem: I cannot specify one or several ROIs of my choice; the function always works only on the entire image. How do I retrieve values from each ROI specifically? The ROIs are set up by means of multiple rectangle functions. The code is in Python. Any help greatly appreciated.
Contour of eyelid:
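A minimal sketch of what I am trying to achieve, using NumPy slices as the ROIs (the grid size and filename are placeholders):

import cv2
import numpy as np

frame = cv2.imread('eyelid.png', 0)  # stand-in for one frame of the stream

rows, cols = 4, 4
h, w = frame.shape[0] // rows, frame.shape[1] // cols

# Count the non-zero pixels inside each grid cell separately
counts = np.zeros((rows, cols), dtype=int)
for r in range(rows):
    for c in range(cols):
        roi = frame[r * h:(r + 1) * h, c * w:(c + 1) * w]
        counts[r, c] = cv2.countNonZero(roi)

Comparing counts between consecutive frames should then tell me which cells changed, and hence the direction of movement.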
EDIT: This is a deeper explanation of a question I asked earlier, which is still not solved for me.
I'm currently trying to write some code that can extract data from some uncommon graphs in a book. I scanned the pages of the book, and using OpenCV I would like to detect some features of the graphs in order to convert them into usable data. In the left graph I'm looking for the height of the "triangles", and in the right graph the distance from the center to the points where the dotted lines intersect the gray area. In both cases I would like to convert these values into numeric data for further use.
For the left graph, I thought of detecting all the individual colors and computing the area of each sector by counting the number of pixels in that color. Once I have the areas of these sectors, I can easily calculate their heights using basic math. The following code snippet shows how far I've gotten with identifying different colors. However, I can't manage to make this work accurately: it always seems to detect some colors of other sectors as well, or not detect all pixels of one sector. I think it has something to do with the boundaries I'm using, but I can't quite figure out how to make them work. Does someone know how I can determine these values?
import numpy as np
import cv2

img = cv2.imread('images/test2.jpg')

# BGR bounds for one sector's color (OpenCV loads images as BGR, not RGB)
lower = np.array([0, 0, 100])
upper = np.array([50, 56, 150])

# Binary mask of pixels whose BGR values fall within [lower, upper]
mask = cv2.inRange(img, lower, upper)
# Keep only the pixels selected by the mask
output = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
For the right graph, I still have no idea how to extract data from it. I thought of identifying the center by detecting all the dotted lines, and then measuring the distance between the center and the points where these dotted lines intersect the gray area. However, I haven't yet figured out how to do this properly, since it sounds quite complex. The following code snippet shows how far I've gotten with the line detection. In this case, too, the detection is far from accurate. Does someone have an idea how to tackle this problem?
import numpy as np
import cv2

# Read the image
img = cv2.imread('test2.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
# Probabilistic Hough transform for line segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=50, maxLineGap=20)
# Draw each detected segment in red on the original image
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('linesDetected.jpg', img)
For the left image, following your approach, try looking at the RGB histogram: the sector colors should show up as significant peaks, which helps if you want to use the relative area of the segments.
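A minimal sketch of what I mean (the filename is a stand-in for your image):

import cv2
import numpy as np

img = cv2.imread('images/test2.jpg')

# One 256-bin histogram per channel; sector colors should show up as peaks
for i, channel in enumerate(('blue', 'green', 'red')):
    hist = cv2.calcHist([img], [i], None, [256], [0, 256]).ravel()
    print(channel, 'strongest peak at intensity', int(np.argmax(hist)))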
Another alternative could be to use Hough Circle Transform, which should work on circle segments. See also here.
For the right image ... let me think ...
You could create an "empty" diagram with no data inside. You know the locations of the circle segments ("cake pieces"). Then you could identify the area where the data is (the dark regions), either by using a gray threshold, an RGB threshold, find-contours, or a watershed / distance transform.
In the end, the idea is to make a boolean overlay between the cleared diagram and the data segments that were found. Then you can identify what share of each circle segment is covered, or, knowing the center, find the farthest covered point from the center.
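A minimal sketch of that overlay idea (the threshold value is a guess, and sector_mask is a hypothetical placeholder for one filled "cake piece" from the empty diagram):

import cv2
import numpy as np

img = cv2.imread('test2.jpg', 0)

# Dark data regions via a gray threshold (value is a guess)
_, data_mask = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)

# Hypothetical mask of one circle segment from the empty diagram
sector_mask = np.zeros_like(data_mask)

# Boolean overlay: pixels that are data AND inside the segment
overlap = cv2.bitwise_and(data_mask, sector_mask)
coverage = cv2.countNonZero(overlap) / max(cv2.countNonZero(sector_mask), 1)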
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify / read all possible words present on the board.
An example image -
Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.
After Perspective transform -
The algorithm that I am using is as follows (a rough sketch of these steps follows the list):
1. Apply adaptive thresholding to the grayscale image of the Scrabble board.
2. Dilate / close the image, find the largest contour in it, then find the convex hull and completely fill the area enclosed by the hull.
3. Find the boundary points (contour) of the resultant image, then apply contour approximation to get the corner points, then apply the perspective transform.
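A minimal sketch of those three steps (the filename and the threshold / kernel parameters are placeholders; this mirrors the description above, not my exact code):

import cv2
import numpy as np

gray = cv2.imread('board.jpg', 0)  # stand-in filename

# 1. Adaptive thresholding
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

# 2. Close, take the largest contour, fill its convex hull
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))
filled = np.zeros_like(gray)
cv2.drawContours(filled, [hull], -1, 255, cv2.FILLED)

# 3. Approximate the filled region's contour to get the corner points
contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
peri = cv2.arcLength(contours[0], True)
corners = cv2.approxPolyDP(contours[0], 0.02 * peri, True)
# With four corners, cv2.getPerspectiveTransform + cv2.warpPerspective rectify the board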
Corner Points found -
This approach works with images like these. But, as you can see, many square boards have a base that is curved at the top and the bottom; sometimes the base is a big circular board. With these images my approach fails. Example images and outputs -
Board with Circular base:
Points found using above approach:
I can post more such problematic images, but this image should give you an idea about the problem that I am dealing with. My question is -
How do I find the rectangular board when a circular board is also present in the image?
Some points I would like to state -
I tried using Hough lines to detect the lines in the image, find the largest vertical lines, and then find their intersections to detect the corner points. Unfortunately, because of the tiles, all the lines seem distorted / disconnected, and hence my attempts failed.
I also tried applying contour approximation to all the contours found in the image (I was assuming that the large rectangle, too, would be a contour), but that approach failed as well.
I have implemented the solution in OpenCV-Python. Since the approach is what matters here, and the question was becoming a tad too long, I only included the illustrative sketch above rather than my full code.
I am willing to share more such problematic images as well, if it is required.
Thank you!
EDIT1
@Silencer's answer has been mighty helpful for identifying letters in the image, but I want to accurately find the placement of the words. Hence, I feel identifying the rows and columns is necessary, and I can do that only once a perspective transform is applied to the board.
I wrote an answer on MSER text detection:
Trying to Plot OpenCV's MSER regions using matplotlib
The code generates the following results on your images.
You can have a try.
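The core of it is OpenCV's MSER detector; a minimal sketch (not the exact code from the linked answer, and the filename is a stand-in):

import cv2

img = cv2.imread('board.jpg', 0)

# MSER fires on stable dark/bright blobs, which printed letters tend to be
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)

# Draw a bounding box around each detected region
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts)
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite('mser_regions.jpg', vis)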
I think @Silencer has already given a quite promising solution.
But regarding the perspective transform: as you mentioned, you have already tried Hough lines to find the largest rectangle, and it fails because of the tiles present.
Given that you have a large image data set, maybe more than 1000 images, you could also give a deep-learning-based approach a shot, where you train a model with the images as input and the corresponding rectangle corner coordinates as output.
I need to detect the different shapes in the image, such as ellipses and circles with a dashed border.
Using OpenCV causes some trouble: e.g., OpenCV detects not one contour but a number of small separated lines (contours).
In the case of an ellipse with a dashed border, I need it to be a single contour, not separate objects.
What would you suggest to solve this problem?
I also need to recognize ellipses (or circles) with a solid border and ellipses (circles) with a dashed border.
Ellipse with solid border:
Ellipse with dashed border:
The Hough transform should not have any trouble with either dashed or solid borders, or a mix thereof. I have used it to detect dashed lines in one of my projects, and ellipses should not be much harder.
OpenCV by default only implements the Hough transform for circles and straight lines but there is a blog entry with code on how to adapt it for elliptical shapes.
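Since OpenCV itself has no built-in Hough transform for ellipses, one concrete option is scikit-image's hough_ellipse; a rough sketch (all parameter values and the filename are guesses):

from skimage import io, feature
from skimage.transform import hough_ellipse

image = io.imread('dashed_ellipse.png', as_gray=True)
edges = feature.canny(image, sigma=2.0)

# Votes are pooled over all edge pixels, so gaps in a dashed border
# merely weaken the accumulator peak instead of breaking detection
result = hough_ellipse(edges, accuracy=20, threshold=100,
                       min_size=30, max_size=120)
result.sort(order='accumulator')
best = result[-1]  # fields: accumulator, yc, xc, a, b, orientation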
The better idea here is to use a specific transform to detect circles and ellipses rather than a general "find contours" call.
I am not well enough acquainted with OpenCV to know whether this sort of detection is built in; it should be.
An internet search for the more generic "image ellipse detection" shows it is not a settled matter, unlike square and rectangle detection, so whatever OpenCV already contains will have to be tuned up.
(The basic idea is that the underlying algorithm creates a new n-dimensional space in which every possible parameterized ellipse for the target image can be represented, with, for example, an axis for each of the x and y center coordinates, the w and h radii, and one for rotation, and then fills in hits in this matrix according to the pixel contrast in the target image.) Sebastian's answer gives the correct name for this approach, which I could not recall: the "Hough transform", and provides the needed links on how it works and how to extend it in OpenCV.