I am trying to detect the Iris of an eye by using HoughCircles in OpenCV and Python. Before doing so, I am converting the image to grayscale and I am applying a Gaussian Blur. However, I am getting disastrous results. I am suspecting that the parameters of my HoughCircles call are wrong, however I can't seem to find any that work. Could it be that HoughCircles is not able to find the right circle on an image or are my arguments just wrong?
My code:
import cv2

# eye is the input image, already converted to grayscale
eye = cv2.GaussianBlur(eye, (5, 5), 0)
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, 1, eye.shape[0] / 2,
                           param1=110, param2=20, minRadius=0, maxRadius=0)
Original Image:
Image after applying HoughCircles:
Thanks to everybody for your help! I ended up dropping the idea of using HoughCircles for this task. Instead, I went with a CDF-based approach as described here: cdf-approach
I think this solution works better since it's more robust to changes in lighting intensity, and it also seems less computationally expensive.
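For reference, this is my rough understanding of that approach as a sketch (not the exact code from the link; the 5% fraction is a value I chose, and eye is the grayscale image from above):

import cv2
import numpy as np

# Build the grayscale histogram and its cumulative distribution (CDF),
# then threshold at the level below which the darkest 5% of pixels fall.
hist = cv2.calcHist([eye], [0], None, [256], [0, 256]).ravel()
cdf = np.cumsum(hist) / hist.sum()
level = int(np.searchsorted(cdf, 0.05))
_, pupil_mask = cv2.threshold(eye, level, 255, cv2.THRESH_BINARY_INV)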
Related
I'm working on a project where I have to develop an automated microscope. I need an algorithm that can identify circles. I was able to get an algorithm that deals with noise and finds circles, but if the experimental setup changes a bit it no longer works without tweaking the parameters (not what I want).
I have experimented with multiple approaches, and the one that works most of the time is applying CLAHE and then blurring the image. After that I run Hough circles on the result. I will show an example. This is the raw data that doesn't work with the current algorithm:
After applying the CLAHE I get:
The Hough circles can only find one circle:
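For reference, the pipeline is roughly the following (the filename is a placeholder, and the exact parameter values are the part I keep having to re-tune):

import cv2

img = cv2.imread('raw.png', cv2.IMREAD_GRAYSCALE)

# CLAHE boosts local contrast; blur before running Hough circles
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
blur = cv2.GaussianBlur(enhanced, (5, 5), 0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)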
Another approach is using histogram equalization:
This will give clearer circles, but Hough circles doesn't work at all. Sometimes I do a gain division to remove the background and then apply only the histogram equalization:
Equalized histogram after gain division
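The gain division step looks roughly like this (img is the grayscale input; the blur sigma is a value I picked):

import cv2

# Estimate the background with a heavy blur, divide it out, then equalize
background = cv2.GaussianBlur(img, (0, 0), sigmaX=31)
divided = cv2.divide(img, background, scale=255)
equalized = cv2.equalizeHist(divided)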
Histogram equalization always improves contrast, so my idea was to switch from CLAHE to it, but I haven't been able to make it work.
This site https://fiveko.com/online-tools/hough-circle-detection-demo/ works on all of my data after histogram equalization.
Can someone provide a way to detect circles in the equalized-histogram image, without it creating random circles in the background due to the noise?
The best I could do is tweak GaussianBlur and HoughCircles to work on the one equalized-histogram image you provided. I hope it is more general than it seems and will help you somehow.
import cv2

blur = cv2.GaussianBlur(gray, (11, 11), 0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=180, param2=17, minRadius=2, maxRadius=50)
I am quite new to Python and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into its RGB channels, converting each to a gradient, applying a threshold, and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the size of the enclosed areas. I just don't know how to do it, since my attempts with findContours/drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get that from the initial image.
Hope someone can help me here!
I am coding in Python 3.
Try adaptive thresholding on the grayscale version of the input image.
Also play with the last two parameters (blockSize and C) of the adaptive threshold. You will find good results, as I have shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find good values for these params.)
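A minimal sketch of the trackbar idea (the window/trackbar names, filename, and ranges are arbitrary choices):

import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

def on_change(_=0):
    # blockSize must be odd and >= 3; C may also be negative
    block = 2 * cv2.getTrackbarPos('blockSize', 'thresh') + 3
    c = cv2.getTrackbarPos('C', 'thresh') - 20
    out = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, block, c)
    cv2.imshow('thresh', out)

cv2.namedWindow('thresh')
cv2.createTrackbar('blockSize', 'thresh', 9, 50, on_change)
cv2.createTrackbar('C', 'thresh', 20, 40, on_change)
on_change()
cv2.waitKey()

Once the threshold looks clean, cv2.findContours() with cv2.contourArea() should give you the contours of the dark regions and their sizes.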
I have this image:
I am trying to bring the background into focus in order to perform edge detection on the image. What methods are available to me (in either the spatial or the frequency domain)?
What I tried is the following:
import numpy as np
import cv2

kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])  # sharpening kernel
im = cv2.filter2D(equ, -1, kernel)  # equ is the (equalized) grayscale image
This outputs this image:
I also played around with the centre value but with no positive result.
I also tried this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from skimage import restoration

psf = np.ones((5, 5)) / 25
equ = convolve2d(equ, psf, 'same')
deconvolved = restoration.wiener(equ, psf, 1, clip=False)
plt.imshow(deconvolved, cmap='gray')
With no appreciable changes to the image.
Any help on the matter is greatly appreciated!
EDIT:
Here is the code that I took from here:
# imports as above
psf = np.ones((5, 5)) / 25
equ = convolve2d(equ, psf, 'same')
deconvolved, _ = restoration.unsupervised_wiener(equ, psf)
plt.imshow(deconvolved, cmap='gray')
and here is the output:
Deblurring images is (unfortunately) quite difficult. The reason is that blurring removes noise, so there are several (noisy) images that will yield the same image when you blur them. This means there is no simple way for the computer to "choose" which of those images to recover when you deblur. Because of this, deblurring will often yield noisy images.
Now then, you might ask how photographers do this in reality. Well, they do not actually deblur images; they sharpen them (which is slightly different). When you sharpen an image, you increase the contrast near edges to emphasise them (this is why you sometimes see a halo around edges in images that have been sharpened too heavily).
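For example, a simple unsharp-mask sharpen in OpenCV looks roughly like this (the sigma and weights are arbitrary starting points, and img is assumed to be your grayscale image):

import cv2

# Unsharp masking: subtract a blurred copy to boost contrast near edges
blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)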
In your case, you want to deblur it, and there is no convolution kernel that will let you do this. To do it well, you need to know what process blurred the image in the first place (that is, if you don't want to spend thousands of dollars on special software or don't have a master's in mathematics or astronomy).
If you still want to do this, I'd recommend searching for deconvolution, and if you don't know the blurring process, blind deconvolution. There are some (crude) functions for it in skimage, which might be of help (http://scikit-image.org/docs/stable/auto_examples/filters/plot_restoration.html#sphx-glr-auto-examples-filters-plot-restoration-py).
Finally, the last link from Jax Briggs seems helpful, but I would not cross my fingers for magical results.
I'm attempting to do some image analysis using OpenCV in python, but I think the images themselves are going to be quite tricky, and I've never done anything like this before so I want to sound out my logic and maybe get some ideas/practical code to achieve what I want to do, before I invest a lot of time going down the wrong path.
This thread comes pretty close to what I want to achieve, and in my opinion, uses an image that should be even harder to analyse than mine. I'd be interested in the size of those coloured blobs though, rather than their distance from the top left. I've also been following this code, though I'm not especially interested in a reference object (the dimensions in pixels alone would be enough for now and can be converted afterwards).
Here's the input image:
What you're looking at are ice crystals, and I want to find the average size of each. The boundaries of each are reasonably well defined, so conceptually this is my approach, and would like to hear any suggestions or comments if this is the wrong way to go:
1. The RGB image is imported and converted to 8-bit grayscale (32-bit would be better based on my testing in ImageJ, but I haven't figured out how to do that in OpenCV yet).
2. The image is optionally Gaussian blurred to remove noise.
3. A Canny edge detector picks up the lines.
4. Morphological transforms (erosion + dilation) are applied to attempt to close the boundaries a bit further.
At this point it seems like I have a choice to make: I could either binarise the image and measure blobs above a threshold (i.e. max-value pixels if the blobs are white), or continue with the edge detection by closing and filling contours more fully. Contours seem complicated, though, looking at that tutorial, and although I can get the code to run on my images, it doesn't detect the crystals properly (unsurprisingly). I'm also not sure whether I should apply the morphological transforms before binarising.
Assuming I can get all that to work, I'm thinking a reasonable measure would be the longest axis of the minimum enclosing box or ellipse.
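Measuring that axis itself should be straightforward once I have usable contours; this is the kind of thing I have in mind (a sketch, with contours being whatever the detection eventually yields):

import cv2

sizes = []
for cnt in contours:
    # Minimum-area rotated rectangle; the longer side is the crystal's long axis
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    sizes.append(max(w, h))  # in pixels

mean_size = sum(sizes) / len(sizes)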
I haven't quite ironed out all the thresholds yet, and consequently some of the crystals are missed, but since they're being averaged, this isn't presenting a massive problem at the moment.
The script stores the processed images as it goes along, so I'd also like the final output image similar to the 'labelled blobs' image in the linked SO thread, but with each blob annotated with its dimensions maybe.
Here's what an (incomplete) idealised output would look like, each crystal is identified, annotated and measured (pretty sure I can tackle the measurement when I get that far).
Abridged the images and previous code attempts as they are making the thread overly long and are no longer that relevant.
Edit III:
As per the comments, the watershed algorithm looks to be very close to achieving what I'm after. The problem here though is that it's very difficult to assign the marker regions that the algorithm requires (http://docs.opencv.org/3.2.0/d3/db4/tutorial_py_watershed.html).
I don't think this is something that can be solved with thresholds through the binarization process, as the apparent colour of the grains varies by much more than the toy example in that thread.
Edit IV
Here are a couple of the other test images I've played with. It fares much better than I expected with the smaller crystals, and there's obviously a lot of finessing that could be done with the thresholds that I haven't tried yet.
Here's one; top left to bottom right correspond to the images output by Alex's steps below.
And here's a second one with bigger crystals.
You'll notice these tend to be more homogeneous in colour, but with harder-to-discern edges. Something I found a little surprising is that the edge flood-filling is a little overzealous with some of the images; I would have thought this would be particularly the case for the image with the very tiny crystals, but it actually appears to have more of an effect on the larger ones. There is probably a lot of room to improve the quality of the input images from our actual microscopy, but the more 'slack' the programming can take from the system, the easier our lives will be!
As I mentioned in the comments, watershed looks to be an ok approach for this problem. But as you replied, defining the foreground and the background for the markers is the hard part! My idea was to use the morphological gradient to get good edges along the ice crystals and work from there; the morphological gradient seems to work great.
import numpy as np
import cv2
img = cv2.imread('image.png')
blur = cv2.GaussianBlur(img, (7, 7), 2)
h, w = img.shape[:2]
# Morphological gradient
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
gradient = cv2.morphologyEx(blur, cv2.MORPH_GRADIENT, kernel)
cv2.imshow('Morphological gradient', gradient)
cv2.waitKey()
From here, I binarized the gradient using some thresholding. There's probably a cleaner way to do this...but this happens to work better than the dozen other ideas I tried.
# Binarize gradient
lowerb = np.array([0, 0, 0])
upperb = np.array([15, 15, 15])
binary = cv2.inRange(gradient, lowerb, upperb)
cv2.imshow('Binarized gradient', binary)
cv2.waitKey()
Now we have a couple of issues with this. It needs some cleaning up, as it's messy, and furthermore, the ice crystals on the edge of the image are showing up. But we don't know where those crystals actually end, so we should ignore them. To remove them from the mask, I looped through the pixels on the edge and used floodFill() to remove them from the binary image. Don't get confused here about the order of rows and columns: the if statements index rows and columns of the image matrix, while the input to floodFill() expects points (i.e. (x, y) form, which is the opposite of (row, col)).
# Flood fill from the edges to remove edge crystals
for row in range(h):
    if binary[row, 0] == 255:
        cv2.floodFill(binary, None, (0, row), 0)
    if binary[row, w-1] == 255:
        cv2.floodFill(binary, None, (w-1, row), 0)

for col in range(w):
    if binary[0, col] == 255:
        cv2.floodFill(binary, None, (col, 0), 0)
    if binary[h-1, col] == 255:
        cv2.floodFill(binary, None, (col, h-1), 0)
cv2.imshow('Filled binary gradient', binary)
cv2.waitKey()
Great! Now just to clean this up with some opening and closing...
# Cleaning up mask
foreground = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
foreground = cv2.morphologyEx(foreground, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Cleanup up crystal foreground mask', foreground)
cv2.waitKey()
So this image was labeled "foreground" because it holds the sure foreground of the objects we want to segment. Now we need to create a sure background for the objects. I did this in the naïve way, which is just to grow the foreground a bunch, so that the objects are probably all defined within that foreground. However, you could probably use the original mask or even the gradient in a different way to get a better definition. Still, this works OK, but is not very robust.
# Creating background and unknown mask for labeling
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))
background = cv2.dilate(foreground, kernel, iterations=3)
unknown = cv2.subtract(background, foreground)
cv2.imshow('Background', background)
cv2.waitKey()
So all the black there is "sure background" for the watershed. I also created the unknown matrix, which is the area between foreground and background, so that we can pre-label the markers that get passed to watershed as "these pixels are definitely foreground, these others are definitely background, and I'm not sure about the ones in between." Now all that's left to do is run the watershed! First, you label the foreground image with connected components, identify the unknown and background portions, and pass them in:
# Watershed
markers = cv2.connectedComponents(foreground)[1]
markers += 1 # Add one to all labels so that background is 1, not 0
markers[unknown==255] = 0 # mark the region of unknown with zero
markers = cv2.watershed(img, markers)
You'll notice that I ran watershed() on img. You might experiment with running it on a blurred version of the image (median blurring, maybe; I tried this and got slightly smoother boundaries for the crystals) or on other preprocessed versions of the image that define better boundaries.
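For example, the watershed call above could be swapped for something like this (ksize=5 is just a guess to tune):

# Run watershed on a median-blurred copy for slightly smoother boundaries
markers = cv2.watershed(cv2.medianBlur(img, 5), markers)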
It takes a little work to visualize the markers, as they're all small numbers in an int32 image. So what I did was assign each label a hue between 0 and 179, place it in an HSV image, and then convert to BGR to display the markers:
# Assign the markers a hue between 0 and 179
hue_markers = np.uint8(179*np.float32(markers)/np.max(markers))
blank_channel = 255*np.ones((h, w), dtype=np.uint8)
marker_img = cv2.merge([hue_markers, blank_channel, blank_channel])
marker_img = cv2.cvtColor(marker_img, cv2.COLOR_HSV2BGR)
cv2.imshow('Colored markers', marker_img)
cv2.waitKey()
And finally, overlay the markers onto the original image to check how they look.
# Label the original image with the watershed markers
labeled_img = img.copy()
labeled_img[markers>1] = marker_img[markers>1] # 1 is background color
labeled_img = cv2.addWeighted(img, 0.5, labeled_img, 0.5, 0)
cv2.imshow('watershed_result.png', labeled_img)
cv2.waitKey()
Well, that's the pipeline in its entirety. You should be able to copy/paste each section in a row and get the same results. The weakest parts of this pipeline are binarizing the gradient and defining the sure background for watershed. The distance transform might be useful in binarizing the gradient somehow, but I haven't gotten there yet. Either way... this was a cool problem, and I would be interested to see any changes you make to this pipeline or how it fares on other ice-crystal images.
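For what it's worth, an untested sketch of the distance-transform idea, applied here to the cleaned foreground mask rather than the raw gradient (the 0.5 fraction is arbitrary):

import cv2
import numpy as np

# Keep only pixels far from the mask border as more conservative seeds
dist = cv2.distanceTransform(foreground, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = np.uint8(seeds)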
I am running blob detection on a camera image of circular objects, using OpenCV 2.4.9. I run a number of standard filters on the image (blur, adaptive threshold, skeletonization using skimage routines, and dilation of the skeleton), and would like to identify blobs (contiguous black areas) in the result. There is SimpleBlobDetector just for that, but however I set its parameters, it does not do what I would like it to.
This is the original image:
and this is the processed version, with keypoints from the detector drawn in yellow:
The keypoints don't seem to respect area constraints, and also don't appear where I would expect them.
The script looks like the following, is there something obviously wrong? Or any other suggestions?
import numpy as np
import cv2
import skimage, skimage.morphology

# Preprocess: grayscale, denoise, adaptive threshold, skeletonize, dilate
img0 = cv2.imread('capture_b_07.cropped.png')
img1 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
img2 = cv2.medianBlur(img1, 5)
img3 = cv2.bilateralFilter(img2, 9, 75, 75)
img4 = cv2.adaptiveThreshold(img3, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 0)
img5 = skimage.img_as_ubyte(skimage.morphology.skeletonize(skimage.img_as_bool(img4)))
img6 = cv2.dilate(img5, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)), iterations=1)
# blob detection
pp=cv2.SimpleBlobDetector_Params()
pp.filterByColor=True
pp.blobColor=0
pp.filterByArea=True
pp.minArea=500
pp.maxArea=5000
pp.filterByCircularity=True
pp.minCircularity=.4
pp.maxCircularity=1.
det=cv2.SimpleBlobDetector(pp)
keypts=det.detect(img6)
img7=cv2.drawKeypoints(img6,keypts,np.array([]),(0,255,255),cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('capture_b_07.blobs.png',img7)
Similar pipeline from ImageJ (Analyze Particles, circularity 0.5-1.0, area 500-5000 px^2), which I am trying to reproduce using OpenCV, gives something like this:
You can get a similar result to that of ImageJ using watershed.
I inverted your img6, labeled it, and then used it as the marker for OpenCV's watershed. Then I enlarged the watershed boundary lines using a morphological filter and found connected components using OpenCV's findContours. Below are the results. Not sure if this is what you want. I'm not posting my actual code, as I quickly tried this out with a combination of Python and C++ and it's a bit messy, but a rough sketch of the steps is shown after the result images below.
watershed segmentation
connected components
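A rough Python-only sketch of the steps described above (not my actual code; it assumes OpenCV 3+ and the img0/img6 variables from your script):

import cv2
import numpy as np

# Invert the dilated skeleton so blobs become foreground, then label them
inverted = cv2.bitwise_not(img6)
markers = cv2.connectedComponents(inverted)[1]

# Watershed on the original image; boundary pixels come back as -1
markers = cv2.watershed(img0, markers)

# Enlarge the boundary lines with a morphological dilation
boundaries = np.uint8(markers == -1) * 255
boundaries = cv2.dilate(boundaries, np.ones((3, 3), np.uint8))

# Connected components of everything that is not a boundary
contours = cv2.findContours(cv2.bitwise_not(boundaries), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]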
I was able to use SBD and produce a somewhat acceptable result. My first step was to try to even out the brightness across your image, as the lower left was brighter than the upper right. Errors of omission outnumber errors of commission, so there are still refinements possible.
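For reference, one simple way to even out brightness (an assumption about the kind of correction used; the sigma is arbitrary) is to divide by a heavily blurred copy of the image:

import cv2

gray = cv2.cvtColor(cv2.imread('capture_b_07.cropped.png'), cv2.COLOR_BGR2GRAY)
background = cv2.GaussianBlur(gray, (0, 0), sigmaX=51)
evened = cv2.divide(gray, background, scale=255)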