Circle detection in microscopy images (noisy) - python

I'm working on a project where I have to develop an automated microscope, and I need an algorithm that can identify circles. I was able to get an algorithm that can deal with noise and find circles, but if the experimental setup changes a bit it no longer works without tweaking the parameters (not what I want).
I have experimented with multiple approaches, and the one that works most of the time applies CLAHE and then blurs the image. After this I run Hough circles on the result. I will show an example. This is the raw data that doesn't work with the current algorithm:
After applying the CLAHE I get:
The Hough circles can only find one circle:
Another approach is using histogram equalization:
This gives clearer circles, but Hough circles doesn't work at all. Sometimes I do a gain division to remove the background and then apply only the histogram equalization:
Equalized histogram after doing gain division
Histogram equalization always improves the contrast, so my idea was to swap CLAHE for it, but I haven't been able to make it work.
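For reference, here is a minimal sketch of the two contrast steps I'm comparing (the clip limit, tile size, and filename are placeholders, not my exact values):
import cv2

gray = cv2.imread('raw.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(gray)        # current approach: CLAHE
global_eq = cv2.equalizeHist(gray)  # what I'd like to switch to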
This site https://fiveko.com/online-tools/hough-circle-detection-demo/ works on all my data after histogram equalization.
Can someone suggest a way to detect circles in the equalized-histogram image, without creating random circles in the background due to the noise?

The best I could do is tweak GaussianBlur and HoughCircles to work for the one equalized-histogram image you provided. I hope it is more general than it seems and will help you somehow.
import cv2

gray = cv2.imread('eq_hist.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
blur = cv2.GaussianBlur(gray, (11, 11), 0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=180, param2=17, minRadius=2, maxRadius=50)
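To sanity-check the parameters, you can draw whatever was found back onto the image (a small follow-up sketch continuing from the snippet above):
import numpy as np

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(blur, (x, y), r, 255, 2)
cv2.imwrite('detected_circles.png', blur)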

Related

Hough circles on edges

I have the following image in which I've detected the borders, representing 7 circles. In my opinion it is fairly easy to identify the circles in it, but I am having trouble detecting all of them with the OpenCV Hough transform. Here is what I've tried:
import cv2

img = cv2.imread('sample.png', 0)
edges = cv2.Canny(img, 20, 120)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100,
                           param2=40, minRadius=0, maxRadius=250)
I either get the central circle, the outer one, or a lot of circles, depending on the parameters I pass to the function. Do you guys have a set of parameters that would output all the circles?
Thanks in advance
Solved with this example from scikit-image, adjusting the Canny thresholds and the radii range to match the posted image.
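For reference, a minimal sketch along the lines of that scikit-image example (the sigma, thresholds, and radii range are assumptions to tune against the posted image):
import numpy as np
from skimage import io
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

img = io.imread('sample.png', as_gray=True)
edges = canny(img, sigma=2, low_threshold=0.1, high_threshold=0.4)

# Search a range of radii and keep the seven strongest peaks.
radii = np.arange(20, 250, 2)
accumulator = hough_circle(edges, radii)
accums, cx, cy, found_radii = hough_circle_peaks(accumulator, radii,
                                                 total_num_peaks=7)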
Thanks to @barny

HoughCircles is not able to find the iris of an eye

I am trying to detect the Iris of an eye by using HoughCircles in OpenCV and Python. Before doing so, I am converting the image to grayscale and I am applying a Gaussian Blur. However, I am getting disastrous results. I am suspecting that the parameters of my HoughCircles call are wrong, however I can't seem to find any that work. Could it be that HoughCircles is not able to find the right circle on an image or are my arguments just wrong?
My code:
import cv2

eye = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
eye = cv2.GaussianBlur(eye, (5, 5), 0)
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, 1, eye.shape[0] / 2, param1=110, param2=20, minRadius=0, maxRadius=0)
Original Image:
Image after applying HoughCircles:
Thanks to everybody for your help! I ended up dropping the idea of using HoughCircles for this task and used a CDF-based approach instead, as described here: cdf-approach
I think this solution works better since it's more robust to changes in lighting intensity and also (I think) less computationally intensive.

Detecting overlay in video with python

I am working with frames from a video. The video is overlaid with several semi-transparent boxes, and my goal is to find the coordinates of these boxes. These boxes are the only fixed points in the video: the camera is moving, the color intensity changes, and there is no fixed reference. The problem is that the boxes are semi-transparent, so they also change with the video, albeit not as much. It seems that neither background subtraction nor tracking has the right tools for this problem.
Nevertheless, I've tried the background subtractors that come with cv2 as well as some home-brewed methods using differences between frames and thresholding. Unfortunately, these don't work due to the box transparency.
For reference, here is what the mean difference between the first 50 frames looks like:
And here is what cv2 background subtractor KNN returns:
I've experimented with thresholds, number of frames taken into account, various contouring algorithms, blurring/sharpening/etc. I've also tried techniques from document layout analysis.
I wonder if maybe there is something I'm missing due to not knowing the right keyword. I don't expect anyone here to give me the perfect solution, but any pointers as to where to look/what approach to try, are appreciated. I'm not bound to cv2 either, anything that works in python will do.
If you take a sample of random frames as elements of an array and compute the FFT along the time axis, the semi-transparent boxes will produce a very strong signal while the rest of the pixels behave like noise, so filtering out the noise will leave mainly the semi-transparent boxes. You can add the results of your other methods as additional frames for the FFT.
You are trying to find something that does not change over the entire video, so do not use consecutive frames; if you are forced to use consecutive frames, shuffle them randomly.
To gain speed, you can take only one color channel from each frame and pick the channel randomly. That way the colors become noise and cancel each other out.
If the FFT is too expensive, simply averaging random frames should filter out the noise.
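A rough sketch of the cheap averaging variant (the filename and sample size are placeholders):
import random
import numpy as np
import cv2

cap = cv2.VideoCapture('video.mp4')  # placeholder filename
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Average a random sample of frames: moving content blurs out while the
# static semi-transparent boxes stay coherent.
sample = random.sample(frames, min(50, len(frames)))
mean_img = np.mean(np.stack(sample).astype(np.float32), axis=0)
cv2.imwrite('mean_of_random_frames.png', mean_img.astype(np.uint8))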
OK, here is a first step: you can run Canny on that image, and from the Canny edges you can find contours:
import cv2
import random as rng

image = cv2.imread(r"c:\stackoverflow\interface.png")
edges = cv2.Canny(image, 100, 240)
contoursext, hierarchy = cv2.findContours(
    edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# cv2.RETR_EXTERNAL would work better if the image were not framed.

# Draw each contour in a random color.
for i in range(len(contoursext)):
    color = (rng.randint(0, 256), rng.randint(0, 256), rng.randint(0, 256))
    cv2.drawContours(image, contoursext, i, color, 1, cv2.LINE_8, hierarchy, 0)

# Show in a window
cv2.imshow("Canny", edges)
cv2.imshow("Contour", image)
cv2.waitKey(0)
Then you can test whether a contour, or a combination of two contours, forms a rectangle, which would probably detect most of the rectangular overlays.
Alternatively, you can try to detect lines from the Canny edges and check whether they are similar to rectangles.
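A hedged sketch of that rectangle test, continuing from the snippet above (the epsilon factor and area floor are assumptions to tune):
# Reuses contoursext from the previous snippet.
for cnt in contoursext:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    # Four corners and a non-trivial area suggest a box candidate.
    if len(approx) == 4 and cv2.contourArea(approx) > 500:
        x, y, bw, bh = cv2.boundingRect(approx)
        print('candidate box at', (x, y), 'size', (bw, bh))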

Irregular shape detection and measurement in python opencv

I'm attempting to do some image analysis using OpenCV in Python, but I think the images themselves are going to be quite tricky. I've never done anything like this before, so I want to sound out my logic and maybe get some ideas or practical code to achieve what I want to do, before I invest a lot of time going down the wrong path.
This thread comes pretty close to what I want to achieve, and in my opinion, uses an image that should be even harder to analyse than mine. I'd be interested in the size of those coloured blobs though, rather than their distance from the top left. I've also been following this code, though I'm not especially interested in a reference object (the dimensions in pixels alone would be enough for now and can be converted afterwards).
Here's the input image:
What you're looking at are ice crystals, and I want to find the average size of each. The boundaries of each are reasonably well defined, so conceptually this is my approach; I'd like to hear any suggestions or comments if this is the wrong way to go:
The image is imported in RGB and converted to 8-bit grayscale (32-bit would be better based on my testing in ImageJ, but I haven't figured out how to do that in OpenCV yet).
The image is optionally Gaussian blurred to remove noise
A Canny edge detector picks up the lines
Morphological transforms (erosion + dilation) are done to attempt to close the boundaries a bit further.
At this point I have a choice to make. I could either binarise the image and measure blobs above a threshold (i.e. maximum-value pixels if the blobs are white), or continue with the edge detection by closing and filling contours more fully. Contours seem complicated, though, judging by that tutorial, and while I can get the code to run on my images, it doesn't detect the crystals properly (unsurprisingly). I'm also not sure whether I should apply the morphological transforms before binarising.
Assuming I can get all that to work, I'm thinking a reasonable measure would be the longest axis of the minimum enclosing box or ellipse.
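A minimal sketch of the plan as described (the filename, kernel sizes, and thresholds are assumptions):
import cv2

img = cv2.imread('crystals.png')                  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # 1. convert to 8-bit gray
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # 2. optional denoise
edges = cv2.Canny(blur, 50, 150)                  # 3. edge detection
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
closed = cv2.dilate(edges, kernel)                # 4. dilation + erosion to
closed = cv2.erode(closed, kernel)                #    close boundary gaps

# 5. measure each blob by the longest side of its minimum-area rectangle
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
sizes = [max(cv2.minAreaRect(c)[1]) for c in contours]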
I haven't quite ironed out all the thresholds yet, and consequently some of the crystals are missed, but since they're being averaged, this isn't presenting a massive problem at the moment.
The script stores the processed images as it goes along, so I'd also like the final output image similar to the 'labelled blobs' image in the linked SO thread, but with each blob annotated with its dimensions maybe.
Here's what an (incomplete) idealised output would look like, each crystal is identified, annotated and measured (pretty sure I can tackle the measurement when I get that far).
Abridged the images and previous code attempts as they are making the thread overly long and are no longer that relevant.
Edit III:
As per the comments, the watershed algorithm looks to be very close to achieving what I'm after. The problem here though is that it's very difficult to assign the marker regions that the algorithm requires (http://docs.opencv.org/3.2.0/d3/db4/tutorial_py_watershed.html).
I don't think this is something that can be solved with thresholds through the binarization process, as the apparent colour of the grains varies by much more than the toy example in that thread.
Edit IV
Here are a couple of the other test images I've played with. It fares much better than I expected with the smaller crystals, and there's obviously a lot of finessing that could be done with the thresholds that I haven't tried yet.
Here's the first; top left to bottom right correspond to the images output in Alex's steps below.
And here's a second one with bigger crystals.
You'll notice these tend to be more homogeneous in colour, but with harder-to-discern edges. Something I found a little surprising is that the edge flood-filling is a little overzealous with some of the images; I would have thought this would be particularly the case for the image with the very tiny crystals, but actually it appears to have more of an effect on the larger ones. There is probably a lot of room to improve the quality of the input images from our actual microscopy, but the more 'slack' the programming can take from the system, the easier our lives will be!
As I mentioned in the comments, watershed looks to be an ok approach for this problem. But as you replied, defining the foreground and the background for the markers is the hard part! My idea was to use the morphological gradient to get good edges along the ice crystals and work from there; the morphological gradient seems to work great.
import numpy as np
import cv2
img = cv2.imread('image.png')
blur = cv2.GaussianBlur(img, (7, 7), 2)
h, w = img.shape[:2]
# Morphological gradient
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
gradient = cv2.morphologyEx(blur, cv2.MORPH_GRADIENT, kernel)
cv2.imshow('Morphological gradient', gradient)
cv2.waitKey()
From here, I binarized the gradient using some thresholding. There's probably a cleaner way to do this...but this happens to work better than the dozen other ideas I tried.
# Binarize gradient
lowerb = np.array([0, 0, 0])
upperb = np.array([15, 15, 15])
binary = cv2.inRange(gradient, lowerb, upperb)
cv2.imshow('Binarized gradient', binary)
cv2.waitKey()
Now we have a couple of issues with this. It needs some cleaning up because it's messy, and further, the ice crystals on the edge of the image are showing up, but we don't know where those crystals actually end, so we should ignore them. To remove them from the mask, I looped through the pixels on the edge and used floodFill() to remove them from the binary image. Don't get confused here by the order of rows and columns: the if statements index rows and columns of the image matrix, while the input to floodFill() expects points (i.e. (x, y) form, which is the opposite of (row, col)).
# Flood fill from the edges to remove edge crystals
for row in range(h):
    if binary[row, 0] == 255:
        cv2.floodFill(binary, None, (0, row), 0)
    if binary[row, w-1] == 255:
        cv2.floodFill(binary, None, (w-1, row), 0)

for col in range(w):
    if binary[0, col] == 255:
        cv2.floodFill(binary, None, (col, 0), 0)
    if binary[h-1, col] == 255:
        cv2.floodFill(binary, None, (col, h-1), 0)

cv2.imshow('Filled binary gradient', binary)
cv2.waitKey()
Great! Now just to clean this up with some opening and closing...
# Cleaning up mask
foreground = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
foreground = cv2.morphologyEx(foreground, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Cleanup up crystal foreground mask', foreground)
cv2.waitKey()
So this image was labeled "foreground" because it contains the sure foreground of the objects we want to segment. Now we need to create a sure background of the objects. I did this in the naïve way, which is just to grow the foreground a bunch so that the objects are probably all defined within it. However, you could probably use the original mask or even the gradient in a different way to get a better definition. Still, this works OK, but it is not very robust.
# Creating background and unknown mask for labeling
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))
background = cv2.dilate(foreground, kernel, iterations=3)
unknown = cv2.subtract(background, foreground)
cv2.imshow('Background', background)
cv2.waitKey()
So all the black there is "sure background" for the watershed. Also I created the unknown matrix, which is the area between foreground and background, so that we can pre-label the markers that get passed to watershed as "hey, these pixels are definitely in the foreground, these others are definitely background, and I'm not sure about these ones between." Now all that's left to do is run the watershed! First, you label the foreground image with connected components, identify the unknown and background portions, and pass them in:
# Watershed
markers = cv2.connectedComponents(foreground)[1]
markers += 1 # Add one to all labels so that background is 1, not 0
markers[unknown==255] = 0 # mark the region of unknown with zero
markers = cv2.watershed(img, markers)
You'll notice that I ran watershed() on img. You might experiment with running it on a blurred version of the image (maybe median blurring; I tried this and got slightly smoother boundaries for the crystals) or on other preprocessed versions of the image that define better boundaries.
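For example, the median-blur variant mentioned would replace the watershed call above with something like this (the kernel size is an assumption):
# Variant of the call above: run the watershed on a median-blurred copy
# of the image instead of the raw img.
smoothed = cv2.medianBlur(img, 5)
markers = cv2.watershed(smoothed, markers)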
It takes a little work to visualize the markers since they're all small integer labels. So what I did was assign each label a hue between 0 and 179 inside an HSV image, then convert to BGR to display the markers:
# Assign the markers a hue between 0 and 179
hue_markers = np.uint8(179*np.float32(markers)/np.max(markers))
blank_channel = 255*np.ones((h, w), dtype=np.uint8)
marker_img = cv2.merge([hue_markers, blank_channel, blank_channel])
marker_img = cv2.cvtColor(marker_img, cv2.COLOR_HSV2BGR)
cv2.imshow('Colored markers', marker_img)
cv2.waitKey()
And finally, overlay the markers onto the original image to check how they look.
# Label the original image with the watershed markers
labeled_img = img.copy()
labeled_img[markers>1] = marker_img[markers>1] # 1 is background color
labeled_img = cv2.addWeighted(img, 0.5, labeled_img, 0.5, 0)
cv2.imshow('watershed_result.png', labeled_img)
cv2.waitKey()
Well, that's the pipeline in its entirety. You should be able to copy/paste each section in a row and get the same results. The weakest parts of this pipeline are binarizing the gradient and defining the sure background for the watershed. The distance transform might be useful in binarizing the gradient somehow, but I haven't gotten there yet. Either way, this was a cool problem; I would be interested to see any changes you make to this pipeline or how it fares on other ice-crystal images.
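For what it's worth, here is a hedged sketch of that distance-transform idea, applied to the binary mask from above (the 0.5 fraction is an assumption):
# Pixels far from any edge in the binary mask are very likely sure
# foreground; thresholding the distance transform extracts them.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = np.uint8(sure_fg)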

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, the object does not necessarily have to be a shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out?
Below is a working example of the task.
Original Picture:
Resulting images after segmentation of nodes (intersections of multiple lines) and components (electronic components like the resistor or the voltage source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
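A minimal sketch of the integral-image density scan at a single scale (the filename, window size, and density threshold are assumptions):
import cv2
import numpy as np

img = cv2.imread('circuit.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
ink = (img < 128).astype(np.uint8)  # 1 where the pixel is black

# The integral image gives the sum over any rectangle in O(1).
integral = cv2.integral(ink)  # shape (h+1, w+1)

win = 40      # window size (assumption)
dense = 0.35  # minimum fraction of black pixels (assumption)
h, w = ink.shape
for y in range(0, h - win, win // 2):
    for x in range(0, w - win, win // 2):
        black = (integral[y + win, x + win] - integral[y, x + win]
                 - integral[y + win, x] + integral[y, x])
        if black / float(win * win) > dense:
            print('dense region at', (x, y))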
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
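A short sketch of both transforms (all thresholds and radii are assumptions to tune):
import cv2
import numpy as np

img = cv2.imread('circuit.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform for the line segments.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=30, maxLineGap=5)

# Hough circle transform for round components.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=120, param2=30, minRadius=10, maxRadius=60)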
