I am trying to detect ellipses in some images.
After applying some functions I got this edge map:
I tried using the Hough transform to detect ellipses, but it has very high complexity, so my computer didn't finish running the command even after 5 hours(!).
I also tried finding connected components and got this:
In the last case I also tried to continue and binarize the image.
In all cases I am stuck at these steps and have no idea how to continue from here.
My goal is to detect the tomatoes in the image. I am approaching this by trying to detect circles and ellipses and finding the radius (or the average radius in the ellipse case) of each one.
Edit:
I've added my code for the first method (the result is the edge map shown above):
import cv2

img = cv2.imread(r'../images/assorted_tomatoes.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgAfterLight = lightreduce(img)                      # user-defined helper
imgAfterGamma = gamma_correctiom(imgAfterLight, 0.8)  # user-defined helper
th2 = 255 - cv2.adaptiveThreshold(imgAfterGamma, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 5, 3)
median2 = cv2.medianBlur(th2, 3)
where median2 is the edge map shown above.
And here is the code for the connected components:
from scipy import ndimage
import cv2

fname = r'../images/assorted_tomatoes.jpg'
blur_radius = 1.0

img = cv2.imread(fname)  # scipy.misc.imread is deprecated, and cv2 is already imported
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(img.shape)

# smooth the image (to remove small objects)
imgf = ndimage.gaussian_filter(gray_img, blur_radius)
threshold = 80

# find connected components
labeled, nr_objects = ndimage.label(imgf > threshold)
where labeled is the result shown above.
Another edit:
This is the input image:
Input
The problem is that after edge detection there are a lot of unnecessary edges inside sub-regions that get in the way of producing a smooth edge map.
To me this looks like a classic problem for the watershed algorithm. It is designed for segmenting out touching objects like your tomatoes. My example is in Matlab (I'm on the wrong computer today) but it should translate to Python easily. First convert to greyscale, as you do, and then invert the image:
I=rgb2gray(img)
I2=imcomplement(I)
The image as-is will over-segment, so we remove minima that are too shallow. This can be done with the h-minima transform:
I3=imhmin(I2,50);
You might need to play with the value 50, which is the height threshold for suppressing shallow minima. Now run the watershed algorithm, and we get the following result:
L=watershed(I3);
The results are not perfect; it needs additional logic to remove some of the small regions, but it gives a reasonable estimate. The watershed and h-minima transforms are available in the skimage.morphology package in Python.
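For reference, a rough Python translation of the Matlab above might look like this (a sketch only: a marker-based watershed with h_minima standing in for imhmin, and the depth value 50 will need tuning just like in Matlab):

import cv2
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed  # skimage.morphology.watershed in older versions

img = cv2.imread('../images/assorted_tomatoes.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inverted = 255 - gray                  # equivalent of imcomplement

# suppress minima shallower than depth 50 by keeping only the surviving
# minima as markers, then run a marker-based watershed
minima = h_minima(inverted, 50)
markers, _ = ndi.label(minima)
labels = watershed(inverted, markers)  # one label per tomato candidate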
Related
I ran the SLIC (Simple Linear Iterative Clustering) superpixel algorithm from OpenCV and skimage on the same picture but got different results; the skimage SLIC result is better, as shown in the pictures below. The first one is OpenCV SLIC, the second one is skimage SLIC. I have several questions and hope someone can help:
Why does OpenCV have the parameter region_size while skimage has n_segments?
Are the conversion to LAB and the Gaussian blur necessary?
Is there any trick to optimize the OpenCV SLIC result?
===================================
OpenCV SLIC
Skimage SLIC
# OpenCV (cv2.ximgproc requires the opencv-contrib-python package)
import cv2

src = cv2.imread('pic.jpg')                     # read image
src = cv2.GaussianBlur(src, (5, 5), 0)          # Gaussian blur
src_lab = cv2.cvtColor(src, cv2.COLOR_BGR2LAB)  # convert to LAB
cv_slic = cv2.ximgproc.createSuperpixelSLIC(src_lab,
                                            algorithm=cv2.ximgproc.SLICO,
                                            region_size=32)
cv_slic.iterate()
# skimage
from skimage import io, segmentation

src = io.imread('pic.jpg')
sk_slic = segmentation.slic(src, n_segments=256, sigma=5)
Image with superpixel centroids, generated with the code below:
from skimage.measure import regionprops
import matplotlib.pyplot as plt

labels = sk_slic  # the label image returned by slic
# Measure properties of labeled image regions
regions = regionprops(labels)
# Scatter the centroid of each superpixel
plt.scatter([r.centroid[1] for r in regions], [r.centroid[0] for r in regions], c='red')
But there is one superpixel missing (top-left corner), and I found that len(regions) is 64 while len(np.unique(labels)) is 65. Why?
I'm not sure why you think skimage slic is better (and I maintain skimage! 😂), but:
different parameterizations are common in mathematics and computer science. Whether you use region size or number of segments, you should get the same result. I expect the formula to convert between the two to be something like n_segments = image_height * image_width / region_size**2, since region_size is the side length of one superpixel.
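For example (a sketch, assuming region_size is the side length of one superpixel in pixels):

import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)  # example image
region_size = 32
# each superpixel covers roughly region_size**2 pixels
n_segments = (image.shape[0] * image.shape[1]) // region_size**2
print(n_segments)  # 300 superpixels of roughly 32x32 pixels each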
The original paper suggests that for natural images (meaning images of the real world like you showed, rather than e.g. images from a microscope or from astronomy), converting to Lab gives better results.
to me, based on your results, it looks like the Gaussian blur used for scikit-image was stronger than for OpenCV. So you could make the results more similar by playing with the sigma. I also think the compactness parameter is probably not identical between the two.
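One way to probe this (a sketch; both values are guesses to tune against the OpenCV output):

from skimage import io, segmentation

src = io.imread('pic.jpg')
# sigma controls the pre-blur; compactness trades color proximity against
# spatial proximity, which also changes the superpixel shapes
sk_slic = segmentation.slic(src, n_segments=256, sigma=1, compactness=10)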
I want to find the bright spots in the above image and tag them with some symbol. For this I have tried using the Hough circle transform that OpenCV already provides, but it gives some kind of assertion error when I run the code. I also tried the Canny edge detection algorithm, which is also provided in OpenCV, but it gives an assertion error as well. I would like to know if there is some method to get this done, or how I can prevent those error messages.
I am new to OpenCV and any help would be really appreciated.
P.S. I can also use scikit-image if necessary, so if this can be done with scikit-image then please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np
image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary_image = np.where(gray_image > np.mean(gray_image), 1.0, 0.0)
# note: np.where with float arguments yields a float64 array, which
# cv2.Laplacian cannot combine with a CV_8UC1 output depth (a likely source
# of the assertion error; converting with .astype(np.uint8) first avoids it)
binary_image = cv2.Laplacian(binary_image, cv2.CV_8UC1)
If you are just going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding and then find connected components. Here is example code that draws a circle inside every circle in the image.
import cv2
import numpy as np
image = cv2.imread("image1.png")
# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4
# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black/ non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around the centers of the components
# (see connectedComponentsWithStats for the attributes of the components variable)
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)
cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is the resulting image:
Edit: connectedComponentsWithStats takes a binary image as input and returns the connected pixel groups in that image. If you would like to implement that function yourself, the naive way would be (a sketch follows these steps):
1- Scan the image pixels from top left to bottom right until you encounter a non-zero pixel that does not have a label (id).
2- When you encounter a non-zero pixel, search all its neighbours recursively (with 4-connectivity you check UP, LEFT, DOWN and RIGHT; with 8-connectivity you also check the diagonals) until you finish that region, assigning each pixel the same label. Then increase your label counter.
3- Continue scanning from where you left off.
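A rough sketch of those steps (not the OpenCV implementation; an explicit queue replaces the recursion so large regions don't hit Python's recursion limit):

import numpy as np
from collections import deque

def label_components(binary):
    """Label connected non-zero pixels of a 2D binary array (4-connectivity)."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    next_label = 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            # step 1: scan until a non-zero pixel without a label is found
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                queue = deque([(y, x)])
                # step 2: flood the whole region with the current label
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                # step 3: the outer loops continue scanning where they left off
    return labels, next_label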
For my project I'm trying to binarize an image with OpenCV in Python. I used the adaptive Gaussian thresholding from OpenCV and got the following result:
I want to use the binary image for OCR but it's too noisy. Is there any way to remove the noise from the binary image in Python? I already tried fastNlMeansDenoising from OpenCV but it doesn't make a difference.
P.S. Better options for binarization are welcome as well.
You should start by adjusting the parameters of the adaptive threshold so that it uses a larger area. That way it won't segment out the noise. Whenever your output image has more noise than the input image, you know you're doing something wrong.
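Something along these lines (a sketch; the block size of 51 and the offset of 10 are guesses to tune):

import cv2

gray = cv2.imread('your_image.png', cv2.IMREAD_GRAYSCALE)
# blockSize (here 51, must be odd) sets the area each local threshold is
# computed over; larger values stop the threshold from chasing the noise
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 51, 10)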
As an adaptive threshold I suggest using a closing (on the input grey-value image) with a structuring element just large enough to remove all the text. The difference between this result and the input image is exactly the text. You can then apply a regular threshold to this difference.
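A sketch of that idea, assuming dark text on a lighter background (the kernel size and the threshold value are guesses):

import cv2

gray = cv2.imread('your_image.png', cv2.IMREAD_GRAYSCALE)

# closing removes dark structures smaller than the kernel, leaving an
# estimate of the background
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
background = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# background minus input is exactly the text; apply a regular threshold
diff = cv2.subtract(background, gray)
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)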
It is also possible to use graph cuts for this kind of task. You will need to install the maxflow library in order to run the code. I quickly copied the code from their tutorial and modified it so you can run it more easily. Just play around with the smoothing parameter to increase or decrease the denoising of the image.
import cv2
import numpy as np
import matplotlib.pyplot as plt
import maxflow
# Important parameter
# Higher values means making the image smoother
smoothing = 110
# Load the image and convert it to grayscale image
image_path = 'your_image.png'
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = 255 * (img > 128).astype(np.uint8)
# Create the graph.
g = maxflow.Graph[int]()
# Add the nodes. nodeids has the identifiers of the nodes in the grid.
nodeids = g.add_grid_nodes(img.shape)
# Add non-terminal edges with the same capacity.
g.add_grid_edges(nodeids, smoothing)
# Add the terminal edges. The image pixels are the capacities
# of the edges from the source node. The inverted image pixels
# are the capacities of the edges to the sink node.
g.add_grid_tedges(nodeids, img, 255-img)
# Find the maximum flow.
g.maxflow()
# Get the segments of the nodes in the grid.
sgm = g.get_grid_segments(nodeids)
# The labels should be 1 where sgm is False and 0 otherwise.
img_denoised = np.logical_not(sgm).astype(np.uint8) * 255
# Show the result.
plt.subplot(121)
plt.imshow(img, cmap='gray')
plt.title('Binary image')
plt.subplot(122)
plt.title('Denoised binary image')
plt.imshow(img_denoised, cmap='gray')
plt.show()
# Save denoised image
cv2.imwrite('img_denoised.png', img_denoised)
Result
You could try the morphological close transformation to remove small "holes".
First define a kernel using numpy; you might need to play around with the size. Choose a kernel about as big as your noise.
kernel = np.ones((5,5),np.uint8)
Then run the morphologyEx using the kernel.
denoised = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel)
If text gets removed you can try to erode the image; this will "grow" the black pixels. If the noise is as big as the data, this method will not help.
erosion = cv2.erode(image, kernel, iterations=1)
I'm using the Canny algorithm to find the edges.
Next, I want to keep only the region inside the closed curves.
My code sample is:
import cv2
import numpy as np

img1 = cv2.imread('coins.jpg')
img = cv2.imread('coins.jpg', 0)
edges = cv2.Canny(img, 120, 200)

# mark the edge pixels
markers = np.zeros_like(img)
markers[edges == 255] = 1

# paint the edges red and everything else white
img1[markers == 1] = [0, 0, 255]
img1[markers == 0] = [255, 255, 255]

cv2.imshow('Original', img)
cv2.imshow('Canny', img1)

# Wait for the user to press a key
cv2.waitKey(0)
My output image is:
I want to show the original pixel values inside the coins. Is that possible?
I suggest you use a union-find structure to get the connected components of the white pixels of your img1. (You can find the details of this data structure on Wikipedia: https://en.wikipedia.org/wiki/Disjoint-set_data_structure.)
Once you have the connected components, my best idea is to take the connected components that do not contain any point on the border of the picture (they should correspond to the interiors of your coins) and color them with the corresponding pixels of the original image.
Sure, you may have some kind of triangles between your coins that will still be colored, but you could remove the corresponding connected components by hand.
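A sketch of the same idea, letting scipy's labeling stand in for a hand-rolled union-find (clear_border drops every component that touches the image border):

import cv2
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import clear_border

img1 = cv2.imread('coins.jpg')
gray = cv2.imread('coins.jpg', 0)
edges = cv2.Canny(gray, 120, 200)

# connected components of the non-edge pixels
labeled, n = ndi.label(edges == 0)
# keep only the components that do not touch the border (the coin interiors)
interior = clear_border(labeled) > 0

result = np.full_like(img1, 255)   # white background
result[interior] = img1[interior]  # restore the original pixels inside the coins
cv2.imwrite('coins_interior.png', result)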
Not really. The coin outlines are not continuous, so any kind of filling will leak.
You can repair the edges by some form of morphological processing (erosion), but this will bring the coins into contact and create unreachable regions between them.
As a fallback solution, you can try a Hough circle detector and mask inside the disks.
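A minimal sketch of that fallback (every HoughCircles parameter here is a guess that will need tuning for the coin image):

import cv2
import numpy as np

img1 = cv2.imread('coins.jpg')
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # HoughCircles is sensitive to noise

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=200, param2=50, minRadius=20, maxRadius=80)

# build a mask that is white inside every detected disk
mask = np.zeros(gray.shape, dtype=np.uint8)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, thickness=-1)

result = np.full_like(img1, 255)
result[mask == 255] = img1[mask == 255]  # keep the pixels inside the disks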
I have a problem using Hu moments for shape recognition. The goal is to be able to recognize the two white circles and the two white squares on the left in the picture.
http://i.stack.imgur.com/wVzYa.jpg
I tried using the cv2.approxPolyDP method but it doesn't quite work when there is a rotation. For the white circles I used the cv2.HoughCircles method and it works pretty well. However, I really need to use the Hu moments, because it seems to be a better method.
I have the code below:
import cv2
import numpy as np

nomeimg = "coded_target.jpg"
img = cv2.imread(nomeimg)
gray = cv2.imread(nomeimg, 0)

ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (4, 4))
imgbnbin = thresh
imgbnbin = cv2.dilate(imgbnbin, element)

# find contours
contours, hierarchy = cv2.findContours(imgbnbin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# eliminate small contours (the original looped over the empty Areacontours
# list, so the filter never ran; iterate over contours instead)
Areacontours = list()
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 90:
        Areacontours.append(cnt)
contours = Areacontours

print('found objects')
print(len(contours))

print("humoments")
mom = cv2.moments(contours[0])
Humoments = cv2.HuMoments(mom)
# log-scale the Hu moments so their magnitudes are comparable
Humoments2 = -np.sign(Humoments) * np.log10(np.abs(Humoments))
print(Humoments2)
It returns 7 numbers, which are the Hu invariants. I tried rotating the picture and I see that only the last two change. It also says that it found only 1 object, when there are obviously more than that. Is that normal?
I thought of using templates for shape identification purposes, but I don't know how to do it: I believe I should exploit the Hu moments of the templates and see where they fit, but I'm not sure how to achieve it.
I appreciate the help.
You can create a template image of the squares and use a template matching technique in order to detect them in the image.
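For the plain template matching route, a minimal sketch (the template filename and the 0.8 score threshold are assumptions):

import cv2
import numpy as np

scene = cv2.imread('coded_target.jpg', 0)
template = cv2.imread('square_template.jpg', 0)  # hypothetical template image
h, w = template.shape

res = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
out = cv2.cvtColor(scene, cv2.COLOR_GRAY2BGR)
for pt in zip(*np.where(res >= 0.8)[::-1]):      # keep strong matches only
    cv2.rectangle(out, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imwrite('matches.png', out)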
You can also detect the contour of the template image and use the function cv2.matchShapes. However, this function compares two contours (or images), so you would have to run a window of the same size as your template over your original image and find which part gives the best match (the minimum value of matchShapes).
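In practice you can skip the sliding window by comparing the template's contour directly against each contour found in the scene. A sketch (the filenames and the 0.1 score threshold are assumptions; lower matchShapes scores mean better matches):

import cv2

template = cv2.imread('square_template.jpg', 0)  # hypothetical template image
scene = cv2.imread('coded_target.jpg', 0)

_, t_bin = cv2.threshold(template, 127, 255, cv2.THRESH_BINARY_INV)
_, s_bin = cv2.threshold(scene, 127, 255, cv2.THRESH_BINARY_INV)

# OpenCV 4 returns two values here; OpenCV 3 returns three
t_contours, _ = cv2.findContours(t_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
s_contours, _ = cv2.findContours(s_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

t_cnt = max(t_contours, key=cv2.contourArea)     # main template contour
for cnt in s_contours:
    score = cv2.matchShapes(t_cnt, cnt, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:
        print('match with score', score)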