I am trying to measure the Feret diameter of microscopic particles deposited onto glass using Python and OpenCV. At present I have close to 150 images for which this process needs to be automated. For the measurement, I have written the Python script given below:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, measure
##step-1 reading the image
img = cv2.imread('1.tif', 0)
pixel_2_micron = 1.75 #1 pixel is equal to 1.75 microns
#img = color.rgb2gray(io.imread('1.tif', 0))
##step-2 selecting required region if necessary
cropped_img = img[0:1422,:]
#plt.hist(img.flat, bins=100, range=(0,255))
ret, thresh = cv2.threshold(cropped_img, 162, 217, cv2.THRESH_BINARY) #pixels above 162 are set to 217
#Step-3
kernel = np.ones((3,3),np.uint8)
eroded = cv2.erode(thresh, kernel, iterations = 1)
dilated = cv2.dilate(eroded, kernel, iterations = 1)
#cv2.imshow("Original Image", img)
#cv2.imshow("Threshold Image", thresh)
#cv2.imshow("Eroded Image", eroded)
#cv2.imshow("Dilated Image", dilated)
#cv2.waitKey(0)
#step-4
mask = thresh == 217 #select the foreground pixels set to 217 by the threshold above
io.imshow(mask) #show the masked image
Please assist me in measuring the dimensions of the masked regions, especially the Feret diameter of each masked region.
I have attached the image having masked the particles.
I have just released a Python module to calculate the Feret diameter of binary images, which should solve your problem.
https://pypi.org/project/feret/
At the moment it can't handle images with more than one region, but as described above you can use the skimage measure module to find connected regions and then take the maximum and minimum extents of each region to cut that region out of the image. If you need help, tell me.
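If you prefer to stay entirely within scikit-image, recent versions expose a maximum Feret diameter directly on labelled region properties (an assumption to check: feret_diameter_max needs scikit-image 0.18 or later). A minimal sketch, reusing mask and pixel_2_micron from the question:
from skimage import measure
# label connected regions in the binary mask from the question
labels = measure.label(mask)
for i, region in enumerate(measure.regionprops(labels), start=1):
    # feret_diameter_max requires scikit-image >= 0.18
    diameter_um = region.feret_diameter_max * pixel_2_micron
    print(f"Region {i}: max Feret diameter = {diameter_um:.2f} microns")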
I am trying to write an algorithm to systematically determine how many different "curves" are in an image. Example Image. I'm specifically interested in the white lines here, so I've used a color threshold to mask the rest of the image and only get the white pixels. These lines represent a path run by a player (wide receivers in the NFL), so I'm interested in the x and y coordinates that the path represents - and each "curve" represents a different path that the player took (or "route"). All curves should start on or behind the blue line.
However, while I can get just the white pixels, I can't figure out how to systematically identify the separate curves. In this example image, there are 8 white curves (or routes) present; I've identified those curves in this image. I tried edge detection and then scipy's ndimage to get the number of connected components, but because the curves overlap it counts them as connected and gives me only 3 labeled components for this image as opposed to eight. Here's what the edge detection output looks like. Is there a better way to go about this? Here is my sample code.
import cv2
from skimage.morphology import skeletonize
import numpy as np
from scipy import ndimage
#Read in image
image = cv2.imread('example_image.jpeg')
#Color boundary to get white pixels
lower_white = np.array([230, 230, 230])
upper_white = np.array([255, 255, 255])
#mask image for white pixels
mask = cv2.inRange(image, lower_white, upper_white)
c_pixels = cv2.bitwise_and(image, image, mask=mask)
#make pixels from 0 to 1 form to use in skeletonize
c_pixels = c_pixels.clip(0,1)
ske_c = skeletonize(c_pixels[:,:,1]).astype(np.uint8)
#Edge Detection
inputImage =ske_c*255
edges = cv2.Canny(inputImage,100,200,apertureSize = 7)
#Show edges
cv2.imshow('edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
#Find number of components
# smooth the image (to remove small objects); set the threshold
edgesf = ndimage.gaussian_filter(edges, 1)
T = 50 # set threshold by hand to avoid installing `mahotas` or
# `scipy.stsci.image` dependencies that have threshold() functions
# find connected components
labeled, nr_objects = ndimage.label(edgesf > T)
print("Number of objects is %d " % nr_objects)
I want to find the bright spots in the above image and tag them with some symbol. For this I have tried the Hough circle transform algorithm that OpenCV already provides, but it gives an assertion error when I run the code. I also tried the Canny edge detection algorithm, which is also provided in OpenCV, and it gives an assertion error as well. I would like to know if there is some method to get this done, or how I can prevent those error messages.
I am new to OpenCV and any help would be really appreciated.
P.S. - I can also use scikit-image if necessary. So if this can be done using scikit-image then please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np
image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary_image = np.where(gray_image > np.mean(gray_image), 1.0, 0.0)
# Laplacian does not accept a float64 input with a CV_8UC1 output depth,
# so convert the binary image to 8-bit first
binary_image = cv2.Laplacian((binary_image * 255).astype(np.uint8), cv2.CV_8UC1)
If you are just going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding and then find connected components. Use this example code to draw a circle inside each circle in the image.
import cv2
import numpy as np
image = cv2.imread("image1.png")
# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4
# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black/ non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around center of components
#see connectedComponentsWithStats function for attributes of components variable
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)
cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is the resulting image:
Edit: connectedComponentsWithStats takes a binary image as input and returns the connected pixel groups in that image. If you would like to implement that function yourself, a naive way (sketched in code after this list) would be:
1- Scan image pixels from top left to bottom right until you encounter a non-zero pixel that does not have a label (id).
2- When you encounter a non-zero pixel, search all its neighbours recursively (with 4-connectivity you check UP-LEFT-DOWN-RIGHT; with 8-connectivity you also check the diagonals) until you finish that region. Assign each pixel a label. Increase your label counter.
3- Continue scanning from where you left off.
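A minimal sketch of that naive labelling in pure Python/numpy (using an explicit stack instead of recursion, which would overflow on large regions; binary stands for any 2-D array with non-zero foreground):
import numpy as np
def label_components(binary, connectivity=4):
    # neighbour offsets for 4- or 8-connectivity
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                stack = [(y, x)]
                while stack:  # flood-fill this region
                    cy, cx = stack.pop()
                    if labels[cy, cx]:
                        continue
                    labels[cy, cx] = next_label
                    for dy, dx in offsets:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            stack.append((ny, nx))
    return labels, next_label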
I am trying to detect bubbles on an OMR sheet which looks something like this:
My code for edge detection and contour display is referenced from here. However, before finding the actual contours, I am trying to detect the edges, but somehow I am not able to set the correct parameter values.
This is what I get:
Code:
from imutils.perspective import four_point_transform
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2
def auto_canny(image, sigma=0.50):
    # compute the median of the single channel pixel intensities
    v = np.median(image)
    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)
    # return the edged image
    return edged
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
args = vars(ap.parse_args())
image = cv2.imread(args["image"])
r = 500.0 / image.shape[1]
dim = (500, int(image.shape[0] * r))
# perform the actual resizing of the image and show it
image = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
equalized_img = cv2.equalizeHist(gray)
cv2.imshow('Equalized', equalized_img)
# cv2.waitKey(0)
blurred = cv2.GaussianBlur(equalized_img, (7, 7), 0)
# edged =cv2.Canny(equalized_img, 30, 160)
edged = auto_canny(blurred)
cv2.imshow('edged', edged)
cv2.waitKey(0)
How can I get all the 90*4 circles?
You should be using the Hough transform to search for circles. This method projects every white pixel as a circle and tries to find where as many of those projected circles as possible intersect. You'll have to specify the expected radii of the circles to be found within the image.
Left - original image
Top-right - each white pixel is projected as red circle - they are too small to find intersecting point
Bottom-right - the green circle is larger, and all the intersecting points meet exactly at the middle of the circle! Both radius and position are returned by cvHoughCircles.
This person dealt with blob detection (that's what finding circles is called, I think) using cvHoughCircles on a cvCanny-ized image (read the OP's update).
OpenCV: Error in cvHoughCircles usage
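In modern Python the same idea is cv2.HoughCircles. A hedged sketch (the filename and all parameter values are assumptions to be tuned for your sheet):
import cv2
import numpy as np
img = cv2.imread("omr_sheet.png")  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
# minRadius/maxRadius should bracket the expected bubble size
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                           param1=100, param2=20, minRadius=5, maxRadius=12)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
cv2.imwrite("detected_circles.png", img)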
You need to improve your contour detection.
Possibly not by changing the detection itself, but by better pre-processing at an earlier stage.
Contour detection works better with more contrast and color separation in the image. If you haven't done so yet, threshold your image with techniques like simple thresholding, adaptive thresholding, or smarter techniques like Otsu's. Check the OpenCV documentation here.
Besides that, your case may eventually need more advanced techniques like "Adaptive Thresholding Using the Integral Image", described here.
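For illustration, a minimal sketch of those two thresholding options (the filename is an assumption, and the block size and constant of the adaptive variant need tuning):
import cv2
gray = cv2.imread("omr_sheet.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# Otsu picks a global threshold automatically from the histogram
_, otsu = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# adaptive thresholding computes a local threshold per neighbourhood,
# which is more robust to uneven lighting
adaptive = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)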
Currently I am trying to create a pattern recognition program as a pet project. It involves JPEG files of knitting swatches and, basically, recognizing the stitches in each swatch. Each stitch essentially takes the shape of an inverted 'v'.
So far I have managed to get a current version of OpenCV for Python up and running in a Visual Studio environment using the built-in Canny edge detection, but I am unsure how to progress from there, because I am reading up on edge detection methods and finding there are quite a few.
If anyone can point me in the right direction, I would appreciate it a lot.
So here's the code:
import numpy as np
import cv2
#Defining the autocanny function
def auto_canny(image, sigma=0.10):
    # compute the median of the pixel intensities
    v = np.median(image)
    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)
    # return the edged image
    return edged
#defining the image, grayscale, blurred
image = cv2.imread('img_knit_sample2.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
#apply Canny edge detection using a wide threshold, tight
#threshold, and automatically determined threshold
wide = cv2.Canny(blurred, 10, 200)
tight = cv2.Canny(blurred, 225, 250)
auto = auto_canny(blurred)
#show the images
cv2.imshow("Original", image)
cv2.imshow("Edges-wide", wide)
cv2.imshow("Edges-tight", tight)
cv2.imshow("Edges-auto", auto)
#Save the images to disk
cv2.imwrite('Wide_config.jpg', wide)
cv2.imwrite('Tight_config.jpg', tight)
cv2.imwrite('Autocanny.jpg', auto)
cv2.waitKey(0)
cv2.destroyAllWindows()
Unfortunately I cannot upload more than 2 images, but I am more than happy to share the URLs with anyone willing to go further.
(Apologies for the rough description since I am new to this; if you understand my query and can still help, kudos and much appreciation to you.)
Cheers
Edges appear where there is contrast, i.e. at the boundary between zones of different color (intensity). In your picture, this is essentially between the blue and black wools.
You can see some separation between the blue threads, but these are ridges, not edges, and you'd better use a ridge detector.
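For instance, a hedged sketch with scikit-image's Sato tubeness filter, one of several available ridge filters (the filename and sigma range are assumptions):
import numpy as np
from skimage import io, filters
img = io.imread('img_knit_sample2.jpg', as_gray=True)
# the Sato filter responds to bright, thread-like ridge structures
ridges = filters.sato(img, sigmas=range(1, 4), black_ridges=False)
io.imsave('ridges.png', (255 * ridges / ridges.max()).astype(np.uint8))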
In the black areas, seeing the edges is hopeless. Don't even try.
If your goal is to locate the stitches, you may be more lucky with template matching.
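A minimal template-matching sketch (assuming you crop one clean stitch into a hypothetical stitch_template.png; the 0.6 score threshold is a guess to tune per image):
import cv2
import numpy as np
img = cv2.imread('img_knit_sample2.jpg', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('stitch_template.png', cv2.IMREAD_GRAYSCALE)
h, w = template.shape
# normalized cross-correlation score for every template position
scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(scores >= 0.6)
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for x, y in zip(xs, ys):
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite('stitch_matches.png', vis)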
I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
Sure, you can just get the connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear. However, if you'd like to do it right, you should think about why you have the black area in the first place.
You did not use adaptive thresholding (locally adaptive) and this made your output sensitive to shading. Try not to get the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255 - img2, dst, 255, ADAPTIVE_THRESH_MEAN_C,
                  THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
In the future, you may read something about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which is a big deal, since typically there is more background color, and that biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2
im = cv2.imread('image.png')
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
grayout = 255*np.ones((im.shape[0],im.shape[1],1), np.uint8)
blur = cv2.GaussianBlur(gray,(5,5),1)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        # count the black pixels inside the bounding box
        cntd = 0
        for i in range(x, x+w):
            for j in range(y, y+h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h * w)
        # copy only sparse (low-density) regions, i.e. the writing
        if density < 0.5:
            for i in range(x, x+w):
                for j in range(y, y+h):
                    grayout[j, i] = gray[j, i]
            wcnt = wcnt + 1
cv2.imwrite('result.png', grayout)
You have to balance two things: removing the black spots while not losing the content of what is on the board. The output I got is this:
Here is a Python numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Import mahotas & numpy with their standard abbreviations.
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.