I'm trying to detect the pips on some dice for a hobby project.
The dice are nicely cropped out, and there are reasonably good photos of them.
The photos are then gray-scaled, de-noised with a median filter, and converted to an np.array so that cv2 accepts them.
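For reference, the preprocessing looks roughly like this (a sketch; the file name is a placeholder):

import cv2
import numpy as np
from PIL import Image

die = Image.open('die.jpg').convert('L')  # gray-scale
die_np = np.array(die)                    # np.array so that cv2 accepts it
die_np = cv2.medianBlur(die_np, 5)        # median filter against noise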
Now I try to count the pips and... well... SimpleBlobDetector finds blobs everywhere, or none at all, depending on the exact parameter settings, and never the most obvious pips on top. In particular, if I set "circularity" above 0.7 or activate "inertia", it finds nothing at all.
What's wrong here? Do I have to preprocess my picture further, or is the problem in the parameters below?
I tried exchanging the median filter for a Gaussian blur, and I also tried posterizing the image to get fairly large areas with exactly the same greyscale value. Nothing helps.
I have tried many variants by now; the current one produces the picture below.
blob_params = cv2.SimpleBlobDetector_Params()
blob_params.minDistBetweenBlobs = 1
blob_params.filterByCircularity = True
blob_params.minCircularity = 0.5
blob_params.minThreshold = 1
blob_params.thresholdStep = 1
blob_params.maxThreshold = 255
#blob_params.filterByInertia = False
#blob_params.minInertiaRatio = 0.2
blob_params.minArea = 6
blob_params.maxArea = 10000
detector = cv2.SimpleBlobDetector_create(blob_params)
keypoints = detector.detect(die_np)  # detect blobs... except it doesn't work
image = cv2.drawKeypoints(die_np,
                          keypoints,
                          np.array([]),
                          (255, 255, 0),
                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
im = Image.fromarray(image)
im
Okay, and the solution is:
Some parameters are already set before you even start. Those defaults are optimized for finding dark, round blobs on a light background. That is not the case in my image! This took me by surprise: after printing out every single parameter and correcting everything that didn't fit my use case, I was finally able to use this function. Now it easily finds the pips on several different kinds of dice I've tried.
This would have been easier if the OpenCV parameter list were a little more accessible, as a dict for example, or better commented.
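At least the defaults can be inspected programmatically. This little loop (a sketch) prints every public attribute of the parameter object:

p = cv2.SimpleBlobDetector_Params()
for name in dir(p):
    if not name.startswith('_'):
        print(name, '=', getattr(p, name))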
I'll leave my code here. It contains a comprehensive list of the parameters this function uses, remarks in the comments on how to use some of them, and a wrapper function to make calling it easier. I'll also leave the link to the SimpleBlobDetector documentation here, as it is somewhat difficult to find on the net.
Coming from a C/C++ background: this function is totally not Pythonic. It does not accept every image format, as would be normal in a Python context; instead it wants a very specific data type. In Python, a function would at least tell you what it wants; this one keeps quiet so as not to spoil the riddle, then throws an exception without explanation.
So I wrote a wrapper function that at least accepts numpy arrays in color and black/white format, as well as PIL.Image. That opens up the usability of this function a little.
OpenCV, when you read this: normally in Python a readable docstring is provided for each function in '''triple quotes'''. Emphasis on "readable".
import cv2
import numpy as np
from PIL import Image

def blobize(im, blob_params):
    '''
    Takes an image and tries to find blobs on it.

    Parameters
    ----------
    im : np.ndarray, single channel. If several channels are present,
        the first channel is taken. Alternatively you can provide a
        PIL.Image, which is then converted to mode "L" and used directly.
    blob_params : a cv2.SimpleBlobDetector_Params() parameter list

    Returns
    -------
    blobbed_im : a greyscale image with the found blobs circled in red
    keypoints : an OpenCV keypoint list
    '''
    if isinstance(im, Image.Image):
        im = np.array(im.convert("L"))
    if isinstance(im, np.ndarray):
        if len(im.shape) >= 3 and im.shape[2] > 1:
            im = im[:, :, 0]
    detector = cv2.SimpleBlobDetector_create(blob_params)
    try:
        keypoints = detector.detect(im)
    except cv2.error:
        keypoints = None
    if keypoints:
        blobbed_im = cv2.drawKeypoints(
            im, keypoints, np.array([]), (255, 0, 0),
            cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    else:
        blobbed_im = im
    return blobbed_im, keypoints
blob_params = cv2.SimpleBlobDetector_Params()

# Images are converted to many binary b/w layers. Then blobColor = 0 searches
# for dark blobs and blobColor = 255 for bright blobs. Or set the filter to
# False, and it finds bright and dark blobs, both.
blob_params.filterByColor = False
blob_params.blobColor = 0

# Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
blob_params.filterByArea = True
blob_params.minArea = 3.    # highly dependent on image resolution and dice size
blob_params.maxArea = 400.  # float! Highly dependent on image resolution.

blob_params.filterByCircularity = True
# 0.7 could be rectangular, too; 1 is perfectly round. Kept low here because
# the pips are not always round, e.g. when they are damaged.
blob_params.minCircularity = 0.
blob_params.maxCircularity = 3.4028234663852886e+38  # infinity

blob_params.filterByConvexity = False
blob_params.minConvexity = 0.
blob_params.maxConvexity = 3.4028234663852886e+38

blob_params.filterByInertia = True  # a second way to find round blobs
blob_params.minInertiaRatio = 0.55  # 1 is round, 0 is any shape
blob_params.maxInertiaRatio = 3.4028234663852886e+38  # infinity again

blob_params.minThreshold = 0      # where to start thresholding the image
blob_params.maxThreshold = 255.0  # where to stop
blob_params.thresholdStep = 5     # steps to go through

# Avoid overlapping blobs. Must be bigger than 0. Highly dependent on image resolution!
blob_params.minDistBetweenBlobs = 3.0

# If the same blob center is found at different threshold values (within
# minDistBetweenBlobs), a counter for that blob is increased. If the counter
# reaches minRepeatability, the blob is considered stable and produces a
# KeyPoint; otherwise it is discarded.
blob_params.minRepeatability = 2

res, keypoints = blobize(im2, blob_params)  # im2: the image to analyze
Related
I'm trying to de-noise an image which looks like this:
How can the white noisy pixels be removed? Initially I thought a median filter should suffice, but it doesn't look like it. Moreover, the image is RGB. Any thoughts?
I use Python.
(img: https://www.dropbox.com/s/aiyxkswtiyji7cw/exampledenoise.png?dl=0)
Median blur, thresholding, comparisons, boolean expressions.
Unless you know why this picture was damaged and how, this is just guesswork.
In the PNG the defects all have an exact value of 255. Defects happen in dark areas of the picture. I'm guessing that something caused an underflow in those pixels, wrapping them from 0 or -1 to 255.
import cv2 as cv

# using my own "imshow" shim for jupyter notebook, which handles boolean arrays
im = cv.imread("exampledenoise.png")
# imshow((im == 255).any(axis=2))
# estimating local brightness... so we DON'T "fix" bright spots
ksize = 13 # least value that appears to suppress all defects
med = cv.medianBlur(im, ksize=ksize)
# imshow(med)
# imshow(med >= 64) # candidate areas/channels.
# true = values likely explained by overexposure (don't fix)
# false = values likely explained by defect
fixup = (med < 64) & (im == 255)
assert len(fixup.shape) == 3 # fix individual channels
# imshow(fixup.any(axis=2)) # gonna fix values in these pixels
output = im.copy()
output[fixup] = 0 # assuming the damage was from some underflow, so 0 is appropriate
# imshow(output)
I have the following problem. In the image below, a so-called teardrop-like DNA is shown.
In red, a trace yielded by skimage.morphology.skeletonize is shown.
The bright yellow region is a molecular complex called "Streptavidin", which I need to identify.
In essence, my problem revolves around two steps:
I need to identify the brightest spot in the image. For that, I apply a Gaussian filter to the image and subsequently try a thresholding algorithm (for example, skimage.filters.threshold_yen).
Then, I use skimage.measure.regionprops.centroid to identify the Streptavidin.
However, this approach is prone to errors: since I need to process not just this single image but other images as well, I cannot use a global threshold. For some images Yen thresholding works fine; for others it doesn't yield a result at all. So my question here would be: is there an alternative way to identify the centroid of the Streptavidin complex, given that the various images display different intensity values, i.e. a global approach is difficult to implement?
I then want to delete all trace points (in red) that lie within the Streptavidin complex. Here, I wanted to use a boolean mask representing the Streptavidin complex and then tell the program to delete all points that lie within the complex.
But again, this is very dependent on the actual boolean mask I get from thresholding, so I would like to think of another way to do it.
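To make the masking idea concrete, a minimal sketch (strep_mask and skel are hypothetical boolean arrays for the complex and the skeletonized trace):

import numpy as np

# keep only the skeleton points that lie outside the streptavidin mask
skel_clean = skel & ~strep_mask

# or as coordinates, if individual points are needed:
ys, xs = np.nonzero(skel_clean)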
Overall, here is my code for reference:
import copy
from skimage import filters, measure, morphology

# mol_filtered: the input image (np.ndarray)
img_gaussian = filters.gaussian(mol_filtered, sigma=1.)

# Get threshold (use Yen threshold)
if self.anal_pars['strep_thresh'] is None:
    thresh = filters.threshold_yen(img_gaussian)
    # If Yen thresholding fails:
    if thresh < 0.8:
        thresh = filters.threshold_minimum(img_gaussian)
else:
    thresh = self.anal_pars['strep_thresh']

# Split the image according to the selected threshold
img_bw = copy.deepcopy(img_gaussian)
img_bw[img_bw < thresh] = 0
img_bw[img_bw != 0] = 1

# Label the regions that lie above the threshold
img_labelled = morphology.label(img_bw)

# Check if the thresholding yielded more than one area:
if img_labelled.max() != 1:
    # Remove possible artefacts (happens for the Yen filter)
    img_bw = morphology.remove_small_objects(img_bw.astype(bool))
    img_labelled = morphology.label(img_bw)  # relabel image

for region in measure.regionprops(img_labelled):
    strep_index = region.centroid
    mol_pars['strep_index'] = strep_index

# Add the streptavidin area to the mol_pars dictionary
img_strep = copy.deepcopy(img_bw)
mol_pars['img_strep'] = img_strep
So this seems quite messy and inefficient; surely there is a faster way to do it?
Thanks for your help.
I use OpenCV and Python and I want to remove the small connected objects from my image.
I have the following binary image as input:
The image is the result of this code:
dilation = cv2.dilate(dst,kernel,iterations = 2)
erosion = cv2.erode(dilation,kernel,iterations = 3)
I want to remove the objects highlighted in red:
How can I achieve this using OpenCV?
How about with connectedComponentsWithStats (doc):
import cv2
import numpy as np

# Find all of the connected components (white blobs in your image).
# im_with_separated_blobs is an image where each detected blob has a different
# pixel value, ranging from 1 to nb_blobs - 1. im is the binary input image.
nb_blobs, im_with_separated_blobs, stats, _ = cv2.connectedComponentsWithStats(im)

# stats (and the silenced output centroids) gives some information about the
# blobs; see the docs for more information. Here, we're interested only in the
# size of the blobs, contained in the last column of stats.
sizes = stats[:, -1]

# The following lines take out the background, which is also considered a
# component and which, for most applications, is not the expected output. You
# may keep the results as they are by commenting out these lines; you'll then
# have to update the range in the for loop below.
sizes = sizes[1:]
nb_blobs -= 1

# Minimum size of particles we want to keep (number of pixels).
# Here it's a fixed value, but you can set it as you want, e.g. the mean of
# the sizes or whatever.
min_size = 150

# output image with only the kept components
im_result = np.zeros_like(im_with_separated_blobs)

# for every component in the image, keep it only if it's above min_size
for blob in range(nb_blobs):
    if sizes[blob] >= min_size:
        # see description of im_with_separated_blobs above
        im_result[im_with_separated_blobs == blob + 1] = 255
Output:
In order to remove objects automatically you need to locate them in the image.
From the image you provided I see nothing that distinguishes the 7 highlighted items from others.
You have to tell your computer how to recognize objects you don't want. If they look the same, this is not possible.
If you have multiple images where the objects always look like that, you could use template matching techniques, along the lines of the sketch below.
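A minimal template-matching sketch (the file names and the 0.8 match threshold are placeholders):

import cv2
import numpy as np

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread('unwanted_object.png', cv2.IMREAD_GRAYSCALE)

res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)  # top-left corners of strong matches
h, w = tmpl.shape
for y, x in zip(ys, xs):
    img[y:y+h, x:x+w] = 0  # blank out each match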
Also the closing operation doesn't make much sense to me.
For isolated or unconnected blobs: try this (you can set noise_removal_threshold to whatever you like, for example relative to the largest contour, or a nominal value like 100 or 25).
import cv2
import numpy as np

# img is the binary input image; find its contours first
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

noise_removal_threshold = 25  # minimal contour area to keep
mask = np.zeros_like(img)
for contour in contours:
    area = cv2.contourArea(contour)
    if area > noise_removal_threshold:
        cv2.fillPoly(mask, [contour], 255)
Removing small connected components by area is called area opening. OpenCV does not have this as a function, it can be implemented as shown in other answers. But most other image processing packages will have an area opening function.
For example using scikit-image:
import skimage
import imageio.v3 as iio
img = iio.imread('cQMZm.png')[:,:,0]
out = skimage.morphology.area_opening(img, area_threshold=150, connectivity=2)
For example using DIPlib:
import diplib as dip
out = dip.AreaOpening(img, filterSize=150, connectivity=2)
PS: The DIPlib implementation is noticeably faster. Disclaimer: I'm an author of DIPlib.
I have a problem using the Hu moments for shape recognition. The goal is to be able to recognize the two white circles and the two white squares on the left in the picture.
http://i.stack.imgur.com/wVzYa.jpg
I tried using the cv2.approxPolyDP method but it doesn't quite work when there is a rotation. For the white circles I used the cv2.HoughCircles method and it works pretty well. However, I really need to use the Hu moments, because it seems it is a better method.
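For reference, the HoughCircles call was along these lines (a sketch; the parameters are assumptions to be tuned, with gray as defined in the code below):

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=5, maxRadius=50)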
I have this code below:
import cv2
import numpy as np

nomeimg = "coded_target.jpg"
img = cv2.imread(nomeimg)
gray = cv2.imread(nomeimg, 0)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (4, 4))
imgbnbin = thresh
imgbnbin = cv2.dilate(imgbnbin, element)

# find contours
contours, hierarchy = cv2.findContours(imgbnbin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# eliminate small contours
Areacontours = list()
for c in contours:
    area = cv2.contourArea(c)
    if area > 90:
        Areacontours.append(c)
contours = Areacontours

print('found objects')
print(len(contours))

print("humoments")
mom = cv2.moments(contours[0])
Humoments = cv2.HuMoments(mom)
# log-transform to bring the values into a comparable range
Humoments2 = -np.sign(Humoments) * np.log10(np.abs(Humoments))
print(Humoments2)
It returns 7 numbers, which are the Hu invariants. I tried rotating the picture, and I see that only the last two change. It also says that it found only 1 object, when there are obviously more than that. Is that normal?
I thought of using templates for shape identification, but I don't know how to do it: I believe I should compute the Hu moments of the templates and see where they fit, but I'm not sure how to achieve it.
I appreciate the help.
You can create a template image of the squares and implement a template matching technique in order to detect them in the image.
You can also detect the contour of the template image and use the function cv2.matchShapes. However, this function compares two shapes, so you will probably have to slide a window the same size as your template across your original image and find the position with the best match (the minimum value returned by matchShapes). A sketch of the contour comparison follows.
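A hedged sketch of the comparison (the file name and the 0.1 cut-off are placeholders; contours is the list from the question's code):

import cv2

tmpl = cv2.imread('square_template.png', 0)
_, tmpl_bw = cv2.threshold(tmpl, 127, 255, cv2.THRESH_BINARY)
tmpl_contours, _ = cv2.findContours(tmpl_bw, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)

for c in contours:  # contours found in the original image
    score = cv2.matchShapes(tmpl_contours[0], c, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:  # lower = more similar; tune per image
        print('likely a match:', score)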
I have been asked to write a program to find 'stars' in an image by converting the image file to a numpy array and generating an array of the coordinates of the brightest pixels in the image above a specified threshold (representing background interference).
Once I have located the brightest pixel in the image I must record its x,y coordinates, and set the value of that pixel and the surrounding 10x10 pixel area to zero, effectively removing the star from the image.
I already have helper code which converts the image to an array, and have attempted to tackle the problem as follows:
I have defined a variable
Max = array.max()
and used a while loop:
while Max >= threshold:
    coordinates = numpy.where(array == Max)  # find the maximum value
However, I want this to loop over the whole array for all of the coordinates, not just find the first maximum, and also to remove each maximum when found, setting the surrounding 10x10 area to zero. I have thought about using a for loop for this, but I am unsure how to use it since I am new to Python.
I would appreciate any suggestions,
Thanks
There are a number of different ways to do it with just numpy, etc.
There's the "brute force" way:
from PIL import Image
import numpy as np

im = Image.open('test.bmp')
data = np.array(im)

threshold = 200
window = 5  # This is the "half" window...
ni, nj = data.shape
new_value = 0

for i, j in zip(*np.where(data > threshold)):
    istart, istop = max(0, i - window), min(ni, i + window + 1)
    jstart, jstop = max(0, j - window), min(nj, j + window + 1)
    data[istart:istop, jstart:jstop] = new_value
Or the faster approach (the uniform filter spreads every above-threshold pixel over a window-sized neighborhood, so thresholding the smoothed mask at > 0 marks the full window around each bright pixel):
from PIL import Image
import numpy as np
import scipy.ndimage

im = Image.open('test.bmp')
data = np.array(im)

threshold = 200
window = 10  # This is the "full" window...
new_value = 0

mask = data > threshold
mask = scipy.ndimage.uniform_filter(mask.astype(float), size=window)
mask = mask > 0
data[mask] = new_value
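For completeness, here is the literal iterative loop the question describes (a sketch; array and threshold are named as in the question, and a half-window of 5 blanks a roughly 10x10 box):

import numpy

coords = []
half = 5  # half-width of the box to blank out
while array.max() >= threshold:
    i, j = numpy.unravel_index(numpy.argmax(array), array.shape)
    coords.append((j, i))  # record the star's x, y position
    array[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1] = 0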
Astrometry.net will do this for you:
If you have astronomical imaging of the sky with celestial coordinates
you do not know—or do not trust—then Astrometry.net is for you. Input
an image and we'll give you back astrometric calibration meta-data,
plus lists of known objects falling inside the field of view.
We have built this astrometric calibration service to create correct,
standards-compliant astrometric meta-data for every useful
astronomical image ever taken, past and future, in any state of
archival disarray. We hope this will help organize, annotate and make
searchable all the world's astronomical information.
You don't even have to upload the images to their website. You can download the source. It is licensed under the GPL and uses NumPy, so you can muck around with it if you need to.
Note that you will need to first convert your bitmap to one of the following: JPEG, GIF, PNG, or FITS image.