I'm looking to fill holes inside the largest component of a binary image.
Input binary image:
What I'm looking for is to fill the "lakes" inside the purple component without altering any other pixels.
I know there are several answers on Stack Overflow using the connectedComponentsWithStats function from OpenCV or using scikit-image, but I cannot make them work the way I want, so please don't tag this as a duplicate.
Here is what I've made using other questions' answers:
import cv2 as cv
import numpy as np

nb_components, output, stats, centroids = cv.connectedComponentsWithStats(image, connectivity=4)
sizes = stats[:, -1]
max_label = 1
max_size = sizes[1]
for i in range(2, nb_components):
    if sizes[i] > max_size:
        max_label = i
        max_size = sizes[i]
res = np.zeros(output.shape)
res[output == max_label] = 255
It always modifies the rest of the pixels and does not fill all the holes.
I also tried different techniques from different packages (found on other questions). The result is not the one I'm looking for:
I'm looking to modify only the pixels of the holes and not touch anything else.
What should I change in my script to do that?
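One way that touches only the hole pixels (a sketch, not a drop-in fix for the script above; it assumes scipy is available and that the input is a 0/255 uint8 array) is to isolate the largest component, fill its holes with scipy.ndimage.binary_fill_holes, and copy only the newly filled pixels back:

import cv2 as cv
import numpy as np
from scipy import ndimage

def fill_holes_of_largest_component(image):
    # label the 4-connected components; label 0 is the background
    nb_components, output, stats, _ = cv.connectedComponentsWithStats(image, connectivity=4)
    # largest foreground label by area
    max_label = 1 + np.argmax(stats[1:, cv.CC_STAT_AREA])
    largest = output == max_label
    # fill the "lakes" of that component only
    filled = ndimage.binary_fill_holes(largest)
    result = image.copy()
    # only pixels that were holes of the largest component are modified
    result[filled & ~largest] = 255
    return result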
Related
I have the following problem. In the image below, a so-called teardrop-like DNA is shown.
In red, a trace yielded by skimage.morphology.skeletonize is shown.
The bright yellow region is a molecular complex called "Streptavidin", which I need to identify.
In essence, my problem revolves around two steps:
I need to identify the brightest spot in the image. For that, I apply a Gaussian filter to the image and subsequently try a thresholding algorithm (for example, skimage.filters.threshold_yen).
Then, I use skimage.measure.regionprops.centroid to identify the Streptavidin.
However, this approach is prone to errors: since I need to process not only this single image but other images as well, I cannot use a global threshold. For some images Yen thresholding works fine, for others it doesn't yield a result at all. So my question here would be: is there an alternative way to identify the centroid of the Streptavidin complex, given that the various images display different intensity values, i.e. a global approach is difficult to implement?
I then want to delete all trace points (in red) that lie within the Streptavidin complex. Here, I wanted to use a boolean mask representing the Streptavidin complex and then tell the program to delete all points that lie within the complex.
But again, this is very dependent on the actual boolean mask I get from thresholding, so I would like to think of another way to do it.
Overall, here is my code for reference:
# (snippet from a class method; these are the imports it relies on)
import copy
from skimage import filters, measure, morphology

img_gaussian = filters.gaussian(mol_filtered, sigma=1.)

# Get threshold (use Yen threshold)
if self.anal_pars['strep_thresh'] is None:
    thresh = filters.threshold_yen(img_gaussian)
    # If Yen thresholding fails:
    if thresh < 0.8:
        thresh = filters.threshold_minimum(img_gaussian)
else:
    thresh = self.anal_pars['strep_thresh']

# Split the image according to the selected threshold
img_bw = copy.deepcopy(img_gaussian)
img_bw[img_bw < thresh] = 0
img_bw[img_bw != 0] = 1

# Label the regions that lie above the threshold
img_labelled = morphology.label(img_bw)

# Check if the thresholding yielded more than one area:
if img_labelled.max() != 1:
    # Remove possible artefacts (happens for the Yen filter)
    img_bw = morphology.remove_small_objects(img_bw.astype(bool))
    img_labelled = morphology.label(img_bw)  # relabel image

for region in measure.regionprops(img_labelled):
    strep_index = region.centroid
    mol_pars['strep_index'] = strep_index

# Add the streptavidin area to the mol_pars dictionary
img_strep = copy.deepcopy(img_bw)
mol_pars['img_strep'] = img_strep
This seems quite messy and inefficient; surely there is a faster way to do it?
Thanks for your help
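One possible per-image alternative (just a sketch, not the asker's pipeline; the 0.5 fraction, the function name and the trace-filtering line are assumptions) is to threshold each image at a fraction of its own intensity range and keep the region with the highest mean intensity:

import numpy as np
from skimage import filters, measure, morphology

def find_strep_centroid(mol_filtered, fraction=0.5):
    img_gaussian = filters.gaussian(mol_filtered, sigma=1.)
    # relative threshold: scales with each image's own intensity range,
    # so no global value is needed
    lo, hi = img_gaussian.min(), img_gaussian.max()
    img_bw = img_gaussian > lo + fraction * (hi - lo)
    img_bw = morphology.remove_small_objects(img_bw)
    img_labelled = measure.label(img_bw)
    regions = measure.regionprops(img_labelled, intensity_image=img_gaussian)
    # keep the brightest region, assumed to be the Streptavidin complex
    brightest = max(regions, key=lambda r: r.mean_intensity)
    strep_mask = img_labelled == brightest.label
    return brightest.centroid, strep_mask

The returned boolean mask can then be used to drop skeleton points, e.g. trace[~strep_mask[trace[:, 0], trace[:, 1]]] if trace is an (N, 2) array of row/column coordinates.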
I try to detect some dots on some dice for a hobby project.
The dice are nicely cut out and there are more or less good photos of them.
The photos are then greyscaled, denoised with a median filter, and turned into an np.array so that cv2 likes them.
Now I try to count the pips and... well... SimpleBlobDetector either finds blobs everywhere, depending on the exact parameter settings, or no blobs at all, and never the most obvious pips on top. Especially if I set "circularity" above 0.7 or activate "inertia", it finds nothing at all.
What's wrong here? Do I have to preprocess my picture further, or is it in the parameters below?
I tried replacing the median filter with a Gaussian blur, and I also tried posterizing the image to get fairly large areas with exactly the same greyscale value. Nothing helps.
I have tried many variants by now; the current one results in the picture below.
import cv2
import numpy as np
from PIL import Image

blob_params = cv2.SimpleBlobDetector_Params()
blob_params.minDistBetweenBlobs = 1
blob_params.filterByCircularity = True
blob_params.minCircularity = 0.5
blob_params.minThreshold = 1
blob_params.thresholdStep = 1
blob_params.maxThreshold = 255
#blob_params.filterByInertia = False
#blob_params.minInertiaRatio = 0.2
blob_params.minArea = 6
blob_params.maxArea = 10000
detector = cv2.SimpleBlobDetector_create(blob_params)
keypoints = detector.detect(die_np)  # detect blobs... just it doesn't work
image = cv2.drawKeypoints(die_np,
                          keypoints,
                          np.array([]),
                          (255, 255, 0),
                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
im = Image.fromarray(image)
im
Okay, and the solution is:
Some parameters are already set before you even start. Those defaults are optimized for finding dark, round blobs on a light background - not the case in my image! This took me by surprise: after printing out every single parameter and correcting everything that did not fit my use case, I was finally able to use this function. Now it easily finds the dice pips on several different kinds of dice I've tried.
This would have been easier if the OpenCV parameter list had been a little more accessible, as a dict for example, or better commented.
I'll leave my code here - it contains a comprehensive list of the parameters this function uses, remarks in the comments on how to use some of them, and a wrapper function for calling the detector. I also leave the link to the SimpleBlobDetector documentation here, as it is somewhat difficult to find on the net.
Coming from a C/C++ background, this function is totally not Pythonic. It does not accept every image format, as would be normal in a Python context; instead it wants a very specific data type. A Python function would at least tell you what it wants, but this one keeps quiet in order not to spoil the riddle, and then goes on to throw an exception without explanation.
So I wrote a function that at least accepts numpy arrays in color and in black/white format, as well as PIL.Image objects. That opens up the usability of this function a little bit.
OpenCV, if you read this: normally in Python a readable text is provided for each function in '''triple quotes'''. Emphasis on "readable".
import cv2
import numpy as np
from PIL import Image
def blobize(im, blob_params):
    '''
    Takes an image and tries to find blobs on this image.

    Parameters
    ----------
    im : nd.array, single colored. In case of several colors, the first
         color channel is taken. Alternatively you can provide a
         PIL.Image, which is then converted to "L" and used directly.
    blob_params : a cv2.SimpleBlobDetector_Params() parameter list

    Returns
    -------
    blobbed_im : a greyscale image with the found blobs circled in red
    keypoints : an OpenCV keypoint list.
    '''
    if Image.isImageType(im):
        im = np.array(im.convert("L"))
    if isinstance(im, np.ndarray):
        if (len(im.shape) >= 3
                and im.shape[2] > 1):
            im = im[:, :, 0]
    detector = cv2.SimpleBlobDetector_create(blob_params)
    try:
        keypoints = detector.detect(im)
    except cv2.error:
        keypoints = None
    if keypoints:
        blobbed_im = cv2.drawKeypoints(im, keypoints, np.array([]),
                                       (255, 0, 0),
                                       cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    else:
        blobbed_im = im
    return blobbed_im, keypoints
blob_params = cv2.SimpleBlobDetector_Params()
# Images are internally converted to many binary b/w layers. Then blobColor = 0
# searches for dark blobs and blobColor = 255 for bright blobs. Or set the filter
# to False, then it finds both bright and dark blobs.
blob_params.filterByColor = False
blob_params.blobColor = 0
# Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
blob_params.filterByArea = True
blob_params.minArea = 3.    # highly dependent on image resolution and dice size
blob_params.maxArea = 400.  # float! Highly dependent on image resolution.
blob_params.filterByCircularity = True
blob_params.minCircularity = 0.  # 0.7 could also be rectangular; 1 is round. Left at 0 because the pips are not always round, e.g. when they are damaged.
blob_params.maxCircularity = 3.4028234663852886e+38 # infinity.
blob_params.filterByConvexity = False
blob_params.minConvexity = 0.
blob_params.maxConvexity = 3.4028234663852886e+38
blob_params.filterByInertia = True # a second way to find round blobs.
blob_params.minInertiaRatio = 0.55  # 1 is round, 0 is anything
blob_params.maxInertiaRatio = 3.4028234663852886e+38 # infinity again
blob_params.minThreshold = 0 # from where to start filtering the image
blob_params.maxThreshold = 255.0 # where to end filtering the image
blob_params.thresholdStep = 5 # steps to go through
blob_params.minDistBetweenBlobs = 3.0  # avoid overlapping blobs. Must be bigger than 0. Highly dependent on image resolution!
blob_params.minRepeatability = 2  # if the same blob center is found at different threshold values
# (within minDistBetweenBlobs), a counter for that blob is increased. If the counter for a blob
# reaches minRepeatability, it is a stable blob and produces a KeyPoint; otherwise it is discarded.
res, keypoints = blobize(im2, blob_params)
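For reference, im2 above can be a PIL image or a numpy array; a minimal usage sketch (the filename is a placeholder, not from the original post):

from PIL import Image

im2 = Image.open("dice.jpg")
res, keypoints = blobize(im2, blob_params)
print(len(keypoints or []), "blobs found")
Image.fromarray(res).show()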
I use OpenCV and Python and I want to remove the small connected objects from my image.
I have the following binary image as input:
The image is the result of this code:
dilation = cv2.dilate(dst,kernel,iterations = 2)
erosion = cv2.erode(dilation,kernel,iterations = 3)
I want to remove the objects highlighted in red:
How can I achieve this using OpenCV?
How about with connectedComponentsWithStats (doc):
import cv2
import numpy as np

# find all of the connected components (white blobs in your image).
# im_with_separated_blobs is an image where each detected blob has a different
# pixel value, ranging from 1 to nb_blobs - 1.
nb_blobs, im_with_separated_blobs, stats, _ = cv2.connectedComponentsWithStats(im)
# stats (and the silenced output centroids) give some information about the blobs.
# See the docs for more information.
# Here, we're interested only in the size of the blobs, contained in the last column of stats.
sizes = stats[:, -1]
# The following lines take out the background, which is also considered a component,
# which for most applications is not the expected output.
# You may also keep the results as they are by commenting out the following lines;
# you'll then have to update the range in the for loop below.
sizes = sizes[1:]
nb_blobs -= 1

# minimum size of particles we want to keep (number of pixels).
# Here it's a fixed value, but you can set it as you want, e.g. the mean of the sizes.
min_size = 150

# output image with only the kept components
im_result = np.zeros_like(im_with_separated_blobs)
# for every component in the image, keep it only if it's above min_size
for blob in range(nb_blobs):
    if sizes[blob] >= min_size:
        # see description of im_with_separated_blobs above
        im_result[im_with_separated_blobs == blob + 1] = 255
Output:
In order to remove objects automatically you need to locate them in the image.
From the image you provided I see nothing that distinguishes the 7 highlighted items from others.
You have to tell your computer how to recognize objects you don't want. If they look the same, this is not possible.
If you have multiple images where the objects always look like that you could use template matching techniques.
Also the closing operation doesn't make much sense to me.
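If you do go the template matching route, here is a rough sketch with cv2.matchTemplate (the filenames and the 0.8 score cut-off are assumptions, not values from the question):

import cv2
import numpy as np

scene = cv2.imread("binary_image.png", 0)
template = cv2.imread("object_template.png", 0)  # a crop of one unwanted object
h, w = template.shape

# normalized cross-correlation; the result is high where the template fits well
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)
for x, y in zip(xs, ys):
    # blank out every location that matches the template well
    scene[y:y + h, x:x + w] = 0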
For isolated or unconnected blobs: try this (you can set noise_removal_threshold to whatever you like, e.g. relative to the largest contour, or a nominal value like 100 or 25).
mask = np.zeros_like(img)
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # the contours the loop below expects
for contour in contours:
    area = cv2.contourArea(contour)
    if area > noise_removal_threshold:
        cv2.fillPoly(mask, [contour], 255)
Removing small connected components by area is called area opening. OpenCV does not have this as a function, but it can be implemented as shown in the other answers. Most other image processing packages do have an area opening function.
For example using scikit-image:
import skimage
import imageio.v3 as iio
img = iio.imread('cQMZm.png')[:,:,0]
out = skimage.morphology.area_opening(img, area_threshold=150, connectivity=2)
For example using DIPlib:
import diplib as dip
out = dip.AreaOpening(img, filterSize=150, connectivity=2)
PS: The DIPlib implementation is noticeably faster. Disclaimer: I'm an author of DIPlib.
I have a problem using the Hu moments for shape recognition. The goal is to be able to recognize the two white circles and the two white squares on the left in the picture.
http://i.stack.imgur.com/wVzYa.jpg
I tried using the cv2.approxPolyDP method but it doesn't quite work when there is a rotation. For the white circles I used the cv2.HoughCircles method and it works pretty well. However, I really need to use the Hu moments, because they seem to be a better method.
I have this code below:
import cv2
import numpy as np

nomeimg = "coded_target.jpg"
img = cv2.imread(nomeimg)
gray = cv2.imread(nomeimg, 0)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (4, 4))
imgbnbin = thresh
imgbnbin = cv2.dilate(imgbnbin, element)

# find contours
contours, hierarchy = cv2.findContours(imgbnbin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# eliminate small contours
Areacontours = list()
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 90:
        Areacontours.append(cnt)
contours = Areacontours

print('found objects')
print(len(contours))

print("humoments")
mom = cv2.moments(contours[0])
Humoments = cv2.HuMoments(mom)
# log transform to bring the invariants to a comparable scale
Humoments2 = -np.sign(Humoments) * np.log10(np.abs(Humoments))
print(Humoments2)
It returns 7 numbers, which are the Hu invariants. I tried rotating the picture and I see that only the last two change. It also says that it only found 1 object when there are obviously more than that. Is that normal?
I thought of using templates for shape identification purposes but I don't know how to do it: I believe I should exploit the Hu moments of the templates and see where they fit, but I'm not sure how to achieve it.
I appreciate the help.
You can create a template image of the squares and use a template matching technique in order to detect them in the image.
You can also detect the contour of the template image and use the function cv2.matchShapes. However, this function compares two shapes (contours or images) at a time, so I guess you will have to make a window the same size as your template and run it through your original image in order to detect which part is the best match (the minimum value of matchShapes).
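A rough sketch of the contour-based matchShapes variant (the filenames, thresholds and the 0.1 cut-off are assumptions; matchShapes is built on the Hu moments, which is why it tolerates rotation):

import cv2

template = cv2.imread("square_template.jpg", 0)
scene = cv2.imread("coded_target.jpg", 0)

_, t_bw = cv2.threshold(template, 127, 255, cv2.THRESH_BINARY_INV)
_, s_bw = cv2.threshold(scene, 127, 255, cv2.THRESH_BINARY_INV)

t_contours, _ = cv2.findContours(t_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
s_contours, _ = cv2.findContours(s_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
template_contour = max(t_contours, key=cv2.contourArea)

matches = []
for cnt in s_contours:
    # lower score means a closer shape match
    score = cv2.matchShapes(template_contour, cnt, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:
        matches.append(cnt)
print(len(matches), "contours match the template shape")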
What would be the approach to trim an image that's been input using a scanner and therefore has a large white/black area?
The entropy solution seems problematic and computationally intensive. Why not edge detect?
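A bare-bones sketch of that edge-detect idea (not from any answer here; the Canny thresholds and the filename are arbitrary assumptions): find the edges, then crop to the bounding box of the edge pixels.

import cv2
import numpy as np
from PIL import Image

img = np.array(Image.open("scan.png").convert("L"))
edges = cv2.Canny(img, 50, 150)   # a blank scanner border produces few edges
ys, xs = np.nonzero(edges)        # coordinates of all edge pixels
if len(xs):
    box = (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)
    Image.fromarray(img).crop(box).show()  # box is (left, upper, right, lower)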
I just wrote this Python code to solve the same problem for myself. My background was a dirty whitish color, so the criteria I used were darkness and color. I simplified these criteria by just taking the smallest of the R, G or B values for each pixel, so that black or saturated red both stood out the same. I also used the average of a number of the darkest pixels for each row or column. Then I started at each edge and worked my way in until I crossed a threshold.
Here is my code:
#these values set how sensitive the bounding box detection is
threshold = 200   #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50  #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

import numpy as np
from PIL import Image

def find_line(vals):
    #implement edge detection once, use many times
    for i, tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness])) / len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i  #i is left over from failed threshold finding, it is the bounds

def getbox(img):
    #get the bounding box of the interesting part of a PIL image object
    #this is done by getting the darkest of the R, G or B value of each pixel
    #and finding where the edge gets dark/colored enough
    #returns a tuple of (left, upper, right, lower)
    width, height = img.size  #for making a 2d array
    retval = [0, 0, width, height]  #values will be disposed of, but this is a black image's box
    pixels = list(img.getdata())
    vals = []  #store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel))  #the darkest of the R, G or B values
    #make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])
    #start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)
    #next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)
    #left edge, same as before but rotate the data so left edge is top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft, 0, 1)
    retval[0] = find_line(forleft)
    #and right edge is bottom edge of rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright, 0, 1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)
    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print("error, bounding box is not legit")
        return None
    return tuple(retval)

if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print("result is:", box)
    result = image.crop(box)
    result.show()
For starters, here is a similar question. Here is a related question. And another related question.
Here is just one idea; there are certainly other approaches. I would select an arbitrary crop edge, measure the entropy* on either side of the line, and then re-select the crop line (probably using something like a bisection method) until the entropy of the cropped-out portion falls below a defined threshold. I think you may need to resort to a brute-force root-finding method, as you will not have a good indication of when you have cropped too little. Then repeat for the remaining 3 edges.
*I recall discovering that the entropy method in the referenced website was not completely accurate, but I could not find my notes (I'm sure it was in a SO post, however.)
Edit:
Other criteria for the "emptiness" of an image portion (besides entropy) might be the contrast ratio, or the contrast ratio of an edge-detect result.
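As a loose illustration of the entropy criterion described above (a sketch only; the per-strip histogram entropy, the 0.5-bit threshold, the step size and the filename are my assumptions, not the method from the linked site):

import numpy as np
from PIL import Image

def strip_entropy(gray):
    # Shannon entropy (in bits) of the grey-level histogram of a strip
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def left_crop(gray, threshold=0.5, step=5):
    # march the crop line in from the left; stop once the strip that would be
    # cropped away no longer looks "empty" (entropy above the threshold)
    for x in range(step, gray.shape[1], step):
        if strip_entropy(gray[:, :x]) > threshold:
            return max(x - step, 0)
    return 0

img = np.array(Image.open("scan.png").convert("L"))
print("left crop column:", left_crop(img))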