I'm trying the following to get the mask out of this image, but unfortunately it doesn't work.
import numpy as np
import skimage.color
import skimage.filters
import skimage.io
# get filename, sigma, and threshold value from command line
filename = 'pathToImage'
# read and display the original image
image = skimage.io.imread(fname=filename)
skimage.io.imshow(image)
# blur and grayscale before thresholding
blur = skimage.color.rgb2gray(image)
blur = skimage.filters.gaussian(blur, sigma=2)
# perform inverse binary thresholding
mask = blur < 0.8
# use the mask to select the "interesting" part of the image
sel = np.ones_like(image)
sel[mask] = image[mask]
# display the result
skimage.io.imshow(sel)
How can I obtain the mask?
Is there a general approach that would work for this image as well, without custom fine-tuning and changing parameters?
Apply high contrast (maximum possible value)
Convert to a black & white image using a high threshold (I've used 250)
Apply a min filter (value = 8)
Apply a max filter (value = 8); see the sketch below
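For reference, a rough sketch of those four steps with scipy.ndimage and skimage (the input path is a placeholder, and Pillow's MinFilter/MaxFilter would work equally well):
import numpy as np
import scipy.ndimage as ndi
import skimage.exposure
import skimage.io
# read as grayscale and stretch to the full 8-bit range (maximum contrast)
img = skimage.io.imread('pathToImage', as_gray=True)
img = skimage.exposure.rescale_intensity(img, out_range=(0, 255)).astype(np.uint8)
# high-threshold binarization (250), then min and max filters (size 8)
bw = np.where(img >= 250, 255, 0).astype(np.uint8)
bw = ndi.minimum_filter(bw, size=8)  # removes small bright specks
bw = ndi.maximum_filter(bw, size=8)  # restores the size of surviving regions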
Here is how you can get a rough mask using only the skimage library methods:
import numpy as np
from skimage.io import imread, imsave
from skimage.feature import canny
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.morphology import dilation, erosion, selem  # note: selem was renamed to footprints in skimage 0.19+
from skimage.measure import find_contours
from skimage.draw import polygon
def get_mask(img):
    # detect edges, then thicken them until the outline closes into one contour
    kernel = selem.rectangle(7, 6)
    dilate = dilation(canny(rgb2gray(img), 0), kernel)
    dilate = dilation(dilate, kernel)
    dilate = dilation(dilate, kernel)
    erode = erosion(dilate, kernel)
    # fill the first contour to get a solid mask, then smooth its border
    mask = np.zeros_like(erode)
    rr, cc = polygon(*find_contours(erode)[0].T)
    mask[rr, cc] = 1
    return gaussian(mask, 7) > 0.74
def save_img_masked(file):
    img = imread(file)[..., :3]  # drop the alpha channel if present
    mask = get_mask(img)
    result = np.zeros_like(img)
    result[mask] = img[mask]
    imsave("masked_" + file, result)
save_img_masked('belt.png')
save_img_masked('bottle.jpg')
Resulting masked_belt.png:
Resulting masked_bottle.jpg:
One approach uses the fact that the background changes color only very slowly. Here I apply the gradient magnitude to each of the channels and compute the norm of the result, giving me an image highlighting the quicker changes in color. The watershed of this (with sufficient tolerance) should have one or more regions covering the background and touching the image edge. After identifying those regions, and doing a bit of cleanup we get these results (red line is the edge of the mask, overlaid on the input image):
I did have to adjust the tolerance: with a lower tolerance in the first case, more of the shadow is seen as part of the object. It should be possible to set the tolerance based on the statistics of the gradient image, but I have not tried that.
There are no other parameters to tweak here; the minimum object area, 300, is quite safe. An alternative would be to keep only the single largest object (see the sketch after the code below).
This is the code, using DIPlib (disclaimer: I'm an author). out is the mask image, not the outline as displayed above.
import diplib as dip
import numpy as np
# Case 1:
img = dip.ImageRead('Pa9DO.png')
img = img[362:915, 45:877] # cut out actual image
img = img(slice(0,2)) # remove alpha channel
tol = 7
# Case 2:
#img = dip.ImageRead('jTnVr.jpg')
#tol = 1
# Compute gradient
gm = dip.Norm(dip.GradientMagnitude(img))
# Compute watershed with tolerance
lab = dip.Watershed(gm, connectivity=1, maxDepth=tol, flags={'correct','labels'})
# Identify regions touching the image edge
ll = np.unique(np.concatenate((
    np.unique(lab[:,0]),
    np.unique(lab[:,-1]),
    np.unique(lab[0,:]),
    np.unique(lab[-1,:]))))
# Remove regions touching the image edge
out = dip.Image(lab.Sizes(), dt='BIN')
out.Fill(1)
for l in ll:
    if l != 0: # label zero is for the watershed lines
        out = out - (lab == l)
# Remove watershed lines
out = dip.Opening(out, dip.SE(3, 'rectangular'))
# Remove small regions
out = dip.AreaOpening(out, filterSize=300)
# Display
dip.Overlay(img, dip.Dilation(out, 3) - out).Show()
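An alternative to the AreaOpening step, as mentioned above, would be to keep only the single largest object. A rough sketch using scipy.ndimage on a NumPy view of out (this assumes DIPlib's NumPy buffer interface):
import scipy.ndimage as ndi
# np.asarray gives a view that shares data with the DIPlib image
arr = np.asarray(out)
lbl, n = ndi.label(arr)
if n > 0:
    sizes = ndi.sum(arr, lbl, index=range(1, n + 1))  # pixel count per label
    arr[:] = lbl == (np.argmax(sizes) + 1)            # keep only the largest component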
I have greyscale images with features of interest displayed as grey and white, and background as black.
I am trying to draw polygons around the features of interest.
My problem is that polygons are also drawn where they shouldn't be, e.g. around the edge of the images (see input image). In the code below I have tried to filter out these "false positive" features of interest using Gaussian blur and morphological operations.
import cv2
import matplotlib.pyplot as plt
import numpy as np
from imantics import Polygons, Mask
import imantics as imcs
import skimage
from shapely.geometry import Polygon as Pollygon
import matplotlib.image as mpimg
import PIL
mask = cv2.imread('mask.jpg', 64)  # 64 = cv2.IMREAD_REDUCED_GRAYSCALE_8
print(mask.max())
print(mask.min())
# Apply gaussian blur filter
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
ellipseFootprint = skimage.morphology.footprints.ellipse(1, 1)
squareFootprint = skimage.morphology.footprints.square(8)
maskMorph = mask
for i in range(10):
    maskMorph = skimage.morphology.erosion(maskMorph, footprint=ellipseFootprint, out=None)
    print(i)
for k in range(2):
    maskMorph = skimage.morphology.dilation(maskMorph, footprint=None, out=None)
    print(k)
polygons = Mask(maskMorph).polygons()
print(len(polygons.segmentation))
print(type(polygons))
print(polygons.segmentation)
newPoly = polygons.draw(mask, color=[255, 255, 0], thickness=3)
cv2.imshow("title", newPoly)
cv2.waitKey()
Indeed, I have tried to filter out smaller features/polygons and "false positive" features of interest using a Gaussian blur filter and morphological operations, but I am struggling to get rid of all of them (see output image).
My thinking is therefore to add a minimum (size) threshold for the features/polygons in the image to be kept.
I have started on the following, but am not sure how to progress.
lengthPolySeg = len(polygons.segmentation)
for l in range(lengthPolySeg-1):
    if len(polygons.segmentation[l]) < 50:
Any advise would be most appreciated.
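One possible way to finish that filtering step, as a sketch: it assumes polygons.segmentation is a list of flattened [x0, y0, x1, y1, ...] coordinate lists (as imantics returns), and the threshold values are placeholders to tune.
minVertices = 50   # placeholder threshold on the number of coordinates
minArea = 100.0    # placeholder threshold on polygon area, in pixels
keptSegments = []
for seg in polygons.segmentation:
    points = np.array(seg).reshape(-1, 2)
    # a valid polygon needs at least 3 points; Pollygon is the shapely
    # Polygon already imported in the code above
    if len(seg) >= minVertices and len(points) >= 3 and Pollygon(points).area >= minArea:
        keptSegments.append(seg)
print(len(keptSegments), 'of', len(polygons.segmentation), 'segments kept')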
So far I have been able to perform a median blur and edge detection. Now I want to further remove noise from the image. In MATLAB, the regionprops function was used to remove all white regions with a total pixel area less than the mean pixel area value. How can I implement this in Python?
import matplotlib.image as mpimg
import numpy as np
import cv2
import os
import math
from collections import defaultdict
from matplotlib import pyplot as plt
import imutils
#import generalized_hough
# load the input image first (the file name here is a placeholder)
image = cv2.imread('farms.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
print(gray.shape)
blur = cv2.bilateralFilter(gray,9,75,75)
median = cv2.medianBlur(gray,5)
# display input and output image
titles = ["bilateral Smoothing","median bulr"]
images = [ blur, median]
plt.figure(figsize=(20, 20))
for i in range(2):
    plt.subplot(1,2,i+1)
    plt.imshow(images[i])
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
sobelX = cv2.Sobel(median, cv2.CV_64F, 1, 0)
sobelY = cv2.Sobel(median, cv2.CV_64F, 0, 1)
sobelX = np.uint8(np.absolute(sobelX))
sobelY = np.uint8(np.absolute(sobelY))
SobelCombined = cv2.bitwise_or(sobelX,sobelY)
cv2.imshow('img', SobelCombined)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is MATLAB code that performs the same task.
close all
%upload image of farm
figure,
farm = imread('small_farms.JPG');%change this to the file path of image
imshow(farm);%this shows the original image
%convert the image to grayscale for 2D manipulation
gfarm = rgb2gray(farm);
figure,
imshow(gfarm);%show grayscaled image
%median filters take a m*n area around a coordinate and
%find the median pixel value and set that coordinate to that
%pixel value. It's a method of removing noise or details in an
%image. may want to tune dimensions of filter.
A = medfilt2(gfarm,[4 4]);
figure,
imshow(A);
%perform a logarithmic edge detection filter,
%this picks out the edges of the image, log setting
%was found to work best, although 'Sobel' can also be tried
B = edge(A,'log');
%show results of the edge filter
figure,
imshow(B,[]);
%find the areas of the lines made
areas = regionprops(B,'Area');
%find the mean and one standard deviation
men = mean([areas.Area])+0*std([areas.Area]);
%find max pixel area
big = max([areas.Area]);
%remove regions that are too small
C = bwpropfilt(B,'Area',[men big]);
%perform a dilation on the remaining pixels, this
%helps fill in gaps. The size and shape of the dilation
%can be tuned below.
SE = strel('square',4);
C = imdilate(C,SE);
areas2 = regionprops(C,'Area');
%place white border around image to find areas of farms
%that go off the picture
[h,w] = size(C);
C(1,:) = 1;
C(:,1) = 1;
C(h,:) = 1;
C(:,w) = 1;
C = C<1;
%fill in holes
C = imfill(C,'holes');
%show final processed image
figure,imshow(C);
%the section below is for display purpose
%it creates the boundaries of the image and displays them
%in a rainbow fashion
figure,
[B,L,n,A] = bwboundaries(C,'noholes');
imshow(label2rgb(L, @jet, [.5 .5 .5]))
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'w', 'LineWidth', 2)
end
%The section below prints out the areas of each found
%region by pixel values. These values need to be scaled
%by the real measurements of the images to get relevant
%metrics
centers = regionprops(C,'Centroid','Area');
for k=1:length(centers)
    if(centers(k).Area > mean([centers.Area])-std([areas.Area]))
        text(centers(k).Centroid(1),centers(k).Centroid(2),string(centers(k).Area));
    end
end
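For the Python side of the question, a rough sketch of the MATLAB area-filtering step (regionprops plus bwpropfilt) using skimage; it assumes SobelCombined from the code above has been binarized first:
import numpy as np
from skimage import measure, morphology
binary = SobelCombined > 0  # binarize the combined Sobel edges
# label connected regions and collect their areas (MATLAB's regionprops)
labels = measure.label(binary, connectivity=2)
areas = np.array([r.area for r in measure.regionprops(labels)])
# drop regions smaller than the mean area, like bwpropfilt(B,'Area',[men big])
clean = morphology.remove_small_objects(binary, min_size=areas.mean())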
I need help with image segmentation. I have an MRI image of a brain with a tumor. I need to remove the cranium (skull) from the MRI and then segment only the tumor object. How can I do that in Python with image processing? I have tried making contours, but I don't know how to find and remove the largest contour to get only the brain without the skull.
Thanks a lot.
import numpy as np
from sklearn.cluster import KMeans
from skimage import morphology

def get_brain(img):
    row_size = img.shape[0]
    col_size = img.shape[1]
    # normalize the image to zero mean, unit variance
    mean = np.mean(img)
    std = np.std(img)
    img = img - mean
    img = img / std
    # use the central region to estimate the intensity clusters
    middle = img[int(col_size / 5):int(col_size / 5 * 4), int(row_size / 5):int(row_size / 5 * 4)]
    mean = np.mean(middle)
    max = np.max(img)
    min = np.min(img)
    img[img == max] = mean
    img[img == min] = mean
    kmeans = KMeans(n_clusters=2).fit(np.reshape(middle, [np.prod(middle.shape), 1]))
    centers = sorted(kmeans.cluster_centers_.flatten())
    threshold = np.mean(centers)
    thresh_img = np.where(img < threshold, 1.0, 0.0)  # threshold the image
    eroded = morphology.erosion(thresh_img, np.ones([3, 3]))
    dilation = morphology.dilation(eroded, np.ones([5, 5]))
    return dilation  # return the brain mask
These images are similar to the ones I'm looking at:
Thanks for answers.
Preliminaries
Some preliminary code:
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
from skimage.morphology import extrema
from skimage.morphology import watershed as skwater
def ShowImage(title,img,ctype):
    plt.figure(figsize=(10, 10))
    if ctype=='bgr':
        b,g,r = cv2.split(img)       # get b,g,r
        rgb_img = cv2.merge([r,g,b]) # switch it to rgb
        plt.imshow(rgb_img)
    elif ctype=='hsv':
        rgb = cv2.cvtColor(img,cv2.COLOR_HSV2RGB)
        plt.imshow(rgb)
    elif ctype=='gray':
        plt.imshow(img,cmap='gray')
    elif ctype=='rgb':
        plt.imshow(img)
    else:
        raise Exception("Unknown colour type")
    plt.axis('off')
    plt.title(title)
    plt.show()
For reference, here's one of the brain+skulls you linked to:
#Read in image
img = cv2.imread('brain.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ShowImage('Brain with Skull',gray,'gray')
Extracting a Mask
If the pixels in the image can be classified into two different intensity classes, that is, if they have a bimodal histogram, then Otsu's method can be used to threshold them into a binary mask. Let's check that assumption.
#Make a histogram of the intensities in the grayscale image
plt.hist(gray.ravel(),256)
plt.show()
Okay, the data is nicely bimodal. Let's apply the threshold and see how we do.
#Threshold the image to binary using Otsu's method
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
ShowImage('Applying Otsu',thresh,'gray')
Things are easier to see if we overlay our mask onto the original image:
colormask = np.zeros(img.shape, dtype=np.uint8)
colormask[thresh!=0] = np.array((0,0,255))
blended = cv2.addWeighted(img,0.7,colormask,0.1,0)
ShowImage('Blended', blended, 'bgr')
Extracting the Brain
The overlap of the brain (shown in red) with the mask is so perfect that we'll stop right here. To extract the brain, let's find the connected components and keep the largest one, which will be the brain.
ret, markers = cv2.connectedComponents(thresh)
#Get the area taken by each component. Label 0 is the background, so start at 1.
marker_area = [np.sum(markers==m) for m in range(1, np.max(markers)+1)]
#Get label of largest component by area
largest_component = np.argmax(marker_area)+1 #Add 1 since labels start at 1
#Get pixels which correspond to the brain
brain_mask = markers==largest_component
brain_out = img.copy()
#In a copy of the original image, clear those pixels that don't correspond to the brain
brain_out[brain_mask==False] = (0,0,0)
ShowImage('Connected Components',brain_out,'rgb')
Considering the Second Brain
Running this again with your second image produces a mask with many holes:
We can close many of these holes using a closing transformation:
brain_mask = np.uint8(brain_mask)
kernel = np.ones((8,8),np.uint8)
closing = cv2.morphologyEx(brain_mask, cv2.MORPH_CLOSE, kernel)
ShowImage('Closing', closing, 'gray')
We can now extract the brain:
brain_out = img.copy()
#In a copy of the original image, clear those pixels that don't correspond to the brain
brain_out[closing==False] = (0,0,0)
ShowImage('Connected Components',brain_out,'rgb')
If you need to cite this for some reason:
Richard Barnes. (2018). Using Otsu's method for skull-brain segmentation (v1.0.1). Zenodo. https://doi.org/10.5281/zenodo.6042312
Have you perhaps tried running python skull_stripping.py? You can modify the parameters, but it normally works well.
There are some new studies using deep learning for skull stripping that I found interesting:
https://github.com/mateuszbuda/brain-segmentation/tree/master/skull-stripping
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 28 17:10:56 2021
@author: K Somasundaram, ka.somasundaram@gmail.com
"""
import numpy as npy
from skimage.filters import threshold_otsu
from skimage import measure
# import image reading module image from matplotlib
import matplotlib.image as img
#import image ploting module pyplot from matplotlib
import matplotlib.pyplot as plt
inim=img.imread('015.bmp')
#Find the dimension of the input image
dimn=inim.shape
print('dim=',dimn)
plt.figure(1)
plt.imshow(inim)
#-----------------------------------------------
# Find a threshold for the image using Otsu method in filters
th=threshold_otsu(inim)
print('Threshold = ',th)
# Binarize using threshold th
binim1=inim>th
plt.figure(2)
plt.imshow(binim1)
#--------------------------------------------------
# Erode the binary image with a structuring element
from skimage.morphology import disk
import skimage.morphology as morph
#Erode it with a radius of 5
eroded_image=morph.erosion(binim1,disk(3))
plt.figure(3)
plt.imshow(eroded_image)
#---------------------------------------------
#------------------------------------------------
# label the binary image
labelimg=measure.label(eroded_image,background=0)
plt.figure(4)
plt.imshow(labelimg)
#--------------------------------------------------
# Find the areas of the connected regions
prop=measure.regionprops(labelimg)
# Find the number of objects in the image
ncount=len(prop)
print ( 'Number of regions=',ncount)
#-----------------------------------------------------
# Find the index of the largest connected component (LCC)
argmax=0
maxarea=0
#Find the largest connected region
for i in range(ncount):
    if(prop[i].area > maxarea):
        maxarea=prop[i].area
        argmax=i
print('max area=',maxarea,'arg max=',argmax)
print('values=',[region.area for region in prop])
# Take only the largest connected region
# Generate a mask of the size of the input image with all zeros
bmask=npy.zeros(inim.shape,dtype=npy.uint8)
# Set the pixels belonging to the LCC to 1
bmask[labelimg == (argmax+1)] =1
plt.figure(5)
plt.imshow(bmask)
#------------------------------------------------
#Dilate the isolated region to recover the pixels lost in erosion
dilated_mask=morph.dilation(bmask,disk(6))
plt.figure(6)
plt.imshow(dilated_mask)
#---------------------------------------
# Extract the brain using the brain mask
brain=inim*dilated_mask
plt.figure(7)
plt.imshow(brain)
Input Image:
I have this image of an eye where I want to get the center of the pupil:
Original Image
I applied adaptive threshold as well as laplacian to the image using this code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# raw strings keep the backslashes in the Windows paths from being read as escapes
img = cv2.imread(r'C:\Users\User\Documents\module4\input\left.jpg',0)
image = cv2.medianBlur(img,5)
th = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY,11,2)
laplacian = cv2.Laplacian(th,cv2.CV_64F)
cv2.imshow('output', laplacian)
cv2.imwrite(r'C:\Users\User\Documents\module4\output\output.jpg', laplacian)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image looks like this: Resulting image by applying adaptive threshold
I want to draw a circle around the smaller inner circle and get its center. I've tried using contours and circular hough transform but it does not correctly detect any circles in the image.
Here is my code for Circular Hough Transform:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg',0)  # raw string for the Windows path
circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(img,(i[0],i[1]),i[2],(255,255,0),2)
    # draw the center of the circle
    cv2.circle(img,(i[0],i[1]),2,(255,0,255),3)
cv2.imshow('detected circles',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
And here is the code for applying contour:
import cv2
import numpy as np
img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg',0)  # raw string for the Windows path
_, contours, hierarchy = cv2.findContours(img, 1, 2)  # OpenCV 3.x return signature
cnt = contours[0]
(x,y),radius = cv2.minEnclosingCircle(cnt)
center = (int(x),int(y))
radius = int(radius)
img = cv2.circle(img,center,radius,(0,255,255),2)
cv2.imshow('contour', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The resulting image of this code looks exactly like the image where I applied the adaptive threshold. I would really appreciate it if anyone could help me solve my problem; I've been stuck on this for a while now. Also, if any of you can suggest a better way to detect the center of the pupil besides this method, I would really appreciate it.
Try applying edge detection instead of thresholding after filtering the original image, and then apply the Hough circle transform.
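A rough sketch of that suggestion (the Canny thresholds and Hough parameters are guesses to tune):
# edge detection on the median-blurred image from the question,
# then the circular Hough transform on the edge map
edges = cv2.Canny(image, 50, 150)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=5, maxRadius=50)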
My thought would be to use the Hough transform like you're doing. But another method might be template matching, like this. Assuming you know the approximate radius of the pupil in the image, you can try to build a template.
import skimage.io
import skimage.draw
import skimage.feature
import numpy as np
import matplotlib.pyplot as plt
img = skimage.io.imread('Wjioe.jpg')
#just use grayscale, but you could make separate template for each r,g,b channel
img = np.mean(img, axis=2)
(M,N) = img.shape
mm = M-20
nn = N-20
template = np.zeros([mm,nn])
## Create template ##
#darkest inner circle (pupil); draw.disk replaces draw.circle, which was removed in skimage 0.19
(rr,cc) = skimage.draw.disk((mm/2, nn/2), 4.5, shape=template.shape)
template[rr,cc] = -2
#iris (circle surrounding pupil)
(rr,cc) = skimage.draw.disk((mm/2, nn/2), 8, shape=template.shape)
template[rr,cc] = -1
#Optional - pupil reflective spot (if centered)
(rr,cc) = skimage.draw.disk((mm/2, nn/2), 1.5, shape=template.shape)
template[rr,cc] = 1
plt.imshow(template)
normccf = skimage.feature.match_template(img, template,pad_input=True)
#center pixel
(i,j) = np.unravel_index( np.argmax(normccf), normccf.shape)
plt.imshow(img)
plt.plot(j,i,'r*')
You're defining a 3-channel color for a grayscale image. Based on my test, it will only read the first value in that tuple. Because the first value of your other colors (in the middle code block) is 255, it draws a full white circle, and because the first value of your last color (in your last code block) is 0, it draws a full black circle, which you can't see.
Just change your color values to a 1 channel color (an int between 0 and 255) and you'll be fine.
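For example, the drawing calls from the Hough snippet above would become (sketch):
# single-channel intensities for a grayscale image
cv2.circle(img,(i[0],i[1]),i[2],255,2)  # outer circle in white
cv2.circle(img,(i[0],i[1]),2,128,3)     # center point in mid-gray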
I have an image and would like to create polygons of the segments of this image using marker-controlled watershed. I wrote the following code, but I can't separate objects that are attached to each other or create the polygons of the objects.
How can I solve those issues? Thanks so much for your help.
import cv2
import numpy as np
import scipy.misc
import scipy.ndimage as snd
# image is read and is converted to a numpy array
img = cv2.imread('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/medicine.jpg')
# image is convereted to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# binary thresholding is done using the threshold
# from Otsu's method
ret1,thresh1 = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# foreground pixels are determined by
# performing erosion
fore_ground = cv2.erode(thresh1,None,iterations = 3)
bgt = cv2.dilate(thresh1,None,iterations = 3)
ret,back_ground = cv2.threshold(bgt,1,100,1)
# marker is determined by adding foreground and background pixels
marker = cv2.add(fore_ground,back_ground)
# converting marker to 32 int
marker32 = np.int32(marker)
cv2.watershed(img,marker32)
# note: scipy.misc.toimage was removed in SciPy 1.2; PIL.Image.fromarray is the modern replacement
res = scipy.misc.toimage(marker32)
res.save('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/res_output.png')
This question seems to be pretty close to your needs, since the example uses the exact same image as yours.
To transform the resulting "dams" into polygons, I suggest using cv2.findContours together with cv2.approxPolyDP on the result image.
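A rough sketch of that suggestion (it assumes the res_output.png saved by the question's code, and the approximation tolerance is a placeholder):
import cv2
res = cv2.imread('res_output.png', cv2.IMREAD_GRAYSCALE)
# binarize the watershed output so the contour finder sees the segments
_, binary = cv2.threshold(res, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# note: OpenCV 3.x returns three values here (image, contours, hierarchy)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polys = [cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True) for c in contours]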