I am trying to remove the black spots from the face in this image using erosion-based methods.
I have implemented:
import skimage.io, skimage.color, skimage.exposure, skimage.morphology
import matplotlib.pyplot
img = skimage.io.imread('blemish.jpeg')
img = skimage.color.rgb2gray(img)
img_inten = skimage.exposure.rescale_intensity(img,in_range=(50,100))
dilation_seed = img_inten.copy()
dilation_seed[1:-1,1:-1] = img_inten.min()
mask = img_inten
eroded_img = skimage.morphology.reconstruction(dilation_seed, mask, method='dilation')
matplotlib.pyplot.imshow(eroded_img,cmap='gray')
My output is always a black image in both cases. What is going wrong here?
rgb2gray outputs the image as a matrix of floats, with values in [0, 1].
So rescale_intensity just outputs a matrix of zeros, since you ask for values between 50 and 100 and there are none in the gray image.
You can fix it like this:
import skimage
from skimage import data, exposure, img_as_float
from skimage.morphology import reconstruction
import matplotlib.pyplot as plt
img = skimage.io.imread('blemish.jpeg')
gray_img = 255*skimage.color.rgb2gray(img) # multiply by 255 to get back to the [0, 255] range
img_inten = exposure.rescale_intensity(gray_img, in_range=(50,100))
dilation_seed = img_inten.copy()
dilation_seed[1:-1,1:-1] = img_inten.min()
mask = img_inten
eroded_img = reconstruction(dilation_seed, mask, method='dilation')
plt.imshow(eroded_img,cmap='gray')
plt.show()
I have greyscale images with features of interest displayed as grey and white, and background as black.
I am trying to draw polygons around the features of interest.
My problem is that polygons are also drawn around unwanted features, e.g. around the edge of the images (input image). In the code below I have tried to filter out these "false positive" features of interest using a Gaussian blur and morphological operations (see code below).
import cv2
import matplotlib.pyplot as plt
import numpy as np
from imantics import Polygons, Mask
import imantics as imcs
import skimage
from shapely.geometry import Polygon as Pollygon
import matplotlib.image as mpimg
import PIL
mask = cv2.imread('mask.jpg', 64)  # 64 == cv2.IMREAD_REDUCED_GRAYSCALE_8 (grayscale at 1/8 size)
print(mask.max())
print(mask.min())
# Apply gaussian blur filter
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
mask = cv2.GaussianBlur(mask,(9,9),0)
ellipseFootprint = skimage.morphology.footprints.ellipse(1, 1)
squareFootprint = skimage.morphology.footprints.square(8)
maskMorph = mask
for i in range(10):
    maskMorph = skimage.morphology.erosion(maskMorph, footprint=ellipseFootprint, out=None)
    print(i)
for k in range(2):
    maskMorph = skimage.morphology.dilation(maskMorph, footprint=None, out=None)
    print(k)
polygons = Mask(maskMorph).polygons()
print(len(polygons.segmentation))
print(type(polygons))
print(polygons.segmentation)
newPoly = polygons.draw(mask, color=[255, 255, 0], thickness=3)
cv2.imshow("title", newPoly)
cv2.waitKey()
Indeed, I have tried to "filter" out smaller features/polygons and "false positive" features of interest using a Gaussian blur filter and morphological operations, but I am struggling to get rid of all of them (see output image).
My thinking is therefore to add a minimum (size) threshold for the features/polygons in the image to be kept.
I have started on the following, but am not sure how to progress.
lengthPolySeg = len(polygons.segmentation)
for l in range(lengthPolySeg-1):
    if len(polygons.segmentation[l]) < 50:
        # ... not sure how to proceed from here
Any advice would be most appreciated.
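A minimal sketch of one way such a size threshold could be applied, assuming polygons.segmentation holds the flat COCO-style [x1, y1, x2, y2, ...] lists that imantics produces, and reusing the shapely Polygon imported above as Pollygon (both cutoff values are placeholders to tune):
# Illustrative thresholds: tune for your images
min_points = 50      # minimum number of stored coordinates (as in the loop above)
min_area = 100.0     # minimum polygon area in pixels (via shapely)
kept = []
for seg in polygons.segmentation:
    # seg is a flat [x1, y1, x2, y2, ...] coordinate list; pair the coordinates up
    points = list(zip(seg[0::2], seg[1::2]))
    if len(seg) >= min_points and Pollygon(points).area >= min_area:
        kept.append(seg)
print(f"kept {len(kept)} of {len(polygons.segmentation)} polygons")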
I'm trying the following to get the mask out of this image, but unfortunately I fail.
import numpy as np
import skimage.color
import skimage.filters
import skimage.io
# get filename, sigma, and threshold value from command line
filename = 'pathToImage'
# read and display the original image
image = skimage.io.imread(fname=filename)
skimage.io.imshow(image)
# blur and grayscale before thresholding
blur = skimage.color.rgb2gray(image)
blur = skimage.filters.gaussian(blur, sigma=2)
# perform inverse binary thresholding
mask = blur < 0.8
# use the mask to select the "interesting" part of the image
sel = np.ones_like(image)
sel[mask] = image[mask]
# display the result
skimage.io.imshow(sel)
How can I obtain the mask?
Is there a general approach that would work for this image as well, without custom fine-tuning and changing parameters?
One approach: apply high contrast (the maximum possible value), convert to a black & white image using a high threshold (I used 250), then apply a min filter (value = 8) followed by a max filter (value = 8).
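A rough sketch of those steps using PIL; the filename, the autocontrast interpretation of "high contrast", and the odd filter size 9 are my assumptions, not part of the original suggestion:
from PIL import Image, ImageFilter, ImageOps

# Hypothetical input path; replace with your image
img = Image.open('bottle.jpg').convert('L')

# Step 1: push the contrast as far as possible (interpreted here as autocontrast)
img = ImageOps.autocontrast(img)

# Step 2: binarize with a high threshold (250)
bw = img.point(lambda p: 255 if p >= 250 else 0)

# Steps 3-4: min filter then max filter (PIL needs an odd size, so 9 instead of 8)
bw = bw.filter(ImageFilter.MinFilter(9))
bw = bw.filter(ImageFilter.MaxFilter(9))

bw.save('mask.png')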
Here is how you can get a rough mask using only the skimage library methods:
import numpy as np
from skimage.io import imread, imsave
from skimage.feature import canny
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.morphology import dilation, erosion, selem  # 'selem' was renamed to 'footprints' in newer scikit-image versions
from skimage.measure import find_contours
from skimage.draw import polygon
def get_mask(img):
    kernel = selem.rectangle(7, 6)
    dilate = dilation(canny(rgb2gray(img), 0), kernel)
    dilate = dilation(dilate, kernel)
    dilate = dilation(dilate, kernel)
    erode = erosion(dilate, kernel)
    mask = np.zeros_like(erode)
    rr, cc = polygon(*find_contours(erode)[0].T)
    mask[rr, cc] = 1
    return gaussian(mask, 7) > 0.74
def save_img_masked(file):
    img = imread(file)[..., :3]
    mask = get_mask(img)
    result = np.zeros_like(img)
    result[mask] = img[mask]
    imsave("masked_" + file, result)
save_img_masked('belt.png')
save_img_masked('bottle.jpg')
Resulting masked_belt.png:
Resulting masked_bottle.jpg:
One approach uses the fact that the background changes color only very slowly. Here I apply the gradient magnitude to each of the channels and compute the norm of the result, giving an image that highlights the quicker changes in color. The watershed of this (with sufficient tolerance) should have one or more regions covering the background and touching the image edge. After identifying those regions and doing a bit of cleanup, we get these results (the red line is the edge of the mask, overlaid on the input image):
I did have to adjust the tolerance: with a lower tolerance in the first case, more of the shadow is seen as part of the object. I think it should be possible to set the tolerance based on statistics of the gradient image, but I have not tried.
There are no other parameters to tweak here; the minimum object area of 300 is quite safe. An alternative would be to keep only the single largest object.
This is the code, using DIPlib (disclaimer: I'm an author). out is the mask image, not the outline as displayed above.
import diplib as dip
import numpy as np
# Case 1:
img = dip.ImageRead('Pa9DO.png')
img = img[362:915, 45:877] # cut out actual image
img = img(slice(0,2)) # remove alpha channel
tol = 7
# Case 2:
#img = dip.ImageRead('jTnVr.jpg')
#tol = 1
# Compute gradient
gm = dip.Norm(dip.GradientMagnitude(img))
# Compute watershed with tolerance
lab = dip.Watershed(gm, connectivity=1, maxDepth=tol, flags={'correct','labels'})
# Identify regions touching the image edge
ll = np.unique(np.concatenate((
    np.unique(lab[:,0]),
    np.unique(lab[:,-1]),
    np.unique(lab[0,:]),
    np.unique(lab[-1,:]))))
# Remove regions touching the image edge
out = dip.Image(lab.Sizes(), dt='BIN')
out.Fill(1)
for l in ll:
    if l != 0:  # label zero is for the watershed lines
        out = out - (lab == l)
# Remove watershed lines
out = dip.Opening(out, dip.SE(3, 'rectangular'))
# Remove small regions
out = dip.AreaOpening(out, filterSize=300)
# Display
dip.Overlay(img, dip.Dilation(out, 3) - out).Show()
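For the "keep only the single largest object" alternative mentioned above, one option would be a labeling step; a sketch using scikit-image on a NumPy mask, not part of the original DIPlib code:
import numpy as np
from skimage.measure import label

def keep_largest(mask):
    # Label connected components and keep only the largest one
    lab = label(mask)
    if lab.max() == 0:
        return mask                # nothing to keep
    sizes = np.bincount(lab.ravel())
    sizes[0] = 0                   # ignore the background label
    return lab == sizes.argmax()

# For example (assuming out converts to a NumPy array):
# largest = keep_largest(np.asarray(out) > 0)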
I have the following image, which is a scanned copy of an old book. I want to remove the background noise (which is a bit reddish) introduced by scanning the old photo.
Update:
After applying OpenCV denoising, following the parameter settings in the OpenCV docs, I get the following output.
Please help me fix this.
The code that I am using:
import numpy as np
import cv2
from matplotlib import pyplot as plt
def display_image_in_actual_size(im_data):
    dpi = 80
    height, width, depth = im_data.shape
    # What size does the figure need to be in inches to fit the image?
    figsize = width / float(dpi), height / float(dpi)
    # Create a figure of the right size with one axes that takes up the full figure
    fig = plt.figure(figsize=figsize)
    ax = fig.add_axes([0, 0, 1, 1])
    # Hide spines, ticks, etc.
    ax.axis('off')
    # Display the image.
    ax.imshow(im_data, cmap='gray')
    plt.show()
img = cv2.imread('scan03.jpg')
dst = cv2.fastNlMeansDenoisingColored(img,None,10,10,7,21)
display_image_in_actual_size(img)
display_image_in_actual_size(dst)
The color of pixels with values near the thresholds will be affected, but whether that matters depends on the task. Here is one solution; you can adjust the thresholds to values that suit your task. You can also remove the median filter, or reduce its kernel size (5), if it affects the text badly. You may be left with some undesired noise, but the text will be readable.
import numpy as np
import matplotlib.pyplot as plt
import cv2
# Read Image
img = cv2.imread('input.jpg')
# BGR --> RGB
RGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# BGR --> Gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Set thresholds
th_white = 210
th_black = 85
# copy original gray
mask_white = gray.copy()
mask_black = gray.copy()
# Thresholding
mask_white[mask_white<th_white] = 0
mask_black[mask_black<th_black] = 0
mask_white[mask_white>=th_white] = 255
mask_black[mask_black>=th_black] = 255
# Median Filtering (you can remove if the text is not readable)
median_white = cv2.medianBlur(mask_white,5)
median_black = cv2.medianBlur(mask_black,5)
# Mask 3 channels
mask_white_3 = np.stack([median_white, median_white, median_white], axis=2)
mask_black_3 = np.stack([median_black, median_black, median_black], axis=2)
# Masking the image(in RGB)
result1 = np.maximum(mask_white_3, RGB)
result2 = np.minimum(mask_black_3, result1)
# Visualize the results
plt.imshow(result2)
plt.axis('off')
plt.show()
The OpenCV library has a couple of denoising functions.
You can find documentation with examples here.
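For reference, a minimal sketch of the two non-local-means denoising functions; the filename and parameter values are placeholders to tune:
import cv2

img = cv2.imread('scan03.jpg')                 # color scan
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Non-local means denoising for color and for grayscale images
den_color = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
den_gray = cv2.fastNlMeansDenoising(gray, None, 10, 7, 21)

cv2.imwrite('denoised_color.jpg', den_color)
cv2.imwrite('denoised_gray.jpg', den_gray)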
Below I have attached two images. I want the first image to be cropped in a heart shape according to the mask image (2nd image).
I searched for solutions, but I was not able to find a simple and easy way to do this. Kindly help me with the solution.
2 images:
Image to be cropped:
Mask image:
Let's start by loading the temple image from sklearn:
import cv2
import matplotlib.pyplot as plt
from sklearn.datasets import load_sample_images
dataset = load_sample_images()
temple = dataset.images[0]
plt.imshow(temple)
Since we need to use the second image as a mask, we must do a binary thresholding operation. This will create a black and white mask, which we can then use to mask the former image.
heart = cv2.imread(r'path_to_im\heart.jpg', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(heart, thresh=180, maxval=255, type=cv2.THRESH_BINARY)
We can now trim the image so its dimensions are compatible with the temple image:
temple_x, temple_y, _ = temple.shape
heart_x, heart_y = mask.shape
x_heart = min(temple_x, heart_x)
x_half_heart = mask.shape[0]//2
heart_mask = mask[x_half_heart-x_heart//2 : x_half_heart+x_heart//2+1, :temple_y]
plt.imshow(heart_mask, cmap='Greys_r')
Now we have to slice the image that we want to mask, to fit the dimensions of the actual mask. Another approach would have been to resize the mask, which is doable, but we'd then end up with a distorted heart image. To apply the mask, we have cv2.bitwise_and:
temple_width_half = temple.shape[1]//2
temple_to_mask = temple[:,temple_width_half-x_half_heart:temple_width_half+x_half_heart]
masked = cv2.bitwise_and(temple_to_mask,temple_to_mask,mask = heart_mask)
plt.imshow(masked)
If you want to instead make the masked (black) region transparent:
tmp = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
_,alpha = cv2.threshold(tmp,0,255,cv2.THRESH_BINARY)
b, g, r = cv2.split(masked)
rgba = [b,g,r, alpha]
masked_tr = cv2.merge(rgba)
plt.axis('off')
plt.imshow(masked_tr)
Since I am on a remote server, cv2.imshow doesn't work for me, so I imported plt instead.
This code does what you are looking for:
import cv2
import matplotlib.pyplot as plt
img_org = cv2.imread('~/temple.jpg')
img_mask = cv2.imread('~/heart.jpg')
##Resizing images
img_org = cv2.resize(img_org, (400,400), interpolation = cv2.INTER_AREA)
img_mask = cv2.resize(img_mask, (400,400), interpolation = cv2.INTER_AREA)
for h in range(len(img_mask)):
    for w in range(len(img_mask)):
        if img_mask[h][w][0] == 0:
            for i in range(3):
                img_org[h][w][i] = 0
        else:
            continue
plt.imshow(img_org)
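As a side note, the nested loop above can be written as a single vectorized NumPy operation; a sketch, assuming the same resized img_org and img_mask:
# Zero out every pixel where the mask's first channel is 0,
# which is what the nested loop does, but in one step
img_org[img_mask[:, :, 0] == 0] = 0
plt.imshow(img_org)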
The requirement is to crop the region of interest from a binary image.
I need a rectangular image from the binary image, with the extra space around the region of interest removed.
For example:
From this original image I want only the region of interest marked with the yellow rectangle.
Note: The yellow rectangle is just for reference and is not present in the image that will be processed.
I tried the following Python code, but it is not giving the required output.
from PIL import Image
from skimage.io import imread
from skimage.morphology import convex_hull_image
import numpy as np
from matplotlib import pyplot as plt
from skimage import io
from skimage.color import rgb2gray
im = imread('binaryImageEdited.png')
plt.imshow(im)
plt.title('input image')
plt.show()
# create a binary image
im1 = 1 - rgb2gray(im)
threshold = 0.8
im1[im1 <= threshold] = 0
im1[im1 > threshold] = 1
chull = convex_hull_image(im1)
plt.imshow(chull)
plt.title('convex hull in the binary image')
plt.show()
imageBox = Image.fromarray((chull*255).astype(np.uint8)).getbbox()
cropped = Image.fromarray(im).crop(imageBox)
cropped.save('L_2d_cropped.png')
plt.imshow(cropped)
plt.show()
Thank you.
Your image is not actually binary on account of two things:
firstly, it has 26 colours, and
secondly it has an (entirely unnecessary) alpha channel.
You can trim it like this:
#!/usr/bin/env python3
from PIL import Image, ImageOps
# Open image and ensure greysale and discard useless alpha channel
im = Image.open("thing.png").convert('L')
# Threshold and invert image as not actually binary
thresh = im.point(lambda p: p < 64 and 255)
# Get bounding box of thresholded image
bbox1 = thresh.getbbox()
crop1 = thresh.crop(bbox1)
# Invert and crop again
crop1n = ImageOps.invert(crop1)
bbox2 = crop1n.getbbox()
crop2 = crop1.crop(bbox2) # You don't actually need this - it's just for debug
# Trim original, unthresholded, uninverted image to the two bounding boxes
result = im.crop(bbox1).crop(bbox2)
result.save('result.png')
I have a similar problem. It would also be helpful if the saved image were 32x32 px.
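A minimal sketch of saving the cropped result at 32x32 px with PIL; result.png is the output of the answer above, and the resampling filter is my choice:
from PIL import Image

# Load the cropped result from the answer above and save a 32x32 px copy
result = Image.open('result.png')
small = result.resize((32, 32), Image.LANCZOS)
small.save('result_32x32.png')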