I have an image and would like to create polygons of the segments of this image using marker-controlled watershed. I wrote the following code, but I can't separate objects that are attached to each other or create the polygons of the objects.
How can I solve these issues? Thanks so much for your help.
import cv2
import numpy as np
import scipy.misc
import scipy.ndimage as snd
# image is read and is converted to a numpy array
img = cv2.imread('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/medicine.jpg')
# image is converted to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# binary thresholding is done using the threshold
# from Otsu's method
ret1,thresh1 = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# foreground pixels are determined by
# performing erosion
fore_ground = cv2.erode(thresh1,None,iterations = 3)
# sure background is determined by performing dilation
bgt = cv2.dilate(thresh1,None,iterations = 3)
# background pixels are set to 100 via inverse binary thresholding (flag 1 = cv2.THRESH_BINARY_INV)
ret,back_ground = cv2.threshold(bgt,1,100,1)
# marker is determined by adding foreground and background pixels
marker = cv2.add(fore_ground,back_ground)
# converting marker to 32 int
marker32 = np.int32(marker)
cv2.watershed(img,marker32)
# note: scipy.misc.toimage was deprecated and removed in newer SciPy releases;
# there, use PIL.Image.fromarray(marker32.astype(np.uint8)) instead
res = scipy.misc.toimage(marker32)
res.save('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/res_output.png')
This question seems to be pretty close to your needs, since the example uses the exact same image as yours.
To transform the resulting "dams" into polygons, I suggest using cv2.findContours together with cv2.approxPolyDP on the result image.
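A minimal sketch of that idea, picking up img and marker32 from the question's code after the cv2.watershed call (the 1% perimeter tolerance for cv2.approxPolyDP is an assumption to tune):
import cv2
import numpy as np
# after cv2.watershed(img, marker32), boundary ("dam") pixels are labelled -1
dams = np.uint8(marker32 == -1) * 255
# note: OpenCV 3.x returns an extra image before (contours, hierarchy)
contours, hierarchy = cv2.findContours(dams, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = []
for cnt in contours:
    eps = 0.01 * cv2.arcLength(cnt, True)  # 1% of the perimeter; tune as needed
    polygons.append(cv2.approxPolyDP(cnt, eps, True))
# draw the simplified polygons on the input for inspection
cv2.polylines(img, polygons, isClosed=True, color=(0, 0, 255), thickness=2)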
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:3]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:3]
result = np.hstack((im1,im2))
imwrite('result.jpg', result)
Original images open directly from the URLs (I'm trying to concatenate the two images into one and keep the background white):
As can be seen, both have no background, but when the two are joined via Python, the resulting background becomes this moss green:
I tried modifying the channel selection:
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:1]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:1]
But the result is black & white, with the background still looking like it was converted from the previous green, even though the PNGs don't have such a background color.
How should I proceed to solve my need?
There is a 4th channel in your images - transparency. You are discarding that channel with [...,:3]. This is a mistake.
If you retain the alpha channel this will work fine:
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')
result = np.hstack((im1,im2))
imwrite('result.png', result)
However, if you try to make a jpg, you will have a problem:
>>> imwrite('test.jpg', result)
OSError: JPEG does not support alpha channel.
This is correct, as JPGs do not do transparency. If you would like to use transparency and also have your output be a JPG, I suggest a priest.
You can replace the transparent pixels by using np.where and looking for places that the alpha channel is 0:
result = np.hstack((im1,im2))
result[np.where(result[...,3] == 0)] = [255, 255, 255, 255]
imwrite('result.png', result)
If you want to improve the image quality, here is a solution (credit: Brondy):
# External libraries used for
# Image IO
from PIL import Image
# Morphological filtering
from skimage.morphology import opening
from skimage.morphology import disk
# Data handling
import numpy as np
# Connected component filtering
import cv2
black = 0
white = 255
threshold = 160
# Open input image in grayscale mode and get its pixels.
img = Image.open("image.jpg").convert("LA")
pixels = np.array(img)[:,:,0]
# Binarize: pixels above the threshold become white, the rest black
pixels[pixels > threshold] = white
pixels[pixels <= threshold] = black
# Morphological opening
blobSize = 1 # Select the maximum radius of the blobs you would like to remove
structureElement = disk(blobSize) # you can define different shapes, here we take a disk shape
# We need to invert the image such that black is background and white foreground to perform the opening
pixels = np.invert(opening(np.invert(pixels), structureElement))
# Create and save new image.
newImg = Image.fromarray(pixels).convert('RGB')
newImg.save("newImage1.PNG")
# Find the connected components (black objects in your image)
# Because the function searches for white connected components on a black background, we need to invert the image
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)
# For every connected component in your image, you can obtain the number of pixels from the stats variable in the last
# column. We remove the first entry from sizes, because this is the entry of the background connected component
sizes = stats[1:,-1]
nb_components -= 1
# Define the minimum size (number of pixels) a component should consist of
minimum_size = 100
# Create a new, all-white image (uint8 so PIL can handle it)
newPixels = np.ones(pixels.shape, dtype=np.uint8)*255
# Iterate over all components in the image, only keep the components larger than minimum size
for i in range(0, nb_components):
    if sizes[i] > minimum_size:
        newPixels[output == i+1] = 0
# Create and save new image.
newImg = Image.fromarray(newPixels).convert('RGB')
newImg.save("new_img.PNG")
If you want to change the background of an image, pixellib is a good solution; it seemed the most reasonable and easiest library to use.
import pixellib
from pixellib.tune_bg import alter_bg
change_bg = alter_bg()
change_bg.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
change_bg.color_bg("sample.png", colors=(255,255,255), output_image_name="colored_bg.png")
This code requires pixellib version 0.6.1 or later.
I'm trying the following to get the mask out of this image, but unfortunately it fails.
import numpy as np
import skimage.color
import skimage.filters
import skimage.io
# path to the input image
filename = 'pathToImage'
# read and display the original image
image = skimage.io.imread(fname=filename)
skimage.io.imshow(image)
# blur and grayscale before thresholding
blur = skimage.color.rgb2gray(image)
blur = skimage.filters.gaussian(blur, sigma=2)
# perform inverse binary thresholding
mask = blur < 0.8
# use the mask to select the "interesting" part of the image
sel = np.ones_like(image)
sel[mask] = image[mask]
# display the result
skimage.io.imshow(sel)
How can I obtain the mask?
Is there a general approach that would work for this image as well, without custom fine-tuning and changing parameters?
Apply high contrast (maximum possible value)
Convert to a black & white image using a high threshold (I've used 250)
Min filter (value = 8)
Max filter (value = 8)
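A rough sketch of those four steps using Pillow; the contrast factor of 10, the threshold of 250, the filenames, and the filter size are assumptions to tune, and Pillow's min/max filters need an odd size, so 9 stands in for the suggested 8:
from PIL import Image, ImageEnhance, ImageFilter
img = Image.open('input.png').convert('L')        # load as grayscale
img = ImageEnhance.Contrast(img).enhance(10.0)    # push contrast toward the maximum
img = img.point(lambda p: 255 if p > 250 else 0)  # black & white via a high threshold
img = img.filter(ImageFilter.MinFilter(9))        # min filter
img = img.filter(ImageFilter.MaxFilter(9))        # max filter
img.save('mask.png')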
Here is how you can get a rough mask using only the skimage library methods:
import numpy as np
from skimage.io import imread, imsave
from skimage.feature import canny
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.morphology import dilation, erosion, selem  # in newer scikit-image, use skimage.morphology.rectangle directly instead of selem
from skimage.measure import find_contours
from skimage.draw import polygon
def get_mask(img):
    kernel = selem.rectangle(7, 6)
    dilate = dilation(canny(rgb2gray(img), 0), kernel)
    dilate = dilation(dilate, kernel)
    dilate = dilation(dilate, kernel)
    erode = erosion(dilate, kernel)
    mask = np.zeros_like(erode)
    rr, cc = polygon(*find_contours(erode)[0].T)
    mask[rr, cc] = 1
    return gaussian(mask, 7) > 0.74

def save_img_masked(file):
    img = imread(file)[..., :3]
    mask = get_mask(img)
    result = np.zeros_like(img)
    result[mask] = img[mask]
    imsave("masked_" + file, result)

save_img_masked('belt.png')
save_img_masked('bottle.jpg')
Resulting masked_belt.png:
Resulting masked_bottle.jpg:
One approach uses the fact that the background changes color only very slowly. Here I apply the gradient magnitude to each of the channels and compute the norm of the result, giving me an image highlighting the quicker changes in color. The watershed of this (with sufficient tolerance) should have one or more regions covering the background and touching the image edge. After identifying those regions, and doing a bit of cleanup we get these results (red line is the edge of the mask, overlaid on the input image):
I did have to adjust the tolerance; with a lower tolerance in the first case, more of the shadow is seen as object. I think it should be possible to set the tolerance based on the statistics of the gradient image, but I have not tried.
There are no other parameters to tweak here; the minimum object area of 300 is quite safe. An alternative would be to keep only the single largest object.
This is the code, using DIPlib (disclaimer: I'm an author). out is the mask image, not the outline as displayed above.
import diplib as dip
import numpy as np
# Case 1:
img = dip.ImageRead('Pa9DO.png')
img = img[362:915, 45:877] # cut out actual image
img = img(slice(0,2)) # remove alpha channel
tol = 7
# Case 2:
#img = dip.ImageRead('jTnVr.jpg')
#tol = 1
# Compute gradient
gm = dip.Norm(dip.GradientMagnitude(img))
# Compute watershed with tolerance
lab = dip.Watershed(gm, connectivity=1, maxDepth=tol, flags={'correct','labels'})
# Identify regions touching the image edge
ll = np.unique(np.concatenate((
np.unique(lab[:,0]),
np.unique(lab[:,-1]),
np.unique(lab[0,:]),
np.unique(lab[-1,:]))))
# Remove regions touching the image edge
out = dip.Image(lab.Sizes(), dt='BIN')
out.Fill(1)
for l in ll:
    if l != 0:  # label zero is for the watershed lines
        out = out - (lab == l)
# Remove watershed lines
out = dip.Opening(out, dip.SE(3, 'rectangular'))
# Remove small regions
out = dip.AreaOpening(out, filterSize=300)
# Display
dip.Overlay(img, dip.Dilation(out, 3) - out).Show()
I have some images and their associated ground-truth outlined objects. For example, this image shows the objects outlined in blue for one of the original images.
Given this image and its original source, I would like to create some masks based on these outlines using OpenCV or skimage.
Using contours I can roughly achieve that, but I have two problems:
1. Why do I get repeated masks? (Please refer to the attached snippet.)
2. How can I overcome the issue of two touching objects?
import cv2
import numpy as np
from scipy import ndimage
from skimage import io
from skimage import measure
import matplotlib.pyplot as plt

image = io.imread('path/to/the/attached/image')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
contours = measure.find_contours(gray, 0.1)
for n, contour in enumerate(contours):
    r_mask = np.zeros_like(gray, dtype='bool')
    r_mask[np.round(contour[:, 0]).astype('int'),
           np.round(contour[:, 1]).astype('int')] = 1
    r_mask = ndimage.binary_fill_holes(r_mask)
    io.imshow(r_mask)
    plt.show()
Thank you
I'm trying to solve a homework problem where I need to get a mask from one image (DAPI) and then apply it to the second image (NPM1) of cells (they are the same cells in the exact same location)
I've been running in circles for about 4 hours trying to get the mask applied using a True/False approach, but it doesn't seem to work. I've tried and failed with a bunch of other approaches, but I'm just pasting the one I thought was most likely to work (I'm super new to coding).
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from skimage.color import rgb2gray
import cv2
#Load the images
image = np.array(Image.open("NOTREATDAPI.jpg"))
image1 = np.array(Image.open("NOTREATNPM1.jpg"))
No_Treat_DAPI = rgb2gray(image)
No_Treat_NPM1 = rgb2gray(image1)
plt.imshow(image)
#Create a mask using the DAPI image
arr = np.array(No_Treat_DAPI)
DAPI_stain = arr[:,0] > 25
plt.imshow(arr)
The DAPI image:
The NPM1 image:
I'm trying to only get the regions of the original image that have an intensity of 25 or greater, so that all of the black space in the image isn't counted towards the mask, as I'm trying to get a histogram of the intensity of the cells in the NPM1 image.
I limited my solution to the use of OpenCV, numpy, and matplotlib.
The general approach is the following:
Load both images as grayscale images, see cv2.imread.
Create a binary mask from the DAPI image using binary thresholding at intensity value 25, see cv2.threshold.
Do some morphological opening to get rid of possible small artifacts, see cv2.morphologyEx and cv2.getStructuringElement.
Calculate the histogram of the NPM1 image, only incorporating the masked pixels, see cv2.calcHist.
Here's the complete code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
# Load images as grayscale
dapi = cv2.imread('images/NOTREATDAPI.jpg', cv2.IMREAD_GRAYSCALE)
npm1 = cv2.imread('images/NOTREATNPM1.jpg', cv2.IMREAD_GRAYSCALE)
# Create a mask using the DAPI image and binary thresholding at 25
_, mask = cv2.threshold(dapi, 25, 255, cv2.THRESH_BINARY)
# Do some morphological opening to get rid of small artifacts
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)))
# Calculate the histogram using the NPM1 image and the obtained binary mask
hist = cv2.calcHist([npm1], [0], mask, [256], [0, 256])
# Show bar plot of calculated histogram
plt.bar(np.arange(256), np.squeeze(hist))
plt.show()
# Show mask image
cv2.imshow('Mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
The mask then looks like this:
And, the histogram might look like this:
Hope that helps!
P.S. Next time, better use the opencv and python tags instead of only using the cv2 tag. You'll reach way more people.
I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
Sure, you can just get connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear (a sketch of this is below). However, if you want to do it right, you would think about why you have the black area in the first place.
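A minimal sketch of that erase-the-components idea, assuming the binarized board image is available as binarized.png and using a hypothetical area cutoff of 5000 pixels:
import cv2
import numpy as np
binary = cv2.imread('binarized.png', cv2.IMREAD_GRAYSCALE)
# connected components are found as white-on-black, so invert first
inverted = cv2.bitwise_not(binary)
n, labels, stats, _ = cv2.connectedComponentsWithStats(inverted, connectivity=8)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 5000:  # erase only the largest blobs
        binary[labels == i] = 255
cv2.imwrite('cleaned.png', binary)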
You did not use adaptive (locally adaptive) thresholding, and this made your output sensitive to shading. Try to avoid getting the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255-img2, dst, 255, ADAPTIVE_THRESH_MEAN_C,
                  THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
In the future, you may want to read about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which is a big deal, since typically there is more background color, which biases the estimation. Looking into SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2

im = cv2.imread('image.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# start from an all-white output canvas
grayout = 255 * np.ones((im.shape[0], im.shape[1], 1), np.uint8)
blur = cv2.GaussianBlur(gray, (5, 5), 1)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        # count the black pixels inside the bounding box
        cntd = 0
        for i in range(x, x + w):
            for j in range(y, y + h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h * w)
        # copy only sparse (text-like) regions to the output
        if density < 0.5:
            for i in range(x, x + w):
                for j in range(y, y + h):
                    grayout[j, i] = gray[j, i]
        wcnt = wcnt + 1
cv2.imwrite('result.png', grayout)
You have to balance two things: removing the black spots without losing the contents of what is on the board. The output I got is this:
Here is a Python/numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Imported mahotas & numpy with standard abbreviations
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.