Convert polygons (in the image) to masks - python

I have some images and their associated ground-truth outlined objects. For example, this image shows the outlined objects (in blue) for one of the original images.
Given this image and its original source, I would like to create some masks based on these outlines using OpenCV or skimage.
Using contours I can roughly achieve that, but I have two problems:
1. Why do I get repeated masks? (Please refer to the attached snippet.)
2. How can I overcome the issue of two touching objects?
import cv2
import numpy as np
from scipy import ndimage
from skimage import io
from skimage import measure
import matplotlib.pyplot as plt

image = io.imread('path/to/the/attached/image')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
contours = measure.find_contours(gray, 0.1)
for n, contour in enumerate(contours):
    r_mask = np.zeros_like(gray, dtype='bool')
    r_mask[np.round(contour[:, 0]).astype('int'),
           np.round(contour[:, 1]).astype('int')] = 1
    r_mask = ndimage.binary_fill_holes(r_mask)
    io.imshow(r_mask)
    plt.show()
Thank you

Related

How do I make a mask from one image and then transfer it to another?

I'm trying to solve a homework problem where I need to get a mask from one image (DAPI) and then apply it to the second image (NPM1) of cells (they are the same cells in the exact same location).
I've been running in circles for about 4 hours trying to get the mask applied using a True/False approach, but it doesn't seem to work. I've tried and failed with a bunch of other approaches, but I'm just pasting the one I thought would most likely work (I'm super new to coding).
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from skimage.color import rgb2gray
import cv2
#Load the images
image = np.array(Image.open("NOTREATDAPI.jpg"))
image1 = np.array(Image.open("NOTREATNPM1.jpg"))
No_Treat_DAPI = rgb2gray(image)
No_Treat_NPM1 = rgb2gray(image1)
plt.imshow(image)
#Create a mask using the DAPI image
arr = np.array(No_Treat_DAPI)
DAPI_stain = arr[:,0] > 25
plt.imshow(arr)
The DAPI image:
The NPM1 image:
I'm trying to get only the regions of the original image that have an intensity of 25 or greater, so that none of the black space in the image is counted towards the mask, as I'm trying to get a histogram of the intensity of the cells in the NPM1 image.
I limited my solution to the use of OpenCV, numpy, and matplotlib.
The general approach is the following:
Load both images as grayscale images, see cv2.imread.
Create a binary mask from the DAPI image using binary thresholding at intensity value 25, see cv2.threshold.
Do some morphological opening to get rid of possible small artifacts, see cv2.morphologyEx and cv2.getStructuringElement.
Calculate the histogram of the NPM1 image, only incorporating the masked pixels, see cv2.calcHist.
Here's the complete code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
# Load images as grayscale
dapi = cv2.imread('images/NOTREATDAPI.jpg', cv2.IMREAD_GRAYSCALE)
npm1 = cv2.imread('images/NOTREATNPM1.jpg', cv2.IMREAD_GRAYSCALE)
# Create a mask using the DAPI image and binary thresholding at 25
_, mask = cv2.threshold(dapi, 25, 255, cv2.THRESH_BINARY)
# Do some morphological opening to get rid of small artifacts
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)))
# Calculate the histogram using the NPM1 image and the obtained binary mask
hist = cv2.calcHist([npm1], [0], mask, [256], [0, 256])
# Show bar plot of calculated histogram
plt.bar(np.arange(256), np.squeeze(hist))
plt.show()
# Show mask image
cv2.imshow('Mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
The mask then looks like this:
And, the histogram might look like this:
Hope that helps!
P.S. Next time, better use the opencv and python tags instead of only using the cv2 tag. You'll reach way more people.

Feature extraction and taking a color histogram

I am working on image-processing feature extraction. I have a photo of a bird, from which I have to extract the bird area and tell what color the bird has. I used the Canny edge detector to get the edges of the bird.
How can I extract only the bird area and make the background blue?
An OpenCV solution would also be fine.
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from skimage import io, feature

filename = os.path.join(os.getcwd(), 'image\image_bird.jpeg')
bird = io.imread(filename, as_grey=True)
plt.imshow(bird)

edges = feature.canny(bird, sigma=1)
plt.imshow(edges)
The actual bird image can be taken from the bird link.
Identify the edges of your image
Binarize the image via automatic thresholding
Use contour detection to identify black regions which are inside a white region and merge them with the white region. (Mockup, image may slightly vary)
Use the created image as a mask to color the background.
This can be done by simply setting each background pixel (black) to its respective color.
As you can see, the approach is far from perfect, but it should give you a general idea about how to accomplish your task. The final image quality might be improved by slightly eroding the mask to tighten it to the contours of the bird. You can then also use the mask to calculate your color histogram by taking only foreground pixels into account.
Edit: Look here:
Eroded mask
Final image
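A minimal sketch of these steps might look like the following. This assumes OpenCV 4.x (where findContours returns two values); the filename and the structuring-element size are assumptions, and the threshold behavior may need tuning for the actual image:
import cv2
import numpy as np

# Load the image and binarize it via automatic (Otsu) thresholding
img = cv2.imread('bird.jpg')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Merge black regions enclosed by white ones by drawing every
# outer contour as a filled shape
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(binary)
cv2.drawContours(mask, contours, -1, 255, -1)

# Slightly erode the mask to tighten it to the bird's contours
mask = cv2.erode(mask, np.ones((5, 5), np.uint8))

# Color every background pixel blue (BGR order in OpenCV)
result = img.copy()
result[mask == 0] = (255, 0, 0)
cv2.imwrite('result.png', result)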
According to this article https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/ and this question CV - Extract differences between two images, I wrote some Python code, shown below. As my predecessor said, it is also far from perfect. The main disadvantage of this code is the constants that have to be set manually: minThres (50), maxThres (100), the dilate iteration count, and the erode iteration count.
import cv2
import numpy as np
windowName = "Edges"
pictureRaw = cv2.imread("bird.jpg")
## set to gray
pictureGray = cv2.cvtColor(pictureRaw, cv2.COLOR_BGR2GRAY)
## blur
pictureGaussian = cv2.GaussianBlur(pictureGray, (7,7), 0)
## canny edge detector - you must specify threshold values
pictureCanny = cv2.Canny(pictureGaussian, 50, 100)
## perform a series of erosions + dilations to remove any small regions of noise
pictureDilate = cv2.dilate(pictureCanny, None, iterations=20)
pictureErode = cv2.erode(pictureDilate, None, iterations=5)
## find the nonzero regions in the eroded image
imask2 = pictureErode>0
## create a Mat like pictureRaw
canvas = np.full_like(pictureRaw, np.array([255,0,0]), dtype=np.uint8)
## set mask
canvas[imask2] = pictureRaw[imask2]
cv2.imwrite("result.png", canvas)

Filling "holes" of an image in python with cv2 not working

I am trying to fill the "holes" of red blood cells in an image after performing a binary threshold. Almost all red blood cells have a black center when inverting the binary threshold. I want to remove them.
Example image:
This is my code:
import cv2
from PIL import Image
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.morphology import watershed
image = cv2.imread("blood_cells.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
darker = cv2.equalizeHist(gray)
ret,thresh = cv2.threshold(darker,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
newimg = cv2.bitwise_not(thresh)
im2, contours, hierarchy = cv2.findContours(newimg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    cv2.drawContours(newimg, [cnt], 0, 255, -1)
And it worked: I filled the holes using findContours() and drawContours().
But when I try to compute the Euclidean distance for applying the watershed algorithm, I get only 52 unique segments; however, there should be more. Here is the code, in case it is helpful:
D = ndimage.distance_transform_edt(newimg)
localMax = peak_local_max(D, indices=False, min_distance=20, labels=thresh)
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))
I tried to segment each cell, but the results were quite off. Only the insides of the cells that had "holes" got segmented.
The first image shows my result, the second shows how it should roughly look:
I then filled the holes manually, just to see if my code for segmentation works, and it does. The error should be somewhere between the part where I drew the contours and the part where I calculated the Euclidean distance. Could anybody explain to me what could be wrong? I am clueless.
Your problem lies in the following line:
labels = watershed(-D, markers, mask=thresh)
You're passing as mask an inverted, uncorrected result from thresholding:
Giving you this bad segmentation:
Whereas you should be passing the corrected, filled in mask:
labels = watershed(-D, markers, mask=newimg)
Giving you the result you probably expect:
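For reference, a consolidated sketch of the corrected pipeline is below; it is simply the question's code (OpenCV 3.x findContours signature, as in the question) with the single mask argument changed as described above:
import cv2
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.morphology import watershed

image = cv2.imread("blood_cells.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
darker = cv2.equalizeHist(gray)
ret, thresh = cv2.threshold(darker, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
newimg = cv2.bitwise_not(thresh)

# fill the holes in the inverted threshold image
im2, contours, hierarchy = cv2.findContours(newimg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    cv2.drawContours(newimg, [cnt], 0, 255, -1)

D = ndimage.distance_transform_edt(newimg)
localMax = peak_local_max(D, indices=False, min_distance=20, labels=thresh)
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=newimg)  # the filled mask, not thresh
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))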

Approximate polygons from image (map)

So I have this map:
I've already done some filtering and now I have the following image:
Finally, I want to find the coordinates for each of the polygons in the image, but using the find_contours function is not giving me good results, as you can see in the following image.
On the left side, you can see all of the contours (polygons) found (one color for each), and on the right side, it's an example of a polygon it has found (which is clearly wrong).
Having the black and white image, how can I find the coordinates for each of the polygons in it?
So when I downloaded the image, I got a very disjoint kind of image, so the first thing I did was dilate it.
import numpy as np
from skimage import io, measure, morphology
img = io.imread('img.png', as_grey=True)
img = morphology.binary_dilation(img, selem=np.ones((5,5)))
Then I inverted the image, so that the white lines become black and the enclosed areas become white:
img_inverted = np.invert(img)
Once you have the inverted image, label it for easy processing.
img_inverted_labeled = measure.label(img_inverted)
n_lbls = np.unique(img_inverted_labeled)[1:]
Now collect each part's polygon.
pols = []
for i in n_lbls:
    img_part = (img_inverted_labeled == i)
    pols.append(measure.find_contours(img_part, level=0)[0])
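To inspect the result, a small sketch like this (continuing from the code above; matplotlib is an extra assumption here) overlays each extracted polygon on the dilated image:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.imshow(img, cmap='gray')
for pol in pols:
    # find_contours returns (row, column) coordinates, so swap the axes for plotting
    ax.plot(pol[:, 1], pol[:, 0], linewidth=2)
plt.show()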
Hope this helps.

How to create polygons for watershed segmentation

I have an image and would like to create polygons of the segments in this image using marker-controlled watershed. I wrote the following code, but I can't separate objects that touch each other or create the polygons of the objects.
How can I solve those issues? Thanks so much for your help.
import cv2
import numpy as np
import scipy.misc
import scipy.ndimage as snd
# image is read and is converted to a numpy array
img = cv2.imread('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/medicine.jpg')
# image is converted to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# binary thresholding is done using the threshold
# from Otsu's method
ret1,thresh1 = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# foreground pixels are determined by
# performing erosion
fore_ground = cv2.erode(thresh1,None,iterations = 3)
bgt = cv2.dilate(thresh1,None,iterations = 3)
ret,back_ground = cv2.threshold(bgt,1,100,1)
# marker is determined by adding foreground and background pixels
marker = cv2.add(fore_ground,back_ground)
# converting marker to 32 int
marker32 = np.int32(marker)
cv2.watershed(img,marker32)
res = scipy.misc.toimage(marker32)
res.save('D:/exam_watershed/Example_2_medicine/Medicine_create_poly/res_output.png')
This question seems to be pretty close to your needs, since the example uses the exact same image as yours.
To transform the resulting "dams" into polygons, I suggest using cv2.findContours together with cv2.approxPolyDP on the result image.
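A minimal sketch of that suggestion, continuing from the question's code: cv2.watershed marks the dam (boundary) pixels with -1 in marker32, so they can be thresholded into a binary image and each contour approximated with a polygon. The 0.01 approximation tolerance is an assumption to tune.
# continuing from the question's code, after cv2.watershed(img, marker32)
dams = np.uint8(marker32 == -1) * 255  # dam pixels are labeled -1
# [-2] picks the contour list in both OpenCV 3.x and 4.x
contours = cv2.findContours(dams, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
polygons = []
for cnt in contours:
    epsilon = 0.01 * cv2.arcLength(cnt, True)  # tunable tolerance
    polygons.append(cv2.approxPolyDP(cnt, epsilon, True))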
