How to overlay segmented image on top of main image in python

I have an image in RGB and another segmented image in which the pixels have one of 3 values. I want to overlay the segmented image on top of the main image so that the segmented areas appear as contours over the main image, such as in the image below. Here the values of the segmented image pixels are 0, 1 and 2. The red contour shows the contour of pixels with value 1, the yellow contour shows the contour of pixels with value 2, and the background pixel value is 0.
The image is from the paper "Dilated-Inception Net: Multi-Scale Feature Aggregation for Cardiac Right Ventricle Segmentation".
Here is an example of a segmented image.
segmented image
The background image can be any image. I only need these rectangle contours to appear on the background image as two contours, similar to the red and yellow lines above. So, the output will be similar to the image below.
output image
Sorry, as I drew the rectangles by hand they are not exact. I just want to give you an idea of the output.

I had a go at this using four different methods:
OpenCV
PIL/Pillow and Numpy
command-line with ImageMagick
morphology from skimage
Method 1 - OpenCV
Open segmented image as greyscale
Open main image as greyscale and make colour to allow annotation
Find the contours using cv2.findContours()
Iterate over contours and use cv2.drawContours() to draw each one onto main image in colour according to label in segmented image.
Documentation is here.
So, starting with this image:
and this segmented image:
which looks like this when contrast-stretched and the sandwich is labelled as grey(1) and the snout as grey(2):
Here's the code:
#!/usr/bin/env python3
import numpy as np
import cv2

# Load images as greyscale but make main RGB so we can annotate in colour
seg = cv2.imread('segmented.png', cv2.IMREAD_GRAYSCALE)
main = cv2.imread('main.png', cv2.IMREAD_GRAYSCALE)
main = cv2.cvtColor(main, cv2.COLOR_GRAY2BGR)

# Dictionary giving BGR colour for each label - label 1 in red, label 2 in yellow
RGBforLabel = { 1:(0,0,255), 2:(0,255,255) }

# Find external contours (OpenCV 4.x; in OpenCV 3.x use "_,contours,_ = ...")
contours,_ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Iterate over all contours
for i,c in enumerate(contours):
    # Find mean colour inside this contour by doing a masked mean
    mask = np.zeros(seg.shape, np.uint8)
    cv2.drawContours(mask, [c], -1, 255, -1)
    # DEBUG: cv2.imwrite(f"mask-{i}.png",mask)
    mean,_,_,_ = cv2.mean(seg, mask=mask)
    # DEBUG: print(f"i: {i}, mean: {mean}")
    # Get appropriate colour for this label
    label = 2 if mean > 1.0 else 1
    colour = RGBforLabel.get(label)
    # DEBUG: print(f"Colour: {colour}")
    # Outline contour in that colour on main image, line thickness=1
    cv2.drawContours(main, [c], -1, colour, 1)

# Save result
cv2.imwrite('result.png', main)
Result:
Method 2 - PIL/Pillow and Numpy
Open segmented image and find unique colours
Open main image and desaturate
Iterate over each unique colour in list
... Make all pixels that colour white and all others black
... Find edges and use edges as mask to draw colour on main image
Here's the code:
#!/usr/bin/env python3
from PIL import Image, ImageFilter
import numpy as np
def drawContour(m, s, c, RGB):
    """Draw edges of contour 'c' from segmented image 's' onto 'm' in colour 'RGB'"""
    # Fill contour "c" with white, make all else black
    thisContour = s.point(lambda p: p == c and 255)
    # DEBUG: thisContour.save(f"interim{c}.png")
    # Find edges of this contour and make into Numpy array
    thisEdges = thisContour.filter(ImageFilter.FIND_EDGES)
    thisEdgesN = np.array(thisEdges)
    # Paint locations of found edges in colour "RGB" onto "main"
    m[np.nonzero(thisEdgesN)] = RGB
    return m
# Load segmented image as greyscale
seg = Image.open('segmented.png').convert('L')
# Load main image - desaturate and revert to RGB so we can draw on it in colour
main = Image.open('main.png').convert('L').convert('RGB')
mainN = np.array(main)
mainN = drawContour(mainN,seg,1,(255,0,0)) # draw contour 1 in red
mainN = drawContour(mainN,seg,2,(255,255,0)) # draw contour 2 in yellow
# Save result
Image.fromarray(mainN).save('result.png')
You'll get this result:
Method 3 - ImageMagick
You can also do the same thing from the command-line without writing any Python, and just using ImageMagick which is installed on most Linux distros and is available for macOS and Windows:
#!/bin/bash
# Make red overlay for "1" labels
convert segmented.png -colorspace gray -fill black +opaque "gray(1)" -fill white -opaque "gray(1)" -edge 1 -transparent black -fill red -colorize 100% m1.gif
# Make yellow overlay for "2" labels
convert segmented.png -colorspace gray -fill black +opaque "gray(2)" -fill white -opaque "gray(2)" -edge 1 -transparent black -fill yellow -colorize 100% m2.gif
# Overlay both "m1.gif" and "m2.gif" onto main image
convert main.png -colorspace gray -colorspace rgb m1.gif -composite m2.gif -composite result.png
Method 4 - Morphology from skimage
Here I am using morphology to find black pixels adjacent to pixels labelled 1, and black pixels adjacent to pixels labelled 2.
#!/usr/bin/env python3
import skimage.filters.rank
import skimage.morphology
import numpy as np
import cv2
# Load images as greyscale but make main RGB so we can annotate in colour
seg = cv2.imread('segmented.png',cv2.IMREAD_GRAYSCALE)
main = cv2.imread('main.png',cv2.IMREAD_GRAYSCALE)
main = cv2.cvtColor(main,cv2.COLOR_GRAY2BGR)
# Create structuring element that defines the neighbourhood for morphology
selem = skimage.morphology.disk(1)
# Mask for edges of segment 1 and segment 2
# We are basically looking for pixels with value 1 in the segmented image within a radius of 1 pixel of a black pixel...
# ... then the same again, but for pixels with a value of 2 in the segmented image within a radius of 1 pixel of a black pixel
seg1 = (skimage.filters.rank.minimum(seg,selem) == 0) & (skimage.filters.rank.maximum(seg, selem) == 1)
seg2 = (skimage.filters.rank.minimum(seg,selem) == 0) & (skimage.filters.rank.maximum(seg, selem) == 2)
main[seg1,:] = np.asarray([0, 0, 255]) # Make segment 1 pixels red in main image
main[seg2,:] = np.asarray([0, 255, 255]) # Make segment 2 pixels yellow in main image
# Save result
cv2.imwrite('result.png',main)
Note: JPEG is lossy - do not save your segmented image as JPEG, use PNG or GIF!
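If you want to verify that a label image has not been damaged by lossy compression, a quick sanity check (just a sketch; 'segmented.png' is a placeholder filename) is to print its unique values:
#!/usr/bin/env python3
import cv2
import numpy as np
# A clean label image should contain only the expected label values
seg = cv2.imread('segmented.png', cv2.IMREAD_GRAYSCALE)
print(np.unique(seg))   # expect something like [0 1 2]; a JPEG round-trip typically introduces many stray values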
Keywords: Python, PIL, Pillow, OpenCV, segmentation, segmented, labelled, image, image processing, edges, contours, skimage, ImageMagick, scikit-image, morphology, rank, ranking filter, pixel adjacency.

These are quick one-liners that automatically choose colors for the integer category/class values and apply the overlay to the original image.
Color entire segmentation area:
from skimage import color
result_image = color.label2rgb(segmentation_results, input_image)
Color contours of segmentation areas:
from skimage import segmentation
result_image = segmentation.mark_boundaries(input_image, segmentation_results, mode='thick')
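A complete minimal example of both, assuming segmentation_results is an integer label array with the same height and width as input_image (the filenames are placeholders):
#!/usr/bin/env python3
from skimage import io, color, segmentation

input_image = io.imread('main.png')                  # RGB image
segmentation_results = io.imread('segmented.png')    # integer labels 0, 1, 2, ...

# Colour the whole of each segmented area
overlay = color.label2rgb(segmentation_results, input_image, bg_label=0)

# Colour only the boundaries of each segmented area
outlines = segmentation.mark_boundaries(input_image, segmentation_results, mode='thick')

# Both functions return float images in [0,1], so scale before saving
io.imsave('overlay.png', (overlay * 255).astype('uint8'))
io.imsave('outlines.png', (outlines * 255).astype('uint8'))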

If semi-transparent segmentation masks are to be displayed on top of the image, skimage has a built-in label2rgb() function that colorizes by a label channel:
Input Image
from skimage import io, color
import matplotlib.pyplot as plt
import numpy as np
seg = np.zeros((256,256))    # create a matrix of zeros the same size as the image
seg[gt > 0.95] = 1           # set label "1" wherever your own condition holds ("gt" is a placeholder array from your pipeline)
seg[zz == 255] = 2           # set label "2" from another condition ("zz" is likewise a placeholder)
io.imshow(color.label2rgb(seg, img, colors=[(255,0,0),(0,0,255)], alpha=0.01, bg_label=0, bg_color=None))   # "img" is the input image loaded with io.imread()
plt.show()

Related

How to do transparent color inside the boundary in python

I have images of brain tumor detection and I want to fill the inside of the detection boundary with a transparent color in Python.
The transparent color will be the same as the boundary color (122, 160, 255), with low opacity.
Image of Brain Tumor Detection
Expected Output:
Expected Image
Here's a way of doing this:
load boundary image in greyscale and threshold
find approximate centre of tumour
flood-fill tumour interior with value=64, leaving boundary=255
create a peachy overlay same size as original
push the greyscale into the peachy overlay as alpha layer
paste the overlay onto original
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Open boundary image and ensure greyscale
im = Image.open('boundary.png').convert('L')
# Get the bounding box so we can guesstimate its centre for flood-filling
bbox = im.getbbox()                  # bbox is (left, upper, right, lower)
cx = int((bbox[0]+bbox[2])/2)
cy = int((bbox[1]+bbox[3])/2)
print(f'DEBUG: cx={cx}, cy={cy}')
# Threshold the boundary image to pure black and white
thresh = im.point(lambda p: 255 if p>128 else 0)
# Flood-fill with 64 starting from the centre and proceeding to a pure white boundary border
ImageDraw.floodfill(thresh, (cx,cy), 64, border=255)
thresh.save('DEBUG-flood-filled.png')
# Open the original image
scan = Image.open('a0pMX.png')
# Create a peachy overlay, then push in the alpha layer
overlay = Image.new('RGB', scan.size, 'rgb(255,160,122)')
overlay.putalpha(thresh)
overlay.save('DEBUG-overlay.png')
# Paste the overlay onto the scan
scan.paste(overlay, mask=overlay)
scan.save('result.png')
Here are the intermediate images:
DEBUG-flood-filled.png
DEBUG-overlay.png
result.png

Overwrite the pixels closest to blue with (0,0,255) blue

I'm using Python and PIL (or Pillow) and want to run code on files that contain two pixels of a given intensity and RGB code (0,0,255).
The pixels may also be close to (0,0,255) but slightly adjusted ie (0,1,255). I'd like to overwrite the two pixels closest to (0,0,255) with (0,0,255).
Is this possible? If so, how?
Here's an example image, zoomed in on the pixels I want to make "more blue".
The attempt at code I'm looking at comes from here:
# import the necessary packages
import numpy as np
import scipy.spatial as sp
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw, ImageFont
#Stored all RGB values of main colors in a array
# main_colors = [(0,0,0),
# (255,255,255),
# (255,0,0),
# (0,255,0),
# (0,0,255),
# (255,255,0),
# (0,255,255),
# (255,0,255),
# ]
main_colors = [(0,0,0),
(0,0,255),
(255,255,255)
]
background = Image.open("test-small.tiff").convert('RGBA')
background.save("test-small.png")
retina = cv2.imread("test-small.png")
#convert BGR to RGB image
retina = cv2.cvtColor(retina, cv2.COLOR_BGR2RGB)
h,w,bpp = np.shape(retina)
#Change colors of each pixel
#reference :https://stackoverflow.com/a/48884514/9799700
for py in range(0,h):
    for px in range(0,w):
        ########################
        # Used this part to find nearest color
        # reference : https://stackoverflow.com/a/22478139/9799700
        input_color = (retina[py][px][0], retina[py][px][1], retina[py][px][2])
        tree = sp.KDTree(main_colors)
        distance, result = tree.query(input_color)
        nearest_color = main_colors[result]
        ###################
        retina[py][px][0] = nearest_color[0]
        retina[py][px][1] = nearest_color[1]
        retina[py][px][2] = nearest_color[2]
        print(str(px), str(py))
# show image
plt.figure()
plt.axis("off")
plt.imshow(retina)
plt.savefig('color_adjusted.png')
My logic is to replace the array of closest RGB colours to only contain (0,0,255) (my desired blue) and perhaps (255,255,255) for white - this way only the pixels that are black, white, or blue come through.
I've run the code on a smaller image, and it converts the input to the output as desired.
However, the code runs through every pixel, which is slow for larger images (I'm using images of 4000 x 4000 pixels). I would also like to output and save images with the same dimensions as the original file (which I expect to be an option when using plt.savefig).
If this could be optimized, that would be ideal. Similarly, picking the two "most blue" (ie closest to (0,0,255)) pixels and rewriting them with (0,0,255) should be quicker and just as effective for me.
As your image is largely unsaturated greys with just a few blue pixels, it will be miles faster to convert to HLS colourspace and look for saturated pixels. You can do further tests easily enough on the identified pixels if you want to narrow it down to just two:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
# Convert to HLS, so we can find saturated blue pixels
HLS = cv2.cvtColor(im,cv2.COLOR_BGR2HLS)
# Get x,y coordinates of pixels that have high saturation
SatPix = np.where(HLS[:,:,2]>60)
print(SatPix)
# Make them pure blue and save result
im[SatPix] = [255,0,0]
cv2.imwrite('result.png',im)
Output
(array([157, 158, 158, 272, 272, 273, 273, 273]), array([55, 55, 56, 64, 65, 64, 65, 66]))
That means pixels 157,55 and 158,55, and 158,56 and so on are blue. The conversion to HLS colourspace, identification of saturated pixels and setting them to solid blue takes 758 microseconds on my Mac.
You can achieve the same type of thing without writing any Python just using ImageMagick on the command line:
magick eye.png -colorspace hsl -channel g -separate -auto-level result.png
Here's a different way to do it. Use SciPy's cdist() to work out the Euclidean distance from each pixel to Blue, then pick the nearest two:
#!/usr/bin/env python3
import cv2
import numpy as np
from scipy.spatial.distance import cdist
# Load image, save shape, reshape as tall column of 3 RGB values
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
origShape = im.shape
im = im.reshape(-1,3)
# Work out distance to pure Blue for each pixel
blue = np.full((1,3), [255, 0 , 0])
d = cdist(im, blue, metric='euclidean') # THIS LINE DOES ALL THE WORK
indexNearest = np.argmin(d) # get index of pixel nearest to blue
im[np.argmin(d)] = [0,0,255] # make it red
d[indexNearest] = 99999 # make it appear further so we don't find it again
indexNearest = np.argmin(d) # get index of pixel second nearest to blue
im[np.argmin(d)] = [0,0,255] # make it red
# Reshape back to original shape and save result
im = im.reshape(origShape)
cv2.imwrite('result.png',im)
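If you want the k pixels nearest to blue rather than exactly two, np.argpartition avoids repeating the argmin/overwrite dance. This is a sketch that would slot in after the cdist() call above, before the reshape back to origShape:
k = 2                                          # however many "most blue" pixels you want
nearest = np.argpartition(d.ravel(), k)[:k]    # indices of the k smallest distances (unordered)
im[nearest] = [0, 0, 255]                      # recolour them (BGR)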

How to make my code identify the difference between 2 circles (one filled with white and one with black) using Python and Pillow?

I have 2 images,
1- White circle with black stroke
2- Black circle with black stroke
I want to compare both images and identify that both have the same circle but with different filling
I should only use Python & Pillow.
I have already tried several methods like edge detection, but whenever I try to reform the picture for edge detection, the new image appears empty.
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt
# Load image:
input_image = Image.open("input.png")
input_pixels = input_image.load()
width, height = input_image.width, input_image.height
# Create output image
output_image = Image.new("RGB", input_image.size)
draw = ImageDraw.Draw(output_image)
# Convert to grayscale
intensity = np.zeros((width, height))
for x in range(width):
    for y in range(height):
        intensity[x, y] = sum(input_pixels[x, y]) / 3

# Compute convolution between intensity and kernels
for x in range(1, input_image.width - 1):
    for y in range(1, input_image.height - 1):
        magx = intensity[x + 1, y] - intensity[x - 1, y]
        magy = intensity[x, y + 1] - intensity[x, y - 1]
        # Draw in black and white the magnitude
        color = int(sqrt(magx**2 + magy**2))
        draw.point((x, y), (color, color, color))

output_image.save("edge.png")
Expected result: both pictures greyscaled, with only the circle edges marked in white.
Actual result: an empty black image (as if it couldn't see the edges).
Well, if all you want is edge detection in an image, then you can try using the Sobel operator or its equivalents.
from PIL import Image, ImageFilter
image = Image.open(r"Circle.png").convert("RGB")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(r"ED_Circle.png")
The above code takes an input image and converts it into RGB mode (certain images have P mode, which doesn't allow edge detection, hence the conversion to RGB). It then finds edges in it via image.filter(ImageFilter.FIND_EDGES).
Sample Input Image (Black border with black circle):-
Output after processing through python program:-
Sample Image 2 (white circle with black border):-
Output after processing through python program:-
In the above samples, both input images were the same size and the circles in them were also the same dimensions; the only difference between the two was that one had a white circle inside a black border, and the other had a black circle inside a black border.
Since the circles were of the same dimensions, passing them through the edge detection process gave the same results.
NOTE:
In the question, you wanted circle edges in white and the rest of the image in greyscale, which isn't the best choice for edge detection. White and black are inverses of each other, so edges are most easily identified when the sample space of the image consists of these two colors. Even then, if you want greyscale instead of black, you can simply change each black pixel of the image to a grey one, or whatever meets your needs.
The results of the above edge detection are the same because the size of the border is negligible. If the border is wider (a stroke), then when the process is run on a white circle with a black border, the edge detection will create more than one white border. You can get around that problem by making the program ignore the inner edges and only take the outermost ones into account.
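To actually answer the "same circle, different fill" part with nothing but Pillow, one possible sketch (assuming both images are the same size and each contains a single circle; the filenames are placeholders) is to compare the edge maps, which should match, and then compare the mean intensity, which should differ:
#!/usr/bin/env python3
from PIL import Image, ImageFilter, ImageChops, ImageStat

a = Image.open('white_circle.png').convert('L')   # placeholder filenames
b = Image.open('black_circle.png').convert('L')

# Edge maps should be almost identical if the circles share the same outline
edges_a = a.filter(ImageFilter.FIND_EDGES)
edges_b = b.filter(ImageFilter.FIND_EDGES)
edge_diff = ImageChops.difference(edges_a, edges_b)
same_outline = ImageStat.Stat(edge_diff).mean[0] < 5   # small mean difference => same outline

# Mean intensity differs markedly if one circle is filled white and the other black
fill_differs = abs(ImageStat.Stat(a).mean[0] - ImageStat.Stat(b).mean[0]) > 10

print(f'Same outline: {same_outline}, different fill: {fill_differs}')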

Meaning of draw line's argument in OpenCV

I have a question about why img is initialized the way it is in the following code from the documentation.
import numpy as np
import cv2
# Create a black image
img = np.zeros((512,512,3), np.uint8)
# Draw a diagonal blue line with thickness of 5 px
cv2.line(img,(0,0),(511,511),(255,0,0),5)
It creates a 3D array for img. I know 512,512 means the image size, but why do we need the "3" in the third dimension?
The third component is used for the color channels.
In OpenCV the default is a BGR color model.
In your example you created a 512x512 pixel image with 24-bit color depth.
So if you just want a greyscale image you can replace the 3 with a 1, as shown below.
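For example (a minimal sketch), the greyscale and colour versions of that canvas differ only in the third dimension:
import numpy as np
import cv2

gray_img  = np.zeros((512, 512), np.uint8)       # one channel: greyscale intensities
color_img = np.zeros((512, 512, 3), np.uint8)    # three channels: B, G, R

cv2.line(color_img, (0, 0), (511, 511), (255, 0, 0), 5)   # blue diagonal line (BGR order)
cv2.line(gray_img,  (0, 0), (511, 511), 255, 5)           # white diagonal line (single intensity)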

Detect white background on images using python

Is there a way to tell whether an image has a white background using Python, and what could be a good strategy to get a "percentage of confidence" about this question? It seems the literature on the internet doesn't cover exactly this case and I can't find anything strictly related.
The images I want to analyze are typical e-commerce website product pictures, so they should have a single focused object in the middle and white background only at the borders.
Another information that could be available is the max percentage of image space the object should occupy.
I would go with something like this.
Reduce the contrast of the image by making the brightest, whitest pixel something like 240 instead of 255 so that the whites generally found within the image and within parts of the product are no longer pure white.
Put a 1 pixel wide white border around your image - that will allow the floodfill in the next step to "flow" all the way around the edge (even if the "product" touches the edges of the frame) and "seep" into the image from all borders/edges.
Flood-fill your image starting at the top-left corner (which is necessarily pure white after step 2) and allow a tolerance of 10-20% when matching the white in case the background is off-white or slightly shadowed, and the white will flow into your image all around the edges until it reaches the product in the centre.
See how many pure white pixels you have now - these are the background ones. The percentage of pure white pixels will give you an indicator of confidence in the image being a product on a whitish background.
I would use ImageMagick from the command line like this:
convert product.jpg +level 5% -bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" result.jpg
I will put a red border around the following 2 pictures just so you can see the edges on StackOverflow's white background, and show you the before and after images - look at the amount of white in the resulting images (there is none in the second one because it didn't have a white background) and also at the shadow under the router to see the effect of the -fuzz.
Before
After
If you want that as a percentage, you can make all non-white pixels black and then calculate the percentage of white pixels like this:
convert product.jpg -level 5% \
-bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" -shave 1 \
-fuzz 0 -fill black +opaque white -format "%[fx:int(mean*100)]" info:
62
Before
After
ImageMagick has Python bindings so you could do the above in Python - or you could use OpenCV and Python to implement the same algorithm.
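For what it's worth, here is a rough Python/OpenCV sketch of that same flood-fill approach. The filename, the 240 clamp and the tolerance are all assumptions you would tune to your own images:
#!/usr/bin/env python3
import cv2
import numpy as np

im = cv2.imread('product.jpg')     # placeholder filename

# Pull pure whites down to 240 so whites inside the product no longer match the background
im = np.minimum(im, 240)

# Add a 1px pure white border so the flood-fill can flow all the way around the edges
im = cv2.copyMakeBorder(im, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=(255, 255, 255))

# Flood-fill from the (now pure white) top-left corner with ~10% tolerance per channel
mask = np.zeros((im.shape[0] + 2, im.shape[1] + 2), np.uint8)
cv2.floodFill(im, mask, (0, 0), (255, 255, 255), loDiff=(25, 25, 25), upDiff=(25, 25, 25))

# Remove the border again and use the fraction of pure white pixels as a confidence figure
im = im[1:-1, 1:-1]
white = np.all(im == 255, axis=2)
print(f'White background confidence: {100 * white.mean():.1f}%')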
This question may be years old, but I just had a similar task recently. Sharing my answer here might help others that encounter the same task, and I might also improve my answer by having the community look at it.
import cv2 as cv
import numpy as np
THRESHOLD_INTENSITY = 230
def has_white_background(img):
    # Read image into org_img variable
    org_img = cv.imread(img, cv.IMREAD_GRAYSCALE)
    # cv.imshow('Original Image', org_img)

    # Create a black blank image for the mask
    mask = np.zeros_like(org_img)

    # Create a thresholded image, I set my threshold to 200 as this is the value
    # I found most effective in identifying light coloured objects
    _, thres_img = cv.threshold(org_img, 200, 255, cv.THRESH_BINARY_INV)

    # Find the most significant contours
    contours, hierarchy = cv.findContours(thres_img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)

    # Get the outermost contour
    outer_contours_img = max(contours, key=cv.contourArea)

    # Get the bounding rectangle of the contour
    x, y, w, h = cv.boundingRect(outer_contours_img)

    # Draw a rectangle based on the bounding rectangle of the contour onto our mask
    cv.rectangle(mask, (x, y), (x+w, y+h), (255, 255, 255), -1)

    # Invert the mask so that we create a hole for the detected object in our mask
    mask = cv.bitwise_not(mask)

    # Apply mask to the original image to subtract it and retain only the bg
    img_bg = cv.bitwise_and(org_img, org_img, mask=mask)

    # If the size of the mask is similar to the size of the image then the bg is not white
    if h == org_img.shape[0] and w == org_img.shape[1]:
        return False

    # Create a np array of the remaining background image
    np_array = np.array(img_bg)

    # Remove the zeroes from the "remaining bg image" so that we don't consider the black part,
    # and find the average intensity of the remaining pixels
    ave_intensity = np_array[np.nonzero(np_array)].mean()

    if ave_intensity > THRESHOLD_INTENSITY:
        return True
    else:
        return False
These are the images of the steps from the code above:
Here is the original image. No copyright infringement intended.
(Can't find the URL of the actual image from Unsplash.)
First step is to convert the image to grayscale.
Apply thresholding to the image.
Get the contours of the thresholded image. Drawing the contours is optional.
From the contours, get the outer contour and find its bounding rectangle. Optionally draw the rectangle on the image so you can see whether your chosen threshold value fits the object in the rectangle.
Create a mask out of the bounding rectangle.
Lastly, subtract the mask from the greyscale image. What remains is the background image minus the mask.
Finally, to identify whether the background is white, find the average intensity of the background image, excluding the 0 values of the image array. Then, based on a chosen threshold value, categorise it as white or not.
Hope this helps. If you think it can still be improved, or if there are flaws in my solution, please comment below.
The most popular image format is .png. A PNG image can have a transparent colour (alpha), which often matches the white background of the page. With Pillow it is easy to find out which pixels are transparent.
A good starting point:
from PIL import Image

img = Image.open('image.png')
img = img.convert("RGBA")
pixdata = img.load()

for y in range(img.size[1]):
    for x in range(img.size[0]):
        pixel = pixdata[x, y]
        if pixel[3] == 0:
            # fully transparent (alpha == 0)...
            pass
Or maybe it's enough to check whether the top-left pixel is white:
pixel = pixdata[0, 0]
if pixel[0] == 255 and pixel[1] == 255 and pixel[2] == 255:
    # it's white
    pass
