How to add a mask on an image and blur their intersection? - python

I have a mask and a background image. I want to add this mask onto the background image and blur (or perhaps inpaint with something like cv2.inpaint()) the intersection to make it look more natural, but I am stuck on the blurring effect; any help would be highly appreciated.
More details:
I have two RGB images. The first image (foreground) is associated with a binary mask, which I would like to add on the second image (background).
The issue is that when we look at the final image we clearly see which part was added on the background image. Hence, I would like to add a blurring effect at the intersection of the mask and the background image. For now my code looks like:
#foreground image: we'll use only the mask part
#background image: where we will add the mask
foreground = cv2.imread(path1)
background = cv2.imread(path2)
#Convert to float
foreground = foreground.astype(float)
background = background.astype(float)
mask = mask.astype(float)
#Multiply the foreground with the mask
foreground = cv2.multiply(mask, foreground)
#Multiply the background with the inverted mask (everywhere except the mask region)
background = cv2.multiply(1.0 - mask, background)
#Add the masked foreground to background image
outImage = cv2.add(foreground, background)
I could not find a straightforward way to do it, but I guess there should be one. A lot of related answers on the internet work by thresholding to some pixel value, but this cannot be used here. For now the easiest way I found is:
1) create a mask of the part I want to blur
2) blur the final image (background + foreground mask)
3) take from the blurred image only the part covered by the mask from 1) and add it to the initial final image (background + foreground mask)
Before doing this, I was wondering if someone would have some advice.
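For reference, a rough sketch of that three-step idea (blur the composited image, then copy the blurred pixels back only along the seam) could look like this, assuming outImage is the composite from the code above and the mask is reduced to a single 0/1 channel; the kernel and blur sizes are arbitrary:

import cv2
import numpy as np

composite = outImage.astype(np.uint8)
#1) build a thin band around the mask border: this is the part to blur
mask_u8 = (mask * 255).astype(np.uint8)
if mask_u8.ndim == 3:
    mask_u8 = mask_u8[:, :, 0]          #keep a single channel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
band = cv2.morphologyEx(mask_u8, cv2.MORPH_GRADIENT, kernel)
#2) blur the whole composited image
blurred = cv2.GaussianBlur(composite, (15, 15), 0)
#3) copy the blurred pixels back only inside the band
seam = band > 0
smooth = composite.copy()
smooth[seam] = blurred[seam]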

The way I am doing it so far works well, but I am staying open to any other better solution!
Here's mine:
import warnings
import cv2
import numpy as np
import skimage.color

def foreground_background_into1(background, foreground, with_smooth=True, thickness=3, mask=None):
    '''Add the foreground image to the background image to create a new image, with a smooth intersection.
    -foreground image: this image must be black wherever there is no mask, or the binary mask must be
     passed explicitly in the mask parameter
    -background image: image on which we will add the mask
    -mask: binary mask, if the foreground image does not already contain this information
    Note: not tested with an explicit mask'''
    #make a copy for the smoothing step
    img = foreground.copy()
    #create a binary mask (as needed to multiply with the background) from the foreground image
    #if none is given (every value bigger than 0 becomes 1)
    if mask is None or len(mask) == 0:
        _, mask = cv2.threshold(foreground, 1, 1, cv2.THRESH_BINARY)
    #verification
    if foreground.shape != background.shape:
        warnings.warn("the foreground is not of the same shape as the background, it will be resized")
        #cv2.resize expects the target size as (width, height)
        foreground = cv2.resize(foreground, (background.shape[1], background.shape[0]))
    #if the mask has one channel, replicate it to three
    if len(mask.shape) == 2:
        mask = skimage.color.gray2rgb(mask)
    #add foreground to background
    foreground = foreground.astype(float)
    background = background.astype(float)
    mask = mask.astype(float)
    foreground = cv2.multiply(mask, foreground)
    background = cv2.multiply(1 - mask, background)  #the initial background, black where the foreground mask is
    outImage = cv2.add(foreground, background)
    result = outImage.astype(np.uint8)
    if with_smooth:
        #find the contour of the mask
        foreground_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, thresh = cv2.threshold(foreground_gray, 1, 255, 0)
        #[-2] picks the contours regardless of whether OpenCV 3 or OpenCV 4 is used
        contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
        #create intersection_mask along the contour
        intersection_mask = cv2.drawContours(np.zeros(foreground.shape, np.uint8),
                                             contours, -1, (0, 255, 0), thickness)
        #inpaint the contour in the composited image
        intersection_mask = cv2.cvtColor(intersection_mask, cv2.COLOR_BGR2GRAY)
        #note: a (1, 1) kernel leaves the mask unchanged; increase it to widen the inpainted band
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1, 1))
        intersection_mask = cv2.dilate(intersection_mask, kernel, iterations=1)
        result = cv2.inpaint(result, intersection_mask, 1, cv2.INPAINT_TELEA)
    return result
I am basically adding the masked foreground to the background image, and then smoothing the intersection by inpainting the contour of the mask with OpenCV.
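A minimal usage example (the file names here are hypothetical; the foreground image is black everywhere outside the object):

import cv2

background = cv2.imread("background.jpg")
foreground = cv2.imread("foreground_masked.jpg")  #black outside the object

blended = foreground_background_into1(background, foreground, with_smooth=True, thickness=3)
cv2.imwrite("blended.jpg", blended)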

Related

Python - replicating the GIMP's "Erase color" blend mode

I'm looking for a way to recreate the GIMP's Erase color blending mode in Python 3 & OpenCV2.
I know it's possible to erase a color using that library, but the code I run works on exactly one of them. Furthermore, I don't believe such a small amount of code could do something that advanced.
Looking for a solution, I found the blend-modes package by flrs, but it also doesn't include the option I want.
Sadly, I have no experience with OpenCV2 at the moment, but I think developing such a thing could be very helpful.
Can someone guide me on how to make this more reliable, or is it even possible to do with what I've got already?
OpenCV2 color removal
Code
import cv2
from PIL import Image
#-=-=-=-#
File_Name = r"Spectrogram.png"
SRC = cv2.imread(File_Name, 1)
TMP = cv2.cvtColor(SRC, cv2.COLOR_BGR2GRAY)
_, A = cv2.threshold(TMP, 0, 255, cv2.THRESH_BINARY)
B, G, R = cv2.split(SRC)
Colors = [B, G, R, A]
Picture = cv2.merge(Colors, 4)
#-=-=-=-#
# My CV2 image display doesn't include transparency
im = cv2.cvtColor(Picture, cv2.COLOR_BGR2RGB)
im = Image.fromarray(im)
im.show()
Result
(original and result screenshots omitted)
GIMP "Erase color" blending mode, for comparison (example images omitted): the background layer uses Normal blending, the foreground layer uses Erase color, and the result is composited with Normal blending.
Here is one simple way in Python/OpenCV.
Read the input
Choose a color range
Apply range to threshold the image
Invert the range as a mask to be used later for the alpha channel
Convert the image from BGR to BGRA
Put mask into the alpha channel of the BGRA image
Save the result
Input:
import cv2
import numpy as np
# load image and set the bounds
img = cv2.imread("red_black.png")
# choose color range
lower = (0,0,0) # lower bound for each BGR channel
upper = (140,0,190) # upper bound for each BGR channel
# create the mask
mask = cv2.inRange(img, lower, upper)
# invert mask
mask = 255 - mask
# convert image to BGRA
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
# put mask into alpha channel
result[:,:,3] = mask
# write result to disk
cv2.imwrite("red_black_color_removed.png", result)
# display it (though does not display transparency properly)
cv2.imshow("mask", mask)
cv2.imshow("results", result)
cv2.waitKey(0)
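Since cv2.imshow does not render the alpha channel, one way to check the saved result is to open the written PNG with PIL instead (a small sketch; most image viewers launched by show() will render the transparency):

from PIL import Image

# open the PNG written above and display it with the default viewer
Image.open("red_black_color_removed.png").show()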
Result:

How to overlay only the needed image?

I have two images.
Image 1:
Image 2:
Both the images have the same resolution. I want to overlay the second image on the first image.
This is what I tried:
from PIL import Image
img1 = Image.open("D:\\obj1__0.png")
img1 = img1.convert("RGBA")
img1 = img1.resize((640,360))
img2 = Image.open('D:\\Renderforest_Watermark.png')
img2 = img2.convert("RGBA")
img2 = img2.resize((640,360))
new_img = Image.blend(img1,img2,0.7)
new_img.save("D:\\Final.png")
Output:
My Question: How do I overlay image 2 on image 1 such that only the watermark is overlaid?
P.S: I already searched for this on Stack Overflow, but couldn't find any answers.
Your overlay image is incorrectly formed. It should be transparent where it is white, but it has no alpha channel, so it isn't see-through anywhere. I can only suggest making a synthetic alpha channel by guessing that it should be transparent where the overlay image is white:
#!/usr/bin/env python3
from PIL import Image
# Set a common size
size = (640, 360)
# Load background and overlay, removing the pointless alpha channel and resizing to a common size
bg = Image.open('background.png').convert('RGB').resize(size)
overlay = Image.open('overlay.png').convert('RGB').resize(size)
# Try and invent a mask by making white pixels transparent
mask = overlay.convert('L')
mask = mask.point(lambda p: 255 if p < 225 else 0)
# Paste overlay onto background only where the mask is, then save
bg.paste(overlay, None, mask)
bg.save('result.png')
If your image did have an alpha channel, you would avoid deleting the original alpha channel and open it like this instead:
overlay = Image.open('overlay.png').resize(size)
then delete these lines:
# Try and invent a mask by making white pixels transparent
mask = overlay.convert('L')
mask = mask.point(lambda p: 255 if p < 225 else 0)
then change the line after the above to:
# Paste overlay onto background only where the mask is, then save
bg.paste(overlay, None, overlay)
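Put together, that alpha-channel variant would look something like this (just a sketch, using the same file names as above):

#!/usr/bin/env python3
from PIL import Image

# Set a common size
size = (640, 360)
# Load the background and the overlay, keeping the overlay's own alpha channel this time
bg = Image.open('background.png').convert('RGB').resize(size)
overlay = Image.open('overlay.png').resize(size)
# Paste the overlay onto the background using its alpha channel as the mask, then save
bg.paste(overlay, None, overlay)
bg.save('result.png')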
Keywords: Image processing, PIL, Pillow, overlay, watermark, transparent, alpha.

How masking is applied in bitwise_operation in opencv?

I was looking at the OpenCV documentation and found something which I couldn't understand. I've tried to find an explanation on the web but couldn't find anything satisfying. Can you please help me understand a couple of lines of code?
Here is the code:
import cv2 as cv
# Load two images
img1 = cv.imread('messi5.jpg')
img2 = cv.imread('opencv-logo-white.png')
# I want to put logo on top-left corner, So I create a ROI
rows,cols,channels = img2.shape
roi = img1[0:rows, 0:cols ]
# Now create a mask of logo and create its inverse mask also
img2gray = cv.cvtColor(img2,cv.COLOR_BGR2GRAY)
ret, mask = cv.threshold(img2gray, 10, 255, cv.THRESH_BINARY)
mask_inv = cv.bitwise_not(mask)
# Now black-out the area of logo in ROI
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
# Take only region of logo from logo image.
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
# Put logo in ROI and modify the main image
dst = cv.add(img1_bg,img2_fg)
img1[0:rows, 0:cols ] = dst
cv.imshow('res',img1)
cv.waitKey(0)
cv.destroyAllWindows()
What I actually don't understand are these two lines
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
What do these lines actually do, and how is the masking applied?
If anyone can explain the masking being applied in the bitwise_and operation, that would be really helpful. Thanks.
If you look at the tutorial, the mask is the black-and-white image of the OpenCV logo; it was created by applying a threshold to the OpenCV logo.
The bitwise_and operation is a logical AND.
In this case, it takes two 8-bit numbers representing a pixel and applies the AND operation to those numbers.
Documentation describes what this function does.
Since the first two parameters are the same (both roi, or both img2), the result would be the same image if a mask weren't being used. Places where the mask is black are left the same as the destination image.
In this case, no destination image is provided, so OpenCV allocates a black image (zeros) for the destination image used in the function (this is generally how OpenCV works when a function is not provided with a Matrix).
Specifically img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv) will create a black matrix used in the function which later becomes the output img1_bg. Only the parts of this black image that match up with white pixels in mask_inv are filled with the pixels from roi. In other words, wherever mask_inv has white pixels, the corresponding roi values are copied into the all-black image generated by the function.
Similarly img2_fg = cv.bitwise_and(img2,img2,mask = mask) will create a black matrix used in the function which later becomes the output img2_fg. Only the parts of this black image that match up with white pixels in mask are filled with the pixels from img2.
This makes it so when you add img1_bg and img2_fg the result is only the part of each image that is masked.
Personally, I think this is a confusing use of bitwise_and. To demonstrate the function of bitwise_and, it would be clearer to drop the mask parameter and AND against a 3-channel version of the mask, e.g. img1_bg = cv.bitwise_and(roi, cv.merge([mask_inv, mask_inv, mask_inv])). This would give the same result: zeros where the mask is black, and the ROI values where it is not, since the mask pixels are either all 255s or all zeros.
If you don't care to demonstrate bitwise_and usage, in Python I think it would be clearer to use logical (boolean) indexing as follows (the masks from cv.threshold are 0/255, so compare against 0 to get boolean indices):
output = np.zeros(roi.shape, np.uint8)
output[mask_inv > 0] = img1_bg[mask_inv > 0]
output[mask > 0] = img2_fg[mask > 0]

How do you threshold lightness with HSL in OpenCV?

There is a project I'm working on which requires detecting the color white. After some research I decided to convert the RGB image to HSL and threshold the lightness to get the white. I'm working with OpenCV, so I wonder if there is a way to do it.
You can do it with 4 easy steps:
Convert to HLS
img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
Get the L channel
Lchannel = imgHLS[:,:,1]
Create the mask
#change 250 to lower numbers to include more values as "white"
mask = cv2.inRange(Lchannel, 250, 255)
Apply Mask to original image
res = cv2.bitwise_and(img,img, mask= mask)
This also depends on what is white for you, and you may change the values :) I used inRange in the L channel but you can save one step and do
mask = cv2.inRange(imgHLS, np.array([0,250,0]), np.array([255,255,255]))
instead of the lines:
Lchannel = imgHLS[:,:,1]
mask = cv2.inRange(Lchannel, 250, 255)
It is shorter, but I did it the other way first to make it more explicit and to show what I was doing.
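Put together, the whole thing is only a few lines (a sketch, reusing the HLS.png input from above and writing the result to a hypothetical output file):

import cv2
import numpy as np

img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)   # OpenCV orders the channels H, L, S
# keep only pixels whose lightness is between 250 and 255
mask = cv2.inRange(imgHLS, np.array([0, 250, 0]), np.array([255, 255, 255]))
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("white_only.png", res)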
Image:
Result:
The result looks almost like the mask (almost binary), but depending on your lower bound (I chose 250) you may also get some almost-white colors.

How to overlay object with transparent color in OpenCV

Example
I will try to explain my question according to the image. Firstly, I use Python 3 and OpenCV 3. I just want to colorize the white pixels of the mask (for example with a shiny blue). Then, using addWeighted, I want to blend that mask onto the original image. But the problem is I can't colorize the mask. The mask is the result of the inRange function and I can't transform it to RGB.
https://www.youtube.com/watch?v=hQ-bpfdWQh8
Just like in the video but single frame.
For a quick mask visualization, try this:
debug_img = img/2 + mask/2
If img isn't grayscale already, replace img with img.mean(axis=2) or use cvtColor().
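For example (a sketch, assuming img is a BGR frame and mask is the single-channel output of inRange):

import cv2

# convert to grayscale so the shape matches the single-channel mask
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
debug_img = gray/2 + mask/2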
Another way is to use indexing:
debug_img = img.copy()
debug_img[mask>0] = (0, 255, 0) # replace masked pixels with green
To make the green transparent, simply add
debug_img = debug_img/2 + img/2
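And if you specifically want the addWeighted route described in the question, a minimal sketch could look like this (assuming img is the original BGR frame and mask is the single-channel inRange output):

import cv2
import numpy as np

# paint the masked pixels blue on an otherwise black overlay (BGR order)
overlay = np.zeros_like(img)
overlay[mask > 0] = (255, 0, 0)
# blend the colored overlay onto the original frame
blended = cv2.addWeighted(img, 1.0, overlay, 0.5, 0)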
