I just started learning OpenCV and have been experimenting with joining and masking images. While trying things out, I came up with two questions about the code below.
...
# HSV values
lower_bounds = numpy.array([h_min, s_min, v_min])
upper_bounds = numpy.array([h_max, s_max, v_max])
# Generating a mask
mask = cv2.inRange(img_hsv, lower_bounds, upper_bounds)
# Using this to convert the mask into a 3-channel image so it can be
# joined with the other images below using hstack and vstack
# **Line 1**
th_mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
# Using bitwise_and with img as src1 and three channel mask as src2
# **Line 2**
result = cv2.bitwise_and(img, th_mask)
horizontal_stack_1 = numpy.hstack((img, img_hsv))
horizontal_stack_2 = numpy.hstack((th_mask, result))
vertical_stack = numpy.vstack((horizontal_stack_1, horizontal_stack_2))
...
So my questions were:
Is Line 1 in the code a correct way to generate a three-channel image so it can be joined with other three-channel images? If not, what is a better solution?
What is the difference between Line 2 and result = cv2.bitwise_and(img, img, mask = mask)? If I just need to extract specific colors, which method is better?
Thanks.
I would recommend using cv2.split and cv2.merge when working with image channels; see the OpenCV documentation for a detailed explanation.
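For example, here is a minimal sketch (with a dummy mask, just for illustration) of building the three-channel mask with cv2.merge instead of cv2.cvtColor:
import cv2
import numpy
# a small dummy single-channel mask standing in for the cv2.inRange output
mask = numpy.zeros((4, 4), dtype=numpy.uint8)
mask[1:3, 1:3] = 255
# replicating the mask into three channels gives the same result as
# cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
th_mask = cv2.merge((mask, mask, mask))
print(th_mask.shape)  # (4, 4, 3)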
The operation is not performed where the mask[index] value is zero; those output pixels are left black.
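As a rough illustration of why the two calls in the question give the same result for a strictly 0/255 mask, here is a small self-contained sketch (the image and mask are made up, and it assumes, as noted in the answers further down, that the freshly allocated destination starts out as zeros):
import cv2
import numpy
# dummy 3-channel image and single-channel binary mask for illustration
img = numpy.random.randint(0, 256, (4, 4, 3), dtype=numpy.uint8)
mask = numpy.zeros((4, 4), dtype=numpy.uint8)
mask[1:3, 1:3] = 255
# Line 2 style: AND the image with a 3-channel version of the mask
th_mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
result_a = cv2.bitwise_and(img, th_mask)
# mask-argument style: AND the image with itself, but only where mask != 0
result_b = cv2.bitwise_and(img, img, mask=mask)
print(numpy.array_equal(result_a, result_b))  # expected: True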
Related
I'm looking for a way to recreate GIMP's "Erase color" blending mode in Python 3 & OpenCV2.
I know it's possible to erase a color using that library, but the code I run works on exactly one of them. Furthermore, I don't believe such a small amount of code could do something that advanced.
Looking for a solution, I found blend-modes by flrs, but it doesn't include the option I want either.
Sadly, I have no experience with OpenCV2 at the moment, but I think developing such a thing could be very helpful.
Can someone guide me on how to make this more reliable, or is it even possible with what I've got already?
OpenCV2 color removal
Code
import cv2
from PIL import Image
#-=-=-=-#
File_Name = r"Spectrogram.png"
SRC = cv2.imread(File_Name, 1)
TMP = cv2.cvtColor(SRC, cv2.COLOR_BGR2GRAY)
_, A = cv2.threshold(TMP, 0, 255, cv2.THRESH_BINARY)
B, G, R = cv2.split(SRC)
Colors = [B, G, R, A]
Picture = cv2.merge(Colors, 4)
#-=-=-=-#
# My CV2 image display doesn't include transparency
im = cv2.cvtColor(Picture, cv2.COLOR_BGR2RGB)
im = Image.fromarray(im)
im.show()
Result
Original: (image omitted)
Result: (image omitted)
GIMP Erase color blending-mode:
Type     | Background | Foreground  | Result
Image    | (image)    | (image)     | (image)
Blending | Normal     | Erase color | Normal
Here is one simple way in Python/OpenCV.
Read the input
Choose a color range
Apply range to threshold the image
Invert the range as a mask to be used later for the alpha channel
Convert the image from BGR to BGRA
Put mask into the alpha channel of the BGRA image
Save the result
Input:
import cv2
import numpy as np
# load image and set the bounds
img = cv2.imread("red_black.png")
# choose color range
lower = (0,0,0) # lower bound for each BGR channel
upper = (140,0,190) # upper bound for each BGR channel
# create the mask
mask = cv2.inRange(img, lower, upper)
# invert mask
mask = 255 - mask
# convert image to BGRA
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
# put mask into alpha channel
result[:,:,3] = mask
# write result to disk
cv2.imwrite("red_black_color_removed.png", result)
# display it (though does not display transparency properly)
cv2.imshow("mask", mask)
cv2.imshow("results", result)
cv2.waitKey(0)
Result:
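Since cv2.imshow does not display the transparency, one possible way to preview the saved BGRA result is via PIL (a sketch assuming Pillow is installed; not part of the original answer):
import cv2
from PIL import Image
# reload the saved BGRA result and convert it to RGBA for PIL
result = cv2.imread("red_black_color_removed.png", cv2.IMREAD_UNCHANGED)
Image.fromarray(cv2.cvtColor(result, cv2.COLOR_BGRA2RGBA)).show()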
Hi, I am trying to find the maximum of a set of binary masks, for which I have used numpy.maximum.reduce. My idea is that the pixel-wise maximum over all the masks points to the regions with the most edges in the images while avoiding overlap. But the result is not as expected: it does not show the white regions as the maximum and instead takes the gray regions for the final output mask (max). To get each mask I performed dilation and blurring of the edge-detected images, so will that affect the final result (meaning comparing the masks to find the most edges)?
Q1. Is it the right approach to find the maximum of the masks and avoid overlapping regions?
Q2. Will it return the regions with most edges?
Q3. Should I use the masks or the original edge detected images for getting the most edges?
Results of combining masks
Individual Masked Images
import glob
import cv2
import numpy as np
from scipy import ndimage
# auto_canny, im_1 (the background image) and the lists image_1, image_2
# are defined elsewhere in my script
for file in glob.glob("images/*.jpg"):
    img = cv2.imread(file)
    # edge detection
    canny = auto_canny(img)
    # dilation (morphological function to increase edge width)
    img_dilate = cv2.dilate(canny, (3,3), iterations = 1)
    # Gaussian blur to smooth the edges and remove noise
    mask = ndimage.gaussian_filter(img_dilate, sigma=5)
    mask[mask < 30] = 0
    mask[(mask >= 30) & (mask < 70)] = 30
    mask[(mask >= 70) & (mask < 110)] = 110
    mask[mask > 110] = 255
    # retrieve regions from the original images
    res = cv2.bitwise_and(img, img, mask = mask)
    mask_inv = cv2.bitwise_not(mask)
    # add the extracted regions to the background image
    main_ = cv2.bitwise_and(im_1, im_1, mask = mask_inv)
    result = cv2.add(main_, res)
    cv2.imshow("result", mask)
    cv2.waitKey(0)
    image_1.append(mask)  # list of masks
    image_2.append(res)   # list of all extracted regions
for i in range(0, len(image_1)):
    max_ = np.maximum.reduce([image_1[i]])
    cv2.imshow("max", max_)
    cv2.waitKey(0)
Update
max_img = np.zeros((image_1[0].shape[:3]),dtype=np.uint8)
max_img = np.maximum(max_img,image_1)
cv2.imshow("max",max_img)
cv2.waitKey(0)
Using this update section, I ran into the error: (-206) Unrecognized or unsupported array type in function cvGetMat
Here is a simple example of getting the maximum value pixel-by-pixel of 4 binary mask images using Python/OpenCV/Numpy.
4 Masks:
import cv2
import numpy as np
# read masks
mask1 = cv2.imread('mask1.png')
mask2 = cv2.imread('mask2.png')
mask3 = cv2.imread('mask3.png')
mask4 = cv2.imread('mask4.png')
# check shape so that image of zeros (below) is created compatible
print(mask1.shape)
# make list
masks = [mask1, mask2, mask3, mask4]
# initialize maximum array to zeros
image_max = np.full((mask1.shape), (0,0,0), dtype=np.uint8)
# get maximum pair-wise from mask and previous maximum image
for mask in masks:
    image_max = np.maximum(image_max, mask)
# show results
cv2.imshow('maximum', image_max)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('mask1-4max.png', image_max)
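Since the question specifically mentions numpy.maximum.reduce, the same pixel-wise maximum can presumably also be taken in a single call by reducing over the whole list of masks (a sketch reusing the mask files read above):
import cv2
import numpy as np
# read the same four masks as above
masks = [cv2.imread('mask{}.png'.format(i)) for i in range(1, 5)]
# reducing over the list is equivalent to repeated pair-wise np.maximum calls
image_max = np.maximum.reduce(masks)
cv2.imwrite('mask1-4max_reduce.png', image_max)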
I was looking at the OpenCV documentation and found something I couldn't understand. I've tried to find an explanation on the web but couldn't find anything satisfying. Can you please help me with a line of code?
Here is the code:
import cv2 as cv
# Load two images
img1 = cv.imread('messi5.jpg')
img2 = cv.imread('opencv-logo-white.png')
# I want to put logo on top-left corner, So I create a ROI
rows,cols,channels = img2.shape
roi = img1[0:rows, 0:cols ]
# Now create a mask of logo and create its inverse mask also
img2gray = cv.cvtColor(img2,cv.COLOR_BGR2GRAY)
ret, mask = cv.threshold(img2gray, 10, 255, cv.THRESH_BINARY)
mask_inv = cv.bitwise_not(mask)
# Now black-out the area of logo in ROI
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
# Take only region of logo from logo image.
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
# Put logo in ROI and modify the main image
dst = cv.add(img1_bg,img2_fg)
img1[0:rows, 0:cols ] = dst
cv.imshow('res',img1)
cv.waitKey(0)
cv.destroyAllWindows()
What I actually don't understand are these two lines
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
What do these lines actually do, and how is the masking applied?
If anyone can explain how the mask is applied in the bitwise_and operation, that would be really helpful. Thanks.
If you look at the tutorial, the mask is the black-and-white image of the OpenCV logo; it was created by applying a threshold to the logo.
The bitwise_and operation is a logical AND operation: in this case, it takes two 8-bit numbers representing a pixel and applies the AND operation to those numbers.
The documentation describes what this function does.
Since the first two parameters are the same (both roi or both img2), the result would be the same image if no mask were being used. Places where the mask is black are left the same as the destination image.
In this case, no destination image is provided, so OpenCV allocates a black image (zeros) for the destination image used in the function (this is generally how OpenCV works when a function is not provided with a Matrix).
Specifically, img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv) will create a black matrix used in the function which later becomes the output img1_bg. Only the parts of this black image that match up with white pixels in mask_inv are filled with the pixels from roi. In other words, wherever mask_inv has white pixels, the roi value is copied into the otherwise black image at the corresponding coordinate.
Similarly img2_fg = cv.bitwise_and(img2,img2,mask = mask) will create a black matrix used in the function which later becomes the output img2_fg. Only the parts of this black image that match up with white pixels in mask are filled with the pixels from img2.
This makes it so when you add img1_bg and img2_fg the result is only the part of each image that is masked.
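As a rough self-contained demonstration of this behaviour (the tiny arrays below are made up just for illustration, and the output outside the mask is black because, as described above, the freshly allocated destination starts as zeros):
import cv2 as cv
import numpy as np
# a tiny 2x2 "image" with three channels
roi = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)
# mask selecting only the top-left pixel
mask = np.array([[255, 0],
                 [0, 0]], dtype=np.uint8)
out = cv.bitwise_and(roi, roi, mask=mask)
print(out[0, 0])  # [10 20 30]: the roi pixel is copied where the mask is white
print(out[1, 1])  # [0 0 0]: left black where the mask is zero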
Personally, I think this is a confusing use of bitwise_and. I think that to demonstrate the function of bitwise_and it would be clearer to remove the mask parameter, e.g. img1_bg = cv.bitwise_and(roi, cv.cvtColor(mask_inv, cv.COLOR_GRAY2BGR)) (the mask must first be expanded to three channels so the array shapes match). This gives the same result: zeros where the mask is black, and the ROI values where it is not, since each mask pixel is either all ones or all zeros.
If you don't care to demonstrate bitwise_and usage, in Python I think it would be clearer to use logical indexing as follows (the masks hold 0/255 values, so they are compared against zero before indexing):
output = np.zeros(roi.shape, np.uint8)
output[mask_inv > 0] = img1_bg[mask_inv > 0]
output[mask > 0] = img2_fg[mask > 0]
There is a project I'm working on that requires detecting the color white. After some research I decided to convert the RGB image to HLS and threshold the lightness channel to get white. I'm working with OpenCV, so I wonder if there is a way to do it.
You can do it with 4 easy steps:
Convert to HLS
img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
Get the L channel
Lchannel = imgHLS[:,:,1]
Create the mask
#change 250 to lower numbers to include more values as "white"
mask = cv2.inRange(Lchannel, 250, 255)
Apply Mask to original image
res = cv2.bitwise_and(img,img, mask= mask)
This also depends on what is white for you, and you may change the values :) I used inRange in the L channel but you can save one step and do
mask = cv2.inRange(imgHLS, np.array([0,250,0]), np.array([255,255,255]))
instead of the lines:
Lchannel = imgHLS[:,:,1]
mask = cv2.inRange(Lchannel, 250, 255)
It is shorter, but I did it the other way first to make it more explicit and to show what I was doing.
Image:
Result:
The result looks almost like the mask (almost binary), but depending on your lower bound (I chose 250) you may even get some almost-white colors.
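Putting the steps above together, a minimal end-to-end sketch (the filename "HLS.png" and the output name "white_only.png" are just placeholders, and 250 is the same threshold used above):
import cv2
# read the image and convert BGR -> HLS
img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
# L (lightness) is the second channel in HLS;
# lower 250 to count more values as "white"
Lchannel = imgHLS[:,:,1]
mask = cv2.inRange(Lchannel, 250, 255)
# keep only the pixels selected by the mask
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("white_only.png", res)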
I am trying to extract the blue colour from an input image. For that I create a blue HSV colour boundary and threshold the HSV image by using the command
mask_img = cv2.inRange(hsv, lower_blue, upper_blue)
After that I used a bitwise_and on the input image and the threshold image by using
res = cv2.bitwise_and(img, img, mask = mask_img)
where img is the input image. I got this code from the OpenCV documentation, but I didn't understand why three arguments are used in bitwise_and and what each argument actually means. Why is the same image used as src1 and src2?
Also, what is the use of the mask keyword here? Please help me find the answer.
The basic concept behind this is the value of the colour black: its value is 0 in OpenCV, so black + anycolor = anycolor because the value of black is 0.
Now suppose we have two images, one named img1 and the other img2.
img2 contains a logo which we want to place on img1. We create a threshold and then the mask and mask_inv of img2, and also create the roi of img1.
Now we have to do two things to add the logo from img2 onto img1.
We create the background of the roi as img1_bg with the help of mask_inv. mask_inv has two regions, one black and one white; in the white region we will put the img1 part and leave the black as it is:
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
In your question you have directly used the mask created from the image:
res = cv2.bitwise_and(img,img,mask = mask_img)
and from img2 we need to create the logo as the foreground of the roi:
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
Here we have used the mask layer: the logo part of img2 gets filled into the white part of the mask.
Now when we add both, we get a perfectly combined roi.
For full description and understanding visit:
OPEN CV CODE FILES AND FULL DESCRIPTION
The AND operation will be performed only if mask[i] does not equal zero; otherwise the result of the operation will be zero. The mask should be a black-and-white image with a single channel. You can see this link:
http://docs.opencv.org/2.4.13.2/modules/core/doc/operations_on_arrays.html?highlight=bitwise#bitwise-and
What does each argument actually mean?
res = cv2.bitwise_and(img,img,mask = mask_img)
src1: the first image (the first object for merging)
src2: the second image (the second object for merging)
mask: can be understood as the rule for merging. If a region of the mask image (which is gray-scaled) is black (valued 0), the corresponding regions of the first and second image are not combined; otherwise the merge is carried out. In your code, the referenced image is "mask_img".
In my case, my code is correct: it makes white + anycolor = anycolor.
import cv2
import numpy as np
# Load two images
img1 = cv2.imread('bongSung.jpg')
img2 = cv2.imread('opencv.jpg')
# I want to put logo on top-left corner, so I create a ROI
rows, cols, channels = img2.shape
roi = img1[0:rows, 0:cols]
# Now we need to create a mask of the logo; the mask is a grayscale conversion of the image
img2gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 220, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('mask', mask)
mask_inv = cv2.bitwise_not(mask)
#cv2.imshow("mask_inv", mask_inv)
#When using bitwise_and() in opencv with python then white + anycolor = anycolor; black + anycolor = black
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
#cv2.imshow("img1_bg", img1_bg)
cv2.imshow("img2", img2)
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
cv2.imshow('img2_fg', img2_fg)
dst = cv2.add(img1_bg,img2_fg)
img1[0:rows, 0:cols] = dst
#cv2.imshow("Image", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
From the above answers we may know the definitions of the parameters of bitwise_and(), but they do not answer the other question:
Why is the same image used at src1 and src2?
This question is probably caused by the over-simplified function definition in the OpenCV documentation, which may be ambiguous to some people. In the documentation, bitwise_and() is defined as
dst(I) = src1(I) ∧ src2(I), if mask(I) ≠ 0, where ∧ represents the AND operator.
From this definition, at first sight I could not get a picture of how dst(I) is processed when mask(I) is 0.
From the test results, I think a clearer function definition would be
dst(I) = src1(I) ∧ src2(I), if mask(I) ≠ 0;
otherwise dst(I) keeps its original value, and the default value of all elements of the dst array is 0.
Now we may know that using the same image for src1 and src2 will only keep the original image parts in the area where mask(I) ≠ 0, while the other area shows the dst image (i.e. the mask shape).
Additionally, for the other bitwise operations the definitions should be the same as above; they also need the "otherwise" condition and the description of the default value of the dst array.
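A small sketch of that "otherwise" behaviour, passing a pre-filled dst explicitly so that the untouched pixels are visible (the toy arrays are made up purely for illustration):
import cv2
import numpy as np
src = np.full((2, 2, 3), 200, dtype=np.uint8)   # a uniform "image"
mask = np.array([[255, 0],
                 [0, 0]], dtype=np.uint8)       # select only the top-left pixel
# pre-fill the destination with 7 so untouched pixels are easy to spot
dst = np.full_like(src, 7)
cv2.bitwise_and(src, src, dst=dst, mask=mask)
print(dst[0, 0])  # [200 200 200] -> written where mask != 0
print(dst[1, 1])  # [7 7 7] -> keeps its original value where mask == 0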
The link below clearly explains the bitwise operations and also the significance of each parameter.
http://opencvexamples.blogspot.com/2013/10/bitwise-and-or-xor-and-not.html
void bitwise_and(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray())
Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
Parameters:
src1 – first input array or a scalar.
src2 – second input array or a scalar.
src – single input array.
value – scalar value.
dst – output array that has the same size and type as the input arrays.
mask – optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed
Regarding using img twice, my guess is that we don't really care what img[i] AND img[i] is, as it's just img[i] for a bitwise AND. What matters is that, as mentioned by Mohammed Awney, when the mask is 0 we make img[i] be 0, and otherwise we leave the pixel alone. This is a way to make certain pixels in img black, according to our mask.
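A rough numpy equivalent of that idea (not what OpenCV does internally, just an illustration with made-up data):
import numpy as np
# toy image and 0/255 mask for illustration
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255
# x & x == x, so ANDing the image with itself changes nothing;
# the mask then just decides which pixels survive
masked = img.copy()
masked[mask == 0] = 0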
bitwise_and ( InputArray src1,
InputArray src2,
OutputArray dst,
InputArray mask = noArray()
)
src1 first input array or a scalar.
src2 second input array or a scalar.
dst output array that has the same size and type as the input arrays.
mask optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.
dst(I) = src1(I) ∧ src2(I), if mask(I) ≠ 0
and the mask operates on dst.
It computes the bitwise conjunction of the two arrays (dst = src1 & src2), i.e. it calculates the per-element bit-wise conjunction of two arrays or of an array and a scalar.