explain arguments meaning in res = cv2.bitwise_and(img,img,mask = mask) - python

I am trying to extract the blue colour of an input image. For that I create a blue HSV colour boundary and threshold the HSV image using the command
mask_img = cv2.inRange(hsv, lower_blue, upper_blue)
After that I used a bitwise_and on the input image and the thresholded image:
res = cv2.bitwise_and(img, img, mask = mask_img)
where img is the input image. I got this code from the OpenCV docs, but I didn't understand why three arguments are used in bitwise_and and what each argument actually means. Why is the same image used for src1 and src2?
And also, what is the mask keyword used for here? Please help me find the answer.

The basic concept behind this is the value of the colour black: its value is 0 in OpenCV, so black + anycolor = anycolor, because the value of black is 0.
Now suppose we have two images, one named img1 and the other img2.
img2 contains a logo which we want to place on img1. We create a threshold, then the mask and mask_inv of img2, and we also create a roi of img1.
Now we have to do two things to add the logo of img2 onto img1.
We create the background of the roi as img1_bg with the help of mask_inv. mask_inv has two regions, one black and one white; in the white region we will put the img1 part and leave the black as it is:
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
In your question you have used the mask of the image directly:
res = cv2.bitwise_and(img,img,mask = mask_img)
In img2 we need to create the logo as the foreground of the roi:
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
Here we have used the mask layer: the logo part of img2 gets filled into the white part of the mask.
Now when we add both, we get a perfectly combined roi.
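A compact sketch of just those steps, with assumed file names (the next answer gives a full worked example):
import cv2

img1 = cv2.imread('scene.jpg')      # assumed background image
img2 = cv2.imread('logo.png')       # assumed logo image
rows, cols = img2.shape[:2]
roi = img1[0:rows, 0:cols]

gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)   # white where the logo is
mask_inv = cv2.bitwise_not(mask)

img1_bg = cv2.bitwise_and(roi, roi, mask=mask_inv)   # roi with the logo area blacked out
img2_fg = cv2.bitwise_and(img2, img2, mask=mask)     # logo on a black background

img1[0:rows, 0:cols] = cv2.add(img1_bg, img2_fg)     # black + anycolor = anycolor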

The AND operation is performed only where mask[i] is non-zero; elsewhere the result is zero. The mask should be a single-channel black-and-white image. You can see this link:
http://docs.opencv.org/2.4.13.2/modules/core/doc/operations_on_arrays.html?highlight=bitwise#bitwise-and
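A tiny numpy example of that rule (made-up 2x2 values):
import cv2
import numpy as np

src = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
mask = np.array([[0, 255],
                 [255, 0]], dtype=np.uint8)

out = cv2.bitwise_and(src, src, mask=mask)
print(out)
# [[ 0 20]
#  [30  0]]  -> where mask == 0 the (initially black) output stays 0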

What does each argument actually mean?
res = cv2.bitwise_and(img,img,mask = mask_img)
src1: the first image (the first object for merging)
src2: the second image (the second object for merging)
mask: understood as the rule for merging. Where the mask (a single-channel, grayscale image) is black (value 0), the region is not combined (no merging of the first image's region with the second's); where it is non-zero, it is. In your code, the referenced image is "mask_img".
In my own code below, the rule works out to white + anycolor = anycolor:
import cv2
import numpy as np
# Load two images
img1 = cv2.imread('bongSung.jpg')
img2 = cv2.imread('opencv.jpg')
# I want to put logo on top-left corner, so I create a ROI
rows, cols, channels = img2.shape
roi = img1[0:rows, 0:cols]
# Now we need to create a mask of the logo; the mask comes from a grayscale conversion of the logo image
img2gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 220, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('mask', mask)
mask_inv = cv2.bitwise_not(mask)
#cv2.imshow("mask_inv", mask_inv)
#When using bitwise_and() in opencv with python then white + anycolor = anycolor; black + anycolor = black
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
#cv2.imshow("img1_bg", img1_bg)
cv2.imshow("img2", img2)
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
cv2.imshow('img2_fg', img2_fg)
dst = cv2.add(img1_bg,img2_fg)
img1[0:rows, 0:cols] = dst
#cv2.imshow("Image", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()

The answers above give the definitions of the parameters of bitwise_and(), but none of them answers the other question:
Why is the same image used for src1 and src2?
The confusion comes from the overly brief function definition in the OpenCV documentation, which can be ambiguous. There, bitwise_and() is defined as
dst(I) = src1(I) ∧ src2(I), if mask(I) != 0, where ∧ represents the bitwise AND operator.
From this definition alone, at first sight, it is not clear how dst(I) is handled when mask(I) is 0.
From the test results, I think a clearer definition would be:
dst(I) = src1(I) ∧ src2(I), if mask(I) != 0;
otherwise dst(I) keeps its original value, and the default value of all elements of the dst array is 0.
Now we can see that when the same image is used for src1 and src2, the result keeps the original image only in the area where mask(I) != 0, while the rest shows the dst image (i.e. it stays black, in the shape of the mask).
Additionally, for the other bitwise operations the definitions should be amended the same way: they also need the 'otherwise' condition and the description of the default value of the dst array.
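A small sketch with synthetic data that matches this reading: where the mask is non-zero the result equals the source, and everywhere else it keeps the default 0.
import cv2
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255   # keep only this 2x2 window

res = cv2.bitwise_and(img, img, mask=mask)

# src1 & src2 with identical images is just the image itself,
# so res is img inside the mask and the default 0 outside it
assert np.array_equal(res[1:3, 1:3], img[1:3, 1:3])
assert not res[mask == 0].any()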

The link below clearly explains the bitwise operations and the significance of each parameter.
http://opencvexamples.blogspot.com/2013/10/bitwise-and-or-xor-and-not.html
void bitwise_and(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray())
Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
Parameters:
src1 – first input array or a scalar.
src2 – second input array or a scalar.
src – single input array.
value – scalar value.
dst – output array that has the same size and type as the input arrays.
mask – optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed

Regarding using img twice: my guess is that we don't really care what img[i] AND img[i] is, as it's just img[i] for a bitwise operation. What matters is that, as mentioned by Mohammed Awney, when the mask is 0 we make img[i] 0, and otherwise we leave the pixel alone. This is a way to make certain pixels in img black, according to our mask.
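A two-line sanity check of that point (pure numpy, assuming uint8 pixels):
import numpy as np

x = np.arange(256, dtype=np.uint8)
assert np.array_equal(x & x, x)   # ANDing a value with itself changes nothing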

bitwise_and ( InputArray src1,
InputArray src2,
OutputArray dst,
InputArray mask = noArray()
)
src1 first input array or a scalar.
src2 second input array or a scalar.
dst output array that has the same size and type as the input arrays.
mask optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.
dst(I) = src1(I) ∧ src2(I), if mask(I) ≠ 0
and the mask operates on dst.
It computes the bitwise conjunction of the two arrays (dst = src1 & src2), i.e. it calculates the per-element bit-wise conjunction of two arrays or of an array and a scalar.

Related

How to properly insert mask into original image?

My goal is to cover a face with circular noise (salt-and-pepper / black-and-white dots); however, all I have managed to achieve so far is rectangular noise.
Using this image:
I found the face coordinates (x,y,w,h) = [389, 127, 209, 209]
and, using this function, added noise like so:
img = cv2.imread('like_this.jpg')
x,y,w,h = [389, 127, 209, 209]
noised = add_noise(img[y:y+h,x:x+w])
new = img.copy()
new[y:y+h,x:x+w] = noised
cv2.imshow('new', new)
From x, y, w, h I found that I want my circle to be at (493, 231) with radius 105.
While researching I found something about masking and bitwise operations, so I tried:
mask = np.zeros(new.shape[:2], dtype='uint8')
cv2.circle(mask, (493, 231), 105, 255, -1)
new_gray = cv2.cvtColor(new, cv2.COLOR_BGR2GRAY)
masked = cv2.bitwise_and(new_gray, new_gray , mask=mask)
cv2.imshow('masked', masked)
Here, the problem arises:
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
OR = cv2.bitwise_or(masked, img_gray) # removes the black dots, idk why
cv2.imshow('bitwise - OR', OR)
The black dots get removed from the noise, and besides that I can't seem to convert OR back to BGR.
Maybe there is a better way to do this.
Please help, any guidance is appreciated!
So the issue is how to use masking. There are two options, numpy and OpenCV.
numpy
Since you copied the noisy area into the result and now want to restore everything outside of the circle, I'll use mask == 0 to get a boolean array that is true everywhere outside of the circle.
With numpy, boolean arrays can be used as indices. The result is a "view", it behaves like a slice. Operations through it affect the original data.
noised = add_noise(img[y:y+h,x:x+w])
new = img.copy()
new[y:y+h,x:x+w] = noised # whole rectangle affected
new[mask == 0] = img[mask == 0] # restore everything outside of the circle
All three arrays (mask, new, img) need to have the same shape.
OpenCV, masks
Not much point to it with Python and numpy available, but many of its C++ APIs take an optional mask argument that modifies that function's operation. Mat::copyTo() is one such method.
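If your OpenCV build is recent enough (4.x), the Python bindings do expose copyTo with a mask argument; a minimal sketch under that assumption, reusing img, mask and new from the question:
import cv2 as cv

# copyTo copies src pixels into dst wherever the mask is non-zero,
# so this restores everything outside the circle in one call
new = cv.copyTo(img, cv.bitwise_not(mask), new)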
OpenCV, bitwise operations
With bitwise operations, the mask no longer just labels each pixel as true or false; it has to be 3-channel, and all eight bits of every value count, so it must contain only 0 and 255 (0xFF).
I'll erase everything outside of the circle first, then add back the part of the source that is outside of the circle. Both operations use bitwise_and. Two operations are required because bitwise operations can't just "overwrite". They react to both operands. I'll also use numpy's ~ operator to bitwise-negate the mask.
bitwise_mask = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) # blow it up
new = cv.bitwise_and(new, bitwise_mask)
new += cv.bitwise_and(img, ~bitwise_mask)
Bitwise operations work on binary conditions, turning a pixel "off" if it has a value of zero and "on" if it has a value greater than zero.
In your case the bitwise_or "removes" both the black background of the mask and the black points of the noise.
I propose to do it like this:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read image
img = cv2.imread('like_this.jpg')
x,y,w,h = [389, 127, 209, 209]
# Add squared noise
noised = add_noise(img[y:y+h,x:x+w])
new = img.copy()
new[y:y+h,x:x+w] = noised
# Transform the image to graylevel
new_gray = cv2.cvtColor(new, cv2.COLOR_BGR2GRAY)
# Create a mask in which black noise is labelled 1 and white noise 2
mask = np.zeros(new.shape[:2], dtype='uint8')
mask[new_gray==0] = 1
mask[new_gray==255] = 2
# Mask the previous mask with a circle
circle = np.zeros(new.shape[:2], dtype='uint8')
circle = cv2.circle(circle, (493, 231), 105, 1, -1)
mask = mask * circle
plt.imshow(mask)
plt.show()
# Join the mask adding the noise to the image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray[mask==1] = 0
gray[mask==2] = 255
cv2.imshow('Result', gray)
cv2.waitKey(0)
Hope it works. If you need more help let me know.

Masking and bitwise_and in openCV

So I just started learning OpenCV and was learning about joining and masking images. While trying things out in practice, I came across two questions about the code below.
...
# HSV values
lower_bounds = numpy.array([h_min, s_min, v_min])
upper_bounds = numpy.array([h_max, s_max, v_max])
# Generating a mask
mask = cv2.inRange(img_hsv, lower_bounds, upper_bounds)
# Using this to convert the mask into a 3-channel image so as to
# join it with the other images below using hstack and vstack
# **Line 1**
th_mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
# Using bitwise_and with img as src1 and three channel mask as src2
# **Line 2**
result = cv2.bitwise_and(img, th_mask)
horizontal_stack_1 = numpy.hstack((img, img_hsv))
horizontal_stack_2 = numpy.hstack((th_mask, result))
vertical_stack = numpy.vstack((horizontal_stack_1, horizontal_stack_2))
...
So my questions were:
Can Line 1 in the code be considered correct for generating a three-channel image to join with other three-channel images? If not, what's a better solution?
What's the difference between Line 2 and result = cv2.bitwise_and(img, img, mask = mask), and if I just need to fetch out specific colors, which method seems better?
Thanks.
I would recommend using cv2.split and cv2.merge while working with image channels; see the OpenCV documentation for a detailed explanation.
The operation will not execute where the mask[index] value is zero.
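A short sketch on synthetic data showing why the two lines give the same result when the mask only holds 0 and 255: ANDing with 255 keeps all eight bits, so a 3-channel 0/255 mask behaves exactly like the mask= parameter.
import cv2
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 2:] = 255                                   # pretend this came from cv2.inRange

th_mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)    # Line 1: 3-channel 0/255 mask
res_a = cv2.bitwise_and(img, th_mask)               # Line 2
res_b = cv2.bitwise_and(img, img, mask=mask)        # the tutorial form

assert np.array_equal(res_a, res_b)
Building th_mask with cv2.merge([mask, mask, mask]) would work just as well, which is what the first answer is hinting at.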

OpenCV + Python - get average RGB around point

I am new to OpenCV.
My idea is: I have a picture, and I have defined 4 points (pixels?), e.g. 0x0, 0x100, 100x0, 100x100.
What would be the best approach to probe each of those, but creating a square around them?
So e.g. for 0x0 (well, not the best example as the square can't extend around it), let's say a 50x50 point: create some kind of mask around that pixel, say a square 10x10 pixels in width and height, then get the average RGB of that square, and then do it for all points.
So far I can only probe single points for RGB, but I have no idea how to approach the masking.
I have a feeling OpenCV could have an easy solution for that, but all I am finding is super overcomplicated (imho) code that I don't really understand.
If you have an irregular region, then make a mask for it. You can compute the mean of region corresponding to the mask in Python/OpenCV as follows:
Input:
Mask:
import cv2
# load image
img = cv2.imread('zelda1.jpg')
# load mask as grayscale
mask = cv2.imread('zelda1_mask.png', 0)
# get mean of pixels corresponding to mask
mean = cv2.mean(img, mask=mask)
# print the mean of each channel (cv2.mean always returns 4 values; unused channels are 0)
print(mean)
# mask region on input
region = img.copy()
img_masked = cv2.bitwise_and(img, img, mask=mask)
# Save result
cv2.imwrite('zelda1_region2.jpg', img_masked)
# Display input
cv2.imshow('input', img)
cv2.imshow('mask', mask)
cv2.imshow('input masked', img_masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
Region of image where mean is computed:
Mean:
(50.23702664796634, 32.84151472650771, 198.3702664796634, 0.0)
Here is one way to do that in Python/OpenCV, using Numpy slicing to get a square region about any given point.
Input:
import cv2
# load image
img = cv2.imread('zelda1.jpg')
# Define point
x = 90
y = 200
# Define region size
rr = 10
# crop image +-10 pixels around the point (a 20x20 square)
crop = img[y-rr:y+rr, x-rr:x+rr]
# compute mean
mean = cv2.mean(crop)
# print the mean of each channel (cv2.mean always returns 4 values; unused channels are 0)
print(mean)
# draw region on input
region = img.copy()
cv2.rectangle(region, (x-rr,y-rr), (x+rr,y+rr), (255,255,255), 1)
# Save result
cv2.imwrite('zelda1_region.jpg', region)
# Display input
cv2.imshow('input', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Region:
Mean of region for each channel:
(53.6175, 35.9, 205.2375, 0.0)
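To cover several probe points at once (as the question asks), a minimal sketch that loops the same slicing idea over a list of assumed (x, y) coordinates:
import cv2

img = cv2.imread('zelda1.jpg')
points = [(50, 50), (100, 0), (0, 100), (100, 100)]   # assumed probe points
rr = 5                                                # half-size -> 10x10 square

h, w = img.shape[:2]
for x, y in points:
    # clamp the square to the image borders so corner points still work
    x0, x1 = max(x - rr, 0), min(x + rr, w)
    y0, y1 = max(y - rr, 0), min(y + rr, h)
    mean = cv2.mean(img[y0:y1, x0:x1])
    print((x, y), mean[:3])   # BGR means of the square around this point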

How masking is applied in bitwise_operation in opencv?

I was looking at the documentation of the OpenCV and found something which I couldn't understand. I've tried to find it on the web but couldn't find anything satisfying. Can you please help me in a line of code?
Here is the code:
# Load two images
img1 = cv.imread('messi5.jpg')
img2 = cv.imread('opencv-logo-white.png')
# I want to put logo on top-left corner, So I create a ROI
rows,cols,channels = img2.shape
roi = img1[0:rows, 0:cols ]
# Now create a mask of logo and create its inverse mask also
img2gray = cv.cvtColor(img2,cv.COLOR_BGR2GRAY)
ret, mask = cv.threshold(img2gray, 10, 255, cv.THRESH_BINARY)
mask_inv = cv.bitwise_not(mask)
# Now black-out the area of logo in ROI
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
# Take only region of logo from logo image.
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
# Put logo in ROI and modify the main image
dst = cv.add(img1_bg,img2_fg)
img1[0:rows, 0:cols ] = dst
cv.imshow('res',img1)
cv.waitKey(0)
cv.destroyAllWindows()
What I actually don't understand are these two lines
img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv)
img2_fg = cv.bitwise_and(img2,img2,mask = mask)
What these lines actually do and how the masking will be applied?
If anyone can explain the masking being applied in the bitwise_and operation, that would be really helpful. Thanks.
If you look at the tutorial, the mask is the black-and-white image of the OpenCV logo; it was created by applying a threshold to the logo.
The bitwise_and operation is a logical AND operation.
In this case, it takes the two 8-bit numbers representing a pixel and applies the AND operation to those numbers.
The documentation describes what this function does.
Since the first two parameters are the same (both roi, or both img2), the result would be the same image if no mask were being used. Places where the mask is black are left the same as the destination image.
In this case, no destination image is provided, so OpenCV allocates a black image (zeros) for the destination used inside the function (this is generally how OpenCV works when a function is not provided with an output matrix).
Specifically, img1_bg = cv.bitwise_and(roi,roi,mask = mask_inv) creates a black matrix inside the function which later becomes the output img1_bg. Only the parts of this black image that line up with white pixels in mask_inv are filled with the pixels from roi. In other words, wherever mask_inv has white pixels, the roi value is copied into the otherwise pure black image generated by the function at the corresponding coordinate.
Similarly, img2_fg = cv.bitwise_and(img2,img2,mask = mask) creates a black matrix inside the function which later becomes the output img2_fg. Only the parts of this black image that line up with white pixels in mask are filled with the pixels from img2.
This makes it so that when you add img1_bg and img2_fg, the result is only the masked part of each image.
Personally, I think this is a confusing use of bitwise_and. To demonstrate what bitwise_and does, I think it would be clearer to drop the mask parameter and AND with the mask directly, e.g. img1_bg = cv.bitwise_and(roi, cv.cvtColor(mask_inv, cv.COLOR_GRAY2BGR)) (the mask has to be expanded to 3 channels so the shapes match). This gives the same result: zeros where the mask is black, and the roi values where it is not, since the mask pixels have either all bits set or all bits clear.
If you don't care about demonstrating bitwise_and usage, in Python I think it is clearer to use boolean indexing (note that the masks are 0/255 uint8 arrays, so compare against 0 to get boolean indices):
output = np.zeros(roi.shape, np.uint8)
output[mask_inv > 0] = img1_bg[mask_inv > 0]
output[mask > 0] = img2_fg[mask > 0]
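For what it's worth, a quick check on small synthetic arrays (stand-ins for roi and img2) that the masked bitwise_and form and the boolean-indexing form agree:
import cv2
import numpy as np

roi  = np.random.randint(0, 256, (6, 6, 3), dtype=np.uint8)
img2 = np.random.randint(0, 256, (6, 6, 3), dtype=np.uint8)
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 2:4] = 255
mask_inv = cv2.bitwise_not(mask)

dst = cv2.add(cv2.bitwise_and(roi, roi, mask=mask_inv),
              cv2.bitwise_and(img2, img2, mask=mask))

out = np.zeros(roi.shape, np.uint8)
out[mask_inv > 0] = roi[mask_inv > 0]
out[mask > 0] = img2[mask > 0]

assert np.array_equal(dst, out)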

OpenCV, Python: How to use mask parameter in ORB feature detector

By reading a few answers on stackoverflow, I've learned this much so far:
The mask has to be a numpy array (which has the same shape as the image) with data type CV_8UC1 and have values from 0 to 255.
What is the meaning of these numbers, though? Is it that any pixels with a corresponding mask value of zero will be ignored in the detection process and any pixels with a mask value of 255 will be used? What about the values in between?
Also, how do I initialize a numpy array with data type CV_8UC1 in Python? Can I just use dtype=cv2.CV_8UC1?
Here is the code I am using currently, based on the assumptions I'm making above. But the issue is that I don't get any keypoints when I run detectAndCompute for either image. I have a feeling it might be because the mask isn't the correct data type. If I'm right about that, how do I correct it?
# convert images to grayscale
base_gray = cv2.cvtColor(self.base, cv2.COLOR_BGRA2GRAY)
curr_gray = cv2.cvtColor(self.curr, cv2.COLOR_BGRA2GRAY)
# initialize feature detector
detector = cv2.ORB_create()
# create a mask using the alpha channel of the original image--don't
# use transparent or partially transparent parts
base_cond = self.base[:,:,3] == 255
base_mask = np.array(np.where(base_cond, 255, 0))
curr_cond = self.base[:,:,3] == 255
curr_mask = np.array(np.where(curr_cond, 255, 0), dtype=np.uint8)
# use the mask and grayscale images to detect good features
base_keys, base_desc = detector.detectAndCompute(base_gray, mask=base_mask)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, mask=curr_mask)
print("base keys: ", base_keys)
# []
print("curr keys: ", curr_keys)
# []
So here is most, if not all, of the answer:
What is the meaning of those numbers
0 means to ignore the pixel and 255 means to use it. I'm still unclear on the values in between, but I don't think all nonzero values are considered "equivalent" to 255 in the mask.
Also, how do I initialize a numpy array with data type CV_8UC1 in python?
The type CV_8U is the unsigned 8-bit integer, which, using numpy, is numpy.uint8. The C1 postfix means that the array is 1-channel, instead of 3-channel for color images and 4-channel for rgba images. So, to create a 1-channel array of unsigned 8-bit integers:
import numpy as np
np.zeros((480, 720), dtype=np.uint8)
(a three-channel array would have shape (480, 720, 3), four-channel (480, 720, 4), etc.) This mask would cause the detector and extractor to ignore the entire image, though, since it's all zeros.
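A minimal sketch on a synthetic image (assumed 480x720 size) showing that with such a mask, keypoints only come back from the non-zero rectangle:
import cv2
import numpy as np

# random texture so ORB has something to latch onto
gray = np.random.randint(0, 256, (480, 720), dtype=np.uint8)

mask = np.zeros((480, 720), dtype=np.uint8)
mask[100:300, 200:500] = 255            # only search inside this rectangle

orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(gray, mask)

# every detected keypoint should land inside the masked rectangle
assert all(200 <= x <= 500 and 100 <= y <= 300 for x, y in (kp.pt for kp in keypoints))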
how do I correct [the code]?
There were two separate issues, each separately causing each keypoint array to be empty.
First, I forgot to set the type for the base_mask
base_mask = np.array(np.where(base_cond, 255, 0)) # wrong
base_mask = np.array(np.where(base_cond, 255, 0), dtype=np.uint8) # right
Second, I used the wrong image to generate my curr_cond array:
curr_cond = self.base[:,:,3] == 255 # wrong
curr_cond = self.curr[:,:,3] == 255 # right
Some pretty dumb mistakes.
Here is the full corrected code:
# convert images to grayscale
base_gray = cv2.cvtColor(self.base, cv2.COLOR_BGRA2GRAY)
curr_gray = cv2.cvtColor(self.curr, cv2.COLOR_BGRA2GRAY)
# initialize feature detector
detector = cv2.ORB_create()
# create a mask using the alpha channel of the original image--don't
# use transparent or partially transparent parts
base_cond = self.base[:,:,3] == 255
base_mask = np.array(np.where(base_cond, 255, 0), dtype=np.uint8)
curr_cond = self.curr[:,:,3] == 255
curr_mask = np.array(np.where(curr_cond, 255, 0), dtype=np.uint8)
# use the mask and grayscale images to detect good features
base_keys, base_desc = detector.detectAndCompute(base_gray, mask=base_mask)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, mask=curr_mask)
TL;DR: The mask parameter is a 1-channel numpy array with the same shape as the grayscale image in which you are trying to find features (if image shape is (480, 720), so is mask).
The values in the array are of type np.uint8, 255 means "use this pixel" and 0 means "don't"
Thanks to Dan Mašek for leading me to parts of this answer.
