I am calculating the histogram of a grayscale image and I want to convert a range of grayscale pixel values to black. I am using the following code to retrieve the histogram:
import cv2
from matplotlib import pyplot as plt

image = cv2.imread('./images/test/image_5352.jpg')
cv2.imshow("image", image)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
plt.figure()
plt.title("Grayscale Histogram")
plt.xlabel("Bins")
plt.ylabel("# of Pixels")
plt.plot(hist)
plt.xlim([0, 256])
key = cv2.waitKey(0)
cv2.destroyAllWindows()
For example, I want to convert all the pixels with a value of 50 to black.
Maybe image thresholding is what you're after...
If a pixel value is greater than a threshold value, it is assigned one value (maybe white); otherwise it is assigned another value (maybe black).
cv2.threshold(img,127,255,cv2.THRESH_BINARY)
The above call is Python (the C++ version is nearly identical). The first parameter is the image in question, the second is the threshold value, the third is the value assigned to any pixel that exceeds the threshold, and the fourth is the thresholding type, of which there are several: THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, THRESH_TOZERO and THRESH_TOZERO_INV.
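In Python the call returns a tuple (the threshold used and the result image), so a minimal usage sketch, assuming img is already a grayscale image, looks like this:
# Minimal sketch, assuming img is a grayscale image: pixels above 127
# become 255 (white), everything else becomes 0 (black).
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)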
For further reading: see here
OpenCV's inRange is what you need. You can use inRange to produce a mask.
http://docs.opencv.org/3.3.0/d2/de8/group__core__array.html#ga48af0ab51e36436c5d04340e036ce981/
You could do that with the following procedure:
cv::Mat mask;
cv::inRange(src, cv::Scalar(40), cv::Scalar(60), mask);  // white where pixels fall in [40, 60]
cv::bitwise_not(mask, mask);                             // invert: white where pixels are kept
cv::bitwise_and(src, mask, src);                         // pixels in [40, 60] become black
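Since the question is in Python, a rough cv2 equivalent of the same idea, assuming gray is the grayscale image from the question, could be:
# Rough Python sketch of the same idea, assuming gray is the grayscale image.
mask = cv2.inRange(gray, 40, 60)      # white where pixel values fall in [40, 60]
mask = cv2.bitwise_not(mask)          # invert: white where pixels are kept
gray = cv2.bitwise_and(gray, mask)    # pixels originally in [40, 60] become black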
Related
I have a few images, I converted them to grayscale and thresholded them by the usual steps:
image = cv2.imread('img_path')
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, frame_thresh = cv2.threshold(image_gray,128,255,cv2.THRESH_BINARY)
How can I check if the resulting image is mostly black or white? I tried using an histogram:
hist = cv2.calcHist([frame_thresh], [0], None, [2], [0, 1])
hist /= hist.sum()
but I tested it on two images, one of each kind (one is mostly black, the other mostly white),
and the resulting hist is always the same:
print(hist)
>> [[1.]
[0.]]
You're looking for the average pixel value of the image; to get that you can use numpy:
import numpy as np
avg_color_per_row = np.average(frame_thresh, axis=0)
avg_color = np.average(avg_color_per_row, axis=0)
It outputs a value that ranges from 0 to 255.
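To decide between mostly black and mostly white you could then compare that average against the midpoint, for example:
# Hypothetical decision based on the average computed above
if avg_color < 128:
    print("mostly black")
else:
    print("mostly white")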
After thresholding your image contains only black (0) and white (255) pixels. You can use the countNonZero function to count the non-zero (i.e. not black) pixels and compare that count with the image size.
Alternatively you can just calculate the mean value of the image. If it's lower than 127 (half of the range), the image is mostly black.
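A minimal sketch of both checks, assuming frame_thresh is the thresholded image from the question (so it contains only 0 and 255):
# Fraction of white pixels in the binary image
white_ratio = cv2.countNonZero(frame_thresh) / frame_thresh.size
print("mostly white" if white_ratio > 0.5 else "mostly black")
# Equivalent idea using the mean value
print("mostly white" if frame_thresh.mean() > 127 else "mostly black")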
I'm writing code to detect the percentage of green colour in an image.
I have a little experience with OpenCV but am still pretty new to image processing and would like some help with my code. How should I change this code so that it is capable of calculating the percentage of green instead of brown? And if it isn't too troublesome could someone please explain how the changes affect the code? Below is the link to the image I would like to use.
Credit for the code goes to #mmensing
import numpy as np
import cv2
img = cv2.imread('potato.jpg')
brown = [145, 80, 40] # RGB
diff = 20
boundaries = [([brown[2]-diff, brown[1]-diff, brown[0]-diff],
[brown[2]+diff, brown[1]+diff, brown[0]+diff])]
for (lower, upper) in boundaries:
lower = np.array(lower, dtype=np.uint8)
upper = np.array(upper, dtype=np.uint8)
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)
ratio_brown = cv2.countNonZero(mask)/(img.size/3)
print('brown pixel percentage:', np.round(ratio_brown*100, 2))
cv2.imshow("images", np.hstack([img, output]))
cv2.waitKey(0)
I've modified your script so you can find the (approximate) percent of green color in your test images. I've added some comments to explain the code:
# Imports
import cv2
import numpy as np
# Read image
imagePath = "D://opencvImages//"
img = cv2.imread(imagePath+"leaves.jpg")
# Here, you define your target color as
# a tuple of three values: RGB
green = [130, 158, 0]
# You define an interval that covers the values
# in the tuple and are below and above them by 20
diff = 20
# Be aware that opencv loads image in BGR format,
# that's why the color values have been adjusted here:
boundaries = [([green[2], green[1]-diff, green[0]-diff],
[green[2]+diff, green[1]+diff, green[0]+diff])]
# Scale your BIG image into a small one:
scalePercent = 0.3
# Calculate the new dimensions
width = int(img.shape[1] * scalePercent)
height = int(img.shape[0] * scalePercent)
newSize = (width, height)
# Resize the image:
img = cv2.resize(img, newSize, interpolation=cv2.INTER_AREA)
# check out the image resized:
cv2.imshow("img resized", img)
cv2.waitKey(0)
# for each range in your boundary list:
for (lower, upper) in boundaries:
# You get the lower and upper part of the interval:
lower = np.array(lower, dtype=np.uint8)
upper = np.array(upper, dtype=np.uint8)
# cv2.inRange is used to binarize (i.e., render in white/black) an image
# All the pixels that fall inside your interval [lower, upper] will be white
# All the pixels that do not fall inside this interval will
# be rendered in black, for all three channels:
mask = cv2.inRange(img, lower, upper)
# Check out the binary mask:
cv2.imshow("binary mask", mask)
cv2.waitKey(0)
# Now, you AND the mask and the input image
# All the pixels that are white in the mask will
# survive the AND operation, all the black pixels
# will remain black
output = cv2.bitwise_and(img, img, mask=mask)
# Check out the ANDed mask:
cv2.imshow("ANDed mask", output)
cv2.waitKey(0)
# You can use the mask to count the number of white pixels.
# Remember that the white pixels in the mask are those that
# fall in your defined range, that is, every white pixel corresponds
# to a green pixel. Divide by the image size and you got the
# percentage of green pixels in the original image:
ratio_green = cv2.countNonZero(mask)/(img.size/3)
# This is the color percent calculation, considering the resize I did earlier.
colorPercent = (ratio_green * 100) / scalePercent
# Print the color percent, use 2 figures past the decimal point
print('green pixel percentage:', np.round(colorPercent, 2))
# numpy's hstack is used to stack two images horizontally,
# so you see the various images generated in one figure:
cv2.imshow("images", np.hstack([img, output]))
cv2.waitKey(0)
Output:
green pixel percentage: 89.89
I've produced some images, this is the binary mask of the green color:
And this is the ANDed output of the mask and the input image:
Some additional remarks about this snippet:
Gotta be careful loading images with OpenCV, as they are loaded in
BGR format rather than the usual RGB. Here, the snippet has this
covered by reversing the elements in the boundary list, but keep an
eye open for this common pitfall.
Your input image was too big to even display it properly using
cv2.imshow. I resized it and processed that instead. At the end,
you see I took into account this resized scale in the final percent
calculation.
Depending on the target color you define and the difference you
use, you could produce negative values. In this case, for
instance, for the B = 0 value, subtracting diff would give
-20. That doesn't make sense when you are encoding color
intensity in unsigned 8 bits; the values must stay in the [0, 255] range.
Watch out for negative values when using this method.
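One possible guard, sketched here with the green and diff values from the snippet above, is to clip the interval to the valid range before casting to uint8:
# Clip the bounds into [0, 255] before building the uint8 arrays
target_bgr = np.array([green[2], green[1], green[0]], dtype=np.int32)
lower = np.clip(target_bgr - diff, 0, 255).astype(np.uint8)
upper = np.clip(target_bgr + diff, 0, 255).astype(np.uint8)
mask = cv2.inRange(img, lower, upper)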
Now, you may see that the method is not very robust. Depending on what you are doing, you could switch to the HSV color space to get a nicer and more accurate binary mask.
You can try the HSV-based mask with this:
# The HSV mask values, defined for the green color:
lowerValues = np.array([29, 89, 70])
upperValues = np.array([179, 255, 255])
# Convert the image to HSV:
hsvImage = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Create the HSV mask
hsvMask = cv2.inRange(hsvImage, lowerValues, upperValues)
# AND mask & input image:
hsvOutput = cv2.bitwise_and(img, img, mask=hsvMask)
Which gives you this nice masked image instead:
I have an image containing a girl's face and I would like to change the colour of her eyes, starting from an identified colour given in RGB, using Python. I have this source colour:
rbg_source=[85, 111, 47]
and this colour as the destination
rbg_destination=[72.50445669, 56.82411376, 47.7519902]
I would like to reconstruct the image as the original one, substituting only the mentioned colours.
Do you have an idea to do that in python?
I have already used the following solution:
resized_image[np.all(resized_image == (85, 111, 47), axis=-1)] = (72.50445669, 56.82411376,
47.7519902)
# Save result
cv2.imwrite('result1.png',resized_image)
But it returns a bad image without the expected result.
Please find below an example image
In that image, I would like to change the right eye colour, knowing the RGB of that colour, i.e. (5, 155, 122).
Here is one way to do that in Python/OpenCV. Since I do not have the corresponding RGB colors for the blue and green that you want to use, I will approximate using baseline blue and green.
Read input
Convert to HSV and separate channels
Specify blue and green hues and get hue difference
Threshold on green color range to make a mask
Use morphology to clean the mask
Add the hue difference to the hue channel and modulo 180
Combine the new hue, the old saturation channel and the old value channel (biased to increase brightness so it matches the left eye color), then convert back to BGR
Use the mask to merge the new BGR and the original image
Save the results
Input:
import cv2
import numpy as np
# load image
img = cv2.imread('eyes.png')
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
# blue is 240 in range 0 to 360; so half in OpenCV
# green is 120 in range 0 to 360; so half in OpenCV
blue_hue = 120
green_hue = 60
# diff hue (blue_hue - green_hue)
diff_hue = blue_hue - green_hue
# create mask for green color in hsv
lower = (30,90,90)
upper = (90,170,180)
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.merge([mask,mask,mask])
# apply morphology to clean mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)
# modify hue channel by adding difference and modulo 180
hnew = np.mod(h + diff_hue, 180).astype(np.uint8)
# recombine channels and bias value to make brighter
hsv_new = cv2.merge([hnew, s, cv2.add(v, 60)])  # cv2.add saturates at 255 instead of wrapping
# convert back to bgr
bgr_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR)
# blend with original using mask
result = np.where(mask==(255, 255, 255), bgr_new, img)
# save output
cv2.imwrite('eyes_green_mask.png', mask)
cv2.imwrite('eyes_green2blue.png', result)
# Display various images to see the steps
cv2.imshow('mask',mask)
cv2.imshow('bgr_new',bgr_new)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Mask
Hue and Value shifted BGR image:
Result:
ADDITION:
If you know the exact blue and green BGR colors, you can convert them each to HSV and get the H,S,V differences. Then use those differences as biases to the H,S,V channels of the input image and use the mask to combine that result with the original as I did above.
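A small sketch of that idea, reusing h, s, v, mask and img from the code above and taking the question's RGB triplets (converted to BGR order) purely as example values:
# Sketch only: the source/destination colours below are example values from the question.
src_bgr = np.uint8([[[47, 111, 85]]])    # source colour in BGR (RGB 85, 111, 47)
dst_bgr = np.uint8([[[122, 155, 5]]])    # destination colour in BGR (RGB 5, 155, 122)
# Convert each 1x1 BGR "image" to HSV to read off its H, S, V values
src_hsv = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2HSV)[0, 0].astype(np.int32)
dst_hsv = cv2.cvtColor(dst_bgr, cv2.COLOR_BGR2HSV)[0, 0].astype(np.int32)
dh, ds, dv = dst_hsv - src_hsv           # per-channel biases
# Bias the channels of the input image (hue wraps at 180, S and V are clipped)
h2 = np.mod(h.astype(np.int32) + dh, 180).astype(np.uint8)
s2 = np.clip(s.astype(np.int32) + ds, 0, 255).astype(np.uint8)
v2 = np.clip(v.astype(np.int32) + dv, 0, 255).astype(np.uint8)
bgr_biased = cv2.cvtColor(cv2.merge([h2, s2, v2]), cv2.COLOR_HSV2BGR)
# Combine with the original using the same mask as before
result = np.where(mask == (255, 255, 255), bgr_biased, img)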
I am learning Python and OpenCV. My problem is that I need to find out whether my image has RED color in it. If it has red, call one function; if it does not have RED, call another function.
I have code to find the RED color in my image; the problem is that I am not able to write an if condition like:
if image has RED, do this
else, do this.
Can anyone help me with this ?
Below is the code I am trying. With it I am able to detect the RED color and display the image, but I am not able to add the condition to my script. Please help me solve this issue.
import cv2
import numpy as np
img = cv2.imread("Myimage.png")
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask1 = cv2.inRange(img_hsv, (0,50,20), (5,255,255))
mask2 = cv2.inRange(img_hsv, (175,50,20), (180,255,255))
mask = cv2.bitwise_or(mask1, mask2 )
croped = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("mask", mask)
cv2.imshow("croped", croped)
cv2.waitKey()
The idea is to check the mask image for white pixels with cv2.countNonZero(). After color thresholding with cv2.inRange(), you will get a binary mask with any pixels within the lower and upper HSV thresholds in white. So you can simply check whether white pixels exist in the mask to determine if red is present in the image.
# Binary mask with pixels matching the color threshold in white
mask = cv2.bitwise_or(mask1, mask2)
# Determine if the color exists on the image
if cv2.countNonZero(mask) > 0:
print('Red is present!')
else:
print('Red is not present!')
Assuming the variable mask contains the areas that are RED and is a binary array, you can just sum the elements that are 1 and compare the sum to zero. If no elements are 1, then your image does not contain RED:
if (mask == 1).sum() > 0:
# do your stuff
How can I apply mask to a color image in latest python binding (cv2)? In previous python binding the simplest way was to use cv.Copy e.g.
cv.Copy(dst, src, mask)
But this function is not available in cv2 binding. Is there any workaround without using boilerplate code?
Here you can use the cv2.bitwise_and function if you already have the mask image.
Check out the code below:
img = cv2.imread('lena.jpg')
mask = cv2.imread('mask.png',0)
res = cv2.bitwise_and(img,img,mask = mask)
The output will be as follows for a lena image and a rectangular mask.
Well, here is a solution if you want the background to be something other than a solid black color. We only need to invert the mask and apply it to a background image of the same size, and then combine both background and foreground. A pro of this solution is that the background can be anything (even another image).
This example is modified from Hough Circle Transform. First image is the OpenCV logo, second the original mask, third the background + foreground combined.
# http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html
import cv2
import numpy as np
# load the image
img = cv2.imread('E:\\FOTOS\\opencv\\opencv_logo.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# detect circles
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
# draw mask
mask = np.full((img.shape[0], img.shape[1]), 0, dtype=np.uint8)  # mask is only one channel
for i in circles[0, :]:
cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), -1)
# get first masked value (foreground)
fg = cv2.bitwise_or(img, img, mask=mask)
# get second masked value (background) mask must be inverted
mask = cv2.bitwise_not(mask)
background = np.full(img.shape, 255, dtype=np.uint8)
bk = cv2.bitwise_or(background, background, mask=mask)
# combine foreground+background
final = cv2.bitwise_or(fg, bk)
Note: it is better to use the OpenCV methods because they are optimized.
import cv2 as cv
im_color = cv.imread("lena.png", cv.IMREAD_COLOR)
im_gray = cv.cvtColor(im_color, cv.COLOR_BGR2GRAY)
At this point you have a color and a gray image. We are dealing with 8-bit, uint8 images here. That means the images can have pixel values in the range of [0, 255] and the values have to be integers.
Let's do a binary thresholding operation. It creates a black and white masked image. The black regions have value 0 and the white regions 255:
_, mask = cv.threshold(im_gray, thresh=180, maxval=255, type=cv.THRESH_BINARY)
im_thresh_gray = cv.bitwise_and(im_gray, mask)
The mask can be seen below on the left. The image on its right is the result of applying the bitwise_and operation between the gray image and the mask. At the spatial locations where the mask had a pixel value of zero (black), the result image also became zero. Where the mask had a pixel value of 255 (white), the resulting image retained its original gray value.
To apply this mask to our original color image, we need to convert the mask into a 3 channel image as the original color image is a 3 channel image.
mask3 = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) # 3 channel mask
Then, we can apply this 3 channel mask to our color image using the same bitwise_and function.
im_thresh_color = cv.bitwise_and(im_color, mask3)
mask3 from the code is the image below on the left, and im_thresh_color is on its right.
You can plot the results and see for yourself.
cv.imshow("original image", im_color)
cv.imshow("binary mask", mask)
cv.imshow("3 channel mask", mask3)
cv.imshow("im_thresh_gray", im_thresh_gray)
cv.imshow("im_thresh_color", im_thresh_color)
cv.waitKey(0)
The original image is lenacolor.png that I found here.
The answer given by Abid Rahman K is not completely correct. I also tried it and found it very helpful, but I got stuck.
This is how I copy an image with a given mask:
x, y = np.where(mask!=0)
pts = zip(x, y)
# Assuming dst and src are of same sizes
for pt in pts:
dst[pt] = src[pt]
This is a bit slow but gives correct results.
EDIT:
Pythonic way.
idx = (mask!=0)
dst[idx] = src[idx]
The other methods described assume a binary mask. If you want to use a real-valued single-channel grayscale image as a mask (e.g. from an alpha channel), you can expand it to three channels and then use it for interpolation:
assert len(mask.shape) == 2 and issubclass(mask.dtype.type, np.floating)
assert len(foreground_rgb.shape) == 3
assert len(background_rgb.shape) == 3
alpha3 = np.stack([mask]*3, axis=2)
blended = alpha3 * foreground_rgb + (1. - alpha3) * background_rgb
Note that mask needs to be in range 0..1 for the operation to succeed. It is also assumed that 1.0 encodes keeping the foreground only, while 0.0 means keeping only the background.
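If the mask comes from an 8-bit alpha channel, normalize it first; alpha_u8 below is a hypothetical single-channel uint8 alpha image:
# Scale an 8-bit alpha channel (0..255) into the expected 0..1 float range
mask = alpha_u8.astype(np.float32) / 255.0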
If the mask may have the shape (h, w, 1), this helps:
alpha3 = np.squeeze(np.stack([np.atleast_3d(mask)]*3, axis=2))
Here np.atleast_3d(mask) makes the mask (h, w, 1) if it is (h, w) and np.squeeze(...) reshapes the result from (h, w, 3, 1) to (h, w, 3).