I have this image:
And I'm trying to write a function in Python that will return True if the image contains blue pixels, or False otherwise.
That image is just an example. I will have others where the blue colour can be slightly different, but they will always be blue letters over a black background.
So far I have this:
import cv2
import numpy as np

def contains_blue(img):
    # Convert the image to HSV colour space
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Define a range for blue color
    hsv_l = np.array([100, 150, 0])
    hsv_h = np.array([140, 255, 255])
    # Find blue pixels in the image
    #
    # cv2.inRange will create a mask (binary array) where the 1 values
    # are blue pixels and 0 values are any other colour out of the blue
    # range defined by hsv_l and hsv_h
    return 1 in cv2.inRange(hsv, hsv_l, hsv_h)
The function always returns False because no 1 values are found in the array returned by cv2.inRange. Maybe the range defined by hsv_l and hsv_h is not good? I took it from here: OpenCV & Python -- Can't detect blue objects
Any help is appreciated. Thanks.
You could have just used np.any() instead. It will return True if any one pixel has a value of 255.
So instead of
return 1 in cv2.inRange(hsv, hsv_l, hsv_h),
you can use the following:
return np.any(cv2.inRange(hsv, hsv_l, hsv_h))
Update:
As @AKX mentioned in the comments, you could instead try the following:
return cv2.inRange(hsv, hsv_l, hsv_h).any()
The problem is that you are not reading the documentation of inRange :D
Which tells the following:
That is, dst(I) is set to 255 (all 1-bits) if src(I) is within the specified 1D, 2D, 3D, ... box and 0 otherwise.
and you check for 1:

    # cv2.inRange will create a mask (binary array) where the 1 values
    # are blue pixels and 0 values are any other colour out of the blue
    # range defined by hsv_l and hsv_h
    return 1 in cv2.inRange(hsv, hsv_l, hsv_h)
So the solution is to change it to:
return 255 in cv2.inRange(hsv, hsv_l, hsv_h)
I tested it with your image and it returns True; with a black-and-white image (BGR though) it returns False.
In my opinion the blue range you have chosen is a little far to the violet side... You may use an HSV colorpicker like this one http://colorizer.org/ and select the range you like. Just remember OpenCV uses H -> Hue / 2 (so 0-180), while the picker gives S and V as percentages (0-100): divide them by 100 (to get 0-1) and multiply by 255.
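As a sketch of that conversion (to_opencv_hsv is a hypothetical helper, not part of OpenCV), picker values with H in degrees and S, V in percent map to OpenCV's 8-bit scale like this:

```python
import numpy as np

def to_opencv_hsv(h, s, v):
    """Convert H in degrees (0-360) and S, V in percent (0-100)
    to OpenCV's 8-bit HSV scale (H: 0-180, S/V: 0-255)."""
    return np.array([round(h / 2), round(s / 100 * 255), round(v / 100 * 255)])

# A pure blue picked in the colorpicker: H=240, S=100%, V=100%
print(to_opencv_hsv(240, 100, 100))  # [120 255 255]
```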
Related
So I have this Python script that detects and prints a range of HSV color in an image, but I want to add another functionality to it: I want it to print the percentage of that color.
My Script:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('C:/Users/Vishu Rana/Documents/PY/test_cases/image.jpg')
grid_RGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20,8))
plt.imshow(grid_RGB) # Printing the original picture after converting to RGB
grid_HSV = cv2.cvtColor(grid_RGB, cv2.COLOR_RGB2HSV) # Converting to HSV
lower_green = np.array([25,52,72])
upper_green = np.array([102,255,255])
mask= cv2.inRange(grid_HSV, lower_green, upper_green)
res = cv2.bitwise_and(img, img, mask=mask) # Generating image with the green part
print("Green Part of Image")
plt.figure(figsize=(20,8))
plt.imshow(res)
What do I need to add to this code to make it also print the percentage of green color?
cv2.inRange creates a mask that uses the value 255 for matching pixels. A simple way to get the percentage of green is to add the following code after you generate the mask.
green_perc = (mask>0).mean()
A more thorough explanation was asked for of why this works. When OpenCV creates a mask, it produces a two-dimensional array containing only the values 0 and 255; in the context of this question, the 255 values are the parts of the mask that indicate the picture is green.
The reason that (mask>0).mean() works is that we only have values of 255 and 0: mask > 0 creates a boolean array with True/False for every value in the mask.
True indicates that that part of the array is green and False indicates it is not green. Taking the mean of this boolean array gives the fraction of the array that is green (when taking the mean of a boolean array, True counts as 1 and False as 0).
Another way to get the same result is to implement code like this.
green_perc = (mask==255).mean()
A comment above also mentions the solution np.sum(mask)/np.size(mask). This does not work as-is because the mask uses the value 255 rather than 1; you can tweak it to get the same percentage by dividing the result by 255.
green_perc = (np.sum(mask) / np.size(mask))/255
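To see that the three expressions agree, here is a quick sketch on a small hand-made mask (the values are made up; no image is needed):

```python
import numpy as np

# A fake 4x4 mask as cv2.inRange would produce it: 255 = green, 0 = not green
mask = np.array([[255, 0, 0, 0],
                 [0, 255, 0, 0],
                 [0, 0, 255, 0],
                 [0, 0, 0, 255]], dtype=np.uint8)

print((mask > 0).mean())                    # 0.25
print((mask == 255).mean())                 # 0.25
print(np.sum(mask) / np.size(mask) / 255)   # 0.25
```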
I want my image to look like this.
No Spots Appearing in Purple Region
However, my image looks like this, with white spots sometimes showing up in the area that is supposed to be "outlined."
Spots Appearing
Basically, I coded an eroded version of an image Eroded as well as a dilated version Dilated. If you would like to see the code for those two versions, please let me know and I will add it.
My goal is to make the white regions in the eroded image purple and place these purple eroded letters/numbers inside of the dilated letters/numbers. The onechannel function only displays a specified R/G/B channel of a given image.
def outline():
    red, green, blue = range(3)
    imgD = dilation(chars, 7, 20, 480)
    imgE = erosion(chars, 7, 20, 480)
    imgDOr = imgD.copy()
    imgDcop = onechannel(imgD, 0)
    imgDcop[:,:,0] = 128
    imgEcop = onechannel(imgE, 2)
    imgEcop[:,:,2] = 128
    for i in range(0, len(imgD)):
        for j in range(0, len(imgD[0])):
            if imgE[i,j,0] == 255:
                imgDOr[i,j,0] = imgDcop[i,j,0]
                imgDOr[i,j,1] = imgDcop[i,j,1]
                imgDOr[i,j,2] = imgEcop[i,j,2]
    imageshow(imgDOr)

print(outline())
It's a bug in your erosion function where it does not set the white pixels to 255,255,255. If you inspect the RGB of the eroded image you posted you will see that the first channel of the white areas has values ranging from 250 to 255, and the grayish edges are starting from 239,239,239. You need to either fix the erosion function to strictly set all white areas to absolute 255,255,255 or relax the condition in your outline function from if imgE[i,j,0] == 255: to something like if 255 - imgE[i,j,0] <= 16:.
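As a sketch of the relaxed comparison (the pixel values below are made up to mirror the ones described above):

```python
import numpy as np

# Fake eroded image: one "almost white" pixel (250) and one darker edge (200)
imgE = np.zeros((2, 2, 3), dtype=np.uint8)
imgE[0, 0] = (250, 250, 250)
imgE[1, 1] = (200, 200, 200)

strict = imgE[:, :, 0] == 255                           # misses the 250 pixel
relaxed = (255 - imgE[:, :, 0].astype(np.int16)) <= 16  # catches it
print(strict.sum(), relaxed.sum())  # 0 1
```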
This question already has answers here:
Finding red color in image using Python & OpenCV
(3 answers)
Closed 10 months ago.
I am trying to make a program where I detect red. However, sometimes it is darker than usual, so I can't just use one value.
What is a good range for detecting different shades of red?
I am currently using the range 128, 0, 0 - 255, 60, 60 but sometimes it doesn't even detect a red object I put in front of it.
RGB is not a good color space for specific color detection; HSV will be a good choice.
For RED, you can choose the HSV ranges (0,50,20) ~ (5,255,255) and (175,50,20) ~ (180,255,255) using the following colormap. Of course, the RED range is not that precise, but it is just OK.
The code is taken from another answer of mine: Detect whether a pixel is red or not
#!/usr/bin/python3
# 2018.07.08 10:39:15 CST
# 2018.07.08 11:09:44 CST
import cv2
import numpy as np
## Read and merge
img = cv2.imread("ColorChecker.png")
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
## Gen lower mask (0-5) and upper mask (175-180) of RED
mask1 = cv2.inRange(img_hsv, (0,50,20), (5,255,255))
mask2 = cv2.inRange(img_hsv, (175,50,20), (180,255,255))
## Merge the mask and crop the red regions
mask = cv2.bitwise_or(mask1, mask2 )
croped = cv2.bitwise_and(img, img, mask=mask)
## Display
cv2.imshow("mask", mask)
cv2.imshow("croped", croped)
cv2.waitKey()
Related answers:
Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)
How to define a threshold value to detect only green colour objects in an image: OpenCV
How to detect two different colors using `cv2.inRange` in Python-OpenCV?
Detect whether a pixel is red or not
Of course, for the specific question, maybe other color space is also OK.
How to read utility meter needle with opencv?
You could check that the red component is the maximum and others are both clearly lower:
def red(r, g, b):
    threshold = max(r, g, b)
    return (
        threshold > 8            # stay away from black
        and r == threshold       # red is the biggest component
        and g < threshold * 0.5  # green is much smaller
        and b < threshold * 0.5  # so is blue
    )
This can be implemented very efficiently using numpy.
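For example, a vectorized sketch of the same check (assuming img is an H×W×3 RGB uint8 array; red_mask is a hypothetical name):

```python
import numpy as np

def red_mask(img):
    """Boolean mask of pixels whose red channel clearly dominates."""
    img = img.astype(np.int16)  # avoid uint8 surprises in the comparisons
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    m = np.max(img, axis=-1)
    return (m > 8) & (r == m) & (g < m * 0.5) & (b < m * 0.5)

# One clearly red pixel, one white, one black
img = np.array([[[200, 40, 30], [255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
print(red_mask(img))  # [[ True False False]]
```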
The "right way" would be doing a full conversion to HSV and checking there, but it is going to be slower and somewhat trickier (hue is an angle, so you cannot just take the absolute value of the difference; moreover, colors like (255, 254, 254) would qualify as "red" even though a human would consider them white).
Note also that the human visual system tends to compensate for the average, so something can be perceived as "blue" even when the biggest component is red, if everything else in the image is also red; that "doesn't count" for our brain.
In the image below, if you ask a human what color the part in the circled area is, most would say "blue", while in fact the biggest component is red:
Please, use HSV or HSL (hue, saturation, luminance) instead of RGB, in HSV the red color can be easily detected using the value of hue within some threshold.
Red Color means Red value is higher than Blue and Green.
So you can check the differences between Red and Blue, Red and Green.
You can simply split RGB into individual channels and apply threshold like this.
b,g,r = cv2.split(img_rgb)
# Cast to a signed type first: subtracting uint8 arrays wraps around
rg = np.clip(r.astype(np.int16) - g, 0, 255).astype(np.uint8)
rb = np.clip(r.astype(np.int16) - b, 0, 255).astype(np.uint8)
mask1 = cv2.inRange(rg, 50, 255)
mask2 = cv2.inRange(rb, 50, 255)
mask = cv2.bitwise_and(mask1, mask2)
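One subtlety worth checking here: if r and g come from cv2.split they are uint8 arrays, and subtracting uint8 arrays wraps around instead of going negative, so clipping afterwards cannot undo the underflow. A quick synthetic check:

```python
import numpy as np

r = np.array([[200, 10]], dtype=np.uint8)
g = np.array([[40, 200]], dtype=np.uint8)

# uint8 subtraction wraps: 10 - 200 becomes 66, not a negative number
print(r - g)                                     # [[160  66]]
# Casting to a signed type first gives the intended differences
print(np.clip(r.astype(np.int16) - g, 0, 255))   # [[160   0]]
```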
Hope it can be a solution for your problem.
Thank you.
There is a project I'm working on which requires white-color detection. After some research, I decided to convert the RGB image to HSL and threshold the lightness to get the color white. I'm working with OpenCV, so I wonder if there is a way to do it.
You can do it with 4 easy steps:
Convert to HLS
img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
Get the L channel
Lchannel = imgHLS[:,:,1]
Create the mask
#change 250 to lower numbers to include more values as "white"
mask = cv2.inRange(Lchannel, 250, 255)
Apply Mask to original image
res = cv2.bitwise_and(img,img, mask= mask)
This also depends on what white means for you, and you may change the values :) I used inRange on the L channel, but you can save one step and do
mask = cv2.inRange(imgHLS, np.array([0,250,0]), np.array([255,255,255]))
instead of the lines:
Lchannel = imgHLS[:,:,1]
mask = cv2.inRange(Lchannel, 250, 255)
It is shorter, but I did it the other way first to make it more explicit and to show what I was doing.
Image:
Result:
The result looks almost like the mask (almost binary), but depending on your lower bound (I chose 250) you may also pick up some almost-white colors.
I'm using opencv and numpy to process some satellite images.
I need to differentiate what is "land" from what is "green" (crops and vegetation).
My question is: How can I decide which values are close to green in the RGB format?
What I'm doing so far is:
img = cv2.imread('image1.jpg', 1)
mat = np.asarray(img)
for elemento in mat:
    for pixel in elemento:
        if pixel[1] > 200:  # If the level of green is higher than 200, I change it to black
            pixel[0] = 0
            pixel[1] = 0
            pixel[2] = 0
        else:               # If the level of G is lower than 200 I change it to white
            pixel[0] = 255
            pixel[1] = 255
            pixel[2] = 255
This code works, but isn't really useful. I need a more precise way to decide which RGB values correspond to green and which do not.
How can I achieve this?
You could use the inRange function to find colors in a specific range, because you will not be able to find the green color from satellite images with just one or a few pixel values. The inRange function will help you find a range of set colors (you should set the range of green colors) and return a mask marking those green pixels in the original image. I've answered a similar question HERE with examples and code (although it is not Python, you should understand the methods and easily implement them in your OpenCV project); you should find everything you need there.