I am new to OpenCV and Python and have a question. I am trying to find the number of blue pixels in a picture so I can use it as a threshold to compare other pictures against. I have looked through the documentation but couldn't find anything helpful yet.
Can anyone give a hint or some help?
import cv2
import numpy as np

img = cv2.imread('image.png')  # the picture to analyze

BLUE_MIN = np.array([0, 0, 200], np.uint8)
BLUE_MAX = np.array([50, 50, 255], np.uint8)
dst = cv2.inRange(img, BLUE_MIN, BLUE_MAX)
no_blue = cv2.countNonZero(dst)
print('The number of blue pixels is: ' + str(no_blue))
So, based on your recommendation, I built the function above, but all I get when I run it is a blank picture.
For counting blue pixels in an RGB image, you can simply do the following:
Use inRange on the source image to filter out the blue component into a binary image.
Count the non-zero pixels in the binary image using the function countNonZero.
You can refer to the C++ code below for how to do it:
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
Mat src,dst;
src = imread("rgb1.jpg",1);
inRange(src, Scalar(200,0,0), Scalar(255,50,50), dst); //In range with approximate blue range
cout<<"No of blue pixels---->"<<countNonZero(dst)<<endl;
imshow("src",src);
imshow("out",out);
waitKey(0);
return 0;
}
Edit:
Here is the working Python code:
import cv2
import numpy as np
img = cv2.imread("bgr.png")
BLUE_MIN = np.array([0, 0, 200], np.uint8)
BLUE_MAX = np.array([50, 50, 255], np.uint8)
dst = cv2.inRange(img, BLUE_MIN, BLUE_MAX)
no_blue = cv2.countNonZero(dst)
print('The number of blue pixels is: ' + str(no_blue))
cv2.namedWindow("opencv")
cv2.imshow("opencv",img)
cv2.waitKey(0)
Hope this is what you are looking for.
Edit 2:
As @kigurai commented below, OpenCV stores images in BGR order, so I gave the wrong order for the BLUE_MIN and BLUE_MAX arrays.
So in the above code the lines
BLUE_MIN = np.array([0, 0, 200], np.uint8)
BLUE_MAX = np.array([50, 50, 255], np.uint8)
should be changed to
BLUE_MIN = np.array([200, 0, 0], np.uint8)  # minimum value of a blue pixel in BGR order
BLUE_MAX = np.array([255, 50, 50], np.uint8)  # maximum value of a blue pixel in BGR order
If you are looking for blue pixels in a photographed image, I recommend converting to the HSV colour space first and then looking for the colour range for blue. This way, you can ignore the brightness component.
See this question for colour ranges in the HSV colour space.
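For instance, here is a minimal sketch of the HSV approach; the exact hue, saturation, and value bounds below are an assumption and may need tuning for your images:
import cv2
import numpy as np

img = cv2.imread("bgr.png")  # same test image as above
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# OpenCV hue runs 0-179; roughly 100-130 covers blue. The saturation and
# value lower bounds skip grey and very dark pixels regardless of brightness:
BLUE_MIN = np.array([100, 100, 50], np.uint8)
BLUE_MAX = np.array([130, 255, 255], np.uint8)

mask = cv2.inRange(hsv, BLUE_MIN, BLUE_MAX)
print('The number of blue pixels is: ' + str(cv2.countNonZero(mask)))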
Related
I'm creating a script that can detect the percentage of green colour in an image.
I have a little experience with OpenCV but am still pretty new to image processing and would like some help with my code. How should I change this code so that it calculates the percentage of green instead of brown? And if it isn't too much trouble, could someone please explain how the changes affect the code? Below is the link to the image I would like to use.
Credit for the code goes to @mmensing
import numpy as np
import cv2
img = cv2.imread('potato.jpg')
brown = [145, 80, 40] # RGB
diff = 20
boundaries = [([brown[2]-diff, brown[1]-diff, brown[0]-diff],
               [brown[2]+diff, brown[1]+diff, brown[0]+diff])]

for (lower, upper) in boundaries:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = cv2.inRange(img, lower, upper)
    output = cv2.bitwise_and(img, img, mask=mask)

    ratio_brown = cv2.countNonZero(mask)/(img.size/3)
    print('brown pixel percentage:', np.round(ratio_brown*100, 2))

    cv2.imshow("images", np.hstack([img, output]))
    cv2.waitKey(0)
I've modified your script so you can find the (approximate) percent of green color in your test images. I've added some comments to explain the code:
# Imports
import cv2
import numpy as np
# Read image
imagePath = "D://opencvImages//"
img = cv2.imread(imagePath+"leaves.jpg")
# Here, you define your target color as
# a tuple of three values: RGB
green = [130, 158, 0]
# You define an interval that covers the values
# in the tuple and are below and above them by 20
diff = 20
# Be aware that opencv loads image in BGR format,
# that's why the color values have been adjusted here:
boundaries = [([green[2], green[1]-diff, green[0]-diff],
               [green[2]+diff, green[1]+diff, green[0]+diff])]
# Scale your BIG image into a small one:
scalePercent = 0.3
# Calculate the new dimensions
width = int(img.shape[1] * scalePercent)
height = int(img.shape[0] * scalePercent)
newSize = (width, height)
# Resize the image:
img = cv2.resize(img, newSize, None, None, None, cv2.INTER_AREA)
# check out the image resized:
cv2.imshow("img resized", img)
cv2.waitKey(0)
# for each range in your boundary list:
for (lower, upper) in boundaries:

    # You get the lower and upper part of the interval:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)

    # cv2.inRange is used to binarize (i.e., render in white/black) an image
    # All the pixels that fall inside your interval [lower, upper] will be white
    # All the pixels that do not fall inside this interval will
    # be rendered in black, for all three channels:
    mask = cv2.inRange(img, lower, upper)

    # Check out the binary mask:
    cv2.imshow("binary mask", mask)
    cv2.waitKey(0)

    # Now, you AND the mask and the input image
    # All the pixels that are white in the mask will
    # survive the AND operation, all the black pixels
    # will remain black
    output = cv2.bitwise_and(img, img, mask=mask)

    # Check out the ANDed mask:
    cv2.imshow("ANDed mask", output)
    cv2.waitKey(0)

    # You can use the mask to count the number of white pixels.
    # Remember that the white pixels in the mask are those that
    # fall in your defined range, that is, every white pixel corresponds
    # to a green pixel. Divide by the image size and you got the
    # percentage of green pixels in the original image:
    ratio_green = cv2.countNonZero(mask)/(img.size/3)

    # This is the color percent calculation, considering the resize I did earlier.
    colorPercent = (ratio_green * 100) / scalePercent

    # Print the color percent, use 2 figures past the decimal point
    print('green pixel percentage:', np.round(colorPercent, 2))

    # numpy's hstack is used to stack two images horizontally,
    # so you see the various images generated in one figure:
    cv2.imshow("images", np.hstack([img, output]))
    cv2.waitKey(0)
Output:
green pixel percentage: 89.89
I've produced some images; this is the binary mask of the green color:
And this is the ANDed output of the mask and the input image:
Some additional remarks about this snippet:
Gotta be careful loading images with OpenCV, as they are loaded in
BGR format rather than the usual RGB. Here, the snippet has this
covered by reversing the elements in the boundary list, but keep an
eye open for this common pitfall.
Your input image was too big to even display it properly using
cv2.imshow. I resized it and processed that instead. At the end,
you see I took into account this resized scale in the final percent
calculation.
Depending on the target color you define and the difference you
use, you could produce negative values. In this case, for
instance, the B = 0 value would become -20 after subtracting
diff. That doesn't make sense when you are encoding color
intensity in unsigned 8 bits; the values must be in the [0, 255] range.
Watch out for negative values when using this method.
Now, you may see that the method is not very robust. Depending on what you are doing, you could switch to the HSV color space to get a nicer and more accurate binary mask.
You can try the HSV-based mask with this:
# The HSV mask values, defined for the green color:
lowerValues = np.array([29, 89, 70])
upperValues = np.array([179, 255, 255])
# Convert the image to HSV:
hsvImage = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Create the HSV mask
hsvMask = cv2.inRange(hsvImage, lowerValues, upperValues)
# AND mask & input image:
hsvOutput = cv2.bitwise_and(img, img, mask=hsvMask)
Which gives you this nice masked image instead:
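If you also want the colour percentage from the HSV mask, the same counting idea applies. A small sketch, reusing hsvMask from the snippet above (note the mask is single-channel, so its size is already the pixel count):
# Ratio of white (green-matching) pixels in the single-channel HSV mask:
hsvRatio = cv2.countNonZero(hsvMask) / hsvMask.size
print('HSV green pixel percentage:', np.round(hsvRatio * 100, 2))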
I'm trying to extract the pad section from the following image with OpenCV.
Starting with an image like this:
I am trying to extract into an image like this:
to end up with an image something like this
I currently have the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('strip.png')
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresholded = cv2.threshold(grayscale, 0, 255, cv2.THRESH_OTSU)
bbox = cv2.boundingRect(thresholded)
x, y, w, h = bbox
foreground = img[y:y+h, x:x+w]
cv2.imwrite("output.png", foreground)
Which outputs this:
If you look closely at the upper and lower parts of the image, they appear cluttered, while the center part (which is your desired output) looks soft and smooth.
Since the center part is homogeneous, a smoothing filter (like an erosion) won't affect that part much; the upper part, on the other hand, changes noticeably more.
In the first step, I remove the black background with a simple thresholding. Next, I apply a smoothing effect to the image and compute the difference between the result and the original image, then threshold the final result to remove the unwanted pixels.
Then I do some morphology to remove the noisy residue of the process. At the end, with the help of the boundingRect function, I extract the desired segment (the white contour):
background removed:
the difference image after blurring with erosion:
the difference image after opening process and a threshold:
And finally the bounding box of the white objects:
The code I wrote (C++ OpenCV):
Mat im = imread("E:/t.jpg", 0);
resize(im, im, Size(), 0.3, 0.3); // resizing just for better visualization
Mat im1,im2, im3;
// Removing the black background:
threshold(im, im1, 50, 255, THRESH_BINARY);
vector<vector<Point>> contours_1;
findContours(im1, contours_1, RETR_CCOMP, CHAIN_APPROX_NONE);
Rect r = boundingRect(contours_1[0]);
im(r).copyTo(im);
im.copyTo(im3);
imshow("background removed", im);
// detecting the cluttered parts and cut them:
erode(im, im2, Mat::ones(3, 3, CV_8U), Point(-1, -1), 3);
im2.convertTo(im2, CV_32F);
im3.convertTo(im3, CV_32F);
subtract(im2, im3, im1);
double min, max;
minMaxIdx(im1, &min, &max);
im1 = 255*(im1 - min) / (max - min);
im1.convertTo(im1, CV_8U);
imshow("the difference image", im1);
threshold(im1, im1, 250, 255, THRESH_BINARY);
erode(im1, im1, Mat::ones(3, 3, CV_8U), Point(-1, -1), 3);
dilate(im1, im1, Mat::ones(3, 3, CV_8U), Point(-1, -1), 7);
imshow("the difference image thresholded", im1);
vector<Point> idx, hull;
vector<vector<Point>> hullis;
findNonZero(im1, idx);
Rect rr = boundingRect(idx);
rectangle(im, rr, Scalar(255, 255, 255), 2);
imshow("Final segmentation", im);
waitKey(0);
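Since the question uses Python, here is a rough, untested Python sketch of the same pipeline. The file name and all thresholds are carried over from the C++ code above, and the findContours call assumes the OpenCV 4.x return signature:
import cv2
import numpy as np

im = cv2.imread('strip.png', 0)
im = cv2.resize(im, None, fx=0.3, fy=0.3)  # resizing just for better visualization

# Removing the black background:
_, im1 = cv2.threshold(im, 50, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(im1, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
x, y, w, h = cv2.boundingRect(contours[0])
im = im[y:y+h, x:x+w]
im3 = im.copy()

# Detecting the cluttered parts: erode, then diff against the original
im2 = cv2.erode(im, np.ones((3, 3), np.uint8), iterations=3)
diff = cv2.subtract(im2.astype(np.float32), im3.astype(np.float32))
diff = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Threshold the difference image and clean it up with morphology:
_, diff = cv2.threshold(diff, 250, 255, cv2.THRESH_BINARY)
diff = cv2.erode(diff, np.ones((3, 3), np.uint8), iterations=3)
diff = cv2.dilate(diff, np.ones((3, 3), np.uint8), iterations=7)

# Bounding box of the remaining white pixels:
idx = cv2.findNonZero(diff)
x, y, w, h = cv2.boundingRect(idx)
cv2.rectangle(im, (x, y), (x + w, y + h), 255, 2)
cv2.imshow('Final segmentation', im)
cv2.waitKey(0)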
I'm writing a script that creates a mask for an image. My input image looks like this:
The original image is only 40x40px, here it is for reference:
I want to create a mask of the purple area in the center of the image. This is what I do:
# read the 40x40 image and convert it to RGB
input_image = cv2.cvtColor(cv2.imread('image.png'), cv2.COLOR_BGR2RGB)
# get the value of the color in the center of the image
center_color = input_image[20, 20]
# create the mask: pixels with same color = 255 (white), other pixels = 0 (black)
mask_bw = np.where(input_image == center_color, 255, 0)
# show the image
plt.imshow(mask_bw)
Most of the time this works perfectly fine, but for some images (like the one I attached to this question) I consistently get some blue areas in my mask, like in the image below. This is reproducible and the areas are always the same for the same input images.
This is already weird enough, but if I try to remove the blue areas, that doesn't work either:
mask_bw[mask_bw != (255, 255, 255)] = 0 # this doesn't change anything..
Why is this happening and how do I fix this?
Additional info
Tried with numpy versions 1.17.3 and 1.17.4
Reproduced in my local environment and in a Google Colab notebook
The main problem is that you're comparing three channels but only setting the value for one channel. This is most likely what causes the blue areas in the mask. When you use np.where() to set the other pixels to black, you are only setting the first channel instead of all three. You can visualize this by splitting each channel and printing the before/after arrays, which will show that the resulting array values are RGB (0, 0, 255). So to fix this problem, we need to compare each individual channel, then set the desired area to white and everything else to black across all three channels. Here is one way to do it:
import numpy as np
import cv2
image = cv2.imread('1.png')
center_color = image[20, 20]
b, g, r = cv2.split(image)
mask = (b == center_color[0]) & (g == center_color[1]) & (r == center_color[2])
image[mask] = 255
image[mask==0] = 0
cv2.imshow('image', image)
cv2.waitKey()
A hotfix to remove the blue areas using your current code would be to convert the image to grayscale (1-channel) then change all non-white pixels to black.
import numpy as np
import cv2
# Load image, find color, create mask
image = cv2.imread('1.png')
center_color = image[20, 20]
mask = np.where(image == center_color, 255, 0)
mask = np.array(mask, dtype=np.uint8)
# Convert image to grayscale, convert all non-white pixels to black
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask[mask != 255] = 0
cv2.imshow('mask', mask)
cv2.waitKey()
Here are two alternative methods to obtain a mask of the purple area:
Method #1: Work in grayscale space
import numpy as np
import cv2
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
center_color = gray[20, 20]
mask = np.array(np.where(gray == center_color, 255, 0), dtype=np.uint8)
cv2.imshow('mask', mask)
cv2.waitKey()
Method #2: Color thresholding
The idea is to convert the image to the HSV color space, then use a lower and upper color range to segment the image and create a binary mask:
import numpy as np
import cv2
image = cv2.imread('1.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 124, 0])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
cv2.imshow('mask', mask)
cv2.waitKey()
Both methods should yield the same result.
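If you want to verify that, a quick sanity check might be the following (this assumes you stored the two masks under different names, say mask1 and mask2):
# Both masks are uint8 arrays with values 0/255, so a direct
# element-wise comparison works:
print('Masks identical:', np.array_equal(mask1, mask2))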
If you have a 3-channel image (i.e. RGB or BGR or some such) and you want to generate a single-channel mask (i.e. 0/1 or True/False for each pixel), then you effectively need to group the 3 values into a single one using np.all(), like this:
import cv2
import numpy as np
# Load image and get centre colour
image = cv2.imread('40x40.png')
cc = image[20, 20]
print(image.shape)
(40, 40, 3)
# Generate list of unique colours present in image so we know what we are dealing with
print(np.unique(image.reshape(-1,3), axis=0))
array([[140, 109, 142],
[151, 106, 140],
[160, 101, 137],
[165, 134, 157],
[175, 149, 171],
[206, 87, 109],
[206, 185, 193]], dtype=uint8)
# Generate mask of pixels matching centre colour
mask_bw = np.where(np.all(image==cc,axis=2), 255, 0)
# Check shape of mask - no 3rd dimension !!!
print(mask_bw.shape)
(40, 40)
# Check unique colours in mask
print(np.unique(mask_bw.reshape(-1,1), axis=0))
array([[ 0],
[255]])
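One caveat with this approach: np.where returns a default integer dtype, so if you want to feed mask_bw back into OpenCV functions or save it, cast it first. For example:
# Cast to 8-bit before using the mask with OpenCV:
mask_u8 = mask_bw.astype(np.uint8)
cv2.imwrite('mask.png', mask_u8)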
I'm trying to create an 8-bit 1-channel mask for use in some image operations. I have an image that has certain pixels filled with fuchsia (255, 0, 255) in the original image, which indicates that the pixel should be used in masking.
My idea is to simply copy the original picture, then replace all the fuchsia pixels with white and all the non-fuchsia pixels with black. I am using numpy.place to do this. However, it appears to only really "apply" the last place operation.
For example, in the code below I first try to set all the fuchsia pixels to white, then all the non-fuchsia pixels to black. However, when I save the image out and look at it, only the non-fuchsia pixels have been turned black.
mask = original.copy()
np.place(mask, mask == (255, 0, 255), (255, 255, 255))
np.place(mask, mask != (255, 0, 255), (0, 0, 0))
mask = mask.reshape((h, w, 3))
mask = cv2.cvtColor(mask, cv2.COLOR_RGB2GRAY)
original
mask
I expect the fuchsia area to be white, but it isn't. It is the greyscale version of the fuchsia color: (112, 112, 112).
I'm fairly new to numpy, so I may even be barking up the wrong tree and there could be an easier way to do this. What am I doing wrong? Is there an easier way to do what I'm describing? Thanks!
Seems like you could use a boolean array as the mask. For example:
mask = np.all(original==[255, 0, 255], axis=-1)
Now you can do original[mask] to get only the magenta pixels, or orignal[~mask] to get the others.
You'll find you can't overwrite original but you can overwrite a copy:
newimg = original.copy()
newimg[mask] = [255, 255, 255]
newimg[~mask] = [0, 0, 0]
By the way, I think you're 'supposed' to use masked arrays for this sort of thing, but I never got to grips with those.
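If you specifically need the 8-bit, single-channel mask the question mentions, here is a small sketch building it from the same boolean array defined above:
# Scale the boolean mask (True/False) up to a 0/255 uint8 image,
# which is what OpenCV masking functions expect:
mask_u8 = mask.astype(np.uint8) * 255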
I am using Python, OpenCV, and numpy. My goal is to find all the white pixels and turn them red, and turn everything else off or white. My code:
import numpy as np
import cv2
import matplotlib.pyplot as plt
# Read mask
image = cv2.imread("path to my image")
any_white = np.any(image == [255,255,255], axis = -1)
image[any_white]=[255,0,0]
plt.imshow(image)
plt.show()
cv2.imwrite('result.png',image)
Problem 1: Targeting [255, 255, 255] doesn't find all the whitish pixels; I started finding [244, 244, 244], [243, 243, 243] and so on. Is there a way to set a range of white, maybe from [255, 255, 255] down to [230, 230, 230]?
Problem 2: Clearly, with plt.imshow(image) and plt.show() within Python the result shows red, but when I use cv2.imwrite('result.png', image) to save it, it's blue. See the result image.
Problem 1:
If you want to target only the white pixels, you can create a boolean mask and set its red-channel entries to False, so that channel keeps its value of 255:
mask_bg = (image == [255, 255, 255])
mask_bg[:, :, 0] = False  # set red channel mask to False (leave the 255 value)
image[mask_bg] = 0  # white pixels become [255, 0, 0]
If you want to find all values in a range you can use cv2.inRange:
mask = cv2.inRange(image, (230, 230, 230), (255, 255, 255))
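A sketch of how you might then use that mask to turn the whitish pixels red (note the image from cv2.imread is in BGR order, so red is (0, 0, 255)):
# Pixels where the mask is non-zero get painted red (BGR order):
image[mask > 0] = (0, 0, 255)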
Problem 2:
OpenCV uses BGR as default instead of RGB, you can convert from BGR to RGB with:
new_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.imshow('BGR Image', new_image)
Keep in mind that if you open an image with OpenCV it will be BGR, so convert it before manipulating the channels.
Problem 1:
The pixels you are planning to target may not have the exact value of (255, 255, 255). Hence it is better to binarize the image by setting a range of pixel values. You can find the exact range by creating Trackbars and tuning them manually. You can find more about implementing Trackbars in OpenCV here.
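A minimal sketch of such a trackbar setup for tuning the lower white threshold might look like this (the window and trackbar names are arbitrary, and 'result.png' stands in for your image):
import cv2
import numpy as np

def nothing(_):
    pass

img = cv2.imread('result.png')
cv2.namedWindow('tune')
cv2.createTrackbar('low', 'tune', 230, 255, nothing)

while True:
    # Read the current trackbar position and use it as the lower bound:
    low = cv2.getTrackbarPos('low', 'tune')
    mask = cv2.inRange(img, (low, low, low), (255, 255, 255))
    cv2.imshow('tune', mask)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to exit
        break
cv2.destroyAllWindows()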
Problem 2:
This happens because OpenCV uses BGR or (Blue, Green, Red) colorspace by default. You can change the colorspace into RGB or (Red, Green, Blue) by using cv2.cvtColor(image, cv2.COLOR_BGR2RGB) before saving.
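For example, a one-line sketch of saving the manipulated image so the red stays red on disk:
# cv2.imwrite expects BGR, so swap the channels before saving:
cv2.imwrite('result.png', cv2.cvtColor(image, cv2.COLOR_BGR2RGB))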