Measuring Average Brightness of a Black and White Image? - python

I currently have code for measuring the average brightness of an RGB image.
When I run this with a black and white image, the R, G and B values are all the same, e.g. 37, 37, 37.
I don't know much about colours, but I don't think this is correct.
Here is my code at the moment:
from PIL import Image
from math import sqrt
imag = Image.open("../Images/pexels-photo-57905.jpeg")
imag = imag.convert('RGB')
imag.show()
X,Y = 0,0
pixelRGB = imag.getpixel((X,Y))
R,G,B = pixelRGB
brightness = sum([R,G,B])/3  # 0 is dark (black) and 255 is bright (white)
print(brightness)
print(R,G,B)
In a nutshell, I must convert an RGB image into grayscale, which I'm using .convert('LA') for; I must then measure the brightness of the image by adding the black and white values and dividing them by 2.

Is this code correct for measuring the average brightness of a greyscale image? What do the three lines below mean? Do they return the average brightness of the whole picture, or just of pixel (0, 0)?
X,Y = 0,0
pixelRGB = imag.getpixel((X,Y))
R,G,B = pixelRGB
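For reference, getpixel((X, Y)) returns only the single pixel at that coordinate, so with X, Y = 0, 0 the snippet prints the brightness of the top-left pixel, not an average. A minimal sketch of averaging over every pixel (assuming the single-channel 'L' grayscale mode rather than 'LA') could look like:
import numpy as np
from PIL import Image

imag = Image.open("../Images/pexels-photo-57905.jpeg")
gray = imag.convert('L')  # single-channel grayscale: 0 is dark (black), 255 is bright (white)

# Mean over all pixels gives the average brightness of the whole picture
brightness = np.array(gray).mean()
print(brightness)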

Related

How to analyze only a part of an image?

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
    ...  # iterates over every pixel; unsure how to restrict this to that region
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
im = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(im[-h:,-w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are more than 200x200, otherwise an error will occur.)
Original image:
Image after cropping:
You can then use this cropped image to count the number of black pixels. What you consider a BLACK pixel depends on your use case: a discrete value like (0, 0, 0) or a range/threshold like (0-15, 0-15, 0-15).
P.S.: The final image will always have dimensions of 200x200 pixels.
from PIL import Image
img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # box as (left, upper, right, lower) pixel coordinates
cropped_img = img.crop(crop_area)
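Following on from the threshold idea above, a hedged sketch of counting black pixels in the cropped region (the per-channel threshold of 15 is an assumption to tune):
import numpy as np
from PIL import Image

img = Image.open("ImageName.jpg")
x, y = img.size
cropped = img.crop((x - 200, y - 200, x, y))

# A pixel counts as "black" if R, G and B are all at or below the threshold
arr = np.array(cropped.convert('RGB'))
black = int(np.count_nonzero((arr <= 15).all(axis=2)))
print(f'Black pixels in bottom-right 200x200: {black}')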

Get info about coloured pixels in a grayscale image - Python, OpenCV

I have a small RGB image, and I convert it to grayscale:
original = cv2.imread('im/auto5.png')
print(original.shape) # 27,30,3
print(original[13,29]) # [254 254 254]
orig_gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
print(orig_gray.shape) # 27,30
Does this array still hold the info about white and black pixels, or is that data lost? What do these numbers mean?
print(orig_gray[5,5]) #6
In an RGB image a pixel means a colour (3 digits, like [254, 254, 254]). But what does a single digit mean in my case with a gray image? I want to get the quantity of white pixels for my recognition.
Once you convert to grayscale there is only one value for each pixel (each index in the 2D array), which represents the brightness at that point in the original RGB image. The RGB image is essentially 3 of these arrays, one representing the brightness of each of the three colours.
The idea of a 'white pixel' is a little confusing. I guess you could say any location in the grayscale array with a value of 255 is a white pixel; that would correspond to an RGB pixel of (255, 255, 255). There is basically only one value for each pixel after converting to grayscale.
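As a concrete illustration (a minimal sketch; treating exactly 255 as "white" is an assumption, and you may prefer a looser threshold like >= 250), counting the white pixels could look like:
import cv2
import numpy as np

original = cv2.imread('im/auto5.png')
orig_gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)

# Count locations in the 2D grayscale array whose value is exactly 255
white_count = int(np.count_nonzero(orig_gray == 255))
print('White pixels:', white_count)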
Hope that helps.

Detecting an object and get the mean pixel value (BGR)

I have a picture of a leaf with white paper as the background, and I need to remove the noise (a yellow dot) and get the pixel value (BGR) of the leaf.
I used a green threshold to detect the leaf and masked it with the original image. I used cv2.mean to get the pixel value, but it counts all the pixels, including the black area/background.
How do I get the pixel value for the leaf only?
Here is the code I used:
import cv2
import numpy as np
img=cv2.imread('crop21.jpg')
blur=cv2.GaussianBlur(img,(5,5),0)
hsv=cv2.cvtColor(blur,cv2.COLOR_BGR2HSV)
#threshold green
low_g=np.array([35,100,60],np.uint8)
up_g=np.array([85,255,190],np.uint8)
mask=cv2.inRange(hsv,low_g,up_g)
mask_upstate=cv2.bitwise_and(blur, blur, mask=mask)
#get the bgr value
mean=cv2.mean(mask_upstate)
print(mean)
cv2.imshow('image',mask_upstate)
cv2.waitKey(0)
cv2.destroyAllWindows()
So basically you have a masked image with a leaf and a black background. The problem is that cv2.mean divides the sum of the colours by the total number of pixels, instead of just by the number of pixels belonging to the leaf. A quick way to fix this is to multiply the result of mean = cv2.mean(mask_upstate) by total pixels / non-black pixels, which can be done as follows:
# Get the BGR value
mean = cv2.mean(mask_upstate)
multiplier = float(mask.size)/cv2.countNonZero(mask)
mean = tuple([multiplier * x for x in mean])
Thus, you have the mean of just the non-black pixels, ergo the leaf without the black background.
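As an aside (a sketch, not part of the original answer): cv2.mean also accepts an optional mask argument, so the background can be excluded directly instead of rescaling afterwards:
# Average BGR over the masked-in (leaf) pixels only, using the
# 'blur' image and green-range 'mask' from the code above
mean = cv2.mean(blur, mask=mask)
print(mean)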
Hope this helped!

Change color of a pixel with OpenCV

Let's say I have this rose (do not care about the background; only the white leaves are important).
I transform it to a grayscale picture:
grayscaled=cv2.imread('white_rose.png',cv2.IMREAD_GRAYSCALE)
How can I change every white pixel to a red one, under the condition that the red colour (R=255) has the same contrast as the white had? Meaning, I want to see the white leaves in red, but with the same lightness (L) value that each pixel has in grayscaled.
You need to loop over your grey image and create a new coloured image yourself.
For each pixel, you can set the R value of the coloured image to the remainder of 255 divided by the grey value:
import cv2
import numpy as np

img = cv2.imread('5585T.jpg')
print(type(img))
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# OpenCV stores channels as BGR, so index 2 is the red channel;
# the 'if j' guard avoids a modulo-by-zero on fully black pixels
new = [[[0, 0, 255 % j if j else 0] for j in i] for i in img_gray]
new = np.array(new, dtype=np.uint8)
cv2.imwrite('img.jpg', new)
and with new = [[[255 % j, 255 % j, j] for j in i] for i in img_gray] (with the same zero guard) you get:
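A hedged alternative sketch (not from the original answer): if the goal is simply to keep each pixel's grey lightness but render it in red, a vectorized version can copy the grayscale values straight into the red channel:
import cv2
import numpy as np

img_gray = cv2.imread('white_rose.png', cv2.IMREAD_GRAYSCALE)

# Build a BGR image whose red channel carries the grey intensity,
# so bright (white) leaves come out bright red
red = np.zeros((img_gray.shape[0], img_gray.shape[1], 3), dtype=np.uint8)
red[:, :, 2] = img_gray
cv2.imwrite('red_rose.png', red)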

How to detect if an image is partially occluded?

I have a large number of aerial images. Some of them have the lens partially occluded. For example:
and
I'm trying to automatically detect which of the images have this, using OpenCV. My initial thought was to check how much of the image is black across multiple images, but hopefully there is a smarter way to do it for images in isolation.
An idea is to determine how many black pixels are on the image. We can do this by creating a blank mask and then coloring all detected black pixels white on the mask using np.where. From here we can count the number of white pixels on the mask with cv2.countNonZero then calculate the pixel percentage. If the calculated percentage is greater than some threshold (say 2%) then the image is partially occluded. Here's the results:
Input image -> mask
Pixel Percentage: 3.33%
Occluded: True
Pixel Percentage: 2.54%
Occluded: True
Code
import cv2
import numpy as np
def detect_occluded(image, threshold=2):
    """Determines occlusion percentage and returns
    True for occluded or False for not occluded"""
    # Create mask and find black pixels on image
    # Color all found pixels to white on the mask
    mask = np.zeros(image.shape, dtype=np.uint8)
    mask[np.where((image <= [15,15,15]).all(axis=2))] = [255,255,255]

    # Count number of white pixels on mask and calculate percentage
    mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
    h, w = image.shape[:2]
    percentage = (cv2.countNonZero(mask) / (w * h)) * 100
    if percentage < threshold:
        return (percentage, False)
    else:
        return (percentage, True)

image = cv2.imread('2.jpg')
percentage, occluded = detect_occluded(image)
print('Pixel Percentage: {:.2f}%'.format(percentage))
print('Occluded:', occluded)
I'd recommend using some sort of flood-fill algorithm on black pixels. By checking for large (connected) black areas, you could identify these. This approach has the advantage that you can tweak the parameters for aggressiveness (e.g. when a pixel is labelled as black, how large the connected area must be, etc.).
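A minimal sketch of that idea (the near-black threshold of 15 and the 1%-of-image minimum area are assumptions to tweak), using OpenCV's connected-components analysis rather than an explicit flood fill:
import cv2
import numpy as np

image = cv2.imread('2.jpg')

# Mark near-black pixels (all channels <= 15) in a binary mask
black_mask = (image <= 15).all(axis=2).astype(np.uint8) * 255

# Label connected black regions and measure their areas
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(black_mask)
h, w = image.shape[:2]
min_area = 0.01 * h * w  # flag regions covering at least 1% of the image

# Label 0 is the mask background (non-black pixels), so skip it
large_black = [i for i in range(1, num_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area]
print('Occluded:', len(large_black) > 0)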
