Why is output image black after normalization? - python

I have many grayscale images that I want to normalize by using mean and standard deviation. I use the following process:
1. Calculate the image's mean and standard deviation.
2. Subtract the mean from the image.
3. Divide the result by the standard deviation.
However, I get a black image as a result. What is wrong with my code?
import cv2
img = cv2.imread('E.png')                           # read an image
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
img = cv2.resize(gray_image, (60, 60))              # resize to 60x60 pixels
cv2.imwrite("Grayscale Image.png", img)             # write the result
mean, stdDev = cv2.meanStdDev(img)                  # get mean and standard deviation
image = (img - mean) / stdDev                       # normalization
cv2.imwrite("Normalized Image.png", image)          # write the result
Input image :
Grayscale output:
Normalized image output:

When you save the image, you need to consider the data type. To save the normalized image as PNG, you need to scale the normalized values to an integer range (such as [0, 255]) or use an image format that supports floating point (such as TIFF).
When using z-score normalization (as in your code), you could save it as PNG with:
image -= image.min()   # shift so the minimum is 0
image /= image.max()   # scale to [0, 1]
image *= 255           # scale to [0, 255]
cv2.imwrite("Normalized Image.png", image)

Related

How to normalize image to remove brightness variations

In a computer vision course, the teacher says that, first of all, the image should be normalized to remove brightness variations.
The link for the video https://youtu.be/0WNiYrRjJbM
The formula looks like this:
I = I/||I||, where I is the image and ||I|| is the magnitude of this image.
Could somebody explain how to implement this normalization using Python and any library, OpenCV for instance? Maybe such a function already exists in some library, ready to use?
What I think is that the magnitude of an image is calculated as m = sqrt(sum(v*v)), where v is the array of values for each point after converting the image to HSV. Then I = v/m, each point's value divided by the magnitude. But this doesn't work; the result looks strange.
Thanks.
Below is the small piece of code I wrote which does image normalization.
import numpy as np
import cv2

img = cv2.imread("../images/segmentation/peppers_BlueHills.png")
print("img shape = ", img.shape)
print("img type = ", img.dtype)
print("img[0][0]", img[0][0])

# 2-norm of the whole image
norm = np.linalg.norm(img)
print("img norm = ", norm)
img2 = img / norm

# img2 becomes float64 here; reduce it to float32
img2 = np.float32(img2)
print("img2 type = ", img2.dtype)
print("img2[0][0]", img2[0][0])

# only a float-capable format such as TIFF can store the result
cv2.imwrite('../images/segmentation/NormalizedPeppers_BlueHills.tif', img2)
# casting the tiny values to uint8 truncates them all to 0, hence a black display
cv2.imshow('normalizedImg', img2.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()
exit(0)
The output looks like below:
img shape = (384, 512, 3)
img type = uint8
img[0][0] [64 29 62]
img norm = 78180.45637497904
img2 type = float32
img2[0][0] [0.00081862 0.00037094 0.00079304]
The output image looks like a black square.
However, it's possible to equalize the brightness in Photoshop, for instance, to see something.
Each channel (R, G, B) becomes float, and only the TIFF format supports that.
To me it's still not clear what dividing each pixel's brightness by some value (in this case the 2-norm of the image) gives us. It just makes the image too dark and unreadable; it doesn't equalize the brightness to make it even across the entire image.
What do you think?
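As a quick check (a sketch reusing img2 from the code above), stretching the norm-divided image back to [0, 255] for display shows that dividing by a single scalar only changes the global scale; the relative structure is untouched:
import numpy as np
import cv2

# img2 is the norm-divided float image from the code above; stretch it
# to [0, 255] purely for display -- the relative structure is unchanged
vis = img2 - img2.min()
vis = (255 * vis / vis.max()).astype(np.uint8)
cv2.imshow('stretched', vis)
cv2.waitKey(0)
cv2.destroyAllWindows()
This essentially reproduces the original image, which confirms that per-image 2-norm division cannot even out brightness within an image; that requires a local operation, such as dividing by a local mean.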

contrast normalization of image python

I want to ask how to get the image result (Icon) with Python code as indicated in the referenced article, where Ishade is a preprocessed image and std(Ishade) is the standard deviation of this image:
result = ndimage.median_filter(blur, size=68)
std = cv2.meanStdDev(result)
I tried to follow the article in the reference you posted, and the reference in that post to the original, but I do not get exactly what they do. Nevertheless, here is my interpretation (apart from the initial CLAHE). You can adjust the mean and median filter sizes as desired.
Input:
import cv2
import numpy as np
import skimage.exposure
# load image
img = cv2.imread("lena.jpg")
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Gaussian blurred gray image
mean = cv2.GaussianBlur(gray, (0,0), sigmaX=5, sigmaY=5)
# apply median filter to mean image
median = cv2.medianBlur(mean, 25)
# divide mean by median
division = cv2.divide(mean.astype(np.float64)/255, median.astype(np.float64)/255)
# get global standard deviation of division
std = np.std(division)
print(std)
# divide the division by the std and normalize to range 0 to 255 as uint8
result = np.divide(division, std)
result = skimage.exposure.rescale_intensity(result, in_range='image', out_range=(0,255)).astype(np.uint8)
# write result to disk
cv2.imwrite("lena_std_division2.jpg", result)
# display it
cv2.imshow("mean", mean)
cv2.imshow("median", median)
cv2.imshow("division", division)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I am not sure I understand what you want. There are different types of normalization formulae.
The most common would be to subtract the mean from the image and then divide by the standard deviation: (I - mean(I)) / std(I).
But if you want your formula, I/std(I), then it can be done as follows:
Input:
import cv2
import numpy as np
import skimage.exposure
# load image
img = cv2.imread("lena.jpg")
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)/255
# get local mean from blurred gray image and square it
sigma=15
mean = cv2.GaussianBlur(gray, (0,0), sigmaX=sigma, sigmaY=sigma)
mean_sq = cv2.multiply(mean,mean)
# get mean of gray image squared
gray2 = cv2.multiply(gray,gray)
mean2 = cv2.GaussianBlur(gray2, (0,0), sigmaX=sigma, sigmaY=sigma)
# get variance image from the two means
var = cv2.subtract(mean2, mean_sq)
# get the standard deviation image from the variance image
std = np.sqrt(var)
print(std.dtype, np.amax(std), np.amin(std))
# divide image by std and scale using skimage
divide = (255*cv2.divide(gray, std, scale=1)).clip(0,255).astype(np.uint8)
divide = skimage.exposure.rescale_intensity(divide, in_range='image', out_range=(0,255)).astype(np.uint8)
print(divide.dtype, np.amax(divide), np.amin(divide))
# write result to disk
cv2.imwrite("lena_std_division.jpg", divide)
# display it
cv2.imshow("std", std)
cv2.imshow("divide", divide)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result (depending upon the sigma value):
An alternate formula, for which I have posted a number of examples (called division normalization), would be to divide the image by its local mean: I/mean(I).
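For completeness, here is a minimal sketch of that division normalization (the file name and sigma value are assumptions, reusing the imports from above):
import cv2
import numpy as np
import skimage.exposure

# load as grayscale float in [0, 1]
gray = cv2.imread("lena.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255

# local mean via Gaussian blur, then divide the image by it
sigma = 15
mean = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma, sigmaY=sigma)
division = cv2.divide(gray, mean)

# stretch to [0, 255] as uint8 for saving
result = skimage.exposure.rescale_intensity(
    division, in_range='image', out_range=(0, 255)).astype(np.uint8)
cv2.imwrite("lena_division_normalized.jpg", result)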

Converting Color Images to Grayscale but shape of the image stays identical

I've converted some images from RGB to grayscale for ML purposes.
However, the shape of the converted grayscale image still has 3 channels, the same as the color image.
The code for the conversion:
from PIL import Image
img = Image.open('path/to/color/image')
imgGray = img.convert('L')
imgGray.save('path/to/grayscale/image')
The code to check the shape of the images:
import cv2
im_color = cv2.imread('path/to/color/image')
print(im_color.shape)
im_gray2 = cv2.imread('path/to/grayscale/image')
print(im_gray2.shape)
You did
im_gray2 = cv2.imread('path/to/grayscale/image')
OpenCV does not inspect the colorness of the image; it assumes the image is color and that the desired output is 8-bit BGR. You need to inform OpenCV that you want the output to be grayscale (a 2D intensity array), as follows:
im_gray2 = cv2.imread('path/to/grayscale/image', cv2.IMREAD_GRAYSCALE)
If you want to know more about reading images, read OpenCV: Getting Started with Images.
cv2.imread, without any flags, will always convert any image content to BGR, 8 bits per channel.
If you want any image file, grayscale or color, to be read as grayscale, you can pass the cv2.IMREAD_GRAYSCALE flag.
If you want to read the file as it really is, then you need to use cv2.IMREAD_UNCHANGED.
im_color = cv2.imread('path/to/color/image', cv2.IMREAD_UNCHANGED)
print(im_color.shape)
im_gray2 = cv2.imread('path/to/grayscale/image', cv2.IMREAD_UNCHANGED)
print(im_gray2.shape)
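With those flags, a true single-channel grayscale file prints a 2-tuple (H, W), while the color file prints (H, W, 3). If you would rather keep a single read path, you can also convert after loading; a minimal sketch:
import cv2

im = cv2.imread('path/to/color/image')           # (H, W, 3) BGR by default
im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)   # (H, W), single channel
print(im_gray.shape)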

Calculate the blur degree with variance of laplacian in open cv

I am trying to obtain the blur degree of an image. I referenced this tutorial, which calculates the variance of the Laplacian in OpenCV.
import cv2

def variance_of_laplacian(image):
    return cv2.Laplacian(image, cv2.CV_64F).var()

def check_blurry(image):
    """
    :param image: the image
    :return: the focus measure (lower means blurrier)
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    return fm
When I try to calculate the fm of two images which look exactly the same but have different sizes:
filePath = 'small.jpeg'
image1 = cv2.imread(filePath)
print('small image shape', image1.shape)
print('small image fm', check_blurry(image1))
filePath = 'large.jpg'
image2 = cv2.imread(filePath)
print('large image shape', image2.shape)
print('large image fm', check_blurry(image2))
The output is:
small image shape (1440, 1080, 3)
small image fm 4.7882723403428065
large image shape (4032, 3024, 3)
large image fm 8.44476634687877
Obviously, the small image is the large image scaled down by a factor of 2.8. Is fm related to the size of the image? If so, what is the relationship between them? Or is there any solution to evaluate the blur degree of images of different sizes?
"Is fm related to the size of image?"
Yes partially (at least in regard of your question), because if you scale an image, you have to interpolate pixel values. Downscaling will not only lose information but create new information by pixel interpolation (if it's not nearest-neighbor interpolation) and thus influence the variance of the resulting image. But this only applies to scaled images, not to images which are different in the first place.
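As a practical workaround (my suggestion, not part of the answer above), you can resize every image to a common size with the same interpolation before measuring, so the metric is computed at a comparable scale; check_blurry_normalized below is a hypothetical helper:
import cv2

def variance_of_laplacian(gray):
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def check_blurry_normalized(image, size=(1080, 1440)):
    """Resize to a fixed (width, height) before measuring sharpness,
    so images of different resolutions are compared on an equal footing."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    return variance_of_laplacian(gray)
The absolute scores still depend on the chosen size and interpolation, so only compare values produced with identical settings.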

Normalizing images in OpenCV produces black image?

I wrote the following code to normalize an image using NORM_L1 in OpenCV. But the output image was just black. How to solve this?
import cv2
import numpy as np
from PIL import Image
img = cv2.imread('img7.jpg')
gray_image = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
a = np.asarray(gray_image)
dst = np.zeros(shape=(5,2))
b=cv2.normalize(a,dst,0,255,cv2.NORM_L1)
im = Image.fromarray(b)
im.save("img50.jpg")
cv2.waitKey(0)
cv2.destroyAllWindows()
If you want to change the range to [0, 1], make sure the output data type is float:
image = cv2.imread("lenacolor512.tiff", cv2.IMREAD_COLOR) # uint8 image
norm_image = cv2.normalize(image, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
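Note that cv2.imwrite will still produce a black PNG/JPEG from a [0, 1] float image (values are cast to integers on write), so scale back up before saving; a small sketch continuing from norm_image above:
import cv2
import numpy as np

# scale the [0, 1] float image back to [0, 255] before writing to disk
cv2.imwrite("normalized.png", (norm_image * 255).astype(np.uint8))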
The other answers normalize an image based on the entire image. But if your image has a predominant color (such as black), it will mask out the features that you're trying to enhance, since they will not be as pronounced. To get around this limitation, we can normalize the image based on a subsection region of interest (ROI). Essentially, we will normalize based on the section of the image that we want to enhance instead of treating each pixel with the same weight. Take for instance this earth image:
Input image -> Normalization based on entire image
If we want to enhance the clouds by normalizing based on the entire image, the result will not be very sharp and will be oversaturated due to the black background. The features to enhance are lost. So to obtain a better result, we can crop a ROI, normalize based on the ROI, and then apply the normalization back onto the original image. Say we crop the ROI highlighted in green:
This gives us this ROI
The idea is to calculate the mean and standard deviation of the ROI and then clip the frame based on the lower and upper range. In addition, we could use an offset to dynamically adjust the clip intensity. From here we normalize the original image to this new range. Here's the result:
Before -> After
Code
import cv2
import numpy as np
# Load image as grayscale and crop ROI
image = cv2.imread('1.png', 0)
x, y, w, h = 364, 633, 791, 273
ROI = image[y:y+h, x:x+w]
# Calculate mean and STD
mean, STD = cv2.meanStdDev(ROI)
# Clip frame to lower and upper STD
offset = 0.2
clipped = np.clip(image, mean - offset*STD, mean + offset*STD).astype(np.uint8)
# Normalize to range
result = cv2.normalize(clipped, clipped, 0, 255, norm_type=cv2.NORM_MINMAX)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.imshow('result', result)
cv2.waitKey()
The difference between normalizing based on the entire image vs a specific section of the ROI can be visualized by applying a heatmap to the result. Notice the difference on how the clouds are defined.
Input image -> heatmap
Normalized on entire image -> heatmap
Normalized on ROI -> heatmap
Heatmap code
import matplotlib.pyplot as plt
import numpy as np
import cv2
image = cv2.imread('result.png', 0)
colormap = plt.get_cmap('inferno')
heatmap = (colormap(image) * 65535).astype(np.uint16)[:,:,:3]  # 65535 (not 2**16) avoids uint16 overflow where the colormap returns 1.0
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: The ROI bounding box coordinates were obtained using how to get ROI Bounding Box Coordinates without Guess & Check and heatmap code was from how to convert a grayscale image to heatmap image with Python OpenCV
When you normalize a matrix using NORM_L1, you are dividing every pixel value by the sum of absolute values of all the pixels in the image.
As a result, all pixel values become much less than 1 and you get a black image. Try NORM_MINMAX instead of NORM_L1.
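A tiny demo (made-up 2x2 values) makes the difference concrete:
import cv2
import numpy as np

a = np.array([[0, 50], [100, 200]], dtype=np.uint8)

# NORM_L1 scales so the sum of absolute values equals alpha (here 1):
# each pixel becomes value/350 -- far below 1, i.e. black as uint8
l1 = cv2.normalize(a, None, alpha=1, beta=0, norm_type=cv2.NORM_L1, dtype=cv2.CV_32F)
print(l1)  # values like 0.143, 0.286, 0.571

# NORM_MINMAX stretches min -> 0 and max -> 255
mm = cv2.normalize(a, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX)
print(mm)  # 0 and 255 at the extremes, intermediate values stretched between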
