Calculate the blur degree with the variance of the Laplacian in OpenCV - Python

I am trying to obtain the blur degree of an image. I am following this tutorial, which calculates the variance of the Laplacian in OpenCV.
import cv2

def variance_of_laplacian(image):
    return cv2.Laplacian(image, cv2.CV_64F).var()

def check_blurry(image):
    """
    :param image: the input image
    :return: the focus measure (variance of the Laplacian)
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    return fm
I then calculated fm for two images which look exactly the same but have different sizes:
filePath = 'small.jpeg'
image1 = cv2.imread(filePath)
print('small image shape', image1.shape)
print('small image fm', check_blurry(image1))
filePath = 'large.jpg'
image2 = cv2.imread(filePath)
print('large image shape', image2.shape)
print('large image fm', check_blurry(image2))
The output is:
small image shape (1440, 1080, 3)
small image fm 4.7882723403428065
large image shape (4032, 3024, 3)
large image fm 8.44476634687877
Obviously, the small image is the large image scaled down by a factor of 2.8. Is fm related to the size of the image? If so, what is the relationship between them? Or is there a way to evaluate the blur degree of images with different sizes?

"Is fm related to the size of image?"
Yes partially (at least in regard of your question), because if you scale an image, you have to interpolate pixel values. Downscaling will not only lose information but create new information by pixel interpolation (if it's not nearest-neighbor interpolation) and thus influence the variance of the resulting image. But this only applies to scaled images, not to images which are different in the first place.
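One common workaround (not part of the answer above, just a minimal sketch) is to rescale every image to a fixed width before computing the focus measure, so the variance of the Laplacian is always measured at the same resolution. The helper name check_blurry_scaled and the target width of 1000 pixels are arbitrary assumptions:
import cv2

def check_blurry_scaled(image, target_width=1000):
    # Hypothetical helper: bring every image to a common width first,
    # so the Laplacian variance of different-sized images is comparable.
    h, w = image.shape[:2]
    scale = target_width / float(w)
    resized = cv2.resize(image, (target_width, int(h * scale)),
                         interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
Scores computed this way are comparable across sizes, although the absolute values will differ from the ones printed above.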

Related

How to normalize image to remove brightness variations

In a computer vision course, the teacher says that, first of all, the image should be normalized to remove brightness variations.
The link to the video: https://youtu.be/0WNiYrRjJbM
The formula looks like this:
I = I/||I||, where I is the image and ||I|| is the magnitude of the image.
Could somebody explain how to implement this normalization using Python and any library, OpenCV for instance? Maybe such a function already exists in some library and is ready to use?
What I think is that the magnitude of the image is computed as m = sqrt(sum(v*v)), where v is the array of values for each point after converting the image to HSV, and then I = v/m, i.e. each point's value is divided by the magnitude. But this doesn't work; the result looks strange.
Thanks.
Below is the small piece of code I wrote which does image normalization.
import numpy as np
import cv2
img = cv2.imread("../images/segmentation/peppers_BlueHills.png")
print("img shape = ", img.shape)
print("img type = ", img.dtype)
print("img[0][0]", img[0][0])
#2-norm
norm = np.linalg.norm(img)
print("img norm = ", norm)
img2 = img / norm
#here img2 becomes float64, reducing it to float32
img2 = np.float32(img2)
print("img2 type = ", img2.dtype)
print("img2[0][0]", img2[0][0])
cv2.imwrite('../images/segmentation/NormalizedPeppers_BlueHills.tif', img2)
cv2.imshow('normalizedImg', img2.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()
exit(0)
The output looks like this:
img shape = (384, 512, 3)
img type = uint8
img[0][0] [64 29 62]
img norm = 78180.45637497904
img2 type = float32
img2[0][0] [0.00081862 0.00037094 0.00079304]
The output image looks like a black square.
However, it's possible to equalize the brightness in Photoshop, for instance, to see something.
Each channel (R, G, B) becomes float, and only the TIFF format supports that.
To me it's still not clear what dividing each pixel's brightness by some value (in this case the 2-norm of the image) gives us. It just makes the image too dark and unreadable, and it doesn't equalize the brightness to make it even across the entire image.
What do you think?
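To see what dividing by the 2-norm actually does, the tiny float values can be stretched back into the displayable [0, 255] range. This is only a visualization trick, not part of the normalization itself, and it also makes clear that the division changes only the scale of the values, not the relative brightness across the image. A minimal sketch, reusing the path from the question:
import cv2
import numpy as np

img = cv2.imread("../images/segmentation/peppers_BlueHills.png")
norm_img = img / np.linalg.norm(img)  # tiny float values, looks black if shown directly

# Stretch back to [0, 255] purely for display/saving; relative brightness is unchanged.
display = cv2.normalize(norm_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('normalizedImg (rescaled for display)', display)
cv2.waitKey(0)
cv2.destroyAllWindows()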

python how to resize(shrink) image without losing quality

I want to resize a 476x402 PNG picture to 439x371, and I used the resize method of PIL (Image) or OpenCV; however, it loses some sharpness. After the resize, the picture becomes blurred.
How can I resize (shrink) an image without losing sharpness using Python?
from skimage import transform, data, io
from PIL import Image
import os
import cv2

infile = 'D:/files/script/org/test.png'
outfile = 'D:/files/script/out/test.png'

''' PIL '''
def fixed_size1(width, height):
    im = Image.open(infile)
    out = im.resize((width, height), Image.ANTIALIAS)
    out.save(outfile)

''' OpenCV '''
def fixed_size2(width, height):
    img_array = cv2.imread(infile)
    new_array = cv2.resize(img_array, (width, height), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(outfile, new_array)

def fixed_size3(width, height):
    img = io.imread(infile)
    dst = transform.resize(img, (439, 371))
    io.imsave(outfile, dst)

fixed_size2(371, 439)
src:476x402
resized:439x371
How can you pack 2000 pixels into a box that only holds 1800? You can't.
Putting the same amount of information (stored as pixels in your source image) into a smaller pixel area only works by either
throwing away pixels (i.e. discarding single values, or cropping the image, which is not what you want to do), or
blending neighbouring pixels into some kind of weighted average, replacing, say, 476 pixels with 439 slightly altered ones.
That is exactly what happens when resizing images: some interpolation algorithm (interpolation=cv2.INTER_CUBIC, among others) tweaks the pixel values to merge/average them so that you do not lose too much information.
You can try a different algorithm, or you can apply further post-processing ("sharpening") to enrich the contrast again.
When storing the image, certain formats use "lossy" compression to minimize file size (JPG), while others are lossless (PNG, TIFF, JPEG 2000, ...); choosing a lossy format may blur your image even further.
See
Shrink/resize an image without interpolation
How can I sharpen an image in OpenCV?
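As a concrete sketch of the suggestions above (a different interpolation algorithm plus sharpening), the snippet below shrinks with INTER_AREA, which OpenCV recommends for downscaling, and then applies a simple unsharp mask. The sigma and the blending weights are arbitrary starting points, and the output file name is hypothetical:
import cv2

img = cv2.imread('D:/files/script/org/test.png')
small = cv2.resize(img, (439, 371), interpolation=cv2.INTER_AREA)  # INTER_AREA suits shrinking

# Unsharp mask: subtract a blurred copy to boost local contrast again.
blurred = cv2.GaussianBlur(small, (0, 0), 2)
sharpened = cv2.addWeighted(small, 1.5, blurred, -0.5, 0)

cv2.imwrite('D:/files/script/out/test_sharp.png', sharpened)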

Why is output image black after normalization?

I have many grayscale images that I want to normalize by using mean and standard deviation. I use the following process:
Calculate the image's mean and standard deviation.
Subtract the mean from the image.
Divide the resulting image by the standard deviation.
However, I got a black image as a result. What is wrong with my code?
import cv2

img = cv2.imread('E.png')  # read an image
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert the image to grayscale
img = cv2.resize(gray_image, (60, 60))  # resize the image to 60x60 pixels
cv2.imwrite("Grayscale Image.png", img)  # write the result
mean, stdDev = cv2.meanStdDev(img)  # get mean and standard deviation
image = (img - mean) / stdDev  # normalization
cv2.imwrite("Normalized Image.png", image)  # write the result
Input image :
Grayscale output:
Normalized image output:
When you save the image, you need to consider the data type. To save the normalized image as PNG, you need to scale the normalized values to an integer range (such as [0, 255]) or use an image format that supports floating point values.
When using z-score normalization (as in your code), you could save it as PNG with
image -= image.min()
image /= image.max()
image *= 255  # scale to the [0, 255] range
cv2.imwrite("Normalized Image.png", image.astype("uint8"))  # cast to uint8 before saving as PNG

Normalizing images in OpenCV produces black image?

I wrote the following code to normalize an image using NORM_L1 in OpenCV, but the output image was just black. How can I solve this?
import cv2
import numpy as np
from PIL import Image

img = cv2.imread('img7.jpg')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
a = np.asarray(gray_image)
dst = np.zeros(shape=(5, 2))
b = cv2.normalize(a, dst, 0, 255, cv2.NORM_L1)
im = Image.fromarray(b)
im.save("img50.jpg")
cv2.waitKey(0)
cv2.destroyAllWindows()
If you want to change the range to [0, 1], make sure the output data type is float.
image = cv2.imread("lenacolor512.tiff", cv2.IMREAD_COLOR) # uint8 image
norm_image = cv2.normalize(image, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
The other answers normalize an image based on the entire image. But if your image has a predominant color (such as black), it will mask out the features that you're trying to enhance since it will not be as pronounced. To get around this limitation, we can normalize the image based on a subsection region of interest (ROI). Essentially we will normalize based on the section of the image that we want to enhance instead of equally treating each pixel with the same weight. Take for instance this earth image:
Input image -> Normalization based on entire image
If we want to enhance the clouds by normalizing based on the entire image, the result will not be very sharp and will be over saturated due to the black background. The features to enhance are lost. So to obtain a better result we can crop a ROI, normalize based on the ROI, and then apply the normalization back onto the original image. Say we crop the ROI highlighted in green:
This gives us this ROI
The idea is to calculate the mean and standard deviation of the ROI and then clip the frame based on the lower and upper range. In addition, we could use an offset to dynamically adjust the clip intensity. From here we normalize the original image to this new range. Here's the result:
Before -> After
Code
import cv2
import numpy as np
# Load image as grayscale and crop ROI
image = cv2.imread('1.png', 0)
x, y, w, h = 364, 633, 791, 273
ROI = image[y:y+h, x:x+w]
# Calculate mean and STD
mean, STD = cv2.meanStdDev(ROI)
# Clip frame to lower and upper STD
offset = 0.2
clipped = np.clip(image, mean - offset*STD, mean + offset*STD).astype(np.uint8)
# Normalize to range
result = cv2.normalize(clipped, clipped, 0, 255, norm_type=cv2.NORM_MINMAX)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.imshow('result', result)
cv2.waitKey()
The difference between normalizing based on the entire image vs a specific section of the ROI can be visualized by applying a heatmap to the result. Notice the difference on how the clouds are defined.
Input image -> heatmap
Normalized on entire image -> heatmap
Normalized on ROI -> heatmap
Heatmap code
import matplotlib.pyplot as plt
import numpy as np
import cv2
image = cv2.imread('result.png', 0)
colormap = plt.get_cmap('inferno')
heatmap = (colormap(image) * 2**16).astype(np.uint16)[:,:,:3]
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: The ROI bounding box coordinates were obtained using how to get ROI Bounding Box Coordinates without Guess & Check and heatmap code was from how to convert a grayscale image to heatmap image with Python OpenCV
When you normalize a matrix using NORM_L1, you are dividing every pixel value by the sum of absolute values of all the pixels in the image.
As a result, all pixel values become much less than 1 and you get a black image. Try NORM_MINMAX instead of NORM_L1.
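A minimal sketch of that suggestion, reusing the file names from the question:
import cv2

img = cv2.imread('img7.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# NORM_MINMAX stretches the values to the full [0, 255] range instead of
# dividing by the L1 norm, so the saved image is no longer black.
out = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite('img50.jpg', out)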

How can I apply a dilation operation proportionally to binary images?

I have to perform dilation on binary images.
My problem is adapting the dilation operation to binary images of different sizes.
large image
large image dilated
small image
small image dilated
I want to apply the dilation operation proportionally to all images and prevent a small image, like a wheel, from becoming a white circle.
#image dilation
import cv2
path "pathimage"
gray = cv2.imread(path,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(6,6))
graydilate = cv2.erode(gray, element) #imgbnbin
graydilate = cv2.erode(graydilate, element)
#graydilate = cv2.erode(graydilate, element)
cv2.imshow('erode',graydilate)
cv2.waitKey()
ret,thresh = cv2.threshold(graydilate,127,255,cv2.THRESH_BINARY_INV)
imgbnbin = thresh
print("shape imgbnbin")
print(imgbnbin.shape)
cv2.imshow('binaria',imgbnbin)
cv2.waitKey()
Do I have to rescale the images?
You can do either of the following:
Re-scale the images to the same size.
OR
If the object in the image is not directly proportional to the size of the image, then extract a bounding box for the object and determine the size of the structuring element (currently always 6x6) relative to the size of the object's bounding box. That way you are sure that the morphology is proportional to the object size (see the sketch below).
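A minimal sketch of that idea, assuming the object scales with the image (the helper name and the 1% ratio are arbitrary; if the object does not scale with the image, derive k from the object's bounding box instead):
import cv2

def dilate_proportionally(binary_img, ratio=0.01):
    # Hypothetical helper: size the structuring element as a fraction of the
    # smaller image side, so small and large images are dilated comparably.
    h, w = binary_img.shape[:2]
    k = max(3, int(min(h, w) * ratio))
    element = cv2.getStructuringElement(cv2.MORPH_CROSS, (k, k))
    return cv2.dilate(binary_img, element)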
