How do I calculate the covariance between 2 images? - python

I am working on image processing with Python. Specifically, I am trying to implement an algorithm called the Structural similarity index measure (SSIM) between two images (x and y), which I found in this article.
That formula needs the covariance between the two images. I know how to calculate the covariance between two vectors, but I don't know how to calculate the covariance of two matrices (I assume each image is a matrix of pixels). Can anyone help me? I tried the numpy function numpy.cov(x,y) [doc], but I have a large 3-D matrix and I actually need a scalar value.

Using Python, we can calculate the covariance between two images in the following way:
import numpy as np

def Covariance(x, y):
    xbar, ybar = x.mean(), y.mean()
    # use x.size rather than len(x): len() of a 2-D image array only counts its rows
    return np.sum((x - xbar) * (y - ybar)) / (x.size - 1)
Now take two images img1 and img2, call the function, and print the result:
print(Covariance(img1, img2))
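As a sanity check, the same number falls out of numpy's own routine on the flattened arrays; a small sketch, assuming both images are numpy arrays of equal shape (the random arrays here are just stand-ins):
import numpy as np
img1 = np.random.rand(64, 64)  # hypothetical grayscale images
img2 = np.random.rand(64, 64)
# np.cov on the flattened images returns a 2x2 covariance matrix;
# its off-diagonal entry is the scalar covariance (same N-1 normalization)
print(np.cov(img1.ravel(), img2.ravel())[0, 1])
print(Covariance(img1, img2))  # should match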

Check this library: pyssim. Might be what you're looking for.
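Alternatively, scikit-image ships an SSIM implementation. A minimal sketch, assuming the two images are equal-shaped grayscale uint8 arrays (hypothetical file names):
from skimage.metrics import structural_similarity as ssim
import cv2
img1 = cv2.imread('c1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('d1.jpg', cv2.IMREAD_GRAYSCALE)
# returns a scalar similarity score between -1 and 1
print(ssim(img1, img2))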

import cv2
import numpy as np
from PIL import Image, ImageOps  # PIL = pillow
from numpy import asarray

# read images via PIL -- the opencv equivalent would be img1 = cv2.imread("c1.jpg")
img1 = Image.open('c1.jpg')
img2 = Image.open('d1.jpg')

# convert to grayscale with PIL
gimg1 = ImageOps.grayscale(img1)
gimg2 = ImageOps.grayscale(img2)

# asarray() converts PIL images into NumPy arrays
numpydata1 = asarray(gimg1)
numpydata2 = asarray(gimg2)
print("Array of image 1: ", numpydata1.shape)
print("Array of image 2: ", numpydata2.shape)

# grayscale images are stored as 2D ndarrays of rows (height) x columns (width)
height = int(numpydata2.shape[0] * (numpydata1.shape[0] / numpydata2.shape[0]))
width = int(numpydata2.shape[1] * (numpydata1.shape[1] / numpydata2.shape[1]))
#print(width)
#print(height)

# cv2.resize() expects the size as width x height, so build the target size as w x h
new = (width, height)
# resize image 2 so both images have the same dimensions
resized = cv2.resize(numpydata2, new, interpolation=cv2.INTER_AREA)
print("Array of resized image 2: ", resized.shape)

def Covariance(x, y):
    xbar, ybar = x.mean(), y.mean()
    # use x.size rather than len(x): len() of a 2-D image array only counts its rows
    return np.sum((x - xbar) * (y - ybar)) / (x.size - 1)

print(Covariance(numpydata1, resized))

'''
# Alternative method - convert the grayscale images to arrays using np.array
np_img1 = np.array(gimg1)
np_img2 = np.array(gimg2)
'''

Related

How to perform operations on images in python

I am trying to complete a challenge where I use an equation to construct a new image (d) from other images. Then I must get the flag in the image (d). The given images are a.png, b.png, c.png and y.png and they can be found here: https://drive.google.com/drive/folders/1bZOm_0apr5ZmaRNf9R5UVIEmtMuYSphn?usp=sharing
The equation: d = y - 21a - 3b + 41c
My current code:
from PIL import Image

imagey = Image.open('y.png')
imagea = Image.open('a.png')
imageb = Image.open('b.png')
imagec = Image.open('c.png')

size = width, height = imagey.size
new = Image.new('RGB', size)

imgy = imagey.load()
imga = imagea.load()
imgb = imageb.load()
imgc = imagec.load()
data = new.load()

for x in range(width):
    for y in range(height):
        they = imgy[x, y]
        thea = imga[x, y]
        theb = imgb[x, y]
        thec = imgc[x, y]
        new_color = ((int(they[0])) & ~(int((21 * thea[0])) ^ int((3 * theb[0])) ^ int(~(41 * thec[0]))),
                     (int(they[1])) & ~(int((21 * thea[1])) ^ int((3 * theb[1])) ^ int(~(41 * thec[1]))),
                     (int(they[2])) & ~(int((21 * thea[2])) ^ int((3 * theb[2])) ^ int(~(41 * thec[2]))))
        data[x, y] = new_color

new.save('final.png')
new.show()
If you convert the Pillow images to numpy arrays, or load the images with OpenCV or imageio (which give you numpy arrays directly), then you can do it directly:
new = imagey - 21*imagea - 3*imageb + 41*imagec
Result:
Not ideal, but much better than with your code.
The problem can be overflow: the images load as arrays of 8-bit values, while the calculation produces values that need 16 or 32 bits, so the result may be reduced back to 8 bits at every step (see the overflow-safe sketch after the code below).
Full working code:
import imageio
imagey = imageio.imread('y.png')
imagea = imageio.imread('a.png')
imageb = imageio.imread('b.png')
imagec = imageio.imread('c.png')
new = imagey - 21*imagea - 3*imageb + 41*imagec
imageio.imwrite('final.png', new)
# --- imageio doesn't have a function to display it ---
import matplotlib.pyplot as plt
plt.imshow(new)
plt.show()
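To guard against that overflow, a minimal sketch: cast to a wider signed dtype before the arithmetic, then clip back into the 8-bit range (same file names as above):
import imageio
import numpy as np
# widen to 32-bit signed integers so the intermediate sums cannot wrap around
imagey = imageio.imread('y.png').astype(np.int32)
imagea = imageio.imread('a.png').astype(np.int32)
imageb = imageio.imread('b.png').astype(np.int32)
imagec = imageio.imread('c.png').astype(np.int32)
new = imagey - 21*imagea - 3*imageb + 41*imagec
# clip back into 0..255 and convert to uint8 before writing
imageio.imwrite('final.png', np.clip(new, 0, 255).astype(np.uint8))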
EDIT:
If I use OpenCV then I get the ideal result.
Full working code:
import cv2
imagey = cv2.imread('y.png')
imagea = cv2.imread('a.png')
imageb = cv2.imread('b.png')
imagec = cv2.imread('c.png')
new = imagey - 21*imagea - 3*imageb + 41*imagec
cv2.imwrite('final.png', new)
# --- show window with image and wait for a key press ---
cv2.imshow('Image', new)
cv2.waitKey(0)
cv2.destroyAllWindows()
EDIT:
By the way: here is a version which converts the PIL Images to numpy arrays and later converts back to a PIL Image - it gives the same result as imageio.
from PIL import Image
import numpy as np
imagey = Image.open('y.png')
imagea = Image.open('a.png')
imageb = Image.open('b.png')
imagec = Image.open('c.png')
arr_y = np.array(imagey)
arr_a = np.array(imagea)
arr_b = np.array(imageb)
arr_c = np.array(imagec)
arr_new = arr_y - 21*arr_a - 3*arr_b + 41*arr_c
new = Image.fromarray(arr_new)
new.save('final.png')
new.show()
BTW:
If I check the images on Linux using the program file, it shows that b.png and c.png are JPEG, not PNG.
$ file b.png
b.png: JPEG image data, JFIF standard 1.01, resolution (DPI),
density 300x300, segment length 16,
Exif Standard: [TIFF image data, big-endian, direntries=0], baseline,
precision 8, 960x640, components 3
I found that cv2.imread() gives slightly different values for c.png (which is a JPEG file) than the other modules - and I don't mean that cv2 gives colors in BGR instead of RGB - and this later gives the correct result. Probably cv2 uses a different C library to read JPEG.

How to autocrop randomly using PIL?

I'm trying to write a script in PIL that basically creates a bunch of images out of a larger image.
I want to take a larger image, let's say it's 1000X1000 pixels, and crop squares of 250x250 out of it at "random" locations.
I know that .crop(box) exists and works on one image and one crop at a time. Is there any way I can use it to do the task I want?
from PIL import Image
from random import randrange

img = Image.open(r"image_path")
x, y = img.size
matrix = 250
sample = 10
sample_list = []

for i in range(sample):
    x1 = randrange(0, x - matrix)
    y1 = randrange(0, y - matrix)
    sample_list.append(img.crop((x1, y1, x1 + matrix, y1 + matrix)))
matrix => size of the square crop (it has to be square and should be considerably smaller than the image size)
sample => number of samples to take (i.e. the number of cropped images)
sample_list => the list used to store the cropped images
After the code runs, sample_list will contain several image objects which can be referenced individually by their index numbers (e.g. sample_list[0]).
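To write the crops to disk afterwards, a small sketch (hypothetical file names):
for i, cropped in enumerate(sample_list):
    cropped.save('crop_{}.png'.format(i))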
A small adaptation of @Vasu's answer: it additionally crops away the margins (if the margins are not important) and sets matrix based on the image height/width. Can be adapted to your needs.
from random import randrange
from PIL import Image

def autocrop(pil_img, pct_focus=0.3, matrix_HW_pct=0.3, sample=1):
    """
    Random crops from an input image.
    Args:
    - pil_img
    - pct_focus(float): PCT of margins to remove based on image H/W
    - matrix_HW_pct(float): crop size in PCT based on image height
    Returns:
    - crop_list(list): list of PIL cropped images
    """
    x, y = pil_img.size
    # crop away the margins first
    img_focus = pil_img.crop((x*pct_focus, y*pct_focus, x*(1-pct_focus), y*(1-pct_focus)))
    x_focus, y_focus = img_focus.size
    # crop size as a fraction of the focused image's height
    matrix = round(matrix_HW_pct*y_focus)
    crop_list = []
    for i in range(sample):
        x1 = randrange(0, x_focus - matrix)
        y1 = randrange(0, y_focus - matrix)
        cropped_img = img_focus.crop((x1, y1, x1 + matrix, y1 + matrix))
        #display(cropped_img)
        crop_list.append(cropped_img)
    return crop_list
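A usage sketch (hypothetical file name):
img = Image.open('photo.jpg')
for i, c in enumerate(autocrop(img, pct_focus=0.3, matrix_HW_pct=0.3, sample=5)):
    c.save('random_crop_{}.png'.format(i))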

Python -- change the RGB values of the image and save as an image

I can already read every pixel's RGB values, but I don't know how to change the values to a half and save the result as an image. Thank you in advance.
from PIL import Image

def half_pixel(jpg):
    im = Image.open(jpg)
    img = im.load()
    print(im.size)
    [xs, ys] = im.size  # width * height
    # examine every pixel in im
    for x in range(0, xs):
        for y in range(0, ys):
            # get the RGB color of the pixel
            [r, g, b] = img[x, y]

# get the RGB color of the pixel
[r, g, b] = img.getpixel((x, y))
# update with the new rgb values (rtint, gtint, btint are the tints to add)
r = r + rtint
g = g + gtint
b = b + btint
value = (r, g, b)
# assign the new rgb value back to the pixel
img.putpixel((x, y), value)
You can do everything you are wanting to do within PIL.
If you are wanting to reduce the value of every pixel by half, you can do something like:
import PIL

im = PIL.Image.open('input_filename.jpg')
im = im.point(lambda x: x * 0.5)  # point() returns a new image, so keep the result
im.save('output_filename.jpg')
You can see more info about point operations here: https://pillow.readthedocs.io/en/latest/handbook/tutorial.html#point-operations
Additionally, you can do arbitrary pixel manipulation through the pixel access object returned by im.load():
px = im.load()
px[x, y] = (r, g, b)
There are many ways to do this with Pillow. You can use Image.point, for example.
from PIL import Image

# function mapped over each channel value (r, g, b) of each pixel in the image
def change_to_a_half(val):
    return val // 2

im = Image.open('./imagefile.jpg')
im = im.point(change_to_a_half)  # point() returns a new image, so keep the result
The function is actually only called 256 times (assuming 8-bit color depth), and the resulting map is then applied to the pixels. This is much faster than running a nested loop in Python.
If you have Numpy and Matplotlib installed, one solution would be to convert your image to a numpy array and then e.g. save the image with matplotlib.
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
img = Image.open(jpg)
arr = np.array(img)
arr = arr/2 # divide each pixel in each channel by two
plt.imsave('output.png', arr.astype(np.uint8))
Be aware that you need to have a version of PIL >= 1.1.6

K-means color clustering - omit background pixels with masked numpy arrays

I'm trying to find the 3 dominant colours of several images using K-means clustering. The problem I'm facing is that K-means also clusters the background of the image. I am using Python 2.7 and OpenCV 3.
All images have the same grey background with the RGB colour 150,150,150. To prevent K-means from also clustering the background colour, I created a masked array which masks all '150' pixel values in the original image array, theoretically leaving only the non-background pixels for K-means to work with. However, when I run my script, it still returns grey as one of the dominant colours.
My question: is a masked array the way to go (and did I do something wrong) or are there better alternatives to somehow exclude pixels from K-means clustering?
Please find my code below:
from sklearn.cluster import KMeans
from sklearn import metrics
import cv2
import numpy as np

def centroid_histogram(clt):
    numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
    (hist, _) = np.histogram(clt.labels_, bins=numLabels)
    hist = hist.astype("float")
    hist /= hist.sum()
    return hist

image = cv2.imread("test1.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

h, w, _ = image.shape
w_new = int(100 * w / max(w, h))
h_new = int(100 * h / max(w, h))
image = cv2.resize(image, (w_new, h_new))

image_array = image.reshape((image.shape[0] * image.shape[1], 3))
image_array = np.ma.masked_values(image_array, 150)

clt = KMeans(n_clusters=3)
clt.fit(image_array)

hist = centroid_histogram(clt)
zipped = zip(hist, clt.cluster_centers_)
zipped.sort(reverse=True, key=lambda x: x[0])
hist, clt.cluster_centers = zip(*zipped)
print(clt.cluster_centers_)
If you want to extract the values of pixels other than your background, you can use numpy indexing:
mask = ~np.all(image_array == [150, 150, 150], axis=1)  # True for non-background pixels
img2 = image_array[mask]
This will yield the list of pixels which are not [150,150,150].
However, it does not preserve the structure of the image; it just gives you the list of pixel values. I can't really remember, but maybe for K-means you need to give the whole image, i.e. you also need to feed it the positions of the pixels? But in that case no masking will ever help, because masking just replaces the values of certain pixels with others; it does not get rid of pixels altogether.
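For what it's worth, scikit-learn's KMeans treats its input as a plain (n_samples, n_features) array: it does not need pixel positions, and it silently discards the mask of a numpy masked array (the data passes through np.asarray), which is why the background still got clustered. A sketch reusing img2, the filtered pixel list from above:
from sklearn.cluster import KMeans
clt = KMeans(n_clusters=3)
clt.fit(img2)  # cluster only the non-background pixels
print(clt.cluster_centers_)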

Converting 2D Numpy array of grayscale values to a PIL image

Say I have a 2D Numpy array of values on the range 0 to 1, which represents a grayscale image. How do I then convert this into a PIL Image object? All attempts so far have yielded extremely strange scattered pixels or black images.
for x in range(image.shape[0]):
    for y in range(image.shape[1]):
        image[y][x] = numpy.uint8(255 * (image[x][y] - min) / (max - min))

# create a PIL image
img = Image.fromarray(image, 'L')
In the code above, the numpy array image is normalized by (image[x][y] - min) / (max - min), so every value is on the range 0 to 1. Then it is multiplied by 255 and cast to an 8-bit integer. This should, in theory, process through Image.fromarray with mode L into a grayscale image - but the result is a set of scattered white pixels.
I think the answer is wrong. The Image.fromarray( ____ , 'L') function seems to only work properly with an array of integers between 0 and 255. I use the np.uint8 function for this.
You can see this demonstrated if you try to make a gradient.
import numpy as np
from PIL import Image
# gradient between 0 and 1 for 256*256
array = np.linspace(0,1,256*256)
# reshape to 2d
mat = np.reshape(array,(256,256))
# Creates PIL image
img = Image.fromarray(np.uint8(mat * 255), 'L')
img.show()
Makes a clean gradient
vs
import numpy as np
from PIL import Image
# gradient between 0 and 1 for 256*256
array = np.linspace(0,1,256*256)
# reshape to 2d
mat = np.reshape(array,(256,256))
# Creates PIL image
img = Image.fromarray(mat, 'L')
img.show()
Has the same kind of artifacting.
If I understood your question, you want to get a grayscale image using PIL.
If this is the case, you do not need to multiply each pixel by 255.
The following worked for me
import numpy as np
from PIL import Image
# Creates a random image 100*100 pixels
mat = np.random.random((100,100))
# Creates PIL image
img = Image.fromarray(mat, 'L')
img.show()
im = Image.fromarray(np.uint8(mat), 'L')
or
im = Image.fromarray(np.uint8(mat))
Apparently it accepts arrays of type np.uint8; you may also be able to drop the 'L' for conciseness.
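Putting it together, a minimal sketch for a float array with values in [0, 1]:
import numpy as np
from PIL import Image

arr = np.random.random((100, 100))  # float values in [0, 1]
# scale to 0-255 and cast to uint8 before handing the array to PIL
img = Image.fromarray((arr * 255).astype(np.uint8))
img.show()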
