Image.fromarray changes size - python

I have data that I want to store in an image. I created an image with width 100 and height 28; my matrix has the same shape. When I use Image.fromarray(matrix), the shape changes:
from PIL import Image
img = Image.new('L', (100, 28))
tmp = Image.fromarray(matrix)
print(matrix.shape) # (100, 28)
print(tmp.size) # (28, 100)
img.paste(tmp, (0, 0, 100, 28)) # ValueError: images do not match
When I use img.paste(tmp, (0, 0)) the object is pasted into the image, but the part starting with the x value 28 is missing.
Why does the dimension change?

PIL and numpy have different indexing systems: matrix[a, b] gives you the point at x position b and y position a, whereas img.getpixel((a, b)) gives you the point at x position a and y position b. As a result, when you convert between numpy arrays and PIL images, the dimensions are swapped. To fix this, you can take the transpose of the matrix (matrix.transpose()).
Here's what's happening:
import numpy as np
from PIL import Image
img = Image.new('L', (100, 28))
img.putpixel((5, 3), 17)
matrix = np.array(img)
print(matrix[5, 3]) # This returns 0
print(matrix[3, 5]) # This returns 17
matrix = matrix.transpose()
print(matrix[5, 3]) # This returns 17
print(matrix[3, 5]) # This returns 0
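Applied to the question, the transpose makes the paste succeed (a minimal sketch, assuming matrix is the uint8 array of shape (100, 28) from the question):
tmp = Image.fromarray(matrix.transpose())
print(tmp.size) # (100, 28) -- now matches the 100x28 target image
img.paste(tmp, (0, 0, 100, 28)) # no ValueError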

NumPy and PIL have different indexing systems. So a (100, 28) numpy array will be interpreted as an image with width 28 and height 100.
If you want a 28x100 image, then you should swap the dimensions for your image instantiation.
img = Image.new('L', (28, 100))
If you want a 100x28 image, then you should transpose the numpy array.
tmp = Image.fromarray(matrix.transpose())
More generally, if you're working with RGB, you can give transpose() an explicit axis order to swap only the first two axes.
>>> arr = np.zeros((100, 28, 3))
>>> arr.shape
(100, 28, 3)
>>> arr.transpose(1, 0, 2).shape
(28, 100, 3)
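To go back to a PIL image afterwards, note that Image.fromarray needs an integer dtype such as uint8 for RGB data; np.zeros defaults to float64, which Pillow cannot handle for a 3-channel array. A minimal sketch continuing the example above:
>>> from PIL import Image
>>> Image.fromarray(arr.transpose(1, 0, 2).astype(np.uint8)).size
(100, 28)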

Related

Colour of image is being changed after conversion from PIL image to numpy array and converting it back

Here is my code
from PIL import Image
import numpy as np

img = Image.open('./data/imgs/' + '0.jpg')
# converting to numpy array
img.load()
imgdata = np.asarray(img)
np.interp(imgdata, (imgdata.min(), imgdata.max()), (0, +1))
# converting back to PIL image
image = imgdata * 255
image = Image.fromarray(image.astype('uint8'), 'RGB')
image.show()
Here is my output with color distortion
How to solve this?
Reason for the problem:
You are not using the return value of np.interp, so imgdata is not replaced by the new values. It therefore keeps its initial range of [0, 255], and its dtype stays np.uint8. When you compute imgdata * 255, the results do not fit in np.uint8 and overflow (wrapping around from 0 again, since the type is unsigned).
Assumption:
I assume that you wanted to map the image values from [min(), max()] to [0, 1] and then rescale back by multiplying by 255.
Solution:
If my assumption about your code is true, do this:
imgdata = np.interp(imgdata, (imgdata.min(), imgdata.max()), (0, +1))
Otherwise, remove the * 255 from the code and change it to:
image = imgdata
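Putting it together, a corrected version of the whole snippet could look like this (a sketch, assuming the normalize-then-rescale intent described above):
from PIL import Image
import numpy as np

img = Image.open('./data/imgs/' + '0.jpg')
imgdata = np.asarray(img)

# keep the return value: imgdata is now float in [0, 1]
imgdata = np.interp(imgdata, (imgdata.min(), imgdata.max()), (0, 1))

# rescale to [0, 255] and cast; the values now fit in uint8
image = Image.fromarray((imgdata * 255).astype('uint8'), 'RGB')
image.show()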
You can test this overflow by taking a sample array as follows:
sample = np.array([10, 20, 30], dtype=np.uint8)
sample2 = sample * 10
print(sample, sample2)
You will see that the values of sample2 are [100, 200, 44], not [100, 200, 300]: 300 does not fit in uint8 and wraps around modulo 256 (300 - 256 = 44).
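If you do need to scale a uint8 array directly, casting to a wider dtype first avoids the wraparound:
sample = np.array([10, 20, 30], dtype=np.uint8)
sample2 = sample.astype(np.uint16) * 10
print(sample2) # [100 200 300]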

Plot a Numpy array with shape (200, 200) shows a vertical line instead of a square

I've just started to learn Python 3.7.7.
I'm trying to show NIfTI images, stored as numpy arrays with float32 elements, using matplotlib.
This is the code to show the images:
import os
import SimpleITK as sitk
import numpy as np
import matplotlib.pyplot as plt
import cv2
def plot_grey_images_as_figure(img1, img2, title1, title2):
"""
Show img1 and img2 side by side as a single figure using matplotlib.pyplot,
with the titles title1 for img1 and title2 for img2.
Parameters:
img1 (array-like or PIL image): Image to show first.
img2 (array-like or PIL image): Image to show second.
title1 (string) : Title for image 1.
title2 (string) : Title for image 2.
Returns:
Nothing.
"""
plt.subplot(121), plt.imshow(img1, cmap = 'gray')
plt.title(title1), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(img2, cmap = 'gray')
plt.title(title2), plt.xticks([]), plt.yticks([])
plt.show()
return
This is how I preprocess the images before displaying them:
def preprocessing_array(array, rows_standard, cols_standard, debug): # array shape is (48, 240, 240)
image_rows_Dataset = np.shape(array)[1]
image_cols_Dataset = np.shape(array)[2]
num_rows_1 = ((image_rows_Dataset // 2) - (rows_standard // 2)) # 20
num_rows_2 = ((image_rows_Dataset // 2) + (rows_standard // 2)) # 220
num_cols_1 = ((image_cols_Dataset // 2) - (cols_standard // 2)) # 20
num_cols_2 = ((image_cols_Dataset // 2) + (cols_standard // 2)) # 220
array = array[:, num_rows_1:num_rows_2, num_cols_1:num_cols_2]
### ------ New Axis --------------------------------------------------------
# Add a new axis to denote that this is a one channel image.
array = array[..., np.newaxis]
return array # array shape is (48, 200, 200, 1)
To display the images I do this:
# D is a dataset with shape (960, 2, 200, 200, 1) with float32 elements.
print("Shape: ", D[:,0,:][0].shape)
nifti.plot_grey_images_as_figure(D[:,0,:][0][:,-1], D[:,1,:][0][:,-1], "mask", "output")
And I get this output:
Why am I getting lines instead of squares?
Maybe the problem is that I add a new axis, and then have to remove it, because matplotlib doesn't allow me to plot an image with shape (200, 200, 1).
Use numpy.squeeze() to drop the size-one dimension. The problem was that the images were passed as non-2D arrays. Squeezing drops the dimension along axis=2, turning the shape from (200, 200, 1) into (200, 200).
import numpy as np
np.squeeze(D[:,0,:][0], axis=2).shape
# (200, 200)
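To see why the original call drew vertical lines, trace the shapes of the slices (a sketch, assuming D has shape (960, 2, 200, 200, 1) as stated):
D[:, 0, :].shape # (960, 200, 200, 1)
D[:, 0, :][0].shape # (200, 200, 1)
D[:, 0, :][0][:, -1].shape # (200, 1) -- a single column, which imshow draws as a line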
Suggested Improvement
# Choose the index i (axis=0)
i = 0 # Valid values are 0, 1, ..., 959
# Extract mask and output images
img1 = D[i,0,:,:,0] # shape (200, 200) --> mask
img2 = D[i,1,:,:,0] # shape (200, 200) --> output
nifti.plot_grey_images_as_figure(img1, img2, "mask", "output")
Reference
numpy.squeeze() documentation
The image slicing is not correct. Your plotting code should be
base = D[0] # shape (2, 200, 200, 1): mask and output for sample 0
img1 = base[0][:, :, 0] # Shape of (200, 200)
img2 = base[1][:, :, 0]
nifti.plot_grey_images_as_figure(img1, img2, "mask", "output")

After converting the image to a numpy array, I want to keep only one channel

I converted some images to numpy arrays.
The images are RGB, and the converted numpy array has shape (256, 256, 3).
After converting the RGB image to YCbCr, I wanted to keep only the Y channel,
i.e. an array of shape (256, 256, 1).
So I used [:, :, 0] on the array.
However, that gave me a two-dimensional array, as shown in the code below.
I then reshaped it to a (256, 256, 1) array,
but I failed to display that as an image again.
Below is my code.
from PIL import Image
import numpy as np
img = Image.open('test.bmp') # input image 256 x 256
img = img.convert('YCbCr')
img.show()
print(np.shape(img)) # (256, 256, 3)
arr_img = np.asarray(img)
print(np.shape(arr_img)) # (256, 256, 3)
arr_img = arr_img[:, :, 0]
print(np.shape(arr_img)) # (256, 256)
arr_img = arr_img.reshape(*arr_img.shape, 1)
print(np.shape(arr_img)) # (256, 256, 1)
pi = Image.fromarray(arr_img) # error: TypeError: Cannot handle this data type
pi.show()
When I forcibly changed the two-dimensional array into a three-dimensional one,
the image could not be output.
I want a purely (256, 256, 1) sized array: the image of the Y channel!
I also tried arr_img = arr_img[:, :, 0:1] but got the same error.
How can I output and save an image with only the Y channel, of size (256, 256, 1)?
A single-channel image should actually be 2D, with a shape of just (256, 256). Extracting the Y channel is effectively the same as having a greyscale image, which is just 2D. Adding the third dimension causes the error, because PIL expects only two dimensions for a single-channel image.
If you remove the reshape to (256, 256, 1), you will be able to save the image.
Edit:
from PIL import Image
import numpy as np
img = Image.open('test.bmp') # input image 256 x 256
img = img.convert('YCbCr')
arr_img = np.asarray(img) # (256, 256, 3)
arr_img = arr_img[:, :, 0] # (256, 256)
pi = Image.fromarray(arr_img)
pi.show()
# Save image
pi.save('out.bmp')
Try this:
arr_img_1d = np.expand_dims(arr_img, axis=2)
Here is the numpy documentation for the expand_dims function.
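A quick shape check (assuming arr_img is the 2D (256, 256) Y-channel array from above; note that PIL still wants the 2D version for display and saving):
arr_img_1d = np.expand_dims(arr_img, axis=2)
print(arr_img_1d.shape) # (256, 256, 1)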

Skimage rgb2gray reduces one dimension

I am trying to convert multiple RGB images to grayscale. However, I am losing one dimension:
# img is an array of 10 images of 32x32 dimensions in RGB
from skimage.color import rgb2gray
print(img.shape) # (10, 32, 32, 3)
img1 = rgb2gray(img)
print(img1.shape) # (10, 32, 3)
As you can see, though the shape of img1 is expected to be (10, 32, 32, 1), it comes out as (10, 32, 3).
What point am I missing?
This function assumes the input is one single image, with 3 dimensions (RGB) or 4 (RGBA, i.e. with alpha).
(As your input has 4 dimensions, it is interpreted as a single RGB + alpha image, not as N images with 3 dimensions each.)
If you have multiple images, you will need to loop somehow, like this (untested):
import numpy as np
from skimage.color import rgb2gray
print(img.shape) # (10, 32, 32, 3)
img_gray = np.stack([rgb2gray(img[i]) for i in range(img.shape[0])]) # shape (10, 32, 32)
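If you prefer to avoid the Python loop, the conversion can also be written as a weighted sum over the channel axis. The weights below are the luma coefficients documented for skimage's rgb2gray; note this sketch skips the rescaling of integer input to [0, 1] that rgb2gray performs:
weights = np.array([0.2125, 0.7154, 0.0721])
img_gray = img @ weights # contracts the last (channel) axis: shape (10, 32, 32)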

copy/reshape high dimensional numpy array

I have a numpy array with a shape like this
x.shape
(100, 1, 300, 300)
Think of this as 100 observations of grayscale images of size 300x300.
Grayscale images have only 1 channel, hence the second 1 in the shape.
I want to convert this to an array of RGB images, with 3 channels.
I want to just copy the grayscale image to the two other channels.
So the final shape would be (100, 3, 300, 300)
How can I do that?
Use np.repeat -
np.repeat(x,3,axis=1)
Sample run -
In [8]: x = np.random.randint(11,99,(2,1,3,4))
In [9]: np.repeat(x,3,axis=1).shape
Out[9]: (2, 3, 3, 4)
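If the three channel copies will only ever be read, np.broadcast_to gives the same shape as a read-only view, without copying any data:
In [10]: np.broadcast_to(x, (2, 3, 3, 4)).shape
Out[10]: (2, 3, 3, 4)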
