Shape of numpy array when converting from PIL image - python

When I convert a PIL image to a numpy array using
image = Image.open(file_path)
image_array = numpy.array(image)
sometimes I get a 2D array with dimensions equal to the dimensions of the image. Other times I get a 3D array in which each pixel is an array of three values (i.e. [RRR, GGG, BBB]). And other times I get a 3D array in which each pixel is an array of four values (i.e. [RRR, GGG, BBB, XXX]). What determines the shape of the numpy array? And if it's a 2D array, what do the entries represent?

Answered my own question:
The shape of the numpy array is determined by the mode of the PIL image. The full list of modes can be found here. A 2D array is created when the image's mode is 1, L, or P. Otherwise a 3D array is created, with the values in the last dimension corresponding to the channels of the mode.
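For illustration, a minimal sketch of how the mode maps to the array shape (the file path is a placeholder). In the 2D case each entry is the pixel's grey level for mode L, the palette index for mode P, and 0/1 for mode 1:
from PIL import Image
import numpy

image = Image.open('example.png')   # placeholder path
print(image.mode)                   # e.g. '1', 'L', 'P', 'RGB', or 'RGBA'

# mode 'L' or 'P' -> 2D array of shape (height, width)
# mode 'RGB'      -> 3D array of shape (height, width, 3)
# mode 'RGBA'     -> 3D array of shape (height, width, 4)
image_array = numpy.array(image)
print(image_array.shape)

# Converting first guarantees a consistent 3-channel shape:
rgb_array = numpy.array(image.convert('RGB'))   # always (height, width, 3)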

Related

Resizing arrays without converting them to images

I have a few arrays of shape (137, 236). Can I reshape them into (128, 128) just like we would do with an image? I do not want to convert the array to an image, then resize, and then retrieve the array.
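One hedged sketch of a way to do this directly on the array, assuming skimage is available. Note this is resampling rather than reshaping, since reshape cannot change the number of elements (137*236 != 128*128):
import numpy as np
from skimage.transform import resize

arr = np.random.rand(137, 236)          # stand-in for one of the arrays

# reshape() cannot change the element count, so resample instead;
# skimage.transform.resize works on plain arrays, no PIL round-trip needed.
resized = resize(arr, (128, 128), anti_aliasing=True)
print(resized.shape)                    # (128, 128)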

2D RGB image construction from 3D array in SimpleITK

I have an RGB image in the format of a 3D array with the shape (m, n, 3). I would like to create a SimpleITK image. Using the GetImageFromArray() function results in a 3D image, which is not what I am looking for. How can I create a 2D RGB image instead?
The documentation reads:
Signature: sitk.GetImageFromArray(arr, isVector=None)
Docstring: Get a SimpleITK Image from a numpy array. If isVector is True, then the Image will have a Vector pixel type, and the last dimension of the array will be considered the component index. By default when isVector is None, 4D images are automatically considered 3D vector images.
Have you tried passing isVector=True?
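For example, a minimal sketch of that suggestion (the array contents are dummy data):
import numpy as np
import SimpleITK as sitk

arr = np.zeros((256, 256, 3), dtype=np.uint8)   # an (m, n, 3) RGB array

# isVector=True treats the last axis as the pixel components, giving a
# 2D image with 3-component vector pixels instead of a 3D scalar volume.
img = sitk.GetImageFromArray(arr, isVector=True)
print(img.GetDimension())                   # 2
print(img.GetNumberOfComponentsPerPixel())  # 3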

Iterating over pixels in an image as a numpy array

Let's say I have a numpy array of shape (100, 100, 3), and that it represents an image in RGB encoding. How do I iterate over the individual pixels of this image?
Specifically, I want to map a function over this image.
Note: I got that array from OpenCV.
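A small sketch of both the plain loop and a function-mapping approach (my_func is a made-up example; for simple operations a vectorised numpy expression will be far faster than any per-pixel loop):
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)   # stand-in for the OpenCV array

# Plain iteration over every pixel:
for row in range(img.shape[0]):
    for col in range(img.shape[1]):
        pixel = img[row, col]                   # a length-3 array of channel values

# Mapping a function over each pixel (called once per pixel, so slow):
def my_func(pixel):
    return pixel[::-1]                          # e.g. reverse the channel order

mapped = np.apply_along_axis(my_func, -1, img)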

Reordering numpy array indices

I have a 2D array that I want to create an image from. I want to transform the image array of dimensions 140x120 to an array of 140x120x3 by stacking the same array 3 times (to get a grayscale image to use with skimage).
I tried the following:
image = np.uint8([image, image, image])
which results in a 3x120x140 image. How can I reorder the array to get 120x140x3 instead?
np.dstack([image, image, image]) (docs) will return an array of the desired shape, but whether this has the right semantics for your application depends on your image generation library.
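A quick sketch using the shapes from the question's stated output (3x120x140), showing both np.dstack and fixing an already-stacked array with transpose:
import numpy as np

image = np.zeros((120, 140), dtype=np.uint8)    # the 2D greyscale array

# np.dstack stacks along a new last axis:
stacked = np.dstack([image, image, image])
print(stacked.shape)                            # (120, 140, 3)

# np.uint8([image, image, image]) stacks along a new first axis, which is
# why it gave (3, 120, 140); that layout can also be fixed with transpose:
first_axis = np.uint8([image, image, image])    # (3, 120, 140)
reordered = first_axis.transpose(1, 2, 0)       # (120, 140, 3)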

Add two 3D numpy arrays with a 2D mask

I would like to add two 3D numpy arrays (RGB image arrays) with a 2D mask generated by some algorithms on a greyscale image. What is the best way to do this?
As an example of what I am trying to do:
from PIL import Image, ImageChops, ImageOps
import numpy as np
img1=Image.open('./foo.jpg')
img2=Image.open('./bar.jpg')
img1Grey=ImageOps.grayscale(img1)
img2Grey=ImageOps.grayscale(img2)
# Some processing for example:
diff=ImageChops.difference(img1Grey,img2Grey)
mask=np.ma.masked_array(img1,diff>1)
img1Array=np.asarray(img1)
img2Array=np.asarray(img2)
imgResult=img1Array+img2Array[mask]
I was thinking:
1) break up the RGB image and do each color separately
2) duplicate the mask into a 3D array
or is there a more pythonic way to do this?
Thanks in advance!
Wish I could add a comment instead of an answer. Anyhow:
masked_array is not for making masks. It's for including only the data outside the mask in calculations such as sum, mean, etc., in scientific and statistical applications. It consists of an array and a mask for that array.
It's probably NOT what you want.
You probably just want a normal boolean mask, as in:
mask = np.asarray(diff) > 1   # diff is a PIL image, so convert it to an array first
Then you'll need to modify the shape so numpy broadcasts in the correct dimension, then broadcast it into the 3rd dimension:
mask.shape = mask.shape + (1,)
mask = np.broadcast_arrays(img1Array, mask)[1]
After that, you can just add the pixels:
img1Array[mask] += img2Array[mask]
A further point of clarification:
imgResult=img1Array+img2Array[mask]
That could never work. You are saying 'add some of the pixels from img2Array to all of the pixels in img1Array' 6_9
If you want to apply a ufunc between two or more arrays, they must be either the same shape, or broadcastable to the same shape.
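Putting the pieces of this answer together, a minimal end-to-end sketch (file names taken from the question; the copy() is there because np.asarray on a PIL image can return a read-only array, and note that uint8 addition wraps on overflow):
from PIL import Image, ImageChops, ImageOps
import numpy as np

img1 = Image.open('./foo.jpg')
img2 = Image.open('./bar.jpg')

# Boolean 2D mask from the greyscale difference:
diff = ImageChops.difference(ImageOps.grayscale(img1), ImageOps.grayscale(img2))
mask = np.asarray(diff) > 1                     # shape (rows, cols), dtype bool

img1Array = np.asarray(img1).copy()             # (rows, cols, 3), writeable copy
img2Array = np.asarray(img2)

# Add a trailing axis and broadcast the mask over the colour channels:
mask.shape = mask.shape + (1,)
mask = np.broadcast_arrays(img1Array, mask)[1]  # now (rows, cols, 3)

# Add the selected pixels of img2 to the same pixels of img1:
img1Array[mask] += img2Array[mask]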
