Resizing arrays without converting them to images - python

I have a few arrays of shape
(137,236)
Can I resize them to (128, 128), just as I would with an image? I do not want to convert the array to an image, resize it, and then convert back to an array.
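One way to do this directly on the array, sketched here under the assumption that scikit-image is available, is skimage.transform.resize, which interpolates a Numpy array to a target shape without going through an image object (scipy.ndimage.zoom works similarly):

import numpy as np
from skimage.transform import resize

arr = np.random.rand(137, 236)     # stand-in for one of the (137, 236) arrays

# This is a resize (interpolation), not a reshape: the element count changes.
small = resize(arr, (128, 128), anti_aliasing=True)
print(small.shape)                 # (128, 128)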

Related

What is the need of converting an image into a numpy array?

I was watching a tutorial on a facial recognition project using OpenCV, numpy, and PIL.
During training, the image was converted into a numpy array. Why does it need to be converted into a numpy array?
THE CODE:
from PIL import Image
import numpy as np
PIL_IMAGE = Image.open(path).convert("L")   # "L" = 8-bit grayscale
image_array = np.array(PIL_IMAGE, "uint8")
TL;DR: OpenCV images are stored as three-dimensional Numpy arrays.
When you read in a digital image using the library, it is represented as a Numpy array whose rectangular shape corresponds to the shape of the image. Consider an image of a chair and how it is stored as a Numpy array in OpenCV.
If we read in the image of the chair, we can see how it is structured with image.shape, which returns a tuple (height, width, channels): the number of rows, columns, and channels for a color image. For a grayscale image, image.shape returns only the number of rows and columns.
import cv2
image = cv2.imread("chair.jpg")
print(image.shape)
(222, 300, 3)
When working with OpenCV images, we specify the y coordinate first, then the x coordinate. Colors are stored as BGR values, with blue in layer 0, green in layer 1, and red in layer 2. So this chair image has a height of 222, a width of 300, and 3 channels (meaning it is a color image). Essentially, whenever the library reads in an image, it stores it as a Numpy array in this format.
The answer is rather simple:
With Numpy you can perform blazing-fast operations on numerical arrays, whatever their dimension or shape.
Image processing libraries (OpenCV, PIL, scikit-image) sometimes wrap images in a special format that already uses Numpy behind the scenes. If they are not already using Numpy in the background, the images can be converted to Numpy arrays explicitly. Then you can do speedy numerical calculations on them (convolution, FFT, blurring, filtering, ...).
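For instance (an illustrative sketch, not part of the original answer; the array below is synthetic rather than a real photo), whole-array arithmetic and filtering run without any explicit Python loops:

import cv2
import numpy as np

# Synthetic stand-in for a grayscale image (e.g. the chair read with
# cv2.imread("chair.jpg", cv2.IMREAD_GRAYSCALE)).
gray = np.random.randint(0, 256, size=(222, 300), dtype=np.uint8)

inverted = 255 - gray                        # invert every pixel at once
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # 5x5 Gaussian blur filter

print(gray.shape, inverted.dtype, blurred.shape)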

2D RGB image construction from 3D array in SimpleITK

I have an RGB image in the form of a 3D array with shape (m, n, 3), and I would like to create a SimpleITK image from it. Using the GetImageFromArray() function creates a 3D image, which is not what I am looking for. How can I create a 2D RGB image instead?
The documentation reads:
Signature: sitk.GetImageFromArray(arr, isVector=None)
Docstring: Get a SimpleITK Image from a numpy array. If isVector is True, then the Image will have a Vector pixel type, and the last dimension of the array will be considered the component index. By default when isVector is None, 4D images are automatically considered 3D vector images.
Have you tried passing isVector=True?
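A minimal sketch of that suggestion, assuming SimpleITK is installed; the array below is a placeholder for the question's (m, n, 3) data:

import numpy as np
import SimpleITK as sitk

arr = np.zeros((100, 200, 3), dtype=np.uint8)   # placeholder RGB data

# Treat the last axis as pixel components -> 2D image with vector pixels.
image = sitk.GetImageFromArray(arr, isVector=True)

print(image.GetDimension())                   # 2
print(image.GetNumberOfComponentsPerPixel())  # 3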

Iterating over pixels in an image as a numpy array

Let's say I have a numpy array of shape (100, 100, 3) that represents an image in RGB encoding. How do I iterate over the individual pixels of this image?
Specifically, I want to map a function over this image.
Note: I got the array from OpenCV.
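As a sketch (not from the original thread), here are both the explicit loop and the vectorized route, using a synthetic BGR array like the one cv2.imread returns; the channel swap is just a placeholder for whatever per-pixel function you want to map:

import numpy as np

image = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

# Explicit iteration: row (y) by column (x), one pixel at a time.
out = np.empty_like(image)
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        b, g, r = image[y, x]
        out[y, x] = (r, g, b)   # example per-pixel function: BGR -> RGB

# The same mapping as a single vectorized operation, usually much faster.
out_fast = image[:, :, ::-1]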

Shape of numpy array when converting from PIL image

When I convert a PIL image to a numpy array using
from PIL import Image
import numpy
image = Image.open(file_path)
image_array = numpy.array(image)
sometimes I get a 2D array with dimensions equal to the dimensions of the image. Other times I get a 3D array with each entry being an array of pixel values (i.e. [RRR, GGG, BBB]). And other times I get a 3D array with each entry being an array of 4 values (i.e. [RRR, GGG, BBB, XXX]). What determines the shape of the numpy array? And if it's a 2D array, what do the entries represent?
Answered my own question:
The shape of the numpy array is determined by the mode of the PIL image. The full list of modes can be found in the Pillow documentation. A 2D array is created when the image's mode is "1", "L", or "P" (each pixel is then a single value: binary, grayscale, or palette index, respectively). A 3D array is created otherwise, with values analogous to the values represented by the mode.
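A short illustration of the rule (not part of the original answer; "photo.jpg" is a placeholder file name):

from PIL import Image
import numpy

gray = Image.open("photo.jpg").convert("L")     # mode "L"   -> 2D array
print(numpy.array(gray).shape)                  # (height, width)

rgb = Image.open("photo.jpg").convert("RGB")    # mode "RGB"  -> 3D array
print(numpy.array(rgb).shape)                   # (height, width, 3)

rgba = Image.open("photo.jpg").convert("RGBA")  # mode "RGBA" -> 3D array
print(numpy.array(rgba).shape)                  # (height, width, 4)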

Reordering numpy array indices

I have a 2D array that I want to create an image from. I want to transform the image array of dimensions 120x140 into an array of 120x140x3 by stacking the same array three times (to get a grayscale image to use with skimage).
I tried the following:
image = np.uint8([image, image, image])
which results in a 3x120x140 image. How can I reorder the array to get 120x140x3 instead?
np.dstack([image, image, image]) will return an array of the desired shape, but whether this has the right semantics for your application depends on your image generation library.
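A quick sketch of both shapes, using a placeholder 120x140 array (np.stack([image] * 3, axis=-1) is an equivalent alternative):

import numpy as np

image = np.zeros((120, 140), dtype=np.uint8)   # placeholder 2D grayscale data

stacked = np.uint8([image, image, image])
print(stacked.shape)   # (3, 120, 140) - channels first

rgb = np.dstack([image, image, image])
print(rgb.shape)       # (120, 140, 3) - channels last, ready for skimage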
