I have 5 pictures and I want to convert each image to a 1D array and put it in a matrix as a vector. I want to be able to convert each vector back to an image again.
img = Image.open('orig.png').convert('RGBA')
a = np.array(img)
I'm not familiar with all the features of numpy and wondered if there are other tools I can use.
Thanks.
import numpy as np
from PIL import Image
img = Image.open('orig.png').convert('RGBA')
arr = np.array(img)
# record the original shape
shape = arr.shape
# make a 1-dimensional view of arr
flat_arr = arr.ravel()
# convert it to a matrix (note: np.matrix is discouraged in modern NumPy; a 2-D ndarray works too)
vector = np.matrix(flat_arr)
# do something to the vector
vector[:,::10] = 128
# reform a numpy array of the original shape
arr2 = np.asarray(vector).reshape(shape)
# make a PIL image
img2 = Image.fromarray(arr2, 'RGBA')
img2.show()
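To cover the original ask of putting all five pictures into one matrix, here is a minimal sketch along the same lines (the filenames img0.png ... img4.png are placeholders; all images are assumed to share the same size and mode):
import numpy as np
from PIL import Image

# hypothetical filenames -- replace with your five images (same size and mode)
filenames = ['img0.png', 'img1.png', 'img2.png', 'img3.png', 'img4.png']
arrays = [np.array(Image.open(f).convert('RGBA')) for f in filenames]
shape = arrays[0].shape                      # common (height, width, 4) shape
# stack the flattened images as rows of one matrix
matrix = np.vstack([a.ravel() for a in arrays])
# recover, for example, the third image from its row vector
arr3 = matrix[2].reshape(shape)
Image.fromarray(arr3, 'RGBA').show()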
import matplotlib.pyplot as plt
img = plt.imread('orig.png')
rows,cols,colors = img.shape # gives dimensions for RGB array
img_size = rows*cols*colors
img_1D_vector = img.reshape(img_size)
# you can recover the original image with:
img2 = img_1D_vector.reshape(rows,cols,colors)
Note that img.shape returns a tuple, and multiple assignment to rows,cols,colors as above lets us compute the number of elements needed to convert to and from a 1D vector.
You can show img and img2 to see they are the same with:
plt.imshow(img) # followed by
plt.show() # to show the first image, then
plt.imshow(img2) # followed by
plt.show() # to show you the second image.
Keep in mind that in the Python terminal you have to close the plt.show() window to get back to the terminal before showing the next image.
For me this approach makes sense and relies only on matplotlib.pyplot. It also works for JPG and TIF images, etc. The PNG I tried it on has float32 dtype and the JPG and TIF I tried have uint8 dtype (dtype = data type); each seems to work.
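If you need to hand such a float32 array back to PIL, here is a small sketch of the dtype handling (assuming the float values are in the usual 0-1 range that plt.imread produces for PNGs):
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = plt.imread('orig.png')            # PNGs come back as float32 in [0, 1]
print(img.dtype)                        # check what you actually got
if img.dtype != np.uint8:
    img = (img * 255).astype(np.uint8)  # rescale before handing it to PIL
Image.fromarray(img).show()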
I hope this is helpful.
I used to convert a 2D image array to 1D using this code:
import numpy as np
from scipy import misc
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
face = misc.imread('face1.jpg')
f = misc.face(gray=True)
width1, height1 = f.shape[0], f.shape[1]
f2 = f.reshape(width1 * height1)
but I don't know yet how to change it back to 2D later in the code (see the sketch below). Also note that not all the imported libraries are necessary. I hope it helps.
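To fill in the missing step: going back from 1D to 2D is just another reshape with the dimensions recorded above, continuing from the snippet (a rough sketch):
# reshape the flat vector back to the original 2D shape
f_restored = f2.reshape(width1, height1)
# sanity check and display
assert (f_restored == f).all()
plt.imshow(f_restored, cmap='gray')
plt.show()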
I am trying to create a random image using NumPy. First I am creating a random 3D array, as it should be in the case of an image, e.g. (177, 284, 3).
random_im = np.random.rand(177,284,3)
data = np.array(random_im)
print(data.shape)
Image.fromarray(data)
But when I use Image.fromarray(random_array), it throws the following error.
Just to check whether there is any issue with the shape of the array, I converted an image to an array, copied it to another variable, and converted it back. And I got the output I was looking for.
img = np.array(Image.open('Sample_imgs/dog4.jpg'))
git = img.copy()
git.shape
Image.fromarray(git)
They both have the same shape; I don't understand where I am making the mistake.
When I create a 2D array and then convert it back, it gives me a black canvas of that size (even though the pixels should not be black).
random_im = np.random.randint(0,256,size=(231,177))
print(random_im)
# data = np.array(random_im)
print(data.shape)
Image.fromarray(random_im)
I was able to get this working with the solution detailed here:
import numpy as np
from PIL import Image
random_array = np.random.rand(177,284,3)
random_array = np.random.random_sample(random_array.shape) * 255
random_array = random_array.astype(np.uint8)
random_im = Image.fromarray(random_array)
random_im.show()
----EDIT
A more elegant way to get a random array of the correct type without conversions is like so:
import numpy as np
from PIL import Image
random_array = np.random.randint(low=0, high=256, size=(250,250), dtype=np.uint8)  # high is exclusive, so 256 covers the full 0-255 range
random_im = Image.fromarray(random_array)
random_im.show()
Which is almost what you were doing in your solution, but you have to specify the dtype to be np.uint8:
random_im = np.random.randint(0,256,size=(231,177),dtype=np.uint8)
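And if the goal is a random colour image like the original (177, 284, 3) array, an analogous sketch is:
import numpy as np
from PIL import Image

# 3-channel random noise, already in the uint8 range PIL expects
random_array = np.random.randint(0, 256, size=(177, 284, 3), dtype=np.uint8)
Image.fromarray(random_array, 'RGB').show()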
I want to convert an RGB image to a 2D matrix in grayscale. How can I do this using loops and PIL? I don't want to use a canned function.
I manipulate a lot of images as NumPy arrays like so:
import numpy as np
from PIL import Image
# Load image
imgIn = Image.open('c:/path/to/my/input/file.jpg')
imgArray = np.array(imgIn)
#Do whatever manipulations to the image you need to, e.g.,
grayArray = np.mean(imgArray, axis=2).astype(np.uint8)  # cast back to uint8 so PIL can save it as JPEG
#Save the final result
imgOut = Image.fromarray(grayArray)
imgOut.save('c:/path/to/my/output/file.jpg')
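Since the question explicitly asks for loops and no canned function, here is a minimal sketch of the same luminosity conversion using only PIL pixel access (the paths are placeholders, and the weights are the usual ITU-R 601 coefficients):
from PIL import Image

imgIn = Image.open('c:/path/to/my/input/file.jpg').convert('RGB')
width, height = imgIn.size
imgOut = Image.new('L', (width, height))   # 'L' = 8-bit grayscale
for x in range(width):
    for y in range(height):
        r, g, b = imgIn.getpixel((x, y))
        gray = int(0.299 * r + 0.587 * g + 0.114 * b)
        imgOut.putpixel((x, y), gray)
imgOut.save('c:/path/to/my/output/file.jpg')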
I saved a numpy array to an image as follows:
plt.imshow(xNext[0,:,:,0]) #xNext has shape (1,64,25,1)
print(xNext[0,:,:,0].shape) #outputs (64,25)
plt.savefig(os.path.join(root,filename)+'.png')
np.save(os.path.join(root,filename)+'.npy',xNext[0,:,:,0])
How can I obtain the same numpy array back from the saved .png image? Can you also show me how to do it if I had saved it as a .jpg image?
I've tried the following, and it works with a 3D array (v1), where the resulting image is close to the image produced from the original numpy array (original).
image = Image.open(imageFilename) #brings in as 3D array
box = (315,60,500,540)
image = image.crop(box)
image = image.resize((25,64)) #to correct to desired shape
arr = np.asarray(image)
plt.imshow(arr)
plt.savefig('v1.png')
plt.close()
However, when I convert the 3D array to a 2D array, the resulting image is different (v1b and v1c).
arr2 = arr[:,:,0]
plt.imshow(arr2)
plt.savefig('v1b.png')
plt.close()
arr3 = np.dot(arr[...,:3],[0.299,0.587,0.11])
plt.imshow(arr3)
plt.savefig('v1c.png')
plt.close()
How can I convert the 3D array to 2D correctly? Thanks for your help.
Images: original and v1 (saved from the 3D array); v1b and v1c (saved from the 2D arrays); original (at its original size).
If your objective is to save a numpy array as an image, your approach has a problem. The function plt.savefig saves an image of the plot, not the array itself. Also, transforming an array into an image may carry some precision loss (when converting from float64 or float32 to uint16). That being said, I suggest you use skimage and imageio:
import imageio
import numpy as np
from skimage import img_as_uint
data = np.load('0058_00086_brown_2_recording1.wav.npy')
print("original", data.shape)
img = img_as_uint(data)
imageio.imwrite('image.png', img)
load = imageio.imread('image.png')
print("image", load.shape)
This script loads the data you provided and prints the shape for verification:
data = np.load('0058_00086_brown_2_recording1.wav.npy')
print("original", data.shape)
then it transforms the data to uint16, saves the image as a PNG, and loads it back:
img = img_as_uint(data)
imageio.imwrite('image.png', img)
load = imageio.imread('image.png')
the output of the script is:
original (64, 25)
image (64, 25)
i.e., the image is loaded with the same shape as data. Some notes:
image.png is saved as a grayscale image
To save to .jpg just change to imageio.imwrite('image.jpg', img)
In the case of .png, the mean absolute difference from the original data was 3.890e-06 (this can be verified with np.abs(img_as_float(load) - data).sum() / data.size)
Information about skimage and imageio can be found on their respective websites. More on saving numpy arrays as images can be found in the following answers: [1], [2], [3] and [4].
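To actually get a numpy array back from the saved PNG, as the question asks, a short sketch with the same libraries is:
import imageio
from skimage import img_as_float

load = imageio.imread('image.png')     # uint16 array with shape (64, 25)
recovered = img_as_float(load)         # back to floats in the [0, 1] range
print(recovered.shape, recovered.dtype)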
from scipy.misc import imread
image_data = imread('test.jpg').astype(np.float32)
This should give you the numpy array (I would suggest using imread from scipy)
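Note that scipy.misc.imread was deprecated and has since been removed from SciPy; a roughly equivalent call with imageio (the choice of replacement library here is an assumption) is:
import numpy as np
import imageio

image_data = imageio.imread('test.jpg').astype(np.float32)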
I am trying to read an image file using PIL, obtain the raw pixel values as a numpy array, and then put the values back together to form a copy of the original image. The code does not produce any runtime error, but the image formed ("my.png") is unreadable.
from PIL import Image
import numpy as np
img_filename = "image.png"
img = Image.open(img_filename)
img = img.convert("RGB")
img.show()
aa = np.array(img.getdata())
alpha = Image.fromarray(aa,"RGB")
alpha.save('my.png')
alpha.show()
np.array(img.getdata()) gives a 2D array of shape (X, 3), where X depends on the dimensions of the original image.
Just change the relevant line of code to:
aa = np.array(img)
This will assign a 3D array to aa, and thus solve your problem.
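Alternatively, if you want to keep getdata(), you can reshape its (X, 3) output back to (height, width, 3) yourself; a hedged sketch:
import numpy as np
from PIL import Image

img = Image.open("image.png").convert("RGB")
width, height = img.size               # PIL reports (width, height)
aa = np.array(img.getdata(), dtype=np.uint8).reshape(height, width, 3)
Image.fromarray(aa, "RGB").save('my.png')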
I am trying to access a DICOM file's RGB pixel array with unknown compression (maybe none). Extracting grayscale pixel arrays works completely fine.
However, using
import dicom
import numpy as np
data_set = dicom.read_file(path)
pixel_array = data_set.pixel_array
size_of_array = pixel_array.shape
if len(size_of_array) == 3:
    chanR = pixel_array[0][0:size_of_array[1], 0:size_of_array[2]]
    chanG = pixel_array[1][0:size_of_array[1], 0:size_of_array[2]]
    chanB = pixel_array[2][0:size_of_array[1], 0:size_of_array[2]]
    output_array = (0.299 * chanR) + (0.587 * chanG) + (0.114 * chanB)
with the goal of converting it to a common grayscale array. Unfortunately, the result array output_array does not contain correct pixel data. The contents are not incorrectly scaled; they are spatially scrambled. Where is the issue?
It is not an RGB pixel array, and the better way is to convert it to a gray image.
The way to get the CT image is to read the pixel_array attribute of the CT DICOM file.
The elements of pixel_array in a CT DICOM file are all uint16, but many tools in Python, like OpenCV and some AI packages, are not compatible with that type.
After getting pixel_array (the CT image) from the CT DICOM file, you usually need to convert pixel_array into a gray image, so that you can process this gray image with the many image-processing tools available in Python.
The following code is a working example to convert pixel_array into gray image.
import matplotlib.pyplot as plt
import os
import pydicom
import numpy as np
# The above imports are the dependencies of this code
# Read some CT dicom file here by pydicom library
ct_filepath = r"<YOUR_CT_DICOM_FILEPATH>"
ct_dicom = pydicom.read_file(ct_filepath)
img = ct_dicom.pixel_array
# Now img holds pixel_array; it is the input of this demo code
# Convert pixel_array (img) to -> gray image (img_2d_scaled)
## Step 1. Convert to float to avoid overflow or underflow losses.
img_2d = img.astype(float)
## Step 2. Rescaling grey scale between 0-255
img_2d_scaled = (np.maximum(img_2d,0) / img_2d.max()) * 255.0
## Step 3. Convert to uint
img_2d_scaled = np.uint8(img_2d_scaled)
# Show information of input and output in above code
## (1) Show information of original CT image
print(img.dtype)
print(img.shape)
print(img)
## (2) Show information about its gray image
print(img_2d_scaled.dtype)
print(img_2d_scaled.shape)
print(img_2d_scaled)
## (3) Show the scaled gray image by matplotlib
plt.imshow(img_2d_scaled, cmap='gray', vmin=0, vmax=255)
plt.show()
The output of these print statements shows the dtype, shape, and values of the original pixel_array and of the scaled gray image.
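If you also want to write the scaled gray image to disk, a minimal sketch with PIL (assuming Pillow is installed) is:
from PIL import Image

# img_2d_scaled is uint8, so PIL can save it directly as a grayscale PNG
Image.fromarray(img_2d_scaled).save('ct_gray.png')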
You probably worked around this by now, but I think pydicom doesn't interpret planar configuration correctly.
You need to do this first:
img = data_set.pixel_array
img = img.reshape([img.shape[1], img.shape[2], 3])
From here on your image will have shape [rows, cols, 3], with the channels separated.
As said by @Daniel, since you have PlanarConfiguration == 1, you have to rearrange your colors in columns through np.reshape and then convert to grayscale, for example using OpenCV:
import pydicom as dicom
import numpy as np
import cv2 as cv
data_set = dicom.read_file(path)
pixel_array = data_set.pixel_array
## converting to shape (m,n,3)
pixel_array_rgb = pixel_array.reshape((pixel_array.shape[1], pixel_array.shape[2], 3))
## converting to grayscale
pixel_array_gs = cv.cvtColor(pixel_array_rgb, cv.COLOR_RGB2GRAY)
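As a quick check (a sketch continuing from the snippet above), you can display or write out the grayscale result:
import matplotlib.pyplot as plt

# display the grayscale image
plt.imshow(pixel_array_gs, cmap='gray')
plt.show()
# or write it to disk with OpenCV
cv.imwrite('ct_gray.png', pixel_array_gs)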