Python Wand supports converting images directly to NumPy arrays, as can be seen in related questions.
However, when doing this for .hdr (high dynamic range) images, the pixel values appear to be compressed into the 0-255 range. As a result, converting from a Python Wand image to a NumPy array and back drastically reduces file size and quality.
# Without converting to a numpy array
img = Image(filename='image.hdr') # Open with Python Wand Image
img.save(filename='test.hdr') # Save with Python wand
Running this opens the image and saves it again, which creates a file with a size of 41.512 kB. However, if we convert it to a NumPy array before saving it again:
# With converting to a numpy array
img = Image(filename=os.path.join(path, 'N_SYNS_89.hdr')) # Open with Python Wand Image
arr = np.asarray(img, dtype='float32') # convert to np array
img = Image.from_array(arr) # convert back to Python Wand Image
img.save(filename='test.hdr') # Save with Python wand
This results in a file with a size of 5.186 kB.
Indeed, arr.min() and arr.max() show that the values in the NumPy array range from 0 to 255. If, however, I open the .hdr image with cv2 as a NumPy array, the range is much higher:
img = cv2.imread('image.hdr', -1)
img.min() # returns 0
img.max() # returns 868352.0
Is there a way to convert back and forth between numpy arrays and Wand images without this loss?
Following the comment by @LudvigH, the approach from this answer worked:
img = Image(filename='image.hdr')
img.format = 'rgb'
img.alpha_channel = False # was not required for me, including it for completion
img_array = np.asarray(bytearray(img.make_blob()), dtype='float32')
Now we must reshape the returned img_array. In my case I could not run the following:
img_array.reshape(img.shape)
Instead, my img.size was an (x, y) tuple, whereas the reshape needs an (x, y, z) shape.
n_channels = img_array.size / img.size[0] / img.size[1]
img_array = img_array.reshape(img.size[0],img.size[1],int(n_channels))
After manually calculating the number of channels as above, it worked fine. Perhaps a missing channel dimension is also what caused the original fault when converting with arr = np.asarray(img, dtype='float32').
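For what it's worth, np.asarray(bytearray(...), dtype='float32') converts every individual byte to a float, whereas np.frombuffer reinterprets the raw bytes as the requested dtype. A minimal sketch of that alternative, assuming the blob's pixel data really is 32-bit float RGB (this depends on the image's depth and format, so treat it as a starting point rather than the answer's exact method):
import numpy as np
from wand.image import Image

with Image(filename='image.hdr') as img:
    img.format = 'rgb'
    blob = img.make_blob()
    # Reinterpret the raw bytes as float32 values instead of casting byte-by-byte
    arr = np.frombuffer(blob, dtype=np.float32)
    n_channels = arr.size // (img.width * img.height)
    arr = arr.reshape(img.height, img.width, n_channels)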
I want to extract just the first channel of a NumPy image array.
Original size = 240*240*4
Target size = 240*240*1 (only the first channel).
I tried the following, but it does not seem to be working:
image[:,:,:1]
But saving the 240*240*1 result back to PNG or JPG doesn't work.
Sample code:
import numpy as np
from PIL import Image
import scipy.misc as sp
image = np.array(Image.open("FLAIR-148.png"))
test_image = image[:,:,:1]
sp.imsave('out.png', test_image)
Output:
File "/anaconda3/lib/python3.6/site-packages/scipy/misc/pilutil.py", line 327, in toimage
raise ValueError("'arr' does not have a suitable array shape for "
ValueError: 'arr' does not have a suitable array shape for any mode.
If you index the last axis instead of slicing it (i.e. use image[:, :, 0] rather than image[:, :, :1]), then everything should work fine:
import numpy as np
from PIL import Image
import scipy.misc as smc
image = np.array(Image.open("FLAIR-148.png"))
test_image = image[:, :, 0]  # first channel, now shape (M, N)
smc.imsave('out.png', test_image)
Basically, scipy.misc.imsave doesn't know what to do with an array of shape (M, N, 1). However, it does know that it should save an array of shape (M, N) as a grayscale image.
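Alternatively, if you already have the (M, N, 1) array, you can drop the singleton axis with np.squeeze before saving; a minimal sketch:
import numpy as np
import scipy.misc as smc
from PIL import Image

image = np.array(Image.open("FLAIR-148.png"))
test_image = np.squeeze(image[:, :, :1], axis=2)  # (M, N, 1) -> (M, N)
smc.imsave('out.png', test_image)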
You may also need to convert the array to uint8 to ensure consistent results. Here's a complete minimal example:
import scipy.misc as smc
# get the test image as an array
img = smc.face()
# slice test image
img = img[:, :, 1]
# convert to uint8
img = img.astype('uint8')
# save
smc.imsave('test.png', img)
Caveat
scipy.misc.imsave is deprecated. It is suggested to use imageio.imwrite instead.
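For instance, the minimal example above could be written with imageio instead; a sketch, assuming the same uint8 grayscale slice:
import imageio
import scipy.misc as smc

img = smc.face()[:, :, 1].astype('uint8')  # same grayscale slice as above
imageio.imwrite('test.png', img)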
I want to convert a PythonMagick Image Object to a NumPy array that can be used in OpenCV, and then I want to convert it into a PIL image object. I have searched Google but cannot find any sources explaining how to do this. Can someone show me how to convert image objects between these different modules?
The fastest way that I've found consists of saving the image and opening it again:
import PythonMagick
import cv2
# pm_img is a PythonMagick.Image
pm_img.write('path/to/temporary/file.png')
np_img = cv2.imread('path/to/temporary/file.png')
I haven't found any satisfactory way to convert PythonMagick images to NumPy arrays without saving them, but there is a slow way that involves using python loops:
import PythonMagick
import numpy as np
pm_img = PythonMagick.Image('path/to/image.jpg')
h, w = pm_img.size().height(), pm_img.size().width()
np_img = np.empty((h, w, 3), np.uint16) # PythonMagick opens images with 16 bit depth
# It seems to store the same byte twice (weird)
for i in range(h):
    for j in range(w):
        # OpenCV stores pixels as BGR
        np_img[i, j] = (pm_img.pixelColor(j, i).quantumBlue(),
                        pm_img.pixelColor(j, i).quantumGreen(),
                        pm_img.pixelColor(j, i).quantumRed())
np_img = np_img.astype(np.uint8)
Converting NumPy arrays to PIL images is easier:
from PIL import Image
pil_img = Image.fromarray(np_img[:, :, ::-1].astype(np.uint8))
Since PIL stores images in RGB but OpenCV stores them in BGR it's necessary to change the order of the channels ([:, :, ::-1]).
Image.fromarray() takes a NumPy array with dtype np.uint8.
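Going the other way, from a PIL image back to a BGR array that OpenCV expects, is the same channel reversal; a quick sketch:
import numpy as np
import cv2
from PIL import Image

pil_img = Image.open('path/to/image.jpg').convert('RGB')
bgr_img = np.array(pil_img)[:, :, ::-1].copy()  # RGB -> BGR, contiguous copy
cv2.imwrite('copy.jpg', bgr_img)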
I have 5 pictures and I want to convert each image to a 1D array and put it in a matrix as a vector. I also want to be able to convert each vector back to an image again.
img = Image.open('orig.png').convert('RGBA')
a = np.array(img)
I'm not familiar with all the features of NumPy and wondered if there are other tools I can use.
Thanks.
import numpy as np
from PIL import Image
img = Image.open('orig.png').convert('RGBA')
arr = np.array(img)
# record the original shape
shape = arr.shape
# make a 1-dimensional view of arr
flat_arr = arr.ravel()
# convert it to a matrix
vector = np.matrix(flat_arr)
# do something to the vector
vector[:,::10] = 128
# reform a numpy array of the original shape
arr2 = np.asarray(vector).reshape(shape)
# make a PIL image
img2 = Image.fromarray(arr2, 'RGBA')
img2.show()
import matplotlib.pyplot as plt
img = plt.imread('orig.png')
rows,cols,colors = img.shape # gives dimensions for RGB array
img_size = rows*cols*colors
img_1D_vector = img.reshape(img_size)
# you can recover the original image with:
img2 = img_1D_vector.reshape(rows,cols,colors)
Note that img.shape returns a tuple, and multiple assignment to rows,cols,colors as above lets us compute the number of elements needed to convert to and from a 1D vector.
You can show img and img2 to see they are the same with:
plt.imshow(img) # followed by
plt.show() # to show the first image, then
plt.imshow(img2) # followed by
plt.show() # to show you the second image.
Keep in mind that in a Python terminal you have to close the plt.show() window to return to the terminal before the next image is shown.
This approach makes sense to me and relies only on matplotlib.pyplot. It also works for jpg and tif images, etc. The png I tried it on had float32 dtype and the jpg and tif I tried it on had uint8 dtype (dtype = data type); each seems to work.
I hope this is helpful.
I used to convert 2D to 1D image-array using this code:
import numpy as np
from scipy import misc
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
face = misc.imread('face1.jpg')
f = misc.face(gray=True)
width1, height1 = f.shape[0], f.shape[1]
f2 = f.reshape(width1 * height1)
but I didn't know at the time how to change it back to 2D later in the code (see the sketch below). Also note that not all the imported libraries are necessary. I hope it helps.
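Going back is just a reshape to the saved dimensions; a minimal sketch using the same variable names as above:
import matplotlib.pyplot as plt

# f2, width1 and height1 come from the snippet above
f_2d = f2.reshape(width1, height1)  # back to the original 2D shape
plt.imshow(f_2d, cmap='gray')
plt.show()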
I have an RGB image. I want to convert it to numpy array. I did the following
im = cv.LoadImage("abc.tiff")
a = numpy.asarray(im)
It creates an array with no shape. I assume it is an iplimage object.
You can use the newer OpenCV Python interface (if I'm not mistaken, it has been available since OpenCV 2.2). It natively uses NumPy arrays:
import cv2
im = cv2.imread("abc.tiff",mode='RGB')
print(type(im))
result:
<type 'numpy.ndarray'>
PIL (Python Imaging Library) and Numpy work well together.
I use the following functions.
from PIL import Image
import numpy as np
def load_image(infilename):
    img = Image.open(infilename)
    img.load()
    data = np.asarray(img, dtype="int32")
    return data

def save_image(npdata, outfilename):
    img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
    img.save(outfilename)
The 'Image.fromarray' is a little ugly because I clip incoming data to [0,255], convert to bytes, then create a grayscale image. I mostly work in gray.
An RGB image would be something like:
out_img = Image.fromarray( ycc_uint8, "RGB" )
out_img.save( "ycc.tif" )
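A short usage sketch of the two helpers above, assuming a grayscale input (since save_image writes mode "L"):
data = load_image("input.png")   # int32 numpy array
data = 255 - data                # e.g. invert the grayscale values
save_image(data, "output.png")   # clipped to [0, 255] and written as 8-bit gray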
You can also use matplotlib for this.
from matplotlib.image import imread
img = imread('abc.tiff')
print(type(img))
output:
<class 'numpy.ndarray'>
As of today, your best bet is to use:
import cv2

img = cv2.imread(image_path)  # reads an image in the BGR format
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # BGR -> RGB
You'll see img will be a numpy array of type:
<class 'numpy.ndarray'>
Late answer, but I've come to prefer the imageio module to the other alternatives.
import imageio
im = imageio.imread('abc.tiff')
Similar to cv2.imread(), it produces a numpy array by default, but in RGB form.
You need to use cv.LoadImageM instead of cv.LoadImage:
In [1]: import cv
In [2]: import numpy as np
In [3]: x = cv.LoadImageM('im.tif')
In [4]: im = np.asarray(x)
In [5]: im.shape
Out[5]: (487, 650, 3)
You can get numpy array of rgb image easily by using numpy and Image from PIL
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
im = Image.open('*image_name*') #These two lines
im_arr = np.array(im) #are all you need
plt.imshow(im_arr) #Just to verify that image array has been constructed properly
When using the answer from David Poole I get a SystemError with grayscale PNGs and maybe other files. My solution is:
import numpy as np
from PIL import Image
img = Image.open(filename)
try:
    data = np.asarray(img, dtype='uint8')
except SystemError:
    data = np.asarray(img.getdata(), dtype='uint8')
Actually img.getdata() would work for all files, but it's slower, so I use it only when the other method fails.
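One caveat with the fallback: img.getdata() returns a flat pixel sequence, so the resulting array usually needs reshaping; a sketch, assuming an RGB image (PIL reports size as (width, height)):
import numpy as np
from PIL import Image

img = Image.open(filename)  # filename as in the snippet above
flat = np.asarray(img.getdata(), dtype='uint8')    # shape (width*height, 3) for RGB
data = flat.reshape(img.size[1], img.size[0], -1)  # (rows, cols, bands)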
Load the image using the following syntax:
from keras.preprocessing import image
X_test = image.load_img('four.png', target_size=(28, 28), color_mode="grayscale")  # load the image, convert it to grayscale and resize to the target size
X_test = image.img_to_array(X_test)  # convert the image into an array
OpenCV image format supports the numpy array interface. A helper function can be made to support either grayscale or color images. This means the BGR -> RGB conversion can be conveniently done with a numpy slice, not a full copy of image data.
Note: this is a stride trick, so modifying the output array will also change the OpenCV image data. If you want a copy, use .copy() method on the array!
import numpy as np
def img_as_array(im):
    """OpenCV's native format to a numpy array view"""
    w, h, n = im.width, im.height, im.channels
    modes = {1: "L", 3: "RGB", 4: "RGBA"}
    if n not in modes:
        raise Exception('unsupported number of channels: {0}'.format(n))
    out = np.asarray(im)
    if n != 1:
        out = out[:, :, ::-1]  # BGR -> RGB conversion
    return out
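A short usage sketch of the helper above, illustrating the view-versus-copy caveat (im is an image loaded elsewhere with the old cv API):
# e.g. im = cv.LoadImage('abc.tiff') with the legacy cv module
view = img_as_array(im)         # a view: writing into it also changes im
rgb = img_as_array(im).copy()   # an independent copy, safe to modify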
I also adopted imageio, but I found the following machinery useful for pre- and post-processing:
import imageio
import numpy as np
def imload(*a, **k):
    i = imageio.imread(*a, **k)
    i = i.transpose((1, 0, 2))  # x and y are mixed up for some reason...
    i = np.flip(i, 1)  # make the coordinate system right-handed
    return i / 255
def imsave(i, url, *a, **k):
    # The original argument order felt counterintuitive: it should read
    # verbally as "save the image to the URL", not "save to the URL the image".
    i = np.flip(i, 1)
    i = i.transpose((1, 0, 2))
    i *= 255
    i = i.round()
    i = np.maximum(i, 0)
    i = np.minimum(i, 255)
    i = np.asarray(i, dtype=np.uint8)
    imageio.imwrite(url, i, *a, **k)
The rationale is that I am using numpy for image processing, not just image displaying. For this purpose, uint8s are awkward, so I convert to floating point values ranging from 0 to 1.
When saving images, I noticed I had to clip the out-of-range values myself, or else I ended up with a really gray output. (The gray output was the result of imageio compressing the full range, which was outside of [0, 256), to values that were inside the range.)
There were a couple other oddities, too, which I mentioned in the comments.
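A round-trip usage sketch with the two helpers above (file names are placeholders):
img = imload('input.png')   # float image in [0, 1], axes swapped back
img = img ** 2.2            # e.g. some floating-point processing
imsave(img, 'output.png')   # flipped back, clipped, converted to uint8 and written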
We can use the following OpenCV function to convert from BGR to RGB format:
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
Using Keras:
import numpy as np
from keras.preprocessing import image
img = image.load_img('path_to_image', target_size=(300, 300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
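If there are several images, the same pattern extends to a batch by stacking the expanded arrays; a sketch with hypothetical file names:
from keras.preprocessing import image
import numpy as np

paths = ['img1.png', 'img2.png', 'img3.png']  # hypothetical file names
arrays = [np.expand_dims(image.img_to_array(image.load_img(p, target_size=(300, 300))), axis=0)
          for p in paths]
batch = np.vstack(arrays)  # shape (len(paths), 300, 300, 3)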
Try timing the options for loading an image into a NumPy array; they are quite similar. Go for plt.imread for simplicity and speed.
import time

def time_this(function, times=100):
    cum_time = 0
    for t in range(times):
        st = time.time()
        function()
        cum_time += time.time() - st
    return cum_time / times
import matplotlib.pyplot as plt
def load_img_matplotlib(img_path):
    return plt.imread(img_path)
import cv2
def load_img_cv2(img_path):
    return cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
from PIL import Image
import numpy as np
def load_img_pil(img_path):
    img = Image.open(img_path)
    img.load()
    return np.asarray(img, dtype="int32")
if __name__ == '__main__':
    img_path = 'your_image_path'
    for load_fn in [load_img_pil, load_img_cv2, load_img_matplotlib]:
        print('-' * 20)
        print(time_this(lambda: load_fn(img_path)), 10000)
Result:
--------------------
0.0065201687812805175 10000 PIL, as in [the second answer](https://stackoverflow.com/a/7769424/16083419)
--------------------
0.0053211402893066405 10000 CV2
--------------------
0.005320906639099121 10000 matplotlib
You can try the following method; see the tf.keras.preprocessing.image docs for details.
tf.keras.preprocessing.image.img_to_array(img, data_format=None, dtype=None)
from PIL import Image
import numpy as np
import tensorflow as tf
img_data = np.random.random(size=(100, 100, 3))
img = tf.keras.preprocessing.image.array_to_img(img_data)
array = tf.keras.preprocessing.image.img_to_array(img)