Convert Python Wand HDR image to NumPy array and back - python

Python Wand supports converting images directly to NumPy arrays, as can be seen in related questions.
However, when doing this for .hdr (high dynamic range) images, the conversion appears to clamp the image to the 0-255 range. As a result, converting from a Python Wand image to a NumPy array and back drastically reduces file size/quality.
# Without converting to a numpy array
img = Image('image.hdr') # Open with Python Wand Image
img.save(filename='test.hdr') # Save with Python wand
Running this opens the image and saves it again, which creates a file with a size of 41.512kb. However, if we convert it to a NumPy array before saving it again...
# With converting to a numpy array
img = Image(filename=os.path.join(path, 'N_SYNS_89.hdr')) # Open with Python Wand Image
arr = np.asarray(img, dtype='float32') # convert to np array
img = Image.from_array(arr) # convert back to Python Wand Image
img.save(filename='test.hdr') # Save with Python wand
This results in a file with a size of 5.186kb.
Indeed, if I look at arr.min() and arr.max() I see that the min and max values for the numpy array are 0 and 255. If I open the .hdr image with cv2 as a numpy array, however, the range is much higher.
img = cv2.imread('image.hdr', -1)
img.min() # returns 0
img.max() # returns 868352.0
Is there a way to convert back and forth between numpy arrays and Wand images without this loss?

As per the comment of @LudvigH, the following worked, as in this answer.
img = Image(filename='image.hdr')
img.format = 'rgb'
img.alpha_channel = False # was not required for me, included for completeness
img_array = np.asarray(bytearray(img.make_blob()), dtype='float32')
Now we must reshape the returned img_array. In my case I could not run the following:
img_array.reshape(img.shape)
Instead, in my case img.size was an (x, y) tuple, whereas the reshape needed an (x, y, z) shape.
n_channels = img_array.size / img.size[0] / img.size[1]
img_array = img_array.reshape(img.size[0], img.size[1], int(n_channels))
After manually calculating z as above, it worked fine. Perhaps this is also what caused the original fault when converting with arr = np.asarray(img, dtype='float32').
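For reference, the steps above can be collected into one small helper. This is only a sketch of the same approach, assuming the Wand behaviour described above; the filename is illustrative.
import numpy as np
from wand.image import Image

def wand_hdr_to_array(filename):
    # Read raw RGB bytes via make_blob and reshape manually, as described above
    with Image(filename=filename) as img:
        img.format = 'rgb'
        img.alpha_channel = False  # reportedly optional
        arr = np.asarray(bytearray(img.make_blob()), dtype='float32')
        # img.size is a 2-tuple, so infer the channel count manually
        n_channels = arr.size // (img.size[0] * img.size[1])
        return arr.reshape(img.size[0], img.size[1], n_channels)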

Related

Python - Convert RGB picture to Gray Manually

I want to convert an RGB image to a 2D grayscale matrix. How can I do this using loops and PIL? I don't want to use a canned function.
I manipulate a lot of images as NumPy arrays like so:
import numpy as np
from PIL import Image
# Load image
imgIn = Image.open('c:/path/to/my/input/file.jpg')
imgArray = np.array(imgIn)
#Do whatever manipulations to the image you need to, e.g.,
grayArray = np.mean(imgArray,axis=2)
#Save the final result
imgOut = Image.fromarray(grayArray.astype(np.uint8)) # convert to uint8 so it can be saved as JPEG
imgOut.save('c:/path/to/my/output/file.jpg')
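Since the question asks for a loop-based approach without a canned function, here is a sketch using PIL pixel access directly; the weights are the standard Rec. 601 luma coefficients and the paths are illustrative.
from PIL import Image

imgIn = Image.open('c:/path/to/my/input/file.jpg').convert('RGB')
width, height = imgIn.size
imgOut = Image.new('L', (width, height))
pixIn = imgIn.load()
pixOut = imgOut.load()
for y in range(height):
    for x in range(width):
        r, g, b = pixIn[x, y]
        pixOut[x, y] = int(0.299 * r + 0.587 * g + 0.114 * b)
imgOut.save('c:/path/to/my/output/file.jpg')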

Convert back to 2D numpy array from .jpg image

I saved an numpy array to an image as follows:
plt.imshow(xNext[0,:,:,0]) #xNext has shape (1,64,25,1)
print(xNext[0,:,:,0].shape) #outputs (64,25)
plt.savefig(os.path.join(root,filename)+'.png')
np.save(os.path.join(root,filename)+'.npy',xNext[0,:,:,0])
How can I obtain the same numpy array back from the saved .png image? Can you also show me how to do it if I had saved the array as a .jpg image?
I've tried the following, which works with a 3D array (v1), where the resulting image is close to the image produced from the original numpy array (original).
image = Image.open(imageFilename) #brings in as 3D array
box = (315,60,500,540)
image = image.crop(box)
image = image.resize((25,64)) #to correct to desired shape
arr = np.asarray(image)
plt.imshow(arr)
plt.savefig('v1.png')
plt.close()
However, when I convert the 3D array to a 2D array, the resulting images are different (v1b and v1c).
arr2 = arr[:,:,0]
plt.imshow(arr2)
plt.savefig('v1b.png')
plt.close()
arr3 = np.dot(arr[...,:3],[0.299,0.587,0.11])
plt.imshow(arr3)
plt.savefig('v1c.png')
plt.close()
How can I convert the 3D array to a 2D array correctly? Thanks for your help.
[Images: original (with original size), v1 (saved from the 3D array), v1b and v1c (saved from the 2D arrays)]
If your objective is to save a numpy array as an image, your approach has a problem. The function plt.savefig saves an image of the plot, not the array. Also, transforming an array into an image may carry some precision loss (when converting from float64 or float32 to uint16). That being said, I suggest you use skimage and imageio:
import imageio
import numpy as np
from skimage import img_as_uint
data = np.load('0058_00086_brown_2_recording1.wav.npy')
print("original", data.shape)
img = img_as_uint(data)
imageio.imwrite('image.png', img)
load = imageio.imread('image.png')
print("image", load.shape)
This script loads the data you provided and prints the shape for verification:
data = np.load('0058_00086_brown_2_recording1.wav.npy')
print("original", data.shape)
then it transforms the data to uint, saves the image as a PNG, and loads it back:
img = img_as_uint(data)
imageio.imwrite('image.png', img)
load = imageio.imread('image.png')
the output of the script is:
original (64, 25)
image (64, 25)
i.e. the image is loaded with the same shape as data. Some notes:
image.png is saved as a grayscale image
To save to .jpg just change to imageio.imwrite('image.jpg', img)
In the case of .png, the average absolute distance from the original image was 3.890e-06 (this can be verified using np.abs(img_as_float(load) - data).sum() / data.size).
Information about skimage and imageio can be found on their respective websites. More on saving numpy arrays as images can be found in the following answers: [1], [2], [3] and [4].
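To answer the original question of getting the same numpy array back, the saved PNG can be converted back to floats with img_as_float, as used in the notes above. A short sketch reusing the variables from the script:
from skimage import img_as_float

recovered = img_as_float(load)                      # uint16 image back to floats in [0, 1]
print(np.abs(recovered - data).sum() / data.size)   # average absolute distance from the original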
import numpy as np
from scipy.misc import imread

image_data = imread('test.jpg').astype(np.float32)
This should give you the numpy array (I would suggest using imread from scipy)
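Note that scipy.misc.imread has been removed from recent SciPy releases; a roughly equivalent call with imageio (an alternative, not part of the original answer) would be:
import numpy as np
import imageio

image_data = imageio.imread('test.jpg').astype(np.float32)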

Image to numpy array to Image and finally again to array resulting in wrong array

I converted an image to numpy array using:
arr = np.array(PIL.Image.open('1.jpg'))
Then I modified part of the array:
arr[0][0][0] = 128
and converted the array back to an image:
img = PIL.Image.fromarray(np.uint8(arr))
img.save('2.jpg')
Then, I converted the 2.jpg image into numpy array and checked value of arr:
arr = np.array(PIL.Image.open('2.jpg'))
print(arr)
I am getting a completely different array than I got before.
Why is this happening?
The way you save the image affects the results.
The JPEG format compresses the image and alters the values.
For more about image formats, see here:
http://pillow.readthedocs.io/en/3.1.x/handbook/image-file-formats.html
Use this:
arr = np.array(PIL.Image.open('1.jpg'))
arr[0][0][0] = 128
img = PIL.Image.fromarray(np.uint8(arr))
img.save('2.bmp')
arr2 = np.array(PIL.Image.open('2.bmp'))
print(arr)
print(arr2)
This works fine.
Because .jpg is not a lossless image format.
If you want to save the image as-is, save it in a lossless image format like BMP, TIFF, etc.
The reason that your arrays don't match is that you are storing the image as JPEG and this is a lossy format - the two images are visually identical but have been compressed.
If you save your image as a bitmap then load it into an array, they will be identical.
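A quick way to verify this, as a sketch reusing the filenames from the question:
import numpy as np
import PIL.Image

arr = np.array(PIL.Image.open('1.jpg'))
img = PIL.Image.fromarray(np.uint8(arr))
img.save('2.bmp')   # lossless
img.save('2.jpg')   # lossy
print(np.array_equal(arr, np.array(PIL.Image.open('2.bmp'))))  # True: the bitmap round trip is exact
print(np.array_equal(arr, np.array(PIL.Image.open('2.jpg'))))  # typically False: JPEG re-compression changes values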

PIL/Pillow convert Image to list and back again

I'm trying to open an RGB picture, convert it to grayscale, then represent it as a list of floats scaled from 0 to 1. Finally, I want to convert it back to an Image. However, in the code below something in my conversion procedure fails: img.show() (the original image) displays correctly, while img2.show() displays an all-black picture. What am I missing?
import numpy as np
from PIL import Image
ocr_img_path = "./ocr-test.jpg"
# Open image, convert to grayscale
img = Image.open(ocr_img_path).convert("L")
# Convert to list
img_data = img.getdata()
img_as_list = np.asarray(img_data, dtype=float) / 255
img_as_list = img_as_list.reshape(img.size)
# Convert back to image
img_mul = img_as_list * 255
img_ints = np.rint(img_mul)
img2 = Image.new("L", img_as_list.shape)
img2.putdata(img_ints.astype(int))
img.show()
img2.show()
The image used
The solution is to flatten the array before putting it into the image. I think PIL interprets multidimensional arrays as different color bands.
img2.putdata(img_ints.astype(int).flatten())
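Putting it together, the round trip from the question with the flatten applied might look like this (just a sketch; note that img.size is (width, height), so the reshape below uses (height, width) to keep row-major order):
import numpy as np
from PIL import Image

img = Image.open("./ocr-test.jpg").convert("L")

# To floats scaled 0..1
img_as_list = np.asarray(img.getdata(), dtype=float) / 255
img_as_list = img_as_list.reshape(img.size[1], img.size[0])

# Back to an image: rescale, round, and flatten before putdata
img_ints = np.rint(img_as_list * 255)
img2 = Image.new("L", img.size)
img2.putdata(img_ints.astype(int).flatten())
img2.show()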
For a more efficient way of loading images, check out
https://blog.eduardovalle.com/2015/08/25/input-images-theano/
but use image.tobytes() (Pillow) instead of image.tostring() (PIL).

PIL image to array (numpy array to array) - Python

I have a .jpg image that I would like to convert to a Python array, because I have implemented processing routines that handle plain Python arrays.
It seems that PIL images support conversion to numpy arrays, and according to the documentation I have written this:
import numpy as np
from PIL import Image

im = Image.open(r"D:\Prototype\Bikesgray.jpg")  # raw string avoids backslash escapes
im.show()
print(list(np.asarray(im)))
This is returning a list of numpy arrays. Also, I tried with
list([list(x) for x in np.asarray(im)])
which is returning nothing at all since it is failing.
How can I convert from PIL to array, or simply from numpy array to Python array?
I highly recommend using the tobytes function of the Image object. After some timing checks, this is much more efficient.
import numpy as np
from PIL import Image

def jpg_image_to_array(image_path):
    """
    Loads a JPEG image into a 3D numpy array of shape
    (height, width, channels).
    """
    with Image.open(image_path) as image:
        # np.frombuffer is the modern replacement for np.fromstring here
        im_arr = np.fromstring(image.tobytes(), dtype=np.uint8)
        im_arr = im_arr.reshape((image.size[1], image.size[0], 3))
        return im_arr
The timings I ran on my laptop show
In [76]: %timeit np.fromstring(im.tobytes(), dtype=np.uint8)
1000 loops, best of 3: 230 µs per loop
In [77]: %timeit np.array(im.getdata(), dtype=np.uint8)
10 loops, best of 3: 114 ms per loop
I think what you are looking for is:
list(im.getdata())
or, if the image is too big to load entirely into memory, something like this:
for pixel in iter(im.getdata()):
    print(pixel)
from PIL documentation:
getdata
im.getdata() => sequence
Returns the contents of an image as a sequence object containing pixel
values. The sequence object is flattened, so that values for line one
follow directly after the values of line zero, and so on.
Note that the sequence object returned by this method is an internal
PIL data type, which only supports certain sequence operations,
including iteration and basic sequence access. To convert it to an
ordinary sequence (e.g. for printing), use list(im.getdata()).
Based on zenpoy's answer:
from PIL import Image
import numpy

def image2pixelarray(filepath):
    """
    Parameters
    ----------
    filepath : str
        Path to an image file

    Returns
    -------
    numpy.ndarray
        A 2D array which makes it simple to access the greyscale value by
        im[y][x]
    """
    im = Image.open(filepath).convert('L')
    (width, height) = im.size
    greyscale_map = list(im.getdata())
    greyscale_map = numpy.array(greyscale_map)
    greyscale_map = greyscale_map.reshape((height, width))
    return greyscale_map
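A hypothetical usage example (the filename is illustrative):
grey = image2pixelarray('Bikesgray.jpg')
print(grey[10][20])  # greyscale value at row 10, column 20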
I use numpy.fromiter to invert an 8-bit greyscale bitmap, with no signs of side effects:
from PIL import Image
import numpy as np

im = Image.open('foo.jpg')
im = im.convert('L')
arr = np.fromiter(iter(im.getdata()), np.uint8)
arr.resize(im.height, im.width)
arr ^= 0xFF  # invert
inverted_im = Image.fromarray(arr, mode='L')
inverted_im.show()
