Converting uint8 array to image - python

I am trying to convert a uint8 array of a 48x48 image into the corresponding 48x48 image file (jpg/jpeg/gif). I tried converting the array contents to binary first and then writing them to a file in 'wb' mode, but that did not work out.
Is there a way I can accomplish this?

If you are producing the image in TensorFlow (as I'm inferring from your tag), you can use the tf.image.encode_jpeg() or tf.image.encode_png() ops to encode a uint8 tensor as an image:
uint8_data = ...
image_data = tf.image.encode_png(uint8_data)
The result of either op is a tf.string tensor that you can evaluate and write out to a file.
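For example, a minimal TF 1.x-style sketch (the array contents and file name here are placeholders):
import numpy as np
import tensorflow as tf

# Placeholder 48x48 single-channel uint8 data to encode.
my_array = np.random.randint(0, 256, size=(48, 48, 1), dtype=np.uint8)

uint8_data = tf.placeholder(tf.uint8, shape=(48, 48, 1))
image_data = tf.image.encode_png(uint8_data)  # scalar tf.string of PNG bytes

with tf.Session() as sess:
    png_bytes = sess.run(image_data, feed_dict={uint8_data: my_array})

with open('image.png', 'wb') as f:
    f.write(png_bytes)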

I was able to do the same very easily using Octave.
Here I generate a random 48x48 matrix and then save it as a JPG image:
img = rand(48,48);
imwrite(img, "test.jpg")
You can save any type of image with this approach.
Could you give some more details about what you want to achieve? Do you need to do it just once, or as part of a program?
Hope that helps.
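Since the question is tagged python, a rough NumPy/Pillow equivalent of the Octave snippet above (a sketch; the random matrix is just a stand-in for real data) would be:
import numpy as np
from PIL import Image

# Random 48x48 grayscale data, scaled to 8-bit values.
img = (np.random.rand(48, 48) * 255).astype(np.uint8)
Image.fromarray(img, mode='L').save('test.jpg')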

Related

How to convert a float32 image to uint8 where the range is (-1 to -0.85)?

I want to convert a float32 image into a uint8 image in Python.
I tried using the following code, but the output image only has values like 2 and 3, so the image is practically black.
(gen_samples[0] * 255).round().astype(np.uint8)
When I try displaying the float32 image I get a blackish/greyish image where I can somewhat make out the required image.
Normalize the array to 0..1 first.
Assuming gen_samples is the image matrix:
arr_min = np.min(gen_samples)
arr_max = np.max(gen_samples)
gen_samples = (gen_samples - arr_min) / (arr_max - arr_min)
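Putting it together with the scaling step from the question (a sketch; gen_samples here is a stand-in array):
import numpy as np

# Stand-in for the question's data: float32 values in (-1, -0.85).
gen_samples = np.random.uniform(-1.0, -0.85, size=(48, 48)).astype(np.float32)

# Stretch to 0..1, then scale to the full 0..255 uint8 range.
arr_min = np.min(gen_samples)
arr_max = np.max(gen_samples)
gen_samples = (gen_samples - arr_min) / (arr_max - arr_min)
img_uint8 = (gen_samples * 255).round().astype(np.uint8)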
Since you tagged the question with scikit-image, you are probably using skimage. In this case img_as_ubyte should do the trick:
from skimage.util import img_as_ubyte
img = img_as_ubyte(gen_samples[0])
Further, since you tagged the question with imageio-python, I'll assume that the image data actually comes from some (binary) image format rather than being generated in the script. In this case, you can often use the backend that does the decoding to perform the conversion while the image is being loaded. This is, however, specific to the format being used, so for a more specific answer you would have to provide more insight into where your images are coming from.
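As a generic, format-agnostic sketch (assuming the file is something imageio can decode; the path is a placeholder):
import imageio.v3 as iio
from skimage.util import img_as_ubyte

img = iio.imread('gen_sample.png')

# If the backend already returns uint8 there is nothing to do; otherwise
# img_as_ubyte rescales float images (expected in [-1, 1]) to 0..255,
# clipping negative values.
if img.dtype != 'uint8':
    img = img_as_ubyte(img)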

How can I import an image file into Python, read it as an array, then output the array as the same image file type

I am tasked with writing a program that can take an image file as input, encrypt the image with some secondary code I have already written, and finally output the encrypted image file.
I would like to import an image, make it a 1D array of numbers, perform some encryption on this 1D array (which will turn it into a 2D array, which I will flatten), and then be able to output the encrypted 1D array as an image file, converting it back to whatever format it was in on input.
I am wondering how this can be done, what types of image files can be accepted, and what libraries may be required.
Thanks
EDIT:
This is some code I have used; img_arr stores the image as an array of integers, max 255. This is what I want; however, I now need to convert back into the original format after performing some functions on img_arr.
from PIL import Image
img = Image.open('testimage.jfif')
print('img first: ', img)
img = img.tobytes()
img_arr = []
for x in img:
    img_arr.append(x)  # collect each byte as an integer (0-255)
img2 = Image.frombytes('RGB', (460, 134), img)
print('img second: ', img2)
My outputs are slightly different:
img first: <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=460x134 at 0x133C2D6F970>
img second: <PIL.Image.Image image mode=RGB size=460x134 at 0x133C2D49EE0>
In programming, Base64 is a group of binary-to-text encoding schemes that represent binary data (more specifically, a sequence of 8-bit bytes) in an ASCII string format by translating the data into a radix-64 representation.
Fortunately, you can encode and decode binary image files in Python using base64. The following link may help:
Encoding an image file with base64
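For instance, a minimal Base64 round trip (file names are placeholders):
import base64

# Read the raw image bytes and encode them as an ASCII-safe Base64 string.
with open('testimage.jfif', 'rb') as f:
    encoded = base64.b64encode(f.read())

# ... encrypt/store/transmit `encoded` here ...

# Decode back to the original binary and write it out unchanged.
with open('copy.jfif', 'wb') as f:
    f.write(base64.b64decode(encoded))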

Python Image array modified between save/load

I am modifying images using Image and Numpy in Python 2.7.
I am saving an image from a NumPy integer grayscale array using:
img = Image.fromarray(grayscale_arr, 'L')
img.save(output)
In another function, I reopen the same image and get the grayscale array with:
img = Image.open(output)
grayscale_arr = np.array(img)
However, the arrays do not match. The differences are not big, but significant enough.
My questions are:
First, why is that? Is it an imprecision of some sort? (Just out of curiosity.)
And how can I fix this?
Thank you in advance
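Two common causes, for what it's worth: the array not being uint8 (Image.fromarray with an explicit 'L' mode will misread wider integer types), and a lossy output format such as JPEG. A minimal round-trip check through a lossless format (a sketch with stand-in data):
import numpy as np
from PIL import Image

# Stand-in 8-bit grayscale data; mode 'L' expects uint8 values.
grayscale_arr = np.random.randint(0, 256, size=(48, 48)).astype(np.uint8)

Image.fromarray(grayscale_arr, 'L').save('check.png')  # PNG is lossless
reloaded = np.array(Image.open('check.png'))
print(np.array_equal(reloaded, grayscale_arr))  # True; a JPEG would differ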

convert greyscale image back to vector

I have a list called w (size: 784), which I outputted to a png greyscale image:
import numpy as np
import matplotlib.pyplot as plt
tmp = 1/(1 + np.exp(-10*w/w.max()))
plt.imshow(tmp.reshape(28,28),cmap="gray")
plt.draw()
plt.savefig("final_weight_vector")
Now I want to read the png image back to be a vector.
The solutions I found so far:
First:
import matplotlib.image as mpimg
img=mpimg.imread('final_weight_vector.png')
but img appears not to be greyscale, because its dimensions turned out to be (600, 800, 4).
Second:
reading the file as RGB and converting to greyscale:
from PIL import Image
im = Image.open('final_weight_vector.png').convert('LA')
However, I couldn't find out how to iterate over im, so I have no idea what's inside. Further, I am not sure im will have the exact same values as the original w.
Help please?
The problem is that what you saved is probably a plot of the 28x28 image, not the image itself.
To be sure, please preview the image. I bet it is 600x800, not 28x28. I also suppose it contains many additional elements, like axes and padding.
If you want to store your array in a loadable format, you may use numpy.save() (and numpy.load() to load it).
You may also use PIL to save your array as an image (e.g. using something similar to: http://code.activestate.com/recipes/577591-conversion-of-pil-image-and-numpy-array/).
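For example, either route preserves the data (a sketch, assuming w is the 784-element weight array from the question):
import numpy as np
from PIL import Image

# Stand-in for the question's weight vector.
w = np.random.randn(784)
tmp = 1/(1 + np.exp(-10*w/w.max()))

# Option 1: store the raw floats losslessly with numpy.
np.save('weights.npy', tmp)
restored = np.load('weights.npy')  # identical to tmp

# Option 2: save a true 28x28 grayscale image (quantized to uint8).
img = Image.fromarray((tmp.reshape(28, 28) * 255).astype(np.uint8), 'L')
img.save('final_weight_vector.png')
vec = np.array(Image.open('final_weight_vector.png')).reshape(784)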

python mahotas.imread reads a 2d image as 3d

I have an image saved by another script of mine. The image is a normal JPG file, which I saved with imsave.
Now, when I read it in another script, it turns out to be 3D.
The image is here.
A simple script to read it is this:
import mahotas
img = mahotas.imread('d:/normal.jpg')
print img.shape, img.dtype
Try reading the jpg as greyscale like this:
mahotas.imread('d:/normal.jpg', as_grey=True)
(Author of mahotas here).
The suggestion by Junuxx is correct:
mahotas.imread('file.jpg', as_grey=True)
This reads the RGB file and converts it to greyscale using a weighted average of the channels (they are not equally weighted, but use typical coefficients that attempt to be perceptually more accurate).
The alternative (which I rather prefer) is:
im = mahotas.imread('file.jpg')
im = im[:,:,0]
I assume that all the channels have the same values and just use the first one.
