I am trying to get images from the CIFAR-10 dataset. When I rebuild an image from the array, I see 9 copies of the same image tiled in one picture, and I don't know what the problem is.
When I load an image from the data, single_img has shape (3072,). After that, I reshape my single_img variable to (32, 32, 3). I don't know where the problem is.
Here is my code:
import cPickle
from PIL import Image
import numpy as np
f = open("/home/leo/Downloads/cifar-10-batches-py/data_batch_1", "rb")
tupled_data= cPickle.load(f)
f.close()
img = tupled_data['data']
single_img = np.array(img[0])
single_img_reshaped = single_img.reshape(32, 32 ,3)
j = Image.fromarray(single_img_reshaped)
j.save("/home/leo/Desktop/blabla.bmp")
Example image:
Be careful about the format of the image's pixel array. In the CIFAR-10 batches each row stores the image channel by channel:
[R....G....B]
i.e. the first 1024 values are the red channel, the next 1024 the green channel, and the last 1024 the blue channel. What you need instead is an array in which every pixel keeps its three values together:
[[[R,G,B],....,[R,G,B]]
 [[R,G,B],....,[R,G,B]]
 [[R,G,B],....,[R,G,B]]]
But
single_img_reshaped = single_img.reshape(32, 32, 3)
does not do that conversion; it just cuts the flat [R....G....B] array into 32x32x3 blocks without regrouping the channels, which is why you get the strange tiled result.
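A minimal sketch of what should work instead, given the [R....G....B] layout described above: reshape to the channel-first layout the data actually uses, then move the channel axis to the end.
single_img_reshaped = single_img.reshape(3, 32, 32).transpose(1, 2, 0)  # now (32, 32, 3), each pixel as [R, G, B]
j = Image.fromarray(single_img_reshaped.astype('uint8'))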
My goal is to shuffle all pixels in a 512x512 Python Pillow image. Also, I need the time performance to be relatively good. What I've tried:
from PIL import Image
import numpy as np
orig = Image.open('img/input2.jpg')
orig_px = orig.getdata()
np_px = np.asarray(orig_px)
np.random.shuffle(np_px)
res = Image.fromarray(np_px.astype('uint8')).convert('RGB')
res.show()
The Preview app gives me the following error:
The file “tmp11g28d6z.PNG” could not be opened.
It may be damaged or use a file format that Preview doesn’t recognise.
I cannot figure out what went wrong. I would be grateful for any suggestions on fixing this code or for a different approach to solving the problem.
The main problem is that getdata gives you a 1-D array, while fromarray requires a 2-D or 3-D array. See the corrected code below. You may notice two reshapes: the first reshape turns the data into an array of pixels, each holding 3 values; those pixels are then shuffled; finally the array is reshaped back into an image. If you comment out np.random.shuffle(orig_px), you get the original image back unchanged.
from PIL import Image
import numpy as np
orig = Image.open('test.jpg')
orig_px = orig.getdata()
# first reshape: one row per pixel, three channel values per row
orig_px = np.reshape(orig_px, (orig.height * orig.width, 3))
np.random.shuffle(orig_px)  # shuffles whole pixels (rows), keeping each R, G, B triple together
# second reshape: back to image dimensions
orig_px = np.reshape(orig_px, (orig.height, orig.width, 3))
res = Image.fromarray(orig_px.astype('uint8'))
res.save('out.jpg')
I want to load thousands of animal images into a pandas DataFrame, add features, and maybe convert to HDF.
I tried the following approach using cv2.imread():
import cv2
import os
import numpy as np
import pandas as pd
def images_to_hdf(folder_path, label):
    """
    Save a folder of images to hdf format.

    Args:
        folder_path: Path to folder containing images.
        label: A string of the image content.

    Return:
        None
    """
    image_data = [np.array(cv2.imread(folder_path + img)) for img in os.listdir(folder_path)]
    data = pd.DataFrame()
    data['Images'] = image_data
    data['Label'] = label
    data.to_hdf(path, key)
But it takes longer than a minute just to read 100 images, plus it raises an error (too much numerical value to store...), and I'm sure this is a very inefficient way of doing it.
I tried np.fromfile() instead of cv2.imread(), and it is ultra fast in comparison (I'm not really sure what it does), but it returns rank-1 arrays. I want the 3-dimensional image data stored in a pandas DataFrame so that I can add labels, which I'll use to train a classifier, and I'm thinking this might be a way of doing it.
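For reference, np.fromfile appears to read only the raw, still-encoded bytes of the file, which would explain both the speed and the rank-1 shape; something like cv2.imdecode is then needed to get a 3-dimensional image back:
raw = np.fromfile(folder_path + img, dtype=np.uint8)  # 1-D array of encoded bytes
decoded = cv2.imdecode(raw, cv2.IMREAD_COLOR)         # (height, width, 3) array, like cv2.imread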
With the help of h5py you can save your images and labels directly into an HDF5 file (without using pandas). Here's one example of how to do it (adapted from here):
import os
import glob
import cv2
import h5py
import numpy as np
def images_to_hdf5(images_path='/path/to/images',
                   label=0,
                   hdf5_path='/path/to/hdf5_file/file.hdf5'):
    image_names = glob.glob(os.path.join(images_path, '*.jpg'))
    n = len(image_names)
    labels = [label]*n

    hdf5_file = h5py.File(hdf5_path, mode='w')
    hdf5_file.create_dataset("Images", (n, 3, 224, 224), np.uint8)  # uint8 so pixel values 0-255 don't overflow
    hdf5_file.create_dataset("Labels", (n,), np.int8)
    hdf5_file["Labels"][...] = labels

    for i, image_name in enumerate(image_names):
        img = cv2.imread(image_name)
        img = cv2.resize(img, (224, 224))  # shape (224, 224, 3)
        img = np.rollaxis(img, 2)[None]    # shape (1, 3, 224, 224)
        hdf5_file["Images"][i, ...] = img

    hdf5_file.close()
To open it:
hdf5_file = h5py.File(hdf5_path, "r")
To access e.g. the first image and label:
hdf5_file["Images"][0]
hdf5_file["Labels"][0]
#hdf5_file.close()
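As a side note, h5py files can also be used as context managers, which closes the file automatically:
with h5py.File(hdf5_path, 'r') as hdf5_file:
    first_image = hdf5_file["Images"][0]
    first_label = hdf5_file["Labels"][0]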
I was trying to read a batch of images in Python and resize them. It gives the error in the title, which I think is because the images have different sizes. How do I solve this? The code is as follows:
from PIL import Image
import numpy as np
for x in range(1, 10):
    fin = 'path/image%03d.jpg' % (x,)
    im = Image.open(fin)
    im1 = im.resize((400, 400), Image.ANTIALIAS)
    fout = 'path/resize/image%03d.jpg' % (x,)
    im1.save(fout)
So I have some problems with a simple image processing task. The code looks like this:
from scipy.misc.pilutil import imread, imsave
import numpy as np
infile = imread('in.png')
outfile = np.multiply(infile, 1.0).astype(int) # Just an example of array manipulation
print type(infile) == type(outfile) # True
# Exactly the same
print infile
print outfile
imsave('out.png', outfile)
This produces an output image with different grayscale values than the input image. It magically works if I change the manipulation to outfile = np.multiply(infile, 1) (an int instead of a float).
Can someone explain what I'm not understanding? The input image is a grayscale image.
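A quick check of the dtypes (rather than the Python types), in case that is where the difference comes from:
print infile.dtype   # uint8
print outfile.dtype  # int64 or similar: multiplying by 1.0 promotes to float, and astype(int) then gives a platform int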
I have pixel data that I want to use to create a new .tif image with multiple frames. How would I go about doing this? I have tried PIL, but I have only found that it supports reading multiple frames, not writing them. See below for my attempt that didn't work.
new_Image = Image.new("I;16", (num_pixels, num_rows))
for frame in range(len(final_rows) / num_rows):
    pixels = new_Image.load()
    for row in range(num_rows):
        row_pixel = final_rows[row].getPixels()
        for pixel in range(num_pixels):
            pixels[pixel, row] = row_pixel[pixel]
    print frame
    new_Image.seek(frame)
For example, using numpy and scikit-image with FreeImage plugin:
import numpy as np
from skimage.io._plugins import freeimage_plugin as fi
image = np.zeros((32, 256, 256), 'uint16')
fi.write_multipage(image, 'multipage.tif')
Or save it uncompressed using numpy and tifffile.py:
import numpy as np
from tifffile import imsave
image = np.zeros((32, 256, 256), 'uint16')
imsave('multipage.tif', image)
This assumes that all pages have the same data shape and type and no additional tags need to be written.
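To read the stack back with tifffile (a brief usage check, assuming the file was written as above):
from tifffile import imread
stack = imread('multipage.tif')  # numpy array of shape (32, 256, 256)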