Convert Tensorflow array to Keras array - python

I am trying to run a Keras model in which I read 88 images from a folder into a numpy array. This array should be converted into a Keras tensor so that I can work with the data in the model. I am running the following code:
import os
import numpy as np
from PIL import Image
from keras import backend as K
current_dir = os.path.dirname('__file__')
image_names = os.listdir(os.path.join(current_dir, 'images'))
images = np.ndarray((len(image_names), 256, 256), dtype=np.uint8)
for i, filename in enumerate(image_names):
    images[i] = Image.open(os.path.join(current_dir,
                                        'images',
                                        filename)).resize((256, 256)).convert('L')
images = images.astype(K.floatx())
images *= 0.96/255
images += 0.02
images = images.reshape(images.shape[0], 256, 256, 1)
print(images.shape)
cats_q = K.variable(images)
print(type(cats_q))
print(K.is_keras_tensor(cats_q))
I am getting the following output:
(87, 256, 256, 1)
<class 'tensorflow.python.ops.variables.Variable'>
False
How can I convert the output into a Keras tensor? Any help would be much appreciated!
Many thanks,
Andi

You should build your model first, including an Input tensor with the correct shape to handle this data, then pass the numpy array to the Keras model when you call the fit function.
When you build a Keras model, the tensors are edges in the computation graph. You don't want to initialize them with a value, but with a shape; the actual values are passed in later, when you train or predict.
This page on the Keras functional API has some good examples of this.
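For illustration, here is a minimal sketch of that pattern; the layer sizes, the labels array, and the model itself are made up for the example, not taken from the question:
from keras.layers import Input, Conv2D, Flatten, Dense
from keras.models import Model

# Input creates a symbolic Keras tensor with the data's shape (batch axis omitted)
inputs = Input(shape=(256, 256, 1))
x = Conv2D(8, (3, 3), activation='relu')(inputs)  # arbitrary example layer
x = Flatten()(x)
outputs = Dense(1, activation='sigmoid')(x)       # arbitrary example output

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

# The numpy array from the question is passed directly to fit; no K.variable needed
# model.fit(images, labels, epochs=10, batch_size=8)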

Related

Image Shape Issue with Tensorflow and Numpy

I am trying to run a basic GAN Neural Network from: https://www.tensorflow.org/tutorials/generative/dcgan
Following along with the code there, it works fine when I use the MNIST dataset. I would like to try this with my own custom images instead.
I am loading the images as follows:
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
import tensorflow as tf
from PIL import Image
from IPython import display
#Set Max image pixels to none to avoid pixel limit breach
Image.MAX_IMAGE_PIXELS = None
#Create empty list for images
images = []
# Glob together images from the folder and build a list of image arrays
for f in glob.iglob("...Images/*"):
    images.append(np.asarray(Image.open(f)))

# Convert the list of images to a numpy array
images = np.array(images)
#Show array shape
images.shape
Output of shape is:
(100,)
Following the TensorFlow doc to load and preprocess images, they use the following:
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
My question is how can I reshape my current batch set of images to match the input needed to follow along with the doc?
If I try to just plug in my own data I get:
ValueError: cannot reshape array of size 100 into shape (100,28,28,3)
Maybe the images you are loading have different shapes or numbers of channels, which would make numpy build an object array of shape (100,). Try printing out the shape of each image array in your for loop. Anyway, I would recommend using OpenCV to read your images. It is pretty straightforward and apparently faster than PIL:
import glob
import cv2
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None

images = []
for f in glob.iglob("images/*"):
    images.append(np.asarray(cv2.imread(f)))

images = np.array(images)
images.shape
# (3, 100, 100, 3)  3 images of 100x100x3
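If the images do come in different sizes (which is what an object array of shape (100,) usually indicates), one way to stay close to the tutorial is to resize everything to a fixed shape before building the tf.data pipeline. A rough sketch, assuming RGB images and an arbitrarily chosen target size of 100x100:
import glob
import cv2
import numpy as np
import tensorflow as tf

images = []
for f in glob.iglob("images/*"):
    img = cv2.imread(f)                # BGR array of shape (h, w, 3)
    img = cv2.resize(img, (100, 100))  # force a common size
    images.append(img)

train_images = np.array(images).astype('float32')
train_images = (train_images - 127.5) / 127.5  # normalize to [-1, 1], as in the tutorial

BATCH_SIZE = 16
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(len(train_images)).batch(BATCH_SIZE)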

Tensorflow 2.3. How to change the batch for each epoch?

I would like to train with a different custom image augmentation during each epoch of training.
The wrong solution would be to save the augmented images and run the training on the saved images, because if you try to load hundreds of thousands of images for training, you will get a memory error.
The right solution has to apply the augmentation during the fit routine.
Can you please show me how to do it, pointing out a working example?
A tf.data pipeline with map won't create extra copies of the images, and you won't get a memory error. While iterating through the dataset, it applies the random transformations to each image on the fly, without "creating" new images that have to be kept in memory. So just do it normally:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
import tensorflow_datasets as tfds
[train_set_raw] = tfds.load('cats_vs_dogs', split=['train[:100]'], as_supervised=True)
def augment(tensor):
    tensor = tf.cast(x=tensor, dtype=tf.float32)
    tensor = tf.image.rgb_to_grayscale(images=tensor)
    tensor = tf.image.resize(images=tensor, size=(96, 96))
    tensor = tf.divide(x=tensor, y=tf.constant(255.))
    tensor = tf.image.random_flip_left_right(image=tensor)
    tensor = tf.image.random_brightness(image=tensor, max_delta=2e-1)
    tensor = tf.image.random_crop(value=tensor, size=(64, 64, 1))
    return tensor
train_set_raw = train_set_raw.shuffle(128).map(lambda x, y: (augment(x), y)).batch(16)
import matplotlib.pyplot as plt
plt.imshow((next(iter(train_set_raw))[0][0][..., 0].numpy()*255).astype(int))
plt.show()
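The augmented dataset can then be passed straight to fit. Because the map runs lazily, fresh random crops, flips and brightness shifts are drawn every time the dataset is iterated, i.e. on every epoch. A minimal sketch (the model below is a throwaway placeholder, not part of the original answer):
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
    tf.keras.layers.Dense(1, activation='sigmoid')  # toy classifier for illustration
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_set_raw, epochs=5)  # new random augmentations are applied each epoch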

How to reshape an array of shape (150,150,3) to an array of shape (1,8192)

I have trained a deep learning model as follows; it's a classifier head on top of a VGG16 base.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_dim=4 * 4 * 512),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
1. My model accepts tensors of shape (1, 8192) for predictions.
2. I have test images of shape (150, 150, 3) which are converted to arrays.
3. Now I want a method to convert my (150, 150, 3) images to tensors of shape (1, 8192).
The input dimension seems very arbitrary and not well suited to the task. If you insist on proceeding anyway, you could simply flatten the image to a 1-D array and resample it, like this:
import numpy as np
from scipy import signal
image = np.random.rand(150,150,3)
image_8192 = signal.resample(image.ravel(), 8192)
... but it's a very bad idea. Somewhat smarter would be to downsample the image more intelligently, first converting it to grayscale and then resizing:
from skimage.color import rgb2gray
from skimage.transform import resize
grayscale = rgb2gray(image)
grayscale_91pix = resize(grayscale, (91, 91))  # 91 * 91 = 8281 pixels
image_8192 = signal.resample(grayscale_91pix.ravel(), 8192)
It's still not great, but better than the naive approach.
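As an aside, 4 * 4 * 512 = 8192 happens to be the flattened output of a VGG16 convolutional base applied to 150x150 RGB inputs, so the intended pipeline may simply be to push each image through the VGG16 base and flatten the resulting features. A sketch under that assumption (it is not stated in the question):
import numpy as np
import tensorflow as tf

conv_base = tf.keras.applications.VGG16(weights='imagenet',
                                        include_top=False,
                                        input_shape=(150, 150, 3))

image = np.random.rand(1, 150, 150, 3)       # one (150, 150, 3) image with a batch axis
features = conv_base.predict(image)          # shape (1, 4, 4, 512)
features = features.reshape(1, 4 * 4 * 512)  # shape (1, 8192), ready for the classifier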

How to convert Tensorflow dataset to 2D numpy array

I have a TensorFlow dataset which contains nearly 15,000 color images with 168x84 resolution and a label for each image. Its type and shape are like this:
< ConcatenateDataset shapes: ((168, 84, 3), ()), types: (tf.float32, tf.int32)>
I need to use it to train my network. That's why I need to pass it as a parameter to this function, in which I build my layers:
def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 168, 84, 3])

    # Convolutional Layer #1
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    .
    .
    .
I tried to convert each tensor into an np.array (which is the proper type for the function above, I guess) by using tf.eval() and np.ravel(), but I failed.
So, how can I convert this dataset into the proper type to pass it to the function?
Plus
I am new to Python and TensorFlow, and I don't think I understand why there are datasets if we cannot use them directly to build layers (I am following the tutorial on TensorFlow's website, btw).
Thanks.
You could try eager execution; previously I gave an answer using session.run (shown below). During eager execution, calling .numpy() on a tensor will convert that tensor to a numpy array. Example code (from my use case):
#enable eager execution
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
print('Is executing eagerly?',tf.executing_eagerly())
# Load datasets
import tensorflow_datasets as tfds
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
                              with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']

# Load the dataset into a numpy array
train_A = train_horses.batch(1000).make_one_shot_iterator().get_next()[0].numpy()
print(train_A.shape)
#preview one of the images
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
print(train_A.shape)
plt.imshow(train_A[1])
plt.show()
Old, session run, answer:
I recently had this problem, and I did it like this:
# Load datasets
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
                              with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']
#load dataset in to numpy array
sess = tf.compat.v1.Session()
tra=train_horses.batch(1000).make_one_shot_iterator().get_next()
train_A=np.array(sess.run(tra)[0])
print(train_A.shape)
sess.close()
#preview one of the images
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
print(train_A.shape)
plt.imshow(train_A[1])
plt.show()
It doesn't sound like you set things up using the TensorFlow Dataset pipeline; here is the guide for doing so:
https://www.tensorflow.org/programmers_guide/datasets
You can either follow that (it's the right approach, but there's a small learning curve to get used to it), or you can just pass in the numpy array to sess.run as part of the feed_dict parameter. If you go this way then you should just create a tf.placeholder which will be populated by the value in feed_dict. Many of the basic tutorial examples here follow this approach:
https://github.com/aymericdamien/TensorFlow-Examples
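If you go the feed_dict route, a rough sketch of what that looks like for this model (TF 1.x style; the batch below is a dummy array standing in for your real data):
import numpy as np
import tensorflow as tf

# The placeholder stands in for the image batch; its value is supplied at run time
x = tf.placeholder(tf.float32, shape=[None, 168, 84, 3])
conv1 = tf.layers.conv2d(inputs=x, filters=32, kernel_size=[5, 5],
                         padding="same", activation=tf.nn.relu)

batch = np.zeros((4, 168, 84, 3), dtype=np.float32)  # dummy numpy batch
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(conv1, feed_dict={x: batch})
    print(out.shape)  # (4, 168, 84, 32)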
I also needed to accomplish this task (Dataset to array), but without turning on eager mode. I managed to come up with the following:
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])

tensor_array = tf.TensorArray(dtype=dataset.element_spec.dtype,
                              size=0,
                              dynamic_size=True,
                              element_shape=dataset.element_spec.shape)
tensor_array = dataset.reduce(tensor_array, lambda a, t: a.write(a.size(), t))
tensor = tf.reshape(tensor_array.concat(), (-1,) + tuple(dataset.element_spec.shape))

array = tf.Session().run(tensor)
print(type(array))
# <class 'numpy.ndarray'>
print(array)
# [[1 2]
# [3 4]]
What this does:
We start with a dataset containing 2 tensors of shape (2,).
Since eager is off, we need to run the dataset through a Tensorflow session. And since a session requires a tensor, we have to convert the dataset into a tensor.
To accomplish this, we use Dataset.reduce() to put all the elements into a TensorArray (symbolically).
We now use TensorArray.concat() to convert the whole array into a single tensor. However when we do this the whole dataset becomes flattened into a 1-D array. So we need tf.reshape() to get it back into our original tensor's shape, plus an extra dimension to stack them all.
Finally we take the tensor and run it through a session. This gives us our numpy ndarray.
This was the simplest method for me for a supervised problem with (X, y) pairs.
def dataset_to_numpy(ds):
    """
    Convert tensorflow dataset to numpy arrays
    """
    images = []
    labels = []

    # Iterate over a dataset
    for i, (image, label) in enumerate(tfds.as_numpy(ds)):
        images.append(image)
        labels.append(label)

    for i, img in enumerate(images):
        if i < 3:
            print(img.shape, labels[i])

    return images, labels
Usage:
ds = tfds.load('mnist', split='train', as_supervised=True)
images, labels = dataset_to_numpy(ds)
You can use the following method to get the images and the corresponding labels:
def separate_dataset(dataset):
    images, labels = tf.compat.v1.data.make_one_shot_iterator(dataset.batch(len(dataset))).get_next()
    return images, labels
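A usage sketch, assuming ds is a finite (image, label) dataset whose cardinality is known, so that len(dataset) works (it does for tfds splits in recent TF versions):
ds = tfds.load('mnist', split='train', as_supervised=True)
images, labels = separate_dataset(ds)
print(images.shape, labels.shape)  # roughly (60000, 28, 28, 1) (60000,)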

Numpy array of images wrong dimension Python and Keras

I'm building an image classifier and trying to compute the features for a dataset using Keras, but my array dimensions are not in the right format. I'm getting:
ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (324398, 1)
My code is this:
import glob
import numpy as np
from PIL import Image
from keras.applications.resnet50 import ResNet50

def extract_resnet(X):
    # X : images numpy array
    resnet_model = ResNet50(input_shape=(image_h, image_w, 3),
                            weights='imagenet',
                            include_top=False)  # Since the top layer is the fc layer used for predictions
    features_array = resnet_model.predict(X)
    return features_array
filelist = glob.glob('dataset/*.jpg')
myarray = np.array([np.array(Image.open(fname)) for fname in filelist])
print(extract_resnet(myarray))
So it looks like for some reason the images array is only two dimensional when it should be 4 dimensional. How can I convert myarray so that it is able to work with the feature extractor?
First up, make sure that all of the images in the dataset directory have the same size (image_h, image_w, 3):
print([np.array(Image.open(fname)).shape for fname in filelist])
If they are not, you won't be able to make a mini-batch, so you'll need to select only the subset of suitable images. If the size is right, you can then reshape the array manually:
myarray = myarray.reshape([-1, image_h, image_w, 3])
... to match ResNet specification exactly.
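If the sizes vary, an alternative to discarding images is to resize everything to the target size while loading. A sketch, assuming the target size is known and that the files decode to RGB:
import glob
import numpy as np
from PIL import Image

image_h, image_w = 224, 224  # whichever size the ResNet50 model was built with
filelist = glob.glob('dataset/*.jpg')
myarray = np.array([
    np.array(Image.open(fname).convert('RGB').resize((image_w, image_h)))
    for fname in filelist
])
print(myarray.shape)  # (num_images, image_h, image_w, 3)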
