TensorFlow: training on my own image - python

I am new to TensorFlow and looking for help with image recognition where I can train my own image dataset.
Is there any example for training on a new dataset?

If you are interested in how to input your own data in TensorFlow, you can look at this tutorial.
I've also written a guide with best practices for CS230 at Stanford here.
New answer (with tf.data) and with labels
With the introduction of tf.data in r1.4, we can create a batch of images without placeholders and without queues. The steps are the following:
Create a list containing the filenames of the images and a corresponding list of labels
Create a tf.data.Dataset reading these filenames and labels
Preprocess the data
Create an iterator from the tf.data.Dataset which will yield the next batch
The code is:
# step 1
filenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])
labels = tf.constant([0, 1, 0, 1])
# step 2: create a dataset returning slices of `filenames`
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
# step 3: parse every image in the dataset using `map`
def _parse_function(filename, label):
    image_string = tf.read_file(filename)
    image_decoded = tf.image.decode_jpeg(image_string, channels=3)
    image = tf.cast(image_decoded, tf.float32)
    return image, label
dataset = dataset.map(_parse_function)
dataset = dataset.batch(2)
# step 4: create iterator and final input tensor
iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()
Now we can run sess.run([images, labels]) directly, without feeding any data through placeholders.
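For instance, here is a minimal sketch of consuming these batches in a session loop (this assumes all the JPEGs share the same dimensions; otherwise resize them inside _parse_function before batching):
with tf.Session() as sess:
    try:
        while True:
            img_batch, lbl_batch = sess.run([images, labels])
            print(img_batch.shape, lbl_batch)
    except tf.errors.OutOfRangeError:
        pass  # the one-shot iterator is exhausted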
Old answer (with TensorFlow queues)
To sum it up, you have multiple steps:
Create a list of filenames (ex: the paths to your images)
Create a TensorFlow filename queue
Read and decode each image, and resize it to a fixed size (necessary for batching)
Output a batch of these images
The simplest code would be:
# step 1
filenames = ['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg']
# step 2
filename_queue = tf.train.string_input_producer(filenames)
# step 3: read, decode and resize images
reader = tf.WholeFileReader()
filename, content = reader.read(filename_queue)
image = tf.image.decode_jpeg(content, channels=3)
image = tf.cast(image, tf.float32)
resized_image = tf.image.resize_images(image, [224, 224])
# step 4: Batching
image_batch = tf.train.batch([resized_image], batch_size=8)
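To actually pull a batch out, the queue runners have to be started first; a minimal sketch (with only 4 filenames and batch_size=8 this still works, because string_input_producer cycles through the list by default):
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    batch = sess.run(image_batch)  # numpy array of shape (8, 224, 224, 3)
    coord.request_stop()
    coord.join(threads)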

Based on @olivier-moindrot's answer, but for TensorFlow 2.0+:
# step 1
filenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])
labels = tf.constant([0, 1, 0, 1])
# step 2: create a dataset returning slices of `filenames`
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
def im_file_to_tensor(file, label):
    def _im_file_to_tensor(file, label):
        path = f"../foo/bar/{file.numpy().decode()}"
        im = tf.image.decode_jpeg(tf.io.read_file(path), channels=3)
        im = tf.cast(im, tf.float32) / 255.0  # cast `im` (not the undefined `image_decoded`)
        return im, label
    return tf.py_function(_im_file_to_tensor,
                          inp=(file, label),
                          Tout=(tf.float32, tf.uint8))

dataset = dataset.map(im_file_to_tensor)
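In TF 2.x the dataset is directly iterable in eager mode; a quick sketch to sanity-check the pipeline (assuming the files under ../foo/bar/ exist and share the same dimensions):
for im, label in dataset.batch(2):
    print(im.shape, label.numpy())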
If you are hitting an issue similar to:
ValueError: Cannot take the length of Shape with unknown rank
when passing tf.data.Dataset tensors to model.fit, then take a look at https://github.com/tensorflow/tensorflow/issues/24520. A fix for the code snippet above would be:
def im_file_to_tensor(file, label):
    def _im_file_to_tensor(file, label):
        path = f"../foo/bar/{file.numpy().decode()}"
        im = tf.image.decode_jpeg(tf.io.read_file(path), channels=3)
        im = tf.cast(im, tf.float32) / 255.0  # cast `im` (not the undefined `image_decoded`)
        return im, label
    file, label = tf.py_function(_im_file_to_tensor,
                                 inp=(file, label),
                                 Tout=(tf.float32, tf.uint8))
    file.set_shape([192, 192, 3])
    label.set_shape([])
    return (file, label)
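With the static shapes set, the dataset can be passed straight to Keras; a minimal sketch, assuming model is a compiled Keras model that takes 192x192x3 inputs:
model.fit(dataset.batch(32), epochs=5)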

2.0 Compatible Answer using TensorFlow Hub: TensorFlow Hub is a library offered by TensorFlow that comprises models developed by Google for text and image datasets.
It saves thousands of hours of training time and computational effort, as it reuses existing pre-trained models.
If we have an image dataset, we can take an existing pre-trained model from TF Hub and adapt it to our dataset.
Code for retraining our image dataset using the pre-trained model MobileNet is shown below:
import itertools
import os

import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

module_selection = ("mobilenet_v2_100_224", 224) #@param ["(\"mobilenet_v2_100_224\", 224)", "(\"inception_v3\", 299)"] {type:"raw", allow-input: true}
handle_base, pixels = module_selection
MODULE_HANDLE = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))

BATCH_SIZE = 32 #@param {type:"integer"}

# Here we need to pass our dataset
data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

# `do_fine_tuning` and `train_generator` are defined in earlier cells of the
# full tutorial; minimal stand-ins so this snippet runs on its own:
do_fine_tuning = False
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255)
train_generator = datagen.flow_from_directory(
    data_dir, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)

model = tf.keras.Sequential([
    hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes, activation='softmax',
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,) + IMAGE_SIZE + (3,))
model.summary()
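A minimal training sketch to go with the snippet above (the SGD and label-smoothing settings follow the TF Hub retraining tutorial and are an assumption, not part of the original answer):
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
    loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    metrics=['accuracy'])
model.fit(train_generator, epochs=5)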
The complete code for the image retraining tutorial can be found in this GitHub link.
More information about TensorFlow Hub can be found in this TF blog.
The pre-trained modules related to images can be found in this TF Hub link.
All the pre-trained modules related to images, text, videos, etc. can be found in this TF Hub modules link.
Finally, this is the basic page for TensorFlow Hub.

If your dataset consists of subfolders, you can use ImageDataGenerator; its flow_from_directory method loads data from a directory:
train_batches = ImageDataGenerator().flow_from_directory(
    directory=train_path, target_size=(img_height, img_width),
    batch_size=32, color_mode="grayscale")
The structure of the folder hierarchy can be as follows:
train
-- cat
-- dog
-- monkey
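The batches can then be fed straight to Keras; a minimal sketch, assuming model is a compiled Keras model whose input shape matches (img_height, img_width, 1) grayscale images and which has three output classes:
model.fit(train_batches, epochs=10)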

Related

How to run image classification on multiple images?

I have worked through the image classification tutorial on the TensorFlow website here.
The tutorial explains how the trained model can be run as a predictor on a new image.
Is there a way to run this on a batch/multiple images? The code is as follows:
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = tf.keras.utils.load_img(
    sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)
You can use this code to run your prediction on multiple images at the same time.
import cv2  # pip install opencv-python
import numpy as np
# resize to the model's input size and stack; predict_on_batch expects one array
batch_images = np.array([cv2.resize(cv2.imread(img), (img_width, img_height))
                         for img in list_images])
predictions = model.predict_on_batch(batch_images)
list_images is a list of paths to the images you want to predict on, for example ["path_img1", "path_img2", ...].
predictions is a list of predictions that your model made on the given batch of images, in the same order as the input images,
so predictions[x] gives you the predictions for the x-th image of the input batch.
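For example, to map each row of predictions back to a class name the same way the single-image code in the question does (a sketch; class_names is assumed to come from the tutorial's dataset loading step):
import numpy as np
import tensorflow as tf
scores = tf.nn.softmax(predictions, axis=-1).numpy()
for path, s in zip(list_images, scores):
    print(path, class_names[np.argmax(s)], 100 * np.max(s))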
Here's a solution that uses Dataset.from_generator to stream images from disk with a generator function.
import os
import tensorflow as tf
from tensorflow import keras

def read_dir():
    # source_folder is assumed to be the directory holding the images
    for file_name in os.listdir(source_folder):
        img = keras.utils.load_img(os.path.join(source_folder, file_name),
                                   color_mode="rgb")
        yield keras.utils.img_to_array(img)  # PIL image -> float32 numpy array

ds = tf.data.Dataset.from_generator(
    read_dir,
    output_types=tf.float32,
    output_shapes=[128, 128, 3]  # assumes the images on disk are 128x128 RGB
)

model = keras.models.load_model('my-model.keras')
predictions = model.predict(ds.batch(64))

How to train a transfer-learning model on a custom dataset? ValueError: Shape must be rank 4

I am trying to build a transfer learning model to classify images. The images are grayscale (2D). Previously I used the image_dataset_from_directory method to read the images and there was no problem. However, I am trying to use a custom read function to have more control and access to the data, such as knowing how many images are in each class. When using this custom read function, I get an error (down below) while trying to train the model. I am not sure what caused this error.
Part 1: reading the dataset
import numpy as np
import os
import tensorflow as tf
import cv2
from tensorflow import keras
# neural network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers.experimental import preprocessing
IMG_WIDTH=160
IMG_HEIGHT=160
DATA_PATH = r"C:\Users\user\Documents\chest_xray"
TRAIN_DIR = os.path.join(DATA_PATH, 'train')
def create_dataset(img_folder):
    img_data_array = []
    class_name = []
    for dir1 in os.listdir(img_folder):
        for file in os.listdir(os.path.join(img_folder, dir1)):
            image_path = os.path.join(img_folder, dir1, file)
            image = cv2.imread(image_path, 0)
            image = cv2.resize(image, (IMG_HEIGHT, IMG_WIDTH), interpolation=cv2.INTER_AREA)
            image = np.array(image)
            image = image.astype('float32')
            image /= 255
            img_data_array.append(image)
            class_name.append(dir1)
    return img_data_array, class_name
# extract the image array and class name
img_data, class_name =create_dataset(TRAIN_DIR)
target_dict={k: v for v, k in enumerate(np.unique(class_name))}
target_dict
target_val= [target_dict[class_name[i]] for i in range(len(class_name))]
This part produces a list of size 5232; inside the list are numpy arrays of size 160x160 (float32).
Part 2: creating the model
def build_model():
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = Sequential(
        [
            preprocessing.RandomRotation(factor=0.15),
            preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
            preprocessing.RandomFlip(),
            preprocessing.RandomContrast(factor=0.1),
        ],
        name="img_augmentation",
    )(inputs)
    # x = img_augmentation(inputs)
    model = tf.keras.applications.EfficientNetB7(include_top=False,
                                                 drop_connect_rate=0.4,
                                                 weights='imagenet',
                                                 input_tensor=x)
    # Freeze the pretrained weights
    model.trainable = False
    # Rebuild top
    x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
    x = tf.keras.layers.BatchNormalization()(x)
    top_dropout_rate = 0.2
    x = tf.keras.layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = tf.keras.layers.Dense(1, name="pred")(x)
    # Compile
    model = tf.keras.Model(inputs, outputs, name="EfficientNet")
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
    model.compile(
        optimizer=optimizer,
        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
        metrics=["accuracy"]
    )
    return model

model = build_model()
Part 3: training the model
history = model.fit(x=np.array(img_data), y=np.array(target_val), epochs=5)
The error I get:
ValueError: Shape must be rank 4 but is rank 3 for '{{node
EfficientNet/img_augmentation/random_rotation_1/transform/ImageProjectiveTransformV3}} =
ImageProjectiveTransformV3[dtype=DT_FLOAT, fill_mode="REFLECT", interpolation="BILINEAR"]
(IteratorGetNext, EfficientNet/img_augmentation/random_rotation_1/rotation_matrix/concat,
EfficientNet/img_augmentation/random_rotation_1/transform/strided_slice,
EfficientNet/img_augmentation/random_rotation_1/transform/fill_value)' with input shapes:
[?,160,160], [?,8], [2], [].
The problem in the code is that OpenCV reads the image in grayscale format, but the grayscale image it returns does not have shape (160,160,1), only (160,160).
Because of this, the error is thrown.
I managed to replicate your problem by testing it locally.
Say we randomly train on 12 samples.
Possible input formats:
# 1. This one works
history = model.fit(x=np.random.rand(12, 160, 160, 3), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
# 2. This one works
history = model.fit(x=np.random.rand(12, 160, 160, 1), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
# 3. This one fails
history = model.fit(x=np.random.rand(12, 160, 160), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
(1) and (2) work.
(3) fails, yielding:
ValueError: Shape must be rank 4 but is rank 3 for '{{node
EfficientNet/img_augmentation/random_rotation_4/transform/ImageProjectiveTransformV2}} = ImageProjectiveTransformV2[dtype=DT_FLOAT, fill_mode="REFLECT", interpolation="BILINEAR"](IteratorGetNext,
EfficientNet/img_augmentation/random_rotation_4/rotation_matrix/concat,
EfficientNet/img_augmentation/random_rotation_4/transform/strided_slice)'
with input shapes: [?,160,160], [?,8], [2].
Therefore, ensure that your data format is in the shape (160,160,1) or (160,160,3).
As an alternative, after you read the image with OpenCV, you can use
image = np.expand_dims(image, axis=-1)
to programmatically insert the last axis (the grayscale channel).
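In context, the fix sits right after the resize in create_dataset (a sketch; the cv2.COLOR_GRAY2RGB line is an alternative if you want to match the model's 3-channel input exactly):
image = cv2.resize(image, (IMG_HEIGHT, IMG_WIDTH), interpolation=cv2.INTER_AREA)
image = np.expand_dims(image, axis=-1)  # (160, 160) -> (160, 160, 1)
# or: image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)  # -> (160, 160, 3)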

ValueError: Cannot reshape a tensor with 150528 elements to shape [224,150528]

I am new to TensorFlow and I'm still learning, so I apologize if I am missing something obvious. Basically, my problem is that I am trying to set up a simple image classifier with TensorFlow in Python. I figured out how to create my datasets, but now I am trying to get the training process going, and the problem is that whenever I try, I get this error:
ValueError: Cannot reshape a tensor with 150528 elements to shape [224,150528] (33718272 elements) for 'dnn/input_from_feature_columns/input_layer/Image/Reshape' (op: 'Reshape') with input shapes: [224,224,3], [2] and with input tensors computed as partial shapes: input[1] = [224,150528].
I looked at some other posts on this problem here on Stack Overflow, and they said the asker was reshaping his data incorrectly. From what I've read of my error, I have the same problem. The thing is, I am using a premade estimator and am not calling any reshaping functions myself; I assume the estimator is doing that for me. So how can I go about resolving this error? Thanks for any help. Here is my code:
Training Code:
import tensorflow as tf
import cv2
import numpy as np
from DatasetCreator import DatasetCreator
trainingImagesPath = "TrainingImages"
#Loads the image data
def load_image(addr):
    imageDimensions = (224, 224)
    image = cv2.imread(addr)
    # Resize the image to the size we need
    image = cv2.resize(image, imageDimensions, interpolation=cv2.INTER_CUBIC)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # Convert to RGB color space
    image = image.astype(np.float32)
    print(image.shape)
    return image

# Load in the datasets
with tf.Graph().as_default() as graph:
    dataCreator = DatasetCreator()
    trainingDataset = dataCreator.generateDataset(trainingImagesPath)
    # Create a training iterator for getting info from dataset
    trainingIterator = trainingDataset.make_one_shot_iterator()
    next_element = trainingIterator.get_next()

def train_input_fn():
    with tf.Session(graph=graph) as sess:
        features = sess.run(next_element)
        # Get the image path
        imagePath = str(features["ImagePath"])
        imagePath = imagePath[2:len(imagePath) - 1]
        # Get the label
        label = features["ImageLabel"]
        image = load_image(imagePath)  # Get image data
        return {"Image": image, "Label": label}

featureColumns = [tf.feature_column.numeric_column("Image", [224, 224, 3]),
                  tf.feature_column.numeric_column("Label")]

estimator = tf.estimator.DNNClassifier(
    model_dir="Model",
    feature_columns=featureColumns,
    hidden_units=[1024, 512, 256],
    optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.01)
)

print("Began training")
estimator.train(input_fn=train_input_fn, steps=1000)
Here is my code that creates my training dataset:
import tensorflow as tf
import cv2
from random import shuffle
import glob
import numpy as np
class DatasetCreator:
    def __init__(self):
        pass

    def generateDataset(self, trainingImagesPath):
        addrs = glob.glob(trainingImagesPath + "/*.jpg")  # Get list of paths of all the images in folder
        # Label each address
        labels = [0 if "car" in image else 1 for image in addrs]  # Compile list of labels
        # Divide the data into 60% train, 20% validation, and 20% test
        train_addrs = addrs[0:int(0.6 * len(addrs))]
        train_labels = labels[0:int(0.6 * len(labels))]
        val_addrs = addrs[int(0.6 * len(addrs)):int(0.8 * len(addrs))]
        val_labels = labels[int(0.6 * len(addrs)):int(0.8 * len(addrs))]
        test_addrs = addrs[int(0.8 * len(addrs)):]
        test_labels = labels[int(0.8 * len(labels)):]
        # Create dataset
        dataset = tf.data.Dataset.from_tensor_slices(
            {"ImagePath": train_addrs,
             "ImageLabel": train_labels})
        return dataset

MNIST with TensorFlow premade Estimator, input dimension mismatch on evaluate

I am rather new to TensorFlow and have been trying to pick up the basics by reading through the guides and documentation on tensorflow.org.
I have learnt the basics of how to use the tf.data and tf.estimator APIs and am trying to get them to work together on a basic classification model for MNIST.
I am using this script to load MNIST: https://github.com/tensorflow/models/blob/master/official/mnist/dataset.py
I made modifications to the dataset function to return a feature dictionary rather than a vector:
def dataset(directory, images_file, labels_file):
    """Download and parse MNIST dataset."""
    images_file = download(directory, images_file)
    labels_file = download(directory, labels_file)
    check_image_file_header(images_file)
    check_labels_file_header(labels_file)

    def decode_image(image):
        # Normalize from [0, 255] to [0.0, 1.0]
        image = tf.decode_raw(image, tf.uint8)
        image = tf.cast(image, tf.float32)
        image = tf.reshape(image, [784])
        return image / 255.0

    def decode_label(label):
        label = tf.decode_raw(label, tf.uint8)  # tf.string -> [tf.uint8]
        label = tf.reshape(label, [])  # label is a scalar
        return tf.to_int32(label)

    images = tf.data.FixedLengthRecordDataset(
        images_file, 28 * 28, header_bytes=16).map(decode_image)
    labels = tf.data.FixedLengthRecordDataset(
        labels_file, 1, header_bytes=8).map(decode_label)
    return tf.data.Dataset.zip(({"image": images}, labels))
My MNIST classifier script using the premade estimator in tf is as follows:
import tensorflow as tf
import dataset as mnist  # the dataset.py script linked above

fc = [tf.feature_column.numeric_column("image", shape=784)]

mnist_classifier = tf.estimator.DNNClassifier(
    hidden_units=[512, 512],
    feature_columns=fc,
    model_dir="models/mnist/dnn",
    n_classes=10)

def input_fn(train=False, batch_size=None):
    if train:
        ds = mnist.train("MNIST-data")
        ds = ds.shuffle(1000).repeat().batch(batch_size)
    else:
        ds = mnist.test("MNIST-data")
    return ds

mnist_classifier.train(
    input_fn=lambda: input_fn(True, 32),
    steps=10000)

eval_results = mnist_classifier.evaluate(input_fn=lambda: input_fn())
The classifier doesn't crash on training, but on evaluate I get the following traceback:
ValueError: Cannot reshape a tensor with 784 elements to shape
[784,784] (614656 elements) for
'dnn/input_from_feature_columns/input_layer/image/Reshape' (op:
'Reshape') with input shapes: [784,1], [2] and with input tensors
computed as partial shapes: input[1] = [784,784].
What could be causing the issue here?
I have tried printing the output shapes and types of both train and test datasets, and they are exactly the same.
I have also tried viewing the model on tensorboard, and only the projector tab is available, no scalars or graphs tab.
Thanks!
PS: Any links to TF tutorials using the Datasets and Estimators APIs would be great too.

Tensorflow mixes up images and labels when making batch

So I've been stuck on this problem for weeks. I want to make an image batch from a list of image filenames. I insert the filename list into a queue and use a reader to get the file. The reader then returns the filename and the read image file.
My problem is that when I make a batch using the decoded jpg and the labels from the reader, tf.train.shuffle_batch() mixes up the images and the filenames so that now the labels are in the wrong order for the image files. Is there something I am doing wrong with the queue/shuffle_batch and how can I fix it such that the batch comes out with the right labels for the right files?
Much thanks!
import tensorflow as tf
from tensorflow.python.framework import ops
def preprocess_image_tensor(image_tf):
    image = tf.image.convert_image_dtype(image_tf, dtype=tf.float32)
    image = tf.image.resize_image_with_crop_or_pad(image, 300, 300)
    image = tf.image.per_image_standardization(image)
    return image
# original image names and labels
image_paths = ["image_0.jpg", "image_1.jpg", "image_2.jpg", "image_3.jpg", "image_4.jpg", "image_5.jpg", "image_6.jpg", "image_7.jpg", "image_8.jpg"]
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8]
# converting arrays to tensors
image_paths_tf = ops.convert_to_tensor(image_paths, dtype=tf.string, name="image_paths_tf")
labels_tf = ops.convert_to_tensor(labels, dtype=tf.int32, name="labels_tf")
# getting tensor slices
image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)
# getting image tensors from jpeg and performing preprocessing
image_buffer_tf = tf.read_file(image_path_tf, name="image_buffer")
image_tf = tf.image.decode_jpeg(image_buffer_tf, channels=3, name="image")
image_tf = preprocess_image_tensor(image_tf)
# creating a batch of images and labels
batch_size = 5
num_threads = 4
images_batch_tf, labels_batch_tf = tf.train.batch([image_tf, label_tf], batch_size=batch_size, num_threads=num_threads)
# running testing session to check order of images and labels
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    print(image_path_tf.eval())
    print(label_tf.eval())
    coord.request_stop()
    coord.join(threads)
Wait.... Isn't your tf usage a little weird?
You are basically running the graph twice by calling:
print(image_path_tf.eval())
print(label_tf.eval())
And since you are only asking for image_path_tf and label_tf, anything below this line is not even run:
image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)
Maybe try this?
image_paths, labels = sess.run([images_batch_tf, labels_batch_tf])
print(image_paths)
print(labels)
From your code I'm unsure how your labels are encoded/extracted from the jpeg images. I used to encode everything in the same file, but have since found a much more elegant solution. Assuming you can get a list of filenames, image_paths and a numpy array of labels labels, you can bind them together and operate on individual examples with tf.train.slice_input_producer then batch them together using tf.train.batch.
import tensorflow as tf
from tensorflow.python.framework import ops
shuffle = True
batch_size = 128
num_threads = 8
def get_data():
    """
    Return image_paths, labels such that labels[i] corresponds to image_paths[i].
    image_paths: list of strings
    labels: list/np array of labels
    """
    raise NotImplementedError()

def preprocess_image_tensor(image_tf):
    """Preprocess a single image."""
    image = tf.image.convert_image_dtype(image_tf, dtype=tf.float32)
    image = tf.image.resize_image_with_crop_or_pad(image, 300, 300)
    image = tf.image.per_image_standardization(image)
    return image

image_paths, labels = get_data()
image_paths_tf = ops.convert_to_tensor(image_paths, dtype=tf.string, name='image_paths')
labels_tf = ops.convert_to_tensor(labels, dtype=tf.int32, name='labels')  # convert `labels` here, not `image_paths`

image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=shuffle)

# preprocess single image paths
image_buffer_tf = tf.read_file(image_path_tf, name='image_buffer')
image_tf = tf.image.decode_jpeg(image_buffer_tf, channels=3, name='image')
image_tf = preprocess_image_tensor(image_tf)

# batch the results
image_batch_tf, labels_batch_tf = tf.train.batch([image_tf, label_tf], batch_size=batch_size, num_threads=num_threads)
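As in the question's own session code, the matched (image, label) batches are then pulled with the queue runners started; a minimal sketch:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    images_np, labels_np = sess.run([image_batch_tf, labels_batch_tf])  # aligned pairs
    coord.request_stop()
    coord.join(threads)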
