Tensorflow: Image Preprocessing Inception v4 not accepting my jpg - python

I'm new to tensorflow and I tried using the script mentioned in Tensorflow: use pretrained inception model to avoid using a TF Record, but all my predictions end up in the same wrong class. The evaluation classifier, however, produces correct results, so it's not the model; I believe the preprocessing is what I'm doing wrong.
So I decided to try the inception preprocessing function, but now it won't accept my jpgs. I get this error:
inception_preprocessing.py", line 265, in preprocess_for_eval
image = tf.image.central_crop(image, central_fraction=central_fraction)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/image_ops_impl.py", line 335, in central_crop
_Check3DImage(image, require_static=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/image_ops_impl.py", line 129, in _Check3DImage
raise ValueError("'image' must be three-dimensional.")
ValueError: 'image' must be three-dimensional.
Here's my code:
arg_scope = inception_utils.inception_arg_scope()
im_size = 299
inputs = tf.placeholder(tf.float32, (None, im_size, im_size, 3))
inputs = inception_preprocessing.preprocess_image(inputs, im_size, im_size)
with slim.arg_scope(arg_scope):
    logits, end_points = inception_v4.inception_v4(inputs)
saver = tf.train.Saver()
saver.restore(sess, ckpt_file)
for image in sample_img:
    im = Image.open(image)
    im = im.resize((im_size, im_size))
    im = np.array(im)
    logit_values = sess.run(logits, feed_dict={inputs: im})
    print(np.argmax(logit_values))

The preprocessing function expects a single 3-D image, so squeeze the placeholder before preprocessing and add the batch dimension back afterwards:
inputs_processed = inception_preprocessing.preprocess_image(tf.squeeze(inputs), im_size, im_size)
inputs_processed = tf.expand_dims(inputs_processed, 0)
with slim.arg_scope(arg_scope):
    logits, end_points = inception_v4.inception_v4(inputs_processed)
# placeholder feed value should be 4-D
im = np.expand_dims(np.array(im), 0)
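For reference, a minimal end-to-end sketch of the corrected pipeline (hedged: TF 1.x / slim, assuming ckpt_file, sample_img, and the slim/inception imports from above). The key point is that preprocess_image receives a 3-D image and the batch dimension is added only after preprocessing:
import numpy as np
import tensorflow as tf
from PIL import Image

im_size = 299
# Feed a single raw image as a 3-D uint8 tensor; preprocess_image
# handles the cast to float32, the central crop, resizing and scaling.
raw_image = tf.placeholder(tf.uint8, (None, None, 3))
processed = inception_preprocessing.preprocess_image(raw_image, im_size, im_size)
processed = tf.expand_dims(processed, 0)  # batch dimension added last

with slim.arg_scope(inception_utils.inception_arg_scope()):
    logits, end_points = inception_v4.inception_v4(processed, is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, ckpt_file)
    for image in sample_img:
        im = np.array(Image.open(image))  # no manual resize needed
        logit_values = sess.run(logits, feed_dict={raw_image: im})
        print(np.argmax(logit_values))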

Related

Loading TF Records into Keras

I am trying to load a custom TFRecord file into my Keras model. I attempted to follow this tutorial: https://medium.com/@moritzkrger/speeding-up-keras-with-tfrecord-datasets-5464f9836c36, adapting it for my use.
My goal is to have the functions work similarly to ImageDataGenerator from Keras. I cannot use that function because I need specific metadata from the images that the generator does not grab. I'm not including that metadata here because I just need the basic network to function first.
I also want to be able to apply this to a transfer learning application.
I keep getting this error: TypeError: Could not build a TypeSpec for None with type NoneType
I am using Tensorflow 2.2
def _parse_function(serialized):
    features = {
        'image': tf.io.FixedLenFeature([], tf.string),
        'label': tf.io.FixedLenFeature([], tf.int64),
        'shapex': tf.io.FixedLenFeature([], tf.int64),
        'shapey': tf.io.FixedLenFeature([], tf.int64),
    }
    parsed_example = tf.io.parse_single_example(serialized=serialized,
                                                features=features)
    shapex = tf.cast(parsed_example['shapex'], tf.int32)
    shapey = tf.cast(parsed_example['shapey'], tf.int32)
    image_shape = tf.stack([shapex, shapey, 3])
    image_raw = parsed_example['image']
    # Decode the raw bytes so it becomes a tensor with type.
    image = tf.io.decode_raw(image_raw, tf.uint8)
    image = tf.reshape(image, image_shape)
    # Get labels
    label = tf.cast(parsed_example['label'], tf.float32)
    return image, label
def imgs_inputs(type, perform_shuffle=False):
    records_dir = '/path/to/tfrecord/'
    record_paths = [os.path.join(records_dir, record_name) for record_name in os.listdir(records_dir)]
    full_dataset = tf.data.TFRecordDataset(filenames=record_paths)
    full_dataset = full_dataset.map(_parse_function, num_parallel_calls=16)
    dataset_length = len(list(full_dataset))  # Gets length of dataset
    # databatch is assumed to be the shuffled/batched dataset (definition not shown)
    iterator = tf.compat.v1.data.make_one_shot_iterator(databatch)
    image, label = iterator.get_next()
    # labels saved as values, e.g. [1,2,3], and are now converted to one-hot encoding
    label = to_categorical(label)
    return image, label
image, label = imgs_inputs(type='Train', perform_shuffle=True)
# Combine it with Keras
# base_model = MobileNet(weights='imagenet', include_top=False, input_shape=(200,200,3), dropout=.3)
model_input = Input(shape=[200, 200, 3])
# Build your network
model_output = Flatten(input_shape=(200, 200, 3))(model_input)
model_output = Dense(19, activation='relu')(model_output)
# Create your model
train_model = Model(inputs=model_input, outputs=model_output)
# Compile your model
optimizer = Adam(learning_rate=.001)
train_model.compile(optimizer=optimizer, loss='mean_squared_error',
                    metrics=['accuracy'], target_tensors=[label])
# Train the model
train_model.fit(epochs=10, steps_per_epoch=2)
image returns an array of shape (100, 200, 200, 3), which is a batch of 100 images.
label returns an array of shape (100, 19), which is a batch of 100 labels (there are 19 labels).
The issue was related to shapex and shapey, but I don't know exactly why.
I set shapex = 200 and shapey = 200. Then I rewrote the model to include the transfer learning:
base_model = MobileNet(weights='imagenet', include_top=False, input_shape=(200,200,3), dropout=.3)
x = base_model.output
types = Dense(19, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=types)
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
history = model.fit(get_batches(), steps_per_epoch=1000, epochs=10)
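get_batches is referenced but not shown above; a hedged sketch of what it might look like in TF 2.x, reusing _parse_function, where model.fit consumes the tf.data dataset directly (the batch size of 100 matches the shapes reported above):
import os
import tensorflow as tf

def get_batches(records_dir='/path/to/tfrecord/', batch_size=100):
    # Build a batched, repeating dataset straight from the TFRecord files.
    record_paths = [os.path.join(records_dir, name)
                    for name in os.listdir(records_dir)]
    dataset = tf.data.TFRecordDataset(filenames=record_paths)
    dataset = dataset.map(_parse_function,
                          num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return dataset.shuffle(1024).batch(batch_size).repeat()

history = model.fit(get_batches(), steps_per_epoch=1000, epochs=10)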
I found everything I needed in this Google Colab:
https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/04_Keras_Flowers_transfer_learning_solution.ipynb#scrollTo=XLJNVGwHUDy1

TFLite Cannot set tensor: Dimension mismatch on model conversion

I have a Keras model constructed as follows:
module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(len(classes), activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')
Then I load the Caltech101 dataset from TensorFlow Datasets as follows:
samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']
def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label
train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)
Now I'm ready to train and save my model as follows:
model.fit_generator(train_data, epochs=1, steps_per_epoch=100)
saved_model_dir = './output'
tf.saved_model.save(model, saved_model_dir)
At this point the model is usable; I can evaluate an input of shape (224, 224, 3). I try to convert this model as follows:
def generator2():
    data = train_samples
    for _ in range(num_calibration_steps):
        images = []
        for image, _ in data.map(normalize).take(1):
            images.append(image)
        yield images
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.lite.RepresentativeDataset(generator2)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()
The conversion triggers the following error
/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py in FeedTensor(self, input_value)
110
111 def FeedTensor(self, input_value):
--> 112 return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_FeedTensor(self, input_value)
113
114 def QuantizeModel(self, input_py_type, output_py_type, allow_float):
ValueError: Cannot set tensor: Dimension mismatch
Now there is a similar question, but in their case they are loading an already converted model, unlike my case where the issue happens when I try to convert a model.
The converter object is an auto-generated class created from C++ code using SWIG, which makes it difficult to inspect. How can I find the exact dimensions expected by the converter object?
I had the same problem when using
def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input]
from https://www.tensorflow.org/lite/performance/post_training_quantization.
It seems that converter.representative_dataset expects a list containing one example with shape (1, input_shape). That is, using something along the lines of
def representative_dataset_gen():
    for i in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input[i:i+1]]
where input has shape (num_samples, input_shape), solved the problem. In your case, when using TF Datasets, a working example would be:
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']

def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label

train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)

module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(102, activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')
model.fit_generator(train_data, epochs=1, steps_per_epoch=100)

saved_model_dir = 'output/'
tf.saved_model.save(model, saved_model_dir)

num_calibration_steps = 50

def generator():
    single_batches = train_samples.repeat(count=1).map(normalize).batch(1)
    i = 0
    while i < num_calibration_steps:
        for batch in single_batches:
            i += 1
            yield [batch[0]]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.lite.RepresentativeDataset(generator)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()
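As for finding the exact dimensions the converter expects, one hedged approach is to load the converted model into a tf.lite.Interpreter and read its input details:
import tensorflow as tf

# Inspect the input tensors of the converted model.
interpreter = tf.lite.Interpreter(model_content=tflite_default_quant_model)
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print(detail['name'], detail['shape'], detail['dtype'])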
I had the same problem and used this solution; with inputs_test set as your test input, it should work for you as well:
def representative_dataset():
    arrs = np.expand_dims(inputs_test, axis=1).astype(np.float32)
    for data in arrs:
        yield [data]
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
tflite_quant_model = converter.convert()
I applied this on a Raspberry Pi and it worked; just be sure to install the TFLite runtime outside of your venv:
import tflite_runtime.interpreter as tflite
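From there, a minimal inference sketch with tflite_runtime (hedged: assumes the converted model was written to a hypothetical model.tflite and, per the settings above, takes int8 input):
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path='model.tflite')  # hypothetical path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# int8 to match inference_input_type above; replace the zeros with real data.
sample = np.zeros(input_details[0]['shape'], dtype=np.int8)
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))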

Why do I keep getting an Operation attribute error in my TensorFlow code

I'm trying to build a hidden layer of a neural network with TensorFlow, but I keep getting the error message
'Operation' object has no attribute 'dtype'.
This is where the code throws the error:
codings = tf.layers.dense(X, n_hidden, name="hidden")
This is the entire script
import numpy as np
import tensorflow as tf
from PIL import Image

data = []
test2 = Image.open("./ters/test2.jpg")
prepared_data = np.asarray(test2.resize((800, 1000), Image.ANTIALIAS))
data.append(prepared_data)
data = np.asarray(data)

saver = tf.train.import_meta_graph("./my_model.ckpt.meta")
batch_size, height, width, channels = data.shape
n_hidden = 400

X = tf.get_default_graph().get_operation_by_name("Placeholder")
training_op = tf.get_default_graph().get_operation_by_name("train/Adam")
codings = tf.layers.dense(X, n_hidden, tf.float32)

n_iterations = 5
with tf.Session() as sess:
    saver.restore(sess, "./my_model.ckpt")
    sess.run(training_op)
    test_img = codings.eval(feed_dict={X: X_test})
    print(test_img)
Note:
I have already trained the model, named it my_model.ckpt, and I am trying to import and use it.
This is the error message:
Traceback (most recent call last):
File "using_img_cleaner.py", line 36, in <module>
codings = tf.layers.dense(X, n_hidden, tf.float32)
File "/home/exceptions/env/lib/python3.5/site-packages/tensorflow/python/layers/core.py", line 250, in dense
dtype=inputs.dtype.base_dtype,
AttributeError: 'Operation' object has no attribute 'dtype'
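For context, get_operation_by_name returns a tf.Operation, which has no dtype; tf.layers.dense needs a tf.Tensor. A hedged sketch of the usual fix, assuming the placeholder op is named "Placeholder" as above:
# Fetch the op's output Tensor (":0" is its first output) instead of the
# Operation itself; the Tensor carries the dtype that tf.layers.dense needs.
graph = tf.get_default_graph()
X = graph.get_tensor_by_name("Placeholder:0")
codings = tf.layers.dense(X, n_hidden, name="hidden")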

Tensorflow signature output placeholder

I am trying to export a Tensorflow model so that I can use it in Tensorflow Serving. This is the script that I use:
import os
import tensorflow as tf

trained_checkpoint_prefix = '/home/ubuntu/checkpoint'
export_dir = os.path.join('m', '0')

loaded_graph = tf.Graph()
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(graph=loaded_graph, config=config) as sess:
    # Restore from checkpoint
    loader = tf.train.import_meta_graph(trained_checkpoint_prefix + 'file.meta')
    loader.restore(sess, tf.train.latest_checkpoint(trained_checkpoint_prefix))

    # Create SavedModelBuilder class
    # defines where the model will be exported
    export_path_base = "/home/ubuntu/m"
    export_path = os.path.join(
        tf.compat.as_bytes(export_path_base),
        tf.compat.as_bytes(str(0)))
    print('Exporting trained model to', export_path)
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    batch_shape = (20, 256, 256, 3)
    input_tensor = tf.placeholder(tf.float32, shape=batch_shape, name="X_content")
    predictions_tf = tf.placeholder(tf.float32, shape=batch_shape, name='Y_output')

    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_tensor)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(predictions_tf)

    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'image': tensor_info_input},
            outputs={'output': tensor_info_output},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'style_image':
                prediction_signature,
        })
    builder.save(as_text=True)
The main issue is the output signature (predictions_tf). In this case, having it set to a placeholder, I get an error saying that its value has to be set when the model is called via gRPC. What should I use instead?
I have tried
predictions_tf = tf.Variable(0, dtype=tf.float32, name="Y_output")
and
predictions_tf = tf.TensorInfo(dtype=tf.float32)
predictions_tf.name = "Y_output"
predictions_tf.dtype = tf.float32
I might have misunderstood what you are trying to do, but here you basically create a new placeholder for the input and a new placeholder for the output.
What I think you should do, once you have loaded the model, is to get the input and output tensors of your model into the variables input_tensor and prediction_tf using, for example:
input_tensor=loaded_graph.get_tensor_by_name('the_name_in_the_loaded_graph:0')
prediction_tf=loaded_graph.get_tensor_by_name('the_pred_name_in_the_loaded_graph:0')
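Putting that together with the export script above (hedged sketch: the output tensor name is hypothetical and must match whatever op actually produces your predictions):
# Reuse the graph's real tensors instead of fresh placeholders.
input_tensor = loaded_graph.get_tensor_by_name('X_content:0')
prediction_tf = loaded_graph.get_tensor_by_name('your_output_op:0')  # hypothetical name

prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'image': tf.saved_model.utils.build_tensor_info(input_tensor)},
    outputs={'output': tf.saved_model.utils.build_tensor_info(prediction_tf)},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)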

Read and preprocess image for tensorflow pretrained model

I don't have much experience with Tensorflow. I am trying to use a pretrained ResNet152 model to get the activations of the last layer as output. The images I use as input are stored on my hard drive, so I need to load the images, preprocess them, and then get the output from the pretrained model. I found examples that use image URLs, but when I try it with image paths I can't get it to work. This is what I have so far (only one image for now):
with tf.Graph().as_default():
    filename_queue = tf.train.string_input_producer(['./testimg/A_008.jpg'])
    reader = tf.WholeFileReader()
    key, value = reader.read(filename_queue)
    image = tf.image.decode_jpeg(value, channels=3)

    preprocessing = preprocessing_factory.get_preprocessing('resnet_v2_152', is_training=False)
    processed_image = preprocessing(image, 299, 299)
    processed_images = tf.expand_dims(processed_image, 0)

    with slim.arg_scope(resnet_v2.resnet_arg_scope()):
        logits, end_points = resnet_v2.resnet_v2_152(processed_images, is_training=False)

    checkpoints_dir = './models/resnet_v2_152'
    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'resnet_v2_152.ckpt'),
        slim.get_variables_to_restore())

    with tf.Session() as sess:
        init_fn(sess)
        np_image, fv = sess.run([image, logits])
I am doing this in a Jupyter notebook. When I execute the code I don't get an error message; it just keeps running until I restart the kernel.
Any ideas what I did wrong? And how would I do it for multiple images?
I found the solution by replacing tf.WholeFileReader() with tf.read_file():
graph = tf.Graph()
with graph.as_default():
    image_path = tf.placeholder(tf.string)
    image = tf.image.decode_jpeg(tf.read_file(image_path), channels=3)

    preprocessing = preprocessing_factory.get_preprocessing('resnet_v2_152', is_training=False)
    image_size = 299  # not defined in the original snippet; 299 matches the question above
    processed_image = preprocessing(image, image_size, image_size)
    processed_images = tf.expand_dims(processed_image, 0)

    with slim.arg_scope(resnet_v2.resnet_arg_scope()):
        logits, end_points = resnet_v2.resnet_v2_152(processed_images, is_training=False)

    checkpoints_dir = './models/resnet_v2_152'
    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'resnet_v2_152.ckpt'),
        slim.get_variables_to_restore())

images = ['./testimg/A_008.jpg', './testimg/logo.jpg']
with tf.Session(graph=graph) as sess:
    init_fn(sess)
    for img in images:
        fv = sess.run(logits, feed_dict={image_path: img})
        print(fv)
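As an aside, the original queue-based version most likely hung because tf.train.string_input_producer only fills its queue once queue runners are started; a hedged sketch of that fix, had you kept the reader:
with tf.Session() as sess:
    init_fn(sess)
    # Start the queue runners so reader.read() can dequeue filenames.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    np_image, fv = sess.run([image, logits])
    coord.request_stop()
    coord.join(threads)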
