System information:
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow installed from (source or binary): pip installed
TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca 2.0.0
Python version: 3.7.1
Error:
Unable to save a TensorFlow Keras LSTM model in the SavedModel format for export to a Google Cloud bucket.
Error Message:
ValueError: Attempted to save a function b'__inference_lstm_2_layer_call_fn_36083' which references a symbolic Tensor Tensor("dropout/mul_1:0", shape=(None, 1280), dtype=float32) that is not a simple constant. This is not supported.
Code:
import tensorflow as tf
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import tqdm
import datetime
from sklearn.preprocessing import LabelBinarizer
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.),
    tf.keras.layers.LSTM(512, dropout=0.5, recurrent_dropout=0.5),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(len(LABELS), activation='softmax')
])
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy', 'top_k_categorical_accuracy'])
test_file = 'C:/.../testlist01.txt'
train_file = 'C:/.../trainlist01.txt'
with open(test_file) as f:
    test_list = [row.strip() for row in list(f)]
with open(train_file) as f:
    train_list = [row.strip() for row in list(f)]
train_list = [row.split(' ')[0] for row in train_list]
def make_generator(file_list):
    def generator():
        np.random.shuffle(file_list)
        for path in file_list:
            full_path = os.path.join(BASE_PATH, path).replace('.avi', '.npy')
            label = os.path.basename(os.path.dirname(path))
            features = np.load(full_path)
            padded_sequence = np.zeros((SEQUENCE_LENGTH, 1280))
            padded_sequence[0:len(features)] = np.array(features)
            transformed_label = encoder.transform([label])
            yield padded_sequence, transformed_label[0]
    return generator
train_dataset = tf.data.Dataset.from_generator(make_generator(train_list),
                                               output_types=(tf.float32, tf.int16),
                                               output_shapes=((SEQUENCE_LENGTH, 1280), (len(LABELS),)))
train_dataset = train_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
valid_dataset = tf.data.Dataset.from_generator(make_generator(test_list),
                                               output_types=(tf.float32, tf.int16),
                                               output_shapes=((SEQUENCE_LENGTH, 1280), (len(LABELS),)))
valid_dataset = valid_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
model.fit(train_dataset, epochs=17, validation_data=valid_dataset)
BASE_DIRECTORY = 'C:\\...\\saved_model\\LSTM\\1\\'
tf.saved_model.save(model, BASE_DIRECTORY)
In addition to the answer of The Guy with The Hat:
The .h5 extension is enough to tell Keras to store the model in the Keras HDF5 format.
model.save('path_to_saved_model/model.h5')
should do the trick.
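As a quick round-trip check, the HDF5 file can be loaded back with the Keras API; a minimal sketch, reusing the placeholder path above:

import tensorflow as tf

# load_model restores architecture, weights and optimizer state from the .h5 file
restored = tf.keras.models.load_model('path_to_saved_model/model.h5')
restored.summary()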
Try saving it with the Keras API, not the SavedModel API. See Save and serialize models with Keras: Export to SavedModel.
model.save('path_to_saved_model', save_format='tf')
That should save the model in the SavedModel format.
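To verify the export, the directory can be loaded back either as a Keras model or as a plain SavedModel; a minimal sketch, reusing the placeholder path above:

import tensorflow as tf

# Restore the full Keras model from the SavedModel directory ...
restored = tf.keras.models.load_model('path_to_saved_model')

# ... or load it as a generic SavedModel and inspect its serving signatures.
loaded = tf.saved_model.load('path_to_saved_model')
print(list(loaded.signatures.keys()))  # usually ['serving_default']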
I think there is a bug: you need to set the dropout to 0 for tf.saved_model.save and model.save(..., save_format='tf') to work.
This seems to be a bug with both TensorFlow 2.0 and 2.1, after upgrading my TensorFlow to v2.2, it's working fine.
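If upgrading is not possible, a workaround consistent with the answers above is to rebuild the model with the LSTM dropout rates set to 0, copy the trained weights across, and save the copy. A minimal sketch; it assumes the model, LABELS, SEQUENCE_LENGTH and BASE_DIRECTORY from the question, and works because dropout has no weights of its own, so the layer shapes are unchanged:

export_model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.),
    tf.keras.layers.LSTM(512, dropout=0.0, recurrent_dropout=0.0),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(len(LABELS), activation='softmax')
])
# Build with the same input shape so the weights line up, then copy them over.
export_model.build(input_shape=(None, SEQUENCE_LENGTH, 1280))
export_model.set_weights(model.get_weights())
tf.saved_model.save(export_model, BASE_DIRECTORY)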
I am following this tutorial for image detection using Matterport repo.
I tried following this guide and edited the code to
How can I edit the following code to visualize the tensorboard ?
import tensorflow as tf
import datetime
%load_ext tensorboard
sess = tf.Session()
file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
And then in the model area
# prepare config
config = KangarooConfig()
config.display()
# define the model
model = MaskRCNN(mode='training', model_dir='./', config=config)
model.keras_model.metrics_tensors = []
# Tensorflow board
logdir = os.path.join(
    "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
# load weights (mscoco) and exclude the output layers
model.load_weights('mask_rcnn_coco.h5',
                   by_name=True,
                   exclude=[
                       "mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox",
                       "mrcnn_mask"
                   ])
# train weights (output layers or 'heads')
model.train(train_set,
            test_set,
            learning_rate=config.LEARNING_RATE,
            epochs=5,
            layers='heads')
I am not sure where to pass callbacks=[tensorboard_callback].
In your model.train, if you look closely at the source code documentation, there is a parameter called custom_callbacks, which defaults to None.
That is where your callback goes; to train with a custom callback, add this argument:
model.train(train_set,
            test_set,
            learning_rate=config.LEARNING_RATE,
            custom_callbacks=[tensorboard_callback],
            epochs=5,
            layers='heads')
You only have to open the Anaconda Prompt and run tensorboard --logdir=yourlogdirectory, where yourlogdirectory is the directory containing the model checkpoint.
It should look something like this: logs\xxxxxx20200528T1755, where xxxx stands for the name you gave your configuration.
This command will print a web address; open it in your browser of preference.
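Since the question already runs %load_ext tensorboard in the notebook, the dashboard can also be shown inline instead of from a prompt; a minimal sketch, assuming the logs directory created above:

%load_ext tensorboard
%tensorboard --logdir logs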
How do I convert a TensorFlow 2.0 model to a tflite model with the 2.0 API?
I tried to export my custom TensorFlow model to the tflite format, because I want to integrate this module into an Android application. I get strange errors after running the Python script.
Used: TensorFlow 2.0.0 beta1 API
https://www.tensorflow.org/lite/r2/convert
I tried these methods to convert:
- From a SavedModel
- From a tf.keras model
Both attempts are shown in the code below.
CipherNeuralModel = keras.Sequential([
    keras.layers.Flatten(input_shape=[20, 20]),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(26, activation='softmax')
])
CipherNeuralModel.compile(optimizer='adam',
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
CipherNeuralModel.fit(trainSumData, imgNumberFromLit, epochs=10, steps_per_epoch=20)
savePath = "D:\FolderToModel"
tf.saved_model.save(CipherNeuralModel, savePath)
export_model = tf.saved_model.load(savePath)
concrete_func = export_model.signatures[
    tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY
]
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
# Error here
converter = converter.convert()
# Example for converting a tf.keras model to tflite
# convert_model = tf.function(lambda x: CipherNeuralModel(x))
# concrete_func = convert_model.get_concrete_function(
#     tf.TensorSpec(CipherNeuralModel.inputs[0].shape,
#                   CipherNeuralModel.inputs[0].dtype))
# convertor = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
# tflite_model = convertor.convert()
Error output:
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
b'"toco_from_protos" \xad\xa5 \xef\xa2\xab\xef\xa5\xe2\xe1\xef \xa2\xad\xe3\xe2\xe0\xa5\xad\xad\xa5\xa9 \xa8\xab\xa8
(The byte string is CP866-encoded Russian; it is the Windows shell message '"toco_from_protos" is not recognized as an internal or external command'.)
This problem arises when the library runs into a long file path (more than 255 characters). To solve it, follow these steps:
Export the model to a file:
savePath = "D:\FolderToModel"
tf.saved_model.save(CipherNeuralModel, savePath)
Upload the model to your Google Drive
Create a project at https://colab.research.google.com
Connect your Google Drive to the project:
from google.colab import drive
drive.mount('/content/drive')
Install the TensorFlow library in the project:
pip install tensorflow==2.0.0-rc0
Load your model from the file in Google Drive and convert it to a TensorFlow Lite model:
path = '/content/drive/My Drive/highModelNeural'
export_model = tf.saved_model.load(path)
concrete_func = export_model.signatures[
    tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY
]
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
Write the TensorFlow Lite model to a file in your Google Drive:
open("/content/drive/My Drive/highModelNeural/model70v5.tflite", "wb").write(tflite_model)
After that, you can use the TFLite model in your Android application or on an embedded device.
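Before shipping the file to Android, it is worth confirming that it loads and runs; a minimal sketch with the TFLite interpreter, reusing the path above and the 20x20 input of the original model:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path='/content/drive/My Drive/highModelNeural/model70v5.tflite')
interpreter.allocate_tensors()

# Run one dummy input through the model and check the output shape.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)  # expect (1, 26)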
I am trying out various TensorFlow models from hub, but I can't seem to get this one to work with KerasLayer:
"https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/3"
I am using the same procedure used within the examples in the documentation:
https://www.tensorflow.org/tutorials/images/hub_with_keras
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3))
I even tried a few amendments, such as including trainable=True, tags={"train"}, so it would look like this:
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3), trainable=True, tags={"train"})
because that's what it said to do in the docs.
However, I am still getting this error:
ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 2 MetaGraphs in the SavedModel with tag sets [[], ['train']]. Pass a 'tags=' argument to load this SavedModel
import tensorflow as tf
import tensorflow_hub as hub
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.callbacks import ModelCheckpoint
tf.compat.v1.disable_eager_execution()
URL = "https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/3"
module = hub.Module(URL)
height, width = hub.get_expected_image_size(module)
# here is where the error comes from
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3), trainable=True, tags={"train"})
At this time, hub.KerasLayer only works for TF2-style models like https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4.
Please stay tuned for more choices of TF2-style models as TensorFlow 2.0 gets released.
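For reference, a minimal sketch of the working pattern with the TF2-style handle named above (the 299x299 input size is an assumption based on Inception v3's usual input):

import tensorflow as tf
import tensorflow_hub as hub

URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"

# A TF2-style module loads directly as a Keras layer; no hub.Module or tags argument is needed.
feature_extractor = hub.KerasLayer(URL, input_shape=(299, 299, 3), trainable=False)
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(10, activation='softmax')  # hypothetical 10-class head
])
model.summary()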
I'm trying to deploy a recently trained Keras model on Google Cloud ML Engine. I googled around to see what format the saved model needs to be in for ML Engine and found this:
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def
# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)
# disable loading of learning nodes
K.set_learning_phase(0)
# load model
model = load_model('model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)
# export saved model
export_path = 'YOUR_EXPORT_PATH' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                  outputs={'NAME_YOUR_OUTPUT': new_Model.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
However, in Keras 2.1.3, keras.backend no longer seems to have clear_session(), set_session(), or get_session(). What's the modern way of handling this issue? Do those functions live elsewhere now?
Thanks!
I'm trying to reproduce some results from the paper describing the Grad-CAM method, using Keras with the TensorFlow-GPU backend, and I obtain totally incorrect labels.
I've captured a screenshot of figure 1(a) from that paper and am trying to make the pretrained VGG16 from Keras Applications classify it.
Here is my image:
Here is my code (a cell from a Jupyter notebook). Part of the code was copied from the Keras manuals:
import imageio
import numpy as np
from matplotlib import pyplot as plt
from skimage.transform import resize
from keras import activations
from keras.applications import VGG16
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
%matplotlib inline
dog_img = imageio.imread(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png")
dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
dog_img = resize(dog_img, (224, 224, 3))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x, mode='tf')
pred = model.predict(x)
decode_predictions(pred)
Output:
[[('n03788365', 'mosquito_net', 0.017053505),
('n03291819', 'envelope', 0.015034639),
('n15075141', 'toilet_tissue', 0.012603286),
('n01737021', 'water_snake', 0.010620943),
('n04209239', 'shower_curtain', 0.009625845)]]
However, when I submit the same image to the online service run by the paper authors, http://gradcam.cloudcv.org/classification, I see the correct label "Boxer".
Here is the output from something that they call "Terminal":
Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.12565684319"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.05940508842"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Job published successfully
Publishing job to Classification Queue
Starting classification job on VGG_ILSVRC_16_layers.caffemodel
Job published successfully
Publishing job to Classification Queue
Starting classification job on VGG_ILSVRC_16_layers.caffemodel
I use Anaconda Python 64-bit, on Windows 7.
Versions of relevant software on my PC:
keras 2.2.2 0
keras-applications 1.0.4 py36_1
keras-base 2.2.2 py36_0
keras-preprocessing 1.0.2 py36_1
tensorflow 1.10.0 eigen_py36h849fbd8_0
tensorflow-base 1.10.0 eigen_py36h45df0d8_0
What am I doing wrong? How can I get boxer label?
Apparently you cannot do the following line:
dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
So I loaded the image using a utility in Keras called load_img, which doesn't add the alpha channel.
The complete code
import imageio
from matplotlib import pyplot as plt
from skimage.transform import resize
import numpy as np
from keras import activations
from keras.applications import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
dog_img = image.img_to_array(image.load_img(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png", target_size=(224, 224)))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x)
pred = model.predict(x)
print(decode_predictions(pred))
[[('n02108089', 'boxer', 0.29122102), ('n02108422', 'bull_mastiff', 0.199128), ('n02129604', 'tiger', 0.10050287), ('n02123159', 'tiger_cat', 0.09733449), ('n02109047', 'Great_Dane', 0.056869864)]]
Considering that all the output probabilities are very low and more or less evenly distributed around 0.01, my guess is that you are pre-processing the image incorrectly and passing some sort of scrambled image that looks like noise to model.predict(). Try to debug and imshow the image right before you call predict().
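A minimal sketch of that debugging step, assuming dog_img is the array from the question just before np.expand_dims:

from matplotlib import pyplot as plt

# skimage's resize returns floats in [0, 1], which imshow renders directly;
# a recognizable dog here means the input array itself is fine.
plt.imshow(dog_img)
plt.title('Input image right before preprocess_input / predict')
plt.show()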