model.summary() gives me this output.
Now how can I check the layers of sequential_1 and sequential_3?
I want the whole model summary, but it only shows two Sequential blocks, which means two models are combined. How can I get the summary of both models?
I only have the model.h5 file, nothing else.
Models saved in the .h5 format include everything about the model.
To inspect the layer summaries of a Model nested inside another Model, as in your case, you can extract the layers and then call the summary method on each of them, i.e.:
layer_summary = [layer.summary() for layer in loaded_model.layers]
Here is the complete code I used to reproduce your scenario:
import tensorflow as tf
print('Running Tensorflow version {}'.format(tf.__version__)) # Tensorflow 2.1.0
model_path = '/content/keras_model.h5'
loaded_model = tf.keras.models.load_model(model_path)
loaded_model.summary()
inp = loaded_model.input
layer_summary = [layer.summary() for layer in loaded_model.layers]
I've also used the model.h5 file you uploaded.
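Note that summary() is only defined for layers that are themselves Models (like the nested Sequentials here); if the top-level model also contains plain layers, a guarded loop avoids attribute errors. A minimal sketch, assuming TF 2.x:

import tensorflow as tf

loaded_model = tf.keras.models.load_model('/content/keras_model.h5')

for layer in loaded_model.layers:
    if isinstance(layer, tf.keras.Model):
        # Nested models (e.g. sequential_1, sequential_3) have their own summary
        print('--- nested model: {} ---'.format(layer.name))
        layer.summary()
    else:
        print('plain layer: {} ({})'.format(layer.name, layer.__class__.__name__))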
I created a model from Sequential. When I saved it, I got this warning message:
home/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override
I tested one image and the prediction was good. Then I saved my model, and when I loaded it again it started giving me wrong values and the predictions were all wrong. What is the correct way to save the model and load it?
import numpy as np
import matplotlib.pyplot as plt
import glob
import cv2
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers import Input, Dropout, Flatten, Dense
from tensorflow.keras.layers import UpSampling2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.models import Model, Sequential
input_shape = (3,1134,1134,3)
base_model = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",
    input_shape=(1134, 1134, 3),
    pooling=None,
    # note: the original `pooling=max` passed the Python builtin max;
    # Keras silently treats any value other than 'avg'/'max' as no pooling.
    # Use the string 'max' if global max pooling is actually intended.
)

# Freeze all but the last four layers of the base model
for layer in base_model.layers[:-4]:
    layer.trainable = False
model = Sequential()
model.add(Conv2D(3,(3,3),activation='relu',padding='same'))
model.add(base_model)
model.add(Conv2D(3,(3,3),activation='relu',padding='same'))
# model.add(Convolution2D(3,(4,4),activation='relu',padding='same'))
model.add(UpSampling2D(size =(16,16)))
model.add(UpSampling2D())
model.add(BatchNormalization())
model.add(Conv2D(3,(3,3),activation='relu',padding='same'))
model.build(input_shape)
model.summary()
This is how I save it:
model.save("/media/TOSHIBA EXT/trained_model/UAV_01.h5")
model=keras.models.load_model(
"/media/TOSHIBA EXT/trained_model/UAV_01.h5")
@user123 I agree with you that it was an issue with older versions (TF2.5, TF2.6, and TF2.7).
This was resolved in a recent tf-nightly. Here is a gist for reference. If you want to use a stable version, it will be available in the upcoming TF2.8 in the near future. Thanks!
Two other approaches to try:
Save the model using a directory path instead of a path that ends in the .h5 extension. Under the hood, save does different things if you pass a path that ends with .h5; if you pass a directory, it will use the newer SavedModel format. You can then load the model directly with:
from tensorflow.keras.models import load_model
new_model = load_model('<path to directory used in save>')
https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk
Only save the weights. To load the model, use your current model-creation code to instantiate the object, then load the weights into the empty model object. This used to be the only way! (A concrete sketch for your model follows the FAQ note below.)
model.save_weights('my_model_weights.h5')
...
new_model = <build your model with your model building code>
new_model.load_weights('my_model_weights.h5')
Ref: https://keras.io/getting_started/faq/ (weights-only saving).
In the FAQ, there are also several other suggestions for how to handle it, but for your situation these two will probably cover it.
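For example, the weights-only route applied to the ResNet50-based model from the question could look roughly like this. A minimal sketch: build_model just repeats the model-building code from above, and the weight-file name is a placeholder.

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, UpSampling2D, BatchNormalization
from tensorflow.keras.models import Sequential

def build_model():
    # Same architecture as used for training
    base_model = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=(1134, 1134, 3))
    for layer in base_model.layers[:-4]:
        layer.trainable = False
    model = Sequential([
        Conv2D(3, (3, 3), activation='relu', padding='same'),
        base_model,
        Conv2D(3, (3, 3), activation='relu', padding='same'),
        UpSampling2D(size=(16, 16)),
        UpSampling2D(),
        BatchNormalization(),
        Conv2D(3, (3, 3), activation='relu', padding='same'),
    ])
    model.build((None, 1134, 1134, 3))
    return model

model = build_model()
# ... train the model ...
model.save_weights('UAV_01_weights.h5')

# Later, or in a fresh session:
new_model = build_model()
new_model.load_weights('UAV_01_weights.h5')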
Here is a good snippet to add after your train code to make sure that the export works and is not the cause of a performance issue at inference:
# From: https://www.tensorflow.org/guide/keras/save_and_serialize#whole-model_saving_loading
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# Let's check:
np.testing.assert_allclose(
    model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
In my TensorFlow model, the output of one network is a tensor. I need to feed this value as input to another pretrained network. I'm loading the pretrained network as follows:
input_b_ph = tf.placeholder(shape=(), dtype=tf.float32, name='input_b_ph')
sess1 = tf.Session()
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta', input_map={'input/Identity:0': input_b_ph})
graph = tf.get_default_graph()
saver.restore(sess1, model_path.as_posix())
output_b = graph.get_tensor_by_name('output/Identity:0')
I need to feed a tensor to feature_input. How can I achieve this?
Edit 1: Adding end-to-end details:
I have a network A defined in TensorFlow which takes input input_a and produces output output_a. I need to feed this to a pretrained ResNet50 model. For this I used ResNet50 from tf.keras:
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
resnet_model = ResNet50(include_top=False, pooling='avg')
preprocessed_input = preprocess_input(tf.cast(output_a, tf.float32))
output_resnet = resnet_model([preprocessed_input])
The output of ResNet is output_resnet. I need to feed this to another pretrained network, say network B. B is actually written in Keras; I modified it to use tf.keras. Then I save the trained model as below:
import tensorflow as tf
from tensorflow import keras

curr_sess = keras.backend.get_session()

# Give the model's input and output stable names for restoring later
with tf.name_scope('input'):
    _ = tf.identity(quality_net.model.input)
with tf.name_scope('output'):
    __ = tf.identity(quality_net.model.output)

saver = tf.train.Saver()
saver.save(curr_sess, output_filepath.as_posix())
I have access to this network B and tried to save the model in h5 format, but it gave an error about a thread lock. Searching the internet, I learned that this error occurs when there are Lambda layers in the network. So I resorted to saving the model in the TensorFlow checkpoint format - 3 files: meta, weights and index. (Any solution using the h5 format is also acceptable.)
There is a caveat here: the structure of network B can keep changing, and it comes from a different project, so I can't hardcode the architecture of B; I have to load it from the saved model. My problem is how to restore this pretrained model and pass output_resnet as input to network B. The output of network B, i.e. output_b, is the loss used to train my original network A. Currently I'm able to restore network B as follows:
input_b_ph = tf.placeholder(shape=(), dtype=tf.float32, name='input_b_ph')
sess1 = tf.Session()
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta', input_map={'input/Identity:0': input_b_ph})
graph = tf.get_default_graph()
saver.restore(sess1, model_path.as_posix())
output_b = graph.get_tensor_by_name('output/Identity:0')
I have the output from ResNet as output_resnet, which is a tensor. I need a way to set this as input_b_ph. How can I achieve that? Any alternate solutions are also acceptable.
Mentioning the answer in this answer section (although it is already present in the comments), for the benefit of the community.
A placeholder is not required in this case; passing output_resnet directly in input_map resolves the issue.
Replacing the code,
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta',
                                   input_map={'input/Identity:0': input_b_ph})
with
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta',
                                   input_map={'input/Identity:0': output_resnet})
has resolved the issue.
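Putting it together, an end-to-end sketch of the flow (TF 1.x; output_a, model_path and the tensor names 'input/Identity:0' / 'output/Identity:0' are taken from the question and may differ in your graph):

import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# output_a is the tensor produced by network A (already defined in the current graph)
resnet_model = ResNet50(include_top=False, pooling='avg')
preprocessed_input = preprocess_input(tf.cast(output_a, tf.float32))
output_resnet = resnet_model(preprocessed_input)

# Import network B's graph, wiring its saved input directly to the ResNet output
saver = tf.train.import_meta_graph(
    model_path.as_posix() + '.meta',
    input_map={'input/Identity:0': output_resnet})

sess = tf.Session()
saver.restore(sess, model_path.as_posix())

graph = tf.get_default_graph()
output_b = graph.get_tensor_by_name('output/Identity:0')
# output_b now depends on output_a, so it can be used as the loss for training network A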
I have been working on a complicated Keras model with a custom metric, and I recently converted it to TensorFlow Lite. The models are not exactly the same and the outputs are different, but it is difficult to evaluate because the output is a tensor of size 128. Is there any way I can run my custom metric on this model? I am using TF 1.14. Below is some relevant code.
# compile and train the model
model.save('model.h5')
# save the model in TFLite
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5', custom_objects={'custom_metric': custom_metric})
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)
# run the model
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_dets = interpreter.get_input_details()
output_dets = interpreter.get_output_details()
input_shape = input_dets[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_dets[0]['index'], input_data)
interpreter.invoke()
The models are supposed to be different because the converter does graph transformations (such as fusing activations and folding batch norm), and the resulting graph is targeted at inference-only scenarios.
To run metrics: the interpreter provides an API to get the output value (as an array):
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
Then you apply your metric on the output.
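For example, a minimal sketch: it assumes custom_metric is a plain function that accepts numpy arrays (if it is written with Keras backend ops you would additionally need to evaluate the result in a session under TF 1.14), model is the trained Keras model still in memory, and y_true stands for whatever ground truth the metric expects.

import numpy as np

# After interpreter.invoke(), read the output back as a numpy array.
# (interpreter.tensor() returns a callable view; get_tensor() returns the array directly.)
output_index = interpreter.get_output_details()[0]['index']
tflite_output = interpreter.get_tensor(output_index)

# Compare against the original Keras model on the same input
keras_output = model.predict(input_data)

# y_true is whatever ground truth the metric expects for this input (placeholder here)
print('metric (keras) :', custom_metric(y_true, keras_output))
print('metric (tflite):', custom_metric(y_true, tflite_output))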
I am trying to use a frozen, pretrained, DeepLabv3 model in a larger tf.keras training pipeline, but have been having trouble figuring out how to use it as a tf.keras Model. I am trying to use tf.keras as I feel there would be a slowdown using a feed_dict (the only way I know of to use a frozen graph) in the middle of multiple forward passes. The deeplab model referenced in the code below is built in regular keras (as opposed to tf.contrib.keras)
import tensorflow as tf
from keras import backend as K
from keras import models

# Create, compile and train model...
# freeze_session and load_graph are helper functions defined elsewhere in my code
frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in deeplab.outputs])
tf.train.write_graph(frozen_graph, "./", "my_model.pb", as_text=False)

graph = load_graph("my_model.pb")

# We can verify that we can access the list of operations in the graph
for op in graph.get_operations():
    print(op.name)
    # prefix/Placeholder/inputs_placeholder
    # ...
    # prefix/Accuracy/predictions

# We access the input and output nodes
x = graph.get_tensor_by_name("prefix/input_1:0")
y = graph.get_tensor_by_name("prefix/bilinear_upsampling_2/ResizeBilinear:0")

# We launch a Session
with tf.Session(graph=graph) as sess:
    print(graph)
    model2 = models.Model(inputs=x, outputs=y)
    model2.summary()
and I get this error:
ValueError: Input tensors to a Model must come from `tf.layers.Input`. Received: Tensor("prefix/input_1:0", shape=(?, 512, 512, 3), dtype=float32) (missing previous layer metadata).
I feel like I've seen others replace the input tensor with an Input Layer to trick tf.keras into building the graph, but after a few hours I am feeling stuck. Any help would be appreciated!
You can recreate the model object from its config. See the from_config method here: https://keras.io/models/about-keras-models/.
The config is stored and loaded back by the save_model/load_model functions. I am not familiar with freeze_session.
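If you still have the original deeplab model object around, the config round trip is short. A minimal sketch: custom layers would need to be passed via custom_objects, and weights are transferred separately with get_weights/set_weights.

from keras.models import Model

# Serialize the architecture to a plain Python dict...
config = deeplab.get_config()
weights = deeplab.get_weights()

# ...then rebuild an identical model object from it and copy the weights over
deeplab_copy = Model.from_config(config)
deeplab_copy.set_weights(weights)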
I am currently working on a VGG16 model with Keras.
I fine-tune the VGG model with some of my own layers.
After fitting my model (training), I save it with model.save('name.h5').
It saves without a problem.
However, when I try to reload the model with the load_model function, it shows this error:
You are trying to load a weight file containing 17 layers into a model
with 0 layers
Did anyone meet this problem before?
My Keras version is 2.2.
Here is part of my code ...
from keras.models import Sequential, load_model
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout

vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

global model_2
model_2 = Sequential()
for layer in vgg_model.layers:
    model_2.add(layer)
for layer in model_2.layers:
    layer.trainable = False
model_2.add(Flatten())
model_2.add(Dense(128, activation='relu'))
model_2.add(Dropout(0.5))
model_2.add(Dense(2, activation='softmax'))
model_2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_2.fit(x=X_train,y=y_train,batch_size=32,epochs=30,verbose=2)
model_2.save('name.h5')
del model_2
model_2 = load_model('name.h5')
Actually I do not delete the model and then load it immediately; that is just to show my problem.
It seems that this problem is related to the input_shape parameter of the first layer. I had this problem with a wrapper layer (Bidirectional) which did not have an input_shape parameter set. In code:
model.add(Bidirectional(LSTM(units=units, input_shape=(None, feature_size)), merge_mode='concat'))
did not work for loading my old model, because input_shape is only defined for the LSTM layer, not the outer one. Instead,
model.add(Bidirectional(LSTM(units=units), input_shape=(None, feature_size), merge_mode='concat'))
worked, because the wrapper Bidirectional layer now has an input_shape parameter. Maybe you should check whether the VGG net's input_shape parameter is set, or add a single input layer to your model with the correct input_shape.
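Applied to the VGG16 model from the question, that suggestion could look roughly like this. A minimal sketch: it adds an explicit InputLayer so the Sequential model records its input shape, and skips VGG16's own input layer when copying its layers (that part is an assumption on my side).

from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import InputLayer, Flatten, Dense, Dropout

vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

model_2 = Sequential()
model_2.add(InputLayer(input_shape=(224, 224, 3)))  # explicit input shape for the saved model
for layer in vgg_model.layers[1:]:  # skip VGG16's own InputLayer
    layer.trainable = False
    model_2.add(layer)
model_2.add(Flatten())
model_2.add(Dense(128, activation='relu'))
model_2.add(Dropout(0.5))
model_2.add(Dense(2, activation='softmax'))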
I spent 6 hours looking around for a solution to apply my trained model.
Finally I tried VGG16 as the model, using the h5 weights I had trained on my own, and it worked great!
import numpy as np
from keras import applications
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.image import load_img, img_to_array

weights_model = 'C:/Anaconda/weightsnew2.h5'  # my already trained weights .h5

vgg = applications.vgg16.VGG16()
cnn = Sequential()
for capa in vgg.layers:
    cnn.add(capa)
cnn.layers.pop()  # drop VGG16's original 1000-class classifier
for layer in cnn.layers:
    layer.trainable = False
cnn.add(Dense(2, activation='softmax'))
cnn.load_weights(weights_model)

def predict(file):
    x = load_img(file, target_size=(longitud, altura))  # longitud/altura: image width/height used in training
    x = img_to_array(x)
    x = np.expand_dims(x, axis=0)
    array = cnn.predict(x)
    result = array[0]
    respuesta = np.argmax(result)
    if respuesta == 0:
        print("Gato")  # cat
    elif respuesta == 1:
        print("Perro")  # dog
In case anyone is still wondering about this error:
I had the same problem and spent days figuring out what was causing it. I have a copy of my whole code and dataset on another system, on which it worked. I noticed that it is something about the training, because without training my model, saving and loading was no problem.
The only difference between my systems was that I was using tensorflow-gpu on my main system, and for this reason the TensorFlow base version was a little bit lower (1.14.0 instead of 2.2.0). So all I had to do was use
model.fit_generator()
instead of
model.fit()
before saving it. And it works.
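For reference, with the numpy arrays from the question, the generator-based call could look roughly like this. A minimal sketch: it simply wraps X_train/y_train in an ImageDataGenerator so fit_generator can be used on the older tensorflow-gpu setup.

from keras.preprocessing.image import ImageDataGenerator

# Wrap the existing numpy arrays in a generator so fit_generator can be used
datagen = ImageDataGenerator()
train_gen = datagen.flow(X_train, y_train, batch_size=32)

model_2.fit_generator(train_gen,
                      steps_per_epoch=len(train_gen),
                      epochs=30,
                      verbose=2)
model_2.save('name.h5')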