I have a few Tensorflow models saved as .h5 files.
Due to poor record-keeping and documentation on my part, I can't recall the exact architecture each has. So, I was wondering if there was a way, from the h5 files saved for each model, to inspect the models and determine the architecture.
For example, is there a way to find out the number of layers, the activation functions, the input/output sizes, etc.?
Any help is appreciated.
Thanks,
Sam
As suggested by Edwin in the comments, you can load the model and view its summary for layer details.
The code below prints both the activation functions and the overall model architecture.
from tensorflow.keras.models import load_model
model = load_model('saved_model.h5')
for layer in model.layers:
    try:
        print(layer.activation)
    except AttributeError:
        # Some layers (e.g. pooling or dropout) have no activation function.
        pass

# Get the names of all layers in the model.
layer_names = [layer.name for layer in model.layers]

# Print the model's summary and layer details.
model.summary()
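If you also need per-layer input/output shapes and configuration details (units, kernel sizes, and so on), each layer exposes them directly. A minimal sketch, assuming the same loaded model and a tf.keras 2.x environment:

for layer in model.layers:
    # Input/output shapes of each layer.
    print(layer.name, layer.input_shape, layer.output_shape)
    # Full layer configuration, including the activation where one exists.
    print(layer.get_config())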
I really don't have much idea of what I'm doing; I followed this tutorial to process DeepDream images: https://www.youtube.com/watch?v=Wkh72OKmcKI
I'm trying to change the base model to any of the ones listed here, https://keras.io/api/applications/#models-for-image-classification-with-weights-trained-on-imagenet, currently InceptionResNetV2 in particular. InceptionV3 uses "mixed0" up to "mixed10", whereas InceptionResNetV2 apparently uses a different naming scheme.
I would have to change this section:
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]
# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
I'm getting an error "no such layer: mixed3"
So yeah, I'm just trying to figure out how to get the layer names for this model as well as the others.
You can simply run the following code to print the model architecture (including layer names):
# 'model' here denotes the Inception model.
model.summary()
Or to visualize complex relationships,
tf.keras.utils.plot_model(model, to_file='model.png')
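For the DeepDream case specifically, here is a minimal sketch that prints every layer name in InceptionResNetV2 so you can pick real names to replace 'mixed3' and 'mixed5' (the loading arguments below are assumptions; match them to the tutorial's setup):

import tensorflow as tf

base_model = tf.keras.applications.InceptionResNetV2(include_top=False, weights='imagenet')

# Print every layer name; choose names from this list instead of 'mixed3'/'mixed5'.
for layer in base_model.layers:
    print(layer.name)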
I want to train a model in a sequential manner. That is I want to train the model initially with a simple architecture and once it is trained, I want to add a couple of layers and continue training. Is it possible to do this in Keras? If so, how?
I tried to modify the model architecture. But until I compile, the changes are not effective. Once I compile, all the weights are re-initialized and I lose all the trained information.
All the questions in web and SO I found are either about loading a pre-trained model and continuing training or modifying the architecture of pre-trained model and then only test it. I didn't find anything related to my question. Any pointers are also highly appreciated.
PS: I'm using Keras from the TensorFlow 2.0 package.
Without knowing the details of your model, the following snippet might help:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input

# Train your initial model
def get_initial_model():
    ...
    return model

model = get_initial_model()
model.fit(...)
model.save_weights('initial_model_weights.h5')

# Use the Model API to create another model, built on your initial model
initial_model = get_initial_model()
initial_model.load_weights('initial_model_weights.h5')

nn_input = Input(...)
x = initial_model(nn_input)
x = Dense(...)(x)  # This is the additional layer, connected to your initial model
nn_output = Dense(...)(x)

# Combine your model
full_model = Model(inputs=nn_input, outputs=nn_output)

# Compile and train as usual
full_model.compile(...)
full_model.fit(...)
Basically, you train your initial model and save its weights, then reload them into a fresh copy and wrap it together with your additional layers using the Model API. If you are not familiar with the Model API, you can check out the Keras documentation here (as far as I know, the API remains the same for tf.keras in TensorFlow 2.0).
Note that you need to check if your initial model's final layer's output shape is compatible with the additional layers (e.g. you might want to remove the final Dense layer from your initial model if you are just doing feature extraction).
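If you do go the feature-extraction route and want to drop the initial model's final Dense layer before stacking new layers, here is a hedged sketch continuing from the snippet above (it assumes the second-to-last layer is the output you want to expose):

# Build a truncated model that stops just before the final Dense layer.
feature_extractor = Model(inputs=initial_model.input,
                          outputs=initial_model.layers[-2].output)

x = feature_extractor(nn_input)  # use this in place of initial_model(nn_input) above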
I've got an imported model of type 'keras.engine.training.Model' and I want a Sequential model.
I tried this:
model = ..imported model..
seq_model = Sequential()
for layer in model.layers:
    seq_model.add(layer)
But it raised: "ValueError: A merge layer should be called on a list of inputs."
If you're trying to insert ResNet, please use the functional API instead: its merge (skip-connection) layers take multiple inputs at once, so the model can't be rebuilt layer by layer inside a Sequential.
However, if you're using a purely linear model such as VGG16, please check this.
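As an illustration (not the original answer's exact suggestion), one hedged workaround when the imported model contains merge layers is to add the entire model to a Sequential as a single layer rather than copying its layers one by one; the extra Dense layer here is purely hypothetical:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

seq_model = Sequential()
seq_model.add(model)  # the whole imported functional model behaves as one layer
seq_model.add(Dense(10, activation='softmax'))  # hypothetical extra layer on top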
I am trying to do video classification using Conv2D and LSTM. After getting the features from the Conv2D model I pass them to the LSTM. As they are two different models, how can I merge them into one so that I get a single .h5 file?
Also, the features from the Conv2D model are passed to the LSTM as sequences of frames saved as ".npy" array files.
I need to save the model for different purposes.
Assuming you are using tf.keras, you can merge them with the functional Model API. Note that the inputs and outputs arguments take the graph's input and output tensors, not the Conv2D/LSTM objects themselves:
model = keras.models.Model(inputs=your_input_tensor, outputs=out)
And to save your model you can just use .save as explained here
model.save('your_model.h5')
Not exactly sure if this is what you need but hope it helps.
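For instance, here is a minimal sketch of one way to fuse the two parts into a single saveable model, using TimeDistributed to run the CNN over each frame (all shapes, layer sizes, and the class count below are hypothetical):

import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input: clips of 10 frames, each 64x64 RGB, 5 output classes.
frames_in = layers.Input(shape=(10, 64, 64, 3))

# CNN feature extractor applied to every frame.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
])
x = layers.TimeDistributed(cnn)(frames_in)

# LSTM consumes the per-frame feature sequence.
x = layers.LSTM(32)(x)
out = layers.Dense(5, activation='softmax')(x)

combined = models.Model(inputs=frames_in, outputs=out)
combined.save('video_classifier.h5')  # one .h5 containing both the CNN and the LSTM

Because everything lives in one model object, a single save call writes one file instead of separate CNN and LSTM checkpoints.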
This is more of an 'issue' than a question, but I noticed something today while trying some transfer learning with Keras. I found that the InceptionV3 model and pre-trained weights in Francois Chollet's repository are different from the Kaggle ones. I checked that using the diff command.
Not only that, but when I use the code block below--
from keras.applications.inception_v3 import InceptionV3

Inception_pretrained_weight = '../pre_weight/inception_v3_weights_tf_dim_ordering_tf_kernels_chollet.h5'
pre_trained_model = InceptionV3(input_shape=(160, 160, 3),
                                include_top=False,
                                weights=Inception_pretrained_weight)

for lr in pre_trained_model.layers:
    lr.trainable = False

for layr in pre_trained_model.layers:
    print("layer names: ", layr.name)
I get this error--
ValueError: Layer #0 (named "conv2d_753" in the current model) was found to correspond to layer convolution2d_1 in the save file. However the new layer conv2d_753 expects 1 weights, but the saved weights have 2 elements.
This does not happen with the model available on the Kaggle page. Has anyone noticed this yet? Does anyone know how and why these models differ? I found another post on this here, but it does not really help. Any suggestions would be appreciated.