How to save and reuse all settings for a Keras model? - python

The question:
With a keras model (partly) specified such as this:
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)
Is it in any way possible to save all details in the model for later use?
The details:
I've been following an example from machinelearningmastery.com and trying to modify and add traits/arguments of the model, such as
activation='relu'
activation='sigmoid'
metrics=['accuracy']
And as the question suggests, I'd like to store model settings for later use.
I understand that these arguments are parts of different functions, but shouldn't it be possible all the same?
What I've tried:
1. model.save() and model.load()
Only returns
AttributeError: 'Sequential' object has no attribute 'load'
2. model.get_config()
Here I've been able to find some of the settings such as:
[{'class_name': 'Dense', 'config': {'activation': 'relu',
But I haven't found a way to load that config as a standalone model, and more often than not, I can't seem to find all settings.
3. I've also checked other posts such as Keras - Reuse weights from a previous layer - converting to keras tensor, but they don't seem to cover all aspects of the model.
Any suggestions?

Instead of trying model.load(), use load_model() provided by Keras to load the model you saved with model.save():
from keras.models import load_model
model = load_model(filepath)
You can also save the architecture as JSON using model.to_json() and rebuild it with model_from_json(), as sketched below.
You can find more ways to save and load a model in the Keras documentation.
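A minimal sketch of the JSON round trip mentioned above (the file names are hypothetical; to_json() stores only the architecture, so the weights and compile settings have to be handled separately):
from keras.models import model_from_json

# Save the architecture (layers, activations, input shapes) as JSON.
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model_weights.h5')  # weights are not part of the JSON

# Later: rebuild the untrained model, restore the weights, and recompile.
with open('model.json') as f:
    restored = model_from_json(f.read())
restored.load_weights('model_weights.h5')
restored.compile(loss='binary_crossentropy', optimizer='adam',
                 metrics=['accuracy'])  # compile settings are not saved either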

model.save() will do the trick to save the model. To load it, use from keras.models import load_model and then model = load_model(model_name).

Related

Using Tensorflow 2.0 and eager execution without Keras

So this question might stem from a lack of knowledge about TensorFlow, but I am trying to build a multilayer perceptron with TensorFlow 2.0, without Keras.
The reason is that it is a requirement for my machine learning course that we do not use Keras. Why, you might ask? I am not sure.
I have already implemented our model in TensorFlow 2.0 using Keras, and now I want to do the exact same thing without Keras.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD, Adam

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=784))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)  # defined but unused; Adam is used below
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])
X_train = X[:7000]
y_train = tf.keras.utils.to_categorical(y[:7000], num_classes=5)
X_dev = X[7000:]
y_dev = tf.keras.utils.to_categorical(y[7000:], num_classes=5)
model.fit(X_train, y_train,
          epochs=100,
          batch_size=128)
score = model.evaluate(X_dev, y_dev, batch_size=128)
print(score)
Here is my problem: whenever I look up the documentation on TensorFlow 2.0, even the guides on custom training use Keras.
As I understand it, placeholders and sessions are a thing of the past in TensorFlow 2.0, so I am a bit unsure of how to structure this.
I can make tensor objects, and I have the impression that I need to use eager execution and gradient tape, but I am still unsure of how to put these things together.
Now my question is: where should I look to get a better understanding? Which direction has the greatest descent?
Please do tell me if I am doing this Stack Overflow post wrong. It is my first time here.
As @Daniel Möller stated, there are these tutorials for custom training and custom layers on the official TensorFlow page. As stated on the custom training page:
This tutorial used tf.Variable to build and train a simple linear model.
There is also this blog post that creates custom layers and training without the Keras API. You can check this code on Google Colab, which uses CIFAR-10 with custom layers and training in the same manner.
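As a rough sketch of the idea (the shapes match the Keras model above; the variable and function names here are made up for illustration), a training step without Keras boils down to tf.Variable weights, a forward pass, and tf.GradientTape:
import tensorflow as tf

# Weights as plain tf.Variable objects (784 inputs, 64 hidden, 5 classes).
W1 = tf.Variable(tf.random.normal([784, 64], stddev=0.1))
b1 = tf.Variable(tf.zeros([64]))
W2 = tf.Variable(tf.random.normal([64, 5], stddev=0.1))
b2 = tf.Variable(tf.zeros([5]))
variables = [W1, b1, W2, b2]

def forward(x):
    h = tf.nn.relu(tf.matmul(x, W1) + b1)
    return tf.matmul(h, W2) + b2  # logits

optimizer = tf.optimizers.SGD(learning_rate=0.01)

def train_step(x, y_onehot):
    # Record the forward pass so gradients can be computed afterwards.
    with tf.GradientTape() as tape:
        logits = forward(x)
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y_onehot,
                                                    logits=logits))
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss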

Keras: model.fit() and model.fit_generator() return history objects. How do I get Keras models?

I'm doing a guided RNN project. I'm using a textbook to guide me, but I'm doing a lot of things on my own. I've encountered an issue stemming from the fact that history, below, is not a Keras model but rather a History object.
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
from keras.layers import LSTM
model = Sequential()
model.add(layers.Flatten(input_shape=(7,data.shape[-1])))
model.add(layers.Dense(32,activation='relu'))
model.add(layers.Dense(1))
val_steps = 99999//20
model.compile(optimizer=RMSprop(),loss='mae')
history = model.fit_generator(trainGen,
                              steps_per_epoch=250,
                              epochs=20,
                              validation_data=valGen,
                              validation_steps=val_steps,
                              use_multiprocessing=False)
The error occurs when I run the line below, because history is a History object. Is there a way to extract a Keras model from it? Thank you in advance.
predictions = history.predict(testData)
Sorry I can't comment yet. Why are you calling predict on the history and not the model itself?
predictions = model.predict(testData)
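For context, fit() and fit_generator() return a History object that records the per-epoch metrics, while the trained network stays in the model variable. A short sketch (testData is the variable from the question):
# The History object holds the training curves, not the network.
print(history.history.keys())       # e.g. dict_keys(['loss', 'val_loss'])
print(history.history['loss'][-1])  # training loss of the final epoch

# Predictions come from the model itself.
predictions = model.predict(testData)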

Getting 'can't pickle _thread.RLock objects' when trying to store a neural network

I am currently training a neural network and am trying to store the trained model for future use. The model is based on Sequential from Keras (see below). I am using joblib.dump(model, output_file_gen) to store it. However, I get the error message:
TypeError: can't pickle _thread.RLock objects.
I have looked at some Stack Overflow posts regarding this error message and it seems to relate to multithreading. I am not sure what happens in the model, but maybe somebody can give me advice on how to store it, either by taking steps to get rid of this error or by suggesting a better route to store a neural network.
NN setup is included below:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU, BatchNormalization, Reshape

model = Sequential()
model.add(Dense(256, input_dim=self.latent_dim))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(1024))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(np.prod(self.img_shape), activation='tanh'))
model.add(Reshape(self.img_shape))
It is not recommended to use pickle or cPickle to save a Keras model, and joblib.dump() relies on pickling under the hood, which is the cause of the error here.
You can use model.save(filepath) to save the model into a single HDF5 file which will contain:
the architecture of the model, allowing you to re-create the model
the weights of the model
the training configuration (loss, optimizer)
the state of the optimizer, allowing you to resume training exactly where you left off.
You can then use keras.models.load_model(filepath) to reinstantiate/reload your model.
The above can use a lot of disk space, so you can alternatively save only the model weights; see the Keras documentation for more details and the sketch below.
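A minimal sketch of both routes (the file names here are hypothetical):
from keras.models import load_model

# Full save: architecture + weights + optimizer state in one HDF5 file.
model.save('generator.h5')
model = load_model('generator.h5')  # no pickling involved

# Lighter alternative: weights only; the architecture must be
# rebuilt in code before calling load_weights().
model.save_weights('generator_weights.h5')
model.load_weights('generator_weights.h5')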

ValueError: You are trying to load a weight file containing 16 layers into a model with 0 layers

I came across some answers saying that pretrained model weight files are automatically downloaded to the .keras/models/ directory when we declare the model, e.g. vgg = VGG16(weights='imagenet'). I managed to locate the file in that directory and copied it to my working directory. When I try to load the model, the script returns the error
ValueError: You are trying to load a weight file containing 16 layers into
a model with 0 layers.
What should I do?
My source code is as follows
model=Sequential()
model.add(Concatenate([image_model, language_model]))
model.add(LSTM(1000, return_sequences=False))
model.add(Dense(vocab_size))
model.add(Activation('softmax'))
model.load_weights('./models/vgg16_weights.h5')
model.compile(loss='categorical_crossentropy', optimizer=Nadam(),
              metrics=['accuracy'])
model.summary()
model.fit([images, captions], next_words, batch_size=512, epochs=50)
I think in the newer versions of Keras you don't need to define the model architecture and load the weights yourself; instead you can directly load the VGG16 model from Keras like this:
from keras.applications.vgg16 import VGG16
Other pretrained models are available in keras.applications.
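For example (a sketch; the summary call just confirms the loaded architecture):
from keras.applications.vgg16 import VGG16

# Downloads the ImageNet weights to ~/.keras/models/ on first use,
# so there is no need to call load_weights() manually.
vgg = VGG16(weights='imagenet')
vgg.summary()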

tf.keras.models.save_model and optimizer warning

I created a Sequential model using tf.keras as follows:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_dim=4))
model.add(tf.keras.layers.Dense(3, activation=tf.nn.softmax))
opt = tf.train.AdamOptimizer(learning_rate=0.001)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
After that, I created a training process using train_on_batch:
EPOCHS = 50
for epoch in range(EPOCHS):
    for metrics, labels in dataset:
        # Calculate training loss and accuracy
        tr_loss, tr_accuracy = model.train_on_batch(metrics, labels)
When I try to save the model, I receive a warning. I can't understand why, because I included the optimizer in model.compile:
tf.keras.models.save_model(
    model,
    "./model/iris_model.h5",
    overwrite=True,
    include_optimizer=True
)
WARNING:tensorflow:TensorFlow optimizers do not make it possible to access optimizer attributes or optimizer state after instantiation. As a result, we cannot save the optimizer as part of the model save file.You will have to compile your model again after loading it. Prefer using a Keras optimizer instead (see keras.io/optimizers).
The TF version I used is 1.9.0-rc2.
As the warning says, TensorFlow optimizers (from tf.train) cannot be saved when saving the model. Instead, use the optimizers provided by Keras:
opt = tf.keras.optimizers.Adam(lr=0.001)
An optimizer like tf.keras.optimizers.Adam() will be saved on model.save(), while tf.train.AdamOptimizer() will not.
At the time of writing, some official TensorFlow tutorials use tf.train.* optimizers, but I strongly believe selecting tf.keras.optimizers.* is the best way to go.
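Put together, the fix is a one-line change to the compile step (a sketch; the model and file path are the ones from the question):
import tensorflow as tf

# A Keras optimizer instead of tf.train.AdamOptimizer, so that its
# state can be serialized with include_optimizer=True.
opt = tf.keras.optimizers.Adam(lr=0.001)
model.compile(optimizer=opt, loss="categorical_crossentropy",
              metrics=["accuracy"])

tf.keras.models.save_model(model, "./model/iris_model.h5",
                           overwrite=True, include_optimizer=True)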
