I am a beginner with Keras and I am writing a simple program for MNIST. When I try to load the saved model, I get this error:
ValueError: You are trying to load a weight file containing 2 layers into a model with 0 layers.
This is my code:
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
#fixing random number seed
np.random.seed(7)
(X_train, Y_train),(X_test, Y_test) = mnist.load_data("D:\MY CODE PROJECT\CNN\datasets\mnist.npz")
num_pixel = X_train.shape[1] * X_train.shape[2]
#converting image to vector
X_train = X_train.reshape(X_train.shape[0],num_pixel).astype('float32')
X_test = X_test.reshape(X_test.shape[0],num_pixel).astype('float32')
# Normalizing Input from 0-255 to 0-1
X_train = X_train/255
X_test = X_test/255
#As output is multiclass, change the output labels to one-hot encoded form
Y_train = np_utils.to_categorical(Y_train)
Y_test = np_utils.to_categorical(Y_test)
#defining simple Neural Network with one hidden layer
num_classes = Y_test.shape[1]
#creating model
model = Sequential()
model.add(Dense(num_pixel,activation = 'relu',kernel_initializer='normal'))
model.add(Dense(num_classes, kernel_initializer='normal',activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
#Fitting the model
model.fit(X_train,Y_train,batch_size=200,epochs=10,verbose=2,validation_data=(X_test,Y_test))
scores = model.evaluate(X_test,Y_test,verbose=0)
#Printing Error
print("baseline Error: %f" %(100-scores[1]*100))
model.save('mnist_nn_keras.h5')
del model
model = load_model('mnist_nn_keras.h5')
Can anyone explain what's wrong in the code? I am using Keras version 2.2.0.
You need to add input_shape to your model when adding the first layer instance; without it the Sequential model has no built layers, so Keras reports 0 layers when load_model tries to load the weight file. Read the documentation for the add function - it describes exactly this requirement.
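A minimal sketch of the fix, assuming the same two-layer network from the question (the 784/10 sizes below simply restate what the question computes as num_pixel and num_classes):

from keras.models import Sequential, load_model
from keras.layers import Dense

num_pixel, num_classes = 784, 10   # as computed in the question's code

model = Sequential()
# declare the input size on the first layer so the model is actually built
model.add(Dense(num_pixel, input_dim=num_pixel, activation='relu', kernel_initializer='normal'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# ... fit as before ...
model.save('mnist_nn_keras.h5')
del model
model = load_model('mnist_nn_keras.h5')   # now loads without the 0-layers error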
Related
I'm trying to solve the spiral problem using Keras with 3 spirals instead of 2, using a strategy similar to the one I used for 2 spirals. The problem is that my loss now grows exponentially instead of decreasing, with the same parameters I used for 2 spirals (the network has 3 outputs instead of a binary one). I'm not sure what is going wrong; I have tried various epochs, learning rates, and batch sizes. Can anyone help?
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.optimizers import RMSprop
from Question1.utils import create_neural_network, create_test_data
EPOCHS = 250
BATCH_SIZE = 20
def main():
    df = three_spirals(1000)

    # Set-up data
    x_train = df[['x-coord', 'y-coord']].values
    y_train = df['class'].values

    # Don't need y_test, can inspect visually if it worked or not
    x_test = create_test_data()

    # Scale data
    scaler = MinMaxScaler()
    scaler.fit(x_train)
    x_train = scaler.transform(x_train)
    x_test = scaler.transform(x_test)

    relu_model = create_neural_network(layers=3,
                                       neurons=[40, 40, 40],
                                       activation='relu',
                                       optimizer=RMSprop(learning_rate=0.001),
                                       loss='categorical_crossentropy',
                                       outputs=3)

    # Train networks
    relu_model.fit(x=x_train, y=y_train, epochs=EPOCHS, verbose=1, batch_size=BATCH_SIZE)

    # Predictions on test data
    relu_predictions = relu_model.predict_classes(x_test)

    models = [relu_model]
    test_predictions = [relu_predictions]

    # Plot
    plot_data(models, test_predictions)
And here is the create_neural_network function:
def create_neural_network(layers, neurons, activation, optimizer, loss, outputs=1):
    if layers != len(neurons):
        raise ValueError("Number of layers doesn't match the number of neuron layers.")

    model = Sequential()
    for i in range(layers):
        model.add(Dense(neurons[i], activation=activation))

    # Output
    if outputs == 1:
        model.add(Dense(outputs))
    else:
        model.add(Dense(outputs, activation='softmax'))

    model.compile(optimizer=optimizer,
                  loss=loss)

    return model
I have worked it out: for the output data it isn't like binary classification, where you only need one column. For multi-class classification you need a column for each class you want to predict, so having a single y column with values 0, 1, 2 was incorrect. The correct approach is to have y0, y1, y2, each of which is 1 if the sample belongs to that class and 0 otherwise.
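A minimal sketch of that change, assuming the labels in df['class'] are the integers 0, 1, 2: convert them to one-hot vectors before fitting so they match the 3-unit softmax output and the categorical_crossentropy loss (alternatively, keep the integer labels and compile with loss='sparse_categorical_crossentropy').

from tensorflow.keras.utils import to_categorical

y_train = df['class'].values                       # integer labels 0, 1, 2
y_train = to_categorical(y_train, num_classes=3)   # shape (n_samples, 3), one column per class

relu_model.fit(x=x_train, y=y_train, epochs=EPOCHS, verbose=1, batch_size=BATCH_SIZE)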
I'm new to PyTorch and I'm using the classic MNIST dataset for image classification. While fitting the model, I'm getting this error:
NotImplementedError: uint8
I'm using fastai's library class as a wrapper for all training and validation data, and a very basic single-layer neural network. The code I'm using is as follows:
from keras.datasets import mnist
import matplotlib.pyplot as plt
from fastai.metrics import *
from fastai.model import *
from fastai.dataset import *
import torch.nn as nn
(x_train, y_train), (x_valid, y_valid) = mnist.load_data()
net = nn.Sequential(
nn.Linear(784,10),
nn.Softmax()).cuda()
md = ImageClassifierData.from_arrays('/data/mnist',
(x_train,y_train),
(x_valid, y_valid))
loss = nn.NLLLoss()
metrics = [accuracy]
opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
fit(net, md, n_epochs=3, crit=loss, opt=opt, metrics=metrics)
Can someone tell me what this error is about and how to resolve it?
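A minimal sketch of a likely fix, assuming the error comes from the raw uint8 arrays returned by mnist.load_data: cast the images to float32 (scaled to [0, 1]) and flatten them to 784 features so they match the nn.Linear(784, 10) layer, and make the labels int64, before building ImageClassifierData.

import numpy as np

# cast the uint8 images to float32, scale to [0, 1], and flatten to 784 features
x_train = x_train.reshape(-1, 784).astype(np.float32) / 255
x_valid = x_valid.reshape(-1, 784).astype(np.float32) / 255
y_train = y_train.astype(np.int64)
y_valid = y_valid.astype(np.int64)

md = ImageClassifierData.from_arrays('/data/mnist',
                                     (x_train, y_train),
                                     (x_valid, y_valid))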
I'm getting a numpy shape error when I use the predict function of a Keras estimator. I build, evaluate, and then retrain the model using the following code:
import pandas as pd
import sqlalchemy as sqla
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from keras.utils.np_utils import to_categorical
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
# Connect to to the DB and retrieve the iris table
con = sqla.create_engine('postgresql://tristan:sebens@db:5432/tristan')
con.connect()
table_name = "iris"
schema = "public"
iris = pd.read_sql_table(table_name, con, schema=schema)
iris.head()
iris_ds = iris.values # Convert the table to a numpy array
X = iris_ds[:, 0:4].astype(float) # Slice the descriptive features into a numpy array
Y = iris_ds[:, 4] # Slice the labels away as their own numpy array
# The labels are encoded as strings, so we need to encode them
# as numbers that can be output by an ANN
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = to_categorical(encoded_Y)
# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(8, input_dim=4, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
seed = 7
# Train the model:
# First we define the model as a classifier. This will affect the process used to train it
estimator = KerasClassifier(build_fn=baseline_model, epochs=200, batch_size=5, verbose=0)
# Honestly not totally sure what this is, but it has to do with splitting the training/evaluation data in
# a way that gives us a more realistic metric of the model's accuracy
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
# Now that we have our classifier and our data pipeline defined, we can begin the training process
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
# If we like our accuracy, then we can train the model for real
# Evaluating the model actually evaluates a clone of the model, so now we need to train the model again
estimator.fit(X, dummy_y)
And this is where the trouble is. I try to make a test prediction:
# Let's make a test prediction with our model
x = X[0]
estimator.predict(x)
And I get an input shape error:
ValueError: Error when checking input: expected dense_21_input to have shape (4,) but got array with shape (1,)
I'm at a loss. How can the input have the wrong shape if it's literally a member of the training dataset?
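A minimal sketch of the likely cause (my reading, not stated in the original question): predict expects a 2-D batch of shape (n_samples, 4), but X[0] is a 1-D array of shape (4,), which Keras interprets as four samples with one feature each. Passing a batch containing a single sample avoids the error.

import numpy as np

x = X[0]                       # shape (4,)
x = np.expand_dims(x, axis=0)  # shape (1, 4): a batch containing a single sample
print(estimator.predict(x))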
I have built an autoencoder (1 encoder 8:5, 1 decoder 5:8) which takes the Pima Indians Diabetes dataset (https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv) and reduces its dimension from 8 to 5. I would now like to use these reduced features to classify the data with an MLP, but I have some problems with the basic understanding of the architecture: how do I use the weights of the autoencoder and feed them into the MLP? I have checked these threads - https://github.com/keras-team/keras/issues/91 and https://www.codementor.io/nitinsurya/how-to-re-initialize-keras-model-weights-et41zre2g. Which weight matrix should I consider, the one for the encoder part or the decoder part? When I add the layers for the MLP, how do I initialise the weights with these saved weights? I am not getting the exact syntax. Also, should my MLP start with 5 neurons, since my reduced dimension is 5? What are the possible dimensions of the MLP for this binary classification problem? If anyone could elaborate, please do.
The deep autoencoder code is as follows:
# from keras.models import Sequential
from keras.layers import Input, Dense
from keras.models import Model
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import numpy
# Data pre-processing...
# load pima indians dataset
dataset = numpy.loadtxt("C:/Users/dibsa/Python Codes/pima.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# Split data into training and testing datasets
x_train, x_test, y_train, y_test = train_test_split(
X, Y, test_size=0.2, random_state=42)
# scale the data within [0-1] range
scalar = MinMaxScaler()
x_train = scalar.fit_transform(x_train)
x_test = scalar.fit_transform(x_test)
# Autoencoder code begins here...
encoding_dim1 = 5 # size of encoded representations
encoding_dim2 = 3 # size of encoded representations in the bottleneck layer
# this is our input placeholder
input_data = Input(shape=(8,))
# "encoded" is the first encoded representation of the input
encoded = Dense(encoding_dim1, activation='relu', name='encoder1')(input_data)
# "enc" is the second encoded representation of the input
enc = Dense(encoding_dim2, activation='relu', name='encoder2')(encoded)
# "dec" is the lossy reconstruction of the input
dec = Dense(encoding_dim1, activation='sigmoid', name='decoder1')(enc)
# "decoded" is the final lossy reconstruction of the input
decoded = Dense(8, activation='sigmoid', name='decoder2')(dec)
# this model maps an input to its reconstruction
autoencoder = Model(inputs=input_data, outputs=decoded)
autoencoder.compile(optimizer='sgd', loss='mse')
# training
autoencoder.fit(x_train, x_train,
epochs=300,
batch_size=10,
shuffle=True,
validation_data=(x_test, x_test)) # need more tuning
# test the autoencoder by encoding and decoding the test dataset
reconstructions = autoencoder.predict(x_test)
print('Original test data')
print(x_test)
print('Reconstructed test data')
print(reconstructions)
The stacked autoencoder code is as follows:
# from keras.models import Sequential
from keras.layers import Input, Dense
from keras.models import Model
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import numpy
# Data pre-processing...
# load pima indians dataset
dataset = numpy.loadtxt("C:/Users/dibsa/Python Codes/pima.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# Split data into training and testing datasets
x_train, x_test, y_train, y_test = train_test_split(
X, Y, test_size=0.2, random_state=42)
# scale the data within [0-1] range
scalar = MinMaxScaler()
x_train = scalar.fit_transform(x_train)
x_test = scalar.fit_transform(x_test)
# Autoencoder code goes here...
encoding_dim1 = 5 # size of encoded representations
encoding_dim2 = 3 # size of encoded representations in the bottleneck layer
# this is our input placeholder
input_data1 = Input(shape=(8,))
# the first encoded representation of the input
encoded1 = Dense(encoding_dim1, activation='relu',
name='encoder1')(input_data1)
# the first lossy reconstruction of the input
decoded1 = Dense(8, activation='sigmoid', name='decoder1')(encoded1)
# this model maps an input to its first layer of reconstructions
autoencoder1 = Model(inputs=input_data1, outputs=decoded1)
# this is the first encoder model
enc1 = Model(inputs=input_data1, outputs=encoded1)
autoencoder1.compile(optimizer='sgd', loss='mse')
# training
autoencoder1.fit(x_train, x_train, epochs=300,
batch_size=10, shuffle=True,
validation_data=(x_test, x_test))
FirstAEoutput = autoencoder1.predict(x_train)
input_data2 = Input(shape=(encoding_dim1,))
# the second encoded representations of the input
encoded2 = Dense(encoding_dim2, activation='relu',
name='encoder2')(input_data2)
# the final lossy reconstruction of the input
decoded2 = Dense(encoding_dim1, activation='sigmoid',
name='decoder2')(encoded2)
# this model maps an input to its second layer of reconstructions
autoencoder2 = Model(inputs=input_data2, outputs=decoded2)
# this is the second encoder
enc2 = Model(inputs=input_data2, outputs=encoded2)
autoencoder2.compile(optimizer='sgd', loss='mse')
# training
autoencoder2.fit(FirstAEoutput, FirstAEoutput, epochs=300,
batch_size=10, shuffle=True)
# this is the overall autoencoder mapping an input to its final reconstructions
autoencoder = Model(inputs=input_data1, outputs=encoded2)
# test the autoencoder by encoding and decoding the test dataset
reconstructions = autoencoder.predict(x_test)
print('Original test data')
print(x_test)
print('Reconstructed test data')
print(reconstructions)
If your decoder is trying to reconstruct the input, then it doesn't really make sense to me to attach your classifier to its output. I mean, why not just attach the classifier to the input in the first place? So if you are set on using an autoencoder, I'd say it's pretty clear that you should attach your classifier to the output of the encoder pipe.
I'm not quite sure what you mean by "use the weights of the autoencoder and feed them into the mlp". You don't feed a layer with another layer's weights, but with its output signal. This is pretty easy to do in Keras. Let's say you defined your autoencoder and trained it like this:
from keras.layers import Input, Dense
from keras.models import Model, save_model, load_model

x = Input(shape=(8,))
y = Dense(5, activation='sigmoid', name='encoder')(x)
y = Dense(8, name='decoder')(y)

ae = Model(inputs=x, outputs=y)
ae.compile(loss='mse', ...)
ae.fit(x_train, x_train, ...)

save_model(ae, './autoencoder.h5')
Then you can attach a classifying layer at the encoder and create a classifier model with the following code:
# load the model from the disk if you
# are in a different execution.
ae = load_model('./autoencoder.h5')
y = ae.get_layer('encoder').output
y = Dense(1, activation='sigmoid', name='predictions')(y)
classifier = Model(inputs=ae.inputs, outputs=y)
classifier.compile(loss='binary_crossentropy', ...)
classifier.fit(x_train, y_train, ...)
That's it, really. The classifier model will now have the encoder layer of the ae model as its first hidden layer, followed by a sigmoid decision layer named predictions.
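One design choice worth noting (my addition, not part of the original answer): if you want the classifier to reuse the encoder's learned representation unchanged, freeze the encoder layer before compiling; otherwise it will be fine-tuned together with the new predictions layer.

# optionally freeze the pretrained encoder so only the new layer is trained
ae.get_layer('encoder').trainable = False
classifier.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])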
If what you are really trying to do is to use the weights learned by the autoencoder to initialize the classifier's weights (I'm not positive I recommend this approach): you can take the weight matrices with layer.get_weights(), prune them (because the encoder has 5 units and the classifier only has 1), and finally set the classifier weights. Something along the following lines:
w, b = ae.get_layer('encoder').get_weights()
# remove all units except by one.
neuron_to_keep = 2
w = w[:, neuron_to_keep:neuron_to_keep + 1]
b = b[neuron_to_keep:neuron_to_keep + 1]
classifier.get_layer('predictions').set_weights([w, b])
Idavid, this is for your reference - MLP using autoencoder-reduced features. I need to understand which figure is the correct one. Sorry, I had to upload the picture as an answer, as there was no option to upload an image via a comment. I think you are saying figure B is the correct one. Here is the code snippet for the same; please let me know if I'm going about it right.
# This is a mlp classification code with features reduced by an Autoencoder
# from keras.models import Sequential
from keras.layers import Input, Dense
from keras.models import Model
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import numpy
# Data pre-processing...
# load pima indians dataset
dataset = numpy.loadtxt("C:/Users/dibsa/Python Codes/pima.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# Split data into training and testing datasets
x_train, x_test, y_train, y_test = train_test_split(
X, Y, test_size=0.2, random_state=42)
# scale the data within [0-1] range
scalar = MinMaxScaler()
x_train = scalar.fit_transform(x_train)
x_test = scalar.fit_transform(x_test)
# Autoencoder code goes here...
encoding_dim = 5 # size of our encoded representations
# this is our input placeholder
input_data = Input(shape=(8,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu', name='encoder')(input_data)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(8, activation='sigmoid', name='decoder')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(inputs=input_data, outputs=decoded)
autoencoder.compile(optimizer='sgd', loss='mse')
# training
autoencoder.fit(x_train, x_train,
epochs=300,
batch_size=10,
shuffle=True,
validation_data=(x_test, x_test)) # need more tuning
# test the autoencoder by encoding and decoding the test dataset
reconstructions = autoencoder.predict(x_test)
print('Original test data')
print(x_test)
print('Reconstructed test data')
print(reconstructions)
# MLP code goes here...
# create model
x = autoencoder.get_layer('encoder').output
# h = Dense(3, activation='relu', name='hidden')(x)
y = Dense(1, activation='sigmoid', name='predictions')(x)
classifier = Model(inputs=autoencoder.inputs, outputs=y)
# Compile model
classifier.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
# Fit the model
classifier.fit(x_train, y_train, epochs=250, batch_size=10)
print('Now making predictions')
predictions = classifier.predict(x_test)
# round predictions
rounded_predicted_classes = [round(x[0]) for x in predictions]
temp = sum(y_test == rounded_predicted_classes)
acc = temp/len(y_test)
print(acc)
I have the following code, using Keras Scikit-Learn Wrapper:
from keras.models import Sequential
from sklearn import datasets
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn import preprocessing
import pickle
import numpy as np
import json
def classifier(X, y):
    """
    Description of classifier
    """
    NOF_ROW, NOF_COL = X.shape

    def create_model():
        # create model
        model = Sequential()
        model.add(Dense(12, input_dim=NOF_COL, init='uniform', activation='relu'))
        model.add(Dense(6, init='uniform', activation='relu'))
        model.add(Dense(1, init='uniform', activation='sigmoid'))
        # Compile model
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    # evaluate using 10-fold cross validation
    seed = 7
    np.random.seed(seed)
    model = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10, verbose=0)
    return model
def main():
    """
    Description of main
    """
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    X = preprocessing.scale(X)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
    model_tt = classifier(X_train, y_train)
    model_tt.fit(X_train, y_train)

    #--------------------------------------------------
    # This fails
    #--------------------------------------------------
    filename = 'finalized_model.sav'
    pickle.dump(model_tt, open(filename, 'wb'))

    # load the model from disk
    loaded_model = pickle.load(open(filename, 'rb'))
    result = loaded_model.score(X_test, y_test)
    print(result)

    #--------------------------------------------------
    # This also fails
    #--------------------------------------------------
    # from keras.models import load_model
    # model_tt.save('test_model.h5')

    #--------------------------------------------------
    # This works OK
    #--------------------------------------------------
    # print model_tt.score(X_test, y_test)
    # print model_tt.predict_proba(X_test)
    # print model_tt.predict(X_test)

    # Output of predict_proba
    # 2nd column is the probability that the prediction is 1
    # this value is used as the final score, which can be used
    # with other methods as comparison
    # [ [ 0.25311464  0.74688536]
    #   [ 0.84401423  0.15598579]
    #   [ 0.96047372  0.03952631]
    #   ...,
    #   [ 0.25518912  0.74481088]
    #   [ 0.91467732  0.08532269]
    #   [ 0.25473493  0.74526507]]

    # Output of predict
    # [[1]
    #  [0]
    #  [0]
    #  ...,
    #  [1]
    #  [0]
    #  [1]]

if __name__ == '__main__':
    main()
As noted in the code, it fails at this line:
pickle.dump(model_tt, open(filename, 'wb'))
With this error:
pickle.PicklingError: Can't pickle <function create_model at 0x101c09320>: it's not found as __main__.create_model
How can I get around it?
Edit 1 : Original answer about saving model
With HDF5 :
# saving model
json_model = model_tt.model.to_json()
open('model_architecture.json', 'w').write(json_model)
# saving weights
model_tt.model.save_weights('model_weights.h5', overwrite=True)
# loading model
from keras.models import model_from_json
model = model_from_json(open('model_architecture.json').read())
model.load_weights('model_weights.h5')
# dont forget to compile your model
model.compile(loss='binary_crossentropy', optimizer='adam')
Edit 2 : full code example with iris dataset
# Train model and make predictions
import numpy
import pandas
from keras.models import Sequential, model_from_json
from keras.layers import Dense
from keras.utils import np_utils
from sklearn import datasets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
iris = datasets.load_iris()
X, Y, labels = iris.data, iris.target, iris.target_names
X = preprocessing.scale(X)
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
y = np_utils.to_categorical(encoded_Y)
def build_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def save_model(model):
    # saving model
    json_model = model.to_json()
    open('model_architecture.json', 'w').write(json_model)
    # saving weights
    model.save_weights('model_weights.h5', overwrite=True)

def load_model():
    # loading model
    model = model_from_json(open('model_architecture.json').read())
    model.load_weights('model_weights.h5')
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=seed)
# build
model = build_model()
model.fit(X_train, Y_train, nb_epoch=200, batch_size=5, verbose=0)
# save
save_model(model)
# load
model = load_model()
# predictions
predictions = model.predict_classes(X_test, verbose=0)
print(predictions)
# reverse encoding
for pred in predictions:
print(labels[pred])
Please note that I used Keras only, not the wrapper. It only adds some complexity to something simple. Also, the code is deliberately not factored, so you can see the whole picture.
Also, you said you want to output 1 or 0. That is not possible with this dataset, because you have 3 output dims and classes (Iris-setosa, Iris-versicolor, Iris-virginica). If you had only 2 classes, then your output dim and classes would be 0 or 1 using a sigmoid output function.
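A short illustration of that point (my addition, with hypothetical layer sizes): the output layer must match the number of classes, so a 3-class problem needs 3 softmax units with one-hot targets, while a 2-class problem can use a single sigmoid unit with plain 0/1 targets.

from keras.models import Sequential
from keras.layers import Dense

# 3-class problem (e.g. iris): 3 softmax units, one-hot targets, categorical_crossentropy
multi = Sequential()
multi.add(Dense(8, input_dim=4, activation='relu'))
multi.add(Dense(3, activation='softmax'))
multi.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# 2-class problem: a single sigmoid unit, 0/1 targets, binary_crossentropy
binary = Sequential()
binary.add(Dense(8, input_dim=4, activation='relu'))
binary.add(Dense(1, activation='sigmoid'))
binary.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])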
Just adding to gaarv's answer - if you don't require the separation between the model structure (model.to_json()) and the weights (model.save_weights()), you can use one of the following:
Use the built-in keras.models.save_model and keras.models.load_model, which store everything together in a single HDF5 file.
Use pickle to serialize the Model object (or any class that contains references to it) to a file/network/whatever. Unfortunately, Keras doesn't support pickle by default. You can use my patchy solution that adds this missing feature. Working code is here: http://zachmoshe.com/2017/04/03/pickling-keras-models.html
Another great alternative is to use callbacks when you fit your model. Specifically the ModelCheckpoint callback, like this:
from keras.callbacks import ModelCheckpoint
#Create instance of ModelCheckpoint
chk = ModelCheckpoint("myModel.h5", monitor='val_loss', save_best_only=False)
#add that callback to the list of callbacks to pass
callbacks_list = [chk]
#create your model
model_tt = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10)
#fit your model with your data. Pass the callback(s) here
model_tt.fit(X_train,y_train, callbacks=callbacks_list)
This will save the model after each epoch to the myModel.h5 file. This provides great benefits, as you are able to stop your training when you desire (for example, when you see it has started to overfit), and still retain the previous training.
Note that this saves both the structure and the weights in the same hdf5 file (as shown by Zach), so you can then load your model using keras.models.load_model.
If you want to save only your weights separately, you can then use the save_weights_only=True argument when instantiating your ModelCheckpoint, enabling you to load your model as explained by Gaarv. Extracting from the docs:
save_weights_only: if True, then only the model's weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)).
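For example (a small sketch building on the snippet above), saving only the weights after each epoch would look like this; the architecture then has to be rebuilt separately before calling load_weights, as in Gaarv's answer.

from keras.callbacks import ModelCheckpoint

# save only the weights after each epoch
chk = ModelCheckpoint("myModelWeights.h5", monitor='val_loss', save_weights_only=True)
model_tt.fit(X_train, y_train, callbacks=[chk])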
The accepted answer is too complicated. You can fully save and restore every aspect of your model in a .h5 file. Straight from the Keras FAQ:
You can use model.save(filepath) to save a Keras model into a single
HDF5 file which will contain:
the architecture of the model, allowing to re-create the model
the weights of the model
the training configuration (loss, optimizer)
the state of the optimizer, allowing to resume training exactly where you left off.
You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).
And the corresponding code:
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model identical to the previous one
model = load_model('my_model.h5')
In case your Keras wrapper model is part of a scikit-learn pipeline, save the steps of the pipeline separately.
import joblib
from sklearn.pipeline import Pipeline
from tensorflow import keras
# pass the create_cnn_model function into wrapper
cnn_model = keras.wrappers.scikit_learn.KerasClassifier(build_fn=create_cnn_model)
# create pipeline
cnn_model_pipeline_estimator = Pipeline([
('preprocessing_pipeline', pipeline_estimator),
('clf', cnn_model)
])
# train model
final_model = cnn_model_pipeline_estimator.fit(
X, y, clf__batch_size=32, clf__epochs=15)
# collect the preprocessing pipeline & model separately
pipeline_estimator = final_model.named_steps['preprocessing_pipeline']
clf = final_model.named_steps['clf']
# store pipeline and model separately
joblib.dump(pipeline_estimator, open('path/to/pipeline.pkl', 'wb'))
clf.model.save('path/to/model.h5')
# load pipeline and model
pipeline_estimator = joblib.load('path/to/pipeline.pkl')
model = keras.models.load_model('path/to/model.h5')
new_example = [[...]]
# transform new data with pipeline & use model for prediction
transformed_data = pipeline_estimator.transform(new_example)
prediction = model.predict(transformed_data)