How to save Scikit-Learn-Keras model in Keras - python

I want to save the trained model into my disk. As far as I know, I can save the model using the following code:
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
However, I am using KerasClassifier with KFold cross-validation, so the model is fitted in the background. My code is:
def baseline_model(optimizer='adam', init='random_uniform'):
    # create model
    model = Sequential()
    model.add(Dense(40, input_dim=18260, activation="relu", kernel_initializer=init))
    model.add(Dense(10, activation="sigmoid", kernel_initializer=init))
    model.add(Dense(4, activation="softmax", kernel_initializer=init))
    model.summary()
    # Compile model
    model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
estimator = KerasClassifier(build_fn=baseline_model, validation_split=0.33, epochs=100, batch_size=10, verbose=1)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, Y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
How can I save the trained model in this case?
Also, if I want to make predictions, how can I do that, given that I don't have a fitted model object?
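Since cross_val_score only returns the scores and throws away every estimator it fits, one hedged approach (assuming the keras.wrappers.scikit_learn.KerasClassifier set up above) is to refit the wrapper once on all the data after cross-validation and save the underlying Keras model, which the wrapper stores on its .model attribute:
# Hedged sketch: refit the wrapper on the full data, then save the inner Keras model.
estimator.fit(X, Y)
estimator.model.save("model.h5")

# In a later session, load the model and predict on new data (X_new is hypothetical).
from keras.models import load_model
model = load_model("model.h5")
predictions = model.predict(X_new)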

Related

Model.save and load giving different results

I want to train a model, save it, close the Python session, and then, in a new Python session, load the trained model and obtain the same accuracy. Currently, when I try this, the loaded model gives what look like random predictions, as though it had never been trained.
The issue does not appear when I use the saved model within the same session. But if I save a model in session 1, load it in session 2, and run inference on exactly the same data, the results differ.
Here is my save and load model.
Save Model:
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=train_padded.shape[1]))
model.add(Conv1D(48, 5, activation='relu', padding='valid'))
model.add(GlobalMaxPooling1D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(11, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 50
batch_size = 32
history = model.fit(train_padded, training_labels, shuffle=True,
                    epochs=epochs, batch_size=batch_size,
                    validation_split=0.2,
                    callbacks=[ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0001),
                               EarlyStopping(monitor='val_loss', mode='min', patience=2, verbose=1),
                               EarlyStopping(monitor='val_accuracy', mode='max', patience=5, verbose=1)])
model.save('model.h5')
scores = model.evaluate(train_padded, training_labels, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
84%
Load Model:
model=tf.keras.models.load_model("model.h5")
scores = model.evaluate(train_padded, training_labels, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
20%
To be more specific, the inference results from the session in which the model was trained are much better than the results from a different session using the same saved model.
The weights of both models are identical, yet the accuracy drops as shown above.
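One hedged guess at the cause, assuming train_padded is produced by a Keras Tokenizer that gets re-fit in the new session: if the tokenizer is rebuilt, the word-to-index mapping changes, so the saved weights no longer line up with the inputs even though the weights themselves are identical. Persisting the tokenizer alongside the model avoids this:
import pickle

# Save the fitted tokenizer next to the model ("tokenizer" is assumed to be the
# Tokenizer used to build train_padded; the filename is illustrative).
with open("tokenizer.pkl", "wb") as f:
    pickle.dump(tokenizer, f)

# In the new session, restore it and encode the text exactly as during training.
with open("tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)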

save a model with KerasPickleWrapper including data preprocessing steps

I would like to save a Keras model with KerasPickleWrapper in .sav format and send it to another person so they can run predictions.
I scaled my data, but the other party will not. I was wondering whether there is a way to wrap the scaling procedure with the model, so the other party does not need to handle the scaling (and inverse scaling) themselves before making predictions.
Here is my reasoning behind the question (I don't know if this assumption is correct): if I fit the model on scaled training data, it will not perform well when the other person runs predictions on unscaled data.
#scale data:
scaler.fit(X_train)
X_train=scaler.fit_transform(X_train)
Y_train=scaler.fit_transform(Y_train)
X_test=scaler.fit_transform(X_test)
Y_test=scaler.fit_transform(Y_test)
#Model
optimizer = keras.optimizers.Adam(lr=0.0001)
model = Sequential()
model.add(Dense(1, input_dim=1, activation='relu'))
model.add(Dense(10583, activation='relu'))
model.add(Dense(1, activation='linear'))
#compile model
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse'])
#wrap model
mw = KerasPickleWrapper(model)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
#Fit model
history= mw().fit(X_train_Xaxis, Y_train_Xaxis, epochs=100, batch_size=32, validation_split=0.2, validation_data=None, verbose=1, callbacks=[callback])
#Save Model
import pickle
filename = 'model.sav'
pickle.dump(mw, open(filename, 'wb'))
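A hedged sketch of one way to ship the preprocessing with the model, assuming separate fitted scalers for the inputs and the target (the snippet above reuses a single scaler, which would need to be split first): pickle the scalers next to the wrapped model, so the other party can transform the raw inputs and inverse-transform the predictions without refitting anything.
import pickle

# x_scaler / y_scaler are hypothetical fitted scalers for the inputs and the target;
# the filenames are illustrative.
pickle.dump(x_scaler, open("x_scaler.sav", "wb"))
pickle.dump(y_scaler, open("y_scaler.sav", "wb"))
pickle.dump(mw, open("model.sav", "wb"))

# On the receiving side: load everything, scale the raw inputs, predict, unscale.
x_scaler = pickle.load(open("x_scaler.sav", "rb"))
y_scaler = pickle.load(open("y_scaler.sav", "rb"))
mw = pickle.load(open("model.sav", "rb"))
predictions = y_scaler.inverse_transform(mw().predict(x_scaler.transform(X_raw)))
Alternatively, the scaler and the model can be chained in a scikit-learn Pipeline and the whole pipeline pickled, provided every step in it is picklable.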

How to obtain future outputs from a Keras Sequential model

I'm a beginner in Python and I'm interested in machine learning applied to finance. I have created a model to predict future prices from data read out of a CSV file. I built the neural network and got the loss very low, but I cannot extract the future outputs, or perhaps I just haven't created the necessary layers.
I would greatly appreciate help with this. Thanks in advance.
model = Sequential()
model.add(LSTM(256, input_shape=(1,1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(train_x, train_y, epochs=30, batch_size=1, verbose=1)
score = model.evaluate(train_x, train_y, verbose=0)
print('Keras model loss = ', score[0])
print('Keras model accuracy = ', score[1])
train_predictions = model.predict(train_x)
test_predictions = model.predict(test_x)
train_predictions = scaler.inverse_transform(train_predictions)
test_predictions = scaler.inverse_transform(test_predictions)
train_y = scaler.inverse_transform([train_y])
train_predict_plot = np.empty_like(scaled_data)
train_predict_plot[:, :] = np.nan
train_predict_plot[1:len(train_predictions)+1, :] = train_predictions
test_predict_plot = np.empty_like(scaled_data)
test_predict_plot[:, :] = np.nan
# assumed completion of the truncated line: test predictions follow the training window
test_predict_plot[len(train_predictions)+1:len(train_predictions)+1+len(test_predictions), :] = test_predictions
plt.plot(scaler.inverse_transform(scaled_data))
plt.plot(train_predict_plot)
plt.plot(test_predict_plot)
plt.show()
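To actually obtain values that lie beyond the data, a one-step model like this is usually rolled forward iteratively: each prediction is fed back in as the next input. A hedged sketch under that assumption, reusing model, scaler and scaled_data from above:
import numpy as np

# Roll the one-step model forward: each prediction becomes the next input.
n_future = 10                                  # how many future steps to generate
current = scaled_data[-1].reshape(1, 1, 1)     # last known (scaled) value
future_scaled = []
for _ in range(n_future):
    next_step = model.predict(current)         # shape (1, 1)
    future_scaled.append(next_step[0, 0])
    current = next_step.reshape(1, 1, 1)       # feed the prediction back in

# Bring the generated steps back to the original price scale.
future_prices = scaler.inverse_transform(np.array(future_scaled).reshape(-1, 1))
print(future_prices)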

Loading saved model (Bidirectional LSTM) in Keras

I trained and saved a Bidirectional LSTM model in Keras successfully with:
model = Sequential()
model.add(Bidirectional(LSTM(N_HIDDEN_NEURONS,
                             return_sequences=True,
                             activation="tanh",
                             input_shape=(SEGMENT_TIME_SIZE, N_FEATURES))))
model.add(Bidirectional(LSTM(N_HIDDEN_NEURONS)))
model.add(Dropout(0.5))
model.add(Dense(N_CLASSES, activation='sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=BATCH_SIZE,
          epochs=N_EPOCHS,
          validation_data=[X_test, y_test])
model.save('model_keras/model.h5')
However, when I want to load it with:
model = load_model('model_keras/model.h5')
I get an error:
ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers.
I also tried different methods, like saving and loading the model architecture and weights separately, but none of them worked for me. Previously, when I was using normal (unidirectional) LSTMs, loading the model worked fine.

As mentioned by @mpariente and @today, input_shape is an argument of Bidirectional, not LSTM; see the Keras documentation. My solution:
# Model
model = Sequential()
model.add(Bidirectional(LSTM(N_HIDDEN_NEURONS,
                             return_sequences=True,
                             activation="tanh"),
                        input_shape=(SEGMENT_TIME_SIZE, N_FEATURES)))
model.add(Bidirectional(LSTM(N_HIDDEN_NEURONS)))
model.add(Dropout(0.5))
model.add(Dense(N_CLASSES, activation='sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=BATCH_SIZE,
          epochs=N_EPOCHS,
          validation_data=[X_test, y_test])
model.save('model_keras/model.h5')
and then, to load, simply do:
model = load_model('model_keras/model.h5')

How to save pipelined estimator in Keras?

I am using scikit-learn in Python, where I pipelined a KerasClassifier with StandardScaler().
The code is:
def create_baseline():
    model = Sequential()
    model.add(Dense(11, input_dim=11, kernel_initializer='normal', activation='relu'))
    model.add(Dense(7, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
classifier = KerasClassifier(build_fn=create_baseline, epochs=150, batch_size=5)
kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', classifier))
pipeline = Pipeline(estimators)
results = cross_val_score(pipeline, X, Y, cv=kfold, verbose=1, fit_params={'mlp__callbacks':[tbCallBack]})
print("Result: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
How can I save the result of the cross-validation? Considering that I never fit the classifier explicitly, I need a way to save a trained model and then load it to make predictions.
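As with the first question, cross_val_score returns only the scores, so the pipelines it fits internally are discarded. A hedged sketch, assuming the Pipeline and KerasClassifier defined above: refit the pipeline once on the full data, then persist the two steps separately (the scaler with joblib, the Keras model with its own save method, since the old Keras wrapper typically does not pickle cleanly):
import joblib
from keras.models import load_model

# Refit on the full data once the cross-validation scores look acceptable.
pipeline.fit(X, Y)

# Persist the two steps separately.
joblib.dump(pipeline.named_steps['standardize'], 'scaler.pkl')
pipeline.named_steps['mlp'].model.save('model.h5')

# In a new session, rebuild the prediction path by hand (X_new is hypothetical).
scaler = joblib.load('scaler.pkl')
model = load_model('model.h5')
predictions = model.predict(scaler.transform(X_new))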
