Load and Check Total Loss / Validation accuracy of Keras Sequential Model - python

I didn't find any answers to the following question:
Is there a way to print the trained model accuracy, total model loss and model evaluation accuracy after loading the saved trained Keras model?
from keras.models import load_model
m = load_model("lstm_model_01.hd5")
I checked all the callable methods of m but didn't find what I was looking for.

A model is really a graph with weights, and that's all that gets saved. You have to evaluate the restored model on data to get predictions, and from that you'll obtain an accuracy.
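If the model was compiled with an accuracy metric before it was saved, a minimal sketch of getting loss and accuracy back out looks like this (x_test and y_test are placeholder held-out data):
from keras.models import load_model

m = load_model("lstm_model_01.hd5")
# evaluate() re-runs the compiled loss and metrics on the given data;
# the returned list is ordered as [loss, metric_1, metric_2, ...]
loss, acc = m.evaluate(x_test, y_test, verbose=0)
print("test loss: %.4f, test accuracy: %.4f" % (loss, acc))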

Please save your model fit results as follows:
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
Then use pickle to dump it (i.e. the history.history dictionary) out. There you will see the training loss and validation accuracy recorded for each epoch, and you can load them back anytime.
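A minimal sketch of that approach (the pickle file name is just illustrative):
import pickle

# history.history is a plain dict of per-epoch lists, e.g.
# {'loss': [...], 'val_loss': [...], 'acc': [...], 'val_acc': [...]}
# (the exact metric keys depend on your Keras version and compile() settings)
with open("history_lstm_model_01.pkl", "wb") as f:
    pickle.dump(history.history, f)

# load it back anytime to inspect the curves
with open("history_lstm_model_01.pkl", "rb") as f:
    hist = pickle.load(f)
print("final training loss:", hist["loss"][-1])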

Related

How to monitor accuracy in tensorflow (metric accuracy is not available)

I would like to monitor accuracy for my tensorflow model. However, when I compile my model using metrics=['accuracy'] or metrics=[tf.keras.metrics.Accuracy()] and then train it, the following warning pops up.
WARNING:tensorflow: Early stopping conditioned on metric accuracy which is not available. Available metrics are: loss, val_loss
model.compile(optimizer='adam', loss='mean_squared_error', metrics=["tried both options i mentioned"])
callbacks = [EarlyStopping(monitor='accuracy', patience=1000)]
model.fit(x_train, y_train, epochs=5000, batch_size=100, validation_split=0.2, callbacks=callbacks)
Based on the link here:
Accuracy is one metric for evaluating classification models. Informally, accuracy is the fraction of predictions our model got right. Formally, accuracy = number of correct predictions / total number of predictions.
So, for other problems such as regression you should use other metrics rather than accuracy, for example metrics=[tf.keras.metrics.MeanSquaredError()].
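As a minimal sketch under that assumption (a regression model like the one above, with x_train and y_train as placeholders), compile with a regression metric and point EarlyStopping at val_loss, which the warning lists as available:
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=[tf.keras.metrics.MeanSquaredError()])

# monitor a quantity that actually exists for this problem
callbacks = [EarlyStopping(monitor='val_loss', patience=1000)]
model.fit(x_train, y_train, epochs=5000, batch_size=100,
          validation_split=0.2, callbacks=callbacks)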
In addition to Kaveh's answer, there are other metrics for regression problems. One that I find quite useful is R² (the coefficient of determination, https://en.wikipedia.org/wiki/Coefficient_of_determination), which isn't included in Keras.
The TensorFlow Addons library (https://www.tensorflow.org/addons) implements it, and it can be used in an ANN with the following code:
import tensorflow as tf
import tensorflow_addons as tfa

model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01),
              loss="mean_squared_error",
              metrics=[tfa.metrics.RSquare(y_shape=(1,))])

How to make sure keras model outputs same accuracy?

I set the following:
import numpy as np

np.random.seed(7)
# split data to train, validate, test (60%, 20%, 20%)
train, validate, test = np.split(data, [int(.6*len(data)), int(.8*len(data))])
history = model.fit(train, train, epochs=1, batch_size=32, verbose=1, shuffle=True,
                    validation_data=(validate, validate), callbacks=[cb])
score = model.evaluate(test, test, verbose=1)
shuffle=True shouldn't matter here since I'm only training for one epoch.
Now from what I've read this should ensure that my model's accuracy is always the same after training from scratch, but the accuracy results for various runs are 48%, 48%, 56%, 48%, 56%, 47.5% and so on. So I'm wondering if there is something else I have to do to ensure that the resulting accuracy stays the same?
The parameters of the model are initialized differently every time you fit the model, even on the same data, so each run results in a different accuracy. If you need the same accuracy every time, run the model once, save the weights to a file and then load them again instead of retraining. Refer to the Keras documentation for more details.
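A minimal sketch of that train-once / reload approach (the weights file name is illustrative; the data variables are the ones from the question):
# first run: train once and persist the learned weights
history = model.fit(train, train, epochs=1, batch_size=32, shuffle=True,
                    validation_data=(validate, validate))
model.save_weights("run_weights.h5")

# later runs: rebuild the same architecture, load the saved weights
# and evaluate -- the score is now identical across runs
model.load_weights("run_weights.h5")
score = model.evaluate(test, test, verbose=1)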

How to fix inconsistent predictions right after training and after loading the saved model?

I trained my Keras (version 2.3.1) Sequential models for a regression problem and achieved very good results. Right after training, I make predictions on the test set and then save the model as well as the weights in separate files.
To check the speed of the models, I recently loaded them and made predictions on a single test input array, but the results are way off, which suggests that the weights at the end of training differ from the ones being loaded.
I tried making predictions using the loaded model as-is and from the loaded weights too. The results for both are consistent, so at least the same weights are saved in both files, however wrong they may be.
From what I have read, this looks like a common issue with Keras. I came across this suggestion in several places: set the global variable initializer manually.
My problem is that this suggestion, along with a few others (like setting a fixed seed), has to be put in place before training. Training my models takes 4-5 days! How can I fix this without having to retrain the models?
Here is how I fit the models:
hist = model.fit(
    X_train, y_train,
    batch_size=batch_size,
    verbose=1,
    epochs=epochs,
    validation_split=0.2
)
Then I save the model as well as the weights:
model.save("path to .h5 file")
model.save_weights("path to .hdf5 file")
Eventually, I am loading the model and predicting from it like so:
from keras.models import load_model
model = load_model("path to the same .h5 file")
ypred = model.predict(input_arr)
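For reference, a minimal sketch of the consistency check being described (run in one process here for illustration; input_arr is the single test input from the question):
import numpy as np
from keras.models import load_model

# predictions made right after training, before saving
ypred_before = model.predict(input_arr)
model.save("path to .h5 file")

# reload and predict on the same input
reloaded = load_model("path to the same .h5 file")
ypred_after = reloaded.predict(input_arr)

# if saving/loading preserved the weights, these should match closely
print(np.allclose(ypred_before, ypred_after, atol=1e-5))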

what actually model.save() saves in Keras?

I have a Keras model and I trained it for 100 epochs.
Now, I got a loss of 0.0085 at epoch 85 and 0.0092 at the last epoch.
My question is: what does model.save() in Keras save?
Does it save the weights it got from the last epoch (i.e., 100)?
Or does it save the weights from the best epoch (i.e., epoch 85)?
Or an average/mean of the weights from all 100 epochs?
Is keras model.save() actually designed to save the weights after the 100 epochs complete?
Thanks for the explanation in advance :).
The model.save() saves the whole architecture, weights and the optimizer state. This command saves the details needed to reconstitute your model.
The command will save:
The architecture of the model, allowing you to re-create the model;
The weights of the model;
The training configuration (loss, optimizer);
The state of the optimizer, allowing you to resume training exactly where you left off.
So you can reuse your model using keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration.
See the example:
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
Source: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
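Because the optimizer state is saved as well, a hedged sketch of resuming training from the restored model (x_train and y_train are placeholders):
from keras.models import load_model

model = load_model('my_model.h5')  # comes back already compiled, optimizer state included
# calling fit() again continues training rather than starting from scratch
model.fit(x_train, y_train, epochs=10, batch_size=128)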
model.save() saves many details about your NN. The most important are:
The architecture of the network, including the dimensions (input/output layers, hidden layers, etc.);
The weight matrices for each layer, along with the activation functions;
and many other details that we don't need to outline here.
Coming back to the second part of your question: when you save the trained model, it is the state after the last epoch that gets saved. This means the final loss value may be higher or lower than in previous epochs, depending on the number of epochs you specified and how close you got to overfitting.
Also, the number of epochs is not saved, and saving it doesn't make sense in most situations, according to Francois Chollet, the creator of Keras; see this conversation.
This holds unless you activate the callback option that turns on early stopping of training after a certain number of epochs without improvement (which you called the best iteration); see this.
My question is, what does model.save() save? "Does it save the weights it got from the last epoch (i.e., 100)" OR "Does it save the weights from the best epoch (i.e., epoch 85)" OR "An average or mean of the weights from all 100 epochs"?
What is saved (weights, optimizer state, etc.) has already been covered in the other answers. In your case, the weights of the model at the end of 100 epochs are saved.
In case you would like to save the best model (the one with the lowest loss), you need to create a ModelCheckpoint callback object and pass it to the fit() method via the callbacks argument.
https://keras.io/callbacks/#ModelCheckpoint
https://keras.io/callbacks/#example-model-checkpoints
Does it save the weights from the last epoch (i.e., 100)? Yes.
Does it save the weights from the best epoch (i.e., epoch 85)? No; for saving the weights of the best epoch, use the chunk of code given below.
Does it save an average or mean of the weights from all 100 epochs? No.
Is keras model.save() designed to save the weights after the 100 epochs complete? Yes, it does, but have a look at the following code for saving the weights of only the best epoch.
Use this chunk of code to:
Save the weights of the best epoch only
Update the saved weights after an epoch only if the given criterion has improved (val_loss is at its minimum)
Additionally, the history after each epoch will be saved in a .csv file.
Code
import pandas as pd
from keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop when val_loss has not decreased for 10 epochs
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')

# Save the weights only when val_loss improves on the best value seen so far
checkpointer = ModelCheckpoint(filepath='Model_1_weights.h5', verbose=1, save_best_only=True)

# The history variable records training progress after each epoch
history = model.fit(X_train, y_train, batch_size=20, epochs=40,
                    validation_data=(X_valid, y_valid), shuffle=True,
                    callbacks=[checkpointer, earlyStopping])

# Save the progress of each epoch to a .csv file
hist_df = pd.DataFrame(history.history)
hist_csv_file = 'History_Model_1.csv'
with open(hist_csv_file, mode='w') as f:
    hist_df.to_csv(f)
Link: https://keras.io/callbacks/#ModelCheckpoint

Training accuracy graph with model_to_estimator

I have a Keras sequential model and I'm using:
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
I can see the training accuracy printed when I use the Keras fit() function to train the model.
I need to use the Estimator API to train the model, so I'm using model_to_estimator to convert the model to an estimator and then train_and_evaluate() to train it.
However, I don't see the accuracy graph in TensorBoard. There's only one accuracy value (from evaluation), so the graph is just a dot.
What I need is the accuracy graph from training like the one shown here:
https://www.tensorflow.org/guide/custom_estimators#tensorboard
I checked the examples, and all I could find were ones that build the model with the Estimator API and use the following code to define a summary scalar.
# Compute evaluation metrics.
accuracy = tf.metrics.accuracy(labels=labels,
                               predictions=predicted_classes,
                               name='acc_op')
metrics = {'accuracy': accuracy}
tf.summary.scalar('accuracy', accuracy[1])
Does anyone know how to use this with models converted from Keras?
I'm using Tensorflow version r1.10.
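For reference, a minimal sketch of the conversion-and-training flow described above, in TF 1.x style (x_train, y_train, x_eval, y_eval and the model_dir path are placeholders):
import tensorflow as tf

# convert the compiled Keras model into an Estimator; summaries go under model_dir
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir="./estimator_logs")

# the feature dict must be keyed by the Keras input layer name
input_name = model.input_names[0]
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={input_name: x_train}, y=y_train, batch_size=100, num_epochs=None, shuffle=True)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={input_name: x_eval}, y=y_eval, num_epochs=1, shuffle=False)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=5000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)

# TensorBoard reads the event files written to model_dir
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)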
