How to plot accuracy with a trained model - python

I have a Tensorflow model already trained in my notebook, and I want to plot accuracy and loss after that.
Here is my code:
myGene = trainGenerator(2, '/content/data/membrane/train', 'image', 'label',
                        data_gen_args, save_to_dir=None)
model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5',
                                   monitor='loss', verbose=1, save_best_only=True)
model.fit_generator(myGene, steps_per_epoch=2000,
                    epochs=5, callbacks=[model_checkpoint])
Is there a way to plot the accuracy and loss from this?
I tried the following with matplotlib, but it doesn't work:
import matplotlib.pyplot as plt
plt.plot(history['accuracy'])
plt.plot(history['loss'])

Try this:
history = model.fit_generator(myGene,
                              steps_per_epoch=2000,
                              epochs=5, callbacks=[model_checkpoint])
and then, for plotting:
plt.plot(history.history['accuracy'])
plt.plot(history.history['loss'])
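Once history is captured, a fuller plot with labels and a legend can look something like the sketch below. Note that the 'accuracy' key only exists if the model was compiled with metrics=['accuracy'], and older Keras versions record it as 'acc'.

import matplotlib.pyplot as plt

# history.history is a dict of per-epoch metric lists recorded during training
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['loss'], label='loss')
plt.xlabel('epoch')
plt.legend()
plt.show()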

Related

Python TensorFlow issue with fitting a line to data

I've recently been trying to use TensorFlow in my projects, and I followed the Basic Regression Using Keras guide for regression. However, I am having issues with fitting the line to the data (loss and prediction vs. data). I've normalized my data and run it through 1000 epochs, and the data itself seems fine. Here are the data and the code I've used. Does anyone know why the prediction is so different from the data?
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
train_dataset = df.sample(frac=0.8, random_state = 0)
test_dataset = df.drop(train_dataset.index)
train_dataset.describe().transpose()
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('Max')
test_labels = test_features.pop('Max')
train_dataset.describe().transpose()[['mean','std']]
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(np.array(train_features))
print(normalizer.mean.numpy())
first = np.array(train_features[:1])
with np.printoptions(precision=2, suppress=True):
    print('First example:', first)
    print()
    print('Normalized:', normalizer(first).numpy())
date = np.array(train_features['Date Lifted'])
date_normalizer = layers.Normalization(input_shape=[1,], axis=None)
date_normalizer.adapt(date)
date_model = tf.keras.Sequential([
    date_normalizer,
    layers.Dense(units=1)
])
date_model.summary()
date_model.predict(date[:10])
date_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='mean_absolute_error')
%%time
history = date_model.fit(
    train_features['Date Lifted'],
    train_labels,
    epochs=100,
    # Suppress logging.
    verbose=0,
    # Calculate validation results on 20% of the training data.
    validation_split=0.2)
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 1000])
    plt.xlabel('Epoch')
    plt.ylabel('Error [Max]')
    plt.legend()
    plt.grid(True)

plot_loss(history)
test_results = {}
test_results['date_model'] = date_model.evaluate(
    test_features['Date Lifted'],
    test_labels, verbose=0)
x = tf.linspace(0, 250, 251)
y = date_model.predict(x)
def plot_horsepower(x, y):
    plt.scatter(train_features['Date Lifted'], train_labels, label='Data')
    plt.plot(x, y, color='k', label='Predictions')
    plt.xlabel('Date Lifted')
    plt.ylabel('Max')
    plt.legend()

plot_horsepower(x, y)
Your model only has a single neuron?
I imagine you'll see better results if you use a more complicated model, with more layers and more units per layer.
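A single Dense(1) layer on one input is just linear regression, so the prediction can only ever be a straight line. A sketch of a deeper model built the same way (the layer sizes here are arbitrary, not tuned values):

date_model = tf.keras.Sequential([
    date_normalizer,                      # same adapted normalization layer as above
    layers.Dense(64, activation='relu'),  # hidden layers let the model fit a curve
    layers.Dense(64, activation='relu'),
    layers.Dense(1)                       # single linear output for regression
])
date_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='mean_absolute_error')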

Confused about why something ran once but not another time

So I ran this code last night and it worked fine: it plotted the training loss as a function of the epoch number. However, when I ran it today, after changing the batch size from 1 to 8, it gave me a 'plt not found' error. I then moved the plotting calls below the matplotlib import line and it worked. This seems to suggest that the import must come before the plotting, but how was I able to plot last night with the plot commands before the import?
This is just part of the complete code, yes, but the rest isn't relevant. This was in a Jupyter notebook too, so perhaps I had previously run the code without the plot lines inside the tf.device block, and the kernel kept the import around?
with tf.device(device_name):
    inputx = Input(shape=(7,))
    x = Dense(4, activation='elu', name='x1')(inputx)
    x = Dense(16, activation='elu', name='x2')(x)
    x = Dense(25, activation='elu', name='x3')(x)
    x = Dense(10, activation='elu', name='x4')(x)
    xke = Dense(5, name='x5')(x)
    model = Model(inputx, xke)
    adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
    model.compile(optimizer=adam,
                  loss=['mean_squared_error', 'mean_squared_error', 'mean_squared_error', 'mean_squared_error', 'mean_squared_error'],
                  loss_weights=[1, 1, 1, 1, 1])
    model.summary()
    history = model.fit(X_train, y_train, batch_size=1, epochs=30, verbose=1)
    plt.plot(history.history['loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend('train', loc='upper left')
    plt.show()
from sklearn.metrics import mean_squared_error as mse
train_pred = model.predict(X_train)
train_rmse_sk = np.sqrt(mse(y_train, train_pred, multioutput= "raw_values"))
print("The training rmse value is: ", train_rmse_sk, "\n")
import matplotlib.pyplot as plt
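In a Jupyter notebook, any name imported in a previously executed cell stays in the kernel's namespace, so a later cell can use plt even if its own import line comes after the plotting calls, as long as some earlier cell (or an earlier run of this cell) already executed the import. Restarting the kernel clears that state, which would explain why the same layout failed the next day. A minimal illustration of that assumption:

# Cell 1 (run at some earlier point):
import matplotlib.pyplot as plt

# Cell 2 (run later): plt is still defined in the kernel, so this works even
# though this cell never imports it. After a kernel restart, running only
# Cell 2 raises NameError: name 'plt' is not defined.
plt.plot([1, 2, 3])
plt.show()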

How can I plot the actual vs. predicted values for the neural network?

I am trying to plot the actual vs. predicted values of the neural network I created using Keras.
What I want exactly is a scatter plot of my data together with the best-fit curve for the training and testing data sets.
Below is the code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
import os
#Load the dataset from excel
data = pd.read_csv('C:\\Excel Files\\Neural Network\\diabetes1.csv', sep=';')
#Viewing the Data
data.head(5)
import seaborn as sns
data['Outcome'].value_counts().plot(kind = 'bar')
#Split into input(x) and output (y) variables
predictiors = data.iloc[:,0:8]
response = data.iloc[:,8]
#Create training and testing vars
X_train, X_test, y_train, y_test = train_test_split(predictiors, response, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
# Define the keras model - Layer by Layer sequential model
kerasmodel = Sequential()
kerasmodel.add(Dense(12, input_dim=8, activation='relu')) #First Hidden Layer, 12 neurons in and 8 inputs with relu activation functions
kerasmodel.add(Dense(8, activation='relu')) #Relu to avoid vanishing/exploding gradient problem -#
kerasmodel.add(Dense(1, activation='sigmoid')) #since output is binary so "Sigmoid" - #OutputLayer
#Note: weight and bias initialization are done by Keras default methods ('glorot_uniform')
# Compiling model
kerasmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
#Fitting the model
kerasmodel.fit(X_train, y_train, epochs=50, batch_size=10)
# Train accuracy
_, accuracy = kerasmodel.evaluate(X_train, y_train)
print('Train Accuracy: %.2f' % (accuracy*100))
I want to have plots like these:
From what I understood from your question, this will give you a scatter plot of the actual_y values and a line plot of the predicted_y values:
import matplotlib.pyplot as plt
x = []
plt.plot(predicted_y, linestyle='dotted')
for i in range(0, len(actual_y)):
    x.append(i)
plt.scatter(x, actual_y)
plt.show()
Predict for both x_train and x_test with the model, then draw sns.regplot() (after import seaborn as sns) with the actual y values on the horizontal axis and the predicted values on the vertical axis, as two separate plots for the train and test sets. It will plot the scatter of points plus a regression line; if the slope is close to 1 and the intercept close to 0, the model fits very well.
For more options, it is worth computing scipy.stats.linregress for both the train and test sets to get the slope and intercept.
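A minimal sketch of that suggestion, assuming the kerasmodel, X_train, X_test, y_train, and y_test defined in the question above:

import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats

# Predictions for the training and testing sets
y_pred_train = kerasmodel.predict(X_train).ravel()
y_pred_test = kerasmodel.predict(X_test).ravel()

# Actual (x) vs. predicted (y) for the training set
sns.regplot(x=y_train, y=y_pred_train)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.title('Training set')
plt.show()

# Same plot for the test set
sns.regplot(x=y_test, y=y_pred_test)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.title('Test set')
plt.show()

# Slope and intercept of the linear fit (slope near 1, intercept near 0 is good)
print(stats.linregress(y_train, y_pred_train))
print(stats.linregress(y_test, y_pred_test))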
Thanks @Soroush Mirzaei
Thank you so much! Your suggestion has solved my problem.
May I ask whether it is possible to plot two data sets using sns.regplot() and one best-fit curve for both of them?
For example:
I have two data sets and two fit lines, one for each, as in the image below
[1]: https://i.stack.imgur.com/avGR1.png
Instead, I want the two data sets, the orange and the blue one, with a single fit curve for both.
Below is the code that I am using:
sns.regplot(y_train,y_pred_train, label="Training Data")
plt.xlabel('Actual G*')
plt.ylabel('Predicted G*')
plt.title('Actual vs. Predicted')
plt.legend(loc="upper left")
sns.regplot(y_test,y_pred_test, label="Testing Data")
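One way to get a single fit curve over both sets (a sketch, assuming the same y_train, y_pred_train, y_test, and y_pred_test as above) is to concatenate the two sets and call sns.regplot once on the combined data, while scattering each set separately so they stay distinguishable:

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Scatter the two data sets separately so they keep their own colors
plt.scatter(y_train, y_pred_train, label="Training Data")
plt.scatter(y_test, y_pred_test, label="Testing Data")

# One regression line fitted to the combined (train + test) points
y_all = np.concatenate([np.asarray(y_train).ravel(), np.asarray(y_test).ravel()])
y_pred_all = np.concatenate([np.asarray(y_pred_train).ravel(), np.asarray(y_pred_test).ravel()])
sns.regplot(x=y_all, y=y_pred_all, scatter=False, color='k')

plt.xlabel('Actual G*')
plt.ylabel('Predicted G*')
plt.title('Actual vs. Predicted')
plt.legend(loc="upper left")
plt.show()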

Matplotlib kills jupyter kernel after training model

I have run a neural network in a Jupyter notebook and I want to plot the results (loss vs. epoch number). I can run the model without problems, but then even a simple matplotlib plot kills the kernel.
Here is the code that creates the model and data I want to use:
from keras import models
from keras import layers
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data( num_words=10000)
# Change review into array
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))  # create all-zero matrix
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # If review has word, change that index to 1
    return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# Create model
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) # two int. layers w/16 hidden units each
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid')) # outputs the scalar prediction
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
# Create mini-test data
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
# fit model
history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val))
# Get values for plot
history_dict = history.history
history_dict.keys()
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epoch_num = [i for i in range(1,21)]
This works as expected. However, when I try to plot the data with the code below, I get a message: "The kernel appears to have died. It will restart automatically."
plt.plot(epoch_num, loss_values, 'bo', label='Training loss')
plt.plot(epoch_num, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
I can restart the kernel and make matplotlib plots, but as soon as I try to make a plot after running the model, the error appears again. I have tried updating keras, tensorflow, matplotlib, and numpy to no effect. Can anyone provide insight into why this happens and suggest a solution?
I used the latest TensorFlow and imported Keras from tensorflow. Everything worked as expected. I changed the first three lines as shown below. Full code is here
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras import layers
The following plot shows epoch versus loss

Plot graph for either loss or accuracy of the keras predictive model

So I am trying to plot a graph for my model: say I have 20 epochs, and the graph should show the accuracy/loss at each epoch. For now I found this code on the Keras website.
history = model.fit(x_train, y_train, epochs = 30, batch_size = 128,validation_split = 0.2)
plot(history)
I tried using this on my data.
import matplotlib.pyplot as plt
plt.plot(history)
So this is the error I am getting
TypeError: float() argument must be a string or a number, not 'History'
Is there any way of correcting this or any other way of plotting a graph for each epoch?
Thank you.
model_history = model.fit(...)
plt.figure()
plt.subplot(211)
plt.plot(model_history.history['accuracy'])
plt.subplot(212)
plt.plot(model_history.history['loss'])
This code worked for me.
print(history.history.keys()) # Displays keys from history, in my case loss,acc
plt.plot(history.history['acc']) #here I am trying to plot only accuracy, the same can be used for loss as well
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()
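If validation_split is used as in the question, the History object also records validation curves. A sketch that plots training and validation accuracy and loss together, assuming the model was compiled with metrics=['accuracy'] (recent Keras versions store the key as 'accuracy', older ones as 'acc'):

import matplotlib.pyplot as plt

history = model.fit(x_train, y_train, epochs=30, batch_size=128, validation_split=0.2)

# Pick whichever accuracy key this Keras version recorded
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'

plt.figure()
plt.subplot(211)
plt.plot(history.history[acc_key], label='train')
plt.plot(history.history['val_' + acc_key], label='validation')
plt.ylabel('accuracy')
plt.legend()
plt.subplot(212)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend()
plt.show()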
