So I ran this code last night and it worked fine: it plotted the training loss as a function of epoch. However, when I tried to run it today (I changed the batch size from 1 to 8) it gave me a 'plt not found' error. I then moved the plotting calls below the matplotlib import line and it worked. This seems to suggest that the import must come before the plotting, but how was I able to plot last night with the plot commands before the import?
This is just part of the complete code, yes, but the rest isn't relevant. This was in a Jupyter notebook too, so perhaps I had run the code before without the plot lines inside the tf.device block, and the kernel kept the import around or something?
with tf.device(device_name):
    inputx = Input(shape=(7,))
    x = Dense(4, activation='elu', name='x1')(inputx)
    x = Dense(16, activation='elu', name='x2')(x)
    x = Dense(25, activation='elu', name='x3')(x)
    x = Dense(10, activation='elu', name='x4')(x)
    xke = Dense(5, name='x5')(x)
    model = Model(inputx, xke)
    adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
    model.compile(optimizer=adam,
                  loss=['mean_squared_error', 'mean_squared_error', 'mean_squared_error',
                        'mean_squared_error', 'mean_squared_error'],
                  loss_weights=[1, 1, 1, 1, 1])
    model.summary()
    history = model.fit(X_train, y_train, batch_size=1, epochs=30, verbose=1)
    plt.plot(history.history['loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend('train', loc='upper left')
    plt.show()
    from sklearn.metrics import mean_squared_error as mse
    train_pred = model.predict(X_train)
    train_rmse_sk = np.sqrt(mse(y_train, train_pred, multioutput="raw_values"))
    print("The training rmse value is: ", train_rmse_sk, "\n")
    import matplotlib.pyplot as plt
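(For what it's worth, this is consistent with how a Jupyter kernel behaves: once import matplotlib.pyplot as plt has executed in any earlier run of a cell, the name plt stays in the kernel's namespace until the kernel restarts, no matter where the import line sits now. A minimal, hypothetical two-cell illustration, not part of the code above:

# --- Cell 1: run once at some earlier point ---
import matplotlib.pyplot as plt   # binds 'plt' in the kernel's namespace

# --- Cell 2: run later, even if the import line has since been moved or removed ---
plt.plot([1, 2, 3])               # still works while the same kernel is alive
plt.show()                        # only a fresh kernel start forgets 'plt'
)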
I am new to Python and I am trying to plot the training and validation accuracy and loss for my MLPRegressor; however, I am getting the following error. What am I doing wrong?
TypeError: fit() got an unexpected keyword argument 'validation_split'
mlp_new = MLPRegressor(hidden_layer_sizes=(18, 18, 18),
                       max_iter=10000000000, activation='relu',
                       solver='adam', learning_rate='constant',
                       alpha=0.05, validation_fraction=0.2, random_state=0, early_stopping=True)
mlp_new.fit(X_train, y_train)
mlp_new_y_predict = mlp_new.predict((X_test))
mlp_new_y_predict
import keras
from matplotlib import pyplot as plt
history = mlp_new.fit(X_train, y_train, validation_split = 0.1, epochs=50, batch_size=4)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
Yes, you can definitely find a validation_split argument in the Keras model's .fit() method.
But the model you are using here is not that one: MLPRegressor is a scikit-learn estimator.
Check the documentation below, in the Methods section:
its .fit() method takes only two arguments, X and y.
https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor.fit
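If the goal is just to plot the training curve of the MLPRegressor, a minimal sketch (using the mlp_new estimator from the question, and relying on early_stopping=True being set so that scikit-learn records validation scores) could use the fitted estimator's loss_curve_ and validation_scores_ attributes instead of a Keras-style history:

import matplotlib.pyplot as plt

mlp_new.fit(X_train, y_train)  # plain fit(X, y); no Keras-style keyword arguments

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(mlp_new.loss_curve_)          # training loss per iteration
ax1.set_xlabel('iteration')
ax1.set_ylabel('training loss')
ax2.plot(mlp_new.validation_scores_)   # validation R^2 per iteration (requires early_stopping=True)
ax2.set_xlabel('iteration')
ax2.set_ylabel('validation score (R^2)')
plt.tight_layout()
plt.show()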
I have a Tensorflow model already trained in my notebook, and I want to plot accuracy and loss after that.
Here is my code:
myGene = trainGenerator(2, '/content/data/membrane/train', 'image', 'label',
                        data_gen_args, save_to_dir=None)
model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5',
                                   monitor='loss', verbose=1, save_best_only=True)
model.fit_generator(myGene, steps_per_epoch=2000,
                    epochs=5, callbacks=[model_checkpoint])
Is there a way to plot anything? I tried with matplotlib and it doesn't work:
import matplotlib.pyplot as plt
plt.plot(history['accuracy'])
plt.plot(history['loss'])
fit_generator() returns a History object, but in your code its return value is never assigned, so there is nothing to plot afterwards. Try this:
history = model.fit_generator(myGene,
                              steps_per_epoch=2000,
                              epochs=5, callbacks=[model_checkpoint])
and then, for plotting:
plt.plot(history.history['accuracy'])
plt.plot(history.history['loss'])
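One caveat (the compile() call is not shown in the question, so this is an assumption): the 'accuracy' key only exists if the model was compiled with metrics=['accuracy'], and older Keras versions record it as 'acc'. A quick check before plotting:

print(history.history.keys())   # e.g. dict_keys(['loss', 'accuracy']) or dict_keys(['loss', 'acc'])
plt.plot(history.history['loss'], label='loss')
plt.legend()
plt.show()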
I am trying to build a model using the functional api of Keras.
Here is the entire model that I have made. I am not sure if it is correct, and I would be very happy if someone could take a look at it for a moment.
I first split the data into train and test data sets.
from sklearn.model_selection import train_test_split
X1_train, X1_test, X2_train, X2_test, y_train, y_test = train_test_split(X1_scaled, X2_scaled, end_y, test_size=0.2)
[i.shape for i in (X1_train, X1_test, X2_train, X2_test, y_train, y_test)]
Here is the part, where I start to build the model
from tensorflow.keras import layers, Model, utils
# Build the model
input1 = layers.Input((10, 6))
input2 = layers.Input((10, 2, 5))
x1 = layers.Flatten()(input1)
x2 = layers.Flatten()(input2)
concat = layers.concatenate([x1, x2])
# Add hidden and dropout layers
hidden1 = layers.Dense(64, activation='relu')(concat)
hid1_out = layers.Dropout(0.5)(hidden1)
hidden2 = layers.Dense(32, activation='relu')(hid1_out)
hid2_out = layers.Dropout(0.5)(hidden2)
output = layers.Dense(1, activation='sigmoid')(hid2_out)
model = Model(inputs=[input1, input2], outputs=output)
# summarize layers
print(model.summary())
# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
history = model.fit([X1_train, X2_train], y_train, epochs=200, batch_size=5, verbose=0, validation_data=([X1_test, X2_test], y_test))
# evaluate the keras model
_, train_accuracy = model.evaluate([X1_train, X2_train], y_train, verbose=0)
_, test_accuracy = model.evaluate([X1_test, X2_test], y_test, verbose=0)
print('Accuracy NN: %.2f' % (train_accuracy*100))
print('Accuracy NN: %.2f' % (test_accuracy*100))
A problem occurs here. No plot is showing.
# Plots
from matplotlib import pyplot
pyplot.subplot(211)
pyplot.title('Loss')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
# plot accuracy
pyplot.subplot(212)
pyplot.title('Accuracy')
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Could someone give me any hints on how to manage it?
Thank you for giving me some of your time
Below is the code for a function that will produce two plots side by side. The first plot shows the training loss and validation loss versus epochs. The second plot shows training accuracy and validation accuracy versus epochs. It also places a dot on the first plot at the epoch with the lowest validation loss and a dot on the second plot at the epoch with the highest validation accuracy.
import numpy as np
import matplotlib.pyplot as plt

def tr_plot(history):
    # Plot the training and validation data
    tacc = history.history['accuracy']
    tloss = history.history['loss']
    vacc = history.history['val_accuracy']
    vloss = history.history['val_loss']
    Epoch_count = len(tacc)
    Epochs = []
    for i in range(Epoch_count):
        Epochs.append(i + 1)
    index_loss = np.argmin(vloss)    # epoch index with the lowest validation loss
    val_lowest = vloss[index_loss]   # lowest validation loss value
    index_acc = np.argmax(vacc)      # epoch index with the highest validation accuracy
    acc_highest = vacc[index_acc]    # highest validation accuracy value
    plt.style.use('fivethirtyeight')
    sc_label = 'best epoch= ' + str(index_loss + 1)
    vc_label = 'best epoch= ' + str(index_acc + 1)
    fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 8))
    axes[0].plot(Epochs, tloss, 'r', label='Training loss')
    axes[0].plot(Epochs, vloss, 'g', label='Validation loss')
    axes[0].scatter(index_loss + 1, val_lowest, s=150, c='blue', label=sc_label)
    axes[0].set_title('Training and Validation Loss')
    axes[0].set_xlabel('Epochs')
    axes[0].set_ylabel('Loss')
    axes[0].legend()
    axes[1].plot(Epochs, tacc, 'r', label='Training Accuracy')
    axes[1].plot(Epochs, vacc, 'g', label='Validation Accuracy')
    axes[1].scatter(index_acc + 1, acc_highest, s=150, c='blue', label=vc_label)
    axes[1].set_title('Training and Validation Accuracy')
    axes[1].set_xlabel('Epochs')
    axes[1].set_ylabel('Accuracy')
    axes[1].legend()
    plt.tight_layout()  # note the parentheses: without them the call has no effect
    plt.show()
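For example, assuming the model was compiled with metrics=['accuracy'] and fit with validation data (as in the question), so that all four history keys exist, the function can be called like this:

history = model.fit([X1_train, X2_train], y_train, epochs=200, batch_size=5,
                    verbose=0, validation_data=([X1_test, X2_test], y_test))
tr_plot(history)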
The resulting plot looks like this
I have run a neural network in a Jupyter notebook and I want to plot the results (loss vs. epoch number). I can run the model without problems, but then even a simple matplotlib plot kills the kernel.
Here is the code that creates the model and data I want to use:
from keras import models
from keras import layers
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data( num_words=10000)
# Change review into array
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))  # create all-zero matrix
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # If review has word, change that index to 1
    return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# Create model
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) # two int. layers w/16 hidden units each
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid')) # outputs the scalar prediction
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
# Create mini-test data
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
# fit model
history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val))
# Get values for plot
history_dict = history.history
history_dict.keys()
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epoch_num = [i for i in range(1,21)]
This works as expected. However, when I try to plot the data with the code below, I get a message: "The kernel appears to have died. It will restart automatically."
plt.plot(epoch_num, loss_values, 'bo', label='Training loss')
plt.plot(epoch_num, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
I can restart the kernel and make matplotlib plots, but when I try to make a plot after running the model, matplotlib causes the error again. I have tried updating keras, tensorflow, matplotlib, and numpy, to no effect. Can anyone provide insight into why this happens, and provide a solution?
I used the latest TensorFlow and imported Keras from tensorflow. Everything worked as expected. I changed the first three lines as shown below. Full code is here
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras import layers
The following plot shows epoch versus loss
So I am trying to plot a graph for my model: say I have 20 epochs, and the graph should show the accuracy/loss at each epoch. For now I found this code on the Keras website.
history = model.fit(x_train, y_train, epochs = 30, batch_size = 128,validation_split = 0.2)
plot(history)
I tried using this on my data.
import matplotlib.pyplot as plt
plt.plot(history)
So this is the error I am getting
TypeError: float() argument must be a string or a number, not 'History'
Is there any way of correcting this or any other way of plotting a graph for each epoch?
Thank you.
model_history = model.fit(...
plt.figure()
plt.subplot(211)
plt.plot(model_history.history['accuracy'])
plt.subplot(212)
plt.plot(model_history.history['loss'])
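A small follow-up sketch (assuming the model was compiled with metrics=['accuracy']; older Keras versions record the key as 'acc'): adding axis labels and an explicit plt.show() makes the figure appear reliably outside a notebook as well:

import matplotlib.pyplot as plt

plt.figure()
plt.subplot(211)
plt.plot(model_history.history['accuracy'])
plt.ylabel('accuracy')
plt.subplot(212)
plt.plot(model_history.history['loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()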
This code worked for me.
print(history.history.keys()) # Displays keys from history, in my case loss,acc
plt.plot(history.history['acc']) #here I am trying to plot only accuracy, the same can be used for loss as well
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()