I am trying to plot the training data. I am using
for i in range(epoch_size):
    history = model.fit(trainingX[:10], trainingY[:10], epochs=1,
                        batch_size=batch_size, callbacks=[early_stop], verbose=2)
in a loop of 50 iterations,
and then
plt.plot(history.history['acc'])
plt.title('model accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
yet the plot is always empty. Why is that?
You are overwriting history with each iteration of the for loop, and with only one epoch per call there is no curve to plot. So you either need more than one epoch, or you need to store the history values in a separate array.
Try print(history.history) and check the contents.
Also, training is generally run once, with all epochs in a single call:
history = model.fit(X_train, Y_train,
                    batch_size=128, epochs=10,
                    verbose=2, validation_data=(X_test, Y_test))
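Alternatively, if you really do want the manual loop, collect the metric values yourself across iterations. A minimal sketch, assuming the same model, trainingX, trainingY, batch_size and early_stop as in the question (the history key is 'acc' on older Keras, 'accuracy' on newer versions):
acc_per_epoch = []  # grows by one value per fit() call
for i in range(50):
    history = model.fit(trainingX[:10], trainingY[:10], epochs=1,
                        batch_size=batch_size, callbacks=[early_stop], verbose=2)
    acc_per_epoch.extend(history.history['acc'])
plt.plot(acc_per_epoch)
plt.title('model accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.show()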
So I ran this code last night, and it worked fine: it plotted the training loss as a function of epoch. However, when I tried to run it today I changed the batch size from 1 to 8, and it gave me a 'plt not found' error. I then moved the plotting below the matplotlib import line and it worked. This seems to suggest that the import must come before the plotting, but how was I able to plot last night with the plot commands before the import?
This is just part of the complete code, yes, but the rest wasn't relevant. This was in a Jupyter notebook too, so perhaps I had run the code before without the plot lines inside the tf.device block, and it saved the import or something?
with tf.device(device_name):
    inputx = Input(shape=(7,))
    x = Dense(4, activation='elu', name='x1')(inputx)
    x = Dense(16, activation='elu', name='x2')(x)
    x = Dense(25, activation='elu', name='x3')(x)
    x = Dense(10, activation='elu', name='x4')(x)
    xke = Dense(5, name='x5')(x)
    model = Model(inputx, xke)
    adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
    model.compile(optimizer=adam,
                  loss=['mean_squared_error', 'mean_squared_error', 'mean_squared_error',
                        'mean_squared_error', 'mean_squared_error'],
                  loss_weights=[1, 1, 1, 1, 1])
    model.summary()
    history = model.fit(X_train, y_train, batch_size=1, epochs=30, verbose=1)
    plt.plot(history.history['loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train'], loc='upper left')
    plt.show()
    from sklearn.metrics import mean_squared_error as mse
    train_pred = model.predict(X_train)
    train_rmse_sk = np.sqrt(mse(y_train, train_pred, multioutput="raw_values"))
    print("The training rmse value is: ", train_rmse_sk, "\n")
import matplotlib.pyplot as plt
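That guess is essentially right: a Jupyter kernel keeps every name defined by any previously executed cell until the kernel restarts, so an import run once earlier in the session stays available to later cells. A minimal sketch of the behaviour (hypothetical two-cell notebook):
# Cell 1, run once at some point in the session:
import matplotlib.pyplot as plt

# Cell 2, run later with no import of its own:
plt.plot([1, 2, 3])  # still works; 'plt' is bound in the kernel namespace
plt.show()

# After a kernel restart, running Cell 2 alone raises NameError ('plt' is not defined)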
I am new to Python and trying to plot the training and validation accuracy and loss for my MLPRegressor; however, I am getting the following error. What am I doing wrong?
TypeError: fit() got an unexpected keyword argument 'validation_split'
mlp_new = MLPRegressor(hidden_layer_sizes=(18, 18, 18),
                       max_iter=10000000000, activation='relu',
                       solver='adam', learning_rate='constant',
                       alpha=0.05, validation_fraction=0.2, random_state=0, early_stopping=True)
mlp_new.fit(X_train, y_train)
mlp_new_y_predict = mlp_new.predict(X_test)
mlp_new_y_predict
import keras
from matplotlib import pyplot as plt
history = mlp_new.fit(X_train, y_train, validation_split = 0.1, epochs=50, batch_size=4)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
Yes, you can definitely find a validation_split argument in the Keras Model.fit() method.
But:
The model you are using here is not a Keras model; it is a scikit-learn MLPRegressor.
Check the documentation below, Methods section:
its .fit() method takes only two arguments, X and y.
https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor.fit
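If you want curves out of the MLPRegressor itself: it records the per-iteration training loss in its loss_curve_ attribute, and with early_stopping=True it also records per-iteration validation scores in validation_scores_. A minimal sketch using the mlp_new estimator defined above:
import matplotlib.pyplot as plt

mlp_new.fit(X_train, y_train)
# loss_curve_ holds training loss per iteration; validation_scores_ holds
# validation R^2 per iteration (present because early_stopping=True)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(mlp_new.loss_curve_)
axes[0].set_title('training loss')
axes[0].set_xlabel('iteration')
axes[1].plot(mlp_new.validation_scores_)
axes[1].set_title('validation score (R^2)')
axes[1].set_xlabel('iteration')
plt.show()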
I am trying to build a model using the functional api of Keras.
Here is the entire model that I have made. I am not sure if it is correct, and I would be very happy if someone could take a look at it for a moment.
I first split the data into train and test sets.
from sklearn.model_selection import train_test_split
X1_train, X1_test, X2_train, X2_test, y_train, y_test = train_test_split(X1_scaled, X2_scaled, end_y, test_size=0.2)
[i.shape for i in (X1_train, X1_test, X2_train, X2_test, y_train, y_test)]
Here is the part where I start to build the model:
from tensorflow.keras import layers, Model, utils
# Build the model
input1 = layers.Input((10, 6))
input2 = layers.Input((10, 2, 5))
x1 = layers.Flatten()(input1)
x2 = layers.Flatten()(input2)
concat = layers.concatenate([x1, x2])
# Add hidden and dropout layers
hidden1 = layers.Dense(64, activation='relu')(concat)
hid1_out = layers.Dropout(0.5)(hidden1)
hidden2 = layers.Dense(32, activation='relu')(hid1_out)
hid2_out = layers.Dropout(0.5)(hidden2)
output = layers.Dense(1, activation='sigmoid')(hid2_out)
model = Model(inputs=[input1, input2], outputs=output)
# summarize layers
print(model.summary())
# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
history = model.fit([X1_train, X2_train], y_train, epochs=200, batch_size=5, verbose=0, validation_data=([X1_test, X2_test], y_test))
# evaluate the keras model
_, train_accuracy = model.evaluate([X1_train, X2_train], y_train, verbose=0)
_, test_accuracy = model.evaluate([X1_test, X2_test], y_test, verbose=0)
print('Train accuracy NN: %.2f' % (train_accuracy*100))
print('Test accuracy NN: %.2f' % (test_accuracy*100))
A problem occurs here. No plot is showing.
# Plots
from matplotlib import pyplot
pyplot.subplot(211)
pyplot.title('Loss')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
# plot accuracy
pyplot.subplot(212)
pyplot.title('Accuracy')
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Could someone give me any hints on how to manage it?
Thank you for giving me some of your time
Below is the code for a function that will produce two plots side by side. The first plot
shows the training loss and validation loss versus epochs. The second plot shows training accuracy and validation accuracy versus epochs. It also places a dot in the first plot at the epoch with the lowest validation loss, and a dot in the second plot at the epoch with the highest validation accuracy.
import numpy as np
import matplotlib.pyplot as plt

def tr_plot(history):
    # Plot the training and validation data
    tacc = history.history['accuracy']
    tloss = history.history['loss']
    vacc = history.history['val_accuracy']
    vloss = history.history['val_loss']
    Epochs = [i + 1 for i in range(len(tacc))]
    index_loss = np.argmin(vloss)   # epoch index with the lowest validation loss
    val_lowest = vloss[index_loss]  # lowest validation loss value
    index_acc = np.argmax(vacc)     # epoch index with the highest validation accuracy
    acc_highest = vacc[index_acc]   # highest validation accuracy value
    plt.style.use('fivethirtyeight')
    sc_label = 'best epoch= ' + str(index_loss + 1)
    vc_label = 'best epoch= ' + str(index_acc + 1)
    fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 8))
    axes[0].plot(Epochs, tloss, 'r', label='Training loss')
    axes[0].plot(Epochs, vloss, 'g', label='Validation loss')
    axes[0].scatter(index_loss + 1, val_lowest, s=150, c='blue', label=sc_label)
    axes[0].set_title('Training and Validation Loss')
    axes[0].set_xlabel('Epochs')
    axes[0].set_ylabel('Loss')
    axes[0].legend()
    axes[1].plot(Epochs, tacc, 'r', label='Training Accuracy')
    axes[1].plot(Epochs, vacc, 'g', label='Validation Accuracy')
    axes[1].scatter(index_acc + 1, acc_highest, s=150, c='blue', label=vc_label)
    axes[1].set_title('Training and Validation Accuracy')
    axes[1].set_xlabel('Epochs')
    axes[1].set_ylabel('Accuracy')
    axes[1].legend()
    plt.tight_layout()
    plt.show()
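Usage is just a matter of passing in the History object returned by fit(); for example, with the model from the question:
history = model.fit([X1_train, X2_train], y_train, epochs=200, batch_size=5,
                    verbose=0, validation_data=([X1_test, X2_test], y_test))
tr_plot(history)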
The resulting figure shows the loss and accuracy curves side by side, with the best epochs marked.
How do I plot training error and validation error vs. the number of epochs?
train_data = generate_arrays_for_training(indexPat, filesPath, end=75)         # takes the first 75%
validation_data = generate_arrays_for_training(indexPat, filesPath, start=75)  # takes the last 25%
model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75),
                    validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),
                    steps_per_epoch=int(len(filesPath) - int(len(filesPath) / 100 * 25)),
                    validation_steps=int(len(filesPath) - int(len(filesPath) / 100 * 75)),
                    verbose=2,
                    epochs=300, max_queue_size=2, shuffle=True, callbacks=[callback])
This might be what you're looking for, but you should provide more details in order to get a more suitable answer:
import matplotlib.pyplot as plt
hist = model.fit_generator(...)
plt.figure()
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
This is a rather common error, but I couldn't find a proper answer given my setup.
I found this tutorial code, but when running, I get this error:
val_acc = history.history['val_acc']
KeyError: 'val_acc'
The fit_generator() function, unlike fit(), doesn't allow a validation split. So how do I fix it?
Here is the code:
def plot_training(history):
    print(history.history.keys())
    acc = history.history['acc']
    val_acc = history.history['val_acc']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    epochs = range(len(acc))
    plt.plot(epochs, acc, 'r.')
    plt.plot(epochs, val_acc, 'r')
    plt.title('Training and validation accuracy')
    # plt.figure()
    # plt.plot(epochs, loss, 'r.')
    # plt.plot(epochs, val_loss, 'r-')
    # plt.title('Training and validation loss')
    plt.savefig('acc_vs_epochs.png')  # save before show(), otherwise the saved file may be blank
    plt.show()
#....
finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
adam = Adam(lr=0.00001)
finetune_model.compile(adam, loss='categorical_crossentropy', metrics=['accuracy'])
filepath="./checkpoints/" + "ResNet50" + "_model_weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor='acc', verbose=1, mode='max')
callbacks_list = [checkpoint]
history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8,
steps_per_epoch=steps_per_epoch,
shuffle=True, callbacks=callbacks_list)
plot_training(history)
Hi, writing my suggestions here because I'm not able to comment yet.
You are right, the function fit_generator() doesn't have a validation_split argument.
Therefore you need to make your own validation dataset and feed it to the fit generator through validation_data=(val_X, val_y), e.g.:
history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8, validation_data=(val_X, val_y),
steps_per_epoch=steps_per_epoch,
shuffle=True, callbacks=callbacks_list)
Hope this helps.
EDIT
To get a validation dataset from your data, you can use the method train_test_split() from sklearn. For example, a split with 67% train and 33% validation data:
X_train, val_X, y_train, val_y = train_test_split(
    X, y, test_size=0.33, random_state=42)
Look here for more information.
Alternatively, you could write your own split method :)
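For instance, a hand-rolled split could look like this (a sketch; my_train_val_split is a made-up helper, and X, y are assumed to be NumPy arrays):
import numpy as np

def my_train_val_split(X, y, val_fraction=0.33, seed=42):
    # Shuffle the indices, then slice off the first val_fraction for validation.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_fraction)
    return X[idx[n_val:]], X[idx[:n_val]], y[idx[n_val:]], y[idx[:n_val]]

X_train, val_X, y_train, val_y = my_train_val_split(X, y)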
Edit 2
If you don't have the possibility to use train_test_split(), and with the assumption you have a pandas DataFrame called train_data with the features and labels together:
val_data = train_data.sample(frac=0.33, random_state=1)
train_data = train_data.drop(val_data.index)
This creates a validation dataset with 33% of the data and a training dataset with the remaining 67%.
Edit3
It turns out you are using ImageDataGenerator() to create your data. This is quite handy, because you can set your validation percentage via validation_split= when you initialize the ImageDataGenerator(), as seen in the documentation (here). This should look something like this:
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
validation_split=0.33)
After this you need two "generated" datasets: one to train on and one for validation. This should look like the following:
train_generator = train_datagen.flow_from_directory(TRAIN_DIR,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,subset="training")
validation_generator = train_datagen.flow_from_directory(TRAIN_DIR,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,subset="validation")
Finally you can use both sets in your fit_generator() as follows (note that validation_steps counts batches, not samples):
history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8,
                                       validation_data=validation_generator,
                                       validation_steps=validation_generator.samples // BATCH_SIZE,
                                       steps_per_epoch=steps_per_epoch,
                                       shuffle=True, callbacks=callbacks_list)
Let me know if this solves your problem :)