I have run a neural network in a Jupyter notebook and I want to plot the results (loss vs. epoch number). I can run the model without problems, but then even a simple matplotlib plot kills the kernel.
Here is the code that creates the model and data I want to use:
from keras import models
from keras import layers
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# Change review into array
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))  # create all-zero matrix
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # If review has word, change that index to 1
    return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# Create model
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) # two int. layers w/16 hidden units each
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid')) # outputs the scalar prediction
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
# Create mini-test data
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
# fit model
history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val))
# Get values for plot
history_dict = history.history
history_dict.keys()
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epoch_num = [i for i in range(1,21)]
This works as expected. However, when I try to plot the data with the code below, I get a message: "The kernel appears to have died. It will restart automatically."
plt.plot(epoch_num, loss_values, 'bo', label='Training loss')
plt.plot(epoch_num, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
I can restart the kernel and make matplotlib plots, but as soon as I try to plot after running the model, the error appears again. I have tried updating keras, tensorflow, matplotlib, and numpy, to no effect. Can anyone provide insight into why this happens and suggest a solution?
I used the latest TensorFlow and imported Keras from TensorFlow, and everything worked as expected. I changed the first three lines as shown below. Full code is here
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras import layers
The following plot shows epoch versus loss
I've recently been trying to use TensorFlow in my projects, and I attempted to follow the Basic Regression Using Keras guide for regression. However, I am having issues with fitting the line to the data (loss and prediction vs. data). I've normalized my data and run it through 1000 epochs, and the data itself seems fine. Here is the data and the code I've used. Does anyone know why the prediction is so different from the data?
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
train_dataset = df.sample(frac=0.8, random_state = 0)
test_dataset = df.drop(train_dataset.index)
train_dataset.describe().transpose()
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('Max')
test_labels = test_features.pop('Max')
train_dataset.describe().transpose()[['mean','std']]
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(np.array(train_features))
print(normalizer.mean.numpy())
first = np.array(train_features[:1])
with np.printoptions(precision=2, suppress=True):
    print('First example:', first)
    print()
    print('Normalized:', normalizer(first).numpy())
date = np.array(train_features['Date Lifted'])
date_normalizer = layers.Normalization(input_shape=[1,], axis=None)
date_normalizer.adapt(date)
date_model = tf.keras.Sequential([
    date_normalizer,
    layers.Dense(units=1)
])
date_model.summary()
date_model.predict(date[:10])
date_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='mean_absolute_error')
%%time
history = date_model.fit(
    train_features['Date Lifted'],
    train_labels,
    epochs=100,
    # Suppress logging.
    verbose=0,
    # Calculate validation results on 20% of the training data.
    validation_split=0.2)
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 1000])
    plt.xlabel('Epoch')
    plt.ylabel('Error [Max]')
    plt.legend()
    plt.grid(True)
plot_loss(history)
test_results = {}
test_results['date_model'] = date_model.evaluate(
    test_features['Date Lifted'],
    test_labels, verbose=0)
x = tf.linspace(0, 250, 251)
y = date_model.predict(x)
def plot_horsepower(x, y):
    plt.scatter(train_features['Date Lifted'], train_labels, label='Data')
    plt.plot(x, y, color='k', label='Predictions')
    plt.xlabel('Date Lifted')
    plt.ylabel('Max')
    plt.legend()
plot_horsepower(x, y)
Your model only has a single neuron?
I imagine you'll see better results if you use a more complicated model, with more layers and more units per layer.
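For example, something along the lines of the deeper model in the same TensorFlow regression tutorial (a rough sketch; the 64-unit layers and learning rate are just starting points, not tuned values):
date_model = tf.keras.Sequential([
    date_normalizer,
    layers.Dense(64, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])
date_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss='mean_absolute_error')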
So I ran this code last night and it worked fine; it plotted the training loss as a function of epoch. However, when I tried to run it today (after changing the batch size from 1 to 8) it gave me a 'plt not found' error. I then moved the plotting below the matplotlib import line and it worked. This suggests the import has to come before the plotting, but how was I able to plot last night with the plot commands before the import?
This is just part of the complete code, yes, but the rest wasn't relevant. This was in a Jupyter notebook too, so perhaps I had run the code before without the plot lines inside the tf.device block, and the import was still cached from that earlier run or something? (See the small check after the code below.)
with tf.device(device_name):
    inputx = Input(shape=(7,))
    x = Dense(4, activation='elu', name='x1')(inputx)
    x = Dense(16, activation='elu', name='x2')(x)
    x = Dense(25, activation='elu', name='x3')(x)
    x = Dense(10, activation='elu', name='x4')(x)
    xke = Dense(5, name='x5')(x)
    model = Model(inputx, xke)
    adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
    model.compile(optimizer=adam,
                  loss=['mean_squared_error', 'mean_squared_error', 'mean_squared_error', 'mean_squared_error', 'mean_squared_error'],
                  loss_weights=[1, 1, 1, 1, 1])
    model.summary()
    history = model.fit(X_train, y_train, batch_size=1, epochs=30, verbose=1)
    plt.plot(history.history['loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train'], loc='upper left')
    plt.show()
    from sklearn.metrics import mean_squared_error as mse
    train_pred = model.predict(X_train)
    train_rmse_sk = np.sqrt(mse(y_train, train_pred, multioutput="raw_values"))
    print("The training rmse value is: ", train_rmse_sk, "\n")
import matplotlib.pyplot as plt
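A minimal way to check that hypothesis in a notebook (two hypothetical cells, same kernel session):
# Cell 1, executed at some earlier point in the session
import matplotlib.pyplot as plt

# Cell 2, executed later with no import of its own
plt.plot([0, 1, 2], [1, 2, 3])  # still works: `plt` persists in the kernel namespace until restart
plt.show()
After a kernel restart, running Cell 2 on its own raises NameError: name 'plt' is not defined, which matches the 'plt not found' error.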
I am trying to plot the actual vs. predicted values of the neural network I created using Keras.
What I want exactly is a scatter plot of my data together with the best-fit curve, for both the training and the testing data sets.
Below is the code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
import os
#Load the dataset from excel
data = pd.read_csv('C:\\Excel Files\\Neural Network\\diabetes1.csv', sep=';')
#Viewing the Data
data.head(5)
import seaborn as sns
data['Outcome'].value_counts().plot(kind = 'bar')
# Split into input (x) and output (y) variables
predictors = data.iloc[:, 0:8]
response = data.iloc[:, 8]
# Create training and testing vars
X_train, X_test, y_train, y_test = train_test_split(predictors, response, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
# Define the keras model - Layer by Layer sequential model
kerasmodel = Sequential()
kerasmodel.add(Dense(12, input_dim=8, activation='relu')) # first hidden layer: 12 neurons, 8 inputs, relu activation
kerasmodel.add(Dense(8, activation='relu')) # relu to avoid the vanishing/exploding gradient problem
kerasmodel.add(Dense(1, activation='sigmoid')) # output layer: sigmoid since the output is binary
# Note: weight and bias initialization are done by Keras defaults ('glorot_uniform')
# Compiling model
kerasmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fitting model
kerasmodel.fit(X_train, y_train, epochs=50, batch_size=10)
# Train accuracy
_, accuracy = kerasmodel.evaluate(X_train, y_train)
print('Train Accuracy: %.2f' % (accuracy*100))
I want to have plots like these:
From what I understood from your question, this will give you a scatter plot of the actual_y values and a line plot of the predicted_y values:
import matplotlib.pyplot as plt
x = []
plt.plot(predicted_y, linestyle='dotted')
for i in range(0, len(actual_y)):
    x.append(i)
plt.scatter(x, actual_y)
plt.show()
Predict for both x_train and x_test with the model, then draw sns.regplot() (import seaborn as sns) with the actual y values on the horizontal axis and the predicted values on the vertical axis, as two separate plots for the train and test sets. It will plot the scatter of points plus a regression line; if the slope is close to 1 and the intercept close to 0, the model fits very well.
For more detail, it is also worth calculating scipy.stats.linregress for both the train and test sets to get the slope and intercept.
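For example, a minimal sketch for the training set, reusing kerasmodel, X_train, and y_train from the question's code (the test set would be handled the same way):
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt

y_pred_train = kerasmodel.predict(X_train).ravel()  # flatten the (n, 1) predictions to (n,)

sns.regplot(x=y_train, y=y_pred_train)  # scatter of points plus a fitted regression line
plt.xlabel('Actual y')
plt.ylabel('Predicted y')
plt.show()

slope, intercept, r, p, se = stats.linregress(y_train, y_pred_train)
print('slope:', slope, 'intercept:', intercept)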
Thanks #Soroush Mirzaei
Thank you so much! Your suggestion has solved my problem.
May I ask whether it is possible to plot two data sets using sns.regplot() with a single best-fit curve for both?
For example:
I have two data sets and two fit lines, one for each, as in the image below:
[1]: https://i.stack.imgur.com/avGR1.png
Instead, I want the two data sets, the orange and the blue one, with one fit curve through both of them.
Below is the code that I am using:
sns.regplot(y_train,y_pred_train, label="Training Data")
plt.xlabel('Actual G*')
plt.ylabel('Predicted G*')
plt.title('Actual vs. Predicted')
plt.legend(loc="upper left")
sns.regplot(y_test,y_pred_test, label="Testing Data")
I have a Tensorflow model already trained in my notebook, and I want to plot accuracy and loss after that.
Here is my code:
myGene = trainGenerator(2, '/content/data/membrane/train', 'image', 'label',
                        data_gen_args, save_to_dir=None)
model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5',
                                   monitor='loss', verbose=1, save_best_only=True)
model.fit_generator(myGene, steps_per_epoch=2000,
                    epochs=5, callbacks=[model_checkpoint])
Is there a way to plot anything from this? I tried with matplotlib and it doesn't work:
import matplotlib.pyplot as plt
plt.plot(history['accuracy'])
plt.plot(history['loss'])
Try this:
history = model.fit_generator(myGene,
                              steps_per_epoch=2000,
                              epochs=5, callbacks=[model_checkpoint])
and then, for plotting:
plt.plot(history.history['accuracy'])
plt.plot(history.history['loss'])
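A slightly fuller sketch with labels; note that 'accuracy' only shows up in history.history if the model was compiled with metrics=['accuracy']:
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['loss'], label='loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()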
I'm quite new to neural nets and tried writing my own code to classify images. I've been using the Concrete Crack Images for Classification dataset (https://data.mendeley.com/datasets/5y9wdsg2zt/2) to classify whether an image shows a crack or is free of defects. From that dataset I randomly extracted 2,000 images: 1,400 for my training set and 300 each for my validation and test sets. Half of the images are positive (show a crack) and the other half are negative (free of defects).
For classification I'm using VGG16 pre-trained on ImageNet. Down below you can see my full code, which I put together using different tutorials that tried to solve a similar task.
Unfortunately it cannot identify a single crack image and classifies everything as negative/free of defects.
I've tried different batch sizes, numbers of epochs, and numbers of images, and tried it without pre-trained weights, but nothing seems to work and I have absolutely no idea why. I'd really appreciate some help, so thank you in advance!
If there are any questions left feel free to ask.
import tensorflow as tf
import os
import numpy as np
import keras
from keras.preprocessing import image
from PIL import Image
import os.path, sys
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import scipy as sp
from tensorflow.keras.applications import vgg16
from tensorflow.keras.preprocessing.image import load_img, img_to_array, array_to_img, ImageDataGenerator
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras import optimizers
from tensorflow.keras.utils import *
import requests
from io import BytesIO
import random
import pickle
import itertools
from sklearn.metrics import classification_report, confusion_matrix
# load data
PATH = '/Volumes/ConcreteCrackImages'
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
test_dir = os.path.join(PATH, 'test')
train_negative_dir = os.path.join(train_dir, 'Negative') # directory with our training negative pictures
train_crack_dir = os.path.join(train_dir, 'Positive') # directory with our training crack pictures
validation_negative_dir = os.path.join(validation_dir, 'Negative') # directory with our validation negative pictures
validation_crack_dir = os.path.join(validation_dir, 'Positive') # directory with our validation crack pictures
test_negative_dir = os.path.join(test_dir, 'Negative') # directory with our test negative pictures
test_crack_dir = os.path.join(test_dir, 'Positive') # directory with our test crack pictures
# understand the data
num_negative_tr = len(os.listdir(train_negative_dir))
num_crack_tr = len(os.listdir(train_crack_dir))
num_negative_val = len(os.listdir(validation_negative_dir))
num_crack_val = len(os.listdir(validation_crack_dir))
num_negative_test = len(os.listdir(test_negative_dir))
num_crack_test = len(os.listdir(test_crack_dir))
total_train = num_negative_tr + num_crack_tr
total_val = num_negative_val + num_crack_val
total_test = num_negative_test + num_crack_test
print('total training negative images:', num_negative_tr)
print('total training crack images:', num_crack_tr)
print('total validation negative images:', num_negative_val)
print('total validation crack images:', num_crack_val)
print('total test negative images:', num_negative_test)
print('total test crack images:', num_crack_test)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
print("Total test images:", total_test)
# variables for pre-processing
batch_size = 32
epochs = 40
IMG_HEIGHT = 224
IMG_WIDTH = 224
# data preparation
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
test_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our test data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                            directory=train_dir,
                                                            shuffle=True,
                                                            target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                            class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                               directory=validation_dir,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='binary')
test_data_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
                                                          directory=test_dir,
                                                          target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                          class_mode='binary')
# visualize training images
sample_training_images, _ = next(train_data_gen)
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
# =============================================================================
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()

plotImages(sample_training_images[:5])
# =============================================================================
# create the model/ import vgg16
vgg_conv = vgg16.VGG16(weights='imagenet', include_top=False, input_shape = (224, 224, 3))
# Freeze all layers except the last 8
for layer in vgg_conv.layers[:-8]:
    layer.trainable = False
# Check the trainable status of the individual layers
for layer in vgg_conv.layers:
    print(layer, layer.trainable)
### MODIFY VGG STRUCTURE ###
x = vgg_conv.output
x = GlobalAveragePooling2D()(x)
x = Dense(1, activation="sigmoid")(x)
model = Model(vgg_conv.input, x)
model.compile(loss = "binary_crossentropy", optimizer = optimizers.SGD(lr=0.00001, momentum=0.9), metrics=["accuracy"])
model.summary()
# train the model
history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
# visualize training results
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# Evaluate the model on the test data using `evaluate`
print('\nEvaluate on test data')
results = model.evaluate(test_data_gen, verbose=1)
print('test loss, test acc:', results)
#Confusion Matrix and Classification Report
Y_pred = model.predict(test_data_gen)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
print(confusion_matrix(test_data_gen.classes, y_pred))
print('Classification Report')
target_names = ['Negative', 'Crack']
print(classification_report(test_data_gen.classes, y_pred, target_names=target_names))
There are a couple of issues you can check.
Since you are using VGG with ImageDataGenerator, you have to make sure the image data generator does the same preprocessing that the pretrained VGG model requires. VGG is trained using imagenet_utils.preprocess_input with mode set to "caffe". There are three modes in total (caffe, tf, and torch); different models are pretrained with different preprocessing.
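For example, one way to do that (a sketch: vgg16.preprocess_input wraps imagenet_utils.preprocess_input with mode='caffe', so it can replace the rescale=1./255 generators):
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Use VGG16's own preprocessing instead of rescale=1./255
train_image_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
validation_image_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
test_image_generator = ImageDataGenerator(preprocessing_function=preprocess_input)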
When you instantiate the VGG model, you set include_top to False, then take the output of VGG, apply a global pooling layer, and add a single Dense layer for the output. If you dig into the source code of the VGG implementation, the top is not just the softmax layer but also the fully connected (FC) layers. The FC layers are what build an abstraction over the extracted VGG features. If you don't have enough FC layers, your model is not complex enough to learn the feature space well.
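For instance, a sketch of a head with a couple of extra FC layers (the 256/64 unit counts are arbitrary starting points, not tuned values):
x = vgg_conv.output
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)  # extra FC layers to build abstraction over the VGG features
x = Dense(64, activation='relu')(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(vgg_conv.input, x)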
You can at least try these two changes to see if they help.