Keras model consistently underestimates the target - python

I don't understand why my Keras model is underestimating the target. I include a minimal example below. If I simplify the model architecture, the predictions get closer to the true values. But what confuses me is this: if the complex model overfits, why aren't its predictions extremely close to the training targets instead of being systematically off like that? (The plot is for the training data.)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from sklearn.metrics import mean_squared_error
def create_dataset(num_series=1, num_steps=1000, period=500, mu=1, sigma=0.3):
    noise = np.random.normal(mu, sigma, size=(num_series, num_steps))
    sin_minumPi_Pi = np.sin(np.tile(np.linspace(-np.pi, np.pi, period), int(num_steps / period)))
    sin_Zero_2Pi = np.sin(np.tile(np.linspace(0, 2 * np.pi, period), int(num_steps / period)))
    pattern = np.concatenate((np.tile(sin_minumPi_Pi.reshape(1, -1),
                                      (int(np.ceil(num_series / 2)), 1)),
                              np.tile(sin_Zero_2Pi.reshape(1, -1),
                                      (int(np.floor(num_series / 2)), 1))
                              ),
                             axis=0
                             )
    target = noise + pattern
    return target[0]
avail=create_dataset(mu=5)
window_size = 7
def getdata(data, window_size):
    X, y = np.array([1] * window_size), np.array([])
    for i in range(window_size, len(data)):
        X = np.vstack((X, data[i - window_size:i]))
        y = np.append(y, data[i:i + 1])
    return X[1:], y
X,y = getdata(avail,window_size)
def train_model(X, y, a_dim=100, epoch=50, batch_size=32, d=0.2):
    model = Sequential()
    model.add(Dense(a_dim, activation='relu', input_dim=X.shape[1]))
    model.add(Dropout(d))
    model.add(Dense(a_dim, activation='relu'))
    model.add(Dropout(d))
    model.add(Dense(a_dim, activation='relu'))
    model.add(Dropout(d))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(X, y, epochs=epoch, batch_size=batch_size, verbose=2)
    return model
model = train_model(X,y)
plt.plot(model.predict(X)[:,0])
plt.plot(y)
plt.show()

It is because your model has been built for this behaviour.
The main purpose of a model is to end up with an optimized function that fits any unseen data as well as possible. Aiming for this, your model learns from the training samples. The problem is that during training your model tends to overfit on those training samples, losing its generalization ability, i.e. its ability to perform well on unseen data.
To avoid this we use certain techniques, one of them being dropout.
You used it as well:
model.add(Dropout(d))
with the default parameter d=0.2:
def train_model(X,y,a_dim=100,epoch=50,batch_size=32,d=0.2):
As I wrote above, this is there to avoid overfitting on your training data, and that is why your model consistently underestimates it while getting a better estimate on unseen data.
Passing a different dropout value to your train_model() function, you will get a better fit to your training data:
model = train_model(X, y, d=0.0)
plt.plot(model.predict(X)[:,0])
plt.plot(y)
plt.show()
Out: (resulting plot omitted)
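As a quick numerical check (my own sketch, not part of the original answer), you can compare the training-set MSE of the two settings; mean_squared_error is already imported in the question's code:
model_d02 = train_model(X, y)           # default dropout d=0.2
model_d00 = train_model(X, y, d=0.0)    # no dropout
# Training-set error with and without dropout; the d=0.0 model should fit the training data more closely
print(mean_squared_error(y, model_d02.predict(X)[:, 0]))
print(mean_squared_error(y, model_d00.predict(X)[:, 0]))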

Related

Incompatible shapes in Stellargraph

I'm trying to implement a small prototype of the GCN model using the Stellargraph library. I have my StellarGraph graph object ready, and I'm trying to solve a multi-class, multi-label classification problem. This means I'm trying to predict more than one column (19 exactly); each column is encoded as either 0 or 1.
Here is what I've done:
from sklearn.model_selection import train_test_split
from stellargraph.mapper import FullBatchNodeGenerator
train_subjects, test_subjects = train_test_split(nodelist, test_size = .25)
generator = FullBatchNodeGenerator(graph, method="gcn")
from stellargraph.layer import GCN
train_gen = generator.flow(train_subjects['ID'], train_subjects.drop(['ID'], axis = 1))
gcn = GCN(layer_sizes=[16, 16], activations=["relu", "relu"], generator=generator, dropout=0.5)
from tensorflow.keras import layers, optimizers, losses, metrics, Model
x_inp, x_out = gcn.in_out_tensors()
predictions = layers.Dense(units = 1, activation="sigmoid")(x_out)
from tensorflow.keras.metrics import Precision as Precision
model = Model(inputs=x_inp, outputs=predictions)
model.compile(
    optimizer=optimizers.Adam(learning_rate=0.01),
    loss=losses.categorical_crossentropy,
    metrics=[Precision()])
val_gen = generator.flow(test_subjects['ID'], test_subjects.drop(['ID'], axis = 1))
from tensorflow.keras.callbacks import EarlyStopping
es_callback = EarlyStopping(monitor="val_precision", patience=200, restore_best_weights=True)
history = model.fit(
    train_gen,
    epochs=200,
    validation_data=val_gen,
    verbose=2,
    shuffle=False,
    callbacks=[es_callback])
I have 271045 edges and 16354 nodes in total, including 12265 training nodes. The issue I'm getting is a shape mismatch from Keras, which reads as follows. I suspect it's due to passing multiple columns as target columns; I've tried the model with only one column (class) and it worked perfectly.
InvalidArgumentError: Incompatible shapes: [1,12265] vs. [1,233035]
[[node LogicalAnd_1 (defined at tmp/ipykernel_52/2745570431.py:7) ]] [Op:__inference_train_function_1405]
It's worth mentioning that 233035 = 12265 (number of training nodes) times 19 (number of classes). Any idea what is going wrong here?
I figured out the problem.
It was a newbie mistake: I initialized the Dense classification layer with 1 unit instead of 19 (the number of classes).
I just needed to fix that line to:
predictions = layers.Dense(units = 19, activation="sigmoid")(x_out)
Have a nice day!
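For completeness, a minimal sketch of how the corrected head slots into the question's pipeline (same names and settings as in the question; only the unit count changes):
# Output layer sized to the number of target columns: one sigmoid per label
predictions = layers.Dense(units=19, activation="sigmoid")(x_out)
model = Model(inputs=x_inp, outputs=predictions)
model.compile(
    optimizer=optimizers.Adam(learning_rate=0.01),
    loss=losses.categorical_crossentropy,
    metrics=[Precision()])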

How to implement Batch Normalization on tensorflow with Keras as a high-level API

BatchNormalization (BN) operates slightly differently when in training and in inference. In training, it uses the average and variance of the current mini-batch to scale its inputs; this means that the exact result of the application of batch normalization depends not only on the current input, but also on all other elements of the mini-batch. This is clearly not desirable when in inference mode, where we want a deterministic result. Therefore, in that case, a fixed statistic of the global average and variance over the entire training set is used.
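As a small numerical sketch of the two modes (my own illustration, assuming gamma=1, beta=0 and a small epsilon):
import numpy as np

batch = np.array([1.0, 2.0, 3.0, 4.0])
eps = 1e-3
# Training mode: normalise with the statistics of the current mini-batch
train_out = (batch - batch.mean()) / np.sqrt(batch.var() + eps)
# Inference mode: normalise with the moving averages accumulated during training
moving_mean, moving_var = 0.0, 1.0   # hypothetical accumulated values
infer_out = (batch - moving_mean) / np.sqrt(moving_var + eps)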
In Tensorflow, this behavior is controlled by a boolean switch training that needs to be specified when calling the layer, see https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization. How do I deal with this switch when using Keras high-level API? Am I correct in assuming that it is dealt with automatically, depending whether we are using model.fit(x, ...) or model.predict(x, ...)?
To test this, I have written this example. We start with a random distribution and we want to classify whether the input is positive or negative. However, we also have a test dataset coming from a different distribution where the inputs are displaced by 2 (and consequently the labels check whether x>2).
import numpy as np
from math import ceil
from tensorflow.python.data import Dataset
from tensorflow.python.keras import Input, Model
from tensorflow.python.keras.layers import Dense, BatchNormalization
np.random.seed(18)
xt = np.random.randn(10_000, 1)
yt = np.array([[int(x > 0)] for x in xt])
train_data = Dataset.from_tensor_slices((xt, yt)).shuffle(10_000).repeat().batch(32).prefetch(2)
xv = np.random.randn(100, 1)
yv = np.array([[int(x > 0)] for x in xv])
valid_data = Dataset.from_tensor_slices((xv, yv)).repeat().batch(32).prefetch(2)
xs = np.random.randn(100, 1) + 2
ys = np.array([[int(x > 2)] for x in xs])
test_data = Dataset.from_tensor_slices((xs, ys)).repeat().batch(32).prefetch(2)
x = Input(shape=(1,))
a = BatchNormalization()(x)
a = Dense(8, activation='sigmoid')(a)
a = BatchNormalization()(a)
y = Dense(1, activation='sigmoid')(a)
model = Model(inputs=x, outputs=y, )
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_data, epochs=10, steps_per_epoch=ceil(10_000 / 32), validation_data=valid_data,
          validation_steps=ceil(100 / 32))
zs = model.predict(test_data, steps=ceil(100 / 32))
print(sum([ys[i] == int(zs[i] > 0.5) for i in range(100)]) / 100)
Running the code prints the value 0.5, meaning that half the examples are labeled properly. This is what I would expect if the system was using the global statistics on the training set to implement BN.
If we change the BN layers to read
x = Input(shape=(1,))
a = BatchNormalization()(x, training=True)
a = Dense(8, activation='sigmoid')(a)
a = BatchNormalization()(a, training=True)
y = Dense(1, activation='sigmoid')(a)
and run the code again, we find 0.87. Always forcing the training state changes the percentage of correct predictions. This is consistent with the idea that model.predict(x, ...) is now using the mini-batch statistics to implement BN, and is therefore able to slightly "correct" the mismatch between the source distributions of the training and test data.
Is that correct?
If I'm understanding your question correctly, then yes, keras does automatically manage training vs inference behavior based on fit vs predict/evaluate. The flag is called learning_phase, and it determines the behavior of batch norm, dropout, and potentially other things. The current learning phase can be seen with keras.backend.learning_phase(), and set with keras.backend.set_learning_phase().
https://keras.io/backend/#learning_phase
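A minimal sketch of inspecting and forcing the phase with that API (note that in more recent TF releases set_learning_phase is deprecated in favour of passing training=... when calling the layer or model):
import keras.backend as K

print(K.learning_phase())   # 0 = test/inference phase, 1 = training phase
K.set_learning_phase(1)     # force training-mode behaviour (batch statistics in BN, dropout active)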

Keras dynamic graphs

It is known that Keras is based on static graphs. However, it seems to me that it is not really the case since I can make my graph implementation dynamic.
Here is a simple example proving my claim:
import keras.backend as K
import numpy as np
from keras.models import Model
from keras.layers import Dense, Input, Dropout
from keras.losses import mean_squared_error
from keras.datasets import mnist  # needed for mnist.load_data() in get_mnist()
def get_mnist():
    np.random.seed(1234)  # set seed for deterministic ordering
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_all = np.concatenate((x_train, x_test), axis=0)
    Y = np.concatenate((y_train, y_test), axis=0)
    X = x_all.reshape(-1, x_all.shape[1] * x_all.shape[2])
    p = np.random.permutation(X.shape[0])
    X = X[p].astype(np.float32) * 0.02
    Y = Y[p]
    return X, Y
X, Y = get_mnist()
drop = K.variable(0.2)
input = Input(shape=(784,))
x = Dropout(rate=drop.value)(input)
x = Dense(128, activation="relu", name="encoder_layer")(x)
decoder = Dense(784, activation="relu", name="decoder_layer")(x)
autoencoder = Model(inputs=input, outputs=decoder)
autoencoder.compile(optimizer='adadelta', loss= mean_squared_error)
autoencoder.fit(X, X, batch_size=256, epochs=300)
K.set_value(drop, 0.5)
autoencoder.fit(X, X, batch_size=256, epochs=300)
It is obvious that we can change the value of drop at any time, even after compiling the model.
If it is a static graph, how am I able to do so?
Am I missing the point?
In case I do, what is the real interpretation of Dynamic Graphs?
It's true that you can change drop at any time, but that doesn't mean Keras supports a dynamic graph. You are most likely used to seeing a neural network node described as a linear function followed by an activation function; by stacking these nodes you get a neural network. Reasoning from that picture, dropout appears to take nodes out dynamically. However, that is not how Keras implements dropout. Dropout sets a node's output to zero, which has the same effect as setting all of the weights of the respective node to zero: either way, the next layer receives no output from the dropped node. The first approach requires a dynamic graph, whereas the second works with both a dynamic and a static graph, and the latter is what Keras uses. Thus, there is no need for a dynamic graph for this operation.
A dynamic graph means that the model itself changes. The dropout operation simply doesn't change the graph architecture (number of nodes, number of layers, etc.) after the initial construction of the graph.
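As an illustration of that masking idea, here is my own sketch of inverted dropout with rate 0.5 (not the actual Keras internals):
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5
x = np.array([1.0, 2.0, 3.0, 4.0])         # outputs of a layer
mask = rng.random(x.shape) >= rate         # pick which units to drop this step
y = np.where(mask, x / (1.0 - rate), 0.0)  # dropped units output zero, survivors are rescaled
# The tensor shapes never change, so the graph stays static.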
Certainly not, because the dropout layer doesn't really affect the network architecture.

Get loss values for each training instance - Keras

I want to get the loss value for each instance as the model trains.
history = model.fit(..)
For example, the code above returns the loss value for each epoch, not for each mini-batch or instance.
What is the best way to do this? Any suggestions?
There is exactly what you are looking for at the end of this official Keras documentation page: https://keras.io/callbacks/#callback
Here is the code to create a custom callback:
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

model = Sequential()
model.add(Dense(10, input_dim=784, kernel_initializer='uniform'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

history = LossHistory()
model.fit(x_train, y_train, batch_size=128, epochs=20, verbose=0, callbacks=[history])
print(history.losses)
# outputs
'''
[0.66047596406559383, 0.3547245744908703, ..., 0.25953155204159617, 0.25901699725311789]
'''
If you want to get loss values for each batch, you might want to use model.train_on_batch inside a generator. It's hard to provide a complete example without knowing your dataset, but you will have to break your dataset into batches and feed them one by one:
def make_batches(...):
    ...

batches = make_batches(...)
batch_losses = [model.train_on_batch(x, y) for x, y in batches]
It's a bit more complicated with single instances. You can, of course, train on 1-sized batches, though this will most likely thrash your optimiser (by maximising gradient variance) and significantly degrade performance. Besides, since loss functions are evaluated outside of Python's domain, there is no direct way to hijack the computation without tinkering with the C/C++ and CUDA sources. Even then, the backend itself evaluates the loss batch-wise (benefitting from highly vectorised matrix operations), so you would severely degrade performance by forcing it to evaluate the loss on each instance. Simply put, hacking the backend will only (probably) help you reduce GPU memory transfers (compared to training on 1-sized batches from the Python interface). If you really want per-instance scores, I would recommend training on batches and evaluating on instances (this way you avoid issues with high variance and reduce expensive gradient computations, since gradients are only estimated during training):
def make_batches(batchsize, x, y):
    ...

batchsize = n
batches = make_batches(n, ...)
batch_instances = [make_batches(1, x, y) for x, y in batches]
losses = [
    (model.train_on_batch(x, y), [model.test_on_batch(*inst) for inst in instances])
    for (x, y), instances in zip(batches, batch_instances)
]
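make_batches is left as a stub above; one possible in-memory implementation matching the second signature (my assumption, not the author's) could be:
def make_batches(batchsize, x, y):
    # Yield consecutive (x, y) slices of size batchsize (hypothetical helper)
    for i in range(0, len(x), batchsize):
        yield x[i:i + batchsize], y[i:i + batchsize]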
One solution is to calculate the loss function between the training targets and the predictions from the training inputs. In the case of loss = mean_squared_error and three-dimensional outputs (i.e. image width x height x channels):
model.fit(train_in, train_out, ...)
pred = model.predict(train_in)
loss = np.add.reduce(np.square(train_out - pred), axis=(1, 2, 3))  # total squared error for each sample
loss = loss / (pred.shape[1] * pred.shape[2] * pred.shape[3])      # mean over each sample's entries
np.savetxt("loss.txt", loss)                                       # save the per-sample losses to file
After combining resources from here and here I came up with the following code. Maybe it will help you. The idea is that you can override the Callback class from Keras and then use the on_batch_end method to check the loss value from the logs that Keras supplies automatically to that method.
Here is working code for an NN with that particular function built in. Maybe you can start from here -
import numpy as np
import pandas as pd
import seaborn as sns
import os
import matplotlib.pyplot as plt
import time
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback
# fix random seed for reproducibility
seed = 155
np.random.seed(seed)
# load pima indians dataset
# download directly from website
dataset = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data",
                      header=None).values
X_train, X_test, Y_train, Y_test = train_test_split(dataset[:,0:8], dataset[:,8], test_size=0.25, random_state=87)
class NBatchLogger(Callback):
    def __init__(self, display=100):
        '''
        display: Number of batches to wait before outputting loss
        '''
        self.seen = 0
        self.display = display

    def on_batch_end(self, batch, logs={}):
        self.seen += logs.get('size', 0)
        if self.seen % self.display == 0:
            print('\n{0}/{1} - Batch Loss: {2}'.format(self.seen, self.params['samples'],
                                                       logs.get('loss')))
out_batch = NBatchLogger(display=1000)
np.random.seed(seed)
my_first_nn = Sequential() # create model
my_first_nn.add(Dense(5, input_dim=8, activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=1000, verbose=0, batch_size=128,
                                     callbacks=[out_batch], initial_epoch=0)
Please let me know if this is the sort of thing you were looking for.

How to decide the size of layers in Keras' Dense method?

Below is a simple example of a multi-class classification task with the IRIS data.
import seaborn as sns
import numpy as np
from sklearn.cross_validation import train_test_split
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.regularizers import l2
from keras.utils import np_utils
#np.random.seed(1335)
# Prepare data
iris = sns.load_dataset("iris")
iris.head()
X = iris.values[:, 0:4]
y = iris.values[:, 4]
# Make test and train set
train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.5, random_state=0)
################################
# Evaluate Keras Neural Network
################################
# Make ONE-HOT
def one_hot_encode_object_array(arr):
    '''One hot encode a numpy array of objects (e.g. strings)'''
    uniques, ids = np.unique(arr, return_inverse=True)
    return np_utils.to_categorical(ids, len(uniques))
train_y_ohe = one_hot_encode_object_array(train_y)
test_y_ohe = one_hot_encode_object_array(test_y)
model = Sequential()
model.add(Dense(16, input_shape=(4,),
                activation="tanh",
                W_regularizer=l2(0.001)))
model.add(Dropout(0.5))
model.add(Dense(3, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# Actual modelling
# If you increase the number of epochs the accuracy will increase until it drops at a
# certain point: epoch 50 gives accuracy 0.99, which drops to 0.977 by
# epoch 70
hist = model.fit(train_X, train_y_ohe, verbose=0, nb_epoch=100, batch_size=1)
score, accuracy = model.evaluate(test_X, test_y_ohe, batch_size=16, verbose=0)
print("Test fraction correct (NN-Score) = {:.2f}".format(score))
print("Test fraction correct (NN-Accuracy) = {:.2f}".format(accuracy))
My question is how do people usually decide the size of layers?
For example based on code above we have:
model.add(Dense(16, input_shape=(4,),
                activation="tanh",
                W_regularizer=l2(0.001)))
model.add(Dense(3, activation='sigmoid'))
Here the first parameter of Dense is 16 and the second is 3.
Why do the two layers use different values for Dense?
How do we choose the best value for Dense?
Basically it is just trial and error. Those are called hyperparameters and should be tuned on a validation set (split from your original data into train/validation/test).
Tuning just means trying different combinations of parameters and keeping the one with the lowest loss or best accuracy on the validation set, depending on the problem.
There are two basic methods:
Grid search: for each parameter, decide a range and steps within that range, like 8 to 64 neurons in powers of two (8, 16, 32, 64), and try each combination of the parameters. This obviously requires an exponential number of models to be trained and tested and takes a lot of time.
Random search: do the same, but just define a range for each parameter and try random sets of parameters drawn from a uniform distribution over each range. You can try as many parameter sets as you want, for as long as you can. This is just an informed random guess (see the sketch at the end of this answer).
Unfortunately there is no other way to tune such parameters. As for layers having different numbers of neurons, that could come from the tuning process, or you can also see it as dimensionality reduction, like a compressed version of the previous layer.
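A rough sketch of the random-search idea applied to the hidden-layer size, reusing the question's IRIS setup (train_X, train_y_ohe); the candidate sizes, epoch count, and validation split are arbitrary choices for illustration, and the newer epochs argument is used:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

candidate_units = [8, 16, 32, 64, 128]
best_units, best_val_loss = None, np.inf
for units in np.random.choice(candidate_units, size=3, replace=False):
    m = Sequential()
    m.add(Dense(int(units), input_shape=(4,), activation='tanh'))
    m.add(Dense(3, activation='sigmoid'))
    m.compile(loss='categorical_crossentropy', optimizer='adam')
    # Hold out part of the training data as a validation set for model selection
    h = m.fit(train_X, train_y_ohe, validation_split=0.2, epochs=50, batch_size=16, verbose=0)
    val_loss = h.history['val_loss'][-1]
    if val_loss < best_val_loss:
        best_units, best_val_loss = int(units), val_loss
print(best_units, best_val_loss)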
There is no known way to determine a good network structure just from the number of inputs or outputs. It depends on the number of training examples, the batch size, the number of epochs, basically on every significant parameter of the network.
Moreover, a high number of units can introduce problems like overfitting and exploding gradients. On the other hand, a lower number of units can cause a model to have high bias and low accuracy. Once again, it depends on the size of the data used for training.
Sadly, it comes down to trying different values and seeing which give you the best adjustments. You may choose the combination that gives you the lowest loss and validation loss values, as well as the best accuracy for your dataset, as said in the previous answer.
You could make the number of units proportional to the number of classes, something like:
# Build the model
model = Sequential()
model.add(Dense(num_classes * 8, input_shape=(shape_value,), activation = 'relu' ))
model.add(Dropout(0.5))
model.add(Dense(num_classes * 4, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes * 2, activation = 'relu'))
model.add(Dropout(0.2))
#Output layer
model.add(Dense(num_classes, activation = 'softmax'))
The model above shows an example of a classification system. num_classes is the number of different categories the system has to choose from. For instance, in the iris dataset used above, we have:
Iris Setosa
Iris Versicolour
Iris Virginica
num_classes = 3
However, this could lead to worse results than other values. We need to adjust the parameters to the training dataset by making several different tries and then analysing the results, seeking the best combination of parameters.
My suggestion is to use EarlyStopping(); then check how many epochs were actually run and the accuracy together with the test loss.
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
lrd = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.8, min_lr=1e-6)
es = EarlyStopping(verbose=1, patience=2)
his = classifier.fit(X_train, y_train, epochs=500, batch_size = 128, validation_split=0.1, verbose = 1, callbacks=[lrd,es])
