TensorFlow Keras custom loss gets slow - python

I tried to define a custom loss function for an autoencoder (AE) following the Keras spec, i.e. a function that takes y, y_hat.
The loss is a combination of the MSE and the Frobenius norm of the Jacobian. Training is very fast as long as I do not return the sum of MSE and norm: returning ret slows training down considerably, while keeping all computations the same but not returning ret makes it fast again.
E.g. the version below is slow; just returning mse makes training fast again.
@tf.function
def orthogonal_loss(y, y_hat):
    """
    Computes the orthogonal loss, a combination of reconstruction loss and
    regularization of the orthogonality of the Jacobian.
    Args:
        y: input vector of shape (batch, dim)
        y_hat: reconstruction of y of shape (batch, dim)
    Returns: loss of MSE(y, y_hat) + scaling * || J'J - I * diag(J'J) ||_F
    """
    mse = tf.keras.losses.mean_squared_error(y, y_hat)
    with tf.GradientTape() as tape:
        z = ae.encoder(y)
        tape.watch(z)
        y_tilde = ae.decoder(z)
    # the Jacobian will be of shape (batch, output dim., latent dim.)
    jacobian = tape.batch_jacobian(y_tilde, z)
    # will use batched matrix mult., as the last two dims specify valid matrices
    jj = tf.matmul(jacobian, jacobian, transpose_a=True)
    # jj_diag = tf.linalg.diag_part(jj)
    # - tf.eye(128)
    ortho = tf.linalg.norm(jj, ord="fro", axis=(-2, -1))
    ret = mse + 0.0001 * ortho
    return ret
Any idea what causes this? The only explanation I can think of is that the gradient of the regularization term is expensive to compute and slows down the optimizer.
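One way to test that hypothesis: keep the Frobenius term in the reported loss but exclude it from backpropagation with tf.stop_gradient. If training becomes fast again, the slowdown comes from differentiating through the batch_jacobian call (a second-order gradient), not from computing the norm itself. A minimal sketch, assuming the same ae autoencoder as in the question; the function name is just illustrative:
@tf.function
def orthogonal_loss_no_jacobian_grad(y, y_hat):
    """Same loss value as above, but gradients do not flow through the Jacobian term."""
    mse = tf.keras.losses.mean_squared_error(y, y_hat)
    with tf.GradientTape() as tape:
        z = ae.encoder(y)
        tape.watch(z)
        y_tilde = ae.decoder(z)
    jacobian = tape.batch_jacobian(y_tilde, z)
    jj = tf.matmul(jacobian, jacobian, transpose_a=True)
    ortho = tf.linalg.norm(jj, ord="fro", axis=(-2, -1))
    # stop_gradient keeps the term in the reported loss value but prunes it
    # from the backward graph, so the optimizer never differentiates through it
    return mse + 0.0001 * tf.stop_gradient(ortho)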

Related

TensorFlow Batch Hessian

I'm building a neural network that must approximate some multivariate function, say f(x). The loss function is defined by how close the second derivative of the network is to that of f, so I must compute the Hessian of the network output with respect to x. I wrote a custom TensorFlow model that looks roughly like this:
class ApproximateModel(tf.keras.Model):

    @tf.function
    def f_true_hessian(self, x: tf.Tensor) -> tf.Tensor:
        # Some function that should return the actual Hessian
        return x

    def train_step(self, data):
        # x has shape (batch_size, dimension_x)
        x = data[0]
        # Calculate loss
        with tf.GradientTape() as second_tape:
            with tf.GradientTape() as first_tape:
                first_tape.watch(x)
                second_tape.watch(x)
                f = self(x, training=True)
            f_x = first_tape.gradient(f, x)
            second_tape.watch(f_x)
        f_jacobian = second_tape.jacobian(f_x, x)
        # f_jacobian has shape (batch_size, dimension_x, batch_size, dimension_x)
        # I want to get (batch_size, dimension_x, dimension_x) somehow..
        loss = tf.math.reduce_mean(
            tf.math.square(tf.reduce_sum(f_jacobian, axis=[1, 2]) - self.f_true_hessian(x)))
        return loss
For the interested reader, the application of this type of network is to approximate PDEs, as described here.
The code above works well when there is no batch dimension. I can't figure out how to get the Hessian when I have a batch of samples of x. How do I get my desired output of shape (batch_size, dimension_x, dimension_x), where the Hessian is computed per sample and the batch_size dimension is not mixed in?
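A minimal sketch of one way to get a per-sample Hessian of shape (batch_size, dimension_x, dimension_x): replace jacobian with batch_jacobian, which treats the leading dimension as independent samples and never materializes the cross-batch terms. This assumes f_true_hessian returns a tensor of shape (batch_size, dimension_x, dimension_x); the class name below is just illustrative:
class ApproximateModelBatched(tf.keras.Model):

    def train_step(self, data):
        x = data[0]  # shape (batch_size, dimension_x)
        with tf.GradientTape() as second_tape:
            second_tape.watch(x)
            with tf.GradientTape() as first_tape:
                first_tape.watch(x)
                f = self(x, training=True)
            f_x = first_tape.gradient(f, x)  # (batch_size, dimension_x)
        # per-sample Hessian, shape (batch_size, dimension_x, dimension_x)
        f_hessian = second_tape.batch_jacobian(f_x, x)
        loss = tf.math.reduce_mean(
            tf.math.square(f_hessian - self.f_true_hessian(x)))
        return loss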

How can I get my neural net to correctly do linear regression?

I used the code for the first neural net from the book Neural Networks and Deep Learning by Michael Nielsen, which was used for recognising handwritten digits. It uses stochastic gradient descent with mini-batches and the sigmoid activation function. I gave it one input neuron, two hidden neurons and one output neuron. I then gave it a set of data representing a straight line, basically a number of points between 0 and 1 where the input equals the output. No matter how I tweak the learning rate and the number of epochs, the network is never able to fit the line. Is that due to the fact that I am using the sigmoid activation function? If so, what other function can I use?
In the plot, the blue line represents the network's prediction and the green line the training data; the inputs for the predictions were just numbers between 0 and 3 with a step of 0.01.
Here's the code:
"""
network.py
~~~~~~~~~~
A module to implement the stochastic gradient descent learning
algorithm for a feedforward neural network. Gradients are calculated
using backpropagation. Note that I have focused on making the code
simple, easily readable, and easily modifiable. It is not optimized,
and omits many desirable features.
"""
#### Libraries
# Standard library
import random
# Third-party libraries
import numpy as np
from sklearn.datasets import make_regression
import matplotlib.pyplot as plt
class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network. For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron. The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1. Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        # creates a list of arrays of random numbers with mean 0 and variance 1;
        # these arrays represent the biases of each neuron in each layer, so one random
        # number is assigned per neuron and every array represents one layer of biases
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    # self always refers to an instance of the class
    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        # a is the vector of activations of the neurons
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a
    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent. The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs. The other non-optional parameters are
        self-explanatory. If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out. This is useful for
        tracking progress, but slows things down substantially."""
        if test_data: n_test = len(test_data)
        n = len(training_data)
        # this is done as many times as the number of epochs says -> that is how often the network is trained
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            # data is made into appropriately sized mini-batches
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
                for x, y in mini_batch:
                    print("Loss: ", (self.feedforward(x) - y)**2)
            if test_data:
                print("Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test))
            else:
                print("Epoch {0} complete".format(j))
    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        # nabla_b and nabla_w are the same lists of matrices as "biases" and
        # "weights" but filled with zeroes; thus they are reset to 0 for every mini_batch
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        # updates the weights and biases by subtracting the average of the sum of the
        # derivatives of the cost function wrt the biases/weights that were added for
        # every training example in the mini_batch
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]
    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x. ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        # make two lists filled with zeros in the same shapes as biases and weights
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]
        zs = []  # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            # multiplies the w matrix of each layer by the activation vector and adds the bias
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        # this calculates the output error
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        # this is the derivative of the cost function wrt the biases in the last layer
        nabla_b[-1] = delta
        # this is the derivative of the cost function wrt the weights in the last layer
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in range(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            # this is the vector of errors of layer -l
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            # fills the matrices nabla_b and nabla_w with the derivatives of the
            # cost function with respect to the biases and weights in layer -l
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)
    def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation."""
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        # returns the number of inputs that were predicted correctly
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)


#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))


def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))
The sigmoid activation function is meant for classification tasks, such as recognizing handwritten digits; it squashes every output into (0, 1). Linear regression is a regression task where the output should be continuous and unbounded. If you want the output layer to perform regression, use a linear activation on the last layer, which is the default activation for Keras Dense layers.
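For illustration only (this is not the code from the question), a minimal Keras sketch of a small regression network with a linear output layer, fit on the same kind of y = x data:
import numpy as np
import tensorflow as tf

# toy data: the target equals the input, points between 0 and 1
x = np.arange(0.0, 1.0, 0.01).reshape(-1, 1).astype("float32")
y = x.copy()

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(2, activation="sigmoid"),
    tf.keras.layers.Dense(1),  # no activation -> linear output, suitable for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=500, verbose=0)

# the output layer no longer squashes predictions through a sigmoid
print(model.predict(np.array([[0.25], [0.75]], dtype="float32")))
The hidden sigmoid layer is kept only to mirror the original architecture; for a purely linear mapping a single Dense(1) layer would already suffice.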

How to customize an LSTM loss function to only consider a given index range of prediction and target sequence?

I am currently working with an LSTM sequence-to-sequence model for time-domain signal prediction. From domain knowledge I know that the first part of the prediction (about 20%) can never be predicted correctly, since the required information is not available in the given input sequence. The remaining 80% of the predicted sequence is usually predicted quite well. To exclude the first 20% from the training optimization, it would be nice to define a loss function that basically operates on a given index range, like the NumPy code below:
start = int(0.2 * sequence_length)
stop = sequence_length

def mse(pred, target):
    """Mean squared error between two time series np.arrays."""
    return 1/target.shape[0] * np.sum((pred - target)**2)

def range_mse_loss(y_pred, y):
    return mse(y_pred[start:stop], y[start:stop])
How do I have to write this loss function so that it works with my preexisting Keras code, where the loss is simply given by model.compile(loss='mse')?
You can slice your tensors to keep just the last 80% of the data:
size = int(y_true.shape[0] * 0.8)  # for a 2D tensor, e.g. (100, 1)
loss_fn = tf.keras.losses.MeanSquaredError(name='mse')
loss_fn(y_true[-size:], y_pred[-size:])
You can also use the sample_weight argument of tf.keras.losses.MeanSquaredError(), passing an array of weights in which the first 20% of the weights are zero:
size = int(y_true.shape[0] * 0.8)  # for a 2D tensor, e.g. (100, 1)
zeros = tf.zeros((y_true.shape[0] - size,), dtype=tf.float32)
ones = tf.ones((size,), dtype=tf.float32)
weights = tf.concat([zeros, ones], 0)
loss_fn = tf.keras.losses.MeanSquaredError(name='mse')
loss_fn(y_true, y_pred, sample_weight=weights)
There is a caveat with the second solution: the final loss will be lower than with the first solution, because the first prediction values are weighted with zero but are not removed from the denominator n in the formula MSE = 1/n * sum((y - y_hat)^2).
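Note that both snippets above slice along the first axis, which inside a compiled Keras loss is the batch axis. If the first 20% should instead be dropped along the time axis of each predicted sequence, a small wrapper loss can be passed directly to model.compile. A minimal sketch, assuming y_true and y_pred have shape (batch, sequence_length, features); the helper name make_range_mse is just illustrative:
import tensorflow as tf

def make_range_mse(skip_fraction=0.2):
    # illustrative helper, not part of Keras
    def range_mse(y_true, y_pred):
        # drop the first skip_fraction of every sequence along the time axis
        seq_len = tf.shape(y_true)[1]
        start = tf.cast(tf.cast(seq_len, tf.float32) * skip_fraction, tf.int32)
        return tf.reduce_mean(tf.square(y_true[:, start:] - y_pred[:, start:]))
    return range_mse

# usage with the preexisting compile call
# model.compile(optimizer="adam", loss=make_range_mse(0.2))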
One workaround would be to mark the observations to be ignored as None/NaN and then override the train_step method. Following TensorFlow's tutorial on customizing train_step, you would do something like this:
@tf.function
def train_step(keras_model, data):
    print('custom train_step')
    # Unpack the data. Its structure depends on your model and
    # on what you pass to `fit()`.
    x, y = data
    with tf.GradientTape() as tape:
        y_pred = keras_model(x, training=True)  # Forward pass
        # masking nan values in observations, also assuming that targets are > 0.0
        mask = tf.greater(y, 0.0)
        true_y = tf.boolean_mask(y, mask)
        pred_y = tf.boolean_mask(y_pred, mask)
        # Compute the loss value
        # (the loss function is configured in `compile()`)
        loss = keras_model.compiled_loss(true_y, pred_y, regularization_losses=keras_model.losses)
    # Compute gradients
    trainable_vars = keras_model.trainable_variables
    gradients = tape.gradient(loss, trainable_vars)
    # Update weights
    keras_model.optimizer.apply_gradients(zip(gradients, trainable_vars))
    # Update metrics (includes the metric that tracks the loss)
    keras_model.compiled_metrics.update_state(true_y, pred_y)
    # Return a dict mapping metric names to current value
    return {m.name: m.result() for m in keras_model.metrics}
This will work for all the performance metrics you are tracking. An alternative would be to mask the NaNs inside the loss function, but that would be limited to that single loss function and would not affect any other loss functions or performance metrics.

What's the best way to access single gradients in a batch in TensorFlow?

I'm currently analyzing how gradients develop over the course of training a CNN using TensorFlow 2.x. What I want to do is compare the gradient of each individual example in a batch to the gradient of the whole batch. At the moment I use this simple code snippet for each training step:
[...]
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
[...]
# One training step
# x_train is a batch of input data, y_train the corresponding labels
def train_step(model, optimizer, x_train, y_train):
    # Process batch
    with tf.GradientTape() as tape:
        batch_predictions = model(x_train, training=True)
        batch_loss = loss_object(y_train, batch_predictions)
    batch_grads = tape.gradient(batch_loss, model.trainable_variables)
    # Do something with gradient of whole batch
    # ...

    # Process each data point in the current batch
    for index in range(len(x_train)):
        with tf.GradientTape() as single_tape:
            single_prediction = model(x_train[index:index+1], training=True)
            single_loss = loss_object(y_train[index:index+1], single_prediction)
        single_grad = single_tape.gradient(single_loss, model.trainable_variables)
        # Do something with gradient of single data input
        # ...

    # Use batch gradient to update network weights
    optimizer.apply_gradients(zip(batch_grads, model.trainable_variables))
    train_loss(batch_loss)
    train_accuracy(y_train, batch_predictions)
My main problem is that computation time explodes when I calculate each gradient individually, although these calculations should already have been done by TensorFlow when the batch gradient was computed. The reason is that GradientTape as well as compute_gradients always return a single aggregated gradient, no matter whether one or several data points were given, so this computation has to be repeated for each data point.
I know that I could build the batch gradient for the weight update from all the single gradients calculated for each data point, but that saves only a minor amount of computation time.
Is there a more efficient way to compute single gradients?
You can use the jacobian method of the gradient tape to get the Jacobian matrix, which will give you the gradients for each individual loss value:
import tensorflow as tf
# Make a random linear problem
tf.random.set_seed(0)
# Random input batch of ten four-vector examples
x = tf.random.uniform((10, 4))
# Random weights
w = tf.random.uniform((4, 2))
# Random batch label
y = tf.random.uniform((10, 2))
with tf.GradientTape() as tape:
    tape.watch(w)
    # Prediction
    p = x @ w
    # Loss
    loss = tf.losses.mean_squared_error(y, p)
# Compute Jacobian
j = tape.jacobian(loss, w)
# The Jacobian gives you the gradient for each loss value
print(j.shape)
# (10, 4, 2)
# Gradient of the loss wrt the weights for the first example
tf.print(j[0])
# [[0.145728424 0.0756840706]
# [0.103099883 0.0535449386]
# [0.267220169 0.138780832]
# [0.280130595 0.145485848]]
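The same idea carries over to a model with several trainable variables: if the loss is left unreduced (one value per example), tape.jacobian against model.trainable_variables returns, for each variable of shape s, a tensor of shape (batch,) + s holding the per-example gradients. A rough sketch along those lines (the helper name is illustrative, and the Jacobian can be memory-hungry for large models):
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)  # per-example losses, shape (batch,)

def per_example_grads(model, x_train, y_train):
    with tf.GradientTape() as tape:
        predictions = model(x_train, training=True)
        per_example_loss = loss_object(y_train, predictions)
    # one gradient per example and per trainable variable
    grads = tape.jacobian(per_example_loss, model.trainable_variables)
    # averaging over the batch axis recovers the usual batch gradient
    batch_grads = [tf.reduce_mean(g, axis=0) for g in grads]
    return grads, batch_grads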

Reconstruction loss on regression type of Variational Autoencoder

I'm currently working on a variant of the Variational Autoencoder in a sequential setting, where the task is to fit/recover a sequence of real-valued observation data (hence it is a regression problem).
I have built my model using tf.keras with eager execution enabled, and tensorflow_probability (tfp). Following the VAE concept, the generative net emits the distribution parameters of the observation data, which I model as a multivariate normal; therefore the outputs are the mean and logvar of the predicted distribution.
Regarding the training process, the first component of the loss is the reconstruction error, i.e. the log likelihood of the true observation under the predicted distribution emitted by the generative net. Here I use tfp.distributions, since it is fast and handy.
However, after training is done, marked by a considerably low loss value, my model does not seem to learn anything: its predictions are essentially flat across the time dimension (recall that the problem is sequential).
Nevertheless, as a sanity check, when I replace the log likelihood with an MSE loss (which is not justifiable when working on a VAE), it fits the data very well. So I conclude that there must be something wrong with the log likelihood term. Does anyone have a clue and/or a solution for this?
I have considered replacing the log likelihood with a cross-entropy loss, but I think that is not applicable in my case, since my problem is regression and the data can't be normalized into the [0, 1] range.
I have also tried annealing the KL term (i.e. weighting it with a constant < 1) while using the log likelihood as the reconstruction loss, but that didn't work either.
Here is my code snippet of the original (using log likelihood as reconstruction error) loss function:
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import tensorflow_probability as tfp
tfd = tfp.distributions
def loss(model, inputs):
    outputs, _ = SSM_model(model, inputs)
    # allocate the corresponding output components
    infer_mean = outputs[:, :, :latent_dim]  # mean of latent variable from inference net
    infer_logvar = outputs[:, :, latent_dim:(2 * latent_dim)]
    trans_mean = outputs[:, :, (2 * latent_dim):(3 * latent_dim)]  # mean of latent variable from transition net
    trans_logvar = outputs[:, :, (3 * latent_dim):(4 * latent_dim)]
    obs_mean = outputs[:, :, (4 * latent_dim):((4 * latent_dim) + output_obs_dim)]  # mean of observation from generative net
    obs_logvar = outputs[:, :, ((4 * latent_dim) + output_obs_dim):]
    target = inputs[:, :, 2:4]
    # transform logvar to std
    infer_std = tf.sqrt(tf.exp(infer_logvar))
    trans_std = tf.sqrt(tf.exp(trans_logvar))
    obs_std = tf.sqrt(tf.exp(obs_logvar))
    # computing loss at each time step
    time_step_loss = []
    for i in range(tf.shape(outputs)[0].numpy()):
        # distribution of each module
        infer_dist = tfd.MultivariateNormalDiag(infer_mean[i], infer_std[i])
        trans_dist = tfd.MultivariateNormalDiag(trans_mean[i], trans_std[i])
        obs_dist = tfd.MultivariateNormalDiag(obs_mean[i], obs_std[i])
        # log likelihood of observation
        likelihood = obs_dist.prob(target[i])  # shape = 1D = batch_size
        likelihood = tf.clip_by_value(likelihood, 1e-37, 1)
        log_likelihood = tf.log(likelihood)
        # KL of (q|p)
        kl = tfd.kl_divergence(infer_dist, trans_dist)  # shape = batch_size
        # the loss
        loss = -log_likelihood + kl
        time_step_loss.append(loss)
    time_step_loss = tf.convert_to_tensor(time_step_loss)
    overall_loss = tf.reduce_sum(time_step_loss)
    overall_loss = tf.cast(overall_loss, dtype='float32')
    return overall_loss
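One thing worth double-checking in the loop above is the likelihood term: the density of a continuous distribution can legitimately exceed 1, so clipping it to an upper bound of 1 caps the reconstruction reward, and log(prob(...)) is also less numerically stable than the distribution's built-in log_prob. A minimal sketch of that alternative, reusing the tensors from the loop above (a suggestion to try, not a confirmed fix):
# inside the per-time-step loop, replacing the prob / clip / log sequence
obs_dist = tfd.MultivariateNormalDiag(obs_mean[i], obs_std[i])
# log_prob evaluates the log density directly; it is not capped at 0 and
# stays numerically stable even for very small densities
log_likelihood = obs_dist.log_prob(target[i])   # shape = (batch_size,)
kl = tfd.kl_divergence(infer_dist, trans_dist)  # shape = (batch_size,)
loss = -log_likelihood + kl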
