I'm building a neural network that must approximate some multivariate function, say f(x). The loss function measures how close the second derivative of the network output is to the true Hessian of f, so I need to compute the Hessian of the network with respect to its input x. I wrote a custom TensorFlow model that looks roughly like this:
class ApproximateModel(tf.keras.Model):

    @tf.function
    def f_true_hessian(self, x: tf.Tensor) -> tf.Tensor:
        # Some function that should return the actual Hessian
        return x

    def train_step(self, data):
        # x.shape -> (batch_size, dimension_x)
        x = data[0]
        # Calculate loss
        with tf.GradientTape() as second_tape:
            second_tape.watch(x)
            with tf.GradientTape() as first_tape:
                first_tape.watch(x)
                f = self(x, training=True)
            f_x = first_tape.gradient(f, x)
            second_tape.watch(f_x)
        f_jacobian = second_tape.jacobian(f_x, x)
        # f_jacobian.shape -> (batch_size, dimension_x, batch_size, dimension_x)
        # I want to get (batch_size, dimension_x, dimension_x) somehow..
        loss = tf.math.reduce_mean(
            tf.math.square(tf.reduce_sum(f_jacobian, axis=[1, 2]) - self.f_true_hessian(x)))
        return loss
For the interested reader, the application of this type of network is to approximate PDEs, as described here.
The code above works well when there is no batch dimension. I can't figure out how to get the Hessian when I have a batch of samples of x. How do I get my desired output of shape (batch_size, dimension_x, dimension_x), where the Hessian is computed per sample over dimension_x, without the extra batch_size axes?
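One hedged sketch of a possible approach (not from the original post): tf.GradientTape.batch_jacobian computes the Jacobian per batch sample, so, assuming each output f[i] depends only on its own input x[i] (no cross-sample layers such as BatchNorm) and that f_true_hessian returns shape (batch_size, dimension_x, dimension_x), the tape block above could be adapted like this:
# Sketch only: batch_jacobian gives per-sample Jacobians, i.e. the
# (batch_size, dimension_x, dimension_x) Hessian directly.
with tf.GradientTape() as second_tape:
    second_tape.watch(x)
    with tf.GradientTape() as first_tape:
        first_tape.watch(x)
        f = self(x, training=True)
    # f_x.shape -> (batch_size, dimension_x)
    f_x = first_tape.gradient(f, x)
# f_hessian.shape -> (batch_size, dimension_x, dimension_x)
f_hessian = second_tape.batch_jacobian(f_x, x)
loss = tf.math.reduce_mean(tf.math.square(f_hessian - self.f_true_hessian(x)))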
Related
I tried to define a custom loss function for an AE according to the Keras spec, i.e. a function that takes y and y_hat.
The loss is a combination of the MSE and the Frobenius norm of the Jacobian. Training is very fast as long as I return only the MSE; as soon as I return the sum of the MSE and the norm (i.e. return ret), training slows down considerably. Not returning ret but keeping all computations the same makes the training fast again.
E.g. the version below is slow; just returning mse makes the training fast again.
@tf.function
def orthogonal_loss(y, y_hat):
    """
    Computes the orthogonal loss, a combination of reconstruction loss and
    regularization of the orthogonality of the Jacobian.

    Args:
        y: input vector of shape (batch, dim)
        y_hat: reconstruction of y of shape (batch, dim)

    Returns: loss of MSE(y, y_hat) + scaling * || J'J - I * diag(J'J) ||_F
    """
    mse = tf.keras.losses.mean_squared_error(y, y_hat)
    with tf.GradientTape() as tape:
        z = ae.encoder(y)
        tape.watch(z)
        y_tilde = ae.decoder(z)
    # the Jacobian will be of shape (batch, output dim., latent dim.)
    jacobian = tape.batch_jacobian(y_tilde, z)
    # will use the batch matrix mult., as the last two dims specify valid matrices
    jj = tf.matmul(jacobian, jacobian, transpose_a=True)
    # jj_diag = tf.linalg.diag_part(jj)
    # - tf.eye(128)
    ortho = tf.linalg.norm(jj, ord="fro", axis=(-2, -1))
    ret = mse + 0.0001 * ortho
    return ret
Any idea what causes this phenomenon? The only explanation I can think of is that the more complex gradient slows down the optimizer.
I'm trying to combine a few "networks" into one final loss function. I'm wondering if what I'm doing is "legal"; as of now I can't seem to make this work. I'm using TensorFlow Probability:
The main problem is here:
# Get gradients of the loss wrt the weights.
gradients = tape.gradient(loss, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights])
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights]))
This gives me None gradients and throws on apply_gradients:
AttributeError: 'list' object has no attribute 'device'
Full code:
univariate_gmm = tfp.distributions.MixtureSameFamily(
    mixture_distribution=tfp.distributions.Categorical(probs=phis_true),
    components_distribution=tfp.distributions.Normal(loc=mus_true, scale=sigmas_true)
)
x = univariate_gmm.sample(n_samples, seed=random_seed).numpy()
dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(buffer_size=1024).batch(64)

m_phis = keras.layers.Dense(2, activation=tf.nn.softmax)
m_mus = keras.layers.Dense(2)
m_sigmas = keras.layers.Dense(2, activation=tf.nn.softplus)

def neg_log_likelihood(y, phis, mus, sigmas):
    a = tfp.distributions.Normal(loc=mus[0], scale=sigmas[0]).prob(y)
    b = tfp.distributions.Normal(loc=mus[1], scale=sigmas[1]).prob(y)
    c = np.log(phis[0]*a + phis[1]*b)
    return tf.reduce_sum(-c, axis=-1)

# Instantiate a logistic loss function that expects integer targets.
loss_fn = neg_log_likelihood

# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)

# Iterate over the batches of the dataset.
for step, y in enumerate(dataset):
    yy = np.expand_dims(y, axis=1)
    # Open a GradientTape.
    with tf.GradientTape() as tape:
        # Forward pass.
        phis = m_phis(yy)
        mus = m_mus(yy)
        sigmas = m_sigmas(yy)
        # Loss value for this batch.
        loss = loss_fn(yy, phis, mus, sigmas)
    # Get gradients of the loss wrt the weights.
    gradients = tape.gradient(loss, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights])
    # Update the weights of our linear layer.
    optimizer.apply_gradients(zip(gradients, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights]))
    # Logging.
    if step % 100 == 0:
        print("Step:", step, "Loss:", float(loss))
There are two separate problems to take into account.
1. Gradients are None:
Typically this happens if non-TensorFlow operations are executed in the code watched by the GradientTape. Concretely, this concerns the computation of np.log in your neg_log_likelihood function: np.log leaves the TensorFlow graph, so no gradient can flow through it. If you replace np.log with tf.math.log, the gradients should compute. It is a good habit not to use NumPy in your "internal" TensorFlow components, since this avoids errors like this; for most NumPy operations there is a good TensorFlow substitute.
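For concreteness, here is the loss function from the question with only that substitution applied:
def neg_log_likelihood(y, phis, mus, sigmas):
    a = tfp.distributions.Normal(loc=mus[0], scale=sigmas[0]).prob(y)
    b = tfp.distributions.Normal(loc=mus[1], scale=sigmas[1]).prob(y)
    # tf.math.log stays inside the TensorFlow graph, so gradients can flow through it
    c = tf.math.log(phis[0]*a + phis[1]*b)
    return tf.reduce_sum(-c, axis=-1)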
2. apply_gradients for multiple trainables:
This mainly has to do with the input that apply_gradients expects. You have two options:
First option: Call apply_gradients three times, each time with different trainables
optimizer.apply_gradients(zip(m_phis_gradients, m_phis.trainable_weights))
optimizer.apply_gradients(zip(m_mus_gradients, m_mus.trainable_weights))
optimizer.apply_gradients(zip(m_sigmas_gradients, m_sigmas.trainable_weights))
The alternative would be to create a flat list of (gradient, variable) tuples, as indicated in the TensorFlow documentation (quote: "grads_and_vars: List of (gradient, variable) pairs.").
This would mean calling something like
optimizer.apply_gradients(
    [
        # the * unpacks each zip into individual (gradient, variable) pairs,
        # giving the flat list of pairs that apply_gradients expects
        *zip(m_phis_gradients, m_phis.trainable_weights),
        *zip(m_mus_gradients, m_mus.trainable_weights),
        *zip(m_sigmas_gradients, m_sigmas.trainable_weights),
    ]
)
Both options require you to split the gradients. You can either compute all gradients in one call and index into the result (gradients[0], ...), or simply compute the gradients separately, as in the example below (a sketch of the indexing variant follows after it). Note that computing them separately requires persistent=True in your GradientTape.
# [...]
# Open a GradientTape.
with tf.GradientTape(persistent=True) as tape:
    # Forward pass.
    phis = m_phis(yy)
    mus = m_mus(yy)
    sigmas = m_sigmas(yy)
    # Loss value for this batch.
    loss = loss_fn(yy, phis, mus, sigmas)
# Get gradients of the loss wrt the weights.
m_phis_gradients = tape.gradient(loss, m_phis.trainable_weights)
m_mus_gradients = tape.gradient(loss, m_mus.trainable_weights)
m_sigmas_gradients = tape.gradient(loss, m_sigmas.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(
    [
        *zip(m_phis_gradients, m_phis.trainable_weights),
        *zip(m_mus_gradients, m_mus.trainable_weights),
        *zip(m_sigmas_gradients, m_sigmas.trainable_weights),
    ]
)
# [...]
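For completeness, a sketch of the indexing variant mentioned above: because the sources are passed as a nested list, tape.gradient returns a nested list with the same structure, so gradients[0] holds the gradients of m_phis, and so on.
# Single gradient call with nested sources, then index into the result.
gradients = tape.gradient(
    loss, [m_phis.trainable_weights, m_mus.trainable_weights, m_sigmas.trainable_weights])
optimizer.apply_gradients(
    [
        *zip(gradients[0], m_phis.trainable_weights),
        *zip(gradients[1], m_mus.trainable_weights),
        *zip(gradients[2], m_sigmas.trainable_weights),
    ]
)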
I am currently working with an LSTM sequence-to-sequence model for time-domain signal prediction. From domain knowledge I know that the first part of the prediction (about 20%) can never be predicted correctly, since the information required is not available in the given input sequence. The remaining 80% of the predicted sequence is usually predicted quite well. In order to exclude the first 20% from the training optimization, it would be nice to define a loss function that operates on a given index range, like the NumPy code below:
start = int(0.2 * sequence_length)
stop = sequence_length

def mse(pred, target):
    """Mean squared error between two time series np.arrays."""
    return 1/target.shape[0] * np.sum((pred - target)**2)

def range_mse_loss(y_pred, y):
    return mse(y_pred[start:stop], y[start:stop])
How do I have to write this loss function so that it works with my preexisting Keras code, where the loss is simply given by model.compile(loss='mse')?
You can slice your tensors to just the last 80% of the data.
size = int(y_true.shape[0] * 0.8)  # e.g. for a 2D tensor of shape (100, 1)
loss_fn = tf.keras.losses.MeanSquaredError(name='mse')
loss_fn(y_true[-size:], y_pred[-size:])  # keep only the last 80%
You can also use the sample_weight argument of tf.keras.losses.MeanSquaredError(), passing an array of weights in which the first 20% of the weights are zero.
size = int(y_true.shape[0] * 0.8)  # e.g. for a 2D tensor of shape (100, 1)
zeros = tf.zeros((y_true.shape[0] - size,), dtype=tf.float32)
ones = tf.ones((size,), dtype=tf.float32)
weights = tf.concat([zeros, ones], 0)
loss_fn = tf.keras.losses.MeanSquaredError(name='mse')
loss_fn(y_true, y_pred, sample_weight=weights)
There is a caveat with the second solution: the final loss will be lower than with the first solution, because the first 20% of the values are weighted with zero but are not removed from the denominator in MSE = 1/n * sum((y - y_hat)^2).
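If that matters, one way around it is a custom loss that averages only over the kept steps. A minimal sketch, assuming y_true and y_pred have shape (batch, timesteps, features) and mask is a 0/1 tensor of shape (batch, timesteps); the name masked_mse is illustrative:
import tensorflow as tf

def masked_mse(y_true, y_pred, mask):
    # mask is 1.0 for time steps that should contribute to the loss, 0.0 otherwise
    mask = tf.cast(mask, y_pred.dtype)
    # per-step squared error, averaged over the feature dimension
    step_err = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    # divide by the number of kept steps instead of the full length n
    return tf.reduce_sum(step_err * mask) / tf.reduce_sum(mask)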
One workaround would be to mark those observations as None/NaN and then override the train_step method. Following TensorFlow's tutorial on customizing train_step, you would do something like this:
@tf.function
def train_step(keras_model, data):
    print('custom train_step')
    # Unpack the data. Its structure depends on your model and
    # on what you pass to `fit()`.
    x, y = data
    with tf.GradientTape() as tape:
        y_pred = keras_model(x, training=True)  # Forward pass
        # masking nan values in observations, also assuming that targets are > 0.0
        mask = tf.greater(y, 0.0)
        true_y = tf.boolean_mask(y, mask)
        pred_y = tf.boolean_mask(y_pred, mask)
        # Compute the loss value
        # (the loss function is configured in `compile()`)
        loss = keras_model.compiled_loss(true_y, pred_y, regularization_losses=keras_model.losses)
    # Compute gradients
    trainable_vars = keras_model.trainable_variables
    gradients = tape.gradient(loss, trainable_vars)
    # Update weights
    keras_model.optimizer.apply_gradients(zip(gradients, trainable_vars))
    # Update metrics (includes the metric that tracks the loss)
    keras_model.compiled_metrics.update_state(true_y, pred_y)
    # Return a dict mapping metric names to current value
    return {m.name: m.result() for m in keras_model.metrics}
This will work for all the performance metrics you are tracking. An alternative would be to mask the NaNs inside the loss function itself, but then the masking is limited to that one loss function and does not carry over to other loss functions or performance metrics.
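A minimal sketch of that alternative, assuming NaNs mark the values to exclude (the function name nan_masked_mse is illustrative):
def nan_masked_mse(y_true, y_pred):
    # keep only the positions where the target is not NaN
    mask = tf.math.logical_not(tf.math.is_nan(y_true))
    return tf.keras.losses.mean_squared_error(
        tf.boolean_mask(y_true, mask), tf.boolean_mask(y_pred, mask))

# model.compile(optimizer='adam', loss=nan_masked_mse)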
I'm currently analyzing how gradients develop over the course of training a CNN, using TensorFlow 2.x. What I want to do is compare each per-example gradient in a batch to the gradient resulting from the whole batch. At the moment I use this simple code snippet for each training step:
[...]
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
[...]
# One training step
# x_train is a batch of input data, y_train the corresponding labels
def train_step(model, optimizer, x_train, y_train):
    # Process batch
    with tf.GradientTape() as tape:
        batch_predictions = model(x_train, training=True)
        batch_loss = loss_object(y_train, batch_predictions)
    batch_grads = tape.gradient(batch_loss, model.trainable_variables)
    # Do something with gradient of whole batch
    # ...

    # Process each data point in the current batch
    for index in range(len(x_train)):
        with tf.GradientTape() as single_tape:
            single_prediction = model(x_train[index:index+1], training=True)
            single_loss = loss_object(y_train[index:index+1], single_prediction)
        single_grad = single_tape.gradient(single_loss, model.trainable_variables)
        # Do something with gradient of single data input
        # ...

    # Use batch gradient to update network weights
    optimizer.apply_gradients(zip(batch_grads, model.trainable_variables))
    train_loss(batch_loss)
    train_accuracy(y_train, batch_predictions)
My main problem is that computation time explodes when calculating each of the gradients individually, although these calculations should already have been done by TensorFlow when calculating the batch's gradient. The reason is that GradientTape as well as compute_gradients always return a single aggregated gradient, no matter whether one or several data points were given, so this computation has to be repeated for each data point.
I know that I could build the batch gradient used for the weight update from the single per-example gradients, but that saves only a minor amount of computation time.
Is there a more efficient way to compute per-example gradients?
You can use the jacobian method of the gradient tape to get the Jacobian matrix, which will give you the gradients for each individual loss value:
import tensorflow as tf

# Make a random linear problem
tf.random.set_seed(0)
# Random input batch of ten four-vector examples
x = tf.random.uniform((10, 4))
# Random weights
w = tf.random.uniform((4, 2))
# Random batch label
y = tf.random.uniform((10, 2))
with tf.GradientTape() as tape:
    tape.watch(w)
    # Prediction
    p = x @ w
    # Loss
    loss = tf.losses.mean_squared_error(y, p)
# Compute Jacobian
j = tape.jacobian(loss, w)
# The Jacobian gives you the gradient for each loss value
print(j.shape)
# (10, 4, 2)
# Gradient of the loss wrt the weights for the first example
tf.print(j[0])
# [[0.145728424 0.0756840706]
#  [0.103099883 0.0535449386]
#  [0.267220169 0.138780832]
#  [0.280130595 0.145485848]]
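As a side note (not part of the original answer): nothing is lost relative to a plain tape.gradient call, because summing the per-example gradients over the batch axis reproduces the gradient of the summed loss:
# Summing over the batch axis gives the gradient of the summed per-example losses,
# i.e. what tape.gradient(loss, w) would return for this snippet.
batch_grad = tf.reduce_sum(j, axis=0)
print(batch_grad.shape)
# (4, 2)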
I am trying to devise a custom loss function for a variational autoencoder in Keras with two parts: a reconstruction loss and a divergence loss. However, instead of using the Gaussian prior for the divergence loss, I want to sample randomly from the input and then compute the divergence loss based on the sampled inputs. The problem is that I do not know how to sample inputs from the complete dataset and then compute a loss with respect to them. The encoder model is:
x_input = Input((input_size,))
enc1 = Dense(encoder_size[0], activation='relu')(x_input)
drop = Dropout(keep_prob)(enc1)
enc2 = Dense(encoder_size[1], activation='relu')(drop)
drop = Dropout(keep_prob)(enc2)
mu = Dense(latent_dim, activation='linear', name='encoder_mean')(drop)
encoder = Model(x_input,mu)
The structure of the loss should be:
# the input is the placeholder for the complete input
def loss(x, y, input):
    reconstruction_loss = mean_squared_error(x, y)
    sample_num = 100
    sample_input = sample_from_input(input, sample_num)
    sample_encoded = encoder.predict(sample_input)  # <-- this would not work with a placeholder
    sample_prior = gaussian(mean=0, std=1)
    # perform KL divergence between sample_encoded and sample_prior
I have not found anything similar to this. It would be great if somebody could point me in the right direction.
There are a couple of problems in your code. First, when you create a custom loss function, Keras expects it to take only the two parameters y_true and y_pred, so you will not be able to pass the input parameter explicitly. If you wish to pass additional parameters, you have to use a nested function (a closure).
Next, you cannot pass TensorFlow placeholders to the predict function; you have to pass NumPy arrays. So I would recommend rewriting sample_from_input so that it samples from a set of file paths, reads the files, and returns a NumPy array of the file data. Then pass the file paths where your data is stored as the input_data parameter.
I have included only the relevant parts of the code.
def custom_loss(input_data):
    def loss(y_true, y_pred):
        reconstruction_loss = mean_squared_error(y_true, y_pred)
        sample_num = 100
        sample_input = sample_from_input(input_data)
        # sample_input is a Numpy array
        sample_encoded = encoder.predict(sample_input)
        sample_prior = gaussian(mean=0, std=1)
        # perform KL divergence between sample_encoded and sample_prior
        divergence_loss = ...  # Your logic returning a numeric value
        return reconstruction_loss + divergence_loss
    return loss

encoder.compile(optimizer='adam', loss=custom_loss('<<input_data_path>>'))