I decided to write my own loss function, Structural Similarity Loss, following this article:
https://arxiv.org/pdf/1910.08711.pdf
While testing different segmentation models with different losses, I ran into numerical instability: my self-written SegNet model sometimes outputs NaN during training, after which the loss becomes NaN as well, while with the other losses (BCE, dice loss, focal loss) everything is stable. After printing the variables in detail, I found that the loss value is still reasonable right before y_pred becomes NaN, so my assumption is that the loss gradients are computed incorrectly, but it's not clear how to fix it.
def ssl_loss(y_real, y_pred, window_size=11, eps=0.01):
    beta = 0.1
    Lambda = 0.5
    # input size (B, C, H, W); C = 1, because we compare monochrome segmentations
    # note: window_size is currently unused, the blur kernel is hard-coded to 11
    y_real, y_pred = y_real.to(device).squeeze(), y_pred.to(device).squeeze()
    bce_matrix = y_pred - y_real * y_pred + torch.log(1 + torch.exp(-y_pred))
    y_pred = torch.sigmoid(y_pred)
    blurer = T.GaussianBlur(kernel_size=(11, 11), sigma=(1.5, 1.5))
    mu_y = blurer(y_real)
    sigma_y = blurer((y_real - mu_y) ** 2)
    mu_p = blurer(y_pred)
    sigma_p = blurer((y_pred - mu_p) ** 2)
    errors = torch.abs((y_real - mu_y + eps) / (torch.sqrt(sigma_y) + eps)
                       - (y_pred - mu_p + eps) / (torch.sqrt(sigma_p) + eps)).squeeze()
    f_n_c = (errors > beta * errors.max()).int()
    M = f_n_c.sum(dim=(1, 2)).unsqueeze(1).unsqueeze(2)
    ssl_matrix = errors * f_n_c * bce_matrix / M
    loss = Lambda * bce_matrix.mean() + (1 - Lambda) * ssl_matrix.mean()
    return loss
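One place where backward() can produce NaN even though the loss value itself looks fine is torch.sqrt: its derivative is unbounded as the argument approaches zero, and adding eps after the square root does not protect the gradient. A minimal illustration, plus a possible guard via torch.clamp (offered as an assumption, not a confirmed fix for this model):

import torch

x = torch.zeros(3, requires_grad=True)
torch.sqrt(x).sum().backward()
print(x.grad)  # tensor([inf, inf, inf]) -> becomes NaN after further multiplications

# possible guard: clamp the variance before the sqrt instead of adding eps after it,
# e.g. torch.sqrt(torch.clamp(sigma_y, min=eps)) for the sigma_y / sigma_p terms above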
And here's the meaningful part of my train function:
for epoch in range(epochs):
    avg_loss = 0
    model.train()
    for X_batch, Y_batch in data_tr:
        X_batch = X_batch.to(device)
        Y_batch = Y_batch.to(device)
        opt.zero_grad()
        Y_pred = model(X_batch)
        loss = loss_fn(Y_batch, Y_pred)
        loss.backward()
        opt.step()
        avg_loss += loss / len(data_tr)
    scheduler.step()
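One way to pin down which backward operation first produces the NaN (a debugging sketch, assuming a reasonably recent PyTorch version) is autograd anomaly detection; with it enabled, loss.backward() raises an error whose traceback names the forward op whose gradient blew up:

import torch

# Debug-only: this slows training noticeably, so disable it once the culprit is found.
torch.autograd.set_detect_anomaly(True)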
I'm using this custom loss function for CCC:
def ccc(y_true, y_pred):
    ccc = (ccc_v(y_true, y_pred) + ccc_a(y_true, y_pred)) / 2
    return 1 - ccc

def ccc_v(y_true, y_pred):
    x = y_true[:, 0]
    y = y_pred[:, 0]
    x_mean = K.mean(x, axis=0)
    y_mean = K.mean(y, axis=0)
    covar = K.mean((x - x_mean) * (y - y_mean))
    x_var = K.var(x)
    y_var = K.var(y)
    ccc = (2.0 * covar) / (x_var + y_var + (x_mean + y_mean) ** 2)
    return ccc

def ccc_a(y_true, y_pred):
    x = y_true[:, 1]
    y = y_pred[:, 1]
    x_mean = K.mean(x, axis=0)
    y_mean = K.mean(y, axis=0)
    covar = K.mean((x - x_mean) * (y - y_mean))
    x_var = K.var(x)
    y_var = K.var(y)
    ccc = (2.0 * covar) / (x_var + y_var + (x_mean + y_mean) ** 2)
    return ccc
Currently the loss function ccc returns a scalar. It is split into two functions (ccc_v and ccc_a) because I also use them as metrics.
I've read in the Keras docs and in this question that a custom loss function should return a list of losses, one for each sample.
First question: my model trains even though the loss function returns a scalar. Is that bad? How is training different when the loss function outputs a scalar instead of a list of per-sample losses?
Second question: how can I rewrite my loss function to return a list of losses? I know I should avoid means and sums, but in my case I don't think that's possible, because there isn't a single global mean: there is one in the numerator for the covariance and a couple in the denominator for the variances.
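(For reference, here is a minimal sketch of the difference I mean, with made-up function names: the built-in style returns one loss value per sample, shape (batch_size,), and Keras applies the final reduction itself, whereas my ccc collapses everything to a single scalar.)

from keras import backend as K

def per_sample_mse(y_true, y_pred):
    # shape (batch_size,): one value per sample, Keras averages them itself
    return K.mean(K.square(y_pred - y_true), axis=-1)

def scalar_mse(y_true, y_pred):
    # shape (): already reduced over the whole batch
    return K.mean(K.square(y_pred - y_true))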
If you're using TensorFlow, there are built-in losses you can use directly, for example:
tf.keras.losses.mse
tf.keras.losses.mae
tf.keras.losses.Huber()
# Define the loss function
def loss_function(w1, b1, w2, b2, features=borrower_features, targets=default):
    predictions = model(w1, b1, w2, b2)
    # Pass targets and predictions to the cross entropy loss
    return keras.losses.binary_crossentropy(targets, predictions)

# If you're using categorical_crossentropy, return its losses instead.
# Convert your image into a single np.array for input
# Build your softmax model
# Define a sequential model
model = keras.Sequential()
# Define a hidden layer
model.add(keras.layers.Dense(16, activation='relu', input_shape=(784,)))
# Define the output layer
model.add(keras.layers.Dense(4, activation='softmax'))
# Compile the model
model.compile('SGD', loss='categorical_crossentropy', metrics=['accuracy'])
# Complete the fitting operation
train_data = train_data.reshape((50, 784))
# Fit the model
model.fit(train_data, train_labels, validation_split=0.2, epochs=3)
# Reshape test data
test_data = test_data.reshape(10, 784)
# Evaluate the model
model.evaluate(test_data, test_labels)
I am very new to NNs and TensorFlow. Recently I have been reading up on the Keras implementation of the variational autoencoder, and I found two versions of the loss function:
Version 1:
def vae_loss(x, x_decoded_mean):
    recon_loss = original_dim * objectives.mse(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return recon_loss + kl_loss
Version 2:
def vae_loss(x, x_decoded_mean):
    recon_loss = objectives.mse(x, x_decoded_mean)
    kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return recon_loss + kl_loss
If my understanding is correct, version 1 is a sum of the losses and version 2 is the mean loss across all samples in the same batch. So does the scale of the loss affect the learning result? I tried testing them out, and it largely affects the scale of my latent variables. Why is this, and which form of the loss function is correct?
Update to my question:
If I also multiply the KL loss by original_dim,
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) * original_dim
    return xent_loss + kl_loss
the latent distribution and the decoded output (images omitted here) suggest that the encoder output does not contain any information. I am using the MNIST dataset and the example from https://github.com/vvkv/Variational-Auto-Encoders/blob/master/Variational%2BAuto%2BEncoders.ipynb
Summing versus averaging the loss over the examples in a batch simply scales all loss terms proportionally; an equivalent change would be adjusting the learning rate. The important thing is that your loss magnitude multiplied by your learning rate does not lead to unstable learning.
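To make the equivalence concrete, a toy sketch with made-up numbers (assuming plain SGD with no adaptive optimizer state):

# scaling the loss by a constant c scales every gradient by c,
# so it is interchangeable with scaling the learning rate by c
c, lr, grad = 784.0, 1e-3, 0.25
step_scaled_loss = lr * (c * grad)   # update size when minimizing c * L with learning rate lr
step_scaled_lr = (c * lr) * grad     # update size when minimizing L with learning rate c * lr
assert step_scaled_loss == step_scaled_lr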
I am working on a dummy example with generated heartbeats, and want to first use a VAE to encode the heartbeats and afterwards train a simple classifier.
The problem is that when I increase beta above 0.01, the reconstructions become nonsense (see the first image), and when beta is low I get a normal autoencoder output with no disentanglement (second image).
I believe the problem may be in my KL divergence or my VAE loss function, but I can't seem to find it.
In my encoder I do the reparameterization like this:
enc = self.encoder(x,batch_size, x_lenghts)
mu = self.enc2mean(enc)
logv = self.enc2logv(enc)
std = torch.exp(0.5*logv)
z = torch.randn([batch_size,1, self.encoder_hidden_sizes[-1] * (int(self.bidirectional)+1)]).to(self.device)
z = z * std + mu
And I define the VAE loss as:
def VAE_loss(x, reconstruction, mu, logvar, batch_size, latent_dim, beta=0):
    mse = F.mse_loss(x, reconstruction)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    KLD /= (batch_size * latent_dim)
    return mse + beta * KLD
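For comparison, a convention used in many VAE implementations (only a sketch under the assumption of batch-first tensors, not a confirmed fix for this model) sums both terms over the non-batch dimensions and then averages over the batch, which keeps the reconstruction and KL terms on comparable scales:

import torch
import torch.nn.functional as F

def vae_loss_summed(x, reconstruction, mu, logvar, beta=1.0):
    # per-sample sums over feature / latent dimensions, then a mean over the batch
    recon = F.mse_loss(reconstruction, x, reduction='none').flatten(1).sum(dim=1)
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).flatten(1).sum(dim=1)
    return (recon + beta * kld).mean()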
Full standalone code to reproduce the results is here.
Any insights are appreciated!
I have only one output for my model, but I would like to combine two different loss functions:
def get_model():
    # create the model here
    model = Model(inputs=image, outputs=output)
    alpha = 0.2
    model.compile(loss=[mse, gse],
                  loss_weights=[1 - alpha, alpha],
                  ...)
but it complains that I need to have two outputs because I defined two losses:
ValueError: When passing a list as loss, it should have one entry per model outputs.
The model has 1 outputs, but you passed loss=[<function mse at 0x0000024D7E1FB378>, <function gse at 0x0000024D7E1FB510>]
Can I possibly write my final loss function without having to create another loss function (because that would restrict me from changing the alpha outside the loss function)?
How do I do something like (1-alpha)*mse + alpha*gse?
Update:
Both my loss functions have the same signature as any built-in Keras loss function: they take y_true and y_pred and return a tensor of losses (which can be reduced to a scalar with K.mean()). I believe how these loss functions are defined shouldn't affect the answer as long as they return valid losses.
def gse(y_true, y_pred):
    # some tensor operation on y_pred and y_true
    return K.mean(K.square(y_pred - y_true), axis=-1)
Specify a custom function for the loss:
model = Model(inputs=image, outputs=output)
alpha = 0.2
model.compile(
    loss=lambda y_true, y_pred: (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred),
    ...)
Or, if you don't want an ugly lambda, make it into an actual function:
def my_loss(y_true, y_pred):
    return (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)
model = Model(inputs=image, outputs=output)
alpha = 0.2
model.compile(loss=my_loss, ...)
EDIT:
If your alpha is not some global constant, you can have a "loss function factory":
def make_my_loss(alpha):
    def my_loss(y_true, y_pred):
        return (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)
    return my_loss
model = Model(inputs=image, outputs=output)
alpha = 0.2
my_loss = make_my_loss(alpha)
model.compile(loss=my_loss, ...)
Yes, define your own custom loss function and pass that to the loss argument upon compiling:
def custom_loss(y_true, y_pred):
    return (1 - alpha) * K.mean(K.square(y_true - y_pred)) + alpha * gse
(Not sure what you mean by gse.) It can be helpful to have a look at how the vanilla losses are implemented in Keras: https://github.com/keras-team/keras/blob/master/keras/losses.py
The loss function should be one function; you are giving your model a list of two functions. Try:
def mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

model.compile(loss=lambda y_true, y_pred: mse(y_true, y_pred) * (1 - alpha) + gse(y_true, y_pred) * alpha,
              ...)
This answer doesn't particularly address the original question, but I'm writing it because the same error occurs when trying to load a Keras model that has a custom loss using keras.models.load_model, and it hasn't been properly answered anywhere. Specifically, following the VAE example code in the Keras GitHub repository, this error occurs when loading the VAE model after it has been saved with model.save.
The solution is to save only the weights using vae.save_weights('file.h5') instead of saving the full model. However, you then have to build and compile the model again before loading the weights with vae.load_weights('file.h5').
Following is an example implementation.
class VAE():
    def build_model(self):  # latent_dim and intermediate_dim can be passed as arguments
        def sampling(args):
            """Reparameterization trick by sampling from an isotropic unit Gaussian.
            # Arguments
                args (tensor): mean and log of variance of Q(z|X)
            # Returns
                z (tensor): sampled latent vector
            """
            z_mean, z_log_var = args
            batch = K.shape(z_mean)[0]
            dim = K.int_shape(z_mean)[1]
            # by default, random_normal has mean = 0 and std = 1.0
            epsilon = K.random_normal(shape=(batch, dim))
            return z_mean + K.exp(0.5 * z_log_var) * epsilon

        # original_dim = self.no_features
        # intermediate_dim = 256
        latent_dim = 8

        inputs = Input(shape=(self.no_features,))
        x = Dense(256, activation='relu')(inputs)
        x = Dense(128, activation='relu')(x)
        x = Dense(64, activation='relu')(x)
        z_mean = Dense(latent_dim, name='z_mean')(x)
        z_log_var = Dense(latent_dim, name='z_log_var')(x)
        # use reparameterization trick to push the sampling out as input
        # note that "output_shape" isn't necessary with the TensorFlow backend
        z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
        # instantiate encoder model
        encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

        # build decoder model
        latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
        x = Dense(32, activation='relu')(latent_inputs)
        x = Dense(48, activation='relu')(x)
        x = Dense(64, activation='relu')(x)
        outputs = Dense(self.no_features, activation='linear')(x)
        # instantiate decoder model
        decoder = Model(latent_inputs, outputs, name='decoder')

        # instantiate VAE model
        outputs = decoder(encoder(inputs)[2])
        VAE = Model(inputs, outputs, name='vae_mlp')

        reconstruction_loss = mse(inputs, outputs)
        reconstruction_loss *= self.no_features
        kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
        kl_loss = K.sum(kl_loss, axis=-1)
        kl_loss *= -0.5
        vae_loss = K.mean(reconstruction_loss + kl_loss)
        VAE.add_loss(vae_loss)
        VAE.compile(optimizer='adam')
        return VAE
Now,
vae_cls = VAE()
vae = vae_cls.build_model()
# vae.fit()
vae.save_weights('file.h5')
Load model and predict (if in a different script, you need to import the VAE class),
vae_cls = VAE()
vae = vae_cls.build_model()
vae.load_weights('file.h5')
# vae.predict()
Finally, the difference: [ref]
Keras model.save saves:
Model weights
Model architecture
Model compilation details (loss function(s) and metrics)
Model optimizer and regularizer states
Keras model.save_weights saves only the model weights. Keras model.to_json() saves the model architecture.
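As an aside, since model.to_json() stores only the architecture, that route can also sidestep the custom-loss problem at load time. A small sketch (with a stand-in model, assuming only standard layers so the JSON round-trips cleanly):

from tensorflow import keras

# tiny stand-in model; in practice this would be the VAE built above
model = keras.Sequential([keras.layers.Dense(4, input_shape=(8,))])

arch_json = model.to_json()        # architecture only
model.save_weights('weights.h5')   # weights only

restored = keras.models.model_from_json(arch_json)
restored.load_weights('weights.h5')
# compile again (restored.compile(...)) if further training is needed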
Hope this helps someone experimenting with variational autoencoders.
Combine MAE and RMSE together:
import tensorflow as tf
from tensorflow import keras
def loss_fn_mae_rmse(y_true, y_pred, alpha=0.8):
    mae = keras.losses.MeanAbsoluteError()
    mse = keras.losses.MeanSquaredError()
    return alpha * mae(y_true, y_pred) + (1 - alpha) * tf.sqrt(mse(y_true, y_pred))
model = keras.Model(inputs=..., outputs=...)
opt = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(optimizer=opt, loss=loss_fn_mae_rmse, metrics=['mae'])
At the same time, if you want to load this model after it has been trained and saved to disk:
model = keras.models.load_model('path/to/model.h5', custom_objects={'loss_fn_mae_rmse': loss_fn_mae_rmse})