Keras - Implementation of custom loss function with multiple outputs - python

I am trying to replicate (a much smaller version of) the AlphaGo Zero system. However, in the network model, I am having a problem. The loss function I am supposed to implement is the following:
loss = (z - v)^2 - pi^T * log(p) + c * ||theta||^2
Where:
z is the label (a real value between -1 and 1) of one of the two heads of the network, and v is the value predicted by the network.
pi is the target probability distribution over all actions, and p is the probability distribution over all actions predicted by the network.
c is the L2 regularization parameter.
I pass to the network a list of channels (representing the game state) and an array (the same size as pi and p) indicating which actions are valid (1 if valid, 0 otherwise).
As you can see, the loss function uses both the targets and the network predictions. But after extensive searching, it seems a custom loss function can only receive y_true and y_pred as parameters, even though I have two "y_true"s and two "y_pred"s. I have tried using indexing to get those values, but I'm pretty sure it is not working.
The modeling of the network and the custom loss function is in the code below:
def custom_loss(y_true, y_pred):
    # I am pretty sure this does not work
    output_prob_dist = y_pred[0]
    output_value = y_pred[1]
    label_prob_dist = y_true[0]
    label_value = y_true[1]

    mse_loss = K.mean(K.square(label_value - output_value), axis=-1)
    cross_entropy_loss = K.dot(K.transpose(label_prob_dist), output_prob_dist)
    return mse_loss - cross_entropy_loss
def define_model():
    """Neural Network model implementation using Keras + Tensorflow."""
    state_channels = Input(shape=(5, 5, 6), name='States_Channels_Input')
    valid_actions_dist = Input(shape=(32,), name='Valid_Actions_Input')

    conv = Conv2D(filters=10, kernel_size=2, kernel_regularizer=regularizers.l2(0.0001), activation='relu', name='Conv_Layer')(state_channels)
    pool = MaxPooling2D(pool_size=(2, 2), name='Pooling_Layer')(conv)
    flat = Flatten(name='Flatten_Layer')(pool)

    # Merge of the flattened channels (after pooling) and the valid action
    # distribution. Used only as input in the probability distribution head.
    merge = concatenate([flat, valid_actions_dist])

    # Probability distribution over actions
    hidden_fc_prob_dist_1 = Dense(100, kernel_regularizer=regularizers.l2(0.0001), activation='relu', name='FC_Prob_1')(merge)
    hidden_fc_prob_dist_2 = Dense(100, kernel_regularizer=regularizers.l2(0.0001), activation='relu', name='FC_Prob_2')(hidden_fc_prob_dist_1)
    output_prob_dist = Dense(32, kernel_regularizer=regularizers.l2(0.0001), activation='softmax', name='Output_Dist')(hidden_fc_prob_dist_2)

    # Value of a state
    hidden_fc_value_1 = Dense(100, kernel_regularizer=regularizers.l2(0.0001), activation='relu', name='FC_Value_1')(flat)
    hidden_fc_value_2 = Dense(100, kernel_regularizer=regularizers.l2(0.0001), activation='relu', name='FC_Value_2')(hidden_fc_value_1)
    output_value = Dense(1, kernel_regularizer=regularizers.l2(0.0001), activation='tanh', name='Output_Value')(hidden_fc_value_2)

    model = Model(inputs=[state_channels, valid_actions_dist], outputs=[output_prob_dist, output_value])
    model.compile(loss=custom_loss, optimizer='adam', metrics=['accuracy'])
    return model


# In the main method
model = define_model()
# ...
# MCTS routine to collect the data for the network input
# ...
x_train = [channels_input, valid_actions_dist_input]
y_train = [dist_probs_label, who_won_label]
model.fit(x_train, y_train, epochs=10)
In short, my question is: how do I correctly implement a custom loss function that uses both of the network's outputs and both of the label values?

I checked their Git repository and there is a lot going on. As shown in the equation, the final loss is the combination of three different losses, and the network minimizes this combined loss. Their loss code is below:
# train ops
policy_cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(
        logits=logits, labels=tf.stop_gradient(labels['pi_tensor'])))

value_cost = params['value_cost_weight'] * tf.reduce_mean(
    tf.square(value_output - labels['value_tensor']))

reg_vars = [v for v in tf.trainable_variables()
            if 'bias' not in v.name and 'beta' not in v.name]
l2_cost = params['l2_strength'] * \
    tf.add_n([tf.nn.l2_loss(v) for v in reg_vars])

combined_cost = policy_cost + value_cost + l2_cost
You can refer to this and make your changes accordingly.
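In Keras you can also get a combined objective like this without writing one loss that sees both heads: give each output its own loss at compile time and Keras will sum them (optionally weighted), while the l2 kernel_regularizers you already attached provide the c * ||theta||^2 term. A minimal sketch, assuming the model and the output layer names ('Output_Dist', 'Output_Value') defined in the question above:
# Hedged sketch: per-output losses that Keras sums into one training objective.
model.compile(
    optimizer='adam',
    loss={'Output_Dist': 'categorical_crossentropy',  # corresponds to -pi^T * log(p)
          'Output_Value': 'mse'},                      # corresponds to (z - v)^2
    loss_weights={'Output_Dist': 1.0, 'Output_Value': 1.0})

# Targets are then passed per output, keyed by the output layer names:
model.fit(x_train,
          {'Output_Dist': dist_probs_label, 'Output_Value': who_won_label},
          epochs=10)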

Related

Keras. Siamese network and triplet loss

I want to build a network that should be able to verify images (e.g. human faces). As I understand it, the best solution for that is a Siamese network with a triplet loss. I didn't find any ready-made implementations, so I decided to create my own.
But I have a question about Keras. For example, here's the structure of the network (one embedding sub-network applied to an anchor, a positive and a negative input), and the code is something like this:
embedding = Sequential([
    Flatten(),
    Dense(1024, activation='relu'),
    Dense(64),
    Lambda(lambda x: K.l2_normalize(x, axis=-1))
])

input_a = Input(shape=shape, name='anchor')
input_p = Input(shape=shape, name='positive')
input_n = Input(shape=shape, name='negative')

emb_a = embedding(input_a)
emb_p = embedding(input_p)
emb_n = embedding(input_n)

out = Concatenate()([emb_a, emb_p, emb_n])

model = Model([input_a, input_p, input_n], out)
model.compile(optimizer='adam', loss=<triplet_loss>)
I defined only one embedding model. Does this mean that once the model starts training, the weights will be the same (shared) for each input?
If so, how can I extract the embedding weights from the model?
Yes, with a triplet loss the weights should be shared across all three branches, i.e. Anchor, Positive and Negative.
In TensorFlow 1.x you could achieve weight sharing by using reuse=True in tf.layers.
In TensorFlow 2.x, however, tf.layers has been moved to tf.keras.layers and the reuse functionality has been removed.
To achieve weight sharing you can write a custom layer that takes a parent layer and reuses its weights.
Below is a sample example of how to do that.
class SharedConv(tf.keras.layers.Layer):
    def __init__(
        self,
        filters,
        kernel_size,
        strides=None,
        padding=None,
        dilation_rates=None,
        activation=None,
        use_bias=True,
        **kwargs
    ):
        self.filters = filters
        self.kernel_size = kernel_size
        self.strides = strides
        self.padding = padding
        self.dilation_rates = dilation_rates
        self.activation = activation
        self.use_bias = use_bias
        super().__init__(**kwargs)

    def build(self, input_shape):
        self.conv = Conv2D(
            self.filters,
            self.kernel_size,
            padding=self.padding,
            dilation_rate=self.dilation_rates[0]
        )
        self.act1 = Activation(self.activation)
        self.act2 = Activation(self.activation)

    def call(self, inputs, **kwargs):
        # First path: the Keras Conv2D layer owns the weights.
        x1 = self.conv(inputs)
        x1 = self.act1(x1)
        # Second path: reuse the same kernel (and bias) via tf.nn.conv2d.
        # Note that tf.nn.conv2d expects upper-case padding ('SAME'/'VALID').
        x2 = tf.nn.conv2d(
            inputs,
            self.conv.weights[0],
            padding=self.padding.upper(),
            strides=self.strides,
            dilations=self.dilation_rates[1]
        )
        if self.use_bias:
            x2 = x2 + self.conv.weights[1]
        x2 = self.act2(x2)
        return x1, x2
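For reference, a hypothetical usage of the layer above (the parameter values are my own illustration, not part of the original answer); both returned tensors are produced from the same kernel and bias:
# Assumes: import tensorflow as tf; from tensorflow.keras.layers import Conv2D, Activation
shared = SharedConv(filters=32, kernel_size=3, strides=1, padding='same',
                    dilation_rates=[1, 2], activation='relu')

x = tf.random.normal((4, 64, 64, 3))
y1, y2 = shared(x)           # two outputs computed from one shared set of conv weights
print(len(shared.weights))   # 2: the kernel and bias of the single Conv2D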
I will answer how to extract the embeddings (adapted from my GitHub post):
My trained siamese model looked like this:
siamese_model.summary()
Note that my newly redefined model is basically the same as the embedding sub-network shown inside that summary.
I then redefined the model I wanted to use for extracting embeddings (it should be the same model you defined, except it will not have the multiple inputs of the siamese model), which looked like this:
siamese_embeddings_model = build_siamese_model(input_shape)
siamese_embeddings_model.summary()
Then I just extracted the weights from my trained siamese model and set them into my new model:
embeddings_weights = siamese_model.layers[-3].get_weights()
siamese_embeddings_model.set_weights(embeddings_weights)
Then you can supply a new image to the new model to extract the embeddings:
vector = siamese_embeddings_model.predict(image)
len(vector[0]) will print 150 because of my final dense layer (which produces the output vector).
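A possibly simpler alternative (my assumption, and it only applies if the shared embedding was built as a nested model, e.g. the Sequential from the question, and given a name): the nested model shows up as a single layer inside the trained siamese model, so you can pull it out directly instead of copying weights by hand.
# 'embedding' is an assumed name - check siamese_model.summary() for the real one.
embedding_model = siamese_model.get_layer('embedding')
vector = embedding_model.predict(image)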

Can I use multiple "actual" and "predicted" outputs in a single loss functions?

I am using a multi-output model in TensorFlow 2.0:
input_layer = layers.Input(shape=(INP_MAX_LENGTH,), name="input")
embed_layer = layers.Embedding(EMBED_INP_DIM, embedding_size, name="embeddings", weights=[embedding_matrix], trainable=False, input_length=INP_MAX_LENGTH)(input_layer)
lstm_layer1 = layers.LSTM(1024, name="lstm1")(embed_layer)
lstm_layer2 = layers.LSTM(1024, name="lstm2")(embed_layer)
output_layer1 = layers.Dense(1, name="output1", activation='relu')(lstm_layer1)
output_layer2 = layers.Dense(1, name="output2", activation='relu')(lstm_layer2)
concat_layer = layers.Concatenate()([output_layer1, output_layer2])
output_layer3 = layers.Dense(1, name="output3", activation='relu')(concat_layer)

model = Model(inputs=input_layer, outputs=[output_layer1, output_layer2, output_layer3])
model.compile(optimizer='adagrad', loss={'output1': loss1, 'output2': loss1, 'output3': loss2})
I'm using a quantile loss function as my loss1 and it is working fine.
I want my loss2 function to behave something like this:
import tensorflow.keras.backend as K

def loss2(y_true, y_pred):
    # y_true1 = y from output1
    # y_true2 = y from output2
    # y_pred1 = y from output_layer1
    # y_pred2 = y from output_layer2
    # loss = K.mean(K.sqrt(K.square(y_true2 - y_pred2) + K.square(y_true1 - y_pred1)), axis=-1)
    return loss
In short, I'm trying to implement the distance formula as my loss function and minimize the distance between two points to 0.
Can I pass y_true and y_pred from output1 and output2 to the loss2 function? I tried using Concatenate to at least pass the y_preds, but somehow it's not working.
Yes, when you have multiple output nodes you can have loss functions that take in multiple predictions.
The basic idea is to always treat your output as a vector: when you pass it from your generator function it should be a list, and when it is passed to the loss function it should remain a list.
Rather than having different loss functions for each output node, you should ideally have one loss function for all the outputs combined.
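A minimal sketch of that idea, reusing the layer names from the question above (so output_layer1, output_layer2 and input_layer are assumed to exist): concatenate the two predictions into a single output, feed the two targets as one two-column array, and slice both inside one loss function.
import tensorflow.keras.backend as K
from tensorflow.keras import layers, Model

combined_out = layers.Concatenate(name='combined')([output_layer1, output_layer2])
model = Model(inputs=input_layer, outputs=combined_out)

def distance_loss(y_true, y_pred):
    # Column 0 holds target/prediction 1, column 1 holds target/prediction 2.
    y_true1, y_true2 = y_true[:, 0], y_true[:, 1]
    y_pred1, y_pred2 = y_pred[:, 0], y_pred[:, 1]
    # Euclidean distance between the predicted and the true point.
    return K.mean(K.sqrt(K.square(y_true1 - y_pred1) + K.square(y_true2 - y_pred2)))

model.compile(optimizer='adagrad', loss=distance_loss)
# model.fit(x, np.stack([y1, y2], axis=-1), ...)  # targets stacked into shape (n, 2)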

Keras model graph is disconnected when trying to use a shared model

I'm trying to train a neural network in Keras but I'm getting an error that there are no gradients for any variable, which may imply that the graph is disconnected.
I'm copying here a stripped down version of the code with only the bit related to the model definition.
The model accepts two inputs that will be fed, one at time, to the same shared model: the encoder.
The two outputs of the encoder are then concatenated and sent to a dense layer to compute the final output.
I don't get what's wrong; it looks like when instantiating the encoder I'm creating additional trainable variables that are not used anywhere.
For the network layout I took inspiration from the official Keras docs:
https://keras.io/guides/functional_api/#all-models-are-callable-just-like-layers
def _get_encoder(self, model_input_shape):
    encoder_input = Input(shape=model_input_shape)
    x = encoder_input
    x = Conv2D(32, (3, 3), strides=1, padding="same")(x)
    x = BatchNormalization(axis=-1)(x)
    x = LeakyReLU(alpha=0.1)(x)
    latent_z = Flatten()(x)
    latent_z = Dense(self.latent_dim)(latent_z)
    encoder = Model(
        encoder_input,
        latent_z,
        name='encoder'
    )
    return encoder

def build_model(self):
    model_input_shape = (self.height, self.width, self.depth)
    model_input_1 = Input(shape=model_input_shape)
    model_input_2 = Input(shape=model_input_shape)

    self.encoder = self._get_encoder(model_input_shape)

    z_1 = self.encoder(model_input_1)
    z_2 = self.encoder(model_input_2)

    x = concatenate([z_1, z_2])
    prediction = Dense(1, activation='sigmoid')(x)

    self.network = Model(
        inputs=[model_input_1, model_input_2],
        outputs=[prediction],
        name='network'
    )

network.network.compile(
    optimizer='rmsprop',
    loss='mse',
    metrics=['mae'])

H = network.network.fit(
    x=train_gen,
    validation_data=test_gen,
    epochs=EPOCHS,
    steps_per_epoch=STEPS,
    validation_steps=STEPS)
I found the problem. My custom data generator was returning a list [x, y] instead of a tuple (x, y), where x is the input and y the target. A simple mistake that was causing seemingly unrelated errors.
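For reference, a minimal sketch of what the generator's yield should look like for this two-input model (the names are illustrative, not the original generator):
import numpy as np

def pair_generator(images_1, images_2, labels, batch_size=32):
    # Yields (inputs, target) as a tuple, which is what Keras expects.
    n = len(labels)
    while True:
        idx = np.random.randint(0, n, batch_size)
        x = [images_1[idx], images_2[idx]]  # a list is fine inside x (two inputs)
        y = labels[idx]
        yield (x, y)                        # but (x, y) itself must be a tuple, not a list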

Feature learning with triplet loss after 1-2 epochs yields 100% val accuracy?

My NN has to learn image similarity with a custom triplet loss. The positive image is similar to the anchor, while the negative is not.
My task is to predict whether the second or the third image of an unseen triplet is more similar to the anchor.
The triplets are given for both train and test sets in the task, so I did not have to mine them or randomly generate them: they are fixed in my task.
---> Idea: To improve my model, I tried feature learning with the Xception layers frozen and a Dense layer added on top.
Problem:
When training the below model with Xception layers frozen, after 1-2 epochs it learns to just set all positive images to a very low distance to the anchor and all negative images to a very high distance. Hence, the 100% val accuracy.
I immediately thought of overfitting, but I only train one fully connected layer. How can I combat this? Or is my triplet loss somehow wrongly defined?
I don't use data augmentation, so could that potentially help?
Somehow this happens only when using a pretrained model. When I use a simple model I get realistic accuracy...
What am I missing here?
My triplet loss:
def triplet_loss(y_true, y_pred, alpha=0.4):
    """
    Implementation of the triplet loss function.

    Arguments:
    y_true -- true labels, required when you define a loss in Keras,
              you don't need it in this function.
    y_pred -- python list containing three objects:
        anchor   -- the encodings for the anchor data
        positive -- the encodings for the positive data (similar to anchor)
        negative -- the encodings for the negative data (different from anchor)

    Returns:
    loss -- real number, value of the loss
    """
    total_length = y_pred.shape.as_list()[-1]
    anchor = y_pred[:, 0:int(total_length * 1 / 3)]
    positive = y_pred[:, int(total_length * 1 / 3):int(total_length * 2 / 3)]
    negative = y_pred[:, int(total_length * 2 / 3):int(total_length * 3 / 3)]

    # distance between the anchor and the positive
    pos_dist = K.sum(K.square(anchor - positive), axis=1)
    # distance between the anchor and the negative
    neg_dist = K.sum(K.square(anchor - negative), axis=1)

    # compute loss
    basic_loss = pos_dist - neg_dist + alpha
    loss = K.maximum(basic_loss, 0.0)
    return loss
Then my model:
def baseline_model():
    input_1 = Input(shape=(256, 256, 3))
    input_2 = Input(shape=(256, 256, 3))
    input_3 = Input(shape=(256, 256, 3))

    pretrained_model = Xception(include_top=False, weights="imagenet")
    for layer in pretrained_model.layers:
        layer.trainable = False

    x1 = pretrained_model(input_1)
    x2 = pretrained_model(input_2)
    x3 = pretrained_model(input_3)

    x1 = Flatten(name='flatten1')(x1)
    x2 = Flatten(name='flatten2')(x2)
    x3 = Flatten(name='flatten3')(x3)

    x1 = Dense(128, activation=None, kernel_regularizer=l2(0.01))(x1)
    x2 = Dense(128, activation=None, kernel_regularizer=l2(0.01))(x2)
    x3 = Dense(128, activation=None, kernel_regularizer=l2(0.01))(x3)

    x1 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x1)
    x2 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x2)
    x3 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x3)

    concat_vector = concatenate([x1, x2, x3], axis=-1, name='concat')

    model = Model([input_1, input_2, input_3], concat_vector)
    model.compile(loss=triplet_loss, optimizer=Adam(0.00001), metrics=[accuracy])
    model.summary()
    return model
Fitting my model:
model.fit(
    gen(X_train, batch_size=batch_size),
    steps_per_epoch=13281 // batch_size,
    epochs=10,
    validation_data=gen(X_val, batch_size=batch_size),
    validation_steps=1666 // batch_size,
    verbose=1,
    callbacks=callbacks_list
)
model.save_weights('try_6.h5')
Please note that you use a different Dense layer for each input (you define 3 different Dense layers; each time you create a new Dense object it generates a new layer, with new parameters, independent of the previous layers you created). If the input order is consistent, meaning input 1 is always the anchor, input 2 is always the positive, and input 3 is always the negative, it will be super easy for the model to overfit. What you should probably do is use only a single Dense layer for all 3 inputs.
For example, based on your code you can define the model like this:
pretrained_model = Xception(include_top=False, weights="imagenet")
for layer in pretrained_model.layers:
layer.trainable = False
general_input = Input(shape=(256, 256, 3))
x = pretrained_model(general_input)
x = Flatten()(x)
x = Dense(128, activation=None,kernel_regularizer=l2(0.01))(x)
base_model = Model([general_input], [x])
input_1 = Input(shape=(256, 256, 3))
input_2 = Input(shape=(256, 256, 3))
input_3 = Input(shape=(256, 256, 3))
x1 = base_model(input_1)
x2 = base_model(input_2)
x3 = base_model(input_3)
# ... continue with your code - normalize, concat, etc.
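The continuation can be filled in from the question's own baseline_model (a sketch, not the answerer's original code; triplet_loss, Lambda, concatenate, Adam and accuracy are assumed to be the ones used in the question):
x1 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x1)
x2 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x2)
x3 = Lambda(lambda x: K.l2_normalize(x, axis=-1))(x3)

concat_vector = concatenate([x1, x2, x3], axis=-1, name='concat')

model = Model([input_1, input_2, input_3], concat_vector)
model.compile(loss=triplet_loss, optimizer=Adam(0.00001), metrics=[accuracy])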

tensorflow model not updating weights

I have a model that is training (it goes through steps and epochs and evaluates losses), but the weights are not updating.
I am trying to train a discriminator that distinguishes whether an image is synthetic or real. It's part of a GAN model I'm trying to build.
The basic structure is as follows:
I have two inputs:
1. image (could be real or synthetic)
2. label (0 for real, 1 for synthetic)
The Source Estimator is where I extract features from images. I had already trained that model and restored the weights and biases. These layers are frozen (not trainable).
def SourceEstimator(eye, name, trainable=True):
    # source estimator and target representer share the same structure.
    # SE is not trainable, while TR is.
    net = tf.layers.conv2d(eye, 32, 3, (1, 1), padding='same', activation=tf.nn.leaky_relu, trainable=trainable, name=name + '_conv2d_1')
    net = tf.layers.conv2d(net, 32, 3, (1, 1), padding='same', activation=tf.nn.leaky_relu, trainable=trainable, name=name + '_conv2d_2')
    net = tf.layers.conv2d(net, 64, 3, (1, 1), padding='same', activation=tf.nn.leaky_relu, trainable=trainable, name=name + '_conv2d_3')
    c3 = net
    net = tf.layers.max_pooling2d(net, 3, (2, 2), padding='same', name=name + '_maxpool_4')
    net = tf.layers.conv2d(net, 80, 3, (1, 1), padding='same', activation=tf.nn.leaky_relu, trainable=trainable, name=name + '_conv2d_5')
    net = tf.layers.conv2d(net, 192, 3, (1, 1), padding='same', activation=tf.nn.leaky_relu, trainable=trainable, name=name + '_conv2d_6')
    c5 = net
    return (c3, c5)
The discriminator is as follows:
def DiscriminatorModel(features, reuse=False):
    with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
        net = tf.layers.conv2d(features, 64, 3, 2, padding='same', kernel_initializer='truncated_normal', activation=tf.nn.leaky_relu, trainable=True, name='discriminator_c1')
        net = tf.layers.conv2d(net, 128, 3, 2, padding='same', kernel_initializer='truncated_normal', activation=tf.nn.leaky_relu, trainable=True, name='discriminator_c2')
        net = tf.layers.conv2d(net, 256, 3, 2, padding='same', kernel_initializer='truncated_normal', activation=tf.nn.leaky_relu, trainable=True, name='discriminator_c3')
        net = tf.contrib.layers.flatten(net)
        net = tf.layers.dense(net, units=1, activation=tf.nn.softmax, name='descriminator_out', trainable=True)
        return net
The input goes to the SourceEstimator model, which extracts the features (c3, c5).
Then c3 and c5 are concatenated along the channel axis and passed to the discriminator model.
c3, c5 = CommonModel(self.left_eye, 'el', trainable=False)
c5 = tf.image.resize_images(c5, size=(self.config.img_size, self.config.img_size))
features = tf.concat([c3, c5], axis=3)

##---------------------------------------- DISCRIMINATOR ------------------------------------------##
with tf.variable_scope('discriminator'):
    logit = DiscriminatorModel(features)
Finally, the losses and train ops:
##---------------------------------------- LOSSES ------------------------------------------##
with tf.variable_scope("discriminator_losses"):
    self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=self.label))

##---------------------------------------- TRAIN ------------------------------------------##
# optimizers
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    disc_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    self.disc_op = disc_optimizer.minimize(self.loss, global_step=self.global_step_tensor, name='disc_op')
Train steps and epochs. I'm using a batch size of 32, and a data generator class to get the images each step.
def train_epoch(self):
    num_iter_per_epoch = self.train_data.get_size() // self.config.get('batch_size')
    loop = tqdm(range(num_iter_per_epoch))
    for i in loop:
        dloss = self.train_step(i)
        loop.set_postfix(loss='{:05.3f}'.format(dloss))

def train_step(self, i):
    el, label = self.train_data.get_batch(i)

    ## ------------------- train discriminator -------------------##
    feed_dict = {
        self.model.left_eye: el,
        self.model.label: label
    }
    _, dloss = self.sess.run([self.model.disc_op, self.model.loss], feed_dict=feed_dict)
    return dloss
While the model goes through steps and epochs, the weights remain unchanged.
The loss fluctuates during the training steps, but the loss for every epoch is the same. For example, if I don't shuffle the dataset each epoch, the loss on the graph follows the same pattern each epoch.
I think this means the model computes the different losses, but is not updating the parameters according to them.
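One quick way to confirm that suspicion (a debugging sketch, assuming it is placed inside train_step where self.sess and feed_dict already exist) is to snapshot a discriminator variable around a single step:
import numpy as np

disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
before = self.sess.run(disc_vars[0])
_, dloss = self.sess.run([self.model.disc_op, self.model.loss], feed_dict=feed_dict)
after = self.sess.run(disc_vars[0])
print('weights changed:', not np.allclose(before, after))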
Here are a few other things I tried that did not help:
tried small and large learning rates (0.1 and 1e-8)
tried with the SourceEstimator layers trainable == True
flipped the labels (0 == synthetic, 1 == real)
increased kernel sizes and filter sizes in the discriminator.
I've been stuck on this problem for a while now, I really need some insights. Thanks in advance.
------EDIT 1-----
def initialize_uninitialized(sess):
    global_vars = tf.global_variables()
    is_initialized = sess.run([tf.is_variable_initialized(var) for var in global_vars])
    not_initialized_vars = [v for (v, f) in zip(global_vars, is_initialized) if not f]
    # for var in not_initialized_vars:  # only for testing
    #     print(var.name)
    if len(not_initialized_vars):
        sess.run(tf.variables_initializer(not_initialized_vars))

self.sess = tf.Session()
## in between here I create the data generator and model, and restore the pretrained model.
self.initialize_uninitialized(self.sess)

for current_epoch in range(self.model.current_epoch_tensor.eval(self.sess), self.config.num_epochs, 1):
    self.train_epoch()  # included above
    self.sess.run(self.model.increment_current_epoch_tensor)
I can see that you are running minimize as well as the loss tensor in session.run(). You should only call the minimize() op, i.e. only self.model.disc_op, which will internally evaluate the loss.
Also, I cannot see your session initialization call anywhere. Make sure it is called only once.
Looking at your updated code, I can see that you are equating the tf.is_variable_initialized() call to is_not_initialized. Thus, it is initializing those variables which are already initialized.
I never managed to find out what was wrong with the code.
My colleague suggested trying the same model in a different, isolated environment, so I rewrote the code using the Keras library.
And now it's working. :/
We still don't know what exactly was wrong with the code above - I didn't change anything. I even used the same code for weight transferring and variable initialization.
If anyone ever faces similar problem, I would suggest trying the same model in a different environment.
Or if anyone knows what was wrong with the code above please share!
