How to store a variable from a loss function in an instance variable - Python

I am using Keras with TensorFlow.
Since I want to create an LSTM-CRF model, I defined my own loss function using tf.contrib.crf.crf_log_likelihood:
def loss(self, y_true, y_pred):
    sequence_lengths = ...  # calc from y_true
    log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
        y_pred, y_true, sequence_lengths)
    loss = tf.reduce_mean(-log_likelihood)
    self.transition_params = transition_params
    return loss
As you know, a CRF needs the transition params at prediction time, so I stored transition_params in an instance variable, self.transition_params.
The problem is that self.transition_params is never updated during minibatch training. From what I observe, it seems to be set only once, when the model is compiled.
Is there any way to store a variable from a loss function in an instance variable in Keras?

The problem is in how you call tf.contrib.crf.crf_log_likelihood: you need to pass your current transition params through its transition_params argument, so that the same variable is reused and updated instead of a new one being created. The following change fixes it:
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
    y_pred, y_true, sequence_lengths,
    transition_params=self.transition_params)
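For context, a minimal sketch of how the loss might hold on to a single, persistent transition matrix. The wrapper class and num_tags are hypothetical, not from the question, and this assumes the TF 1.x tf.contrib API used above:

import tensorflow as tf

class CRFLossWrapper(object):
    """Hypothetical sketch: create the transition matrix once and reuse it,
    so the optimizer keeps updating the same variable every minibatch."""

    def __init__(self, num_tags):
        self.transition_params = tf.get_variable(
            "transitions", shape=[num_tags, num_tags])

    def loss(self, y_true, y_pred):
        sequence_lengths = ...  # derived from y_true, as in the question
        log_likelihood, self.transition_params = tf.contrib.crf.crf_log_likelihood(
            y_pred, y_true, sequence_lengths,
            transition_params=self.transition_params)
        return tf.reduce_mean(-log_likelihood)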

Related

Error in TF variable singleton creation when updating a dynamic model

Following the Progressive GANs paper (https://arxiv.org/abs/1710.10196), I implemented a keras.Model that needs to grow in size (layers). I first initialize the full model, but when making an inference I only use part of the model, with the same trainable_variables, e.g. 4x4 then 8x8. As a result, the trainable_variables passed to train_step, which is decorated with tf.function, differ over time. This works properly for computing gradients etc., but not for optimizer.apply_gradients.
The code looks something like this:
strategy = tf.distribute.MirroredStrategy()
G = Generator()
with strategy.scope():
    Opt = keras.optimizers.Adam()
    G.initialize_model()  # initialize full model

@tf.function
def train_step(optimizer, model, var_to_train):
    with tf.GradientTape() as tape:
        Loss = loss(model(datasets))
    grads = tape.gradient(Loss, var_to_train)
    optimizer.apply_gradients(zip(grads, var_to_train))  # this will raise ValueError

res = 4  # resolution of 4x4
for ep in range(epochs):
    if ep % 100 == 0:
        res = res * 2
    cur_model = G.forward(output_shape=(res, res, 3))  # output for the given image resolution
    var = cur_model.trainable_variables  # these variables grow as the model grows
    strategy.run(train_step, args=(Opt, cur_model, var))
Note, however, that this works fine when train_step is not wrapped in tf.function or run under MirroredStrategy. The last section of [1] does not seem to solve the problem. I also tried tf.distribute.ReplicaContext.all_reduce and equivalent methods for collecting local results from all replicas, but that does not work either, since the trainable_variables are created inside strategy.scope(), so every update must happen in a replica context.
The only naive solution I can think of is to train, say, the 4x4 model and save it, then use transfer learning to load it back into the 8x8 model.
I want to use a usual Keras optimizer that supports a dynamic set of trainable_variables passed through a tf.function context.
[1]: https://www.tensorflow.org/guide/function#creating_tfvariables:~:text=shape%3D()%2C%20dtype%3Dfloat32)
First call optimizer._create_all_weights(var), where the argument var is the full model's variable list. This makes the optimizer create all of its variables up front, and it must be done at the beginning, before making any updates. When applying gradients later it won't create them again, yet you can still pass in a subset of the variables. This works in the context of tf.function too.
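In terms of the question's code, that call would go right after the full model is initialized inside the strategy scope. A rough sketch, assuming G exposes trainable_variables for the full model (note that _create_all_weights is a private method of the tf.keras OptimizerV2 classes, so this leans on implementation details):

strategy = tf.distribute.MirroredStrategy()
G = Generator()
with strategy.scope():
    Opt = keras.optimizers.Adam()
    G.initialize_model()  # initialize full model
    # Create every optimizer slot (e.g. Adam's m and v) for the full variable
    # set up front, so later train steps that pass only a subset never try to
    # create variables inside the tf.function / replica context.
    Opt._create_all_weights(G.trainable_variables)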

How to make a custom loss function which uses the model in Keras

I'm trying to make a custom loss function for a Keras NN model.
Normally, loss functions take y_prediction and y_true as arguments.
But I need to use the model inside the custom loss function, like
y_prediction = model(X_train), in order to use tf.GradientTape.
So what I want to know is how to use the latest model (while fit is running) in the custom loss function.
If you have an idea about this, please tell me.
(Sorry for my bad English)
You can create a model class and implement the train_step method:
class YourModel(Model):
    def __init__(self):
        super(YourModel, self).__init__()
        # define your model architecture here as attributes of the class

    def train_step(self, data):
        with tf.GradientTape() as tape:
            ...  # forward pass the data through the architecture
            ...  # compute the loss (y_true, y_pred, any other param)
        # weight update
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        return {
            'loss': loss
            # other losses
        }

    def call(self, x):
        # your forward pass implementation
        return  # output
More information can be found here: https://www.tensorflow.org/tutorials/quickstart/advanced
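To make the skeleton above concrete, here is a minimal self-contained sketch; the single Dense layer and the squared-error loss are placeholders standing in for your real architecture and custom loss:

import tensorflow as tf
from tensorflow.keras import Model, layers

class YourModel(Model):
    def __init__(self):
        super(YourModel, self).__init__()
        self.dense = layers.Dense(1)  # placeholder architecture

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y_true = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # output of the latest model state
            loss = tf.reduce_mean(tf.square(y_true - y_pred))  # your custom loss here
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        return {'loss': loss}

model = YourModel()
model.compile(optimizer='adam')
# model.fit(x_train, y_train) now drives train_step on every batch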

How do I change the output from a tensor by updating a variable?

I have a loss function which is dependent on some variable; I then want to change that variable and get the updated loss. The loss is given to me as an input to a function, so I can't simply move that line under tf.control_dependencies (which would give me what I want). So how do I go about updating the variable and the loss afterwards?
X = tf.Variable(1.0)           # Dummy parameters of a neural network
loss = (X + 1.0)**2            # Dummy loss function given to me as input
add = tf.assign(X, X + 1.0)    # Me changing the parameters of the network
with tf.control_dependencies([add]):
    updated_loss = loss        # Me wanting the updated loss
print(K.eval(updated_loss))    # Me not getting the updated loss :(
When you create the loss variable you are only giving it an initial value; this doesn't make it a function that will reevaluate the loss every time it is called.
We can instead make a loss function that is called to recalculate the loss each time.
Edit:
I changed the code below to reassign a value to the loss TensorFlow variable rather than returning a new variable.
import tensorflow as tf
import tensorflow.keras.backend as K

# Function that updates the loss
@tf.function
def update_loss(X, loss):
    loss.assign((X + 1.0)**2)

def test():
    # Initialise a value of loss
    loss = tf.Variable(0.0)
    print(K.eval(loss))

    # Initialise variable
    X = tf.Variable(2.0)

    # Update loss
    update_loss(X, loss)
    print(K.eval(loss))

    # Change network by adding 1.0
    X.assign_add(1.0)

    # Update loss
    update_loss(X, loss)
    print(K.eval(loss))

Accessing part of y_pred in customized loss function for calculating loss

I want to develop a neural network with three inputs pos, anc, neg and three outputs pos_out, anc_out, neg_out. While calculating the loss in my customized loss function in Keras, I want to access pos_out, anc_out and neg_out inside y_pred. I can access y_pred as a whole, but how do I access the individual parts pos_out, anc_out and neg_out?
I have applied the max function to y_pred and it calculates the max value correctly. If I pass only one output to Model, as Model(input=[pos,anc,neg], output=pos_out), it also calculates the max value correctly. But when it comes to accessing the max values from pos_out, anc_out and neg_out separately in the customized function, it does not work.
def testmodel(input_shape):
    pos = Input(shape=(14, 300))
    anc = Input(shape=(14, 300))
    neg = Input(shape=(14, 300))

    model = Sequential()
    model.add(Flatten(batch_input_shape=(1, 14, 300)))

    pos_out = model(pos)
    anc_out = model(anc)
    neg_out = model(neg)

    model = Model(input=[pos, anc, neg], output=[pos_out, anc_out, neg_out])
    return model

def customloss(y_true, y_pred):
    print((K.int_shape(y_pred)[1]))
    #loss = K.max((y_pred))
    loss = K.max[pos_out]
    return loss
You can create a loss function that contains a closure that lets you access the model and thus the targets and the model layer outputs.
class ExampleCustomLoss(object):
    """ The loss function can access model.inputs, model.targets and the outputs
    of specific layers. These are all tensors and will have the expected results
    for the batch.
    """
    def __init__(self, model):
        self.model = model

    def loss(self, y_true, y_pred, **kwargs):
        ...
        return loss

model = Model(..., ...)
loss_calculator = ExampleCustomLoss(model)
model.compile('adam', loss_calculator.loss)
However, it may be simpler to do the inverse, i.e. have a single model output:
out = Concatenate(axis=1)([pos_out, anc_out, neg_out])
And then in the loss function slice y_true and y_pred.
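As an illustration, a sketch of such a slicing loss, assuming each flattened branch output has length 14 * 300 = 4200 and using an arbitrary triplet-style margin of 1.0:

def customloss(y_true, y_pred):
    n = 14 * 300  # length of each flattened branch output
    pos_out = y_pred[:, :n]
    anc_out = y_pred[:, n:2 * n]
    neg_out = y_pred[:, 2 * n:]
    # Example triplet-style objective built from the three slices.
    loss = K.maximum(0.0,
                     K.sum(K.square(anc_out - pos_out), axis=-1)
                     - K.sum(K.square(anc_out - neg_out), axis=-1) + 1.0)
    return K.mean(loss)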
From the names of variables, it looks as if you are trying to use a triplet loss. You may find this other question useful:
How to deal with triplet loss when at time of input i have only two files i.e. at time of testing
Your loss function gets two arguments, the model output and the true label; the model output will have the shape you define when you define the net. Your loss function needs to output a single value measuring the difference between your model's output and the true label during training.
Also please add some trainable layers to your model, because your custom loss function will be useless otherwise.

Keras custom metrics with more than two inputs

I have a VAE model that I've broken down into the encoder and decoder parts, and implemented a custom loss. A simplified example is below:
input = Input(shape=(self.image_height, self.image_width, self.image_channel))
encoded = build_encoder(input)
decoded = build_decoder(encoded)
model = Model(input, decoded)
The loss (simplified) is
loss = K.mean(decoded[0] + decoded[1] + encoded[0]**2)
model.add_loss(loss)
model.compile(optimizer=self.optimizer)
My main problem is that I want to use Keras' ModelCheckpoint callback, which would then require me to set custom metrics. However, everything I have seen online is similar to https://keras.io/metrics/#custom_metrics. That approach only takes y_true and y_pred and computes the monitored metric from them. How would I implement it for my example model, where the loss is calculated from multiple tensors, not only the final output "decoded"?
Well, apparently you can still use the variables (Keras layer outputs) without passing them into the custom loss function.
So for my example, the loss can be calculated as:
def custom_loss(y_true, y_pred):
    return K.mean(decoded[0] + decoded[1] + encoded[0]**2)

model.compile(optimizer=self.optimizer, loss=custom_loss)
y_true and y_pred are never used, but the actually required tensors can still be referenced (as long as they are in the same scope as the custom loss function, of course).
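For example, here is a sketch of wiring that same closure-style function into ModelCheckpoint as a metric; x_train and x_val are placeholders for the image data, and since Keras names the metric after the function, the monitored key with validation data becomes 'val_custom_loss':

from tensorflow.keras.callbacks import ModelCheckpoint

# Reuse the closure-style function as a metric so ModelCheckpoint can monitor it.
model.compile(optimizer=self.optimizer, loss=custom_loss, metrics=[custom_loss])
checkpoint = ModelCheckpoint('vae_best.h5',
                             monitor='val_custom_loss',
                             save_best_only=True)
model.fit(x_train, x_train,
          validation_data=(x_val, x_val),
          callbacks=[checkpoint])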
