Use NumPy with TensorFlow GradientTape - python

I am trying to implement a solution where I use tf.GradientTape together with NumPy operations in the custom loss function.
Can I use NumPy operations in a custom loss?
ValueError: No gradients provided for any variable: ['conv3d_transpose_20/kernel:0', 'conv3d_transpose_20/bias:0', 'batch_normalization_16/gamma:0', 'batch_normalization_16/beta:0', 'conv3d_transpose_21/kernel:0', 'conv3d_transpose_21/bias:0', 'batch_normalization_17/gamma:0', 'batch_normalization_17/beta:0', 'conv3d_transpose_22/kernel:0', 'conv3d_transpose_22/bias:0', 'batch_normalization_18/gamma:0', 'batch_normalization_18/beta:0', 'conv3d_transpose_23/kernel:0', 'conv3d_transpose_23/bias:0', 'batch_normalization_19/gamma:0', 'batch_normalization_19/beta:0', 'conv3d_transpose_24/kernel:0', 'conv3d_transpose_24/bias:0'].
None of the layers receive gradients, so none of them end up being differentiable.
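The tape only records TensorFlow operations, so any np.* call or .numpy() conversion on y_pred detaches the result from the graph and produces exactly this "No gradients provided" error. Below is a minimal sketch of a tape-friendly custom loss and training step; the model, optimizer and batch names are illustrative only, not from the question:

import tensorflow as tf

def custom_loss(y_true, y_pred):
    # use tf.reduce_mean / tf.abs instead of np.mean / np.abs
    return tf.reduce_mean(tf.abs(y_true - y_pred))

def train_step(model, optimizer, x_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)
        loss = custom_loss(y_batch, y_pred)   # must remain a tf.Tensor
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

If a NumPy-only operation is truly unavoidable, it will not be differentiated; it has to be re-expressed with TensorFlow ops (or wrapped in a custom gradient) for the tape to see it.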

Related

Defining a callable "loss" function

I am trying to optimize a loss function (defined using the evidence lower bound) with tf.train.AdamOptimizer.minimize() on TensorFlow 1.15.2 with eager execution enabled. I tried the following:
learning_rate = 0.01
optim = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optim.minimize(loss)
and got the following: RuntimeError: "loss" passed to Optimizer.compute_gradients should be a function when eager execution is enabled.
This works fine if I disable eager execution, but I need to save a TensorFlow variable as a NumPy array, so I need eager execution enabled. The documentation mentions that when eager execution is enabled, the loss must be a callable. So the loss function should be defined in a way that it takes no inputs but returns the loss. I am not exactly sure how to achieve such a thing.
I tried train_op = optim.minimize(lambda: loss) but got ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [] and loss <function <lambda> at 0x7f3c67a93b00>
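In eager mode the optimizer expects a zero-argument callable that recomputes the loss from the current variable values each time it is invoked; lambda: loss only returns an already-computed tensor, so no variables end up being watched. A minimal sketch under that assumption, where model, data and compute_elbo are hypothetical names standing in for the question's own objects:

import tensorflow as tf
tf.enable_eager_execution()  # TensorFlow 1.15

learning_rate = 0.01
optim = tf.train.AdamOptimizer(learning_rate=learning_rate)

def loss_fn():
    # recompute the (negative) ELBO from the current variable values on every call
    return -compute_elbo(model, data)   # hypothetical helper and data

train_op = optim.minimize(loss_fn, var_list=model.trainable_variables)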

Tensorflow variable initialization inside loss function

I have an object detection model implemented in tensorflow.keras (version 1.15). I am trying to implement a modified (hybrid) loss function in my model. Basically, I need a few variables defined in my loss function because I am processing the y_true and y_pred provided to my classification loss function (a focal loss, to be exact). So I naturally resorted to implementing my ops inside the loss function.
I have defined a WrapperClass to initialize my variables:
class LossWrapper(object):
    def __init__(self, num_centers):
        ...
        self.total_loss = tf.Variable(0, dtype=tf.float32)

    def loss_function(self, y_true, y_pred):
        ...
        self.total_loss = self.total_loss + ...
I am getting an error:
tensorflow.python.framework.errors_impl.FailedPreconditionError:
Attempting to use uninitialized value Variable
Using self.total_loss_cosine = tf.zeros(1)[0] I am getting a similar message:
tensorflow.python.framework.errors_impl.InvalidArgumentError:
Retval[0] does not have value
I came to the conclusion that no matter how I define my variable or where I define it (I have tried inside the __init__ function and in the main function body), I get an error about attempting to use an uninitialized variable.
I am starting to think that I cannot initialize variables inside my loss function and should probably implement them as a typical block outside it. Is this the case? Is the loss function basically separated from the rest of the network, so that the typical initialization does not work as expected?
Some remarks:
The loss function seems to work flawlessly in eager execution mode, where the initialization issue obviously does not exist.
In eager execution mode the type of y_true seems to be np.ndarray and not tf.Tensor (or tf.EagerTensor at least). Does this mean that y_true and y_pred are propagated as NumPy arrays in general, meaning that this part is actually detached from the network? (I have tested this in eager execution only, though.)
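One workaround that sidesteps the initialization problem, assuming the accumulated value does not need to persist across batches, is to avoid tf.Variable inside the loss entirely and build the total as a plain tensor. A minimal sketch (the loss term shown is just a placeholder):

import tensorflow as tf

class LossWrapper(object):
    def __init__(self, num_centers):
        self.num_centers = num_centers   # no tf.Variable created here

    def loss_function(self, y_true, y_pred):
        total_loss = tf.constant(0.0, dtype=tf.float32)                         # plain tensor, nothing to initialize
        total_loss = total_loss + tf.reduce_mean(tf.square(y_true - y_pred))    # placeholder term
        return total_loss

If a value really has to persist across batches, it would have to be created outside the loss and explicitly initialized in the active session (for example via K.get_session().run(tf.variables_initializer([...]))) before training starts.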

Using Keras layers inside custom loss function

Is it possible to use a Keras layer (pre-trained or fixed layer with no trainable parameters) inside a custom loss function?
I would like to do something like:
def custom_loss(y_true, y_pred):
    y_true_trans = SomeKerasLayer()(y_true)
    y_pred_trans = SomeKerasLayer()(y_pred)
    return K.mean(K.abs(y_pred_trans - y_true_trans), axis=-1)
In the TensorFlow backend, I get the error:
File "/home/drb/venvs/keras/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 364, in make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.
Of course I could transform y_pred with the Keras layer outside the loss function (by providing an extra output), but I can't do the same with the reference value y_true.
Another way to rephrase the same question in more general terms would be: Is it possible to encapsulate a Keras layer as a Keras backend function?
Is there any solution or workaround?
The question is somewhat vague, so it has both a yes and a no response.
Depending on your implementation you may try
model = keras.layers.Add(..something..)(x)
where x is the name of the previous relevant value.
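To make the original idea work, the layer generally has to be instantiated once outside the loss so that y_true and y_pred pass through the same fixed weights; calling SomeKerasLayer() separately for each tensor creates two unrelated layers. A minimal sketch under that assumption (SomeKerasLayer is the placeholder name from the question, not a real Keras class):

import keras.backend as K

transform = SomeKerasLayer()       # build once; load pre-trained weights if needed
transform.trainable = False        # fixed layer, no trainable parameters

def custom_loss(y_true, y_pred):
    y_true_trans = transform(y_true)
    y_pred_trans = transform(y_pred)
    return K.mean(K.abs(y_pred_trans - y_true_trans), axis=-1)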

Tensorflow placeholder in Keras custom objective function

I need to implement a custom objective function for Keras where I need an additional TensorFlow placeholder for the computation. In TensorFlow, I have it as follows:
pre_cost1 = tf.multiply((self.input_R - self.Decoder) , self.input_mask_R)
cost1 = tf.square(self.l2_norm(pre_cost1))
where input_mask_R is the TensorFlow placeholder. input_R and Decoder are the placeholders corresponding to y_true and y_pred of the Keras loss function, respectively. I have the Keras loss function implemented as:
def custom_objective(y_true, y_pred):
    pre_cost1 = tf.multiply((y_true - y_pred))   # the input_mask_R factor is missing here
    cost1 = tf.square(l2_norm(pre_cost1))
    return cost1
I need to add the additional input-mask information to the loss function for Keras. (It needs to be a TensorFlow placeholder since it is a mask for the input that is different for each row of the input data.)
Use the keras backend:
import keras.backend as K
Most functions for tensors are there, such as:
input_mask_R = K.placeholder(shape=(yourshape))
But maybe, since you want a predefined mask, what you need is:
input_mask_R = K.constant(arrayWithValues, shape=(yourshape))
And you can do the multiplication and squaring with backend operations as well (plain * on tensors, plus K.square). That way, if you ever think of changing the backend, everything will still work. (Also, I'm not sure whether Keras will handle direct calls to TensorFlow functions.)
See documentation: https://keras.io/backend/
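A common way to combine this with Keras's fixed (y_true, y_pred) loss signature is a closure that captures the mask, for example with the K.constant variant above. A minimal sketch (arrayWithValues and yourshape are the placeholder names from the answer):

import keras.backend as K

def make_masked_objective(arrayWithValues, yourshape):
    input_mask_R = K.constant(arrayWithValues, shape=yourshape)

    def custom_objective(y_true, y_pred):
        pre_cost1 = (y_true - y_pred) * input_mask_R
        return K.sum(K.square(pre_cost1))   # squared L2 norm of the masked residual

    return custom_objective

# model.compile(optimizer='adam', loss=make_masked_objective(mask_array, mask_shape))

If the mask really changes per sample, it would instead have to be fed as an extra model input and captured by the closure in the same way.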

LSUV init in tensorflow

I am trying to implement LSUV init, as in:
https://github.com/ducha-aiki/LSUVinit
https://github.com/ducha-aiki/LSUV-keras/blob/master/lsuv_init.py
in TensorFlow, and I am encountering some difficulties.
Basically, the algorithm should be something like:
For each layer L do
    while |Var(Bl) - 1| > epsilon
        do forward pass with mini-batch
        calculate Var(Bl)
        Wl = Wl / sqrt(Var(Bl))
    end while
end for
Is there any way to iterate over each layer in the network?
How can I get the activation of each layer after a single pass?
I was trying to calculate the variance of each layer with tf.nn.moments, but this function returns a tensor; how do I evaluate the value of this tensor?
I am wondering what is better to do: build a separate inference graph for the initialization and training phases, or run some "training" in order to initialize the model and load it before the real training. Any advice?
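For the Keras-style graphs used in the linked reference, a minimal sketch of the loop might look like this (model and batch are assumed to exist; only layers with weights are touched, and only the kernel is rescaled):

import numpy as np
import keras.backend as K

def lsuv_init(model, batch, epsilon=0.05, max_iters=10):
    for layer in model.layers:                                   # iterate over every layer
        weights = layer.get_weights()
        if not weights:                                          # skip layers without parameters
            continue
        get_acts = K.function([model.input], [layer.output])     # activations after a forward pass
        for _ in range(max_iters):
            acts = get_acts([batch])[0]                          # concrete numpy values, no session handling needed
            var = np.var(acts)
            if abs(var - 1.0) < epsilon:
                break
            weights[0] = weights[0] / np.sqrt(var)               # Wl = Wl / sqrt(Var(Bl))
            layer.set_weights(weights)
    return model

In plain TensorFlow, tf.nn.moments indeed returns tensors; their values are obtained by running them in a session (sess.run(variance, feed_dict=...)), which is also how the per-layer activations can be fetched after a single pass.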
