I am new to Keras. How can I print the outputs of a layer, both intermediate and final, during the training phase?
I am trying to debug my neural network and want to know how the layers behave during training. To do so, I am trying to extract the input and output of a layer during training, for every step.
The FAQ (https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer) has a method to extract output of intermediate layer for building another model but that is not what I want. I don't need to use the intermediate layer output as input to other layer, I just need to print their values out and perhaps graph/chart/visualize it.
I am using Keras 2.1.4
I think I have found an answer myself, although it is not strictly accomplished by Keras.
Basically, to access layer output during training, one needs to modify the computation graph by adding a print node.
A more detailed description can be found in this StackOverflow question:
How can I print the intermediate variables in the loss function in TensorFlow and Keras?
I will quote an example here: say you would like to have your loss printed per step; then you need to define your custom loss function as follows.
For the Theano backend:

import theano
from keras import backend as K

def custom_loss(y_true, y_pred):
    diff = y_pred - y_true
    # theano.printing.Print inserts a print node into the computation graph
    diff = theano.printing.Print('shape of diff', attrs=['shape'])(diff)
    return K.square(diff)
For the TensorFlow backend:

import tensorflow as tf
from keras import backend as K

def custom_loss(y_true, y_pred):
    diff = y_pred - y_true
    # tf.Print is an identity op that prints the listed tensors as a side effect
    diff = tf.Print(diff, [tf.shape(diff)])
    return K.square(diff)
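For completeness, a minimal usage sketch (model, x_train, and y_train are placeholders for your own model and data):

# compile with the custom loss; the print node fires once per training step
model.compile(optimizer='adam', loss=custom_loss)
model.fit(x_train, y_train, batch_size=32, epochs=1)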
Outputs of other layers can be accessed similarly.
There is also a nice video tutorial from Google about using tf.Print():
Using tf.Print() in TensorFlow
If you want more information about each neuron, you can use the following to get its weights and biases:
weights = model.layers[0].get_weights()[0]  # weight matrix of the first layer
biases = model.layers[0].get_weights()[1]   # bias vector of the first layer
Index 0 of the returned list holds the weights and index 1 holds the biases.
You can also iterate per layer:
for layer in model.layers:
    weights = layer.get_weights()  # list of numpy arrays
After each training step, if you access each layer and extract its weights and biases as NumPy arrays, you should be able to visualize how the neurons evolve.
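For example, here is a minimal sketch of a Keras Callback that prints summary statistics of the first layer's weights after every epoch (the layer index and the printed statistics are illustrative assumptions):

import numpy as np
from keras.callbacks import Callback

class WeightLogger(Callback):
    # prints mean weight and bias of the first layer after each epoch
    def on_epoch_end(self, epoch, logs=None):
        weights, biases = self.model.layers[0].get_weights()
        print('epoch %d: weight mean %.4f, bias mean %.4f'
              % (epoch, np.mean(weights), np.mean(biases)))

# hypothetical usage:
# model.fit(x_train, y_train, callbacks=[WeightLogger()])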
Hope it helps.
TensorFlow's official tutorial says that we should pass base_model(training=False) during training so that the BN layers do not update their mean and variance. My question is: why? Why don't we need to update the mean and variance? BN has ImageNet's mean and variance, so why is it useful to keep ImageNet's statistics instead of updating them on the new data? Even during fine-tuning, the whole model updates its weights, but the BN layers will still have ImageNet's mean and variance.
edit: I am using this tutorial: https://www.tensorflow.org/tutorials/images/transfer_learning
When a model is trained from initialization, batchnorm should be enabled so that it tunes its mean and variance, as you mentioned. Fine-tuning or transfer learning is a somewhat different thing: you already have a model that can do more than you need, and you want to specialize this pre-trained model to your task and your data set. In this case, part of the weights are frozen and only some layers closest to the output are changed. Since BN layers are used throughout the model, you should freeze them as well. Check this explanation again:
Important note about BatchNormalization layers: Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
Source: transfer learning tutorial, details regarding freezing.
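As an illustration, here is a minimal sketch of the pattern the tutorial describes (MobileNetV2, the 160x160 input, and the unfreeze cutoff of 100 layers follow the tutorial; adapt them to your case):

import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False  # freeze all weights, including BN statistics

inputs = tf.keras.Input(shape=(160, 160, 3))
x = base_model(inputs, training=False)  # keep BN layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

# for fine-tuning, unfreeze only the top of the base model;
# training=False above still keeps the BN statistics fixed
base_model.trainable = True
for layer in base_model.layers[:100]:
    layer.trainable = False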
I have a convolutional neural network with some layers in Keras. The last layer in this network is a custom layer that is responsible for sorting some numbers that it gets from the previous layer; the output of this custom layer is then sent to the loss function.
For this purpose (sorting), I use operators such as K.argmax and K.gather in this layer.
In the back-propagation phase I get an error from Keras that says:
An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval
That is reasonable, given the involvement of this layer in the differentiation process.
Given that my custom layer does not need to participate in the chain rule of differentiation, how can I control the differentiation chain in Keras? Can I disable this process for the custom layer?
The Reorder layer that I use in my code is simply the following:
from keras import backend as K
from keras.layers import Lambda

def Reorder(args):
    z = args[0]
    l = args[1]
    # K.argmax and K.gather have no defined gradient, hence the error
    index = K.tf.argmax(l, axis=1)
    return K.tf.gather(z, index)

Reorder_Layer = Lambda(Reorder, name='out_x')
pred_x = Reorder_Layer([z, op])
A few things:
It's impossible to train without a derivative, so there is no solution if you want to train this model.
It's not necessary to "compile" if you are only going to predict, so you don't need custom derivation rules.
If the problem is really in that layer, I suppose that l is computed by the model using trainable layers before it.
If you really want to try this, which doesn't seem like a good idea, you can try l = keras.backend.stop_gradient(args[1]). But this means that absolutely nothing will be trained from l back to the beginning of the model. If this doesn't work, then you have to make all layers that produce l have trainable=False before compiling the model.
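A minimal sketch of that suggestion, applied to the Reorder function from the question (everything else stays the same):

from keras import backend as K

def Reorder(args):
    z = args[0]
    l = K.stop_gradient(args[1])  # no gradient will flow back through l
    index = K.tf.argmax(l, axis=1)
    return K.tf.gather(z, index)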
I have trained a fully convolutional neural network with Keras. I have used the Functional API and have defined the input layer as Input(shape=(128,128,3)), corresponding to the size of the images in my training set.
However, I want to use the trained model on images of variable sizes (which should be ok because the network is fully convolutional). To do this, I need to change my input layer to Input(shape=(None,None,3)). The obvious way to solve the problem would have been to train my model directly with an input shape of (None,None,3) but I use a custom loss function where I need to specify the size of my training images.
I have tried to define a new input layer and assign it to my model like this:
from keras.engine import InputLayer
input_layer = InputLayer(input_shape=(None, None, 3), name="input")
model.layers[0] = input_layer
This actually changes the size of the input layer accordingly, but the following layers still expect (128,128,filters) inputs.
Is there a way to change all of the input shapes at once?
Create a new model, exactly the same except for the new input shape, and transfer the weights:
newModel.set_weights(oldModel.get_weights())
If anything goes wrong, the model might not be fully convolutional (e.g., it contains a Flatten layer).
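As a minimal sketch, with a toy two-layer fully convolutional model standing in for your architecture:

from keras.models import Model
from keras.layers import Input, Conv2D

def build_model(input_shape):
    # stand-in for your own fully convolutional architecture
    inp = Input(shape=input_shape)
    x = Conv2D(16, 3, padding='same', activation='relu')(inp)
    out = Conv2D(3, 3, padding='same')(x)
    return Model(inp, out)

oldModel = build_model((128, 128, 3))    # the shape used for training
newModel = build_model((None, None, 3))  # same layers, flexible input size
newModel.set_weights(oldModel.get_weights())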
I am studying deep learning and trying to implement it using Caffe with Python. Can anybody tell me how we can assign the weights to each node in the input layer, instead of using a weight filler, in Caffe?
There is a fundamental difference between weights and input data: the training data is used to learn the weights (aka "trainable parameters") during training. Once the net is trained, the training data is no longer needed while the weights are kept as part of the model to be used for testing/deployment.
Make sure this difference is clear to you before you proceed.
Layers with trainable parameters have a filler to set the weights initially.
On the other hand, an input data layer does not have trainable parameters, but it should supply the net with input data. Thus, input layers have no filler.
Based on the type of input layer you use, you will need to prepare your training data accordingly.
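To make the distinction concrete, here is a short pycaffe sketch (the prototxt and caffemodel file names are placeholders; 'data' is the conventional input blob name):

import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# trainable parameters (weights and biases) live in net.params, per layer
for name, params in net.params.items():
    print(name, [p.data.shape for p in params])

# input data and activations live in net.blobs; the input layer carries
# data rather than weights, which is why it has no filler
print(net.blobs['data'].data.shape)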
I fine-tuned VGG-16 for binary classification, using a SigmoidWithLoss layer as the loss function.
To test the model, I wrote a Python file in which I load the model and an image and get the output using:
out = net.forward()
My doubt is: should I take the output from the Sigmoid or the SigmoidWithLoss layer?
And what is the difference between the two layers?
My output should be the probability of the input image being class 1.
For making predictions on the test set, you can create a separate deploy prototxt by modifying the original prototxt.
The steps are as follows:
Remove the data layer that was used for training, since for classification we no longer provide labels for the data.
Remove any layer that is dependent upon data labels.
Set the network up to accept data.
Have the network output the result.
You can read more about this here: deploy prototxt
Alternatively, you can add
include {
  phase: TRAIN
}
to your SigmoidWithLoss layer so that it is not used when testing the network. To make predictions, simply check the output of the Sigmoid layer.
The SigmoidWithLoss layer outputs a single number per batch representing the loss w.r.t. the ground-truth labels.
On the other hand, the Sigmoid layer outputs a probability value for each input in the batch. This output does not require the ground-truth labels to be computed.
If you are looking for the probability per input, you should look at the output of the Sigmoid layer.
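For example, a minimal pycaffe sketch of reading the per-input probability (the file names are placeholders, and the blob name 'prob' is an assumption; use the top name your prototxt assigns to the Sigmoid layer):

import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
net.forward()

# 'prob' stands in for the Sigmoid layer's top blob name in your prototxt
prob = net.blobs['prob'].data  # probability of class 1 for each input
print(prob)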