How can I obtain the output of an intermediate layer (feature extraction)? - python

I want to extract features of an optical image and save them into a numpy array. I've seen similar questions, and the Keras FAQ covers this here: https://keras.io/getting_started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer-feature-extraction, but I don't know how to go about it.

The Keras documentation specifies exactly how to do that. If you have defined your model model_full, you can create another one that is just a part of it, from the input layer to the one you're interested in:
from tensorflow.keras.models import Model

model_part = Model(
    inputs=model_full.input,
    outputs=model_full.get_layer("intermed_layer").output)
Then you should be able to obtain output from intermediate layer using:
intermed_output = model_part(data)
In order to do that, you just need a model_full defined, which I assume you already have.
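Since the question also asks about saving the features into a numpy array, here is a minimal sketch of that last step; predict() already returns a plain numpy array, and the file name below is just an example:
import numpy as np

features = model_part.predict(data)  # plain numpy array of intermediate features
np.save("features.npy", features)    # example file name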
2nd approach
You can also use a built-in Keras function, which I guess you already saw in the documentation as well. It may look kind of complicated at first, but it's just creating a function with bound values, i.e.
from keras import backend as K

get_3rd_layer_output = K.function(
    [model.layers[0].input],   # param 1 will be fed as the model's input
    [model.layers[3].output])  # and this function will return output from layers[3]

# here X is param 1 (the input) and the function returns output from layers[3]
output = get_3rd_layer_output([X])[0]
Clearly, again, the model has to be defined. I'm not sure if there are any other requirements apart from that.

Related

Meaning of model.add(tf.keras.layers.Lambda(lambda x: x * 200))

What does the following line of code do? How should I interpret it?
model.add(tf.keras.layers.Lambda(lambda x: x * 200))
My interpretation:
Lambda is like a function.
>>> f = lambda x: x + 1
>>> f(3)
4
In the second example the function is called using f(3). But what is the purpose of model.add?
The model.add method adds a layer to the associated Keras model. The argument of this method is usually a Keras layer; in your case, it is a special kind of layer called Lambda. You are right that lambda is a function. In Python, lambda is common syntactic sugar that allows you to declare a simple function without naming it. The line above is equivalent to:
def my_func(x):
    return x * 200

model.add(tf.keras.layers.Lambda(my_func))
As you can see, this is way more code for very basic functionality. Coming back to the Lambda layer: it simply applies the given function to the output of the previous layer. If you don't understand what a Keras model is or how machine learning works, at least in a broad sense, you may want to start with some tutorials on that instead of looking into what individual lines of code do. This way you will become productive much faster.
I bet it is used as the last layer. Normally, you could just have a Dense layer as the output. However, you can help the training by scaling the output up to roughly the same range as your labels. This will depend on the activation functions you used in your model: LSTM or SimpleRNN use tanh by default, which has an output range of [-1, 1]. You would use this Lambda() layer to scale the output by 200 so it matches the scale of the labels before the weights are adjusted.
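To illustrate, here is a minimal sketch of such a model (the layer sizes and sequence length are made up); the final Lambda layer rescales the tanh output from [-1, 1] to roughly [-200, 200]:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(10, 1)),  # tanh by default, outputs in [-1, 1]
    tf.keras.layers.Dense(1, activation="tanh"),
    tf.keras.layers.Lambda(lambda x: x * 200),  # scale up to match the label range
])
model.compile(optimizer="adam", loss="mse")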

How use properly tensorflow functions within the model

I'm using the functional API of TensorFlow 2 and tensorflow.keras.layers to build the model.
I have an input tensor (in_1) with shape [batch_size, length, dim] and I would like to compute the mean along the length dimension and obtain an output tensor (out_1) with shape [batch_size, dim].
Which of these should I use? (All of these options work, in terms of output shape and training.)
out_1 = Lambda(lambda x: tf.math.reduce_mean(x, axis=1))(in_1)
out_1 = Lambda(lambda x: tf.keras.backend.mean(x, axis=1))(in_1)
out_1 = tf.math.reduce_mean(in_1, axis=1)
This last one automatically creates a TensorFlowOpLayer, is this something that should be avoided?
Are there other ways to do this?
What's the difference between tf.math.reduce_mean and tf.keras.backend.mean, and which should I use?
I know that custom functions should be called inside a Lambda layer, but is that also true for TensorFlow functions such as tf.math.reduce_mean, which can process the tensor in "one fell swoop"? How should I call them if I need to specify a parameter (e.g. axis)?
First, for the difference between tf.keras.backend.mean and tf.math.reduce_mean: there is none. You can check the source code for the Keras backend version, which simply calls reduce_mean (from math_ops, but internally that's the same one exposed in tf.math). IMHO this is a bit of a failure in the TF redesign that incorporated Keras: Keras is now contained in TF, but Keras also uses TF as the "backend", so you basically have every operation twice: once the TF version, and once the Keras version which, after all, also just uses the TF version.
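A quick sanity check (a minimal sketch, assuming TF 2.x eager mode) confirms that the two produce identical results:
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(4, 3, 5), dtype=tf.float32)
a = tf.math.reduce_mean(x, axis=1)
b = tf.keras.backend.mean(x, axis=1)
print(np.allclose(a.numpy(), b.numpy()))  # True: same op under the hood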
Anyway, for the difference between using Lambda or not: It also doesn't (really) matter. Here is a minimal example:
inp = tf.keras.Input((10,))
layer = tf.reduce_mean(inp, axis=-1)
model = tf.keras.Model(inp, layer)
print(model.layers)
gives the output
[<tensorflow.python.keras.engine.input_layer.InputLayer at 0x7f1a651500b8>,
<tensorflow.python.keras.engine.base_layer.TensorFlowOpLayer at 0x7f1a9912d8d0>]
We can see that the reduce_mean operation was automatically converted to a TensorFlowOpLayer. Now, this may be technically different from a Lambda layer, but I doubt it makes any practical difference. I suppose this would not work for a Sequential model, where you need to supply a list of layers, so there a Lambda would likely be needed.
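For completeness, here is a minimal sketch of how the same reduction would look inside a Sequential model, where a Lambda wrapper is needed because every entry must be a layer:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(10, 8)),
    tf.keras.layers.Lambda(lambda x: tf.math.reduce_mean(x, axis=1)),  # (batch, 10, 8) -> (batch, 8)
])
print(model.output_shape)  # (None, 8)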

Easiest way to see the output of a hidden layer in Tensorflow/Keras?

I am working on a GAN and I'm trying to diagnose how and why mode collapse occurs. I want to be able to look "under the hood" and see what the outputs of various layers in the network look like for the last minibatch. I saw you can do something like model.layers[5].output, but this produces a tensor of shape [None, 64, 64, 512], which looks like an empty tensor and not the actual output from the previous run. My only other idea is to recompile a model that's missing all the layers after the one I'm interested in and then run a minibatch through, but this seems like an extremely inefficient way to do it, and I'm wondering if there's an easier way. I want to run some statistics on layer outputs during the training process to see where things might be going wrong.
I did this for a GAN I was training myself. The method I used extends to both the generator (G) and discriminator (D) of a GAN.
The idea is to make a model with the same input as D or G, but with outputs according to each layer in the model that you require.
For me, I found it useful to check the activations. In Keras, given some model model (which will be D or G for you and me):
import numpy as np
from keras.layers import Activation, LeakyReLU
from keras.models import Model

activation_layers = []
activation_names = []

# obtain the layers in a given model, but skip the first 6
# as these generally are the input / non-convolutional layers
model_layers = [layer for layer in model.layers][6:]
# print the names of what we are looking at
print("MODEL LAYERS:", model_layers)

for layer in model_layers:
    # check if the layer is an activation
    if isinstance(layer, (Activation, LeakyReLU)):
        # append the output of this layer for later
        activation_layers.append(layer.output)
        # name it with a signature of its output shape for clarity
        activation_names.append(layer.name + str(layer.output_shape[1]))

# now create a model which outputs every activation
activation_model = Model(inputs=model.inputs, outputs=activation_layers)

# this outputs a list of arrays when given an input, so for G:
noise = np.random.normal(size=(1, 32, 32, 1))  # random image shape (change for yourself)
model_activations = activation_model.predict(noise)
Now the rest is quite model-specific. This is the basic method for checking the outputs of the layers in a given model.
Note it can be done before, during or after training. It also does not need re-compiling.
The plotting of activation maps in this case is relatively straightforward and, as you mentioned, you will probably have something specific you want to do. Still, I have to link this beautiful example here.
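Since the question mentions running statistics on layer outputs, here is a minimal sketch of that step, reusing activation_names and model_activations from the code above:
# model_activations is a list with one numpy array per recorded layer
for name, act in zip(activation_names, model_activations):
    print(f"{name}: mean={act.mean():.4f}, std={act.std():.4f}")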
I report here a useful 2-line code block, extrapolated from Homer's answer above, that I used to inspect a single layer of the neural network:
ablation_model = Model(inputs=model.inputs, outputs=model.layers[-2].output)
preds = ablation_model.predict(np.random.normal(size=(20,2))) # adapt size

Keras, What's the difference between keras.backend.concatenate and Keras.layers.Concatenate [duplicate]

I just recently started playing around with Keras and got into making custom layers. However, I am rather confused by the many different types of layers with slightly different names but with the same functionality.
For example, there are 3 different forms of the concatenate function from https://keras.io/layers/merge/ and https://www.tensorflow.org/api_docs/python/tf/keras/backend/concatenate
keras.layers.Concatenate(axis=-1)
keras.layers.concatenate(inputs, axis=-1)
tf.keras.backend.concatenate()
I know the 2nd one is used for functional API but what is the difference between the 3? The documentation seems a bit unclear on this.
Also, for the 3rd one, I have seen a code that does this below. Why must there be the line ._keras_shape after the concatenation?
# Concatenate the summed atom and bond features
atoms_bonds_features = K.concatenate([atoms, summed_bond_features], axis=-1)
# Compute fingerprint
atoms_bonds_features._keras_shape = (None, max_atoms, num_atom_features + num_bond_features)
Lastly, under keras.layers, there always seem to be two duplicates. For example, Add() and add(), and so on.
First, the backend: tf.keras.backend.concatenate()
Backend functions are supposed to be used "inside" layers. You'd only use this in Lambda layers, custom layers, custom loss functions, custom metrics, etc.
It works directly on "tensors".
It's not the right choice if you're not going deep into customizing. (And it was a bad choice in your example code -- see details at the end.)
If you dive deep into keras code, you will notice that the Concatenate layer uses this function internally:
import keras.backend as K
class Concatenate(_Merge):
#blablabla
def _merge_function(self, inputs):
return K.concatenate(inputs, axis=self.axis)
#blablabla
Then, the Layer: keras.layers.Concatenate(axis=-1)
Like any other Keras layer, you instantiate it and call it on tensors.
Pretty straightforward:
#in a functional API model:
inputTensor1 = Input(shape) #or some tensor coming out of any other layer
inputTensor2 = Input(shape2) #or some tensor coming out of any other layer
#first parentheses are creating an instance of the layer
#second parentheses are "calling" the layer on the input tensors
outputTensor = keras.layers.Concatenate(axis=someAxis)([inputTensor1, inputTensor2])
This is not suited for sequential models, unless the previous layer outputs a list (this is possible but not common).
Finally, the concatenate function from the layers module: keras.layers.concatenate(inputs, axis=-1)
This is not a layer. This is a function that will return the tensor produced by an internal Concatenate layer.
The code is simple:
def concatenate(inputs, axis=-1, **kwargs):
    #blablabla
    return Concatenate(axis=axis, **kwargs)(inputs)
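So, reusing the tensors from the functional API example above, the function form is just shorthand:
# equivalent to Concatenate(axis=someAxis)([inputTensor1, inputTensor2])
outputTensor = keras.layers.concatenate([inputTensor1, inputTensor2], axis=someAxis)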
Older functions
In Keras 1, people had functions that were meant to receive "layers" as input and return an output "layer". Their names were related to the word merge.
But since Keras 2 doesn't mention or document these anymore, I'd avoid using them, and if old code is found, I'd update it to proper Keras 2 code.
Why the _keras_shape word?
This backend function was not supposed to be used in high-level code. The coder should have used a Concatenate layer:
atoms_bonds_features = Concatenate(axis=-1)([atoms, summed_bond_features])
#just this line is perfect
Keras layers add the _keras_shape property to all their output tensors, and Keras uses this property for inferring the shapes of the entire model.
If you use any backend function "outside" a layer or loss/metric, your output tensor will lack this property and an error will appear telling you that _keras_shape doesn't exist.
The coder created a bad workaround by adding the property manually, when it should have been added by a proper Keras layer. (This may work now, but if Keras is updated, this code will break, while proper code will keep working.)
Keras historically supports two different interfaces for its layers: the new functional one and the old one that requires model.add() calls, hence the two different functions.
As for TF: its concatenate() function does not do everything required for Keras to work, hence the additional call to set the ._keras_shape variable correctly, so as not to upset Keras, which expects that variable to have a particular value.

tensorflow copy variable but not trainable to pretrain next layers

I want to implement an autoencoder (to be exact, a stacked convolutional autoencoder),
where I'd like to pretrain each layer first and then fine-tune.
So I created variables for the weights of each layer,
e.g. W_1 = tf.Variable(initial_value, name, trainable=True, etc.) for the first layer,
and I pretrained W_1 of the first layer.
Then I want to pretrain the weights of the second layer (W_2).
Here I should use W_1 for calculating the input of the second layer.
However, W_1 is trainable, so if I use W_1 directly then TensorFlow may train W_1 together with W_2.
So I should create W_1_out, which keeps the value of W_1 but is not trainable.
To be honest, I tried to modify the code from this site:
https://github.com/cmgreen210/TensorFlowDeepAutoencoder/blob/master/code/ae/autoencoder.py
At line 102 it creates a variable with the following code:
self[name_w + "_fixed"] = tf.Variable(tf.identity(self[name_w]),
name=name_w + "_fixed",
trainable=False)
However, this raises an error because it uses an uninitialized value.
How can I copy a variable but make it not trainable, in order to pretrain the next layers?
Not sure if still relevant, but I'll try anyway.
Generally, what I do in a situation like that is the following:
Populate the (default) graph according to the model you are building, e.g. for the first training step just create the first convolutional layer W1 you mention. When you train the first layer, you can store the saved model once training is finished, then reload it and add the ops required for the second layer W2. Or you can just build the whole graph for W1 from scratch again directly in code and then add the ops for W2.
If you are using the restore mechanism provided by Tensorflow, you will have the advantage that the weights for W1 are already the pre-trained ones. If you don't use the restore mechanism, you will have to set the W1 weights manually, e.g. by doing something shown in the snippet further below.
Then when you set up the training op, you can pass a list of variables as var_list to the optimizer, which explicitly tells the optimizer which parameters are updated in order to minimize the loss. If this is set to None (the default), it just uses what it can find in tf.trainable_variables(), which in turn is a collection of all tf.Variables that are trainable. Maybe check this answer, too, which basically says the same thing.
When using the var_list argument, graph collections come in handy. E.g. you could create a separate graph collection for every layer you want to train. The collection would contain the trainable variables for each layer and then you could very easily just retrieve the required collection and pass it as the var_list argument (see example below and/or the remark in the above linked documentation).
How to override the value of a variable: name is the name of the variable to be overridden, value is an array of the appropriate size and type, and sess is the session:
variable = tf.get_default_graph().get_tensor_by_name(name)
sess.run(tf.assign(variable, value))
Note that the name needs an additional :0 in the end, so e.g. if the weights of your layer are called 'weights1' the name in the example should be 'weights1:0'.
To add a tensor to a custom collection: Use something along the following lines:
tf.add_to_collection('layer1_tensors', weights1)
tf.add_to_collection('layer1_tensors', some_other_trainable_variable)
Note that the first line creates the collection because it does not yet exist and the second line adds the given tensor to the existing collection.
How to use the custom collection: Now you can do something like this:
# loss = some tensorflow op computing the loss
var_list = tf.get_collection_ref('layer1_tensors')
optim = tf.train.AdamOptimizer().minimize(loss=loss, var_list=var_list)
You could also use tf.get_collection('layer1_tensors'), which would return you a copy of the collection.
Of course, if you don't want to do any of this, you could just use trainable=False when creating the graph for all variables you don't want to be trainable, as you hinted at in your question. However, I don't like that option too much, because it requires you to pass booleans into the functions that populate your graph, which is easily overlooked and thus error-prone. Also, even if you decide to do it like that, you would still have to restore the non-trainable variables manually.
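For reference, here is a minimal TF1-style sketch of that last option (pretrained_w1_value is a hypothetical numpy array holding the pretrained weights):
import tensorflow as tf

# recreate the pretrained first-layer weights as a non-trainable variable
W_1_fixed = tf.Variable(pretrained_w1_value, name="W_1_fixed", trainable=False)
W_2 = tf.Variable(tf.random_normal([5, 5, 32, 64]), name="W_2", trainable=True)

# only W_2 appears here, so optimizers will leave W_1_fixed untouched
print(tf.trainable_variables())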
