keras: how to block convolution layer weights

I have a GoogleNet model for Keras. Is there any possible way to block individual layers of the network from changing? I want to block the first two layers of the pretrained model from being updated.

By 'block the changing of the individual layers' I assume you mean you don't want to train those layers, i.e. you don't want to modify the loaded weights (possibly learned in a previous training run).
If so, you can pass trainable=False to the layer, and its parameters won't be used in the training update rule.
Example:
from keras.models import Sequential
from keras.layers import Dense, Activation

# All layers trainable (the default)
model = Sequential([
    Dense(32, input_dim=100),
    Dense(10),
    Activation('sigmoid'),
])
model.summary()

# First layer frozen: its weights are excluded from the update rule
model2 = Sequential([
    Dense(32, input_dim=100, trainable=False),
    Dense(10),
    Activation('sigmoid'),
])
model2.summary()
Comparing the two summaries, you can see that in the second model the frozen layer's 3,232 parameters are counted as non-trainable:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
dense_1 (Dense) (None, 32) 3232 dense_input_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 10) 330 dense_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation) (None, 10) 0 dense_2[0][0]
====================================================================================================
Total params: 3,562
Trainable params: 3,562
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
dense_3 (Dense) (None, 32) 3232 dense_input_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense) (None, 10) 330 dense_3[0][0]
____________________________________________________________________________________________________
activation_2 (Activation) (None, 10) 0 dense_4[0][0]
====================================================================================================
Total params: 3,562
Trainable params: 330
Non-trainable params: 3,232
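
To apply this to an already loaded pretrained model, such as the GoogleNet in the question, you can set the trainable attribute on the existing layers before compiling. A minimal sketch, assuming the pretrained network is loaded into a variable called model:
# Assumption: `model` holds the loaded pretrained GoogleNet.
# Freeze the first two layers so their weights are not updated.
for layer in model.layers[:2]:
    layer.trainable = False

# trainable is read at compile time, so compile (or re-compile) afterwards.
model.compile(optimizer='sgd', loss='categorical_crossentropy')
model.summary()  # frozen weights now show up under "Non-trainable params"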

Related

What's the sequence length of a Keras Bidirectional layer?

If I have:
self.model.add(LSTM(lstm1_size, input_shape=(seq_length, feature_dim), return_sequences=True))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
then my seq_length specifies how many slices of data I want to process at once. If it matters, my model is a sequence-to-sequence (same size).
But if I have:
self.model.add(Bidirectional(LSTM(lstm1_size, input_shape=(seq_length, feature_dim), return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))
then is that doubling the sequence size? Or at each time step, is it getting seq_length / 2 before and after that timestep?
Using a bidirectional LSTM layer has no effect on the sequence length; it changes the feature dimension of the output (doubled by default, since the forward and backward outputs are concatenated), not the number of time steps.
I tested this with the following code:
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM, BatchNormalization, Dropout

lstm1_size = 50
seq_length = 128
feature_dim = 20

model = Sequential()
model.add(Bidirectional(LSTM(lstm1_size, input_shape=(seq_length, feature_dim),
                             return_sequences=True)))
model.add(BatchNormalization())
model.add(Dropout(0.2))

# Build with an explicit batch size so the summary shows concrete shapes
batch_size = 32
model.build(input_shape=(batch_size, seq_length, feature_dim))
model.summary()
This produced the following summary for the bidirectional model:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bidirectional_1 (Bidirection (32, 128, 100) 28400
_________________________________________________________________
batch_normalization_1 (Batch (32, 128, 100) 400
_________________________________________________________________
dropout_1 (Dropout) (32, 128, 100) 0
=================================================================
Total params: 28,800
Trainable params: 28,600
Non-trainable params: 200
And the same model without the Bidirectional wrapper:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 128, 50) 14200
_________________________________________________________________
batch_normalization_1 (Batch (None, 128, 50) 200
_________________________________________________________________
dropout_1 (Dropout) (None, 128, 50) 0
=================================================================
Total params: 14,400
Trainable params: 14,300
Non-trainable params: 100
_________________________________________________________________
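In both summaries the time dimension stays at 128; only the feature dimension changes (100 vs. 50). If you want to keep the original feature size, the wrapper's merge_mode argument controls how the two directions are combined. A small sketch:
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM

# merge_mode='sum' adds the forward and backward outputs element-wise,
# so the output stays (batch, seq_length, 50) instead of doubling to 100
# as with the default merge_mode='concat'.
model_sum = Sequential()
model_sum.add(Bidirectional(LSTM(50, return_sequences=True), merge_mode='sum'))
model_sum.build(input_shape=(32, 128, 20))
model_sum.summary()  # output shape: (32, 128, 50)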

How to get a layer's type in Keras?

I'm trying to select the last Conv2D layer of a given (general) model. model.summary() prints a list of layers with their types, but how can I access this programmatically to find the last layer of a given type?
Output from model.summary():
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
...
_________________________________________________________________
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
I think what you are looking for is this:
layer.__class__.__name__
where layer is an element of model.layers.
Alternatively, you can iterate over model.layers in reverse order and check layer types via isinstance:
next(x for x in model.layers[::-1] if isinstance(x, keras.layers.Conv2D))
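Putting the two together, a minimal runnable sketch that finds the last Conv2D layer in VGG16 (any model with convolutional layers would do):
import keras
from keras.applications import VGG16

model = VGG16(weights=None)  # weights=None skips the download; only the graph is needed

# Option 1: compare class names.
conv_names = [l.name for l in model.layers if l.__class__.__name__ == 'Conv2D']
print(conv_names[-1])  # block5_conv3

# Option 2: scan the layers in reverse and use isinstance.
last_conv = next(x for x in model.layers[::-1]
                 if isinstance(x, keras.layers.Conv2D))
print(last_conv.name)  # block5_conv3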

Splitting a Keras model on an arbitrary layer

I'm trying to create a function that splits a Keras model on a user specified layer. I have the following code:
def return_split_models(model, layer):
    model_f, model_h = Sequential(), Sequential()
    for current_layer in range(0, layer+1):
        model_f.add(model.layers[current_layer])
    for current_layer in range(layer+1, len(model.layers)):
        model_h.add(model.layers[current_layer])
    return model_f, model_h
However, when I return model_h and call summary(), I get a ValueError saying the model has never been called. From other posts this seems to be related to model_h's missing input, but I cannot find an example that generalizes to an arbitrary split layer. Does anyone have any guidance?
You need to add an InputLayer to model_h:
from keras.models import Sequential
from keras.layers import InputLayer

def return_split_models(model, layer):
    model_f, model_h = Sequential(), Sequential()
    for current_layer in range(0, layer+1):
        model_f.add(model.layers[current_layer])
    # Give model_h an explicit input matching the output of the split point
    model_h.add(InputLayer(input_shape=model.layers[layer+1].input_shape[1:]))
    for current_layer in range(layer+1, len(model.layers)):
        model_h.add(model.layers[current_layer])
    return model_f, model_h
An example:
from keras.layers import Dense

model = Sequential()
model.add(Dense(50, input_shape=(100,)))
model.add(Dense(40))
model.add(Dense(30))
model.add(Dense(20))
model.add(Dense(10))

model_f, model_h = return_split_models(model, 2)
print(model_f.summary())
print(model_h.summary())
# print
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 50) 5050
_________________________________________________________________
dense_2 (Dense) (None, 40) 2040
_________________________________________________________________
dense_3 (Dense) (None, 30) 1230
=================================================================
Total params: 8,320
Trainable params: 8,320
Non-trainable params: 0
_________________________________________________________________
None
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 20) 620
_________________________________________________________________
dense_5 (Dense) (None, 10) 210
=================================================================
Total params: 830
Trainable params: 830
Non-trainable params: 0
_________________________________________________________________
None
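One way to sanity-check the split is to verify that chaining the two halves reproduces the original model's output. A small sketch, reusing the model and split from above:
import numpy as np

x = np.random.random((1, 100))

# The split models share their layers (and weights) with the original,
# so feeding model_f's output into model_h should reproduce model's output.
original = model.predict(x)
chained = model_h.predict(model_f.predict(x))
print(np.allclose(original, chained))  # True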

How to set embedding layer name in keras

inputs_bedding = Input(shape=(it.shape))
embedding = Embedding(9488, 512, trainable=False)(inputs_bedding)
There is no name parameter documented for the keras Embedding layer. How do I set the layer's name?
You can set the name of the embedding layer just like for any other layer; every Keras layer accepts name through the base Layer class.
from keras.layers import Embedding, Input
from keras import Model
inputs_bedding = Input(shape=(32,))
embedding = Embedding(9488, 512, trainable=False, name="test")(inputs_bedding)
model = Model(inputs=inputs_bedding, outputs=embedding)
model.summary() gives you:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 32) 0
_________________________________________________________________
test (Embedding) (None, 32, 512) 4857856
=================================================================
Total params: 4,857,856
Trainable params: 0
Non-trainable params: 4,857,856
_________________________________________________________________
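Once a layer is named, you can retrieve it with model.get_layer, which is useful if you later want to freeze it or inspect its weights:
layer = model.get_layer("test")
print(layer.name, layer.output_shape)  # test (None, 32, 512)

weights = layer.get_weights()  # a list holding the embedding matrix
print(weights[0].shape)        # (9488, 512)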

Access to outputs of the middle layers in the fine tuned network in keras

I have fine-tuned vgg16 in Keras, giving these layers:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 1, 1, 512) 14714688
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 525312
_________________________________________________________________
dense_2 (Dense) (None, 512) 524800
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 15,769,930
Trainable params: 8,134,666
Non-trainable params: 7,635,264
But I can only extract the features of my input image from flatten_1, dense_1, ..., dense_3, via model.layers[1].output, model.layers[2].output, ..., model.layers[5].output.
So how can I extract features from the middle layers of vgg16?
This is a common pattern for getting the output of an intermediate layer for a given input x_test:
import keras.backend as K

# Backend function mapping (input, learning phase) -> desired layer's output
get_layer = K.function(
    [model.layers[0].input, K.learning_phase()],
    [model.layers[LAYER_DESIRED].output])

# Learning phase 0 = test mode (dropout disabled, etc.)
layer_output = get_layer([x_test, 0])[0]
where LAYER_DESIRED is the index of the layer you want to output.
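This addresses the top-level layers, but the layers inside the nested vgg16 model are not in model.layers; they live in the sub-model at model.layers[0]. A sketch, assuming model.layers[0] is the VGG16 base and using one of its standard layer names ('block3_conv3' here; run model.layers[0].summary() to list them):
from keras.models import Model

vgg = model.layers[0]  # the nested VGG16 sub-model

# Build a feature extractor from the sub-model's input to one of its
# intermediate layers, addressed by name.
feature_model = Model(inputs=vgg.input,
                      outputs=vgg.get_layer('block3_conv3').output)

features = feature_model.predict(x_test)
print(features.shape)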
