Add layers to a pretrained model without creating a sequential model - python

I am using a pretrained ResNet50 (from the tensorflow.keras.applications package) and fine-tuning it for multilabel classification (with 2 classes), and I'd like to extract saliency maps from the fine-tuned model.
To build the classifier, I add 2 dense layers on top of the ResNet model, creating a new sequential model as follows:
self.model = tf.keras.Sequential([
    resnet50,
    layers.Dense(1024, activation='relu', name='hidden_layer'),
    layers.Dense(2, activation='sigmoid', name='output')
])
But my problem is that the resnet50 becomes a single "layer", so its internal layers are no longer accessible: the model summary only contains 3 layers. I'd like to know if there is a way to add layers to a functional model without creating a sequential model, so that each layer of the ResNet model remains accessible.
Thank you in advance,

To access the layers of the resnet50 model, you first have to access the resnet50 layer and then, from that, access the convolution layer for which you want to create the saliency map:
self.model.get_layer("resnet50").get_layer("conv5_block2_3_conv")
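Alternatively, you can avoid the nesting entirely by wiring the new layers onto the ResNet output with the functional API; every ResNet layer then stays a top-level layer of the combined model. A minimal sketch, assuming a headless ResNet50 with average pooling (the question doesn't show how resnet50 was constructed):
import tensorflow as tf
from tensorflow.keras import layers

# Assumed configuration: headless ResNet50 with global average pooling.
resnet50 = tf.keras.applications.ResNet50(
    include_top=False, pooling='avg', input_shape=(224, 224, 3))

# Wire the classifier onto the ResNet output instead of nesting models.
x = layers.Dense(1024, activation='relu', name='hidden_layer')(resnet50.output)
outputs = layers.Dense(2, activation='sigmoid', name='output')(x)
model = tf.keras.Model(inputs=resnet50.input, outputs=outputs)

# Every ResNet layer is now directly accessible on the combined model:
conv_layer = model.get_layer('conv5_block2_3_conv')
With this construction, model.summary() lists all the ResNet layers individually, so saliency-map code can reference any of them by name.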

Related

LSTM, Keras: How many layers should the inference model have?

Should the inference model in a chatbot built with Keras LSTM have the same number of layers as the main model, or doesn't it matter?
I don't know exactly what you mean by the inference model.
The number of layers of a model is a hyperparameter that you tune during training. Let's say you train an LSTM model with 3 layers; then the model used for inference must have the same number of layers and use the weights resulting from the training.
Otherwise, if you add untrained layers at inference time, the results won't make any sense.
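For a concrete picture, here is a minimal sketch (the layer sizes, input shape, and vocabulary size are made up for illustration): the inference model is simply a copy of the trained architecture loaded with the trained weights.
from tensorflow import keras
from tensorflow.keras import layers

# Training model: 3 stacked LSTM layers (all sizes are illustrative).
training_model = keras.Sequential([
    layers.LSTM(128, return_sequences=True, input_shape=(None, 50)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(1000, activation='softmax'),
])
training_model.compile(optimizer='adam', loss='categorical_crossentropy')
# ... training happens here ...

# Inference model: identical architecture, loaded with the trained weights.
inference_model = keras.models.clone_model(training_model)
inference_model.set_weights(training_model.get_weights())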
Hope this helps

Fine-Tuning Pretrained BERT with CNN. How to Disable Masking

Has anyone used a CNN in Keras to fine-tune a pre-trained BERT?
I have been trying to design this, but the pre-trained model comes with masks (I think in the embedding layer), and when the fine-tuning architecture is built on the output of one of the encoder layers, it gives this error: Layer conv1d_1 does not support masking, but was passed an input_mask
So far I have tried some suggested workarounds, such as using keras_trans_mask to remove the mask before the CNN and add it back later, but that leads to other errors too.
Is it possible to disable Masking in the pre-trained model or is there a workaround for it?
EDIT: This is the code I'm working with
inputs = model.inputs[:2]
layer_output = model.get_layer('Encoder-12-FeedForward-Norm').output
conv_layer = keras.layers.Conv1D(100, kernel_size=3, activation='relu',
                                 data_format='channels_first')(layer_output)
maxpool_layer = keras.layers.MaxPooling1D(pool_size=4)(conv_layer)
flat_layer = keras.layers.Flatten()(maxpool_layer)
outputs = keras.layers.Dense(units=3, activation='softmax')(flat_layer)
model = keras.models.Model(inputs, outputs)
model.compile(RAdam(learning_rate=LR),
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
So layer_output carries a mask, and that mask cannot be passed to Conv1D.
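One possible workaround (a sketch using tensorflow.keras; the same class works with standalone Keras by changing the import) is a small pass-through layer that accepts the mask but refuses to propagate it, so the downstream Conv1D never sees one:
from tensorflow import keras

class RemoveMask(keras.layers.Layer):
    """Pass inputs through unchanged, but stop propagating the Keras mask."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.supports_masking = True  # accept masked inputs without erroring

    def compute_mask(self, inputs, mask=None):
        return None  # downstream layers (e.g. Conv1D) receive no mask

    def call(self, inputs):
        return inputs

# Strip the mask between the encoder output and the convolution:
# layer_output = RemoveMask()(layer_output)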

Activate dropout in a pre-trained VGG16 model

I'm using TensorFlow 2.0 with a pre-trained VGG16 model and want to activate dropout during prediction. So far I have tried the following, without success:
model = tf.keras.applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', is_training=True)
model = tf.keras.applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', dropout_rate=0.5)
However, none of these approaches worked. How can I enable dropout during the prediction phase?
The VGG16 architecture does not contain dropout layers by default, so you need to insert them into the model yourself.
Here is a post I found useful to solve this:
Add dropout layers between pretrained dense layers in keras
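Building on that, here is a minimal sketch of rebuilding VGG16 with dropout spliced in after its two fully connected layers (fc1 and fc2) and forced on at prediction time via training=True; the 0.5 rate is an assumption:
import tensorflow as tf

base = tf.keras.applications.VGG16(
    input_shape=(224, 224, 3), weights='imagenet')

# VGG16 is a linear stack, so we can replay its layers and insert
# Dropout right after the two fully connected layers ('fc1', 'fc2').
inputs = tf.keras.Input(shape=(224, 224, 3))
x = inputs
for layer in base.layers[1:]:  # skip the original InputLayer
    x = layer(x)
    if layer.name in ('fc1', 'fc2'):
        # training=True keeps dropout active even inside model.predict
        x = tf.keras.layers.Dropout(0.5)(x, training=True)

model = tf.keras.Model(inputs, x)
Because the replayed layers are the pretrained objects themselves, the ImageNet weights are reused unchanged; only the stochastic dropout behaviour is added.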

How to freeze specific layers of a model inside a model?

My Keras model is made up of multiple models. Each "sub-model" has multiple layers. How do I access the layers in a "sub-model" and set their trainability, i.e. freeze specific layers?
I'll use an example of the VGG19 convolutional neural network in Keras, although it applies to any neural network architecture:
from keras.applications.vgg19 import VGG19
model = VGG19(weights='imagenet')
You can visualise the layers using:
model.summary()
The summary will show the number of trainable parameters in the network. To freeze all layers except the last 5 in the network:
for layer in model.layers[:-5]:
    layer.trainable = False
Calling the summary again, you'll see that the number of trainable parameters has decreased.
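For the nested case the question actually asks about, reach into the sub-model with get_layer first. A sketch, assuming a hypothetical outer model that nests VGG19 under its default layer name 'vgg19':
from keras.applications.vgg19 import VGG19
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical outer model that nests VGG19 as a single "sub-model" layer.
vgg = VGG19(weights='imagenet', include_top=False, pooling='avg',
            input_shape=(224, 224, 3))
outer = Sequential([vgg, Dense(10, activation='softmax')])

# Reach into the nested sub-model by name, then freeze inside it.
sub_model = outer.get_layer('vgg19')
for layer in sub_model.layers[:-5]:  # all but the last 5 layers
    layer.trainable = False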

Naming layers in keras

I am using a pre-trained Keras model (a convolutional network) and I am retraining it on my dataset.
Now, I need to get the output of some layers to visualize the gradient activations. I just found out that every trained model names its layers differently: for example, the input layer in one model is input_7 (InputLayer) and in another model it is input_5 (InputLayer).
Do you know how to prevent this behavior? How can I keep the naming consistent without manually naming all the layers, given that I have more than 53 convolutional layers?
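One option (a sketch, assuming tf.keras) is to reset the Keras session before building each model; this restarts the global counters behind the automatic names (input_1, conv2d_1, ...), so rebuilding the same architecture reproduces the same layer names:
import tensorflow as tf

def build_model():
    # Hypothetical builder; layers get automatic names (conv2d, conv2d_1, ...).
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, input_shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(64, 3),
    ])

# Without clear_session, a second build would continue the counters
# (conv2d_2, conv2d_3, ...). Resetting first makes the names reproducible.
tf.keras.backend.clear_session()
model_a = build_model()

tf.keras.backend.clear_session()
model_b = build_model()  # same layer names as model_a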
