Activate dropout in a pre-trained VGG16 model - python

I'm using TensorFlow 2.0 with a pre-trained VGG16 model and want to activate dropout during prediction. So far I've tried the following, without success:
model = tf.keras.applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', is_training=True)
model = tf.keras.applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', dropout_rate=0.5)
However, none of these approaches worked. How can I enable dropout during the prediction phase?

The Keras VGG16 implementation does not contain a dropout layer by default, which is why neither of those keyword arguments exists. You would need to insert dropout layers into the model yourself.
Here is a post I found useful to solve this:
Add dropout layers between pretrained dense layers in keras
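In the same spirit, here is a minimal sketch of that approach: rebuild the graph layer by layer, inserting Dropout after VGG16's two fully connected layers (named fc1 and fc2 in the Keras implementation), and pass training=True so dropout stays active at prediction time. The 0.5 rate is an assumption; pick whatever rate you need.

import tensorflow as tf

base = tf.keras.applications.VGG16(input_shape=(224, 224, 3), weights='imagenet')

inputs = tf.keras.Input(shape=(224, 224, 3))
x = inputs
for layer in base.layers[1:]:            # skip the original InputLayer
    x = layer(x)
    if layer.name in ('fc1', 'fc2'):     # VGG16's fully connected layers
        # training=True keeps dropout active even during prediction
        x = tf.keras.layers.Dropout(0.5)(x, training=True)
model = tf.keras.Model(inputs, x)

# Repeated predictions on the same input now differ (Monte Carlo dropout):
preds = model(tf.random.uniform((1, 224, 224, 3)))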

Related

Add layers to a pretrained model without creating a sequential model

I am using a pretrained ResNet50 (from the tensorflow.keras.applications package) and fine-tuning it for multilabel classification (with 2 classes), and I'd like to extract saliency maps from the fine-tuned model.
To make a classifier, I add 2 dense layers to the ResNet model, creating a new sequential model as follows:
self.model = tf.keras.Sequential([
    resnet50,
    layers.Dense(1024, activation='relu', name='hidden_layer'),
    layers.Dense(2, activation='sigmoid', name='output')
])
but my problem is that resnet50 becomes a single "layer", as if its internal layers were no longer accessible: the model summary only shows 3 layers. I'd like to know if there is a way to add layers to a functional model without creating a sequential model, so that each layer of the ResNet model remains accessible.
Thank you in advance,
To access the layers of the resnet50 model, you have to first access the resnet50 layer and then, from that, access the convolution layer you want to create the saliency map for:
self.model.get_layer("resnet50").get_layer("conv5_block2_3_conv")
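Alternatively, here is a minimal sketch of the functional-API route the question asks about: wiring the dense head directly onto the ResNet output makes every ResNet layer a top-level layer of the new model, with no wrapping Sequential. The include_top and pooling settings here are assumptions; adjust them to your setup.

import tensorflow as tf

resnet50 = tf.keras.applications.ResNet50(include_top=False, pooling='avg',
                                          input_shape=(224, 224, 3),
                                          weights='imagenet')
x = tf.keras.layers.Dense(1024, activation='relu', name='hidden_layer')(resnet50.output)
outputs = tf.keras.layers.Dense(2, activation='sigmoid', name='output')(x)
model = tf.keras.Model(inputs=resnet50.input, outputs=outputs)

# Every ResNet layer is now directly reachable for saliency maps:
conv = model.get_layer('conv5_block2_3_conv')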

How to confirm Keras is loading resnet pretrained nets

I am currently trying to use a pretrained ResNet50 model in my TensorFlow program. When running the training script, I get no clear indication that it is actually using ResNet. Here is a snippet of the code in my training script where ResNet is used:
from tensorflow.keras.applications import ResNet50
base_model = base_model_fn(ResNet50)
final_model = build_model(base_model, num_classes)
model = Model(inputs=base_model.input, outputs=final_model)
When I run the code, it says it is creating directories for resnet and dumps tool data into them, but shouldn't it show a download bar while it fetches the pretrained weights? Where would I check to make sure it is using ResNet?
You could try it this way
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

img_shape = (224, 224, 3)  # set this to the desired size; the channel dimension is required
base_model = tf.keras.applications.ResNet50V2(include_top=False,
                                              input_shape=img_shape,
                                              pooling='max',
                                              weights='imagenet')
x = base_model.output
output = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
model.summary()
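As for confirming the download: Keras caches downloaded weights under ~/.keras/models/, so the progress bar only appears the first time a given weights file is fetched. A quick sanity check is to list that directory:

import os

cache_dir = os.path.expanduser('~/.keras/models')
# Expect a file along the lines of 'resnet50v2_weights_tf_dim_ordering_tf_kernels_notop.h5'
print(os.listdir(cache_dir))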

Fine-Tuning Pretrained BERT with CNN. How to Disable Masking

Has anyone used CNN in Keras to fine-tune a pre-trained BERT?
I have been trying to design this, but the pre-trained model comes with masks (I think in the embedding layer), and when the fine-tuning architecture is built on the output of one of the encoder layers, it gives this error: Layer conv1d_1 does not support masking, but was passed an input_mask
So far I have tried some suggested workarounds, such as using keras_trans_mask to remove the mask before the CNN and add it back later, but that leads to other errors too.
Is it possible to disable Masking in the pre-trained model or is there a workaround for it?
EDIT: This is the code I'm working with
inputs = model.inputs[:2]
layer_output = model.get_layer('Encoder-12-FeedForward-Norm').output
conv_layer = keras.layers.Conv1D(100, kernel_size=3, activation='relu',
                                 data_format='channels_first')(layer_output)
maxpool_layer = keras.layers.MaxPooling1D(pool_size=4)(conv_layer)
flat_layer = keras.layers.Flatten()(maxpool_layer)
outputs = keras.layers.Dense(units=3, activation='softmax')(flat_layer)
model = keras.models.Model(inputs, outputs)
model.compile(RAdam(learning_rate=LR), loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
So layer_output carries a mask, and Conv1D cannot accept one.
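One workaround (a sketch, not specific to keras-bert, using the same keras import as the snippet above) is a tiny pass-through layer that accepts the incoming mask but does not propagate it, so the Conv1D downstream never sees one:

class RemoveMask(keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.supports_masking = True   # accept the incoming mask without raising

    def compute_mask(self, inputs, mask=None):
        return None                    # stop the mask from propagating downstream

    def call(self, inputs, mask=None):
        return inputs

layer_output = RemoveMask()(layer_output)  # insert this before the Conv1D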

Issue with embedding pre-trained model in Keras

I have a pre-trained Fasttext model and I want to embed it in Keras.
model = Sequential()
model.add(Embedding(MAX_NB_WORDS,
                    EMBEDDING_DIM,
                    input_length=X.shape[1],
                    weights=[embedding_matrix],
                    trainable=False))
But it didn't work.
I found that lots of people have the same problem embedding a pre-trained model in Keras, and none of them seem to have found a solution.
It seems that weights has been deprecated in favour of embeddings_initializer.
Is there an alternative method to solve the problem?
Thanks in advance
The weights parameter is deprecated in Keras's Embedding layer. The new version of the Embedding layer looks like this (Constant comes from tensorflow.keras.initializers):
from tensorflow.keras.initializers import Constant

embedding_layer = Embedding(num_words,
                            EMBEDDING_DIM,
                            embeddings_initializer=Constant(embedding_matrix),
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)
You can find the latest Embedding layer documentation here - Keras Embedding Layer
You can find an example of using pretrained word embeddings here - Pretrained Word Embedding
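For completeness, here is a sketch of how embedding_matrix itself can be built from a FastText model. This assumes the official fasttext package, a .bin model file, and a word_index from a fitted Keras Tokenizer; all names here are placeholders for your own setup.

import numpy as np
import fasttext

ft = fasttext.load_model('your_model.bin')   # placeholder path to the FastText binary
EMBEDDING_DIM = ft.get_dimension()

# word_index is assumed to come from a fitted Keras Tokenizer;
# MAX_NB_WORDS caps the vocabulary as in the question.
num_words = min(MAX_NB_WORDS, len(word_index) + 1)
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in word_index.items():
    if i < num_words:
        embedding_matrix[i] = ft.get_word_vector(word)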

How to freeze specific layers of a model inside a model?

My Keras model is made up of multiple models. Each "sub-model" has multiple layers. How do I access the layers in a "sub-model" and set trainability / freeze specific layers?
I'll use an example of the VGG19 convolutional neural network in Keras, although it applies to any neural network architecture:
from keras.applications.vgg19 import VGG19
model = VGG19(weights='imagenet')
You can visualise the layers using:
model.summary()
The summary shows the number of trainable parameters in the network. To freeze certain layers, e.g. every layer except the last 5 in the network:
for layer in model.layers[:-5]:
    layer.trainable = False
Calling the summary again, you'll see that the number of trainable parameters has decreased.
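For the nested case the question actually asks about, the same idea applies one level down: a sub-model keeps its own .layers list, so fetch it by name and freeze inside it. A minimal sketch (the surrounding architecture here is an assumption):

import tensorflow as tf

sub = tf.keras.applications.VGG19(weights='imagenet', include_top=False,
                                  input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    sub,                                        # shows up as a single 'layer'
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

inner = model.get_layer('vgg19')                # reach into the sub-model by name
for layer in inner.layers[:-5]:                 # freeze all but its last 5 layers
    layer.trainable = False
model.summary()                                 # trainable-parameter count drops

# Remember to (re)compile the model after changing trainable flags.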
