I am using a pretrained ResNet50 (from the tensorflow.keras.applications package) and fine-tuning it for multilabel classification (with 2 classes), and I'd like to extract saliency maps from the fine-tuned model.
To build the classifier, I add 2 dense layers on top of the ResNet model, creating a new sequential model as follows:
self.model = tf.keras.Sequential([
    resnet50,
    layers.Dense(1024, activation='relu', name='hidden_layer'),
    layers.Dense(2, activation='sigmoid', name='output')
])
My problem is that the resnet50 then shows up as a single "layer" of the new model, so its internal layers are no longer accessible: the model summary only contains 3 layers. I'd like to know if there is a way to add layers to a functional model without creating a sequential model, so that each layer of the ResNet model remains accessible.
Thank you in advance,
To access the layers of the resnet50 model, you first have to access the resnet50 layer and then, from that, access the convolution layer for which you want to create the saliency map.
self.model.get_layer("resnet50").get_layer("conv5_block2_3_conv")
I have been trying to train a bidirectional LSTM using TensorFlow v2 keras for text classification. Below is the architecture:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, GlobalAveragePooling1D, Dense

model1 = Sequential()
model1.add(Embedding(vocab, 128, input_length=maxlength))
model1.add(Bidirectional(LSTM(32, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))
model1.add(Bidirectional(LSTM(16, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))
model1.add(GlobalAveragePooling1D())
model1.add(Dense(5, activation='softmax'))
model1.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model1.summary()
Now, it is the summary details that confuse me.
My question is about the output shapes of the BiLSTM layers: why are they (283, 64) and (283, 32) when the number of units is 32 and 16 respectively for the two layers? Here, maxlength=283 and vocab=19479.
I believe the explanation for this result is the bidirectional nature of the LSTM layers you have added to your network: the output size of each layer is doubled because the layer also learns the sequence backwards. I hope this makes sense; if you have any questions, you can ask me in the comments.
This is because of Bidirectional. If you remove it, you'll see that the output shapes are (283, 32) and (283, 16). Bidirectional runs a second, backward copy of the LSTM and, by default, concatenates the two outputs, which doubles the last dimension.
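To illustrate (a minimal standalone sketch, not from either answer): with the default merge_mode='concat' the forward and backward outputs are concatenated, while merge_mode='sum' keeps the original width.
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Bidirectional

x = Input(shape=(283, 128))

# Default merge_mode='concat': 32 forward + 32 backward units = 64
concat = Bidirectional(LSTM(32, return_sequences=True))(x)
print(concat.shape)  # (None, 283, 64)

# merge_mode='sum' adds the two directions, so the width stays at 32
summed = Bidirectional(LSTM(32, return_sequences=True), merge_mode='sum')(x)
print(summed.shape)  # (None, 283, 32)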
I have a pre-trained Fasttext model and I want to embed it in Keras.
model = Sequential()
model.add(Embedding(MAX_NB_WORDS,
                    EMBEDDING_DIM,
                    input_length=X.shape[1],
                    weights=[embedding_matrix],
                    trainable=False))
But it didn't work.
I found that many people have the same problem embedding a pre-trained model in Keras, and none of them seem to have found a solution.
It seems like weights and embeddings_initializer are deprecated.
Is there any alternative method to solve the problem?
Thanks in advance
The weights parameter is deprecated in the Keras Embedding layer.
The new version of the embedding layer looks like this:
from tensorflow.keras.initializers import Constant

embedding_layer = Embedding(num_words,
                            EMBEDDING_DIM,
                            embeddings_initializer=Constant(embedding_matrix),
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)
You can find the latest version of the Embedding layer details here - Keras Embedding Layer
You can find an example of using pretrained word embeddings here - Pretrained Word Embedding
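For completeness, here is a minimal sketch of building embedding_matrix from a pre-trained FastText model before passing it to the layer above. It assumes a gensim FastText model file ('fasttext.model') and a fitted Keras Tokenizer named tokenizer (both hypothetical); adjust the names and constants (MAX_NB_WORDS, EMBEDDING_DIM, MAX_SEQUENCE_LENGTH) to your setup.
import numpy as np
from gensim.models import FastText
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Embedding

ft_model = FastText.load('fasttext.model')  # hypothetical path

num_words = min(MAX_NB_WORDS, len(tokenizer.word_index) + 1)
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in tokenizer.word_index.items():
    if i < num_words:
        # FastText builds vectors for out-of-vocabulary words from char n-grams
        embedding_matrix[i] = ft_model.wv[word]

embedding_layer = Embedding(num_words,
                            EMBEDDING_DIM,
                            embeddings_initializer=Constant(embedding_matrix),
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)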
I would like to add attention to a trained image classification CNN model. For example, there are 30 classes and, with the Keras CNN, I obtain the predicted class for each image. However, I also want to visualize the important features/locations behind the prediction, so I want to add soft attention after the FC layer. I tried to read "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" to obtain similar results, but I could not understand how the authors implemented it, because my problem is not an image captioning or text seq2seq problem.
I have an image classification CNN and would like to extract the features and feed them into an LSTM to visualize soft attention, but I get stuck every time.
The steps I took:
Load CNN model (I already trained the CNN earlier for predictions)
Extract features from a single image (however, the LSTM will check the same image with some removed patches in the image)
I get stuck at the steps below:
Create LSTM with soft attention
Obtain a single output
I am using Keras with the TensorFlow backend. CNN features are extracted using ResNet50. The images are 224x224 and the FC layer has 2048 units as its output shape.
# Extract CNN features:
base_model = load_model(weight_file, custom_objects={'custom_mae': custom_mae})
last_conv_layer = base_model.get_layer("global_average_pooling2d_3")
cnn_model = Model(inputs=base_model.input, outputs=last_conv_layer.output)
cnn_model.trainable = False
bottleneck_features_train_v2 = cnn_model.predict(train_gen.images)
#Create LSTM:
seq_input = Input(shape=(1, 224, 224, 3 ))
encoded_frame = TimeDistributed(cnn_model)(seq_input)
encoded_vid = LSTM(2048)(encoded_frame)
lstm = Dropout(0.5)(encoded_vid)
#Add soft attention
attention = Dense(1, activation='tanh')(lstm)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)
#output 101 classes
predictions = Dense(101, activation='softmax', name='pred_age')(attention)
What I expect is to feed the image features from the last FC layer into the LSTM, add soft attention to train the attention weights, obtain a class from the output, and visualize the soft attention to see where the system is looking when making the prediction (similar to the soft attention visualization in the paper).
As I am new to the attention mechanism, I did a lot of research but could not find a solution to, or an understanding of, my problem. I would like to know whether I am doing it right.
I would like to follow the Convolutional Neural Net (CNN) approach here. However, this code on GitHub uses PyTorch, whereas I am using Keras.
I want to reproduce boxes 6, 7 and 8, where pre-trained weights from VGG-16 on ImageNet are downloaded and used to make the CNN converge faster.
In particular, there is a portion (box 8) where weights from VGG-16 that have no counterpart in SegNet (the CNN model) are downloaded and skipped. In my work, I am using a CNN model called U-Net instead of SegNet. The U-Net Keras code that I am using can be found here.
I am new to Keras and would appreciate any insight, in Keras code, into how I can go about downloading and skipping the VGG weights that have no counterpart in my U-Net model.
The technique you are asking about is called "transfer learning": a model pre-trained on a different dataset is used as part of your model, as a starting point for faster convergence. The intuition behind it is simple: we assume that, after training on a dataset as large and rich as ImageNet, the model's convolution kernels will have learned useful representations.
In your specific case, you want to stack VGG16 weights at the bottom and deconvolution blocks on top. Since you pointed out that you are new to Keras, this answer is organized as a step-by-step tutorial and provides small snippets for you to use in your own code.
Loading weights
In the PyTorch code you linked to above, the model is first defined and only then are the weights copied. I find this approach redundant, as it requires a lot of unnecessary code. Here, we will load VGG16 first and then stack the other layers on top.
from keras import applications
from keras.layers import Input
# Loading without top layers, since you only need convolution. Note that by not
# specifying the shape of top layers, the input tensor shape is (None, None, 3),
# so you can use them for any size of images.
vgg_model = applications.VGG16(weights='imagenet', include_top=False)
# If you want to specify input tensor shape, e.g. 256x256 with 3 channels:
input_tensor = Input(shape=(256, 256, 3))
vgg_model = applications.VGG16(weights='imagenet',
                               include_top=False,
                               input_tensor=input_tensor)
# To see the models' architecture and layer names, run the following
vgg_model.summary()
Defining the U-Net computation graph with VGG16 on the bottom
As pointed out in the previous paragraph, you do not need to define a model and copy the weights over. Just stack the other layers on top of vgg_model:
# Import the layers to be used in U-Net
from keras.layers import ...
# From the U-Net code you provided
def make_conv_block(nb_filters, input_tensor, block):
    ...
# Creating dictionary that maps layer names to the layers
layers = dict([(layer.name, layer) for layer in vgg_model.layers])
# Getting output tensor of the last VGG layer that we want to include.
# I don't know much about U-Net, but according to the code you provided,
# you don't need the last pooling layer, right?
vgg_top = layers['block5_conv3'].output
# Now getting bottom layers for multi-scale skip-layers
block1_conv2 = layers['block1_conv2'].output
block2_conv2 = layers['block2_conv2'].output
block3_conv3 = layers['block3_conv3'].output
block4_conv3 = layers['block4_conv3'].output
# Stacking the remaining layers of U-Net on top of it (modified from
# the U-Net code you provided)
up6 = Concatenate()([UpSampling2D(size=(2, 2))(vgg_top), block4_conv3])
conv6 = make_conv_block(256, up6, 6)
up7 = Concatenate()([UpSampling2D(size=(2, 2))(conv6), block3_conv3])
conv7 = make_conv_block(128, up7, 7)
up8 = Concatenate()([UpSampling2D(size=(2, 2))(conv7), block2_conv2])
conv8 = make_conv_block(64, up8, 8)
up9 = Concatenate()([UpSampling2D(size=(2, 2))(conv8), block1_conv2])
conv9 = make_conv_block(32, up9, 9)
conv10 = Conv2D(nb_labels, (1, 1), name='conv_10_1')(conv9)
x = Reshape((nb_rows * nb_cols, nb_labels))(conv10)
x = Activation('softmax')(x)
outputs = Reshape((nb_rows, nb_cols, nb_labels))(x)
I want to emphasize that what we've done in this paragraph is just defining the computation graph for U-Net. This code is written specifically for VGG16, but you can modify it for other architectures as you wish.
Creating a model
After the previous step, we've got a computational graph (I assume that you use the Tensorflow backend for Keras; if you're using Theano, I recommend switching to Tensorflow, since that framework has reached a state of maturity). Now, we need to do the following things:
Create a model on top of this computation graph
Freeze the bottom layers, since you don't want to wreck your pre-trained weights
# Creating new model. Please note that this is NOT a Sequential() model
# as in commonly found tutorials on the internet.
from keras.models import Model
custom_model = Model(inputs=vgg_model.input, outputs=outputs)
# Make sure that the pre-trained bottom layers are not trainable.
# Here, I freeze all the layers of VGG16 (layers 0-18, including the
# pooling ones).
for layer in custom_model.layers[:19]:
layer.trainable = False
# Do not forget to compile it before training
custom_model.compile(loss='your_loss',
optimizer='your_optimizer',
metrics=['your_metrics'])
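To make the placeholders concrete, a hypothetical compile call for a segmentation setup with one-hot pixel labels (my example, not from the original answer) could look like this; pick the loss, optimizer and metrics that actually match your labels:
custom_model.compile(loss='categorical_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])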
"I got confused"
Assuming that you're new to Keras and to Deep Learning in general (as you mentioned in your question), I recommend the following articles for further understanding the process of fine-tuning and transfer learning in Keras:
How CNNs see the world - a great short article that will give you intuitive understanding of the dirty magic behind Transfer Learning.
Building powerful image classification models using very little data - This one will give you more insight on how to adjust learning rates and "release" the frozen layers.
When you're learning a framework, documentation is your best friend. Fortunately, Keras has incredible documentation.
Q&A
The deconvolution blocks we put on top of VGG are from the U-Net architecture (i.e. up6 to conv10)? Please confirm.
Yes, it's the same as here, just with different names of the skip-connection layers (e.g. block1_conv2 instead of conv1)
We leave out the conv layers (i.e., conv1 to conv5). Can you please share with me as to why this is so?
We don't leave out or throw away any layers from the VGG network. The VGG16 network architecture and the bottom part of U-Net (up to conv5) are very similar. In fact, both are made of 5 blocks of the following format:
+-----------------+-------------------+
| VGG conv blocks | U-Net conv blocks |
+-----------------+-------------------+
| blockX_conv1 | convN |
| ... | poolN |
| blockX_convN | |
| blockX_pool | |
+-----------------+-------------------+
Here is a better visualization. So, the only difference between VGG16 and the bottom part of U-Net is that each block of VGG16 contains multiple convolution layers instead of one. That's why the alternative to connecting conv3 to conv6 is connecting block3_conv3 to conv6. The U-Net architecture remains the same, just with more convolution layers on the bottom.
Is there any way to incorporate the max pooling in the conv layers? (In your opinion, what are we doing here by leaving them out, and would you say it is insignificant?)
We don't leave them out. The only pooling layer that I threw away is block5_pool (the last layer in the bottom part of VGG16), because in the original U-Net (refer to the code) the last convolution block in the bottom part is not followed by a pooling layer (we have conv5 but no pool5). I kept all the other layers of VGG16.
We see max pooling being used in the convolution blocks. Would we also simply drop these pooling layers (as we are doing here with U-Net) if we wanted to combine SegNet with VGG?
As I explained in the previous question, we are not dropping any pooling layers.
However, you would need to stack a different type of pooling layer instead of the simple MaxPooling2D used in the default VGG16, because SegNet preserves max indices. This can be achieved with tf.nn.max_pool_with_argmax and the trick of replacing the middle layers of a Keras model (I won't cover the details in this answer to keep it clean). The replacement is harmless and doesn't require re-training, because pooling layers don't contain any trained weights.
The U-Net from here is different from the one I am using; can you tell me what the impact of such a difference between the two is?
It is a shallower U-Net. The one in your original question has 5 convolution blocks on the bottom (conv1 - conv5), while the latter has only 3. Choose how many blocks you need depending on the data (e.g. for simple data such as cells you might want to use only 2-3 blocks, while grey matter or tissue segmentation might require 5 blocks for better quality). See this link to get an insight into what convolution kernels "see".
Also, what do you think about the VGGSegnet from here? Does it use the trick of the middle layers you mentioned in the Q&A? And is it equivalent to the PyTorch code I initially posted?
Interesting. It is an incorrect implementation, and is not equivalent to the Pytorch code you posted. I have opened an issue in that repository.
Final question: is it always a rule in transfer learning to put the pretrained model (i.e., the model with pretrained weights) at the bottom?
Generally it is. Think of the convolution kernels as "features": the first layer detects small edges and colors. The following layers combine those edges and colors into more complicated detections, like "yellow line" or "blue circle". Then the upper convolution layers detect more abstract shapes such as "eye" or "nose" based on the detections of the lower layers. So replacing the bottom layers (while the upper layers depend on the bottom representation) is illogical.
A solution sketch for this would look like the following:
Initialize VGG-16 and load the ImageNet weights by using the appropriate weights='imagenet' flag:
vgg_16 = keras.applications.vgg16.VGG16(weights='imagenet')
Initialize your model:
model = Model() # or Sequential()
... # Define and compile your model
For each layer that you want to copy over:
i_vgg = ... # Index of the layer you want to copy
i_mod = ... # Index of the corresponding layer in your model
weights = vgg_16.layers[i_vgg].get_weights()
model.layers[i_mod].set_weights(weights)
If you don't want to spend time to find out the index of each layer, you can assign a name to the relevant layers using the name='some_name' parameter in the layer constructor and then access the weights as follows:
# Build name -> layer lookups for both models
vgg_dict = dict([(layer.name, layer) for layer in vgg_16.layers])
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# Copy e.g. the first VGG conv block into the correspondingly named layer
weights = vgg_dict['block1_conv1'].get_weights()
layer_dict['some_name'].set_weights(weights)
Sources:
Loading VGG
Getting weights from layer (FChollet is the creator of Keras)
Getting and setting weights
Cheers