When creating a Sequential model in Keras, I understand you provide the input shape in the first layer. Does this input shape then make an implicit input layer?
For example, the model below explicitly specifies 2 Dense layers, but is this actually a model with 3 layers consisting of one input layer implied by the input shape, one hidden dense layer with 32 neurons, and then one output layer with 10 possible outputs?
model = Sequential([
Dense(32, input_shape=(784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),
])
Well, there is indeed an implicit input layer, i.e. your model is an example of a "good old" neural net with three layers - input, hidden, and output. This is more explicitly visible in the Keras Functional API (check the example in the docs), where your model would be written as:
inputs = Input(shape=(784,)) # input layer
x = Dense(32, activation='relu')(inputs) # hidden layer
outputs = Dense(10, activation='softmax')(x) # output layer
model = Model(inputs, outputs)
Actually, this implicit input layer is the reason why you have to include an input_shape argument only in the first (explicit) layer of the model in the Sequential API - in subsequent layers, the input shape is inferred from the output of the previous ones (see the comments in the source code of core.py).
You may also find the documentation on tf.contrib.keras.layers.Input enlightening.
It depends on your perspective :-)
Rewriting your code in line with more recent Keras tutorial examples, you would probably use:
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))
...which makes it much more explicit that you only have 2 Keras layers. And this is exactly what you do have (in Keras, at least) because the "input layer" is not really a (Keras) layer at all: it's only a place to store a tensor, so it may as well be a tensor itself.
Each Keras layer is a transformation that outputs a tensor, possibly of a different size/shape to the input. So while there are 3 identifiable tensors here (input, outputs of the two layers), there are only 2 transformations involved corresponding to the 2 Keras layers.
On the other hand, graphically, you might represent this network with 3 (graphical) layers of nodes and two sets of lines connecting them. Graphically, it's a 3-layer network. But "layers" in this graphical notation are bunches of circles that sit on a page doing nothing, whereas layers in Keras transform tensors and do actual work for you. Personally, I would get used to the Keras perspective :-)
Note finally that for fun and/or simplicity, I substituted input_dim=784 for input_shape=(784,) to avoid the syntax that Python uses to both confuse newcomers and create a 1-D tuple: (<value>,).
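If you want to convince yourself, you can inspect model.layers directly. Here is a minimal sketch (assuming the standalone keras imports; with tf.keras, adjust the imports accordingly):
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))
print(len(model.layers))  # 2 -- only the two Dense layers count as Keras layers
print(model.input_shape)  # (None, 784) -- the implicit "input layer" is just this tensor specification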
Related
I designed a neural network model with a large number of outputs predicted by a softmax function. However, I want to categorize all the outputs into 5 outputs without modifying the architecture of the other layers. The model performs well in the first case, but when I decrease the number of outputs it loses accuracy and generalizes badly. My question is: is there a method to make my model perform well even with just 5 outputs? For example: adding a dropout layer before the output layer, using another activation function, etc.
If it is a plain neural network, then yes: definitely use the ReLU activation function in the hidden layers and add a dropout layer after each hidden layer. You can also normalize your data before feeding it to the network.
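As a minimal sketch of that suggestion (the layer sizes, dropout rate, and the simple standardization step are assumptions for illustration, not taken from your model):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Normalize the inputs before feeding them to the network
X = np.random.rand(1000, 20)  # placeholder data with 20 input features
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=X.shape[1]))
model.add(Dropout(0.5))  # dropout after each hidden layer
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))  # the 5 output classes
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])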
I have been trying to train a bidirectional LSTM using TensorFlow v2 keras for text classification. Below is the architecture:
model1 = Sequential()
model1.add(Embedding(vocab, 128,input_length=maxlength))
model1.add(Bidirectional(LSTM(32,dropout=0.2,recurrent_dropout=0.2,return_sequences=True)))
model1.add(Bidirectional(LSTM(16,dropout=0.2,recurrent_dropout=0.2,return_sequences=True)))
model1.add(GlobalAveragePooling1D())
model1.add(Dense(5, activation='softmax'))
model1.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model1.summary()
Now, it is the summary output that confuses me.
My doubt is about the output shapes of the BiLSTM layers: how are they (283, 64) and (283, 32) when the number of units used is 32 and 16 respectively for the two layers? Here, maxlength=283 and vocab=19479.
I believe the explanation for this result is the bidirectional nature of the LSTM layers you added to your network: the output size of each layer is doubled because the layer also learns the sequence backwards. If you have any questions, you can ask me in the comments.
This is because of Bidirectional. If you remove it, you'll see that the output shapes are (283, 32) and (283, 16). Bidirectional runs a second copy of the wrapped layer over the reversed sequence and, by default, concatenates the forward and backward outputs, which doubles the last dimension.
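A small sketch of that effect (assuming TF2-era tf.keras, where Embedding still accepts input_length as in your code; the default merge_mode='concat' is what doubles the feature dimension):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Bidirectional
maxlength, vocab = 283, 19479
plain = Sequential([
    Embedding(vocab, 128, input_length=maxlength),
    LSTM(32, return_sequences=True),
])
bidir = Sequential([
    Embedding(vocab, 128, input_length=maxlength),
    Bidirectional(LSTM(32, return_sequences=True)),
])
print(plain.output_shape)  # (None, 283, 32)
print(bidir.output_shape)  # (None, 283, 64) -- forward and backward outputs concatenated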
I have trained a fully convolutional neural network with Keras. I have used the Functional API and have defined the input layer as Input(shape=(128,128,3)), corresponding to the size of the images in my training set.
However, I want to use the trained model on images of variable sizes (which should be ok because the network is fully convolutional). To do this, I need to change my input layer to Input(shape=(None,None,3)). The obvious way to solve the problem would have been to train my model directly with an input shape of (None,None,3) but I use a custom loss function where I need to specify the size of my training images.
I have tried to define a new input layer and assign it to my model like this:
from keras.engine import InputLayer
input_layer = InputLayer(input_shape=(None, None, 3), name="input")
model.layers[0] = input_layer
This actually changes the size of the input layers accordingly but the following layers still expect (128,128,filters) inputs.
Is there a way to change all of the input shapes at once?
Create a new model, exactly the same except for the new input shape, and transfer the weights:
newModel.set_weights(oldModel.get_weights())
If anything goes wrong, then the model might not be fully convolutional (e.g. it contains a Flatten layer).
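A sketch of that idea, assuming a hypothetical build_fcn(input_shape) helper that builds your (unchanged) fully convolutional architecture for a given input shape:
from keras.layers import Input, Conv2D
from keras.models import Model
def build_fcn(input_shape):
    # Stand-in for your real architecture: the layers must be identical in both
    # models, only the Input shape differs.
    inputs = Input(shape=input_shape)
    x = Conv2D(16, (3, 3), padding='same', activation='relu')(inputs)
    outputs = Conv2D(3, (1, 1), activation='sigmoid')(x)
    return Model(inputs, outputs)
old_model = build_fcn((128, 128, 3))
# ... train old_model here, with the loss that needs the fixed image size ...
new_model = build_fcn((None, None, 3))  # same layers, variable-size input
new_model.set_weights(old_model.get_weights())  # works because conv weights don't depend on the spatial size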
I am slightly confused on the different ways to apply dropout to my Sequential model in Keras.
My model is the following:
model = Sequential()
model.add(Embedding(input_dim=64,output_dim=64, input_length=498))
model.add(LSTM(units=100,dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Assume that I added an extra Dropout layer after the Embedding layer in the below manner:
model = Sequential()
model.add(Embedding(input_dim=64,output_dim=64, input_length=498))
model.add(Dropout(0.25))
model.add(LSTM(units=100,dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Will this make any difference, given that I already specified dropout=0.5 in the LSTM layer itself, or am I getting this all wrong?
When you add a Dropout layer, you're adding dropout to the output of the previous layer only; in your case, you are adding dropout to the output of your embedding layer.
An LSTM cell is more complex than a single-layer neural network: when you specify dropout in the LSTM cell, you are actually applying dropout to 4 different sub-network operations inside the cell.
Below is a visualization of an LSTM cell from Colah's blog on LSTMs (the best visualization of LSTMs/RNNs out there, http://colah.github.io/posts/2015-08-Understanding-LSTMs/). The yellow boxes represent the 4 fully connected network operations (each with its own weights) that occur under the hood of the LSTM - this is neatly wrapped up in the LSTM cell wrapper, though it's not really so hard to code by hand.
When you specify dropout=0.5 in the LSTM cell, what you are doing under the hood is applying dropout to each of these 4 neural network operations. This is conceptually like adding a Dropout(0.5) operation 4 times, once to the input of each of the 4 yellow blocks you see in the diagram, within the internals of the LSTM cell.
I hope that short discussion makes it clearer how the dropout applied inside the LSTM wrapper, which effectively acts on 4 sub-networks within the LSTM, differs from the dropout you applied once in the sequence after your embedding layer. And to answer your question directly: yes, these two dropout definitions are very much different.
Note, as a further example to help elucidate the point: if you were to define a simple 5-layer fully connected neural network, you would need to define dropout after each layer, not once. model.add(Dropout(0.25)) is not some kind of global setting; it adds a dropout operation to a pipeline of operations. If you have 5 layers, you need to add 5 dropout operations, as sketched below.
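A minimal sketch of that point (the layer sizes and rates are arbitrary assumptions):
from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential()
model.add(Dense(128, activation='relu', input_dim=100))
model.add(Dropout(0.25))  # one explicit Dropout per layer...
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(1, activation='sigmoid'))  # ...there is no global dropout switch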
I would like to follow the Convolutional Neural Network (CNN) approach here. However, the code on GitHub uses PyTorch, whereas I am using Keras.
I want to reproduce boxes 6, 7 and 8, where pre-trained VGG-16 weights from ImageNet are downloaded and used to make the CNN converge faster.
In particular, there is a portion (box 8) where VGG-16 weights are downloaded and those that have no counterpart in SegNet (the CNN model) are skipped. In my work, I am using a CNN model called U-Net instead of SegNet. The U-Net Keras code that I am using can be found here.
I am new to Keras and would appreciate any insight, in Keras code, into how I can go about downloading the VGG weights and skipping those that have no counterpart in my U-Net model.
The technique you are addressing is called "Transfer Learning": a model pre-trained on a different dataset is used as part of your model, as a starting point for better convergence. The intuition behind it is simple: we assume that after training on such a large and rich dataset as ImageNet, the convolution kernels of the model will learn useful representations.
In your specific case, you want to put the pre-trained VGG16 convolution blocks at the bottom and stack the deconvolution blocks on top. I will go step by step, as you pointed out that you are new to Keras; this answer is organized as a short tutorial with small snippets for you to use in your own code.
Loading weights
In the PyTorch code you linked to above, the model is first defined, and only then are the weights copied. I find this approach cumbersome, as it involves a lot of unnecessary code. Here, we will load VGG16 first and then stack the other layers on top.
from keras import applications
from keras.layers import Input
# Loading without the top (fully connected) layers, since you only need the
# convolutional part. Note that by not specifying an input shape, the input
# tensor shape is (None, None, 3), so you can use the model for images of any size.
vgg_model = applications.VGG16(weights='imagenet', include_top=False)
# If you want to specify input tensor shape, e.g. 256x256 with 3 channels:
input_tensor = Input(shape=(256, 256, 3))
vgg_model = applications.VGG16(weights='imagenet',
include_top=False,
input_tensor=input_tensor)
# To see the models' architecture and layer names, run the following
vgg_model.summary()
Defining the U-Net computation graph with VGG16 on the bottom
As pointed out in the previous paragraph, you do not need to define a model and copy the weights over. Just stack the other layers on top of vgg_model:
# Import the layers to be used in U-Net
from keras.layers import ...
# From the U-Net code you provided
def make_conv_block(nb_filters, input_tensor, block):
...
# Creating dictionary that maps layer names to the layers
layers = dict([(layer.name, layer) for layer in vgg_model.layers])
# Getting output tensor of the last VGG layer that we want to include.
# I don't know much about U-Net, but according to the code you provided,
# you don't need the last pooling layer, right?
vgg_top = layers['block5_conv3'].output
# Now getting bottom layers for multi-scale skip-layers
block1_conv2 = layers['block1_conv2'].output
block2_conv2 = layers['block2_conv2'].output
block3_conv3 = layers['block3_conv3'].output
block4_conv3 = layers['block4_conv3'].output
# Stacking the remaining layers of U-Net on top of it (modified from
# the U-Net code you provided)
up6 = Concatenate()([UpSampling2D(size=(2, 2))(vgg_top), block4_conv3])
conv6 = make_conv_block(256, up6, 6)
up7 = Concatenate()([UpSampling2D(size=(2, 2))(conv6), block3_conv3])
conv7 = make_conv_block(128, up7, 7)
up8 = Concatenate()([UpSampling2D(size=(2, 2))(conv7), block2_conv2])
conv8 = make_conv_block(64, up8, 8)
up9 = Concatenate()([UpSampling2D(size=(2, 2))(conv8), block1_conv2])
conv9 = make_conv_block(32, up9, 9)
conv10 = Conv2D(nb_labels, (1, 1), name='conv_10_1')(conv9)
x = Reshape((nb_rows * nb_cols, nb_labels))(conv10)
x = Activation('softmax')(x)
outputs = Reshape((nb_rows, nb_cols, nb_labels))(x)
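For completeness, here is a plausible stand-in for make_conv_block. This is only a hypothetical sketch of a typical U-Net block with two 3x3 convolutions; the real definition lives in the U-Net code linked in the question and may differ:
from keras.layers import Conv2D, Activation
def make_conv_block(nb_filters, input_tensor, block):
    # Two 3x3 same-padded convolutions with ReLU, named after the block index
    stage = 'conv_{}'.format(block)
    x = Conv2D(nb_filters, (3, 3), padding='same', name=stage + '_1')(input_tensor)
    x = Activation('relu')(x)
    x = Conv2D(nb_filters, (3, 3), padding='same', name=stage + '_2')(x)
    x = Activation('relu')(x)
    return x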
I want to emphasize that what we've done in this paragraph is just defining the computation graph for U-Net. This code is written specifically for VGG16, but you can modify it for other architectures as you wish.
Creating a model
After the previous step, we've got a computational graph (I assume that you are using the TensorFlow backend for Keras; if you're using Theano, I recommend switching to TensorFlow, as it has reached a state of maturity). Now, we need to do the following things:
Create a model on top of this computation graph
Freeze the bottom layers, since you don't want to wreck your pre-trained weights
# Creating new model. Please note that this is NOT a Sequential() model
# as in commonly found tutorials on the internet.
from keras.models import Model
custom_model = Model(inputs=vgg_model.input, outputs=outputs)
# Make sure that the pre-trained bottom layers are not trainable.
# Here, I freeze all the layers of VGG16 (layers 0-18, including the
# pooling ones).
for layer in custom_model.layers[:19]:
layer.trainable = False
# Do not forget to compile it before training
custom_model.compile(loss='your_loss',
optimizer='your_optimizer',
metrics=['your_metrics'])
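For example, a common (assumed) choice for a multi-class segmentation target like this one would be the following; substitute whatever loss, optimizer and metrics your task actually needs:
custom_model.compile(loss='categorical_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])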
"I got confused"
Assuming that you're new to Keras and to Deep Learning in general (as you admitted in your question), I recommend the following articles to read to further understand the process of Fine Tuning and Transfer Learning on Keras:
How CNNs see the world - a great short article that will give you intuitive understanding of the dirty magic behind Transfer Learning.
Building powerful image classification models using very little data - This one will give you more insight on how to adjust learning rates and "release" the frozen layers.
When you're learning a framework, documentation is your best friend. Fortunately, Keras has incredible documentation.
Q&A
The deconvolution blocks we put on top of VGG are from the U-Net architecture (i.e. up6 to conv10)? Please confirm.
Yes, it's the same as here, just with different names of the skip-connection layers (e.g. block1_conv2 instead of conv1)
We leave out the conv layers (i.e., conv1 to conv5). Can you please share why this is so?
We don't leave out or throw away any layers from the VGG network. The VGG16 architecture and the bottom part of U-Net (up to conv5) are very similar. In fact, both are made of 5 blocks of the following format:
+-----------------+-------------------+
| VGG conv blocks | U-Net conv blocks |
+-----------------+-------------------+
| blockX_conv1    | convN             |
| ...             | poolN             |
| blockX_convN    |                   |
| blockX_pool     |                   |
+-----------------+-------------------+
Here is a better visualization. So, the only difference between VGG16 and the bottom part of U-Net is that each block of VGG16 contains multiple convolution layers instead of one. That's why the alternative to connecting conv3 to conv6 is connecting block3_conv3 to conv6. The U-Net architecture remains the same, just with more convolution layers on the bottom.
Is there any way to incorporate the max pooling in the conv layers? (In your opinion, what are we doing here by leaving them out, and would you say it is insignificant?)
We don't leave them out. The only pooling layer that I threw away is block5_pool (the last layer in the bottom part of VGG16), because in the original U-Net (refer to the code) the last convolution block in the bottom part is not followed by a pooling layer (we have conv5 but no pool5). Apart from that, I kept all the layers of VGG16.
We see Maxpooling being used on the convolution blocks. Would we also just simply drop these pooling layers (as we are doing here with Unet) if we wanted to combine Segnet with VGG?
As I explained in the question above, we are not dropping any pooling layers.
However, you would need to stack a different type of pooling layer instead of the simple MaxPooling2D used in the default VGG16, because SegNet preserves max indices. This can be achieved with tf.nn.max_pool_with_argmax and the trick of replacing the middle layers of a Keras model (I won't cover the details in this answer, to keep it clean). The replacement is harmless and doesn't require re-training, because pooling layers don't contain any trained weights.
The U-Net from here is different from what I am using; can you tell me what the impact of such a difference between the two is?
It is a shallower U-Net. The one in your original question has 5 convolution blocks on the bottom (conv1 - conv5), while the latter has only 3. Choose how many blocks you need depending on the data (e.g. for simple data such as cells you might want to use only 2-3 blocks, while gray matter or tissue segmentation might require 5 blocks for better quality). See this link for an insight into what convolution kernels "see".
Also, what do you think about the VGGSegnet from here? Does it use the trick of the middle layers you mentioned in the Q&A? And is it the equivalent of the PyTorch code I initially posted?
Interesting. It is an incorrect implementation, and it is not equivalent to the PyTorch code you posted. I have opened an issue in that repository.
Final question: is it always a rule in Transfer Learning to put the pre-trained model (i.e., the model with pre-trained weights) at the bottom?
Generally it is. Think of the convolution kernels as "features": the first layer detects small edges and colors. The following layers combine those edges and colors into more complicated detections, like "yellow lines" or a "blue circle". Then the upper convolution layers detect more abstract shapes such as "eyes", "nose", etc., based on the detections of the lower layers. So replacing the bottom layers (while the upper layers depend on the bottom representation) is illogical.
A solution sketch for this would look like the following:
Initialize VGG-16 and load the ImageNet weights by using the appropriate weights='imagenet' flag:
vgg_16 = keras.applications.vgg16.VGG16(weights='imagenet')
Initialize your model:
model = Model() # or Sequential()
... # Define and compile your model
For each layer that you want to copy over:
i_vgg = ... # Index of the layer you want to copy
i_mod = ... # Index of the corresponding layer in your model
weights = vgg_16.layers[i_vgg].get_weights()
model.layers[i_mod].set_weights(weights)
If you don't want to spend time finding the index of each layer, you can assign a name to the relevant layers using the name='some_name' parameter in the layer constructor and then access them by name as follows:
vgg_dict = dict([(layer.name, layer) for layer in vgg_16.layers])
model_dict = dict([(layer.name, layer) for layer in model.layers])
weights = vgg_dict['block1_conv1'].get_weights()  # name of the VGG layer you want to copy
model_dict['some_name'].set_weights(weights)      # your correspondingly named layer
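Building on that, you can copy several layers at once by pairing up names, reusing the vgg_dict and model_dict defined just above (the name pairs below are hypothetical; use the VGG layer names you need and the names you gave your own layers):
name_pairs = [('block1_conv1', 'my_conv1'),
              ('block1_conv2', 'my_conv2')]  # hypothetical pairs, extend as needed
for vgg_name, model_name in name_pairs:
    model_dict[model_name].set_weights(vgg_dict[vgg_name].get_weights())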
Sources:
Loading VGG
Getting weights from layer (FChollet is the creator of Keras)
Getting and setting weights
Cheers