I have a sequence of 5 images that I want to pass through a CNN sequentially. A single input will have size (5, width, height, channels). I want to pass each image in the sequence, in order, to a 2D CNN, concatenate all 5 outputs at some layer, and then feed the result to an LSTM. My model looks something like this:
from keras.models import Model
from keras.layers import Dense, Input, LSTM, Flatten, Conv2D, MaxPooling2D
# Feed images in sequential order here
inputs = Input(shape=(128, 128, 3))
x = Conv2D(16, 3, activation='relu')(inputs)
x = MaxPooling2D((2, 2))(x)
...
# Concatenate sequence outputs here
x = LSTM(8)(x)
x = Flatten()(x)
outputs = Dense(5, activation='sigmoid')(x)
model = Model(inputs=inputs, outputs=outputs)
Eventually I want to concatenate all 5 outputs together at some point in the network and feed them to an LSTM, but I am having trouble figuring out how to feed a sequence of images, in order, to a 2D convolutional layer. I have looked into 3D convolutional layers and the ConvLSTM2D layer, but I want to figure out how I can do it this way instead.
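To make the idea concrete, this is roughly the wiring I have in mind (a sketch only; the layer sizes are placeholders and the slicing with Lambda layers is just one possible way to do it):

import tensorflow as tf
from tensorflow.keras.layers import (Input, Lambda, Conv2D, MaxPooling2D,
                                     Flatten, LSTM, Dense)
from tensorflow.keras.models import Model

seq_input = Input(shape=(5, 128, 128, 3))

# Shared per-frame layers: the same weights are reused for every frame
conv = Conv2D(16, 3, activation='relu')
pool = MaxPooling2D((2, 2))
flat = Flatten()

frame_features = []
for i in range(5):
    # Slice out frame i -> (batch, 128, 128, 3) and run it through the shared CNN
    frame = Lambda(lambda t, i=i: t[:, i], name=f"frame_{i}")(seq_input)
    frame_features.append(flat(pool(conv(frame))))

# Stack the per-frame vectors into (batch, 5, features) so the LSTM sees an ordered sequence
sequence = Lambda(lambda ts: tf.stack(ts, axis=1))(frame_features)
x = LSTM(8)(sequence)
outputs = Dense(5, activation='sigmoid')(x)

model = Model(seq_input, outputs)
model.summary()

Wrapping the shared CNN in TimeDistributed would achieve the same weight sharing more compactly.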
Related
I've seen that in Keras I can use tf.split to split layers. My problem is that I don't understand how to make the connections between the "forked paths" that each layer must take.
Here is an image of an example of what I'm trying to do. It is basically an Input layer that splits into 2 sub-networks and then reunites in a layer before the output.
Diagram (thin black lines and white polygons represent the weight connection matrices):
After googling with better search terms (such as "merge neural networks") I found that the Keras Functional API is the answer.
Helpful links:
https://machinelearningmastery.com/keras-functional-api-deep-learning/
https://www.educative.io/answers/how-to-merge-two-different-models-in-keras
Example code from first link:
# Shared Input Layer
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Conv2D, MaxPooling2D, concatenate
# input layer
visible = Input(shape=(64,64,1))
# first feature extractor
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
flat1 = Flatten()(pool1)
# second feature extractor
conv2 = Conv2D(16, kernel_size=8, activation='relu')(visible)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
flat2 = Flatten()(pool2)
# merge feature extractors
merge = concatenate([flat1, flat2])
# interpretation layer
hidden1 = Dense(10, activation='relu')(merge)
# prediction output
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=visible, outputs=output)
# summarize layers
print(model.summary())
# plot graph
plot_model(model, to_file='shared_input_layer.png')
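As a rough usage sketch (random data, hypothetical shapes), the merged model still trains like any single-input model, because both feature extractors read the same visible input:

import numpy as np

# Hypothetical data: 32 grayscale 64x64 images with binary labels
X = np.random.rand(32, 64, 64, 1)
y = np.random.randint(0, 2, size=(32, 1))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=2, batch_size=8)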
I'm working on a machine learning project with convolutional neural networks using TF/Keras in Python, and my goal is to split an image up into patches, run a convolution on each one separately, and then put it back together.
What I can't figure out how to do is run a convolution for each slice of a 3D array.
For example, if I have a tensor of size (500,100,100) I want to do a separate convolution for all 500 slices of size (100 x 100). I'm implementing this within a custom Keras layer and want these to be trainable weights. I've tried a few different things:
Using tf.map_fn() to run a convolution for each slice of the array
This doesn't seem to attach weights to each layer separately.
Using the DepthwiseConv2D layer:
This works well for the first call of the layer, but fails when I call the layer a second time with more filters, because it wants to perform the depthwise convolution on each of the previously filtered channels.
This, of course, isn't what I want: I want one convolution for each set of filters from the previous layer.
Any ideas are appreciated, as I'm truly stuck here. Thank you!
If you have a tensor with shape (500,100,100) and want to feed subsets of it to separate Conv2D layers at the same time, you can do so by defining the Conv2D layers at the same level. First define Lambda layers to split the input, then feed their outputs to the Conv2D layers, and finally concatenate the results.
Let's take a tensor with shape (100,28,28,1) as an example that we want to split into two sub-tensors, applying a Conv2D layer to each subset separately:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Input, concatenate, Lambda
from tensorflow.keras.models import Model
# define a sample dataset
x = tf.random.uniform((100, 28, 28, 1))
y = tf.random.uniform((100, 1), dtype=tf.int32, minval=0, maxval=9)
ds = tf.data.Dataset.from_tensor_slices((x, y))
ds = ds.batch(16)
def create_nn_model():
    input = Input(shape=(28,28,1))
    b1 = Lambda(lambda a: a[:,:14,:,:], name="first_slice")(input)
    b2 = Lambda(lambda a: a[:,14:,:,:], name="second_slice")(input)
    d1 = Conv2D(64, 2, padding='same', activation='relu', name="conv1_first_slice")(b1)
    d2 = Conv2D(64, 2, padding='same', activation='relu', name="conv2_second_slice")(b2)
    x = concatenate([d1, d2], axis=1)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    out = Dense(10, activation='softmax')(x)
    model = Model(input, out)
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
model = create_nn_model()
tf.keras.utils.plot_model(model, show_shapes=True)
Here is the plotted model architecture:
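As a quick sanity check (purely illustrative, using the toy dataset defined above), the model can be trained directly on the batched dataset:

# Train briefly on the random toy data just to verify the split/concat wiring
model.fit(ds, epochs=2)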
I want to build an LSTM on top of pre-trained CNN (VGG) to classify a video sequence. The LSTM will be fed with the features extracted by the last FC layer of VGG.
The architecture is something like:
I wrote the code:
def build_LSTM_CNN_net():
    from keras.applications.vgg16 import VGG16
    from keras.models import Model
    from keras.layers import Dense, Input, Flatten
    from keras.layers.pooling import GlobalAveragePooling2D, GlobalAveragePooling1D
    from keras.layers.recurrent import LSTM
    from keras.layers.wrappers import TimeDistributed
    from keras.optimizers import Nadam

    num_classes = 5
    frames = Input(shape=(5, 224, 224, 3))
    base_in = Input(shape=(224, 224, 3))

    base_model = VGG16(weights='imagenet',
                       include_top=False,
                       input_shape=(224, 224, 3))

    x = Flatten()(base_model.output)
    x = Dense(128, activation='relu')(x)
    x = TimeDistributed(Flatten())(x)
    x = LSTM(units=256, return_sequences=False, dropout=0.2)(x)
    x = Dense(num_classes, activation='softmax')(x)
lstm_cnn = build_LSTM_CNN_net()
keras.utils.plot_model(lstm_cnn, "lstm_cnn.png", show_shapes=True)
But got the error:
ValueError: `TimeDistributed` Layer should be passed an `input_shape ` with at least 3 dimensions, received: [None, 128]
Why is this happening, and how can I fix it?
Here is the correct way to build a model to classify video sequences. Note that I wrap a model instance in TimeDistributed. That model is first built to extract features from each frame individually. In the second part, we deal with the frame sequences.
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Input, Dense, LSTM, TimeDistributed, GlobalAveragePooling2D

frames, channels, rows, columns = 5, 3, 224, 224
video = Input(shape=(frames,
rows,
columns,
channels))
cnn_base = VGG16(input_shape=(rows,
columns,
channels),
weights="imagenet",
include_top=False)
cnn_base.trainable = False
cnn_out = GlobalAveragePooling2D()(cnn_base.output)
cnn = Model(cnn_base.input, cnn_out)
encoded_frames = TimeDistributed(cnn)(video)
encoded_sequence = LSTM(256)(encoded_frames)
hidden_layer = Dense(1024, activation="relu")(encoded_sequence)
outputs = Dense(10, activation="softmax")(hidden_layer)
model = Model(video, outputs)
model.summary()
If you want to use the VGG 1x4096 embedding representation, you can simply do:
frames, channels, rows, columns = 5,3,224,224
video = Input(shape=(frames,
rows,
columns,
channels))
cnn_base = VGG16(input_shape=(rows,
columns,
channels),
weights="imagenet",
include_top=True) #<=== include_top=True
cnn_base.trainable = False
cnn = Model(cnn_base.input, cnn_base.layers[-3].output) # -3 is the 4096 layer
encoded_frames = TimeDistributed(cnn)(video)
encoded_sequence = LSTM(256)(encoded_frames)
hidden_layer = Dense(1024, activation="relu")(encoded_sequence)
outputs = Dense(10, activation="softmax")(hidden_layer)
model = Model(video, outputs)
model.summary()
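As a quick shape check (random data, purely illustrative), both variants expect batches of frame sequences of shape (batch, frames, rows, columns, channels):

import numpy as np

# Two dummy "videos", each made of 5 RGB frames of size 224x224
dummy_videos = np.random.rand(2, frames, rows, columns, channels).astype("float32")
preds = model.predict(dummy_videos)
print(preds.shape)  # (2, 10): one softmax distribution per video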
I am trying to implement a CNN model to classify some images into their corresponding classes. The images are of size 64x64x3. My dataset consists of 25,000 images and also a CSV file with 14 pre-extracted features such as color, length, etc.
I want to build a CNN model that makes use of both the image data and the features for training and prediction. How can I implement such a model in Python with Keras?
I'm going to start out assuming that you can import the data without any issues, that you have already separated the x-data into images and features, and that you have the y-data as the labels of each image.
You can use the keras functional api to have a neural network take multiple inputs.
from keras.models import Model
from keras.layers import Conv2D, Dense, Input, Embedding, Reshape, Flatten, concatenate
img = Input(shape=(64, 64, 3))
features = Input(shape=(14,))
embedded = Embedding(input_dim=14, output_dim=60*32)(features)
embedded = Reshape(target_shape=(14, 60,32))(embedded)
encoded = Conv2D(32, (3, 3), activation='relu')(img)
encoded = Conv2D(32, (3, 3), activation='relu')(encoded)
x = concatenate([embedded, encoded], axis=1)
x = Flatten()(x)  # collapse the merged feature maps to a single vector per sample
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
model = Model([img, features], [main_output])
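A hedged training sketch (array names and shapes here are hypothetical) showing how the two inputs are passed together as a list:

import numpy as np

# Hypothetical data: 100 images of size 64x64x3 plus 100 rows of 14 features
image_data = np.random.rand(100, 64, 64, 3)
feature_data = np.random.rand(100, 14)
labels = np.random.randint(0, 2, size=(100, 1))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit([image_data, feature_data], labels, epochs=2, batch_size=16)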
I want to use a pretrained ImageNet VGG16 model in Keras and add my own small convnet on top. I am only interested in the features, not the predictions.
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
import os
from keras.models import Model
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
load images from directory (the dir contains 4 images)
IF = '/home/ubu/files/png/'
files = os.listdir(IF)
imgs = [img_to_array(load_img(IF + p, target_size=[224,224])) for p in files]
im = np.array(imgs)
load the base model, preprocess input and get the features
base_model = VGG16(weights='imagenet', include_top=False)
x = preprocess_input(im)
features = base_model.predict(x)
this works, and I get the features for my images on the pretrained VGG.
I now want to finetune the model and add some convolutional layers.
I read https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html and https://keras.io/applications/ but cannot quite bring them together.
adding my model on top:
x = base_model.output
x = Convolution2D(32, 3, 3)(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Convolution2D(32, 3, 3)(x)
x = Activation('relu')(x)
feat = MaxPooling2D(pool_size=(2, 2))(x)
building the complete model
model_complete = Model(input=base_model.input, output=feat)
stop base layers from being learned
for layer in base_model.layers:
layer.trainable = False
new model
model_complete.compile(optimizer='rmsprop',
loss='binary_crossentropy')
now fit the new model; the data is the 4 images and [1,0,1,0] are the class labels.
But this is obviously wrong:
model_complete.fit_generator((x, [1,0,1,0]), samples_per_epoch=100, nb_epoch=2)
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
How is this done?
How would I do it if I only wanted to replace the last convolutional block (conv block5 in VGG16) instead of adding something?
How would I only train the bottleneck features?
The output features has shape (4, 512, 7, 7). There are four images, but what is in the other dimensions? How would I reduce that to a (1, x) array?
Fitting model
The problem with your generator code is that the fit_generator method expects a generator function that yields the data for fitting, which you don't provide.
You can either define a generator as done in the tutorial that you have linked to, or create the data and labels yourself and fit the model directly:
model_complete.fit(images, labels, batch_size=100, nb_epoch=2)
where images are your generated training images and labels are the corresponding labels.
Removing last layer
Assuming you have a model variable and the "pop" method described below, you can do model = pop(model) to remove the last layer.
Training only specific layers
As you have done in your code, you can do:
for layer in base_model.layers:
layer.trainable = False
Then you can "unfreeze" any layer that you want by changing its trainable property to True.
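For example, to retrain only the last convolutional block of VGG16 (your question 2), one sketch is to rely on the block5 layer-name prefix that the Keras VGG16 model uses (check base_model.summary() to confirm the names):

for layer in base_model.layers:
    # Unfreeze only the layers of the last convolutional block (block5_*)
    layer.trainable = layer.name.startswith('block5')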
Changing dimensions
To change the output to a 1D array you can use the Flatten layer
The pop method
def pop(model):
    '''Removes a layer instance on top of the layer stack.
    This code is thanks to #joelthchao https://github.com/fchollet/keras/issues/2371#issuecomment-211734276
    '''
    if not model.outputs:
        raise Exception('Sequential model cannot be popped: model is empty.')
    else:
        model.layers.pop()
        if not model.layers:
            model.outputs = []
            model.inbound_nodes = []
            model.outbound_nodes = []
        else:
            model.layers[-1].outbound_nodes = []
            model.outputs = [model.layers[-1].output]
        model.built = False
    return model
Use model.fit(X, y) to train on your dataset as explained here: https://keras.io/models/model/#fit
Additionally, you should add a Flatten layer and a Dense layer with an output shape of 1 to get the correct result shape.
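Putting those pieces together, a minimal sketch (reusing im, base_model and preprocess_input from your question, and the same old Keras-1 style arguments used there) might look like:

import numpy as np
from keras.layers import Flatten, Dense
from keras.models import Model

x = Flatten()(base_model.output)            # collapse the VGG feature maps to a 1D vector per image
output = Dense(1, activation='sigmoid')(x)  # single unit for the binary labels

model_complete = Model(input=base_model.input, output=output)
for layer in base_model.layers:
    layer.trainable = False                 # keep the pretrained VGG weights frozen

model_complete.compile(optimizer='rmsprop', loss='binary_crossentropy')
model_complete.fit(preprocess_input(im), np.array([1, 0, 1, 0]), batch_size=4, nb_epoch=2)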