Keras: concatenating model flattened output with vector - python

I have a Keras model defined as such:
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=256, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=3))
model.add(Flatten())
model.add(Dense(num_classes, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
After the Flatten() layer, I want to concatenate 2 additional features, i.e. if Flatten() gives me a vector of size (1, n) (model.output_shape == (None, n)), I want to concatenate a separate numpy array of size (1, 2) so model.output_shape == (None, n+2). How would I go about doing this?
I think keras.layers.merge.Concatenate is what I'm looking for here, but I don't know how to implement it. There aren't many examples online, and Keras 2.0 also uses an updated syntax. Any help would be appreciated.

I played around a bit and figured it out. For anyone who's interested: this is a good use case for Keras' functional API, which always returns tensors, on which you can do tensor operations.
from keras.models import Model
from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense
from keras.layers.merge import Concatenate

# sequence_input is the Input tensor that feeds embedding_layer,
# e.g. sequence_input = Input(shape=(max_sequence_length,), dtype='int32')
embedded_sequence = embedding_layer(sequence_input)
x = Conv1D(filters=256, kernel_size=3, activation='relu', padding='same')(embedded_sequence)
x = MaxPooling1D(pool_size=3)(x)
x = Flatten()(x)
# additional features input
af_input = Input(shape=(data['af_train'].shape[1],), name='af_input')
x = Concatenate()([x, af_input])
# output
main_output = Dense(num_classes, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[sequence_input, af_input], outputs=main_output)
model.compile(loss='binary_crossentropy', optimizer='adam')
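Training then takes one array per Input, in the same order as inputs=[sequence_input, af_input]. A minimal sketch, with x_train and y_train as hypothetical names for the padded sequences and the labels:
# Two inputs -> pass a list of two arrays, ordered to match the Model's inputs.
model.fit([x_train, data['af_train']], y_train, epochs=10, batch_size=32)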

I haven't tested this code, but I did something similar and it worked (may not be the most efficient way though):
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=256, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=3))
model.add(Flatten())
from keras.layers import Merge  # Keras 1 only; Merge was removed in Keras 2
auxiliary_input = Input(shape=(2,), name='aux_input')
final_model = Sequential()
# Note: the legacy Merge layer expected models/layers rather than raw
# Input tensors, so the aux input may need wrapping in its own Sequential.
final_model.add(Merge([model, auxiliary_input], mode='concat'))
final_model.add(Dense(num_classes, activation='sigmoid'))
final_model.compile(loss='binary_crossentropy', optimizer='adam')
There is also a part of the docs that gives an example of multiple inputs (and multiple outputs), but it uses the older API style. Since Merge was removed in Keras 2, prefer the functional-API approach from the answer above on recent versions.

Related

Using two autoencoders side by side and build a model on top in Keras

For a chess engine I want to use two autoencoder models, which extract key features from a chess position, concatenate them, and build a model on top to compare two chess positions.
My code looks like this so far:
enc1 = keras.models.load_model("autoencoder.h5")
enc2 = keras.models.load_model("autoencoder.h5")
encoder1 = Model(
    inputs=enc1.input,
    outputs=[enc1.get_layer(index=2).output,
             enc1.get_layer(index=4).output,
             enc1.get_layer(index=6).output,
             enc1.get_layer(index=7).output]
)
encoder1.trainable = False
encoder2 = Model(
    inputs=enc2.input,
    outputs=[enc2.get_layer(index=2).output,
             enc2.get_layer(index=4).output,
             enc2.get_layer(index=6).output,
             enc2.get_layer(index=7).output]
)
encoder2.trainable = False
encoder2.trainable = False
model = Sequential()
model.add(concatenate([encoder1, encoder2]))
model.add(Dense(400, activation="relu", input_shape=(2,769,)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(200, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2()))
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2()))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
metric = tf.keras.metrics.CategoricalAccuracy()
model.compile(optimizer=Adam(learning_rate=0.001), loss="categorical_crossentropy", metrics=metric)
This is giving errors. How do I concatenate these two autoencoder layers?
Thanks so much!
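No answer is recorded here, but one common way to wire this is with the functional API: concatenate output tensors, not Model objects, and build the head functionally instead of with Sequential. An untested sketch; the input shapes and the choice of using only each encoder's last output are assumptions, not taken from the post:
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Concatenate, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2

# One fresh Input per board position (shapes assumed from the encoders).
in1 = Input(shape=encoder1.input_shape[1:])
in2 = Input(shape=encoder2.input_shape[1:])
# Each encoder returns a list of four tensors; take the last one here.
feat1 = encoder1(in1)[-1]
feat2 = encoder2(in2)[-1]
x = Concatenate()([feat1, feat2])  # concatenates tensors, not models
x = Dense(400, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(200, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2())(x)
x = Dropout(0.2)(x)
x = Dense(100, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2())(x)
x = Dropout(0.2)(x)
out = Dense(2, activation='softmax')(x)
model = Model(inputs=[in1, in2], outputs=out)
model.compile(optimizer=Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=[tf.keras.metrics.CategoricalAccuracy()])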

Keras fully connected followed by convolution

I'm not super amazing with keras yet, so please be gentle.
My input data is a matrix of size 60000 x 784.
I'm trying to add convolutional layers after my fully connected layers, something like this:
model = Sequential()
model.add(Dense(784, input_dim=train_amplitudes.shape[1], activation='relu'))
model.add(Dense(784, activation='relu'))
model.add(Dense(784, activation='relu'))
model.add(Conv2D(100, kernel_size=5, activation='relu', input_shape=(28, 28)))
model.add(Conv2D(20, kernel_size=3, activation='relu'))
model.add(Dense(train_targets.shape[1], activation='linear'))
Notice that 28 * 28 = 784.
I get the error "Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=2" at the first convolution layer.
Why and how can I fix this?
What is the purpose of this specific network structure? Assuming your original data is 28x28, you should leave the input as 28x28 and apply Conv2D first. After that you can flatten the last output of the convolutional blocks and continue with the fully connected layers.
In Keras, Conv2D expects a 4D input tensor of shape (batch, channels, rows, cols) if data_format is "channels_first", or (batch, rows, cols, channels) if data_format is "channels_last". You're passing only rows and columns, but it also needs the channel dimension (e.g. input_shape=(28, 28, 1) for channels_last; the batch dimension is implicit). More information can be found in the Conv2D documentation.
I think I've managed to fix it. This is the code that "works"
model = Sequential()
model.add(Dense(784, input_dim=train_amplitudes.shape[1], activation='relu'))
model.add(Dense(784, activation='relu'))
model.add(Dense(784, activation='relu'))
model.add(Reshape((28, 28, 1)))
model.add(Conv2D(100, kernel_size=5, activation='relu'))
model.add(Conv2D(20, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(train_targets.shape[1], activation='linear'))
It works in the sense that no error is produced. Whether it makes sense or produces good output is another matter, but this is good enough for me.
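For reference, the conv-first structure suggested in the first answer (convolutions on the 28x28 input, dense layers after) would look roughly like this; an untested sketch that keeps the layer sizes from the question:
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, Reshape

model = Sequential()
model.add(Reshape((28, 28, 1), input_shape=(784,)))  # 784 = 28 * 28, one channel
model.add(Conv2D(100, kernel_size=5, activation='relu'))
model.add(Conv2D(20, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(784, activation='relu'))
model.add(Dense(train_targets.shape[1], activation='linear'))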

How to get weights from keras model?

I'm trying to build a 2-layer neural network for the MNIST dataset, and I want to get the weights from my model.
I found a similar question here on SO and tried this:
model.get_weights()
But it returned 11 values when I checked len(model.get_weights()). Isn't it supposed to return 3 weight matrices? I have even disabled the bias.
model = Sequential()
model.add(Flatten(input_shape = (28, 28)))
model.add(Dense(512, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(128, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.add(BatchNormalization())
model.add(Dropout(0.1))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', use_bias=False,))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
result = model.fit(x_train, y_train, validation_split=0.25, epochs=10,
                   batch_size=128, verbose=1)
The 11 arrays are expected: each of your two BatchNormalization layers stores four arrays (gamma, beta, moving mean, moving variance), which together with the three Dense kernels gives 3 + 2 * 4 = 11. To get the weights of a particular layer, you could retrieve the layer by its name and call get_weights on it (as shubham-panchal said in their comment).
For example:
model.get_layer('dense').get_weights()
or
model.get_layer('dense_2').get_weights()
You could also go through the layers of your model and retrieve their names and weights:
{layer.name: layer.get_weights() for layer in model.layers}
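To see exactly where the 11 arrays come from, a quick sketch that prints each layer's weight shapes (assuming the model above):
# Each BatchNormalization layer accounts for 4 of the 11 arrays.
for layer in model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])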

Merge 6 inputs in Conv1D keras

I have written a structure for Conv1D in Keras. I want to merge 6 different inputs of the same shape. Previously, Merge([model1, model2, model3, model4, model5, model6], mode='concat') worked just fine, but after the new updates I can't use Merge anymore.
Concatenate can be used as follows,
from keras.layers import Concatenate
model = Concatenate([ model1, model2, model3, model4, model5, model6])
But I want to add Dense layers before the softmax layer to this merged model, and I can't add them to Concatenate as it accepts only tensor inputs.
How do I merge the 6 inputs before passing them to the two Dense layers and the softmax layer?
My current code is as follows,
input_shape = (64,250)
model1 = Sequential()
model1.add(Conv1D(64, 2, activation='relu', input_shape=input_shape))
model1.add(Conv1D(64, 2, activation='relu'))
model1.add(MaxPooling1D(2))
model1.add(Dropout(0.75))
model1.add(Flatten())
model2 = Sequential()
model2.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model2.add(Conv1D(128, 2, activation='relu'))
model2.add(MaxPooling1D(2))
model2.add(Dropout(0.75))
model2.add(Flatten())
model3 = Sequential()
model3.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model3.add(Conv1D(128, 2, activation='relu'))
model3.add(MaxPooling1D(2))
model3.add(Dropout(0.75))
model3.add(Flatten())
model4 = Sequential()
model4.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model4.add(Conv1D(128, 2, activation='relu'))
model4.add(MaxPooling1D(2))
model4.add(Dropout(0.75))
model4.add(Flatten())
model5 = Sequential()
model5.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model5.add(Conv1D(128, 2, activation='relu'))
model5.add(MaxPooling1D(2))
model5.add(Dropout(0.75))
model5.add(Flatten())
model6 = Sequential()
model6.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model6.add(Conv1D(128, 2, activation='relu'))
model6.add(MaxPooling1D(2))
model6.add(Dropout(0.75))
model6.add(Flatten())
from keras.layers import Concatenate
model = Concatenate([ model1, model2, model3, model4, model5, model6])
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(40, activation='softmax'))
opt = keras.optimizers.adam(lr=0.001, decay=1e-6)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit([d1, d2, d3, d4, d5, d6], label, validation_split=0.2, batch_size=25, epochs=30)
The way you are calling Concatenate is not correct. The Concatenate constructor takes configuration such as the axis of concatenation; the instantiated layer is then called on a list of tensors. What you are trying to achieve can be done using Keras's functional API.
just change the following code
model = Concatenate([ model1, model2, model3, model4, model5, model6])
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(40, activation='softmax'))
to
merged = Concatenate()([ model1.output, model2.output, model3.output, model4.output, model5.output, model6.output])
merged = Dense(512, activation='relu')(merged)
merged = Dropout(0.75)(merged)
merged = Dense(1024, activation='relu')(merged)
merged = Dropout(0.75)(merged)
merged = Dense(40, activation='softmax')(merged)
model = Model(inputs=[model1.input, model2.input, model3.input, model4.input, model5.input, model6.input], outputs=merged)
N.B.
Though it is not the question being asked, I've noticed that you are using a very high dropout rate (though this may depend on the problem you are trying to solve). A rate of 0.75 means you are dropping 75% of the neurons during training. Please consider using a smaller rate, because otherwise the model might not converge.

How to use ConvLSTM2D followed by Conv2D in Keras python

I am trying to use the following model in Keras, where the ConvLSTM2D output is fed to a Conv2D layer to generate segmentation-like output. Input and output should both be time series of frames, each of size (2*WINDOW_H+1, 2*WINDOW_W+1).
model = Sequential()
model.add(ConvLSTM2D(3, kernel_size=3, padding = "same", batch_input_shape=(1, None, 2*WINDOW_H+1, 2*WINDOW_W+1, 1), return_sequences=True, stateful=True))
model.add(Conv2D(1, kernel_size=3, padding = "same"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
However, this gives the following error (when adding Conv2D):
Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
Any pointers on where I might be wrong would be really appreciated. Thanks!
I think you would need to wrap the Conv2D layer in TimeDistributed so that the dimensions match. Like this, maybe:
model = Sequential()
model.add(ConvLSTM2D(3, kernel_size=3, padding = "same", batch_input_shape=(1, None, 2*WINDOW_H+1, 2*WINDOW_W+1, 1), return_sequences=True, stateful=True))
model.add(TimeDistributed(Conv2D(1, kernel_size=3, padding="same")))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
The problem in your model is that you feed the full sequence to a regular convolutional layer.
The only thing you need is to remove return_sequences=True from ConvLSTM2D.
So this line:
model.add(ConvLSTM2D(3, kernel_size=3, padding = "same", batch_input_shape=(1, None, 2*WINDOW_H+1, 2*WINDOW_W+1, 1), return_sequences=True, stateful=True))
should be like this:
model.add(ConvLSTM2D(3, kernel_size=3, padding = "same", batch_input_shape=(1, None, 2*WINDOW_H+1, 2*WINDOW_W+1, 1), stateful=True))
In the ConvLSTM2D layer before Conv2D, use return_sequences=False. When you run the program, the layer output then has ndim=4, which Conv2D accepts.
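An untested sketch of the return_sequences=False variant (with hypothetical window sizes, and stateful omitted for brevity), showing that the ConvLSTM2D output becomes 4D:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Conv2D

m = Sequential()
# return_sequences defaults to False, so the time axis is dropped and
# the output shape is (batch, rows, cols, filters), i.e. ndim=4.
m.add(ConvLSTM2D(3, kernel_size=3, padding="same",
                 input_shape=(None, 15, 15, 1)))
m.add(Conv2D(1, kernel_size=3, padding="same"))
m.summary()  # ConvLSTM2D output: (None, 15, 15, 3)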
