I am trying to merge 2 sequential models in Keras. Here is the code:
model1 = Sequential(layers=[
    # input layers and convolutional layers
    Conv1D(128, kernel_size=12, strides=4, padding='valid', activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=6),
    Conv1D(256, kernel_size=12, strides=4, padding='valid', activation='relu'),
    MaxPooling1D(pool_size=6),
    Dropout(.5),
])
model2 = Sequential(layers=[
    # input layers and convolutional layers
    Conv1D(128, kernel_size=20, strides=5, padding='valid', activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=5),
    Conv1D(256, kernel_size=20, strides=5, padding='valid', activation='relu'),
    MaxPooling1D(pool_size=5),
    Dropout(.5),
])
model = merge([model1, model2], mode='sum')
Flatten(),
Dense(256, activation='relu'),
Dropout(.5),
Dense(128, activation='relu'),
Dropout(.35),
# output layer
Dense(5, activation='softmax')
return model
Here is the error log:
File "/nics/d/home/dsawant/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 392, in is_keras_tensor
    raise ValueError('Unexpectedly found an instance of type ' + str(type(x)) + '. '
ValueError: Unexpectedly found an instance of type <class 'keras.models.Sequential'>. Expected a symbolic tensor instance.
Some more log:
ValueError: Layer merge_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.models.Sequential'>. Full input: [<keras.models.Sequential object at 0x2b32d518a780>, <keras.models.Sequential object at 0x2b32d521ee80>]. All inputs to the layer should be tensors.
How can I merge these 2 Sequential models that use different window sizes and apply functions like 'max', 'sum', etc. to them?
The functional API gives you all of these possibilities.
When using the functional API, you need to keep track of inputs and outputs, instead of just defining layers.
You define a layer, then you call the layer with an input tensor to get the output tensor. Models and layers can be called exactly the same way.
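For example, the call pattern looks like this (a minimal sketch, assuming you already have some tensor x):
dense = Dense(64, activation='relu')   # define the layer
y = dense(x)                           # call it on a tensor to get the output tensor
# or equivalently, in one line:
y = Dense(64, activation='relu')(x)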
For merging, I prefer the more intuitive merge layers, such as Add(), Multiply() and Concatenate().
from keras.layers import *

mergedOut = Add()([model1.output, model2.output])
# Add() creates a merge layer that sums its inputs.
# The second parentheses "call" the layer with the output tensors of the two models;
# this requires that model1 and model2 have the same output shape.
The same idea applies to all the following layers: we keep updating the output tensor by giving it to each layer and getting a new output. (If we were interested in creating branches, we would use a different variable for each output of interest to keep track of them.)
mergedOut = Flatten()(mergedOut)
mergedOut = Dense(256, activation='relu')(mergedOut)
mergedOut = Dropout(.5)(mergedOut)
mergedOut = Dense(128, activation='relu')(mergedOut)
mergedOut = Dropout(.35)(mergedOut)
# output layer
mergedOut = Dense(5, activation='softmax')(mergedOut)
Now that we've created the "path", it's time to create the Model. Creating the model is just a matter of telling it at which input tensors it starts and where it ends:
from keras.models import Model
newModel = Model([model1.input,model2.input], mergedOut)
#use lists if you want more than one input or output
Notice that since this model has two inputs, you have to train it with two different training arrays passed in a list:
newModel.fit([X_train_1, X_train_2], Y_train, ....)
Now, suppose you wanted only one input, and both model1 and model2 would take the same input.
The functional API allows that quite easily by creating an input tensor and feeding it to the models (we call the models as if they were layers):
commonInput = Input(input_shape)
out1 = model1(commonInput)
out2 = model2(commonInput)
mergedOut = Add()([out1,out2])
In this case, the Model would consider this input:
oneInputModel = Model(commonInput,mergedOut)
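Training it then takes just a single input array (a sketch with hypothetical X_train and Y_train):
oneInputModel.fit(X_train, Y_train, ....)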
I need to see how I would initialize all layers of a Sequential model with the weights of an identically structured Sequential model. For example, how would I initialize the weights for every layer of the following Sequential model?
model = tf.keras.Sequential([
    Dense(2000, activation='relu', input_shape=(11,)),
    Dense(1, activation='relu'),
    Dropout(0.5),
    Dense(400, activation='relu'),
    Dropout(0.5),
    Dense(150, activation='relu'),
    BatchNormalization(),
    Dense(y_max+1, activation='softmax')
])
I am fairly new to CNN training and have managed to make the above code work through trial and error and extensive research.
The data types involved are list and np.array with dtype np.float64.
The idea is that I grab the weights from one model (same as above) and return them to another model (also the same as above). I just need to see how I can initialize the weights and biases of all layers using the following:
weights = model.get_weights()[0]
biases = model.get_weights()[1]
return weights, biases
I have attempted the model.set_weights() method, but I keep getting the following error message, given the code before the TypeError:
if iteration == 1:
    for layer in model.layers:
        layer.set_weights(None, None)
TypeError: set_weights() takes 2 positional arguments but 3 were given
I'd be very appreciative of any help, thank you.
In the Sequential example above, each layer's parameters can be accessed and assigned new weights as shown below:
# first layer of the model
model.layers[0]
# weights of the first layer (kernel and bias of the Dense layer in this case)
model.layers[0].weights
# assign new weights
model.layers[0].kernel.assign(tf.Variable(new_kernel_weights))
model.layers[0].bias.assign(tf.Variable(new_bias_weights))
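As for the TypeError above: set_weights() takes a single list of arrays (kernel and bias together), not two separate arguments. A minimal sketch of copying all weights layer by layer between two identically structured models (source_model and target_model are hypothetical names):
for src_layer, dst_layer in zip(source_model.layers, target_model.layers):
    # get_weights() returns a list of numpy arrays; set_weights() expects the same list
    dst_layer.set_weights(src_layer.get_weights())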
I'm working on a machine learning project with convolutional neural networks using TF/Keras in Python, and my goal is to split up an image up into patches, run a convolution on each one separately, and then put it back together.
What I can't figure out how to do is run a convolution for each slice of a 3D array.
For example, if I have a tensor of size (500,100,100), I want to do a separate convolution for all 500 slices of size (100 x 100). I'm implementing this within a custom Keras layer and want these to be trainable weights. I've tried a few different things:
Using tf.map_fn() to run a convolution for each slice of the array
This doesn't seem to attach weights to each layer separately.
Using the DepthwiseConv2D layer:
This works well for the first call of the layer, but fails when I call the layer a second time with more filters, because it wants to perform the depthwise convolution on each of the previously filtered channels.
This, of course, isn't what I want, because I want one convolution for each of the sets of filters from the previous layer.
Any ideas are appreciated, as I'm truly stuck here. Thank you!
If you have a tensor with shape (500,100,100) and want to feed subsets of this tensor to separate Conv2D layers at the same time, you can do so by defining the Conv2D layers at the same level. First define Lambda layers to split the input, then feed their outputs to the Conv2D layers, and finally concatenate the results.
As an example, let's take a tensor with shape (100,28,28,1), which we want to split into 2 subset tensors and apply a Conv2D layer to each subset separately:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Input, concatenate, Lambda
from tensorflow.keras.models import Model
# define a sample dataset
x = tf.random.uniform((100, 28, 28, 1))
y = tf.random.uniform((100, 1), dtype=tf.int32, minval=0, maxval=9)
ds = tf.data.Dataset.from_tensor_slices((x, y))
ds = ds.batch(16)
def create_nn_model():
    inp = Input(shape=(28, 28, 1))
    b1 = Lambda(lambda a: a[:, :14, :, :], name="first_slice")(inp)
    b2 = Lambda(lambda a: a[:, 14:, :, :], name="second_slice")(inp)
    d1 = Conv2D(64, 2, padding='same', activation='relu', name="conv1_first_slice")(b1)
    d2 = Conv2D(64, 2, padding='same', activation='relu', name="conv2_second_slice")(b2)
    x = concatenate([d1, d2], axis=1)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    out = Dense(10, activation='softmax')(x)
    model = Model(inp, out)
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
model = create_nn_model()
tf.keras.utils.plot_model(model, show_shapes=True)
The plotted model architecture shows the input splitting into the two slices, each passing through its own Conv2D branch before the branches are concatenated back together.
I am trying to tie together a CNN layer with 2 LSTM layers and ctc_batch_cost for loss, but I'm encountering some problems. My model is supposed to work with grayscale images.
During my debugging I've figured out that if I use just a CNN layer that keeps the output size equal to the input size, plus the LSTM and CTC parts, the model is able to train:
# === Without MaxPool2D ===
inp = Input(name='inp', shape=(128, 32, 1))
cnn = Conv2D(name='conv', filters=1, kernel_size=3, strides=1, padding='same')(inp)
# Go from Bx128x32x1 to Bx128x32 (B x TimeSteps x Features)
rnn_inp = Reshape((128, 32))(cnn)
blstm = Bidirectional(LSTM(256, return_sequences=True), name='blstm1')(rnn_inp)
blstm = Bidirectional(LSTM(256, return_sequences=True), name='blstm2')(blstm)
# Softmax.
dense = TimeDistributed(Dense(80, name='dense'), name='timedDense')(blstm)
rnn_outp = Activation('softmax', name='softmax')(dense)
# Model compiles, calling fit works!
But when I add a MaxPool2D layer that halves the dimensions, I get an error sequence_length(0) <= 64, similar to the one presented here.
# === With MaxPool2D ===
inp = Input(name='inp', shape=(128, 32, 1))
cnn = Conv2D(name='conv', filters=1, kernel_size=3, strides=1, padding='same')(inp)
maxp = MaxPool2D(name='maxp', pool_size=2, strides=2, padding='valid')(cnn) # -> 64x16x1
# Go from Bx64x16x1 to Bx64x16 (B x TimeSteps x Features)
rnn_inp = Reshape((64, 16))(maxp)
blstm = Bidirectional(LSTM(256, return_sequences=True), name='blstm1')(rnn_inp)
blstm = Bidirectional(LSTM(256, return_sequences=True), name='blstm2')(blstm)
# Softmax.
dense = TimeDistributed(Dense(80, name='dense'), name='timedDense')(blstm)
rnn_outp = Activation('softmax', name='softmax')(dense)
# Model compiles, but calling fit crashes with:
# InvalidArgumentError: sequence_length(0) <= 64
# [[{{node ctc_loss_1/CTCLoss}}]]
After struggling with this problem for about 3 days, I posted the above question here on Stack Overflow. About 2 hours after posting it, I finally figured it out.
TL;DR Solution:
If you're using ctc_batch_cost:
Make sure you're passing the lengths (numbers of timesteps) of the sequences entering your RNNs as their inputs for the input_length argument.
If you're using ctc_loss:
Make sure you're passing the lengths (numbers of timesteps) of the sequences entering your RNNs as their inputs for the logit_length argument.
Solution:
The solution lies in the documentation which, while relatively sparse, can be cryptic for a machine learning newbie like myself.
The TensorFlow documentation for ctc_batch_cost reads:
tf.keras.backend.ctc_batch_cost(
y_true, y_pred, input_length, label_length
)
...
input_length: tensor (samples, 1) containing the sequence length for each batch item in y_pred.
...
input_length corresponds to logit_length from ctc_loss function's TensorFlow documentation:
tf.nn.ctc_loss(
labels, logits, label_length, logit_length, logits_time_major=True, unique=None,
blank_index=None, name=None
)
...
logit_length: tensor of shape [batch_size]. Length of input sequence in logits.
...
That's where it clicked: at the word logit. The argument for input_length or logit_length is supposed to be a tensor/container (in my case, a numpy array) of the lengths (i.e. numbers of timesteps) of the sequences entering the RNN (in my case, an LSTM) as input.
I was originally making the mistake of taking the required length to be the width of the grayscale images that serve as input for the whole network (CNN + MaxPool2D + RNN). But because the MaxPool2D layer produces a tensor with different dimensions for the RNN's input, the CTC loss function crashes.
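Concretely, for the MaxPool2D version above the RNN sees 64 timesteps, so input_length should contain 64 for every sample rather than the original image width of 128. A minimal sketch (batch_size, y_true, y_pred and label_length are assumed to already exist):
import numpy as np
# 64 timesteps remain after MaxPool2D halves the 128-wide input
input_length = np.full((batch_size, 1), 64)
loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)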
Now fit runs without crashing.
I would like to combine 2 neural networks which are showing probabilities of classes.
One says that it is a cat on the image.
The second says that the cat has a collar.
How do I use the softmax activation function on the outputs of the neural network?
Please see the picture to understand the main idea:
You can use the functional API to create a multi-output network. Essentially every output will be a separate prediction. Something along the lines of:
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.models import Model

inp = Input(shape=(w, h, c))                    # image input ('in' is a reserved word in Python)
x = Conv2D(32, (3, 3), activation='relu')(inp)  # some convolutional layers to extract features
latent = GlobalAveragePooling2D()(x)            # collapse the spatial dims into a feature vector
# both heads share the underlying features
animal = Dense(2, activation='softmax')(latent)
collar = Dense(2, activation='softmax')(latent)
model = Model(inp, [animal, collar])
model.compile(loss='categorical_crossentropy', optimizer='adam')
You can have as many separate outputs as you like. If you only have binary features, you can instead use a single vector output, Dense(2, activation='sigmoid'): the first entry could predict cat or not, and the second whether it has a collar. This would be a multi-class, multi-label setup.
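A minimal sketch of that multi-label variant, reusing the latent tensor from above:
labels = Dense(2, activation='sigmoid')(latent)  # [is_cat, has_collar]
model = Model(inp, labels)
model.compile(loss='binary_crossentropy', optimizer='adam')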
Just create two separate Dense layers (with softmax activation) at the end of your model, e.g.:
from keras.layers import Input, Dense, Conv2D, Flatten
from keras.models import Model

# Input example:
inputs = Input(shape=(64, 64, 3))
# Example of a model body:
x = Conv2D(16, (3, 3), padding='same')(inputs)
x = Flatten()(x)  # flatten the feature maps before the Dense layers
x = Dense(512, activation='relu')(x)
x = Dense(64, activation='relu')(x)
# ... (replace with your actual layers)
# Then add two separate heads taking the previous output and generating the two estimations:
cat_predictions = Dense(2, activation='softmax')(x)
collar_predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=inputs, outputs=[cat_predictions, collar_predictions])
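To train such a model, compile it with one loss per output and pass the targets as a list in the same order (X_train, cat_targets and collar_targets are hypothetical arrays):
model.compile(optimizer='adam',
              loss=['categorical_crossentropy', 'categorical_crossentropy'])
model.fit(X_train, [cat_targets, collar_targets], epochs=10)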
I have two different types of data (image volumes and coordinates) and I would like to use a convolutional neural network on the image volume data and then after this I would like to append some additional information (ie. the coordinates of the volume).
Independently, this should create a pretty solid predictor for my function. How can I implement this using Keras?
The only answers I have found online are either ambiguous or use deprecated methods (which I have managed to get to work). But I would really like to implement this using the current API, so that I can more easily save the model for later use.
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv3D(64, (3, 3, 3), activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
print(model.output_shape)

# The additional data (the coordinates x, y, z)
extra = Sequential()
extra.add(Activation('sigmoid', input_shape=(3,)))
print(extra.output_shape)

merged = Concatenate([model, extra])

# The new model should encompass the outputs of the convolutional network
# and the coordinates that have been merged. But how?
new_model = Sequential()
new_model.add(Dense(128, activation='relu'))
new_model.add(Dropout(0.8))
new_model.add(Dense(32, activation='sigmoid'))
new_model.add(Dense(num_classes, activation='softmax'))
new_model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])
Sequential models are not suited for creating models with branches.
You can have the two independent models as Sequential models, as you did, but from the Concatenate on, you should start using the functional Model API.
The idea is to get the output tensors of the two models and feed them in other layers to get new output tensors.
So, considering you have model and extra:
mergedOutput = Concatenate()([model.output, extra.output])
This mergedOutput is a tensor. You can either create the last part of the model using this tensor, or create the last part independently and call it on this tensor. The second approach may be good if you want to train each model separately (which doesn't seem to be your case).
Now, creating the new model as a functional API model:
out = Dense(128, activation='relu')(mergedOutput)
out = Dropout(0.8)(out)
out = Dense(32, activation='sigmoid')(out)
out = Dense(num_classes, activation='softmax')(out)

new_model = Model(
    [model.input, extra.input],  # a model with two input tensors
    out                          # and one output tensor
)
An easier approach is to take all three models you have already created and use them to create a combined model:
model = Sequential()      # your first model
extra = Sequential()      # your second model
new_model = Sequential()  # all three exactly as you did

# In this case, you just need to give new_model an input shape compatible
# with the concatenated output of the previous models:
new_model.add(FirstNewModelLayer(..., input_shape=(someValue,)))
Join them like this:
mergedOutput = Concatenate()([model.output, extra.output])
finalOutput = new_model(mergedOutput)
fullModel = Model([model.input, extra.input], finalOutput)
Use the functional API of Keras (https://keras.io/models/model/). With the functional API you apply layers to tensors: you have a tensor, you apply a function (a layer) to it, and you get a new tensor that can be fed to the next layer. Because pretty much everything in Keras is a tensor, this works quite nicely.
An example for this is:
activation = Dense(128, activation='relu')(merged)