Keras Sequential model with multiple inputs - Python

I am quite new to machine learning and I am currently working on a "car value predictor" application. I'm stuck at the point where I have to feed my data to my model. I have 4 inputs:
date: the car's first registration date (int)
km: the car's odometer reading (int)
consume: the car's fuel/consumption type (a one-hot encoded vector with 10 elements, e.g. for petrol: [1 0 0 0 0 0 0 0 0 0])
type: the car's type (for example "BMW-320", stored in a one-hot encoded vector with 440 elements)
and one output:
the price of the car.
I would like to do something similar to this: https://imgur.com/wlvffn7
I have tried the following code, which compiles, but the output is not what I want:
model = Sequential([
    Dense(128, input_shape=(1,), activation='relu', name='date'),
    Dense(128, input_shape=(1,), activation='relu', name='km'),
    Dense(128, input_shape=(10,), activation='relu', name='consume'),
    Dense(128, input_shape=(440,), activation='relu', name='type'),
    Dropout(0.5),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='linear')
])
model.compile(loss='mse', optimizer='adam')
model.fit(x={'date': samples_train['input'][:, 0],
             'km': samples_train['input'][:, 1],
             'consume': samples_train['input'][:, 2],
             'type': samples_train['input'][:, 3]},
          y=samples_train['output'],
          epochs=1000,
          batch_size=16,
          verbose=1,
          validation_data=({'date': samples_valid['input'][:, 0],
                            'km': samples_valid['input'][:, 1],
                            'consume': samples_valid['input'][:, 2],
                            'type': samples_valid['input'][:, 3]},
                           samples_valid['output']),
          callbacks=callbacks)
Can anyone point out what I am doing wrong, or how I can implement a model structure like the one in the picture?
EDIT:
I think this is what I was looking for. Can anyone confirm this? :)
input_1 = Input(shape=(1,), name='date') # input layers
input_2 = Input(shape=(1,), name='km')
input_3 = Input(shape=(10,), name='consume')
input_4 = Input(shape=(440,), name='type')
dense_1 = Dense(256, activation='relu')(input_1) # hidden layers
dense_1 = Dense(256, activation='relu')(input_2)
dense_1 = Dense(256, activation='relu')(input_3)
dense_1 = Dense(256, activation='relu')(input_4)
dropout_1 = Dropout(0.5)(dense_1)
dense_2 = Dense(256, activation='relu')(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
outputs = Dense(1, activation='linear')(dropout_2) # output layer
model = Model([input_1,input_2,input_3,input_4], outputs)
Thank you in advance.

I think your second implementation is wrong.
With that implementation, dense_1 only keeps the value from its last assignment, dense_1 = Dense(256, activation='relu')(input_4), so the other inputs never feed into the rest of the network.
What you should do is concatenate your inputs into a single tensor before feeding it to the first dense layer, like this:
from keras.layers import Concatenate
input_1 = Input(shape=(1,), name='date') # input layers
input_2 = Input(shape=(1,), name='km')
input_3 = Input(shape=(10,), name='consume')
input_4 = Input(shape=(440,), name='type')
x = Concatenate()([input_1, input_2, input_3, input_4]) # concatenation of the inputs
dense_1 = Dense(256, activation='relu')(x) # hidden layers
dropout_1 = Dropout(0.5)(dense_1)
dense_2 = Dense(256, activation='relu')(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
outputs = Dense(1, activation='linear')(dropout_2) # output layer
model = Model([input_1, input_2, input_3, input_4], outputs)
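For completeness, here is a minimal sketch of compiling and fitting that functional model, reusing the dictionary-style feed from the question (it assumes the same samples_train / samples_valid arrays and callbacks; this part is not from the original answer):
model.compile(loss='mse', optimizer='adam')
model.fit(x={'date': samples_train['input'][:, 0],
             'km': samples_train['input'][:, 1],
             'consume': samples_train['input'][:, 2],
             'type': samples_train['input'][:, 3]},
          y=samples_train['output'],
          epochs=1000,
          batch_size=16,
          validation_data=({'date': samples_valid['input'][:, 0],
                            'km': samples_valid['input'][:, 1],
                            'consume': samples_valid['input'][:, 2],
                            'type': samples_valid['input'][:, 3]},
                           samples_valid['output']),
          callbacks=callbacks)
Because the Input layers are named 'date', 'km', 'consume' and 'type', the dictionary keys now map to real input layers, which was not the case in the Sequential version where the names were attached to Dense layers.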


Unexpected output shape from a keras dense layer

I'm trying to create a minimal non-convolutional NN binary image classifier with only one hidden layer (as practice before more complicated models):
def make_model(input_shape):
    inputs = keras.Input(shape=input_shape)
    x = layers.Dense(128, activation="ReLU")(inputs)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return keras.Model(inputs, outputs)

model = make_model(input_shape=(256, 256, 3))
Its model.summary() shows
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 256, 256, 3)] 0
dense (Dense) (None, 256, 256, 128) 512
dense_1 (Dense) (None, 256, 256, 1) 129
=================================================================
Total params: 641
Trainable params: 641
Non-trainable params: 0
Since the dense_1 layer has only one neuron, what I expect from this layer is an output shape of (None, 1) (i.e. a single number indicating the predicted binary label), but instead the model gives (None, 256, 256, 1).
What's wrong with my model setting and how can I get it right?
You have to flatten your preposterously large tensor if you want to use the output shape (None, 1):
import tensorflow as tf

def make_model(input_shape):
    inputs = tf.keras.layers.Input(shape=input_shape)
    x = tf.keras.layers.Dense(128, activation="relu")(inputs)
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = make_model(input_shape=(256, 256, 3))
print(model.summary())
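As an aside (my own note, not part of the original answer): if the goal is a classic fully connected classifier, an alternative is to flatten the image before the first Dense layer, so the hidden layer sees one 196,608-long vector per image instead of being applied pixel by pixel:
def make_model(input_shape):
    inputs = tf.keras.layers.Input(shape=input_shape)
    x = tf.keras.layers.Flatten()(inputs)                     # (None, 256*256*3) = (None, 196608)
    x = tf.keras.layers.Dense(128, activation="relu")(x)      # one hidden layer over the whole image
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
Note that this variant puts roughly 25 million parameters into the first Dense layer, whereas the answer above keeps the per-pixel Dense and flattens afterwards.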
There is a mistake in your function make_model.
def make_model(input_shape):
    inputs = keras.Input(shape=input_shape)
    x = layers.Dense(128, activation="ReLU")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return keras.Model(inputs, outputs)
You probably wanted the second line to be
x = layers.Dense(128, activation="ReLU")(inputs)
and not
x = layers.Dense(128, activation="ReLU")(x)
and unfortunately, x exists in scope, so it didn't throw an error.

Matrix Size Incompatibility with ragged tensor of variable size sequence and LSTM

I'm working on ECG classification using LSTMs. I have several thousand ECG sequences, but the length of each sequence differs.
The model I have built is inspired by this:
LSTM Model
I have a ragged tensor Z of 8528 sequences with shape (8528, None, None), and the corresponding one-hot encoded labels (W, one of 4 classes) form a tensor of shape (8528, 4).
The model is supposed to be a stack of LSTMs with 64, 256 and 100 units respectively, followed by a Dense layer with 4 units.
I understand that because of the variable sequence length, the batch_size has to be set to 1.
Here is the code I'm using:
max_seq = Z.bounding_shape()[-1]
print(f'Maximum length sequence : {max_seq}')

model = keras.Sequential([
    tf.keras.layers.Input(shape=[None, max_seq], batch_size=1, dtype=tf.float32, ragged=True),
    keras.layers.LSTM(units=64, activation='tanh', dropout=0.2, name='LSTM_1', return_sequences=True),
    keras.layers.LSTM(units=256, activation='tanh', dropout=0.2, name='LSTM_2', return_sequences=True),
    keras.layers.LSTM(units=100, activation='tanh', dropout=0.2, name='LSTM_3', return_sequences=False),
    # tf.keras.layers.LSTM(4),
    # tf.keras.layers.Flatten(name='Flatten'),
    tf.keras.layers.Dense(units=4, activation='sigmoid', name='Dense_1')
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()
model.fit(Z, W, batch_size=1, epochs=20)
That returns:
Maximum length sequence : 18286
Model: "sequential_23"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
LSTM_1 (LSTM)                (1, None, 64)             4697856
_________________________________________________________________
LSTM_2 (LSTM)                (1, None, 256)            328704
_________________________________________________________________
LSTM_3 (LSTM)                (None, 100)               142800
_________________________________________________________________
Dense_1 (Dense)              (None, 4)                 404
=================================================================
Total params: 5,169,764
Trainable params: 5,169,764
Non-trainable params: 0
_________________________________________________________________
Size of first sequence : 9000
But I get this error about matrix incompatibility:
Matrix size-incompatible: In[0]: [1,16866], In[1]: [18286,256]
[[{{node sequential_23/LSTM_1/while/body/_1/sequential_23/LSTM_1/while/lstm_cell_67/MatMul}}]] [Op:__inference_train_function_100065]
Function call stack:
train_function
Can someone explain why I get this type of error?
Thanks in advance.
EDIT: Here's a minimal reproducible example:
xx = tf.ragged.constant([
    [[0.1, 0.2]],
    [[0.4, 0.7, 0.5, 0.6]],
    [[0.4, 0.7, 0.5]]
])

# Labels are represented as a one-hot encoding, so you should use
# CategoricalCrossentropy instead of SparseCategoricalCrossentropy.
yy = np.array([[0, 0, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0]])

model = keras.Sequential([
    tf.keras.layers.Input(shape=[None, max_seq], batch_size=1, dtype=tf.float32, ragged=True),
    keras.layers.LSTM(units=64, activation='tanh', dropout=0.2, name='LSTM_1', return_sequences=True),
    keras.layers.LSTM(units=256, activation='tanh', dropout=0.2, name='LSTM_2', return_sequences=True),
    keras.layers.LSTM(units=100, activation='tanh', dropout=0.2, name='LSTM_3', return_sequences=False),
    tf.keras.layers.Dense(units=4, activation='sigmoid', name='Dense_1')
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()
model.fit(xx, yy, batch_size=1, epochs=20)
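For what it's worth, my reading of the error (an interpretation, not a confirmed fix): shape=[None, max_seq] declares max_seq features per timestep, so LSTM_1 builds a kernel for 18286 input features, while the ragged data puts the variable-length ECG signal on that last axis (here a sequence with 16866 values), hence the [1,16866] vs [18286,256] mismatch. The usual layout for variable-length univariate sequences is one scalar feature per timestep, along these lines:
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Each sequence is a list of timesteps, each timestep holding a single ECG sample.
xx = tf.ragged.constant([
    [[0.1], [0.2]],
    [[0.4], [0.7], [0.5], [0.6]],
    [[0.4], [0.7], [0.5]]
], ragged_rank=1)
yy = np.array([[0, 0, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0]])

model = keras.Sequential([
    keras.layers.Input(shape=[None, 1], dtype=tf.float32, ragged=True),  # variable timesteps, 1 feature
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(100, return_sequences=False),
    keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(xx, yy, epochs=2)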

How can I insert a scalar value and a binary value into a layer (the last layer) in Keras?

I am trying to modify the network which is implemented here. This network takes chest X-ray images as input and classifies them into 14 categories (13 types of diseases and "no finding"). The network does not take the patient's age and gender as input, so I want to provide it with that information too. In short, the last 3 layers of the network look like the following:
bn (BatchNormalization)           (None, 7, 7, 1024)   4096        conv5_block16_concat[0][0]
__________________________________________________________________________________________________
avg_pool (GlobalAveragePooling2   (None, 1024)         0           bn[0][0]
__________________________________________________________________________________________________
predictions (Dense)               (None, 14)           14350       avg_pool[0][0]
So what I have done so far is the following:
I simply pop the last dense layer using model_vgg16.layers.pop().
Then, as expected, the network becomes:
bn (BatchNormalization)           (None, 7, 7, 1024)   4096        conv5_block16_concat[0][0]
__________________________________________________________________________________________________
avg_pool (GlobalAveragePooling2   (None, 1024)         0           bn[0][0]
I know that I can add a layer using:
new_layer = Dense(14, activation='softmax', name='my_dense')
inp = model.input
out = new_layer(model.layers[-1].output)
model2 = Model(inp, out)
But I do not know how to add a layer that takes the output of the previous layer together with one scalar value (age, 0-100) and one binary value (gender, 0 or 1).
So how can I add a last layer that takes the previous layer's output together with one scalar value and one binary value?
Edit: The base model I am using is DenseNet121. Some of the final layers look like this:
EDIT
The way I load the model is the following:
cp = ConfigParser()
cp.read(config_file)
# default config
output_dir = cp["DEFAULT"].get("output_dir")
base_model_name = cp["DEFAULT"].get("base_model_name")
class_names = cp["DEFAULT"].get("class_names").split(",")
image_source_dir = cp["DEFAULT"].get("image_source_dir")
image_dimension = cp["TRAIN"].getint("image_dimension")
output_weights_name = cp["TRAIN"].get("output_weights_name")
weights_path = os.path.join(output_dir, output_weights_name)
best_weights_path = os.path.join(output_dir, f"best_{output_weights_name}")
model_weights_path = best_weights_path
model_factory = ModelFactory()
model = model_factory.get_model(
    class_names,
    model_name=base_model_name,
    use_base_weights=False,
    weights_path=model_weights_path)
Now the model is in variable model.
Then as suggested I do
x = model.output
flat1 = Flatten()(x)
and get this error:
ValueError: Input 0 is incompatible with layer flatten_27: expected min_ndim=3, found ndim=2
When I repeat the same thing after removing the last layer using model.layers.pop(),
I still get the same error. Even though I have spent a couple of hours on this, I cannot get past it. So how can this be done?
Try this:
from keras.applications.densenet import DenseNet121
from keras.layers import Input, Dense, GlobalAveragePooling2D, concatenate
from keras.models import Model

input_image = Input(shape=(224, 224, 3))
# normalize age
input_age_and_gender = Input(shape=(2,))

base_model = DenseNet121(input_tensor=input_image, weights='imagenet', include_top=False)
x = base_model.output
encoded_image = GlobalAveragePooling2D()(x)

out = concatenate([encoded_image, input_age_and_gender])
output = Dense(14, activation='softmax')(out)
model = Model([input_image, input_age_and_gender], output)
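For reference, a sketch of how such a two-input model would be trained; images, age_gender and labels below are placeholder arrays for illustration only, not from the original code:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# images: (N, 224, 224, 3); age_gender: (N, 2), e.g. [age / 100.0, gender]; labels: (N, 14)
model.fit([images, age_gender], labels, epochs=10, batch_size=32)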
You can have a multi input model.
So instead of just using this:
img_input = Input(shape=input_shape)
base_model = base_model_class(
    include_top=False,
    input_tensor=img_input,
    input_shape=input_shape,
    weights=base_weights,
    pooling="avg")
x = base_model.output
predictions = Dense(len(class_names), activation="sigmoid", name="predictions")(x)
model = Model(inputs=img_input, outputs=predictions)
I am not sure what your base_model looks like there, but for the sake of it, check the following, where the first input is imaginary and the shape of the second input should be the shape of your age_gender_df.values:
input1 = Input(shape=(64,64,1))
conv11 = Conv2D(32, kernel_size=4, activation='relu')(input1)
pool11 = MaxPooling2D(pool_size=(2, 2))(conv11)
conv12 = Conv2D(16, kernel_size=4, activation='relu')(pool11)
pool12 = MaxPooling2D(pool_size=(2, 2))(conv12)
flat1 = Flatten()(pool12)
# INSTEAD OF THE ABOVE INPUT I WROTE YOU CAN USE YOUR BASE MODEL
input2 = Input(shape=(2,2)) # HERE THIS SHOULD BE THE SHAPE OF YOUR AGE/GENDER DF
layer = Dense(10, activation='relu')(input2)
flat2 = Flatten()(layer)
merge = concatenate([flat1, flat2])
# interpretation model
hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output = Dense(14, activation='linear')(hidden2)
model = Model(inputs=[input1, input2], outputs=output)
Summary
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_30 (InputLayer)           (None, 64, 64, 1)    0
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 61, 61, 32)   544         input_30[0][0]
__________________________________________________________________________________________________
max_pooling2d_23 (MaxPooling2D) (None, 30, 30, 32)   0           conv2d_23[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, 27, 27, 16)   8208        max_pooling2d_23[0][0]
__________________________________________________________________________________________________
input_31 (InputLayer)           (None, 2, 2)         0
__________________________________________________________________________________________________
max_pooling2d_24 (MaxPooling2D) (None, 13, 13, 16)   0           conv2d_24[0][0]
__________________________________________________________________________________________________
dense_38 (Dense)                (None, 2, 10)        30          input_31[0][0]
__________________________________________________________________________________________________
flatten_23 (Flatten)            (None, 2704)         0           max_pooling2d_24[0][0]
__________________________________________________________________________________________________
flatten_24 (Flatten)            (None, 20)           0           dense_38[0][0]
__________________________________________________________________________________________________
concatenate_9 (Concatenate)     (None, 2724)         0           flatten_23[0][0]
                                                                 flatten_24[0][0]
__________________________________________________________________________________________________
dense_39 (Dense)                (None, 10)           27250       concatenate_9[0][0]
__________________________________________________________________________________________________
dense_40 (Dense)                (None, 10)           110         dense_39[0][0]
__________________________________________________________________________________________________
dense_41 (Dense)                (None, 14)           154         dense_40[0][0]
==================================================================================================
Total params: 36,296
Trainable params: 36,296
Non-trainable params: 0
EDIT:
In your case I suppose the model should look like the following:
img_input = Input(shape=input_shape)
base_model = base_model_class(
    include_top=False,
    input_tensor=img_input,
    input_shape=input_shape,
    weights=base_weights,
    pooling="avg")
x = base_model.output
flat1 = Flatten()(x)

input2 = Input(shape=(2,2)) # HERE THIS SHOULD BE THE SHAPE OF YOUR AGE/GENDER DF
layer = Dense(10, activation='relu')(input2)
flat2 = Flatten()(layer)

merge = concatenate([flat1, flat2])

# interpretation model
hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output = Dense(14, activation='linear')(hidden2)
model = Model(inputs=[img_input, input2], outputs=output)
The issue is solved by first removing the last layer (the prediction layer) with
model_original.layers.pop()
Then another model is defined, which is a replica of the original model except for the last layer:
model2 = keras.Model(model_original.input, model_original.layers[-1].output)
After that, the input that carries the age is defined:
age = layers.Input(shape=(1,))
Next, the age input and the output of the previously defined network are concatenated using
x = model2.output
concatenated = layers.concatenate([x, age])
In the final step, a prediction layer is added after the concatenation to complete the network:
output = Dense(14, activation='linear')(concatenated)
model3 = keras.Model(inputs=[model_original.input, age], outputs=output)
So the final layers of the design look like this:
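For readers who want those steps in one place, here is a consolidated sketch (it assumes model_original is the loaded DenseNet-based model from above and tf.keras imports; this is a recap of the steps, not new code from the post):
from tensorflow import keras
from tensorflow.keras import layers

# 1. Drop the original prediction layer and rebuild the trunk as model2
model_original.layers.pop()
model2 = keras.Model(model_original.input, model_original.layers[-1].output)

# 2. Extra scalar input for age (gender could be added the same way)
age = layers.Input(shape=(1,))

# 3. Concatenate the image features with the age input and add a new prediction head
x = model2.output
concatenated = layers.concatenate([x, age])
output = layers.Dense(14, activation='linear')(concatenated)

# 4. Two-input model: [images, ages] -> 14 outputs
model3 = keras.Model(inputs=[model_original.input, age], outputs=output)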

How to replace dense layer with convolutional one?

I want to replace the Dense_out layer with a convolutional one. Can anybody tell me how to do it?
code:
model = Sequential()
conv_1 = Conv2D(filters=32, kernel_size=(3,3), activation='relu')
model.add(conv_1)
conv_2 = Conv2D(filters=64, kernel_size=(3,3), activation='relu')
model.add(conv_2)
pool = MaxPool2D(pool_size=(2,2), strides=(2,2), padding='same')
model.add(pool)
drop = Dropout(0.5)
model.add(drop)
model.add(Flatten())
Dense_1 = Dense(128, activation='relu')
model.add(Dense_1)
Dense_out = Dense(57, activation='softmax')
model.add(Dense_out)
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_image, train_label, epochs=10, verbose=1, validation_data=(test_image, test_label))
print(model.summary())
When I try this code:
model = Sequential()
conv_01 = Conv2D(filters=32, kernel_size=(3,3), activation='relu')
model.add(conv_01)
conv_02 = Conv2D(filters=64, kernel_size=(3,3), activation='relu')
model.add(conv_02)
pool = MaxPool2D(pool_size=(2,2), strides=(2,2), padding='same')
model.add(pool)
conv_11 = Conv2D(filters=64, kernel_size=(3,3), activation='relu')
model.add(conv_11)
pool_2 = MaxPool2D(pool_size=(2,2), strides=(2,2), padding='same')
model.add(pool_2)
drop = Dropout(0.3)
model.add(drop)
model.add(Flatten())
Dense_1 = Dense(128, activation='relu')
model.add(Dense_1)
Dense_2 = Dense(64, activation='relu')
model.add(Dense_2)
conv_out = Conv2D(filters=64, kernel_size=(3,3), activation='relu')
model.add(conv_out)
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_image, train_label, epochs=10, verbose=1, validation_data=(test_image, test_label))
I get the following error
ValueError: Input 0 of layer conv2d_3 is incompatible with the layer:
expected ndim=4, found ndim=2. Full shape received: [None, 64]
I am new at this, so an explanation would help greatly.
You will need to reshape the flat tensor back into a spatial (height, width, channels) shape before you can apply a Conv2D layer.
You can use:
out = keras.layers.Reshape(target_shape)
model.add(out)
and then do the convolution:
conv_out = Conv2D(filters=3,kernel_size=(3,3),activation='softmax')
model.add(conv_out)
with filters being the number of channels you want in your output layer (3 for RGB).
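As a concrete illustration (a minimal sketch: the target_shape of (8, 8, 1) is my own choice so that the 64 units from Dense_2 reshape cleanly, and the 3-filter softmax convolution follows the snippet above; none of this is from the original question):
from keras.layers import Reshape, Conv2D

# ... continuing the Sequential model after Dense_2, whose output shape is (None, 64)
model.add(Reshape((8, 8, 1)))    # 8 * 8 * 1 == 64, so the flat vector becomes spatial again
conv_out = Conv2D(filters=3, kernel_size=(3, 3), activation='softmax')
model.add(conv_out)              # output shape: (None, 6, 6, 3)
Whether a convolution makes sense at this point depends on what the 57-class labels look like; the reshape only fixes the ndim mismatch from the error message.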
More information about the layers and their parameters can be found in the Keras documentation.

Concatenate multiple Convolution Layers

Text classification by extracting tri-gram and quad-gram features from character-level inputs using multiple concatenated CNN layers, whose output is passed to a BLSTM layer:
submodels = []
for kw in (3, 4):    # kernel sizes
    model = Sequential()
    model.add(Embedding(vocab_size, 16, input_length=maxlen, input_shape=(maxlen, vocab_size)))
    model.add(Convolution1D(nb_filter=64, filter_length=kw,
                            border_mode='valid', activation='relu'))
    submodels.append(model)

big_model = Sequential()
big_model.add(keras.layers.Concatenate(submodels))
big_model.add(Bidirectional(LSTM(100, return_sequences=False)))
big_model.add(Dense(n_out, activation='softmax'))
Model summary of individual conv layers:
Layer (type) Output Shape Param
------------ ------------ -----
embedding_49 (Embedding) (None, 1024, 16) 592
conv1d_41 (Conv1D) (None, 1024, 64) 4160
But, I am getting this error:
ValueError: Input 0 is incompatible with layer conv1d_22: expected
ndim=3, found ndim=4
UPDATE: now using the functional Keras API
x = Input(shape=(maxlen, vocab_size))
x = Embedding(vocab_size, 16, input_length=maxlen)(x)
x = Convolution1D(nb_filter=64, filter_length=3, border_mode='same', activation='relu')(x)

x1 = Input(shape=(maxlen, vocab_size))
x1 = Embedding(vocab_size, 16, input_length=maxlen)(x1)
x1 = Convolution1D(nb_filter=64, filter_length=4, border_mode='same', activation='relu')(x1)

x2 = Bidirectional(LSTM(100, return_sequences=False))
x2 = Dense(n_out, activation='softmax')(x2)

big_model = Model(input=keras.layers.Concatenate([x, x1]), output=x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
Still the same error!
from keras import Input, Model
from keras import layers

vocab_size = 1000
maxlen = 100
n_out = 1000

input_x = Input(shape=(None,))
x = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x)
x = layers.Convolution1D(nb_filter=64, filter_length=3, border_mode='same', activation='relu')(x)

input_x1 = Input(shape=(None,))
x1 = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x1)
x1 = layers.Convolution1D(nb_filter=64, filter_length=4, border_mode='same', activation='relu')(x1)

concatenated = layers.concatenate([x, x1], axis=-1)
x2 = layers.Bidirectional(layers.LSTM(100, return_sequences=False))(concatenated)
x2 = layers.Dense(n_out, activation='softmax')(x2)

big_model = Model([input_x, input_x1], x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
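A brief usage note on my part (with dummy arrays purely for illustration): since both branches read the same character sequence, just with different kernel sizes, the same array is passed for both inputs when fitting:
import numpy as np

# Dummy data: 32 sequences of token ids, and one-hot labels over n_out classes
char_sequences = np.random.randint(0, vocab_size, size=(32, maxlen))
y = np.eye(n_out)[np.random.randint(0, n_out, size=32)]

big_model.fit([char_sequences, char_sequences], y, epochs=2, batch_size=8)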
