I want to replace the Dense_out layer with a convolutional one. Can anybody tell me how to do it?
My code:
model = Sequential()
conv_1 = Conv2D(filters = 32,kernel_size=(3,3),activation='relu')
model.add(conv_1)
conv_2 = Conv2D(filters=64,kernel_size=(3,3),activation='relu')
model.add(conv_2)
pool = MaxPool2D(pool_size = (2,2),strides = (2,2), padding = 'same')
model.add(pool)
drop = Dropout(0.5)
model.add(drop)
model.add(Flatten())
Dense_1 = Dense(128,activation = 'relu')
model.add(Dense_1)
Dense_out = Dense(57,activation = 'softmax')
model.add(Dense_out)
model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy'])
model.fit(train_image,train_label,epochs=10,verbose = 1,validation_data=(test_image,test_label))
print(model.summary())
When I try this code:
model = Sequential()
conv_01 = Conv2D(filters = 32,kernel_size=(3,3),activation='relu')
model.add(conv_01)
conv_02 = Conv2D(filters=64,kernel_size=(3,3),activation='relu')
model.add(conv_02)
pool = MaxPool2D(pool_size = (2,2),strides = (2,2), padding = 'same')
model.add(pool)
conv_11 = Conv2D(filters=64,kernel_size=(3,3),activation='relu')
model.add(conv_11)
pool_2 = MaxPool2D(pool_size=(2,2),strides=(2,2),padding='same')
model.add(pool_2)
drop = Dropout(0.3)
model.add(drop)
model.add(Flatten())
Dense_1 = Dense(128,activation = 'relu')
model.add(Dense_1)
Dense_2 = Dense(64,activation = 'relu')
model.add(Dense_2)
conv_out = Conv2D(filters= 64,kernel_size=(3,3),activation='relu')
model.add(conv_out)
model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy'])
model.fit(train_image,train_label,epochs=10,verbose = 1,validation_data=(test_image,test_label))
I get the following error:
ValueError: Input 0 of layer conv2d_3 is incompatible with the layer:
expected ndim=4, found ndim=2. Full shape received: [None, 64]
I am new at this, so an explanation would greatly help.
You will need to reshape the 2D output of your Dense layer back into a 4D tensor of shape (batch, height, width, channels) before you can apply a Conv2D layer.
You can use:
out = keras.layers.Reshape(target_shape)  # target_shape = (height, width, channels)
model.add(out)
and then do the convolution:
conv_out = Conv2D(filters=3,kernel_size=(3,3),activation='softmax')
model.add(conv_out)
with filters being the number of channels you want in your output layer (3 for RGB).
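Putting this together with the model above, a minimal sketch (the target_shape of (4, 4, 8) is one hypothetical factorization of the 128 units coming out of Dense_1; any shape that multiplies out to 128 works):
from tensorflow.keras.layers import Reshape, Conv2D

model.add(Reshape((4, 4, 8)))  # reshape (None, 128) into (None, 4, 4, 8)
conv_out = Conv2D(filters=3, kernel_size=(3, 3), activation='softmax')
model.add(conv_out)  # output shape: (None, 2, 2, 3)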
More info about the layers and parameters can be found in the Keras documentation.
Related
What I am trying to do is create a text classification model which combines CNNs and word embeddings. The basic idea is that we have an Embedding layer at the start of the network and then 2 parallel convolutional networks to find 2- and 3-grams.
Each of these convolution layers takes the output of the embedding layer as input.
The outputs of the two CNN layers are then concatenated, flattened and fed to a Dense layer.
My input is tokenized, numerical sentences of length 27 (shape = (None, 27)) and I have 1244 of these sentences.
I've managed to create a sequential model with a single CNN layer but struggle with the above.
My code so far:
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])
tower_1 = Conv1D(filters=100, kernel_size=2, activation='relu')(input_shape)
tower_1 = MaxPooling1D(pool_size=2)(tower_1)
tower_2 = Conv1D(filters=100, kernel_size=3, activation='relu')(input_shape)
tower_2 = MaxPooling1D(pool_size=2)(tower_2)
merged = keras.layers.concatenate([tower_1, tower_2,], axis=1)
merged = Flatten()(merged)
out = Dense(3, activation='softmax')(merged)
model = Model(input_shape, out)
This produces this error:
TypeError: Inputs to a layer should be tensors. Got: <keras.layers.embeddings.Embedding object at 0x7fadca016dd0>
I have also tried replacing
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])
with:
input_tensor = Input(shape=(1244,27))
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])(input_tensor)
which gives me this error:
ValueError: Input 0 of layer "max_pooling1d_23" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 1244, 26, 100)
You should define your Input layer without the number of samples. Just the sentence length:
import tensorflow as tf
inputs = tf.keras.layers.Input((27,))
embedded = tf.keras.layers.Embedding(50, 100, input_length=27)(inputs)
tower_1 = tf.keras.layers.Conv1D(filters=100, kernel_size=2, activation='relu')(embedded)
tower_1 = tf.keras.layers.MaxPooling1D(pool_size=2)(tower_1)
tower_2 = tf.keras.layers.Conv1D(filters=100, kernel_size=3, activation='relu')(embedded)
tower_2 = tf.keras.layers.MaxPooling1D(pool_size=2)(tower_2)
merged = tf.keras.layers.concatenate([tower_1, tower_2,], axis=1)
merged = tf.keras.layers.Flatten()(merged)
out = tf.keras.layers.Dense(3, activation='softmax')(merged)
model = tf.keras.Model(inputs, out)
print(model.summary())
Usage:
samples = 5
random_input = tf.random.uniform((samples, 27), maxval=50, dtype=tf.int32)
print(model(random_input))
tf.Tensor(
[[0.31525075 0.33163014 0.3531191 ]
[0.3266019 0.3295619 0.34383622]
[0.32351935 0.32669052 0.34979013]
[0.32954428 0.33178467 0.33867106]
[0.32966062 0.3283257 0.34201372]], shape=(5, 3), dtype=float32)
Assume I have two features: x1 and x2. Here, x1 is a vector of word indices and x2 is a vector of numerical values. The lengths of x1 and x2 are both equal to 50, and there are 6000 rows for each. I combine these two into one such as
X = np.array([np.row_stack((x1[i], x2[i])) for i in range(x1.shape[0])])
My initial LSTM model is
X_input = Input(shape = (50, 2), name = "X_seq")
X_hidden1 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_input)
X_hidden2 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_hidden1)
X_hidden3 = LSTM(units = 128, dropout = 0.25)(X_hidden2)
X_dense = Dense(units = 128, activation = 'relu')(X_hidden3)
X_dense_dropout = Dropout(0.25)(X_dense)
concat = tf.keras.layers.concatenate(inputs = [X_dense_dropout])
output = Dense(units = num_category, activation = 'softmax', name = "output")(concat)
model = tf.keras.Model(inputs = [X_input], outputs = [output])
model.compile(optimizer = 'adam', loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
However, I know I need to have an embedding layer to take care of X[0,:] right below the Input layer. Thus, I modified the code above to
X_input = Input(shape = (50, 2), name = "X_seq")
x1_embedding = Embedding(input_dim = max_pages, output_dim = embedding_dim, input_length = max_length)(X_input[0,:])
X_concat = tf.keras.layers.concatenate(inputs = [x1_embedding, X_input[1,:]])
X_hidden1 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_concat)
X_hidden2 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_hidden1)
X_hidden3 = LSTM(units = 128, dropout = 0.25)(X_hidden2)
X_dense = Dense(units = 128, activation = 'relu')(X_hidden3)
X_dense_dropout = Dropout(0.25)(X_dense)
concat = tf.keras.layers.concatenate(inputs = [X_dense_dropout])
output = Dense(units = num_category, activation = 'softmax', name = "output")(concat)
model = tf.keras.Model(inputs = [X_input], outputs = [output])
model.compile(optimizer = 'adam', loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
Python shows this error:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 2, 15), (None, 2)]
Any suggestions? Many thanks.
The problem is that the inputs to the concat layer have different ranks, and hence we can't concatenate them. To overcome this we can reshape the second input to the concat layer using tf.keras.layers.Reshape as below, and the rest stays the same.
reshaped_input = tf.keras.layers.Reshape((-1,1))(X_input[:, 1])
X_concat = tf.keras.layers.concatenate(inputs = [x1_embedding, reshaped_input])
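For reference, here is a minimal end-to-end sketch under the question's shapes (50 timesteps, with the word index in channel 0 and the numeric value in channel 1; max_pages, embedding_dim and num_category are hypothetical placeholders):
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, Reshape, LSTM, Dense, Dropout

max_pages, embedding_dim, num_category = 1000, 15, 5  # hypothetical values

X_input = Input(shape=(50, 2), name="X_seq")
word_ids = X_input[:, :, 0]  # (None, 50) word indices
numeric = X_input[:, :, 1]   # (None, 50) numerical feature
x1_embedding = Embedding(max_pages, embedding_dim)(word_ids)  # (None, 50, 15)
reshaped_input = Reshape((-1, 1))(numeric)                    # (None, 50, 1)
X_concat = tf.keras.layers.concatenate([x1_embedding, reshaped_input])  # (None, 50, 16)
x = LSTM(256, dropout=0.25, return_sequences=True)(X_concat)
x = LSTM(256, dropout=0.25, return_sequences=True)(x)
x = LSTM(128, dropout=0.25)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.25)(x)
output = Dense(num_category, activation='softmax', name="output")(x)
model = tf.keras.Model(X_input, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])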
I have been working on a project for estimating traffic flow using time series data combined with weather data. I am using a window of 30 values for my time series and 20 weather-related features. I have used the functional API to implement this, but I keep getting the same error and I do not know how it can be solved. I have looked at other similar threads such as this one: Input 0 of layer conv1d_1 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 200], but it has not helped.
This is my model:
series_input = Input(shape = (series_input_train.shape[1], ), name = 'series_input')
x = Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu")(series_input)
x = LSTM(32, return_sequences = True)(x)
x = LSTM(32, return_sequences = True)(x)
x = Dense(1, activation = 'relu')(x)
series_output = Lambda(lambda w: w * 200)(x)
weather_input = Input(shape = (weather_input_train.shape[1], ), name = 'weather_input')
x = Dense(32, activation = 'relu')(weather_input)
x = Dense(32, activation = 'relu')(x)
weather_output = Dense(1, activation = 'relu')(x)
concatenate = concatenate([series_output, weather_output], axis=1, name = 'concatenate')
output = Dense(1, name = 'output')(concatenate)
model = Model([series_input, weather_input], output)
The shapes of series_input_train and weather_input_train are (34970, 30) and (34970, 20) respectively.
The error I keep getting is this one:
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 30)
What am I doing wrong?
Honestly, I have always had trouble figuring out how input shapes work in TensorFlow. If you could point me in the right direction, it would be appreciated, but what I need right now is a fix for my model.
As Tao-Lung says, the first input to a convolutional layer must be 3-dimensional.
1D convolution on sequences expects a 3D input: for each element of the batch, for each time step, a single feature vector.
If you want each time step to carry a single value, you can solve your problem as follows:
series_input = Input(shape = (series_input_train.shape[1], 1), name = 'series_input')
and
x = Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu")(series_input)
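Equivalently, you can keep the original (30,) Input and add the channel axis inside the model (a sketch; assumes Reshape is imported from tensorflow.keras.layers):
series_input = Input(shape = (series_input_train.shape[1],), name = 'series_input')  # (None, 30)
x = Reshape((series_input_train.shape[1], 1))(series_input)                          # (None, 30, 1)
x = Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu")(x)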
Problem
Conv1D expects a 3+D tensor with shape: batch_shape + (steps, input_dim); see
https://keras.io/api/layers/convolution_layers/convolution1d/
Solution
You don't specify the "steps" dimension.
If "steps" is 1, use the following code:
import numpy as np
series_input_train = np.expand_dims(series_input_train, axis=1)
weather_input_train = np.expand_dims(weather_input_train, axis=1)
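With "steps" equal to 1, the matching Input declarations in the model would then be (a sketch following the expand_dims above):
series_input = Input(shape = (1, series_input_train.shape[2]), name = 'series_input')    # (None, 1, 30)
weather_input = Input(shape = (1, weather_input_train.shape[2]), name = 'weather_input') # (None, 1, 20)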
Text classification by extracting tri-gram and quad-gram features from character-level inputs, using multiple concatenated CNN layers and passing them to a BLSTM layer.
submodels = []
for kw in (3, 4): # kernel sizes
model = Sequential()
model.add(Embedding(vocab_size, 16, input_length=maxlen, input_shape=(maxlen, vocab_size)))
model.add(Convolution1D(nb_filter=64, filter_length=kw,
border_mode='valid', activation='relu'))
submodels.append(model)
big_model = Sequential()
big_model.add(keras.layers.Concatenate(submodels))
big_model.add(Bidirectional(LSTM(100, return_sequences=False)))
big_model.add(Dense(n_out,activation='softmax'))
Model summary of individual conv layers:
Layer (type)              Output Shape      Param #
------------------------  ----------------  -------
embedding_49 (Embedding)  (None, 1024, 16)  592
conv1d_41 (Conv1D)        (None, 1024, 64)  4160
But I am getting this error:
ValueError: Input 0 is incompatible with layer conv1d_22: expected
ndim=3, found ndim=4
UPDATE: NOW USING THE FUNCTIONAL KERAS API
x = Input(shape=(maxlen,vocab_size))
x=Embedding(vocab_size, 16, input_length=maxlen)(x)
x=Convolution1D(nb_filter=64, filter_length=3,border_mode='same',
activation='relu')(x)
x1 = Input(shape=(maxlen,vocab_size))
x1=Embedding(vocab_size, 16, input_length=maxlen)(x1)
x1=Convolution1D(nb_filter=64, filter_length=4,border_mode='same',
activation='relu')(x1)
x2 = Bidirectional(LSTM(100, return_sequences=False))
x2=Dense(n_out,activation='softmax')(x2)
big_model = Model(input=keras.layers.Concatenate([x,x1]),output=x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
metrics=['accuracy'])
Still the same error!
from keras import Input, Model, layers

vocab_size = 1000
maxlen = 100
n_out = 1000

input_x = Input(shape=(None,))
x = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x)
x = layers.Conv1D(filters=64, kernel_size=3, padding='same', activation='relu')(x)

input_x1 = Input(shape=(None,))
x1 = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x1)
x1 = layers.Conv1D(filters=64, kernel_size=4, padding='same', activation='relu')(x1)

concatenated = layers.concatenate([x, x1], axis=-1)
x2 = layers.Bidirectional(layers.LSTM(100, return_sequences=False))(concatenated)
x2 = layers.Dense(n_out, activation='softmax')(x2)

big_model = Model([input_x, input_x1], outputs=x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
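A quick sanity check of the graph with hypothetical random integer batches (two samples of length maxlen):
import numpy as np
a = np.random.randint(0, vocab_size, size=(2, maxlen))
b = np.random.randint(0, vocab_size, size=(2, maxlen))
print(big_model.predict([a, b]).shape)  # (2, 1000): one softmax distribution per sample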
I am quite new to machine learning and I am currently working on a "car value predictor" application. I am stuck where I have to feed my data to my model. I have 4 inputs:
date: the car's first registration date (int)
km: the car's mileage meter (int)
consume: the car's consume type (a one-hot encoded vector with 10 elements, e.g. for petrol: [1 0 0 0 0 0 0 0 0 0])
type: the car's type (for example "BMW-320", stored in a one-hot encoded vector with 440 elements)
and one output:
the price of the car.
I would like to do something similar to this: https://imgur.com/wlvffn7
I have tried the following code which compiles but the output is not what I wanted:
model = Sequential([
Dense(128, input_shape=(1,), activation='relu', name='date'),
Dense(128, input_shape=(1,), activation='relu', name='km'),
Dense(128, input_shape=(10,), activation='relu', name='consume'),
Dense(128, input_shape=(440,), activation='relu', name='type'),
Dropout(0.5),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(1, activation='linear')
])
model.compile(loss='mse', optimizer='adam')
model.fit( x = {'date' : samples_train['input'][:,0],
'km' : samples_train['input'][:,1],
'consume':samples_train['input'][:,2],
'type':samples_train['input'][:,3]},
y = samples_train['output'],
epochs=1000,
batch_size=16,
verbose=1,
validation_data = ({'date' : samples_valid['input'][:,0],
'km' : samples_valid['input'][:,1],
'consume':samples_valid['input'][:,2],
'type':samples_valid['input'][:,3]}, samples_valid['output']),
callbacks=callbacks)
Can anyone point out what I am doing wrong, or how I can implement a model "structure" like the one in the picture?
EDIT:
I think this is what I was looking for. Can anyone confirm this? :)
input_1 = Input(shape=(1,), name='date') # input layers
input_2 = Input(shape=(1,), name='km')
input_3 = Input(shape=(10,), name='consume')
input_4 = Input(shape=(440,), name='type')
dense_1 = Dense(256, activation='relu')(input_1) # hidden layers
dense_1 = Dense(256, activation='relu')(input_2)
dense_1 = Dense(256, activation='relu')(input_3)
dense_1 = Dense(256, activation='relu')(input_4)
dropout_1 = Dropout(0.5)(dense_1)
dense_2 = Dense(256, activation='relu')(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
outputs = Dense(1, activation='linear')(dropout_2) # output layer
model = Model([input_1,input_2,input_3,input_4], outputs)
Thank you in advance.
I think your second implementation is wrong.
By implementing it like that, dense_1 will only hold the value from its last assignment, dense_1 = Dense(256, activation='relu')(input_4), thus not taking into account the rest of the inputs for the rest of the network.
What you should do is concatenate your inputs into a single tensor before feeding it to the first dense layer, like this:
from keras.layers import Concatenate
input_1 = Input(shape=(1,), name='date') # input layers
input_2 = Input(shape=(1,), name='km')
input_3 = Input(shape=(10,), name='consume')
input_4 = Input(shape=(440,), name='type')
x = Concatenate()([input_1 , input_2 , input_3 , input_4]) # Concatenation of the inputs.
dense_1 = Dense(256, activation='relu')(x) # hidden layers
dropout_1 = Dropout(0.5)(dense_1)
dense_2 = Dense(256, activation='relu')(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
outputs = Dense(1, activation='linear')(dropout_2) # output layer
model = Model([input_1,input_2,input_3,input_4], outputs)
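Compiling and training then work with the same named-input dictionary as in the question (a sketch reusing the question's variable names):
model.compile(loss='mse', optimizer='adam')
model.fit(x = {'date' : samples_train['input'][:,0],
               'km' : samples_train['input'][:,1],
               'consume' : samples_train['input'][:,2],
               'type' : samples_train['input'][:,3]},
          y = samples_train['output'],
          epochs=1000, batch_size=16, verbose=1)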