Unexpected output from Embedding layer

I've been trying to implement an LSTM in Keras for several hours (using a sequential model with an embedding layer, two LSTM layers, and a dense layer), but I wind up getting different error messages.
From what I can tell, the problem is that the output of the embedding layer has two dimensions instead of three. Adding the second LSTM layer raises ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2, and if I delete the line that adds the second LSTM layer I instead get assert len(input_shape) >= 3 AssertionError (which means the dense layer has the same issue).
These errors occur before I call the model's fit method.
Here is my code:
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.layers import Embedding
from keras.layers import TimeDistributed
from keras.preprocessing import text
from keras.preprocessing.sequence import pad_sequences
# The data in X was preprocessed using Keras' built in pad_sequences.
# Before preprocessing, it consisted of plain lists of integers (which
# were just integers with a one-to-one map to plain words as strings)
X = pad_sequences(X)
model = Sequential()
model.add(Embedding(batch_size=32, input_dim=len(filtered_vocabulary)+1, output_dim=256, input_length=38))
model.add(LSTM(128))
model.add(LSTM(128)) # error occurs in this line
model.add(TimeDistributed(Dense(len(filtered_vocabulary)+1, activation="softmax")))
model.compile(optimizer = "rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, X, epochs=60, batch_size=32)
I'd be glad if any of you could help me out with this.

The error occurs because you haven't specified return_sequences=True in the LSTM layers, so the first LSTM layer only returns its last output and the next layer receives a 2-D tensor. This looks like a multi-output (sequence-to-sequence) setup, so you need to return the LSTM output at every time step by passing that argument to both layers.
To be clear: without return_sequences=True the output shape is (None, 128); with it, the shape is (None, 38, 128).
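For reference, here is a minimal sketch of the corrected model with return_sequences=True on both LSTM layers (layer sizes and vocabulary are taken from your code; I have left out the batch_size argument):
model = Sequential()
model.add(Embedding(input_dim=len(filtered_vocabulary)+1, output_dim=256, input_length=38))
model.add(LSTM(128, return_sequences=True))  # output shape: (None, 38, 128)
model.add(LSTM(128, return_sequences=True))  # output shape: (None, 38, 128)
model.add(TimeDistributed(Dense(len(filtered_vocabulary)+1, activation="softmax")))
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])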

TF Keras: ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2

So I have sequences of 2D Vectors that form patterns. I want to predict how the sequence continues.
I have a start_xy array consisting of arrays with the order, start_x and start_y:
e.g. [1, 2.4, 3.8]
and the same for the end_xy.
I want to train a sequence prediction model:
import numpy as np
import pickle
import keras
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ModelCheckpoint
import training_data_generator
tdg = training_data_generator.training_data_generator(500)
trainingdata = tdg.produceTrainingSequences()
print("Printing DATA!:")
start_xy =[]
end_xy =[]
for batch in trainingdata:
    for pattern in batch:
        order = 1
        for sequence in pattern:
            start = [order, sequence[0], sequence[1]]
            start_xy.append(start)
            end = [order, sequence[2], sequence[3]]
            end_xy.append(end)
            order = order + 1
model = Sequential()
model.add(LSTM(64, return_sequences=False, input_shape=(2,len(start_xy))))
model.add(Dense(2, activation='relu'))
model.compile(loss='mse', optimizer='adam')
model.fit(start_xy,end_xy,batch_size=len(start_xy), epochs=5000, verbose=2)
But I get the error message:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [320, 3]
I suspect I have to reshape my inputs somehow, but I don't yet understand how.
How do I make this work?
Am I even doing this the right way?
You mostly just have to convert your data into numpy arrays and do some reshaping of that data so that the model will accept it.
First convert start_xy into a numpy array and reshape it to have 3 dims:
start_xy = np.array(start_xy)
start_xy = start_xy.reshape(*start_xy.shape, 1)
Next fix the input shape for the LSTM layer to be [3, 1]:
model.add(LSTM(64, return_sequences=False, input_shape=start_xy.shape[1:]))
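To sanity-check the shapes before calling fit, you can print them (a quick sketch; the 320 samples come from your error message):
start_xy = np.array(start_xy)
start_xy = start_xy.reshape(*start_xy.shape, 1)
print(start_xy.shape)      # e.g. (320, 3, 1) -> (samples, time_steps, features)
print(start_xy.shape[1:])  # (3, 1) -> what the LSTM's input_shape should be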
Let me know if the error persists or if another one comes up!

How to train model to add new classes?

My trained model has 10 classes (i.e. the output layer has 10 units). I want to add 3 more classes to it without training the whole model again.
I want to reuse the old trained model and add the new classes to it.
This is the code I have already tried, but it shows an error.
from keras.models import load_model
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
base_model = load_model('hand_gest.h5')
new_model = Sequential()
for layer in base_model.layers[:-2]:
    new_model.add(layer)
for layer in new_model.layers:
    layer.trainable = False
weights_training = base_model.layers[-2].get_weights()
new_model.layers[-2].set_weights(weights_training)
new_model.add(Dense(units = 3, activation = 'softmax'))
But when I train this model it shows the following error.
ValueError: You called `set_weights(weights)` on layer "max_pooling2d_2" with a weight list of length 2, but the layer was expecting 0 weights. Provided weights: [array([[-0.01650696, 0.01082378, 0.0149541 , .....
As the number of classes changes from 10 to 13, the last layer of the previous network needs to be replaced.
base_model = load_model('hand_gest.h5')
base_model.pop() #remove the last layer - 'Dense' layer with 10 units
for layer in base_model.layers:
    layer.trainable = False
base_model.add(Dense(units = 13, activation = 'softmax'))
base_model.summary() #Check architecture before starting the fine-tuning
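After that, the model just needs to be recompiled before fine-tuning on the new data (a minimal sketch; new_images and new_labels are hypothetical placeholders for your 13-class dataset, with labels assumed to be one-hot encoded):
base_model.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
# Only the new 13-unit Dense layer is trainable, so this fits just the new head.
base_model.fit(new_images, new_labels, epochs=10, batch_size=32)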

Python (Keras): Value Error: Error when checking input

I'm trying to train a CNN on word vectors generated using the gensim library. After I have generated all of my data in numeric form, I try to pass it on to a CNN model using Keras when I get the following error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (20000, 250, 50)
I've been searching this problem for hours, and none of the solutions posted for similar issues have resolved the error for me. Can anyone see where I'm going wrong with the input dimensions? I've generated some random numpy data that recreates the error:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Convolution2D, Flatten, Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.callbacks import TensorBoard
t = np.random.rand(20000,250,50)
l = np.random.rand(20000,1)
embedding_vecor_length = 50
net = Sequential()
net.add(Convolution2D(64, 3,input_shape=(1,250,50),
data_format='channels_first'))
# Convolutional model (3x conv, flatten, 2x dense)
net.add(Convolution2D(32,(3), padding='same'))
net.add(Convolution2D(16,(3), padding='same'))
net.add(Convolution2D(8,(3), padding='same'))
net.add(Flatten())
net.add(Dropout(0.2))
net.add(Dense(180,activation='sigmoid'))
net.add(Dropout(0.2))
net.add(Dense(1,activation='sigmoid'))
net.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tensorBoardCallback = TensorBoard(log_dir='./logs', write_graph=True)
net.summary()
net.fit(t, l, epochs=3, callbacks=[tensorBoardCallback], batch_size=64)
2D convolutions expect 4-dimensional input. Considering you're using "channels_first", the dimensions are:
Images
Channels
Side 1
Side 2
Your input is missing the channels dimension.
t = np.random.rand(20000,1,250,50)
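If your real word-vector data already comes out of gensim as a 3-D array, an equivalent fix is to insert the missing channel axis explicitly (a small sketch using numpy):
t = np.expand_dims(t, axis=1)  # (20000, 250, 50) -> (20000, 1, 250, 50)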

Concatenating features of two pooling layers

I am trying to design a Bi-Directional LSTM model and I want to concatenate features after Max pooling and Average pooling layers.
I have this for my model:
from keras.layers import Dense, Embedding
from keras.layers.recurrent import LSTM
from keras.layers import Bidirectional
from keras.models import Sequential
from keras.layers.core import Dropout
from features import train,embedding_matrix,words
from keras.layers import concatenate,AveragePooling1D,GlobalMaxPooling1D
model=Sequential()
model.add(Embedding(words,300,input_length=train.shape[1],weights=[embedding_matrix]))
model.add(Bidirectional(LSTM(20,activation='tanh',kernel_initializer='glorot_uniform',recurrent_dropout = 0.2, dropout = 0.2,return_sequences=True)))
model.add(concatenate([GlobalMaxPooling1D(),AveragePooling1D()]))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
print model.summary()
But I am getting:
ValueError: Layer concatenate_1 was called with an input that isn't a symbolic tensor, which I believe is because of the concatenate layer, since I am not adding the pooling layers to the model.
Can I add the two pooling layers in the same model, or should I define two separate models and add a pooling layer to each of them?
The trick here is to use a graph model instead of a sequential model.
Before we get started, I assume
your network expects a 2D input tensor of shape (B=batch_size, N=num_of_words), where N is the longest sample length of your training data. (In case you have unequal length samples, you should use keras.preprocessing.sequence.pad_sequences to achieve equal length samples)
your vocabulary size is V (the words variable in your code)
your embedding layer encodes each word to a feature vector of dimension F (300 in your code), i.e. your embedding layer's weight matrix is VxF.
from keras.layers import Dense, Embedding, Input, Concatenate, Lambda
from keras.layers.recurrent import LSTM
from keras.layers import Bidirectional
from keras.models import Model
from keras.layers.core import Dropout
from keras import backend as BKN
from keras.layers import concatenate,AveragePooling1D,GlobalMaxPooling1D
words = Input(shape=(N,))
f = Embedding(input_dim=V, output_dim=F)(words)
f = Bidirectional(LSTM(20, activation='tanh',
                       kernel_initializer='glorot_uniform',
                       recurrent_dropout=0.2,
                       dropout=0.2, return_sequences=True))(f)
gpf = GlobalMaxPooling1D()(f)
gpf = Lambda(lambda t: BKN.expand_dims(t, axis=1))(gpf)
apf = AveragePooling1D(pool_size=2)(f)
pf = Concatenate(axis=1)([gpf, apf])
pf = Dropout(0.2)(pf)
pred = Dense(2, activation='softmax')(pf)  # <-- make sure this is correct
model = Model(input=words, output=pred)
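For completeness, the functional model is compiled and trained like any other Keras model (a sketch; the loss and the X_train/y_train names are placeholders I am assuming, not taken from your code):
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# model.fit(X_train, y_train, batch_size=32, epochs=10)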
Finally, I could not find documentation confirming that the Keras Embedding layer supports syntax like weights=[embedding_matrix].

merging recurrent layers with dense layer in Keras

I want to build a neural network where the first two layers are feedforward and the last one is recurrent.
Here is my code:
model = Sequential()
model.add(Dense(150, input_dim=23,init='normal',activation='relu'))
model.add(Dense(80,activation='relu',init='normal'))
model.add(SimpleRNN(2,init='normal'))
adam =OP.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss="mean_squared_error", optimizer="rmsprop")
and I get this error:
Exception: Input 0 is incompatible with layer simplernn_11: expected ndim=3, found ndim=2.
model.compile(loss='mse', optimizer=adam)
It is correct that in Keras, an RNN layer expects input of shape (nb_samples, time_steps, input_dim). However, if you want to add an RNN layer after a Dense layer, you can still do so by reshaping the input for the RNN layer. Reshape can be used both as a first layer and as an intermediate layer in a Sequential model. Examples are given below:
Reshape as first layer in a Sequential model
model = Sequential()
model.add(Reshape((3, 4), input_shape=(12,)))
# now: model.output_shape == (None, 3, 4)
# note: `None` is the batch dimension
Reshape as an intermediate layer in a Sequential model
model.add(Reshape((6, 2)))
# now: model.output_shape == (None, 6, 2)
For example, if you change your code in the following way, there will be no error. I have checked it and the model compiles without any error. You can change the dimensions as per your needs.
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, Reshape
from keras.optimizers import Adam
model = Sequential()
model.add(Dense(150, input_dim=23,init='normal',activation='relu'))
model.add(Dense(80,activation='relu',init='normal'))
model.add(Reshape((1, 80)))
model.add(SimpleRNN(2,init='normal'))
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss="mean_squared_error", optimizer="rmsprop")
In Keras, you cannot put a recurrent layer directly after a Dense layer because the Dense layer's output has shape (nb_samples, output_dim), i.e. it is 2-D, while a recurrent layer expects a 3-D input of shape (nb_samples, time_steps, input_dim). You can, however, do the reverse, i.e. put a Dense layer after a recurrent layer.
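For example, a minimal sketch of the working direction, with the recurrent layer first and the Dense layer after it (the layer sizes and input_shape values are chosen just for illustration):
model = Sequential()
model.add(SimpleRNN(32, input_shape=(10, 23)))  # (time_steps, input_dim)
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer='adam')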
