ValueError with input_length and shape in Keras - python

So, I am trying to build a text generator using Keras. I have 35983 sentences of length 50, i.e. my training data x has a shape of (35983, 50, 1). However, I cannot figure out how to give the input shape to my model and keep running into errors. The other dimensions in my model are:
Embedding Matrix: (46, 45)
vocab_size = 45
Here is the structure of my model:
model = Sequential()
model.add(TimeDistributed(Embedding(vocab_size + 1, vocab_size, input_length=50, weights=[embedding_weights], input_shape=(x.shape[1], x.shape[2]))))
model.add(TimeDistributed(Conv1D(64, 3, activation='relu', kernel_initializer='glorot_normal')))
model.add(Dropout(0.2))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Conv1D(64, 3, activation='relu', kernel_initializer='glorot_normal')))
model.add(Dropout(0.2))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Conv1D(64, 3, activation='relu', kernel_initializer='glorot_normal')))
model.add(Dropout(0.2))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(GlobalMaxPool1D()))
model.add(Bidirectional(LSTM(256, return_sequences=True)))
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(256, return_sequences=True)))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.build((None, x.shape[1], x.shape[2]))
model.summary()
Every time I try to run the above code I get the following error:
ValueError: "input_length" is 50, but received input has shape (None, 1)
I have tried chopping and changing the shapes, but I still get a similar error and can't figure out how to get rid of it. Any help would be appreciated. Thank you.

Maybe this will help you. The error appears to occur because TimeDistributed applies the Embedding to each of the 50 timesteps separately, so the Embedding receives inputs of shape (None, 1) even though its input_length is 50. Structuring it like this worked for a similar case:
I = Input(shape=(50, 50))  # each search result is made of 50 words and 50 docs
# note: dropout=, nb_filter=, filter_length=, border_mode= and subsample_length=
# are Keras 1 argument names
emb = Embedding(max_val,
                embedding_dims,
                dropout=embedding_dropout)
right = TimeDistributed(emb)(I)
right = TimeDistributed(Convolution1D(nb_filter=5,
                                      filter_length=5,
                                      border_mode='valid',
                                      activation='relu',
                                      subsample_length=1))(right)
right = TimeDistributed(GlobalMaxPooling1D())(right)
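Alternatively, since each of the 35983 samples here is a single sequence of 50 token ids, the TimeDistributed wrappers and the trailing dimension can be dropped so the Embedding consumes (None, 50) directly. A minimal, untested Keras 2 sketch of that idea, reusing the question's x, y, vocab_size and embedding_weights:
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, Dropout, Bidirectional, LSTM, Dense
x2 = np.squeeze(x, axis=-1)  # (35983, 50): one token id per timestep
model = Sequential()
model.add(Embedding(vocab_size + 1, vocab_size, input_length=50, weights=[embedding_weights]))
model.add(Conv1D(64, 3, activation='relu', kernel_initializer='glorot_normal'))
model.add(MaxPooling1D(pool_size=2))
model.add(Bidirectional(LSTM(256)))  # no return_sequences: one prediction per sentence
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
# categorical_crossentropy assumes y is one-hot (width y.shape[1]);
# keep sparse_categorical_crossentropy if y holds integer class ids instead
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x2, y)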

Related

Error while using both sparse_categorical_crossentropy and categorical_crossentropy in keras

I have started training a basic MLP model on MNIST data taken from here. Below is my code for implementing the model.
train = pd.read_csv(r"train.csv")
test = pd.read_csv(r"test.csv")
train_img_path = "./Images/train/"
test_img_path = "./Images/test/"
train_img = []
for img in train['filename']:
    img_path = train_img_path + img
    image = imread(img_path)
    image = image / 255
    train_img.append(image)
train_img = np.array(train_img)
batch_size = 64
y_train = train['label']
from tensorflow.keras.utils import to_categorical
#y_train = to_categorical(y_train)
model = Sequential()
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_img, y_train, epochs=20, batch_size=batch_size)
While trying to fit my model on this data I get the error InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [50176,10] and labels shape [64] with loss='sparse_categorical_crossentropy'.
There were suggestions to try loss='categorical_crossentropy' after one-hot encoding the labels, but that also gives the error ValueError: Shapes (None, 10) and (None, 28, 28, 10) are incompatible.
I am confused about how I am getting the shape [50176, 10] in the error (even though there are 49000 examples).
I guess I am missing something about shapes. Can someone point out where I am going wrong and how to fix this?
Edit: I have modified my code as below to load the data with Keras' flow_from_dataframe, but I still get the same error.
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_data = train_datagen.flow_from_dataframe(
    dataframe=train,
    directory='./Images/train',
    x_col='filename',
    y_col='label',
    weight_col=None,
    target_size=(28, 28),
    color_mode='grayscale',
    class_mode='categorical',
    batch_size=64
)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
#model.summary()
model.fit(train_data, epochs=20)
The main problem is in your model building code:
model = Sequential()
model.add(Dense(10, activation = 'relu'))
model.add(Dense(10, activation = 'relu'))
model.add(Dense(10, activation = 'softmax'))
You are feeding image tensors of shape (28, 28, channels) straight into Dense layers, which act only on the last axis: a batch of 64 images therefore produces logits of shape (64, 28, 28, 10), which the sparse loss flattens to [64*28*28, 10] = [50176, 10] and then cannot match against the 64 labels. The model also never specifies an input shape.
For images, a CNN should be used instead of a plain ANN:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# Conv2D requires a kernel_size; (3, 3) is an assumed value here.
# color_mode='grayscale' above implies a single channel, hence (28, 28, 1).
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='softmax'))
If you have one-hot encoded your labels, use categorical_crossentropy. If your labels are integers, use sparse_categorical_crossentropy.
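To make the distinction concrete, here is a tiny sketch (with made-up 3-class labels) of the two label formats:
import numpy as np
from tensorflow.keras.utils import to_categorical
y_int = np.array([0, 2, 1])          # integer labels -> sparse_categorical_crossentropy
y_onehot = to_categorical(y_int, 3)  # one-hot labels -> categorical_crossentropy
# y_onehot is [[1,0,0], [0,0,1], [0,1,0]]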

Tensorflow/Keras model output is constant

I am trying to train a CNN using Keras. The input is a 128x128x3 RGB image and the output is a single value between 0 and 1 (this is not a classifier model). I have normalised the input. Initially, my model was achieving some reasonable results, getting the mean absolute error to < 0.1. As I tried to tweak the model slightly, I found the loss would plateau very quickly at around 0.23. I investigated further and found that it was outputting the same value for every input.
So I reverted my code back to when it was working, but it was no longer working. I eventually found that about 90% of the time it gets stuck at this local minimum, outputting a constant value, which I suspect is the mean of the training reference values (0.39). The other 10% of the time it behaves nicely and regresses down to an error of < 0.1. So it is basically giving qualitatively different behaviour at random, and the desired results only rarely. The strange thing is that I swear it was consistently working before.
I have tried:
Changing the input size
Increasing/decreasing the learning rate by factor of 10
Removing a couple of dense layers
Changing 'relu' to 'leaky relu'
Increasing/removing dropout
def load_data(dir):
    csv_data = get_csv_data()
    xs = []
    ys = []
    for (name, y) in csv_data:
        path = DIR + dir + "/" + name
        img = tf.keras.preprocessing.image.load_img(path)
        xs.append(tf.keras.preprocessing.image.img_to_array(img) * (1 / 255.0))
        ys.append(normalize_output(float(y)))
    return np.array(xs).reshape(len(csv_data), IMAGE_DIM, IMAGE_DIM, 3), np.array(ys).reshape(len(csv_data), 1)

def gen_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu', input_shape=(IMAGE_DIM, IMAGE_DIM, CHAN_COUNT)))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(256, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(64, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dense(16, activation='sigmoid'))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    model.compile(loss=tf.keras.losses.MeanSquaredError(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=[tf.keras.metrics.MeanAbsoluteError()])
    return model

def run():
    model = gen_model()
    xs, ys = load_data("output")
    generator = tf.keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
                                                                samplewise_center=False,
                                                                featurewise_std_normalization=False,
                                                                samplewise_std_normalization=False,
                                                                validation_split=0.1,
                                                                rotation_range=12,
                                                                horizontal_flip=True,
                                                                vertical_flip=True)
    model.fit(generator.flow(xs, ys, batch_size=32, shuffle=True),
              steps_per_epoch=len(xs) / 32,
              epochs=10,
              use_multiprocessing=False)
I rearranged the activations on the layers; the sigmoid on the hidden 16-unit Dense layer and the LeakyReLU layers stacked on already-activated layers can saturate and help the output collapse to a constant. Please give it a try:
def gen_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu', input_shape=(IMAGE_DIM, IMAGE_DIM, CHAN_COUNT)))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(256, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(64, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(16, activation='relu'))
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))  # sigmoid only on the output, which lies in [0, 1]
    model.compile(loss=tf.keras.losses.MeanSquaredError(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=[tf.keras.metrics.MeanAbsoluteError()])
    return model

TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_2:0", shape=(?, 22), dtype=float32)

I am trying to add autoencoder layers to an LSTM neural network. The input data is a pandas DataFrame with numerical features.
To do this task, I am using Keras and Python. My current code is given below.
I cannot compile the model because I seem to be mixing Keras and TensorFlow:
TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_2:0", shape=(?, 22), dtype=float32)
I am quite new to both packages, and I'd appreciate it if somebody could tell me how to fix this error.
nb_features = X_train.shape[2]
hidden_neurons = nb_classes*3
timestamps = X_train.shape[1]
NUM_CLASSES = 3
BATCH_SIZE = 32
input_size = len(col_names)
hidden_size = int(input_size/2)
code_size = int(input_size/4)
model = Sequential()
model.add(LSTM(units=hidden_neurons,
               return_sequences=True,
               input_shape=(timestamps, nb_features),
               dropout=0.15,
               recurrent_dropout=0.20))
input_vec = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(input_vec)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_vec = Dense(input_size, activation='relu')(hidden_2)
model.add(input_vec)
model.add(hidden_1)
model.add(code)
model.add(hidden_2)
model.add(output_vec)
model.add(Dense(units=100, kernel_initializer='normal'))
model.add(LeakyReLU(alpha=0.5))
model.add(Dropout(0.20))
model.add(Dense(units=200, kernel_initializer='normal', activation='relu'))
model.add(Flatten())
model.add(Dense(units=200, kernel_initializer='uniform', activation='relu'))
model.add(Dropout(0.10))
model.add(Dense(units=NUM_CLASSES, activation='softmax'))
model.compile(loss="categorical_crossentropy",
              metrics=["accuracy"],
              optimizer='adam')
The issue is that you are mixing Keras' sequential API with its functional API. To fix it, you must replace:
input_vec = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(input_vec)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_vec = Dense(input_size, activation='relu')(hidden_2)
With:
# Encoder
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(code_size, activation='relu'))
# Decoder
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(input_size, activation='relu'))
Or convert everything to the functional API.
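For the functional-API route, here is a minimal, untested sketch of the dense autoencoder plus the classification head, reusing the question's input_size, hidden_size, code_size and NUM_CLASSES (the LSTM branch is left out, since its 3-D sequence input cannot share one Sequential stack with the 2-D autoencoder input anyway):
from keras.layers import Input, Dense
from keras.models import Model
inputs = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(inputs)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_vec = Dense(input_size, activation='relu')(hidden_2)
# Classification head on top of the reconstruction
outputs = Dense(NUM_CLASSES, activation='softmax')(output_vec)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')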

Keras input_shape error

I keep receiving this error:
Error when checking target: expected dense_256 to have shape (1,) but got array with shape (10,)
I have checked my X_train variable and it has a shape of (576, 10). So I have 576 samples, each with 10 features (all of which have been scaled already).
I now try this:
classifier = Sequential()
classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu', input_shape=(10,)))
classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu'))
classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu'))
classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu'))
classifier.add(Dense(units=1, kernel_initializer='uniform', activation='relu'))
classifier.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse', 'mae', 'mape'])
classifier.fit(X_train, y_train, batch_size=10, epochs=100)
Which is when I get the input_shape error referenced above.
So, my question is: when defining input_shape, how do I set it correctly?
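The message "Error when checking target" points at y_train rather than at input_shape: input_shape=(10,) already matches the (576, 10) features, but the final Dense(units=1) expects one target value per sample. A hedged sketch of the two likely fixes, assuming y_train currently has 10 columns:
# Option 1: predict all 10 target columns -> widen the final layer
classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))
# Option 2: keep Dense(units=1) and fit against a single target column
# (assuming y_train is a NumPy array)
classifier.fit(X_train, y_train[:, 0], batch_size=10, epochs=100)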

Keras: concatenating model flattened output with vector

I have a Keras model defined as such:
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=256, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=3))
model.add(Flatten())
model.add(Dense(num_classes, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
After the Flatten() layer, I want to concatenate 2 additional features, i.e. if Flatten() gives me a vector of size (1, n) (model.output_shape == (None, n)), I want to concatenate a separate numpy array of size (1, 2) so model.output_shape == (None, n+2). How would I go about doing this?
I think keras.layers.merge.Concatenate is what I'm looking for here, but I don't know how to implement it. There aren't many examples online, and Keras 2.0 also uses an updated syntax. Any help would be appreciated.
I played around a bit and figured it out. For anyone who's interested: this is a good use case for Keras' functional API, which always returns tensors, on which you can do tensor operations.
from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense
from keras.layers.merge import Concatenate
from keras.models import Model

# sequence_input is the Input tensor feeding the embedding layer, defined earlier
embedded_sequence = embedding_layer(sequence_input)
x = Conv1D(filters=256, kernel_size=3, activation='relu', padding='same')(embedded_sequence)
x = MaxPooling1D(pool_size=3)(x)
x = Flatten()(x)
# additional features input
af_input = Input(shape=(data['af_train'].shape[1],), name='af_input')
x = Concatenate()([x, af_input])
# output
main_output = Dense(num_classes, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[sequence_input, af_input], outputs=main_output)
model.compile(loss='binary_crossentropy', optimizer='adam')
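Fitting the resulting two-input model then takes one array per Input, in the same order as passed to Model. A brief, untested sketch, where X_seq and y_labels are placeholders for the padded token-id sequences and the targets:
model.fit([X_seq, data['af_train']], y_labels, epochs=10, batch_size=32)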
I haven't tested this code, but I did something similar and it worked (it may not be the most efficient way, though):
# note: Merge and mode='concat' are the old Keras 1 API; Keras 2 replaced
# them with the Concatenate layer used in the answer above
model = Sequential()
model.add(embedding_layer)
model.add(Conv1D(filters=256, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=3))
model.add(Flatten())
auxiliary_input = Input(shape=(2,), name='aux_input')
final_model = Sequential()
final_model.add(Merge([model, auxiliary_input], mode='concat'))
final_model.add(Dense(num_classes, activation='sigmoid'))
final_model.compile(loss='binary_crossentropy', optimizer='adam')
There is also a part of the documentation that gives an example of multiple inputs (and also multiple outputs), though it uses the older API style.
