Python (Keras): ValueError: Error when checking input

I'm trying to train a CNN on word vectors generated using the gensim library. After I have generated all of my data in numeric form, I try to pass it to a CNN model built with Keras, and I get the following error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (20000, 250, 50)
I've searched this problem for hours, and none of the solutions posted for similar issues has fixed this error for me. Can anyone see where I'm going wrong with the input dimensions? I've generated some random numpy data that recreates the error:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Convolution2D, Flatten, Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.callbacks import TensorBoard
# Random data shaped like the real word-vector input: (samples, words, vector_dim)
t = np.random.rand(20000,250,50)
l = np.random.rand(20000,1)
embedding_vector_length = 50
net = Sequential()
net.add(Convolution2D(64, 3, input_shape=(1,250,50),
                      data_format='channels_first'))
# Convolutional model (3x conv, flatten, 2x dense)
net.add(Convolution2D(32,(3), padding='same'))
net.add(Convolution2D(16,(3), padding='same'))
net.add(Convolution2D(8,(3), padding='same'))
net.add(Flatten())
net.add(Dropout(0.2))
net.add(Dense(180,activation='sigmoid'))
net.add(Dropout(0.2))
net.add(Dense(1,activation='sigmoid'))
net.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tensorBoardCallback = TensorBoard(log_dir='./logs', write_graph=True)
net.summary()
net.fit(t, l, epochs=3, callbacks=[tensorBoardCallback], batch_size=64)

2D convolutions operate on 4-dimensional input. Considering you're using "channels_first", the expected axes are:
Images (the batch)
Channels
Side 1 (height)
Side 2 (width)
Your input is missing the channels axis.
t = np.random.rand(20000,1,250,50)
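If the real word-vector array is already in memory as a 3D array, it doesn't have to be regenerated; the missing channels axis can be inserted with NumPy. A minimal sketch (the axis position assumes channels_first, matching the question's model):

import numpy as np

t = np.random.rand(20000, 250, 50)   # (images, side 1, side 2) - no channels axis yet
t = np.expand_dims(t, axis=1)        # -> (20000, 1, 250, 50): (images, channels, side 1, side 2)
print(t.shape)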

Related

How do I format the Dense Layer Input Shape?

So I have been reading other posts about Dense layers and input shapes, and unfortunately I'm just not grasping how to adjust the input shape. I am trying to replicate this model:
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
import tensorflow as tf

model = Sequential()
model.add(tf.keras.Input(shape=input_shape))
model.add(Dense(64, activation='tanh'))
model.add(Dense(64, activation='tanh'))
model.add(Dropout(0.15))
model.compile(loss=root_mean_squared_error,
              optimizer=tf.keras.optimizers.Adam(learning_rate))
My inputs have been in batches of 168 elements with 3 features each. To my understanding (which is very limited; I've been learning what I can as I go), this would leave me with an input shape of (168, 3). When I use that, the error that comes out is:
ValueError: Dimensions must be equal, but are 64 and 3 for '{{node root_mean_squared_error/sub}} =
Sub[T=DT_FLOAT](sequential/dropout/dropout/Mul_1, Cast)' with input shapes: [?,168,64], [?,1,3].
Is there something I am missing? When I do the same thing with an LSTM model, I just put the 'input_shape' variable as a parameter in the first LSTM layer. Thank you in advance for helping me, and maybe pointing me in the right direction.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout

model = Sequential()
model.add(tf.keras.Input(shape=(168,3)))
model.add(Dense(64, activation='tanh'))
model.add(Dense(64, activation='tanh'))
model.add(Dropout(0.15))
model.compile(loss="mean_squared_error",
              optimizer=tf.keras.optimizers.Adam(0.1))
print(model.summary())

x = tf.constant(tf.random.normal(shape=(168,3)))
y = tf.constant(tf.random.normal(shape=(168,1)))
model.fit(x=x, y=y)
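For what it's worth, the mismatch in the error comes from declaring the 168 elements as part of the input shape: with Input(shape=(168, 3)), Keras treats each sample as a 168x3 matrix, so Dense(64) produces output of shape (batch, 168, 64), which can't be compared against targets of shape (168, 1). A minimal sketch of one possible reading, assuming each of the 168 rows is an independent sample with 3 features and a scalar target:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential()
model.add(tf.keras.Input(shape=(3,)))   # 3 features per sample; the batch of 168 stays implicit
model.add(Dense(64, activation='tanh'))
model.add(Dense(64, activation='tanh'))
model.add(Dropout(0.15))
model.add(Dense(1))                     # scalar output to line up with y's shape (168, 1)
model.compile(loss="mean_squared_error",
              optimizer=tf.keras.optimizers.Adam(0.1))

x = tf.random.normal(shape=(168, 3))
y = tf.random.normal(shape=(168, 1))
model.fit(x=x, y=y)                     # output (batch, 1) now lines up with y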

TF Keras: ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2

So I have sequences of 2D Vectors that form patterns. I want to predict how the sequence continues.
I have a start_xy array consisting of arrays with the order, start_x and start_y:
e.g. [1, 2.4, 3.8]
and the same for the end_xy.
I want to train a sequence prediction model:
import numpy as np
import pickle
import keras
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ModelCheckpoint
import training_data_generator

tdg = training_data_generator.training_data_generator(500)
trainingdata = tdg.produceTrainingSequences()

print("Printing DATA!:")
start_xy = []
end_xy = []
for batch in trainingdata:
    for pattern in batch:
        order = 1
        for sequence in pattern:
            start = [order, sequence[0], sequence[1]]
            start_xy.append(start)
            end = [order, sequence[2], sequence[3]]
            end_xy.append(end)
            order = order + 1

model = Sequential()
model.add(LSTM(64, return_sequences=False, input_shape=(2, len(start_xy))))
model.add(Dense(2, activation='relu'))
model.compile(loss='mse', optimizer='adam')

model.fit(start_xy, end_xy, batch_size=len(start_xy), epochs=5000, verbose=2)
But I get the error message:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [320, 3]
I suspect I have to reshape my inputs somehow, but I don't yet understand how.
How do I make this work?
Am I even doing this the right way?
You mostly just have to convert your data into numpy arrays and do some reshaping of that data so that the model will accept it.
First convert start_xy into a numpy array and reshape it to have 3 dims:
start_xy = np.array(start_xy)
start_xy = start_xy.reshape(*start_xy.shape, 1)
Next, fix the input shape for the LSTM layer to be (3, 1):
model.add(LSTM(64, return_sequences=False, input_shape=start_xy.shape[1:]))
Let me know if the error persists or if another one comes up!
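Putting both changes together, a minimal sketch (trimming the order column from end_xy is my own assumption here, since the two-unit Dense output can't match three-column targets):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# start_xy / end_xy are the Python lists built in the question
start_xy = np.array(start_xy).reshape(-1, 3, 1)   # (samples, 3) -> (samples, timesteps, features)
end_xy = np.array(end_xy)[:, 1:]                  # drop the order column so targets have 2 columns

model = Sequential()
model.add(LSTM(64, return_sequences=False, input_shape=start_xy.shape[1:]))
model.add(Dense(2, activation='relu'))
model.compile(loss='mse', optimizer='adam')
model.fit(start_xy, end_xy, batch_size=len(start_xy), epochs=5000, verbose=2)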

How does the input_shape variable work in Conv1D in Keras?

Hi,
I'm working with 1D CNNs in Keras, but I'm having lots of trouble with the input_shape variable.
I have a time series of 100 timesteps and 5 features with boolean labels. I want to train a 1D CNN that works with a sliding window of length 10. This is a very simple piece of code I wrote:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand((100, N_FEATURES))
Y = np.random.randint(0,2, size=100)
# CNN
model.Sequential()
model.add(Conv1D(filter=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=N_FEATURES))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
My problem here is that I get the following error:
File "<ipython-input-2-43966a5809bd>", line 2, in <module>
model.add(Conv1D(filter=32, kernel_size=10, activation='relu', input_shape=N_FEATURES))
TypeError: __init__() takes at least 3 arguments (3 given)
I've also tried by passing to the input_shape the following values:
input_shape=(None, N_FEATURES)
input_shape=(1, N_FEATURES)
input_shape=(N_FEATURES, None)
input_shape=(N_FEATURES, 1)
input_shape=(N_FEATURES, )
Do you know what's wrong with the code, or in general can you explain the logic behind the input_shape variable in Keras CNNs?
The crazy thing is that my problem is the same as the following:
Keras CNN Error: expected Sequence to have 3 dimensions, but got array with shape (500, 400)
But I cannot solve it with the solution given in the post.
The Keras version is 2.0.6-tf
Thanks
This should work:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand(100, N_FEATURES)
Y = np.random.randint(0,2, size=100)
# Create a Sequential model
model = Sequential()
# Change the input shape to input_shape=(N_TIMESTEPS, N_FEATURES)
model.add(Conv1D(filters=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=(N_TIMESTEPS, N_FEATURES)))
# If it is a binary classification then you want 1 neuron - Dense(1, activation='sigmoid')
model.add(Dense(1, activation='sigmoid'))
# With a single sigmoid unit, use binary_crossentropy rather than categorical_crossentropy
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Please see the comments before each line of code. Moreover, the input shape that Conv1D expects is (time_steps, feature_size_per_time_step). The translation of that for your code is (N_TIMESTEPS, N_FEATURES).
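One thing the snippet above doesn't address is that X, shaped (100, 5), is still 2D; before calling fit you would need to slice the series into windows so each sample has shape (N_TIMESTEPS, N_FEATURES). A self-contained sketch of one way to do that (the non-overlapping windowing, the per-window label taken from the last timestep, and the smaller kernel_size are illustrative assumptions, not part of the original answer):

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

N_FEATURES, N_TIMESTEPS = 5, 10
X = np.random.rand(100, N_FEATURES)
Y = np.random.randint(0, 2, size=100)

# Slice the series into non-overlapping windows: (10, 10, 5) = (windows, timesteps, features)
n_windows = X.shape[0] // N_TIMESTEPS
X_win = X[:n_windows * N_TIMESTEPS].reshape(n_windows, N_TIMESTEPS, N_FEATURES)
Y_win = Y[N_TIMESTEPS - 1::N_TIMESTEPS]   # one label per window, from its last timestep

model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu',
                 input_shape=(N_TIMESTEPS, N_FEATURES)))
model.add(Flatten())   # collapse (timesteps, filters) before the dense head
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_win, Y_win, epochs=3, batch_size=4)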

Wrong input shape for temporal 1D convolution in Keras

Regarding input shapes: I've been using LSTMs for a while without any problems, but now I've tried 1D convolutional layers to speed up processing, and I'm running into trouble. Can you see what the problem is with the following? (Dummy data is used here.)
I get an error for the fitting:
ValueError: Error when checking target: expected dense_17 to have 2
dimensions, but got array with shape (400, 20, 2)
I cannot see what is wrong here. The code is shown below:
# load packages
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, GRU, TimeDistributed
from keras.layers import Conv1D, MaxPooling1D, Flatten, GlobalAveragePooling1D
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils

nfeat, kernel, timeStep, length, fs = 36, 8, 20, 100, 100

# data (dummy)
data = np.random.rand(length*fs, nfeat)
classes = 0*data[:, 0]
classes[:int(length/2*fs)] = 1

# make correct input shape (batch, timestep, feature)
X = np.asarray([data[i*timeStep:(i + 1)*timeStep, :] for i in range(0, length * fs // timeStep)])

# classes
Y = np.asarray([classes[i*timeStep:(i + 1)*timeStep] for i in range(0, length * fs // timeStep)])

# split into training and test set
from sklearn.model_selection import train_test_split
trainX, testX, trainY, testY = train_test_split(X, Y, test_size=0.2, random_state=0)

# one-hot encoding
trainY_OHC = np_utils.to_categorical(trainY)
trainY_OHC.shape, trainX.shape

# set up model with simple 1D convnet
model = Sequential()
model.add(Conv1D(8, 10, activation='relu', input_shape=(timeStep, nfeat)))
model.add(MaxPooling1D(3))
model.add(Flatten())
model.add(Dense(10, activation='tanh'))
model.add(Dense(2, activation='softmax'))
model.summary()

# compile model
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])

# train model
model.fit(trainX, trainY_OHC, epochs=5, batch_size=4, validation_split=0.2)
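For context on where the shapes diverge: Y is built per-timestep, so to_categorical(trainY) has shape (400, 20, 2), while the Flatten/Dense head emits a single (batch, 2) prediction per window. A minimal sketch of one way to reconcile target and output, assuming one label per window is what's wanted (taking the first timestep's class per window is an illustrative assumption, which happens to be safe here because the dummy classes are constant within each window):

# One label per window instead of one per timestep
trainY_win = trainY[:, 0]                         # (400,)
trainY_OHC = np_utils.to_categorical(trainY_win)  # (400, 2) - matches the Dense(2) output
model.fit(trainX, trainY_OHC, epochs=5, batch_size=4, validation_split=0.2)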

Unexpected output from Embedding layer

I've been trying to implement an LSTM in Keras for several hours (using a sequential model with an embedding layer, two LSTM layers, and a dense layer), but I wind up getting different error messages.
From what I can tell, the problem is that the output of the Embedding layer has two dimensions instead of three: I get this ValueError (ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2) when adding the second LSTM layer, and I get assert len(input_shape) >= 3 AssertionError when I delete the line that adds the second LSTM layer (which means the Dense layer has the same issue).
These errors occur before I call the model's fit method.
My code is below:
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.layers import Embedding
from keras.layers import TimeDistributed
from keras.preprocessing import text
from keras.preprocessing.sequence import pad_sequences
# The data in X was preprocessed using Keras' built-in pad_sequences.
# Before preprocessing, it consisted of plain lists of integers (which
# were just integers with a one-to-one map to plain words as strings)
X = pad_sequences(X)
model = Sequential()
model.add(Embedding(batch_size=32, input_dim=len(filtered_vocabulary)+1, output_dim=256, input_length=38))
model.add(LSTM(128))
model.add(LSTM(128)) # error occurs in this line
model.add(TimeDistributed(Dense(len(filtered_vocabulary)+1, activation="softmax")))
model.compile(optimizer = "rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, X, epochs=60, batch_size=32)
I'd be glad if any of you could help me out with this.
The error occurs because the first LSTM layer returns only its last output, since you haven't specified return_sequences=True. This looks like a sequence-to-sequence (many-to-many) setup, so you need the LSTM output at every timestep; pass that argument to both LSTM layers.
Just to be clear: without return_sequences=True the output shape is (None, 128); with it, the shape is (None, 38, 128).
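A minimal sketch of the model with that change applied (filtered_vocabulary and X are assumed to exist as in the question; the batch_size argument is dropped from Embedding, since that isn't an Embedding parameter):

model = Sequential()
model.add(Embedding(input_dim=len(filtered_vocabulary)+1, output_dim=256, input_length=38))
model.add(LSTM(128, return_sequences=True))   # emit output at every timestep: (None, 38, 128)
model.add(LSTM(128, return_sequences=True))   # keep the time axis for TimeDistributed
model.add(TimeDistributed(Dense(len(filtered_vocabulary)+1, activation="softmax")))
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])

Note that fitting with integer targets (model.fit(X, X)) would still require one-hot encoded labels for categorical_crossentropy, or a switch to sparse_categorical_crossentropy.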
