Shape error when using Conv1D in Keras - python

I'm having trouble using Conv1D as the input layer of a Sequential NN in Keras.
Here is my code:
import numpy as np
from keras.layers.convolutional import Conv1D
from keras.models import Sequential
from keras.optimizers import Adam
conv1d = Conv1D(input_shape=(None, 16), kernel_size=2, filters=2)
model = Sequential()
model.add(conv1d)
model.compile(loss="logcosh", optimizer=Adam(lr=0.001))
x_train = np.zeros((32, 16, 1))
y_train = np.zeros((32, 16, 1))
print(x_train.shape)
model.fit(x_train, y_train, batch_size=4, epochs=20)
Here is the error. I have tried several things, but none of them resolved the issue.
ValueError: Error when checking input: expected conv1d_47_input to have shape (None, 16) but got array with shape (16, 1)

Conv1D expects the inputs to have the shape (batch_size, steps, input_dim).
Based on the shape of your training data, you have max length 16 and input dimensionality just 1. Is that what you need?
If so, then the input shape can be specified either as (16, 1) (length is always 16) or (None, 1) (dynamic length).
If you meant to define sequences of length 1 and dimensionality 16, then you need a different shape of the training data:
x_train = np.zeros((32, 1, 16))
y_train = np.zeros((32, 1, 16))
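For example, here is a minimal sketch of the first option, keeping your data layout of 16 timesteps with 1 feature. Note that kernel_size=2 with the default 'valid' padding shortens the output to 15 steps, and filters=2 gives 2 channels, so the targets have to match that (batch, 15, 2) shape:
import numpy as np
from keras.layers.convolutional import Conv1D
from keras.models import Sequential
from keras.optimizers import Adam
model = Sequential()
# 16 timesteps, 1 feature per timestep
model.add(Conv1D(input_shape=(16, 1), kernel_size=2, filters=2))
model.compile(loss="logcosh", optimizer=Adam(lr=0.001))
x_train = np.zeros((32, 16, 1))
# the conv layer outputs (batch, 15, 2), so the targets must as well
y_train = np.zeros((32, 15, 2))
model.fit(x_train, y_train, batch_size=4, epochs=20)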

I managed to find a solution using a Flatten layer followed by a Dense layer, and it worked:
import numpy as np
from keras.models import Sequential
from keras.optimizers import Adam
from keras.layers import Conv1D, Dense, Flatten
conv1d = Conv1D(input_shape=(16, 1), kernel_size=2, filters=2)
model = Sequential()
model.add(conv1d)
model.add(Flatten())
model.add(Dense(16))
optimizer = Adam(lr=0.001)  # the optimizer has to be defined before compiling
model.compile(optimizer=optimizer, loss="cosine_proximity", metrics=["accuracy"])
x_train = np.zeros((32, 16, 1))
y_train = np.zeros((32, 16))
print(x_train.shape)
model.fit(x_train, y_train, batch_size=4, epochs=20)


ValueError: Can not squeeze dim[1], expected a dimension of 1 for '{{node binary_crossentropy/weighted_loss/Squeeze}}

I'm trying to fit an LSTM model to my data with a Masking layer in front, and I get this error:
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 4 for '{{node binary_crossentropy/weighted_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](Cast)' with input shapes: [128,4].
This is my code:
from tensorflow.keras.layers import LSTM, Dense, BatchNormalization, Masking
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Nadam
import numpy as np

if __name__ == '__main__':
    # define stub data
    samples, timesteps, features = 128, 4, 99
    X = np.random.rand(samples, timesteps, features)
    Y = np.random.randint(0, 2, size=(samples))
    # create model
    model = Sequential()
    model.add(Masking(mask_value=0., input_shape=(None, 99)))
    model.add(LSTM(100, return_sequences=True))
    model.add(BatchNormalization())
    model.add(Dense(1, activation='sigmoid'))
    optimizer = Nadam(learning_rate=0.0001)
    loss = BinaryCrossentropy(from_logits=False)
    model.compile(loss=loss, optimizer=optimizer)
    # train model
    model.fit(X, Y, batch_size=128)
I see from this related post that I can't use one-hot encoded labels, but my labels are not one-hot encoded.
Also, when I remove the Masking layer, training works.
From my understanding, one sample consists of 4 timesteps with 99 features here. The shape of X is therefore (128, 4, 99).
Therefore, I only have to provide one label per sample, the shape of Y thus being (128,).
But it seems like the dimensions of X and/or Y are not correct, as TensorFlow wants to change their dimensions.
I have tried providing a label per timestep of each sample (Y = np.random.randint(0, 2, size=(samples, timesteps))), with the same result.
Why does adding the masking layer introduce this error? And how can I keep the masking layer without getting the error?
System Information:
Python version: 3.9.5
Tensorflow version: 2.5.0
OS: Windows
I don't think the problem is the Masking layer. Since you set the parameter return_sequences to True in the LSTM layer, you are getting a sequence with the same number of timesteps as your input and an output space of 100 for each timestep, hence the shape (128, 4, 100), where 128 is the batch size. Afterwards, you apply a BatchNormalization layer and finally a Dense layer, resulting in the shape (128, 4, 1). The problem is that your labels effectively have the 2D shape (128, 1), while your model has a 3D output due to the return_sequences parameter. So, simply setting this parameter to False should solve your problem. See also this post.
Here is a working example:
from tensorflow.keras.layers import LSTM, Dense, BatchNormalization, Masking
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Nadam
import numpy as np

if __name__ == '__main__':
    # define stub data
    samples, timesteps, features = 128, 4, 99
    X = np.random.rand(samples, timesteps, features)
    Y = np.random.randint(0, 2, size=(samples))
    # create model
    model = Sequential()
    model.add(Masking(mask_value=0., input_shape=(None, 99)))
    model.add(LSTM(100, return_sequences=False))
    model.add(BatchNormalization())
    model.add(Dense(1, activation='sigmoid'))
    optimizer = Nadam(learning_rate=0.0001)
    loss = BinaryCrossentropy(from_logits=False)
    model.compile(loss=loss, optimizer=optimizer)
    # train model
    model.fit(X, Y, batch_size=128)
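If a label per timestep is what you actually want (your second attempt), a sketch of the alternative would be to keep return_sequences=True and the Masking layer, and give the targets the same 3D shape as the model output, i.e. (samples, timesteps, 1):
# assuming per-timestep binary labels are wanted
Y_seq = np.random.randint(0, 2, size=(samples, timesteps, 1))
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(None, 99)))
model.add(LSTM(100, return_sequences=True))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))  # output shape: (batch, timesteps, 1)
model.compile(loss=BinaryCrossentropy(from_logits=False), optimizer=Nadam(learning_rate=0.0001))
model.fit(X, Y_seq, batch_size=128)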

Data cardinality is ambiguous error when trying to split a Keras dataset into two classes

I am writing a classifier using the Keras cifar10 dataset, and I want to recognize only two classes, not all 10. When I train on all 10 classes, the program works and learns correctly, but when I take data for only two classes I get ValueError: Data cardinality is ambiguous. Make sure all arrays contain the same number of samples. I don't understand why I get the error, because the data looks correct.
from keras.datasets import cifar10
from matplotlib import pyplot as plt
from keras.utils import to_categorical
import numpy as np
from sklearn.model_selection import train_test_split
data = cifar10.load_data()
X=data[0][0].astype('float32') / 255.0
y=to_categorical(data[0][1])
X_new = []
y_new = []
# Split data to 2 classes
for x_change, y_change in zip(X, y):
    if y_change[0] == 1 or y_change[1] == 1:
        X_new.append(x_change)
        y_new.append(y_change)
X_train, X_test, y_train, y_test = train_test_split(X_new, y_new, test_size=0.3)
for i in range(10):
    print(y_train[i])
    plt.imshow(X_train[i])
    plt.show()
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Dense
from keras.layers import Flatten
model3 = Sequential()
model3.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model3.add(MaxPooling2D((2, 2)))
model3.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model3.add(MaxPooling2D((2, 2)))
model3.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model3.add(MaxPooling2D((2, 2)))
model3.add(Flatten())
model3.add(Dense(10, activation='softmax'))
model3.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model3.fit(X_train, y_train, epochs=600, batch_size=64, validation_data=(X_test, y_test))
If you want to classify 2 classes, the last layer must have one unit and the sigmoid activation.
The last layer you are using has 10 units, and its activation function is softmax, which is used for multiclass tasks.
You should also verify the batch size (see question1 and question2). It seems to me that you pass batch_size = 32 in the model construction and batch_size = 64 when fitting the model.
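Putting both suggestions together, here is a rough sketch of the two-class setup (assuming classes 0 and 1 are the ones you want). Filtering with a boolean mask on the arrays, instead of building Python lists, also keeps the sample counts unambiguous for model.fit:
import numpy as np
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from sklearn.model_selection import train_test_split
(X, y), _ = cifar10.load_data()
X = X.astype('float32') / 255.0
y = y.flatten()
# keep only classes 0 and 1; boolean indexing returns arrays, not lists
mask = (y == 0) | (y == 1)
X_two, y_two = X[mask], y[mask]  # y_two is already 0/1, no one-hot needed
X_train, X_test, y_train, y_test = train_test_split(X_two, y_two, test_size=0.3)
model3 = Sequential()
model3.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model3.add(MaxPooling2D((2, 2)))
model3.add(Flatten())
model3.add(Dense(1, activation='sigmoid'))  # one unit + sigmoid for two classes
model3.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model3.fit(X_train, y_train, epochs=5, batch_size=64, validation_data=(X_test, y_test))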

Error when checking input: expected lstm_input to have 3 dimensions, but got array with shape (5, 10)

import numpy as np
from tensorflow import keras
x = np.stack([np.random.choice(range(10), 10, replace=False) for _ in range(5)])
y = np.stack([np.random.choice(range(10), 10, replace=False) for _ in range(5)])
model = keras.models.Sequential()
model.add(keras.layers.LSTM(16, activation='relu', input_shape=(5,10), return_sequences=False))
model.add(keras.layers.Dense(12, activation='relu'))
model.add(keras.layers.Dense(10, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x,y)
My input and output shapes are (5, 10), i.e. 2-dimensional.
When I try to execute the code above, the following error occurs:
ValueError: Error when checking input: expected lstm_input to have 3 dimensions, but got array with shape (5, 10)
Your input should have 3 dimensions for an LSTM: [batch_size, nb_timesteps, nb_features].
The input_shape argument specifies the size of each instance, which is [nb_timesteps, nb_features], but the LSTM expects batches of instances, so the tensor you feed in needs an additional batch_size dimension, like [batch_size, 5, 10].
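For example, one way to satisfy this, assuming each of your 5 rows is meant to be a single-timestep sequence of 10 features, is to insert the missing axis and match input_shape to it:
import numpy as np
from tensorflow import keras
x = np.stack([np.random.choice(range(10), 10, replace=False) for _ in range(5)])
y = np.stack([np.random.choice(range(10), 10, replace=False) for _ in range(5)])
# add the missing timestep axis: (5, 10) -> (5, 1, 10)
x = x[:, np.newaxis, :]
model = keras.models.Sequential()
model.add(keras.layers.LSTM(16, activation='relu', input_shape=(1, 10)))
model.add(keras.layers.Dense(12, activation='relu'))
model.add(keras.layers.Dense(10, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x, y)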

How does the input_shape variable work in Conv1D in Keras?

Hi,
I'm working with a 1D CNN in Keras, but I'm having lots of trouble with the input shape variable.
I have a time series of 100 timesteps and 5 features with boolean labels. I want to train a 1D CNN that works with a sliding window of length 10. This is a very simple piece of code I wrote:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand((100, N_FEATURES))
Y = np.random.randint(0,2, size=100)
# CNN
model.Sequential()
model.add(Conv1D(filter=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=N_FEATURES))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
My problem here is that I get the following error:
File "<ipython-input-2-43966a5809bd>", line 2, in <module>
model.add(Conv1D(filter=32, kernel_size=10, activation='relu', input_shape=N_FEATURES))
TypeError: __init__() takes at least 3 arguments (3 given)
I've also tried by passing to the input_shape the following values:
input_shape=(None, N_FEATURES)
input_shape=(1, N_FEATURES)
input_shape=(N_FEATURES, None)
input_shape=(N_FEATURES, 1)
input_shape=(N_FEATURES, )
Do you know what's wrong with the code, or can you explain in general the logic behind the input_shape variable in a Keras CNN?
The crazy thing is that my problem is the same as the following:
Keras CNN Error: expected Sequence to have 3 dimensions, but got array with shape (500, 400)
But I cannot solve it with the solution given in the post.
The Keras version is 2.0.6-tf
Thanks
This should work:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand(100, N_FEATURES)
Y = np.random.randint(0,2, size=100)
# Create a Sequential model
model = Sequential()
# Change the input shape to input_shape=(N_TIMESTEPS, N_FEATURES)
model.add(Conv1D(filters=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=(N_TIMESTEPS, N_FEATURES)))
# If it is a binary classification then you want 1 neuron - Dense(1, activation='sigmoid')
model.add(Dense(1, activation='sigmoid'))
# Use binary_crossentropy to match the single sigmoid output unit
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Please see the comments before each line of code. Moreover, the input shape that Conv1D expects is (time_steps, feature_size_per_time_step). The translation of that for your code is (N_TIMESTEPS, N_FEATURES).
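Since you mentioned a sliding window of length 10, you will also need to cut the (100, 5) series into windows before fitting. A minimal sketch, under the assumption that each window is labelled by its last timestep, and with a Flatten layer added so the per-window conv output collapses to a single prediction:
from keras.layers import Flatten
# overlapping windows of N_TIMESTEPS steps -> shape (91, 10, 5)
X_windows = np.stack([X[i:i + N_TIMESTEPS] for i in range(len(X) - N_TIMESTEPS + 1)])
Y_windows = Y[N_TIMESTEPS - 1:]  # label each window by its last timestep (an assumption)
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=(N_TIMESTEPS, N_FEATURES)))
model.add(Flatten())  # (1, 32) conv output -> (32,) before the classifier
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_windows, Y_windows, epochs=5, batch_size=16)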

Wrong input shape for temporal 1D convolution in Keras

Regarding input shapes: I have been using LSTMs for a while and didn't have any problems with them, but now I tried 1D convolutional layers to speed up processing, and I'm running into trouble. Can you see what the problem is with the following? (Dummy data used here.)
I get an error for the fitting:
ValueError: Error when checking target: expected dense_17 to have 2
dimensions, but got array with shape (400, 20, 2)
I cannot see what is wrong here.
The code is shown below:
#load packages
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, GRU, TimeDistributed
from keras.layers import Conv1D, MaxPooling1D, Flatten, GlobalAveragePooling1D
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
nfeat, kernel, timeStep, length, fs = 36, 8, 20, 100, 100
#data (dummy)
data = np.random.rand(length * fs, nfeat)
classes = 0 * data[:, 0]
classes[:int(length / 2 * fs)] = 1
#make correct input shape (batch, timestep, feature)
X = np.asarray([data[i * timeStep:(i + 1) * timeStep, :] for i in range(0, length * fs // timeStep)])
#classes
Y = np.asarray([classes[i * timeStep:(i + 1) * timeStep] for i in range(0, length * fs // timeStep)])
#split into training and test set
from sklearn.model_selection import train_test_split
trainX, testX, trainY, testY = train_test_split(X, Y, test_size=0.2, random_state=0)
#one-hot-encoding
trainY_OHC = np_utils.to_categorical(trainY)
trainY_OHC.shape, trainX.shape
#set up model with simple 1D convnet
model = Sequential()
model.add(Conv1D(8, 10, activation='relu', input_shape=(timeStep, nfeat)))
model.add(MaxPooling1D(3))
model.add(Flatten())
model.add(Dense(10, activation='tanh'))
model.add(Dense(2, activation='softmax'))
model.summary()
#compile model
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
#train model
model.fit(trainX, trainY_OHC, epochs=5, batch_size=4, validation_split=0.2)
