I am trying to use Conv2D to train my model, and I have a problem with the input shape of the Conv2D layers. How can I reshape my data to feed it into Conv2D?
I am building a model to classify voice accents using a CNN, and I am using Conv2D for this problem.
The shapes of my data:
X_train: (78952, 26) (26 features)
X_test : (2574, 26)
I reshaped them to (78952, 13, 2, 1) and (2574, 13, 2, 1) and the model ran fine, but with that shape I cannot use kernels such as (3x3), (7x7), etc.
How can I choose the right input shape for the Conv2D layers?
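One possible approach, shown as a minimal sketch (assuming TensorFlow/Keras and the (13, 2, 1) reshape described above; the filter counts and the num_accents output size are illustrative placeholders): with padding='same', a 3x3 kernel can still be applied even though one spatial dimension is only 2.
import tensorflow as tf

# Reshape the flat 26-feature vectors into a (13, 2, 1) "image" per sample.
X_train = X_train.reshape(-1, 13, 2, 1)
X_test = X_test.reshape(-1, 13, 2, 1)

model = tf.keras.Sequential([
    # padding='same' keeps the spatial size, so a 3x3 kernel fits even
    # though the second spatial dimension is only 2.
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                           input_shape=(13, 2, 1)),
    tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_accents, activation='softmax'),  # num_accents: placeholder for the number of accent classes
])
Since 26 tabular features have no real 2D spatial structure, another option worth considering is reshaping each sample to (26, 1) and using Conv1D instead of Conv2D.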
I am using Keras Tuner to optimize hyperparameters: hidden layers, neurons, activation function, and learning rate. I have a time-series regression problem with 31 inputs, 32 outputs, and N data samples.
My original X_train shape is (N, 31) and Y_train shape is (N, 32). To match the shape Keras expects, I reshape X_train and Y_train as follows:
X_train.shape: (N,31,1)
Y_train.shape: (N,32).
Here X_train.shape[1] is 31 and Y_train.shape[1] is 32. When I run the hyperparameter tuning, I get the following error:
ValueError: Input 0 of layer lstm_1 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 20)
What am I missing, and what is the issue?
LSTM layers expect a 3D input tensor with shape [batch, timesteps, features]. Since the number of layers is one of your tuning parameters, whenever the number of LSTM layers is 2 or more, every LSTM layer after the first also expects a 3D tensor as input. That means you need to add return_sequences=True to each LSTM layer except the last, so that its output tensor keeps ndim=3 (batch size, timesteps, hidden state) and can be fed into the next LSTM layer.
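A minimal sketch of that pattern in plain Keras (the unit counts are illustrative, and n_layers stands in for the value Keras Tuner would supply via hp.Int or similar):
from tensorflow import keras
from tensorflow.keras import layers

n_layers = 3  # placeholder; in the tuner this would come from hp.Int(...)

model = keras.Sequential()
for i in range(n_layers):
    # Every LSTM except the last returns the full sequence so that the
    # next LSTM still receives a 3D (batch, timesteps, features) tensor.
    return_seq = i < n_layers - 1
    if i == 0:
        model.add(layers.LSTM(20, return_sequences=return_seq, input_shape=(31, 1)))
    else:
        model.add(layers.LSTM(20, return_sequences=return_seq))
model.add(layers.Dense(32))  # 32 regression outputs, matching Y_train of shape (N, 32)
model.compile(optimizer='adam', loss='mse')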
I'm trying to build a model using ResNet50 for image classification into 6 classes, and I want to reduce the dimension of the images before using them to train the ResNet50 model. To do this I start by creating a ResNet50 model using the one in Keras:
ResNet = ResNet50(
    include_top=None, weights='imagenet', input_tensor=None,
    input_shape=(64, 109, 3), pooling=None, classes=6)
Then I create a sequential model that includes ResNet50, adding some final layers for the classification and also a first layer for dimensionality reduction before ResNet50:
(About the input shape: the images I'm using have dimensions 128x217, and the 3 is for the channels that ResNet needs.)
model = models.Sequential()
model.add(GlobalAveragePooling2D(input_shape = ([128, 217, 3])))
model.add(ResNet)
model.add(GlobalAveragePooling2D())
model.add(Dense(units=512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=6, activation='softmax'))
But this doesn't work, because the output of the first global average pooling doesn't match the input shape that the ResNet expects. The error I get is:
WARNING:tensorflow:Model was constructed with shape (None, 64, 109, 3) for input Tensor("input_6:0", shape=(None, 64, 109, 3), dtype=float32), but it was called on an input with incompatible shape (None, 3).
ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 3]
I think I understand what the problem is, but I don't know how to fix it, since (None, 3) is not a valid input shape for ResNet50. How can I fix this? Thank you! :)
You should first understand what GlobalAveragePooling actually does. This layer cannot be applied right after the input, because it collapses the spatial dimensions and only returns the average value per channel (in your case 3 values, because you have 3 channels).
You have to use another method to reduce the size of the images (e.g. simply resizing them to a smaller size).
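A minimal sketch of that idea (assuming TensorFlow 2.x; a Lambda layer wrapping tf.image.resize is one way to shrink the images to 64x109 before ResNet50, matching the input shape used in the question):
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

resnet = ResNet50(include_top=False, weights='imagenet', input_shape=(64, 109, 3))

model = models.Sequential([
    layers.InputLayer(input_shape=(128, 217, 3)),
    # Downscale each image from 128x217 to 64x109 before it reaches ResNet50.
    layers.Lambda(lambda x: tf.image.resize(x, (64, 109))),
    resnet,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(6, activation='softmax'),
])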
I am learning the LSTM model to fit a data set for multi-class classification, with eight genres of music, but I am unsure about the input shape in the Keras model.
I've followed the tutorials here:
How to reshape input data for LSTM model
Multi-Class Classification Tutorial with the Keras Deep Learning Library
Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras
My data is like this:
vector_1,vector_2,...vector_30,genre
23.5 20.5 3 pop
.
.
.
(7678)
I transformed my data shape into (7678,1,30), which is 7678 pieces of music, 1 timestep, and 30 vectors. For the music genre, I used train_labels = pd.get_dummies(df['genre'])
Here is my model:
# build a sequential model
model = Sequential()
# keras convention to use the (1,30) from the scaled_train
model.add(LSTM(32,input_shape=(1,30),return_sequences=True))
model.add(LSTM(32,return_sequences=True))
model.add(LSTM(32))
# to avoid overfitting
model.add(Dropout(0.3))
# output layer
model.add(Dense(8,activation='softmax'))
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
Fitting the model
model.fit(scaled_train,train_labels,epochs=5,validation_data=(scaled_validation,valid_labels))
But when trying to fit the model, I got the error ValueError: Shapes (None, 8) and (None, 1, 8) are incompatible. Is there anything I did wrong in the code? Any help is highly appreciated.
The shape of my data
print(scaled_train.shape)
print(train_labels.shape)
print(scaled_validation.shape)
print(valid_labels.shape)
(7678, 1, 30)
(7678, 8)
(450, 30)
(450, 8)
EDIT
I've tried How to stack multiple lstm in keras?
But still, get the error ValueError: Input 0 of layer sequential_21 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 30]
As the name suggests, return_sequences=True returns the full sequence (one output per time step). That's why your output shape is (None, 1, 8): the time-step dimension is kept, and it isn't flattened automatically when it goes through the Dense layer. Try:
model = Sequential()
model.add(LSTM(32,input_shape=(1,30),return_sequences=False))
model.add(Dense(32,activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(8,activation='softmax'))
I guess this doesn't happen if you uncomment the second LSTM layer?
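On the second error (Full shape received: [None, 30]): the shapes printed above show that scaled_validation is (450, 30), so it needs the same 3D reshape as scaled_train before being passed to fit. A minimal sketch, assuming NumPy arrays:
import numpy as np

# Give the validation set the same (samples, timesteps, features) layout
# as scaled_train, i.e. (450, 1, 30).
scaled_validation = np.asarray(scaled_validation).reshape(-1, 1, 30)

model.fit(scaled_train, train_labels, epochs=5,
          validation_data=(scaled_validation, valid_labels))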
I'm solving a regression problem with a Convolutional Neural Network (CNN) using the Keras library. I have gone through many examples but failed to understand the concept of the input shape for a 1D convolution.
This is my data set: 1 target variable with 3 raw signals.
For visualization, the 5 segments of the sensor signal are shown here; each segment has its own meaning.
I want to feed the sensor values into the 1D convolution layer segment by segment, but the problem is that the segments are of variable length.
This is my CNN architecture. I tried to build my CNN model but I am confused:
model = Sequential()
model.add(Conv1D(5, 7, activation='relu',input_shape=input_shape))
model.add(MaxPooling1D(pool_length=4))
model.add(Conv1D(4, 7, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
So, how can I give input to the Conv1D layer of a CNN in Keras? Or should I set a fixed-size input for Conv1D? If so, how?
My understanding is that the input_shape should be (time_steps, n_features), where time_steps would be the length of the segments (sequence of sensor signals) and n_features the number of channels (3 in your case, as you have 3 different sensors).
Therefore, the input to your network should have 3 dimensions (batch, steps, channels), where batch is the different segments.
I've only worked with fixed time_steps. If you really can't use segments of the same length, you might try padding them with zeros.
The Keras documentation says that you may use (None, 3) as the input_shape for variable-length sequences of 3-dimensional vectors, but I have never used it that way.
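A minimal sketch of the zero-padding route (assuming the data is available as segments, a hypothetical list of arrays of shape (segment_length, 3); the filter counts are illustrative, and the head is a single linear unit since this is a regression problem):
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, GlobalAveragePooling1D, Dense

# segments: hypothetical list of arrays, each of shape (segment_length, 3)
max_len = max(len(s) for s in segments)
X = pad_sequences(segments, maxlen=max_len, dtype='float32', padding='post')
# X now has a fixed shape: (n_segments, max_len, 3)

model = Sequential([
    Conv1D(16, 7, activation='relu', input_shape=(max_len, 3)),
    MaxPooling1D(pool_size=4),
    Conv1D(32, 7, activation='relu'),
    GlobalAveragePooling1D(),   # collapses the time axis regardless of length
    Dense(100, activation='relu'),
    Dense(1),                   # single linear output for regression
])
model.compile(optimizer='adam', loss='mse')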
I need to write my CNN model as a Theano function with my weights already set by Keras (with TensorFlow as the backend), but I am unsure how to add the bias values associated with each layer.
This solution, How can I get a 1D convolution in theano, works nicely for writing a single layer as a Theano function, but I need to stack my weights together with the biases from each layer.
Simplified version of my code:
model = Sequential([
    InputLayer(batch_input_shape=(None, 100, 1)),
    Convolution1D(nb_filter=16, filter_length=8, activation='relu',
                  border_mode='same', init='he_normal', input_shape=(None, 100, 1)),
    Convolution1D(nb_filter=32, filter_length=8, activation='relu',
                  border_mode='same', init='he_normal'),
    MaxPooling1D(pool_length=4),
    Flatten(),
    Dense(output_dim=32, activation='relu', init='he_normal'),
    Dense(output_dim=1, input_dim=32, activation='linear'),
])
How do you add the bias weights to the CNN layer?
For instance, the weights of my first layer have the dimensions: (8, 1, 1, 16)
With a bias with dimensions: (16,)
Which is easy enough to concatenate together to get dimensions: (9, 1, 1, 16)
but for the next layer I have dimensions: (8, 1, 16, 32)
with a bias with dimensions: (32,)
How can I combine this into one weight matrix to pass into the Theano T.signal.conv.conv2d function?
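For what it's worth, a common alternative to concatenating the bias into the weight matrix is to convolve with the weights only and add the bias as a separately broadcast term. Below is a minimal sketch of one layer, assuming Theano; it uses T.nnet.conv2d (which accepts stacks of multi-channel filters) rather than T.signal.conv.conv2d, and the layer index and transpose assume the (rows, cols, in_channels, n_filters) weight layout reported above:
import numpy as np
import theano
import theano.tensor as T

# Hypothetical: pull the weights/biases of the first conv layer from the Keras model
# (adjust the index if the InputLayer is not counted in model.layers).
W_keras, b_keras = model.layers[1].get_weights()   # shapes (8, 1, 1, 16) and (16,)

# Theano's conv2d expects filters as (n_filters, in_channels, rows, cols),
# so reorder the Keras (rows, cols, in_channels, n_filters) layout.
W = theano.shared(np.transpose(W_keras, (3, 2, 0, 1)).astype('float32'))
b = theano.shared(b_keras.astype('float32'))

x = T.tensor4('x')                                  # (batch, channels, rows, cols)
conv = T.nnet.conv2d(x, W, border_mode='half')      # 'half' ~ Keras border_mode='same'
# Broadcast the per-filter bias over the batch and spatial dimensions
# instead of folding it into the weight matrix.
out = T.nnet.relu(conv + b.dimshuffle('x', 0, 'x', 'x'))

layer_fn = theano.function([x], out)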