Matrix size-incompatible - Keras Tensorflow - python

I'm trying to train a simple model over some picture data that belongs to 10 classes.
The images are in black-and-white format (binary, not grayscale). I'm using image_dataset_from_directory to import the data into Python and to split it into training/validation sets.
My code is as below:
My Imports
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense
Read Image Data
trainDT = tf.keras.preprocessing.image_dataset_from_directory(
data_path,
labels="inferred",
label_mode="categorical",
class_names=['0','1','2','3','4','5','6','7','8','9'],
color_mode="grayscale",
batch_size=4,
image_size=(256, 256),
shuffle=True,
seed=44,
validation_split=0.1,
subset='validation',
interpolation="bilinear",
follow_links=False,
)
Model Creation/Compile/Fit
model = Sequential([
Dense(units=128, activation='relu', input_shape=(256,256,1), name='h1'),
Dense(units=64, activation='relu',name='h2'),
Dense(units=16, activation='relu',name='h3'),
layers.Flatten(name='flat'),
Dense(units=10, activation='softmax',name='out')
],name='1st')
model.summary()
model.compile(optimizer='adam' , loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=trainDT, validation_data=train_data, epochs=10, verbose=2)
The model training returns an error:
InvalidArgumentError Traceback (most recent call last)
....
/// anaconda paths and anaconda python code snippets in the error reporting \\\
....
InvalidArgumentError: Matrix size-incompatible: In[0]: [1310720,3], In[1]: [1,128]
[[node 1st/h1/Tensordot/MatMul (defined at <ipython-input-38-58d6507e2d35>:1) ]] [Op:__inference_test_function_11541]
Function call stack:
test_function
I don't understand where the size mismatch comes from. I've spent a few hours looking around for a solution and trying different things, but nothing seems to work for me.
Appreciate any help, thank you in advance!

Dense layers expect flat input (not a 3D tensor), but you are sending a (256,256,1)-shaped tensor into the first dense layer. If you want to use dense layers from the beginning, then you will need to move the Flatten to be the first layer, or you will need to properly reshape your data.
model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(10, activation="softmax")
])
Also, the Flatten between two dense layers makes no sense, because the output of a dense layer is flat anyway.
From the structure of your model (especially the Flatten placement), I assume that those dense layers were supposed to be convolutional layers instead.
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(10, activation="softmax")
])
Convolutional layers can process 2D input and they will also produce more dimensional output which you need to flatten before passing it to the dense top (note that you can add more convolutional layers).
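To see where the [1,128] kernel in the error comes from, note that a Keras Dense layer contracts only the last axis of its input; every other axis passes through. A minimal NumPy sketch of that contraction (shapes chosen to match the question; the weight matrix is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 256, 256, 1))   # a batch of 4 grayscale images, as in the question
W = rng.random((1, 128))           # what Dense(128) builds for a 1-channel last axis

# Dense contracts only the LAST axis, so the spatial dims survive:
y = x @ W
print(y.shape)                     # (4, 256, 256, 128), not (4, 128)
```

If the incoming tensor's last axis were 3 (RGB) instead of 1, this same matrix multiply would fail with exactly the kind of size mismatch shown in the traceback.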

Hi mhk777, hope you are doing well. I think you are confusing dense layers with convolution layers. You have to apply some convolution layers to the image before passing it to dense layers. If you don't want to apply convolutions, then you have to give the dense layer a 2D array, i.e. (number of samples, data):
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
model = models.Sequential()
# Here are the convolutional layers
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(256,256,1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# Here are your dense layers
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))  # softmax to match the categorical_crossentropy loss below
model.summary()
model.compile(optimizer='adam' , loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=trainDT, validation_data=train_data, epochs=10, verbose=2)
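As a sanity check, you can trace by hand what each layer above does to the 256x256 spatial dimensions before the data reaches Flatten (a sketch assuming the default 'valid' padding and the layer sizes used above):

```python
def conv_out(n, k=3):
    # a 'valid' 2D convolution with a k x k kernel shrinks each side by k - 1
    return n - (k - 1)

def pool_out(n, p=2):
    # max-pooling with a p x p window floors the division
    return n // p

n = 256
n = pool_out(conv_out(n))   # Conv2D(32, (3, 3)) -> MaxPooling2D: 256 -> 254 -> 127
n = pool_out(conv_out(n))   # Conv2D(64, (3, 3)) -> MaxPooling2D: 127 -> 125 -> 62
n = conv_out(n)             # Conv2D(64, (3, 3)): 62 -> 60
print(n, n * n * 64)        # Flatten sees 60 * 60 * 64 = 230400 features
```

So the first Dense(64) receives a flat vector of 230400 features per image, which is exactly the 2D (samples, data) shape dense layers want.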

Related

How to implement Many to Many LSTM architecture for numerical data (not time series, not NLP) in Keras

I have read this, this
I have numerical data in arrays of shape,
input_array = 14674 x 4
output_array = 13734 x 4
reshaping for LSTM (batch, timesteps, features) gives
input_array= (14574, 100, 4)
output_array = (13634, 100, 4)
Now I would like to build a Many to Many LSTM architecture for this given data,
Should I use an encoder-decoder or a synced sequence input and output architecture?
I am using the following model, but it only works when the inputs and outputs are the same:
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.metrics import Recall, Precision
from tensorflow.keras.layers import Conv1D, Dense, MaxPooling1D, Flatten, RepeatVector, LSTM, TimeDistributed
opt = tensorflow.keras.optimizers.Adam(learning_rate=0.001)
model_enc_dec_cnn = Sequential()
model_enc_dec_cnn.add(Conv1D(filters=64, kernel_size=9, activation='relu', input_shape=(100, 4)))
model_enc_dec_cnn.add(Conv1D(filters=64, kernel_size=11, activation='relu'))
model_enc_dec_cnn.add(MaxPooling1D(pool_size=2))
model_enc_dec_cnn.add(Flatten())
model_enc_dec_cnn.add(RepeatVector(100))
model_enc_dec_cnn.add(LSTM(100, activation='relu', return_sequences=True))
model_enc_dec_cnn.add(TimeDistributed(Dense(4)))
model_enc_dec_cnn.compile( optimizer=opt, loss='mse', metrics=['accuracy'])
history = model_enc_dec_cnn.fit(X, y, epochs=3, batch_size=64)

I want to change the specific prediction of my CNN-model to a probability

I trained a model to categorize pictures into two different types. Everything is working quite well, but my model can only make a specific prediction (1 or 0 in my case), and I am interested in a prediction that is more like a probability (for example, 90% 1 and 10% 0).
Which part of my code should I change now? Is it the sigmoid function at the end, which decides if it's 1 or 0? Help would be nice. Thanks in advance.
import numpy as np
from keras.callbacks import TensorBoard
from keras import regularizers
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense, Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.metrics import categorical_crossentropy
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.layers.normalization import BatchNormalization
from utils import DataGenerator, PATH
train_path = 'Dataset/train'
valid_path = 'Dataset/valid'
test_path = 'Dataset/test'
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(640, 640, 1), padding='same', activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Conv2D(32, (3, 3), activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(5, 5)))
model.add(Conv2D(64, (3, 3), activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(6, 6)))
model.add(Flatten())
model.add(Dense(64, activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
print(model.summary())
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=1e-3), metrics=['accuracy'])
epochs = 50
batch_size = 16
datagen = DataGenerator()
datagen.load_data()
model.fit_generator(datagen.flow(batch_size=batch_size), epochs=epochs, validation_data=datagen.get_validation_data(),
callbacks=[TensorBoard(log_dir=PATH+'/tensorboard')])
#model.save_weights('first_try.h5')
model.save('second_try')
If I try to get a picture in my model like this:
path = 'train/clean/picturenumber2'
def prepare(filepath):
IMG_SIZE = 640
img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
model = tf.keras.models.load_model('second_try')
prediction = model.predict(prepare(path))
print(prediction)
I just get an output like this: [[1.]], even if I put in a list with multiple pictures. The prediction itself seems to be working.
Short answer:
change the sigmoid activation function in the last layer to softmax.
Why?
Because the sigmoid output range is 0.0 to 1.0, so to make a meaningful interpretation of this output you choose an appropriate threshold, above which is the positive class and anything below is the negative class (for a binary classification problem).
Softmax has the same output range, but the difference is that its outputs are normalized class probabilities (more on that here), so if your model outputs 0.99 on a given input, it can be interpreted as the model being 99.0% confident that the input is the positive class and 1.0% confident that it belongs to the negative class.
Update:
As @amin suggested, if you need normalized probabilities you should make a couple more changes for this to work:
modify your data generator to output 2 classes/labels instead of one;
change the last Dense layer from 1 node to 2 nodes.
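For reference, the normalization softmax performs can be sketched in plain NumPy (the logits here are made-up numbers, not from the model above):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtracting the max is for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, -1.0]))   # two-class logits
print(p, p.sum())                    # the two entries always sum to 1.0
```

This is why a two-node softmax output reads directly as "X% class 1, (100 - X)% class 0".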

Keras CNN model accuracy not improving and decreasing over epoch?

Newbie to machine learning here.
I'm currently working on a diagnostic machine learning framework using 3D-CNNs on fMRI imaging. My dataset consists of 636 images right now, and I'm trying to distinguish between control and affected (binary classification). However, when I try to train my model, the accuracy remains at 48.13% after every epoch, no matter what I do. Additionally, over the epochs, the accuracy decreases from 56% to 48.13%.
So far, I have tried:
changing my loss function (Poisson, categorical cross-entropy, binary cross-entropy, sparse categorical cross-entropy, mean squared error, mean absolute error, hinge, squared hinge)
changing my optimizer (I've tried Adam and SGD)
changing the number of layers
using weight regularization
changing from ReLU to leaky ReLU (I thought perhaps that could help if this was a case of overfitting)
Nothing has worked so far.
Any tips? Here's my code:
#importing important packages
import tensorflow as tf
import os
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv3D, MaxPooling3D, Dropout, BatchNormalization, LeakyReLU
import numpy as np
from keras.regularizers import l2
from sklearn.utils import compute_class_weight
from keras.optimizers import SGD
BATCH_SIZE = 64
input_shape=(64, 64, 40, 20)
# Create the model
model = Sequential()
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Flatten())
model.add(BatchNormalization(center=True, scale=True))
model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(128, activation='sigmoid', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dense(1, activation='softmax', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
# Compile the model
model.compile(optimizer = keras.optimizers.sgd(lr=0.000001), loss='poisson', metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# Model Testing
history = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=50, verbose=1, shuffle=True)
The main issue is that you are using a softmax activation with 1 neuron. Change it to sigmoid, with binary_crossentropy as the loss function.
At the same time, bear in mind that you are using the Poisson loss function, which is suitable for regression problems, not classification ones. Make sure you identify the exact scenario you are trying to solve.
Softmax with one neuron makes the model illogical; in the last layer, use either a single sigmoid neuron or a softmax over two neurons.
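Why one softmax neuron is illogical is easy to see numerically: softmax over a single logit always returns 1.0, so the model can only ever predict one class. A small sketch with made-up logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Whatever the single logit is, exp(z) / exp(z) == 1:
outputs = [softmax(np.array([z]))[0] for z in (-5.0, 0.0, 7.3)]
print(outputs)   # every entry is 1.0
```

With the output frozen at 1.0, accuracy gets stuck at whatever fraction of the dataset belongs to class 1, which matches the flat 48.13% described above.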

How to apply model.fit() function over an CNN-LSTM model?

I am trying to use this to classify images into two categories. I also applied the model.fit() function, but it's showing an error:
ValueError: A target array with shape (90, 1) was passed for an output of shape (None, 10) while using as loss binary_crossentropy. This loss expects targets to have the same shape as the output.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, LSTM
import pickle
import numpy as np
X = np.array(pickle.load(open("X.pickle","rb")))
Y = np.array(pickle.load(open("Y.pickle","rb")))
#scaling our image data
X = X/255.0
model = Sequential()
model.add(Conv2D(64 ,(3,3), input_shape = (300,300,1)))
# model.add(MaxPooling2D(pool_size = (2,2)))
model.add(tf.keras.layers.Reshape((16, 16*512)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt,
metrics=['accuracy'])
# model.summary()
model.fit(X, Y, batch_size=32, epochs = 2, validation_split=0.1)
If your problem is categorical, your issue is that you are using binary_crossentropy instead of categorical_crossentropy; ensure that you really have a categorical rather than a binary classification problem.
Also, please note that if your labels are in simple integer format like [1,2,3,4...] and not one-hot encoded, your loss function should be sparse_categorical_crossentropy, not categorical_crossentropy.
If you do have a binary classification problem, as the error above says, ensure that either:
the loss is binary_crossentropy + Dense(1, activation='sigmoid'), or
the loss is categorical_crossentropy + Dense(2, activation='softmax').
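The loss/label pairing above comes down to the shape of the target array; a quick NumPy sketch of the two label formats (the example labels are made up):

```python
import numpy as np

y_int = np.array([0, 1, 1, 0])   # integer labels, shape (4,):
                                 #   pair with sparse_categorical_crossentropy
y_onehot = np.eye(2)[y_int]      # one-hot labels, shape (4, 2):
                                 #   pair with categorical_crossentropy + Dense(2, softmax)
print(y_int.shape, y_onehot.shape)
```

The error message quoted in the question is exactly this kind of mismatch: targets of shape (90, 1) against an output of shape (None, 10).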

How does it works the input_shape variable in Conv1d in Keras?

Ciao,
I'm working with CNN 1d on Keras but I have tons of troubles with the input shape variable.
I have a time series of 100 timesteps and 5 features with boolean labels. I want to train a CNN 1d that works with a sliding window of length 10. This is a very simple code I wrote:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand((100, N_FEATURES))
Y = np.random.randint(0,2, size=100)
# CNN
model.Sequential()
model.add(Conv1D(filter=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=N_FEATURES
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
My problem here is that I get the following error:
File "<ipython-input-2-43966a5809bd>", line 2, in <module>
model.add(Conv1D(filter=32, kernel_size=10, activation='relu', input_shape=N_FEATURES))
TypeError: __init__() takes at least 3 arguments (3 given)
I've also tried by passing to the input_shape the following values:
input_shape=(None, N_FEATURES)
input_shape=(1, N_FEATURES)
input_shape=(N_FEATURES, None)
input_shape=(N_FEATURES, 1)
input_shape=(N_FEATURES, )
Do you know what's wrong with the code or in general can you explain the logic behind in input_shape variable in Keras CNN?
The crazy thing is that my problem is the same as in the following:
Keras CNN Error: expected Sequence to have 3 dimensions, but got array with shape (500, 400)
But I cannot solve it with the solution given in the post.
The Keras version is 2.0.6-tf
Thanks
This should work:
from keras.models import Sequential
from keras.layers import Dense, Conv1D
import numpy as np
N_FEATURES=5
N_TIMESTEPS=10
X = np.random.rand(100, N_FEATURES)
Y = np.random.randint(0,2, size=100)
# Create a Sequential model
model = Sequential()
# Change the input shape to input_shape=(N_TIMESTEPS, N_FEATURES)
model.add(Conv1D(filters=32, kernel_size=N_TIMESTEPS, activation='relu', input_shape=(N_TIMESTEPS, N_FEATURES)))
# If it is a binary classification then you want 1 neuron - Dense(1, activation='sigmoid')
model.add(Dense(1, activation='sigmoid'))
# With a single sigmoid output, use binary_crossentropy rather than categorical_crossentropy
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Please see the comments before each line of code. Moreover, the input shape that Conv1D expects is (time_steps, feature_size_per_time_step). The translation of that for your code is (N_TIMESTEPS, N_FEATURES).
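One way to produce those (N_TIMESTEPS, N_FEATURES) samples from the original (100, 5) series is to cut it into overlapping sliding windows; a sketch of that reshaping (the window construction is my assumption, not part of the answer above):

```python
import numpy as np

N_FEATURES, N_TIMESTEPS = 5, 10
series = np.random.rand(100, N_FEATURES)   # the (100, 5) time series from the question

# One sample per window position; each sample is (N_TIMESTEPS, N_FEATURES):
windows = np.stack([series[i:i + N_TIMESTEPS]
                    for i in range(len(series) - N_TIMESTEPS + 1)])
print(windows.shape)   # (91, 10, 5) -- a valid input for Conv1D(input_shape=(10, 5))
```

Each of the 91 windows is one 3D sample of the (time_steps, feature_size_per_time_step) form that Conv1D expects.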
