I am trying to use this model to classify images into two categories. When I call the model.fit() function, it shows this error:
ValueError: A target array with shape (90, 1) was passed for an output of shape (None, 10) while using as loss binary_crossentropy. This loss expects targets to have the same shape as the output.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, LSTM
import pickle
import numpy as np
X = np.array(pickle.load(open("X.pickle","rb")))
Y = np.array(pickle.load(open("Y.pickle","rb")))
#scaling our image data
X = X/255.0
model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=(300, 300, 1)))
# model.add(MaxPooling2D(pool_size = (2,2)))
model.add(tf.keras.layers.Reshape((16, 16*512)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt,
              metrics=['accuracy'])
# model.summary()
model.fit(X, Y, batch_size=32, epochs=2, validation_split=0.1)
If your problem is categorical, your issue is that you are using binary_crossentropy instead of categorical_crossentropy; first make sure you really have a categorical rather than a binary classification problem.
Also, please note that if your labels are plain integers like [1, 2, 3, 4, ...] rather than one-hot encoded, your loss function should be sparse_categorical_crossentropy, not categorical_crossentropy.
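For illustration, a quick sketch of the two label formats (using tf.keras.utils.to_categorical):
import numpy as np
import tensorflow as tf

y_int = np.array([0, 2, 1])                      # integer labels -> sparse_categorical_crossentropy
y_onehot = tf.keras.utils.to_categorical(y_int)  # one-hot labels -> categorical_crossentropy
print(y_onehot)  # [[1. 0. 0.] [0. 0. 1.] [0. 1. 0.]]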
If you do have a binary classification problem, as the error above suggests, make sure you use one of these two pairings (both are sketched below):
Loss binary_crossentropy + Dense(1, activation='sigmoid')
Loss categorical_crossentropy + Dense(2, activation='softmax')
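A minimal sketch of both pairings, reusing model and opt from the code above and replacing its Dense(10, activation='softmax') output layer (assuming Y holds 0/1 labels for the first option; the second needs one-hot labels of shape (n_samples, 2)):
# Option 1: binary_crossentropy with a single sigmoid unit
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

# Option 2: categorical_crossentropy with two softmax units (needs one-hot Y)
# model.add(Dense(2, activation='softmax'))
# model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])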
I built a model consisting of a CNN and an LSTM: the CNN extracts features and passes them to the LSTM. I am working on a multi-class text classification problem, and the input to the CNN is TF-IDF vectors.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, TimeDistributed
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Conv1D, MaxPooling1D, LSTM
from tensorflow.keras.metrics import CategoricalAccuracy, AUC
model = Sequential()
model.add(Conv1D(128, 5, activation='relu', input_shape=(1500, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(10))
model.add(Dense(5, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
history=model.fit(X_train, y_train, epochs=20,validation_data=(X_test,y_test),batch_size=512)
The error is as follows:
ValueError: Input 0 of layer sequential_33 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 1, 1500)
Your data arrives with shape (1, 1500) per sample, but the model was built for (1500, 1). To match the data, change the input in the model from model.add(Conv1D(128, 5, activation='relu', input_shape=(1500, 1))) to model.add(Conv1D(128, 5, activation='relu', input_shape=(1, 1500))).
There might be more shape errors after that, though; I'm not sure you can just add the LSTM to the model as-is. An alternative is to reshape the data rather than the model, as sketched below.
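If you would rather keep input_shape=(1500, 1), so the size-5 kernel has 1500 steps to slide over, a sketch of transposing the data instead (assuming X_train and X_test currently have shape (n_samples, 1, 1500)):
import numpy as np

# (n_samples, 1, 1500) -> (n_samples, 1500, 1)
X_train = np.transpose(X_train, (0, 2, 1))
X_test = np.transpose(X_test, (0, 2, 1))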
I have read this, this
I have numerical data in arrays of the following shapes:
input_array = 14674 x 4
output_array = 13734 x 4
Reshaping for the LSTM's (batch, timesteps, features) format gives:
input_array = (14574, 100, 4)
output_array = (13634, 100, 4)
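(Presumably these shapes come from sliding windows of length 100 over the rows; a sketch under that assumption, with make_windows as a hypothetical helper:)
import numpy as np

def make_windows(arr, timesteps=100):
    # Overlapping windows: (N, features) -> (N - timesteps, timesteps, features)
    return np.stack([arr[i:i + timesteps] for i in range(len(arr) - timesteps)])

# e.g. a (14674, 4) input_array becomes (14574, 100, 4)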
Now I would like to build a many-to-many LSTM architecture for this data. Should I use an encoder-decoder or a synced sequence-input-and-output architecture? I am using the following model, but it only works when the input and output shapes are the same.
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.metrics import Recall, Precision
from tensorflow.keras.layers import Conv1D, Dense, MaxPooling1D, Flatten, RepeatVector, LSTM, TimeDistributed
opt = tensorflow.keras.optimizers.Adam(learning_rate=0.001)
model_enc_dec_cnn = Sequential()
model_enc_dec_cnn.add(Conv1D(filters=64, kernel_size=9, activation='relu', input_shape=(100, 4)))
model_enc_dec_cnn.add(Conv1D(filters=64, kernel_size=11, activation='relu'))
model_enc_dec_cnn.add(MaxPooling1D(pool_size=2))
model_enc_dec_cnn.add(Flatten())
model_enc_dec_cnn.add(RepeatVector(100))
model_enc_dec_cnn.add(LSTM(100, activation='relu', return_sequences=True))
model_enc_dec_cnn.add(TimeDistributed(Dense(4)))
model_enc_dec_cnn.compile(optimizer=opt, loss='mse', metrics=['accuracy'])
history = model_enc_dec_cnn.fit(X, y, epochs=3, batch_size=64)
Newbie to machine learning here.
I'm currently working on a diagnostic machine learning framework using 3D CNNs on fMRI imaging. My dataset consists of 636 images right now, and I'm trying to distinguish between control and affected (binary classification). However, when I train my model, the accuracy stays at 48.13% after every epoch, no matter what I do; over the epochs it actually decreases from 56% to 48.13%.
So far, I have tried:
changing my loss function (Poisson, categorical cross-entropy, binary cross-entropy, sparse categorical cross-entropy, mean squared error, mean absolute error, hinge, squared hinge)
changing my optimizer (I've tried Adam and SGD)
changing the number of layers
using weight regularization
changing from ReLU to leaky ReLU (I thought perhaps that could help if this was a case of overfitting)
Nothing has worked so far.
Any tips? Here's my code:
#importing important packages
import tensorflow as tf
import os
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv3D, MaxPooling3D, Dropout, BatchNormalization, LeakyReLU
import numpy as np
from keras.regularizers import l2
from sklearn.utils import compute_class_weight
from keras.optimizers import SGD
BATCH_SIZE = 64
input_shape=(64, 64, 40, 20)
# Create the model
model = Sequential()
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Flatten())
model.add(BatchNormalization(center=True, scale=True))
model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(128, activation='sigmoid', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dense(1, activation='softmax', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
# Compile the model
model.compile(optimizer = keras.optimizers.sgd(lr=0.000001), loss='poisson', metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# Model Testing
history = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=50, verbose=1, shuffle=True)
The main issue is that you are using a softmax activation with a single neuron. Change it to sigmoid, with binary_crossentropy as the loss function.
At the same time, bear in mind that you are currently using the Poisson loss, which is suited to regression problems, not classification; make sure the loss matches the exact scenario you are trying to solve.
Softmax over one neuron always outputs 1, which makes the model meaningless; in the last layer use either a single sigmoid unit or a softmax over two units.
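A minimal sketch of the corrected head, replacing the final Dense layer and the compile call from the question (assuming y_train holds 0/1 labels; regularizers kept as in the question):
model.add(Dense(1, activation='sigmoid', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.compile(optimizer=keras.optimizers.SGD(lr=0.000001),
              loss='binary_crossentropy',
              metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])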
I'm quite new to Keras.
I have trained this with inputs and outputs of size (100, 8), and I want a 1x8 output from a 1x8 input at prediction time.
For example:
I enter a 1x8 input.
The code returns a 1x8 output.
Here is my code:
from tensorflow import keras
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
import keras
from keras.layers import Input, Dense
accuracy = tf.keras.metrics.CategoricalAccuracy()
import numpy as np
xs = np.ones((100, 8))
ys = np.ones((100, 8))
for i in range(100):
    xs[i] *= np.random.randint(30, size=8)
    ys[i] = xs[i] * 2
xs = xs.reshape(1, 100, 8)
ys = ys.reshape(1, 100, 8)
# model = tf.keras.Sequential([layers.Dense(units=1, input_shape=[2,4])])
model = Sequential()
model.add(Dense(10,input_shape=[100,8]))
model.add(Activation('relu'))
model.add(Dropout(0.15))
model.add(Dense(10))
model.add(Activation('relu'))
# model.add(Dropout(0.5))
model.add(Dense(8))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(xs, ys, epochs=1000, batch_size=100)
p = np.array([[1, 3, 4, 5, 9, 2, 3, 4]]).reshape(1, 1, 8)
print(model.predict(p))
You don't need to add an extra dimension in the first position of your data. For a dense (2D) network, you simply feed the model data in the format (n_samples, n_features).
Here is the complete example:
xs = np.ones((100, 8))
ys = np.ones((100, 8))
for i in range(100):
    xs[i] *= np.random.randint(30, size=8)
    ys[i] = xs[i] * 2
xs = xs.reshape(100, 8)
ys = ys.reshape(100, 8)
model = Sequential()
model.add(Dense(10, input_shape=(8,)))
model.add(Activation('relu'))
model.add(Dropout(0.15))
model.add(Dense(10))
model.add(Activation('relu'))
model.add(Dense(8))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(xs, ys, epochs=10, batch_size=100)
p = np.array([[1,3,4,5,9,2,3,4]]) # (1, 8)
pred = model.predict(p)
print(pred)
print(pred.shape) # (1, 8)
I have a numeric health record dataset, and I used a 1D CNN Keras model for the classification step.
Here is a reproducible example in Python:
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Activation, Flatten, Dense
import numpy as np
a = np.array([[0,1,2,9,3], [0,5,1,33,6], [1, 12,1,8,9]])
train = np.reshape(a[:,1:],(a[:,1:].shape[0], a[:,1:].shape[1],1))
y_train = keras.utils.to_categorical(a[:,:1])
model = Sequential()
model.add(Conv1D(filters=2, kernel_size=2, strides=1, activation='relu', padding="same", input_shape=(train.shape[1], 1), kernel_initializer='he_normal'))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))
model.compile(loss=keras.losses.binary_crossentropy,
              optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False),
              metrics=['accuracy'])
model.fit(train, y_train, epochs=3, verbose=1)
I am getting this error when I apply LIME to my 1D CNN model:
IndexError: boolean index did not match indexed array along dimension 1; dimension is 4 but corresponding boolean dimension is 1
import lime
import lime.lime_tabular
explainer = lime.lime_tabular.LimeTabularExplainer(train)
Is there a solution?
I made some minor changes to your initial code (switched from keras to tensorflow.keras):
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Activation, Flatten, Dense
import numpy as np
a = np.array([[0,1,2,9,3], [0,5,1,33,6], [1, 12,1,8,9]])
train = np.reshape(a[:,1:],(a[:,1:].shape[0], a[:,1:].shape[1],1))
y_train = tf.keras.utils.to_categorical(a[:,:1])
model = Sequential()
model.add(Conv1D(filters=2, kernel_size=2, strides=1, activation='relu',
padding="same", input_shape=(train.shape[1], 1),
kernel_initializer='he_normal'))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))
model.compile(loss=tf.keras.losses.binary_crossentropy,
optimizer=tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9,
beta_2=0.999, amsgrad=False),
metrics=['accuracy'])
model.fit(train, y_train, epochs=3, verbose=1)
Then I added some test data, because you don't want to train and test your LIME model on the same data:
b = np.array([[1,4,5,3,2], [1,4,2,55,1], [7, 3,22,3,10]])
test = np.reshape(b[:,1:],(b[:,1:].shape[0], b[:,1:].shape[1],1))
Here I show how the RecurrentTabularExplainer can be set up:
import lime
from lime import lime_tabular
explainer = lime_tabular.RecurrentTabularExplainer(
    train, training_labels=y_train, feature_names=["random clf"],
    discretize_continuous=False, feature_selection='auto',
    class_names=['class 1', 'class 2'])
Then you can run your LIME model on one of the examples in your test data:
exp = explainer.explain_instance(np.expand_dims(test[0],axis=0), model.predict, num_features=10)
and finally display the explanation:
exp.show_in_notebook()
or just print it:
print(exp.as_list())
You should try lime_tabular.RecurrentTabularExplainer instead of LimeTabularExplainer. It is an explainer for Keras-style recurrent neural networks. Check out the examples in the LIME documentation for a better understanding. Good luck :)