Keras producing model with no accuracy - python

I have the following code for training a model based on some numbers:
from numpy import loadtxt
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from time import sleep
dataset = loadtxt("data.csv", delimiter=",")
X = dataset[:,0:2]
y = dataset[:,2]
model = Sequential()
model.add(Dense(196, input_dim=2, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=600, batch_size=10)
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
For reference, here is some of the data it is being presented with:
433,866,1299,1732
421,842,1263,1684
443,886,1329,1772
142,284,426,568
437,874,1311,1748
455,910,1365,1820
172,344,516,688
219,438,657,876
101,202,303,404
289,578,867,1156
110,220,330,440
421,842,1263,1684
472,944,1416,1888
121,242,363,484
215,430,645,860
134,268,402,536
488,976,1464,1952
467,934,1401,1868
418,836,1254,1672
134,268,402,536
241,482,723,964
116,232,348,464
395,790,1185,1580
438,876,1314,1752
396,792,1188,1584
57,114,171,228
218,436,654,872
372,744,1116,1488
305,610,915,1220
462,924,1386,1848
455,910,1365,1820
42,84,126,168
347,694,1041,1388
394,788,1182,1576
184,368,552,736
302,604,906,1208
326,652,978,1304
333,666,999,1332
335,670,1005,1340
176,352,528,704
168,336,504,672
62,124,186,248
26,52,78,104
335,670,1005,1340
(The first three numbers should be inputs, and the last one an output)
The Keras program keeps training but only ever reports an accuracy of 0. What am I doing wrong?

As discussed in the comments, this is a regression problem (not classification), so we can use, for example, MSE (mean squared error) as the loss function and change the activation of the last layer to linear:
X = dataset[:,0:3]
y = dataset[:,3]
model = Sequential()
model.add(Dense(196, input_dim=3, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=600, batch_size=10)
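Since accuracy is not a meaningful metric for regression, you could track mean absolute error instead and spot-check a few predictions. A small sketch continuing from the code above:
# Regression sketch: track MAE instead of accuracy
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(X, y, epochs=600, batch_size=10)
_, mae = model.evaluate(X, y)
print('MAE: %.2f' % mae)
# Spot-check a few predictions against the targets
for pred, actual in zip(model.predict(X[:5]).flatten(), y[:5]):
    print('predicted %.1f, actual %.1f' % (pred, actual))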

Related

Problem with reducing the loss of a Neural Network

I have quite a large dataset and a binary classification problem that I want to train a neural network on. I tried more than 10 combinations for my NN structure, varying from 3 layers to 20. I also tried to overfit my model on a smaller sample for debugging purposes, but my loss does not decrease at all! It gets stuck at 0.4 after a number of epochs, every time, with every combination and every sample size! The strange thing is that the accuracy does not change either; it stays around 0.8, which is not too bad!
As I'm new to working with NNs, any suggestions for solving the problem?
I use the Keras Sequential model API.
Here is my network:
#Normalizing the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)
print(X)
from keras import utils
y = utils.to_categorical(y)
print(y)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size = 0.9)
#Dependencies
import keras
from keras.models import Sequential
from keras.layers import Dense
# Neural network
model = Sequential()
model.add(Dense(24, input_dim=25, activation='relu'))
model.add(Dense(22, activation='relu'))
model.add(Dense(20, activation='sigmoid'))
model.add(Dense(18, activation='selu'))
model.add(Dense(18, activation='sigmoid'))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='sigmoid'))
model.add(Dense(14, activation='tanh'))
model.add(Dense(12, activation='relu'))
model.add(Dense(10, activation='sigmoid'))
model.add(Dense(8, activation='sigmoid'))
model.add(Dense(8, activation='selu'))
model.add(Dense(7, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(4, activation='sigmoid'))
model.add(Dense(2, activation='sigmoid'))
model.add(Dense(2, activation='sigmoid'))
opt = keras.optimizers.Adam(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer = opt , metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=5000, batch_size=1000)
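One way to run the overfit-a-small-sample check mentioned above is with a deliberately simple probe model: if even a tiny network cannot drive the training loss toward zero on ~100 examples, the problem is likely in the data or labels rather than the architecture. A minimal sketch (the layer sizes and sample count here are illustrative assumptions, not from the question):
# Sanity check: try to overfit a tiny sample with a deliberately simple model
X_small, y_small = X_train[:100], y_train[:100]
probe = Sequential()
probe.add(Dense(32, input_dim=25, activation='relu'))
probe.add(Dense(2, activation='softmax'))  # matches the one-hot labels from to_categorical
probe.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(learning_rate=0.001),
              metrics=['accuracy'])
probe.fit(X_small, y_small, epochs=200, batch_size=16)
# If training accuracy does not approach 1.0 here, inspect the data/labels first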

How to apply model.fit() function over an CNN-LSTM model?

I am trying to use this to classify the images into two categories. I also applied the model.fit() function, but it shows an error.
ValueError: A target array with shape (90, 1) was passed for an output of shape (None, 10) while using as loss binary_crossentropy. This loss expects targets to have the same shape as the output.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, LSTM
import pickle
import numpy as np
X = np.array(pickle.load(open("X.pickle","rb")))
Y = np.array(pickle.load(open("Y.pickle","rb")))
#scaling our image data
X = X/255.0
model = Sequential()
model.add(Conv2D(64 ,(3,3), input_shape = (300,300,1)))
# model.add(MaxPooling2D(pool_size = (2,2)))
model.add(tf.keras.layers.Reshape((16, 16*512)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt,
              metrics=['accuracy'])
# model.summary()
model.fit(X, Y, batch_size=32, epochs = 2, validation_split=0.1)
If your problem is categorical, the issue is that you are using binary_crossentropy instead of categorical_crossentropy; first make sure you actually have a categorical rather than a binary classification problem.
Also note that if your labels are plain integers like [1, 2, 3, 4, ...] and not one-hot encoded, your loss function should be sparse_categorical_crossentropy, not categorical_crossentropy.
If you do have a binary classification problem, as the error above suggests, ensure one of these pairings:
Loss is binary_crossentropy + Dense(1, activation='sigmoid')
Loss is categorical_crossentropy + Dense(2, activation='softmax')
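For example, with the two-category image problem from the question, the first pairing would change only the model's head. A sketch reusing the question's opt optimizer:
# Binary pairing (sketch): one sigmoid unit with binary_crossentropy,
# so the target array of shape (90, 1) from the error message matches the output
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])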

TensorFlow "Please provide as model inputs a single array or a list of arrays"

This is the error and data I entered into my model. I just can't figure out why it won't work since the dimensions are okay and it literally prints a list of arrays.
My Model + Code before:
import numpy as np
training = np.array(training)
training_inputs = list(training[:,0])
training_outputs = list(training[:,1])
print("train inputs ", training_inputs)
print("train outputs ", training_outputs)
# Now lets create our tensorflow model
# In[10]:
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import LSTM, Dense
model = Sequential()
model.add(Dense(training_inputs[0], activation='linear'))
model.add(Dense(15, activation='linear'))
model.add(Dense(15, activation='linear'))
model.add(Dense(15, activation='linear'))
model.add(Dense(len(training_outputs[0]), activation='softmax'))
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy', 'loss']
)
model.fit(x=training_inputs, y=training_outputs,
          epochs=10000,
          batch_size=20,
          verbose=True,
          shuffle=True)
model.save('models/basic_chat.json')
You need an input layer with an explicit input shape in your model:
...
model = Sequential()
model.add(Dense(15, activation='linear', input_shape=( len(training_inputs[0]),)))
model.add(Dense(15, activation='linear'))
...
Also, model.fit expects NumPy arrays rather than Python lists, so convert the inputs and outputs:
training_inputs = np.array(training[:,0])
training_outputs = np.array(training[:,1])
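Putting both fixes together, a runnable sketch of the training script (the layer widths and data come from the question; everything else is an assumption):
import numpy as np
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Dense

# Convert the object columns into proper 2-D float arrays
training_inputs = np.array([np.asarray(row, dtype=np.float32) for row in training[:, 0]])
training_outputs = np.array([np.asarray(row, dtype=np.float32) for row in training[:, 1]])

model = Sequential()
model.add(Dense(15, activation='linear', input_shape=(len(training_inputs[0]),)))
model.add(Dense(15, activation='linear'))
model.add(Dense(15, activation='linear'))
model.add(Dense(len(training_outputs[0]), activation='softmax'))
# Note: 'loss' is not a valid metric name, so only 'accuracy' is kept here
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=training_inputs, y=training_outputs, epochs=10000, batch_size=20, shuffle=True)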

Why is my Keras model only producing the same prediction?

I'm having some trouble understanding why my Keras model has problems generating proper results (it now always returns 0). I have been able to find some others with this problem (ref 1, ref 2), but I haven't been able to understand the underlying cause.
Question: Why is my model only giving one, constant prediction?
Training Data Example
The last column is the prediction, 0 or 1.
32856500,1,1,200,6842314460,0
32800000,-1,0,0,0,0
32800000,-1,1,0,6845343222,0
32800000,-1,2,0,13692319489,0
32800000,-1,3,0,20539336035,0
32769900,-1,4,-30100,27389628085,0
32769900,-1,5,-30100,34239941481,0
32750000,-1,6,-50000,41091099905,0
32750000,-1,7,-50000,47945852379,1
Keras Code for Training
I'm using the sigmoid activation for the binary results. But I'm not sure whether the issue lies here or in, for example, the binary_crossentropy loss or the SGD optimizer.
def trainKerasModel(X, Y, path, dimensions):
    # Create model
    model = Sequential()
    model.add(Dense(120, input_dim=dimensions, activation='sigmoid'))
    model.add(Dense(100, activation='sigmoid'))
    model.add(Dense(80, activation='sigmoid'))
    model.add(Dense(60, activation='sigmoid'))
    model.add(Dense(40, activation='sigmoid'))
    model.add(Dense(20, activation='sigmoid'))
    model.add(Dense(12, activation='sigmoid'))
    model.add(Dense(10, activation='sigmoid'))
    model.add(Dense(8, activation='sigmoid'))
    model.add(Dense(6, activation='sigmoid'))
    model.add(Dense(4, activation='sigmoid'))
    model.add(Dense(2, activation='sigmoid'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=SGD(lr=0.01), metrics=['accuracy'])
    # Fit the model
    model.fit(X, Y, epochs=EPOCHS, batch_size=BATCHSIZE)
    # Evaluate
    scores = model.evaluate(X, Y)
    Helpers().Log(model.metrics_names[1], scores[1]*100)
    # Save model
    with open(path+".json", "w") as json_file:
        json_file.write(model.to_json())
    # serialize weights to HDF5
    model.save_weights(path+".h5")
    Helpers().Log("Saved model to disk")
someFilePath = "file.csv"
dataset = numpy.loadtxt(someFilePath, delimiter=",")
dimensions = len(dataset[0]) - 1
trainKerasModel(dataset[:,0:dimensions], dataset[:,dimensions], someFilePath, dimensions)
Keras Code for Predictions
model = model_from_json(loaded_model_json)
model.load_weights(someWeightsFile)
Xnew = preprocess_input(numpy.array([[32856500,1,1,200,6842314460,0], [32800000,-1,3,0,20539336035,0], [32750000,-1,7,-50000,47945852379,1]]))
Ynew = model.predict_classes(Xnew)
print(Ynew)
Twelve sigmoid fully-connected layers will never learn anything; the gradients vanish. Read up on the theory.
Maybe you should try just 3 layers with tanh, and no activation function on the output if you apply tanh to the inputs: -1 for false, 1 for true.
Also apply tanh to the input data, since it is not normalized. And cross-entropy makes no sense if you have only one output.
On top of that, expanding 5 inputs to 120 features and then stacking 12 layers is a horrible overfit. You should have about 3 layers here, with roughly 20, 16, and 10 units, tanh activations, MSE loss, and a learning rate of about 1e-3 to 1e-4.
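A minimal sketch of what this answer suggests (the input dimension of 5 comes from the question's data; the scaling step and exact layer sizes fill in the answer's suggestion as assumptions):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Squash inputs into [-1, 1] as suggested (here: tanh over standardized columns)
Xs = np.tanh((X - X.mean(axis=0)) / X.std(axis=0))
Yt = Y * 2 - 1  # recode 0/1 labels as -1/1

model = Sequential()
model.add(Dense(20, input_dim=5, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(10, activation='tanh'))
model.add(Dense(1))  # no activation on the output
model.compile(loss='mse', optimizer=Adam(learning_rate=1e-3))
model.fit(Xs, Yt, epochs=EPOCHS, batch_size=BATCHSIZE)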

How to specify input_shape for Keras Sequential model

How do you deal with this error?
Error when checking target: expected dense_3 to have shape (1,) but got array with shape (398,)
I tried changing input_shape=(14,), which is the number of columns in train_samples, but I still get the error.
set = pd.read_csv('NHL_DATA.csv')
set.head()
train_labels = [set['Won/Lost']]
train_samples = [set['team'], set['blocked'], set['faceOffWinPercentage'], set['giveaways'], set['goals'], set['hits'],
                 set['pim'], set['powerPlayGoals'], set['powerPlayOpportunities'], set['powerPlayPercentage'],
                 set['shots'], set['takeaways'], set['homeaway_away'], set['homeaway_home']]
train_labels = np.array(train_labels)
train_samples = np.array(train_samples)
scaler = MinMaxScaler(feature_range=(0,1))
scaled_train_samples = scaler.fit_transform(train_samples).reshape(-1,1)
model = Sequential()
model.add(Dense(16, input_shape=(14,), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(Adam(lr=.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(scaled_train_samples, train_labels, batch_size=1, epochs=20, shuffle=True, verbose=2)
1) You reshape your training examples with .reshape(-1,1), which means every training sample has 1 dimension. However, you define the input shape of the network as input_shape=(14,), which says the input dimension is 14. I guess this is one problem with your model.
2) You used sparse_categorical_crossentropy, which means the ground-truth labels should be sparse integer class indices, but I guess yours are not.
Here is an example of how your input should look:
import numpy as np
from tensorflow.python.keras.engine.sequential import Sequential
from tensorflow.python.keras.layers import Dense
x = np.zeros([1000, 14])
y = np.zeros([1000, 2])
model = Sequential()
model.add(Dense(16, input_shape=(14,), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile('adam', 'categorical_crossentropy')
model.fit(x, y, batch_size=1, epochs=1)
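And if you keep sparse_categorical_crossentropy instead, the labels should stay as integer class indices of shape (N,) rather than one-hot vectors. A variant of the same sketch with dummy data:
import numpy as np
from tensorflow.python.keras.engine.sequential import Sequential
from tensorflow.python.keras.layers import Dense

x = np.zeros([1000, 14])
y = np.random.randint(0, 2, size=(1000,))  # integer class labels, not one-hot

model = Sequential()
model.add(Dense(16, input_shape=(14,), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile('adam', 'sparse_categorical_crossentropy')
model.fit(x, y, batch_size=1, epochs=1)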
