Keras: keeps the same prediction for different inputs - python

I am learning how to use Keras and I keep running into problems. I will try to be as specific as possible.
My task: I am trying to create a neural network to predict the opening status of a domestic residence.
I have a dataset with 524729 examples. I use 70% as the training set and 30% as the test set. I reach 70+% accuracy in my tests, but for some reason, every time I try to predict an output I get the same values.
Right now, I have the following topology:
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

model = Sequential()
model.add(Dense(15, input_shape=(13, ), kernel_initializer='random_normal'))
model.add(Dense(15, activation='softplus'))
model.add(Dense(15, activation='softplus'))
model.add(Dense(10, activation='sigmoid'))
model.summary()
sgd = optimizers.SGD(lr=0.1, decay=1e-6, momentum=0.3, nesterov=True)
model.compile(optimizer=sgd, loss='mean_squared_error', metrics=['mae', 'acc'])
model.fit(X_training, Y_training, validation_data=(X_test, Y_test), epochs=1, batch_size=32)
and I use:
model.predict(np_inputRN, verbose=0)
to predict the output, but for some reason I keep getting the same values:
0.0172018650919,0.498908281326,0.984391093254,0.485811322927,0.480756670237,0.984736263752,0.536143004894,0.475958675146,0.494080305099,0.488458126783
Can someone help me?
==========================================================================
@Aiven:
Data set: 524729
Test set [30%]: 157418
Training set [70%]: 367310
X_training.shape: (367311, 13)
Y_training.shape: (367311, 10)
X_test.shape: (157419, 13)
Y_test.shape: (157419, 10)
np_inputRN.shape: (1, 13)
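As a quick check (a sketch, assuming the trained model and X_test from above), you can verify whether the network really returns identical rows for distinct inputs:
import numpy as np

# Sketch: predict on a handful of distinct test rows and compare them.
preds = model.predict(X_test[:5], verbose=0)
print(preds)
print('all rows identical:', np.allclose(preds, preds[0]))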

Related

TensorFlow Regression with EarlyStopping and Dropout results in underfitting

I'm new to ML and would like to know what I'm missing or doing incorrectly.
I'm trying to figure out why my data is being underfit when applying early stopping and dropout; however, when I don't use early stopping or dropout, the fit seems to be okay...
Dataset I'm using:
https://www.kaggle.com/datasets/kanths028/usa-housing
Model Parameters:
The dataset has 5 features to train on and the target is the price
I chose 4 layers arbitrarily
Epochs at 600 (way too many) because I want to test early stopping
Optimizer and loss because those seemed to give me the most consistent results when compared to SKLearn's LinearRegression (MAE is about 81K)
Data pre-processing:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = df[df.columns[:-2]].values
y = df['Price'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
Fit looks okay:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(5, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mae')
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=600)
Data looks underfit with earlystopping and dropout combined:
model = Sequential()
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=25)
model.compile(optimizer='adam', loss='mae')
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=600, callbacks=[early_stopping])
I'm trying to figure out why early stopping would stop when the results are so far off. I would have guessed that the model would continue until the end of the 600 epochs; however, early stopping pulls the plug around epoch 300.
I'm probably doing something wrong, but I can't figure it out, so any insights would be appreciated. Thank you in advance :)
EarlyStopping defines the performance measure to monitor and specifies whether to maximize or minimize it.
Keras then stops training at the appropriate epoch. With verbose=1, Keras prints a message on screen when training is stopped.
es = EarlyStopping(monitor='val_loss', mode='min')
It may not be effective to stop immediately just because performance has not improved. Patience defines how many consecutive epochs without improvement to allow. Patience is a rather subjective criterion; the optimal value depends on the data and the design of the model used.
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50)
Model choice: when training is stopped by the EarlyStopping object, the current state will generally have a higher validation error than an earlier epoch. In other words, early stopping halts training once the validation error stops decreasing, but the stopped state is not necessarily the best model. It is therefore necessary to store the model with the best validation performance, and for this purpose Keras provides the ModelCheckpoint object. It monitors the validation error and saves the parameters whenever validation performance improves on the previous best. When training stops, the model with the best validation performance can then be restored.
from keras.callbacks import ModelCheckpoint
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
Pass both callbacks in the callbacks parameter, allowing the best model to be stored:
hist = model.fit(train_x, train_y, epochs=10,
                 batch_size=10, verbose=2, validation_split=0.2,
                 callbacks=[early_stopping, mc])
In your case, patience=25 means training ends when the monitored value fails to improve for 25 consecutive epochs.
from keras.callbacks import EarlyStopping, ModelCheckpoint
model = Sequential()
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=25, verbose=1)
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
model.compile(optimizer='adam', loss='mae')
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=600, callbacks=[early_stopping, mc])
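After training, the checkpointed weights can be loaded back (a sketch; 'best_model.h5' is the path used above):
from keras.models import load_model

# Sketch: restore the checkpoint with the best validation loss.
best_model = load_model('best_model.h5')
print(best_model.evaluate(X_test, y_test))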
I recommend two things. In the early stopping callback, set the parameter
restore_best_weights=True
This way, if the early stopping callback activates, your model is set to the weights from the epoch with the lowest validation loss. To reach a lower validation loss, I recommend the ReduceLROnPlateau callback. My recommended code for these callbacks is shown below.
estop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4,
                                         verbose=1, restore_best_weights=True)
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=2, verbose=1)
callbacks = [estop, rlronp]
In model.fit, set the parameter callbacks=callbacks. Set epochs to a large number so it is likely the estop callback will be activated.
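For example, a minimal sketch of the fit call under these settings (the epoch count is illustrative):
# Sketch: large epoch budget; the EarlyStopping callback decides when to stop.
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=1000, callbacks=callbacks)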

Why am I getting horizontal line (almost zero) from neural network instead of the desired curve?

I am trying to use a neural network for my regression problem in Python, but the output of the neural network is a straight horizontal line at zero. I have one input and, obviously, one output.
Here is my code:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold, cross_val_score

def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='relu'))
    model.add(Dense(4, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    model.summary()
    return model

# evaluate model
estimator = KerasRegressor(build_fn=baseline_model, epochs=50, batch_size=64, validation_split=0.2, verbose=1)
kfold = KFold(n_splits=10)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
Here are the plots of NN prediction vs. target for both the training data and the test data (plots not reproduced here).
I have also tried different weight initializers (Xavier and He) with no luck!
I really appreciate your help.
First of all, correct your syntax when adding Dense layers to the model: replace the double equals (==) with a single equals (=) for kernel_initializer, like below.
model.add(Dense(1, input_dim=1, kernel_initializer ='normal', activation='relu'))
Then, to improve performance, do the following (see the sketch after this list):
Increase the number of hidden neurons in the hidden layers.
Increase the number of hidden layers.
If you still have the same problem, try changing the optimizer and activation function. Tuning the hyperparameters may help you converge to a solution.
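For instance, a hedged sketch of a wider and deeper variant of baseline_model (layer sizes are illustrative, not tuned):
def wider_model():
    # Sketch: more neurons per layer and one extra hidden layer.
    model = Sequential()
    model.add(Dense(32, input_dim=1, kernel_initializer='normal', activation='relu'))
    model.add(Dense(16, kernel_initializer='normal', activation='relu'))
    model.add(Dense(8, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    return model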
EDIT 1
You also have to fit the estimator after cross-validation, like below:
estimator.fit(X_train, y_train)
and then you can test on the test data as follows:
prediction = estimator.predict(X_test)

# accuracy_score is a classification metric; for this regression task,
# a metric such as r2_score or mean_absolute_error is appropriate
from sklearn.metrics import r2_score
r2_score(y_test, prediction)

LSTM model accuracy is stuck at a certain level and doesn't change

I'm working on a multivariate time-series prediction using LSTM. I'm trying to get a better match between my actual and predicted values, but no matter what my hyperparameters are, the accuracy won't change. I was wondering if you could give me a few insights on how to increase my model's accuracy...
I have 3 inputs (time, two rates) and one output (pressure).
This is the LSTM section of my code:
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(units=4,
               activation='tanh',
               recurrent_activation='hard_sigmoid',
               use_bias=True,
               unit_forget_bias=True,
               dropout=0,
               recurrent_dropout=0.3,
               input_shape=(look_back, 3)))
model.add(Dense(units=1,
                activation='linear',
                use_bias=True))
model.compile(loss='mean_squared_error', optimizer='Adam', metrics=['mae', 'accuracy'])
hist = model.fit(x_train, y_train,
                 epochs=50,
                 batch_size=20,
                 validation_split=0.0,
                 verbose=2,
                 shuffle=False)

python - returning nan when trying to predict with Keras

For a school project, I'm trying to predict data using the Keras framework, but it returns 'nan' loss and values when I try to get the predicted data.
Source code:
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=5)
# create model
model = Sequential()
model.add(Dense(950, input_shape=(425,), activation='relu'))
model.add(Dense(425, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
sgd = optimizers.SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer='sgd')
# Fit the model
model.fit(X_train, y_train, epochs=20, batch_size=1, verbose=1)
#evaluate the model
y_pred = model.predict(X_test)
score = model.evaluate(X_test, y_test,verbose=1)
print(score)
# calculate predictions
predictions = model.predict(X_pred)
Data:
X_train and X_test are (pandas) DataFrames of 5000 rows (number of samples) by 425 columns (number of dimensions).
y_train and y_test look like:
array([1.17899644, 1.46080518, 0.9662137, ..., 2.40157461, 0.53870386, 1.3192718])
Can you help me with that?
Thank you for your help!
Usually, this means that something diverges to infinity. As @desertnaut pointed out in the comments, reducing the learning rate might help.
But the root of the issue is your input data. What do these 425 data points mean? Are they from different sources, different features, different parameters? Finding outliers or normalizing the data could help.
Your code looks fine otherwise.
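As a hedged illustration of both suggestions, assuming the question's variable names, you could standardize the inputs and lower the learning rate:
from sklearn.preprocessing import StandardScaler

# Sketch: standardize the 425 input features so scale differences and
# outliers don't push the activations (and the loss) toward infinity.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Sketch: the same SGD configuration with a smaller learning rate.
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)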
Make sure your target output is in the range (0, 1), as you have a sigmoid in the last layer.
A sigmoid outputs values between zero and one, so if the target output is not in this range, then (a) change the activation function or (b) normalize the outputs into the required range.
Make sure the purpose of this model really is regression.
After considering the above three points, play around with the learning rate (decrease it) and the optimizer (replace it with any other).
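A minimal sketch of option (b), scaling the targets into (0, 1) with MinMaxScaler (variable names follow the question):
from sklearn.preprocessing import MinMaxScaler

# Sketch: map the continuous targets into [0, 1] to match the sigmoid output.
y_scaler = MinMaxScaler()
y_train = y_scaler.fit_transform(y_train.reshape(-1, 1))
y_test = y_scaler.transform(y_test.reshape(-1, 1))

# Predictions come back in [0, 1]; invert the scaling to interpret them:
# preds = y_scaler.inverse_transform(model.predict(X_test))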
Try changing your optimizer to Adam instead of SGD.
Also, you initialized your SGD optimizer in the variable sgd, but you're not using it in compile; you passed the string 'sgd' instead.
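A one-line sketch of that fix, passing the configured optimizer object instead of the string:
# Sketch: use the sgd object defined above so its settings take effect.
model.compile(loss='mean_squared_error', optimizer=sgd)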

Don't know what's going wrong while training the dataset. Keras Model Fit

I have just started using Keras and was trying to train a model with the Keras deep learning kit. It works until the epochs have run, but crashes just afterwards.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers.advanced_activations import PReLU
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Adadelta
from sklearn.metrics import roc_auc_score

np.random.seed(1778)  # for reproducibility
need_normalise = True
need_validataion = True
nb_epoch = 2  # 8

# Creating model
model = Sequential()
model.add(Dense(512, input_shape=(dims,)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
opt = Adadelta(lr=1, decay=0.995, epsilon=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt)

auc_scores = []
best_score = -1
best_model = None
print('Training model...')
if need_validataion:
    for i in range(nb_epoch):
        # early_stopping = EarlyStopping(monitor='val_loss', patience=0, verbose=1)
        # model.fit(X_train, y_train, nb_epoch=nb_epoch, batch_size=256, validation_split=0.01, callbacks=[early_stopping])
        model.fit(X_train, y_train, nb_epoch=2, batch_size=256, validation_split=0.15)
        y_pre = model.predict_proba(X_valid)
        scores = roc_auc_score(y_valid, y_pre)
        auc_scores.append(scores)
        print(i, scores)
        if scores > best_score:
            best_score = scores
            best_model = model
    plt.plot(auc_scores)
    plt.show()
else:
    model.fit(X_train, y_train, nb_epoch=nb_epoch, batch_size=256)
    y_pre = model.predict_proba(X_test)[:, 1]
    print(roc_auc_score(y_test, y_pre))
Error received:
I have pasted it over here; please have a look at it:
http://pastebin.com/dSw9ckkk
It looks like you have two classes, a positive class and a negative class, so that the positive class labels are 1 minus the negative class labels. In that case, you can discard the negative class labels and make it a single-class problem:
model.add(Dense(1, activation='sigmoid')) # instead of Dense(nb_classes) and Activation('softmax')
Alternatively, you can still train the model on both classes and just use the positive class in the AUC calculation:
roc_auc_score(y_test[:, 1], y_pre[:, 1])
