Selecting a Keras regression model for 3 inputs and 1 output - Python

This is my Excel data scatter plot. I have 3 inputs and 1 output for my neural network model, 4 columns in Excel in total, and 200 rows. The data is standard-normalized.
So I have a Keras model as follows:
def create_model():
    ann_model = Sequential()
    ann_model.add(Dense(120, input_dim=3, kernel_initializer='normal', activation='tanh'))
    ann_model.add(Dense(60, activation='tanh'))
    ann_model.add(Dense(1, activation='linear'))
    return ann_model
original_inputs = read_inputs(r'train_cd.xlsx')
original_outputs = read_outputs(r'train_cd.xlsx')
model = create_model()
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)  # defined but never used: 'adam' is passed to compile() below
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'accuracy'])
model.fit(original_inputs, original_outputs, batch_size=10, epochs=1800, verbose=False, shuffle=False)
test_result = model.predict(original_inputs)
# ----------Plot---------------------------------------------
plt.plot(original_outputs, label="y-original")
plt.plot(test_result, label="y-predicted")
plt.legend()
plt.show()
# -----------------------------------------------------------
But this training result is not close to 100% accuracy. How should I change the model, e.g. the number of hidden layer nodes or something else? I want to get accuracy close to 100%.

In order to increase your accuracy, you can try different things:
Add more layers and play with their numbers of neurons
-> With more layers and more neurons, the model can learn higher-level patterns
Plot the model's loss and accuracy history and see what the curves look like (whether they increase/decrease fast or slowly, whether they plateau, or whether the model perhaps hasn't finished learning -> more epochs, etc.); a sketch of this is below
Play with the batch size, change your optimizer, try different activation functions
I hope that can help you
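A minimal sketch of plotting that history, assuming the model and data from the question (fit() returns a History object whose history dict holds one list per compiled metric):
history = model.fit(original_inputs, original_outputs, batch_size=10, epochs=1800, verbose=False, shuffle=False)
# plot the loss curve over epochs
plt.plot(history.history['loss'], label='training loss')
plt.xlabel('epoch')
plt.legend()
plt.show()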

Related

How to add a traditional classifier (SVM) to my CNN model

Here's my model:
model = Sequential()
model.add(Xception(weights='imagenet', input_shape=(224,224,3), include_top=False))
model.add(GlobalAveragePooling2D())
model.add(Dense(4096, activation='relu', name='fc1'))
model.add(Dense(4096, activation='relu', name='fc2'))
model.add(Dense(1000, activation='relu', name='fc3'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid', name='fc4'))
model.layers[0].trainable = False  # freeze the pretrained Xception base
I want to make an SVM the final classifier in this model, so how can I do that?
Also, another question: I want to know the predicted class for a certain input, so when I use
model.predict(x_test)
it only gives me probabilities. How can I solve that too?
You can use the neural network as a feature extractor and feed the outputs of its last layer into your SVM. Try the following:
model = Sequential()
model.add(Xception(weights='imagenet', input_shape=(224,224,3), include_top=False))
model.add(GlobalAveragePooling2D())
model.add(Dense(4096, activation='relu', name='fc1'))
model.add(Dense(4096, activation='relu', name='fc2'))
model.add(Dense(1000, activation='relu', name='fc3'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid', name='fc4'))
model.compile(loss="binary_crossentropy", optimizer="adam")  # single sigmoid output, so binary (not categorical) crossentropy
model.summary()
model.fit(X, y, epochs=10)
model.pop()  # this will remove the last layer
model.summary()  # check the network
feature_mapping = model.predict(X)  # features from the last remaining layer, as a NumPy array
from sklearn import svm
clf = svm.SVC()
clf.fit(feature_mapping, y)
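This also addresses your second question: unlike model.predict, SVC.predict returns hard class labels rather than probabilities. A minimal usage sketch (x_test as in your question):
test_features = model.predict(x_test)  # same truncated network as above
predicted_classes = clf.predict(test_features)  # class labels, not probabilities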

Why am I getting a horizontal line (almost zero) from my neural network instead of the desired curve?

I am trying to use a neural network for my regression problem in Python, but the output of the neural network is a straight horizontal line at zero. I have one input and, obviously, one output.
Here is my code:
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='relu'))
    model.add(Dense(4, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    model.summary()
    return model

# evaluate model
estimator = KerasRegressor(build_fn=baseline_model, epochs=50, batch_size=64, validation_split=0.2, verbose=1)
kfold = KFold(n_splits=10)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
Here are the plots of NN prediction vs. target for both training and test data.
[Plots omitted: Training Data, Test Data]
I have also tried different weight initializers (Xavier and He) with no luck!
I really appreciate your help.
First of all, correct your syntax when adding Dense layers to the model: replace the double equals (==) after kernel_initializer with a single equals (=), like below:
model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='relu'))
Then, to improve performance, do the following:
Increase the number of hidden neurons in the hidden layers.
Increase the number of hidden layers.
If you still have the same problem, then try changing the optimizer and activation function. Tuning the hyperparameters may help you converge to a solution.
EDIT 1
You also have to fit the estimator after cross-validation, like below:
estimator.fit(X_train, y_train)
and then you can test on the test data as follows:
prediction = estimator.predict(X_test)
from sklearn.metrics import mean_squared_error
mean_squared_error(Y_test, prediction)  # regression metric; accuracy_score is only defined for classification
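A minimal sketch of those suggestions applied to baseline_model (the layer widths and he_normal initializer here are hypothetical starting points, not tuned values):
def wider_model():
    # more neurons per layer and one extra hidden layer than baseline_model
    model = Sequential()
    model.add(Dense(32, input_dim=1, kernel_initializer='he_normal', activation='relu'))
    model.add(Dense(16, kernel_initializer='he_normal', activation='relu'))
    model.add(Dense(8, kernel_initializer='he_normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='he_normal'))
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    return model

estimator = KerasRegressor(build_fn=wider_model, epochs=50, batch_size=64, validation_split=0.2, verbose=1)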

Why is the LSTM model not predicting the values properly?

I am using an LSTM model to predict data, but when the model runs, its predictions don't follow the values at the edges.
[Graphed result image omitted]
and here is the LSTM model:
model = Sequential()
model.add(Bidirectional(LSTM(100, activation='relu', input_shape=(n_steps_in, 1))))
model.add(RepeatVector(n_steps_out))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss="mae", metrics=[test_acc])  # test_acc is a custom metric defined elsewhere (not shown)
# fit model
model.fit(X_train, y_train, epochs=7)
Can someone explain why the model doesn't predict the values down to the bottom, or at least get close to it?
P.S.: I have already tried changing the epochs to 100 and other combinations as well.

Overfitting in LSTM even after using regularizers

I have a time series prediction problem and am building an LSTM like below:
def create_model():
    model = Sequential()
    model.add(LSTM(50, kernel_regularizer=l2(0.01), recurrent_regularizer=l2(0.01), bias_regularizer=l2(0.01),
                   input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dropout(0.591))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
When I train the model on 5 splits like below:
tss = TimeSeriesSplit(n_splits=5)
X = data.drop(labels=['target_prediction'], axis=1)
y = data['target_prediction']
for train_index, test_index in tss.split(X):
    train_X, test_X = X.iloc[train_index, :].values, X.iloc[test_index, :].values
    train_y, test_y = y.iloc[train_index].values, y.iloc[test_index].values
    model = create_model()
    history = model.fit(train_X, train_y, epochs=10, batch_size=64, validation_data=(test_X, test_y), verbose=0, shuffle=False)
I get an overfitting problem; the loss graph is attached.
I am not sure why there is overfitting when I use regularizers in my Keras model. Any help is appreciated.
EDIT:
I tried the following architectures:
def create_model():
    model = Sequential()
    model.add(LSTM(20, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def create_model(x, y):
    # define LSTM
    model = Sequential()
    model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(x, y)))
    model.add(TimeDistributed(Dense(1, activation='sigmoid')))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
but it is still overfitting.
First of all, remove all your regularizers and dropout. You are literally throwing every trick out there at the model, and 0.5 dropout is too high.
Reduce the number of units in your LSTM. Start from there. Reach a point where your model stops overfitting.
Then, add dropout if required.
After that, the next step is to add tf.keras.layers.Bidirectional. If you are still not satisfied, then increase the number of layers. Remember to keep return_sequences=True for every LSTM layer except the last one; a sketch of this progression follows below.
I seldom come across networks using layer regularization despite its availability, because dropout and layer regularization have the same effect, and people usually go with dropout (at maximum, I have seen 0.3 being used).
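A minimal sketch of that progression, starting small and only growing the model once it stops overfitting (the unit counts and dropout rate here are hypothetical starting points, not tuned values):
def create_small_model():
    # step 1: a single small LSTM, no regularization
    model = Sequential()
    model.add(LSTM(8, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def create_stacked_model():
    # later steps: bidirectional, stacked layers, light dropout if still needed
    model = Sequential()
    model.add(Bidirectional(LSTM(8, return_sequences=True), input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Bidirectional(LSTM(8)))  # last LSTM layer: return_sequences stays False
    model.add(Dropout(0.2))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model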

Why is the accuracy and loss plot wavy at the end of training?

I was training a word-level sentence generation model, and as training approached the end of the total iterations, the accuracy started to go down and up multiple times, forming a wavy pattern in the history plot. I am unable to understand why this is happening. Is it because of overfitting? Do I need to add some dropout layers to my model?
My model:
def rnn_model():
    model = Sequential()
    model.add(Embedding(uniq_vals, 50, input_length=s_len))
    model.add(SimpleRNN(25, return_sequences=True))
    model.add(SimpleRNN(25))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(uniq_vals, activation='softmax'))
    return model

model = rnn_model()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
I have also set batch_size = 128 and epochs = 150.
Accuracy and loss plot: [image omitted]
