1.
model = Model(inPut, outputs=outPut)
model.compile(loss="mse", optimizer="adam")
for i in range(10):
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    #save model
2.
model = Model(inPut, outputs=outPut)
for i in range(10):
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    #save model
3.
for i in range(10):
    model = Model(inPut, outputs=outPut)
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    #save model
I am studying neural networks in Keras. I want each iteration to be an independent training run, but the second (third, fourth, ...) iteration's model.fit just continues from the first one: in all three snippets the loss keeps decreasing across iterations instead of starting over. Could you tell me how to make each iteration a separate training run, so that each iteration produces a different model?
If your intent is to create differently trained models while keeping the same architecture, then your code snippet #3 is the one you're after: you create a new model with the same architecture, compile the network, and then train it. After each iteration, make sure you save the model.
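One caveat worth noting: with the functional API, the layers themselves have to be rebuilt on every iteration for the weights to be re-initialized; wrapping Model(...) around tensors that were created once outside the loop reuses the same layers and their weights. A minimal sketch of the pattern (build_model, the input shape, and the filename pattern are placeholders):
from keras.layers import Input, Dense
from keras.models import Model

def build_model():
    # rebuild the layers so every model starts from freshly initialized weights
    inPut = Input(shape=(num_features,))   # hypothetical input shape
    outPut = Dense(1)(inPut)               # hypothetical architecture
    return Model(inPut, outputs=outPut)

for i in range(10):
    model = build_model()
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    model.save("model_{}.h5".format(i))    # hypothetical filename pattern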
I am training a model and want to progressively freeze layers while continuing training with the already learned weights. I tried to find something on this but can't seem to find anything. I looked at this post but it wasn't much help. Can someone please confirm that I am doing this correctly? This is using Keras.
model = create_model()
model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
mdl_fit = model.fit_generator(
    train_dataset, steps_per_epoch=len(train_dataset),
    callbacks=[early_stopping_monitor],
    epochs=n_epochs, verbose=1, validation_data=test_dataset
)
for x in range(4):
    for layer in model.layers[100-((x+1)*20):100-(x*20)]:
        layer.trainable = True
    model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
    mdl_fit = model.fit_generator(
        train_dataset, steps_per_epoch=len(train_dataset),
        # , callbacks=[early_stopping_monitor]
        epochs=n_epochs, verbose=1, validation_data=test_dataset
    )
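For what it's worth, here is a minimal sketch of the usual freeze-then-recompile pattern (the slice indices are hypothetical): setting trainable=False is what actually freezes a layer, the change only takes effect after recompiling, and recompiling keeps the already learned weights.
n_per_step = 20  # hypothetical number of layers to freeze per step
for x in range(4):
    # freeze the next block of layers (trainable=False means frozen)
    for layer in model.layers[x * n_per_step:(x + 1) * n_per_step]:
        layer.trainable = False
    # recompile so the trainable change takes effect; learned weights are preserved
    model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
    mdl_fit = model.fit_generator(
        train_dataset, steps_per_epoch=len(train_dataset),
        epochs=n_epochs, verbose=1, validation_data=test_dataset
    )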
I would like to save a Keras model using KerasPickleWrapper in .sav format and send it to another person so they can make predictions.
I scaled my data, but the other party will not. I was wondering if there is a way to wrap the scaling procedure into the model, so the other party would not need to deal with the scaling (or its inverse) when making predictions.
Here is my logic behind the question (I don't know whether this assumption is correct): if I fit the model with the scaled training data, my model will not perform well once the other person tries to make predictions using un-scaled data.
#scale data:
scaler.fit(X_train)
X_train=scaler.fit_transform(X_train)
Y_train=scaler.fit_transform(Y_train)
X_test=scaler.fit_transform(X_test)
Y_test=scaler.fit_transform(Y_test)
#Model
optimizer = keras.optimizers.Adam(lr=0.0001)
model = Sequential()
model.add(Dense(1, input_dim=1, activation='relu'))
model.add(Dense(10583, activation='relu'))
model.add(Dense(1, activation='linear'))
#compile model
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse'])
#wrap model
mw = KerasPickleWrapper(model)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
#Fit model
history= mw().fit(X_train_Xaxis, Y_train_Xaxis, epochs=100, batch_size=32, validation_split=0.2, validation_data=None, verbose=1, callbacks=[callback])
#Save Model
import pickle
filename = 'model.sav'
pickle.dump(mw, open(filename, 'wb'))
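One way to avoid shipping the scaler separately is to bake the input scaling into the model itself with a Normalization preprocessing layer adapted on the raw training inputs. A minimal sketch, assuming TF 2.6+ and hypothetical data shapes; note that if the targets are scaled as well, predictions would still come back in the scaled target space:
import numpy as np
import tensorflow as tf
from tensorflow import keras

# hypothetical raw (un-scaled) training data
X_train_raw = np.random.rand(1000, 1).astype("float32")
Y_train_raw = np.random.rand(1000, 1).astype("float32")

# learn the input scaling from the raw data and store it inside the model
norm = tf.keras.layers.Normalization(axis=-1)
norm.adapt(X_train_raw)

model = keras.Sequential([
    keras.Input(shape=(1,)),
    norm,                                     # the scaling now travels with the model
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1, activation='linear'),
])
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam(0.0001))
model.fit(X_train_raw, Y_train_raw, epochs=5, batch_size=32, verbose=0)

# the recipient can now call model.predict on raw, un-scaled inputs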
I use the following code when training a model in keras
from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Dense(100, activation='relu', input_shape = input_shape))
model.add(Dense(1))
model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)
model.predict(X_test)
but recently I wanted to save the best trained model, since the data I am training on produces a lot of spikes in the val_loss-vs-epochs graph and I want to use the best model obtained so far.
Is there any method or function to help with that?
EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.
You should set save_best_only=True in ModelCheckpoint. Any other adjustments needed are trivial.
To help you further, you can see an example usage here on Kaggle.
Adding the code here in case the above Kaggle example link is not available:
model = getModel()
model.summary()
batch_size = 32
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')
model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
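Once training finishes, the best weights saved by mcp_save can be loaded back before evaluating or predicting (a minimal sketch of the usual follow-up; X_val is a hypothetical held-out set):
model.load_weights('.mdl_wts.hdf5')  # restore the best weights saved by ModelCheckpoint
preds = model.predict(X_val)         # X_val is hypothetical held-out data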
EarlyStopping's restore_best_weights argument will do the trick:
restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.
I'm not sure how your early_stopping_monitor is defined, but going with all the default settings, and seeing that you have already imported EarlyStopping, you could do this:
early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=True
)
And then just call model.fit() with callbacks=[early_stopping_monitor] like you already do.
I guess model_2.compile was a typo.
This should help if you want to save the best model with respect to val_loss:
checkpoint = ModelCheckpoint('model-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss', save_best_only=True, mode='auto')
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[checkpoint], verbose=False)
I have:
Multiple time series as INPUT
A forecast time series point as OUTPUT
How can I be sure that the model makes its predictions by using the dependencies between all the time series in the input?
Edit 1
My current model:
model = Sequential()
model.add(keras.layers.LSTM(hidden_nodes, input_dim=num_features, input_length=window, consume_less="mem"))
model.add(keras.layers.Dense(num_features, activation='sigmoid'))
optimizer = keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
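The snippet stops before compiling; a minimal compile step using the optimizer defined above (assuming a mean-squared-error regression loss) would look like:
model.compile(loss='mean_squared_error', optimizer=optimizer)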
By default LSTM layer in keras (and any other type of recurrent layer) is not stateful, and hence the states are reset every time a new input is fed into the network. Your code uses this default version. If you want, you can make it stateful by specifying stateful=True inside the LSTM layer, and then the states will not be reset. You can read more about the relevant syntax here, and this blog post provides more information regarding the stateful mode.
Here is an example of the corresponding syntax, taken from here:
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
# make predictions
trainPredict = model.predict(trainX, batch_size=batch_size)
model.reset_states()
testPredict = model.predict(testX, batch_size=batch_size)
I have just started using Keras and was trying to train a model with the Keras deep learning kit. It works until the epochs have run, but it crashes right after that.
np.random.seed(1778) # for reproducibility
need_normalise=True
need_validataion=True
nb_epoch=2#8
#Creating model
model = Sequential()
model.add(Dense(512, input_shape=(dims,)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
opt=Adadelta(lr=1,decay=0.995,epsilon=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt)
auc_scores=[]
best_score=-1
best_model=None
print('Training model...')
if need_validataion:
    for i in range(nb_epoch):
        #early_stopping=EarlyStopping(monitor='val_loss', patience=0, verbose=1)
        #model.fit(X_train, y_train, nb_epoch=nb_epoch,batch_size=256,validation_split=0.01,callbacks=[early_stopping])
        model.fit(X_train, y_train, nb_epoch=2, batch_size=256, validation_split=0.15)
        y_pre = model.predict_proba(X_valid)
        scores = roc_auc_score(y_valid, y_pre)
        auc_scores.append(scores)
        print(i, scores)
        if scores > best_score:
            best_score = scores
            best_model = model
    plt.plot(auc_scores)
    plt.show()
else:
    model.fit(X_train, y_train, nb_epoch=nb_epoch, batch_size=256)
    y_pre = model.predict_proba(X_test)[:, 1]
    print(roc_auc_score(y_test, y_pre))
Error received:
I have pasted it here; please have a look:
http://pastebin.com/dSw9ckkk
It looks like you have two classes, a positive class and a negative class, so that the positive class labels are 1 minus the negative class labels. In that case, you can discard the negative class labels and make it a single-class problem:
model.add(Dense(1, activation='sigmoid')) # instead of Dense(nb_classes) and Activation('softmax')
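A minimal sketch of that single-output setup, assuming the labels are one-hot over the two classes so the positive-class column can be used directly as the binary target:
# use the positive-class column as the single binary target (assumes one-hot labels)
y_train_bin = y_train[:, 1]
y_valid_bin = y_valid[:, 1]

model = Sequential()
model.add(Dense(512, input_shape=(dims,)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))    # single sigmoid output instead of softmax
model.compile(loss='binary_crossentropy', optimizer=opt)

model.fit(X_train, y_train_bin, nb_epoch=2, batch_size=256, validation_split=0.15)
y_pre = model.predict(X_valid).ravel()       # predicted probabilities of the positive class
print(roc_auc_score(y_valid_bin, y_pre))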
Alternatively, you can still train the model on both classes and just use the positive class in the AUC calculation:
roc_auc_score(y_test[:, 1],y_pre[:, 1])