Progressively Freeze Layers as Network Trains - python

I am training a model and want to progressively freeze layers, continuing training with the already learned weights. I tried to find something on this but can't seem to find anything. I looked at this post, but it wasn't much help. Can someone please confirm that I am doing this correctly? This is using Keras.
model = create_model()
model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
mdl_fit = model.fit_generator(
    train_dataset, steps_per_epoch=len(train_dataset),
    callbacks=[early_stopping_monitor],
    epochs=n_epochs, verbose=1, validation_data=test_dataset
)

for x in range(4):
    for layer in model.layers[100 - ((x + 1) * 20):100 - (x * 20)]:
        layer.trainable = True
    model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
    mdl_fit = model.fit_generator(
        train_dataset, steps_per_epoch=len(train_dataset),
        # callbacks=[early_stopping_monitor],
        epochs=n_epochs, verbose=1, validation_data=test_dataset
    )
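
For reference, a common variant of this pattern (a minimal sketch, assuming a model of roughly 100 layers and that the intent is to start with the upper layers frozen and unfreeze them in blocks of 20) freezes the layers before the first training run and recompiles after every change, since trainable changes only take effect at compile time. model.fit is used in place of the deprecated fit_generator:

# Sketch: freeze layers 20..99 up front, then unfreeze them in
# blocks of 20 (topmost block first) and fine-tune after each change.
# The layer indices are illustrative; adjust them to your network.
model = create_model()
for layer in model.layers[20:]:
    layer.trainable = False  # freeze before the first fit

model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
model.fit(train_dataset, epochs=n_epochs, verbose=1, validation_data=test_dataset)

for x in range(4):
    for layer in model.layers[100 - (x + 1) * 20:100 - x * 20]:
        layer.trainable = True  # unfreeze the next block
    # recompile so the trainable change takes effect
    model.compile(loss=loss, optimizer=opt_rms, metrics=['acc'])
    model.fit(train_dataset, epochs=n_epochs, verbose=1, validation_data=test_dataset)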

Related

How can I save/overwrite my TensorFlow/Keras model ONLY when validation accuracy improves?

There is a lot already out there about saving models, but I'm struggling to work out how I can save my model only when it improves upon val_accuracy. My model looks like this:
model = keras.Sequential([
    keras.layers.Embedding(numberOfWords,
                           embedding_vector_length, input_length=1000),
    keras.layers.LSTM(128),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=200, batch_size=32,
          validation_data=(x_test, y_test))
During training, I want to save the model after the first epoch. Then, after every epoch, if val_accuracy has been improved upon, I want to overwrite the old model with the new one.
How do I do this?
You just have to define a callback list and pass it into the model.fit call (Keras_fit). In this example it just saves the best weights, so it actually overwrites the old ones, and saves them in HDF5 format. Hope that solves your problem :)
from keras.callbacks import ModelCheckpoint

filepath = "weights.best.hdf5"
# note: on newer Keras/TF versions the logged metric is named 'val_accuracy'
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(x_train, y_train, epochs=200, callbacks=callbacks_list, batch_size=32,
          validation_data=(x_test, y_test))
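
After training, the best checkpoint can be loaded back before evaluating. A minimal sketch, reusing the filepath from the snippet above:

# restore the best weights saved by the ModelCheckpoint above
model.load_weights("weights.best.hdf5")
loss, acc = model.evaluate(x_test, y_test)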

How to use a well-trained model as input to another model?

I'll start with the code and then ask my question.
model1_input = keras.Input(shape=(5, 10))
x = layers.Dense(16, activation='relu')(model1_input)
model1_output = layers.Dense(4)(x)
model1 = keras.Model(model1_input, model1_output, name='model1')
model1.summary()

# ----
model2_input = keras.Input(shape=(5, 10))
y = layers.Dense(16, activation='relu')(model2_input)
model2_output = layers.Dense(4)(y)
model2 = keras.Model(model2_input, model2_output, name='model2')
model2.summary()

# ----
model3_input = keras.Input(shape=(5, 10))
layer1 = model1(model3_input)
layer2 = model2(layer1)
model3_output = layers.Dense(1)(layer2)
model3 = keras.Model(model3_input, model3_output, name='model3')
model3.summary()
model3.compile(loss='mse', optimizer='adam')
model3.fit(inputs, outputs, epochs=10, batch_size=32)
When this code executes, what will happen to the model1 and model2 weights? Will they stay untrained?
I would like to use the trained model1 and trained model2 predictions to train model3. Can I write something like this?
model1_input = keras.Input(shape=(5, 10))
x = layers.Dense(16, activation='relu')(model1_input)
model1_output = layers.Dense(4)(x)
model1 = keras.Model(model1_input, model1_output, name='model1')
model1.summary()
model1.compile(loss='mse', optimizer='adam')
model1.fit(model1_inputs, model1_outputs, epochs=10, batch_size=32)

# ----
model2_input = keras.Input(shape=(5, 10))
y = layers.Dense(16, activation='relu')(model2_input)
model2_output = layers.Dense(4)(y)
model2 = keras.Model(model2_input, model2_output, name='model2')
model2.summary()
model2.compile(loss='mse', optimizer='adam')
model2.fit(model2_inputs, model2_outputs, epochs=10, batch_size=32)

# ----
model3_input = keras.Input(shape=(5, 10))
layer1 = model1(model3_input)
layer2 = model2(layer1)
model3_output = layers.Dense(1)(layer2)
model3 = keras.Model(model3_input, model3_output, name='model3')
model3.summary()
model3.compile(loss='mse', optimizer='adam')
model3.fit(inputs, outputs, epochs=10, batch_size=32)
I'm afraid that when I train model3, it will change the already trained weights of models 1 and 2. In that case, what will happen to the weights of models 1 and 2?
I am not sure if Keras works that way, but even if it does, it will still change the weights as long as the layers are trainable. Try freezing the layers. These links might help you: 1, 2.
Another option would be to branch the layers like this.
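
For illustration, a minimal sketch of the freezing approach (assuming model1 and model2 have already been trained as in the second snippet): mark the sub-models as non-trainable before compiling model3, so that training model3 only updates its final Dense layer.

# freeze the pretrained sub-models so training model3
# leaves their weights untouched
model1.trainable = False
model2.trainable = False

model3_input = keras.Input(shape=(5, 10))
layer1 = model1(model3_input)
layer2 = model2(layer1)
model3_output = layers.Dense(1)(layer2)
model3 = keras.Model(model3_input, model3_output, name='model3')
model3.compile(loss='mse', optimizer='adam')  # compile after freezing
model3.fit(inputs, outputs, epochs=10, batch_size=32)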

Does model.fit() initialize the model's weights in Keras?

1.
model = Model(inPut, outputs=outPut)
model.compile(loss="mse", optimizer="adam")
for i in range(10):
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    # save model
2.
model = Model(inPut, outputs=outPut)
for i in range(10):
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    # save model
3.
for i in range(10):
    model = Model(inPut, outputs=outPut)
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    # save model
I am studying neural networks in Keras.
I want each iteration to be an independent training run, but the second (third, fourth, ...) iteration's training (model.fit) just continues from the first one: the loss keeps decreasing across iterations in all three snippets.
Could you tell me how to make each iteration an independent training run, so that each iteration's model is a separate model?
If it is your intent to create differently trained models but maintain the same architecture, then your code snippet for #3 is the one you're after. You create a new model using the same architecture, configure the neural network then train it. After each iteration, make sure you save the model.
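
For example, a sketch of snippet #3 with the save step filled in (the filename pattern is illustrative):

for i in range(10):
    model = Model(inPut, outputs=outPut)  # fresh, randomly initialized weights
    model.compile(loss="mse", optimizer="adam")
    model.fit(dataX, dataY, epochs=EPOCH, batch_size=BATCHSIZE, verbose=0, shuffle=False)
    model.save("model_{}.h5".format(i))  # save each independently trained model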

Saving best model in keras

I use the following code when training a model in keras
from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(1))
model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4,
          callbacks=[early_stopping_monitor], verbose=False)
model.predict(X_test)
but recently I wanted to get the best trained model saved, as the data I am training on produces a lot of peaks in the "val_loss vs. epochs" graph, and I want to use the best model obtained so far.
Is there any method or function to help with that?
EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.
You should set save_best_only=True in ModelCheckpoint; any other adjustments needed are trivial.
Just to help you further, you can see a usage example here on Kaggle.
Adding the code here in case the above Kaggle example link is not available:
model = getModel()
model.summary()
batch_size = 32
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')
model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
EarlyStopping's restore_best_weights argument will do the trick:
restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.
I'm not sure how your early_stopping_monitor is defined, but going with all the default settings, and seeing that you already imported EarlyStopping, you could do this:
early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=True
)
And then just call model.fit() with callbacks=[early_stopping_monitor] like you already do.
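
Concretely, using the question's own variables, the training call stays the same:

model.fit(X, y, epochs=15, validation_split=0.4,
          callbacks=[early_stopping_monitor], verbose=False)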
I guess model_2.compile was a typo.
This should help if you want to save the best model with respect to val_loss:
checkpoint = ModelCheckpoint('model-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5',
                             verbose=1, monitor='val_loss', save_best_only=True, mode='auto')
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[checkpoint], verbose=False)

Don't know what's going wrong while training the dataset. Keras Model Fit

I have just started using Keras and was trying to train a model using the Keras deep learning kit. It works while the epochs are running but crashes just after they finish.
np.random.seed(1778)  # for reproducibility
need_normalise = True
need_validataion = True
nb_epoch = 2  # 8

# Creating model
model = Sequential()
model.add(Dense(512, input_shape=(dims,)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

opt = Adadelta(lr=1, decay=0.995, epsilon=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt)

auc_scores = []
best_score = -1
best_model = None
print('Training model...')

if need_validataion:
    for i in range(nb_epoch):
        # early_stopping = EarlyStopping(monitor='val_loss', patience=0, verbose=1)
        # model.fit(X_train, y_train, nb_epoch=nb_epoch, batch_size=256, validation_split=0.01, callbacks=[early_stopping])
        model.fit(X_train, y_train, nb_epoch=2, batch_size=256, validation_split=0.15)
        y_pre = model.predict_proba(X_valid)
        scores = roc_auc_score(y_valid, y_pre)
        auc_scores.append(scores)
        print(i, scores)
        if scores > best_score:
            best_score = scores
            best_model = model
    plt.plot(auc_scores)
    plt.show()
else:
    model.fit(X_train, y_train, nb_epoch=nb_epoch, batch_size=256)
    y_pre = model.predict_proba(X_test)[:, 1]
    print(roc_auc_score(y_test, y_pre))
Error received:
I have pasted it over here. Please have a look at it.
http://pastebin.com/dSw9ckkk
It looks like you have two classes, a positive class and a negative class, so that the positive class labels are 1 minus the negative class labels. In that case, you can discard the negative class labels and make it a single-class problem:
model.add(Dense(1, activation='sigmoid'))  # instead of Dense(nb_classes) and Activation('softmax')
Alternatively, you can still train the model on both classes and just use the positive class in the AUC calculation:
roc_auc_score(y_test[:, 1],y_pre[:, 1])
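
Putting the single-output suggestion together, a sketch of the modified model definition (assuming the positive-class labels form a single 0/1 column, e.g. y_train[:, 1] if the labels are currently two-column):

# single sigmoid output instead of nb_classes softmax outputs
model = Sequential()
model.add(Dense(512, input_shape=(dims,)))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt)

# train on the positive-class column only
model.fit(X_train, y_train[:, 1], nb_epoch=2, batch_size=256, validation_split=0.15)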
