I'd like to monitor, e.g., the learning rate during training in Keras, both in the progress bar and in TensorBoard. I figure there must be a way to specify which variables are logged, but there's no immediate clarification on this issue on the Keras website.
I guess it has something to do with creating a custom Callback; however, it should be possible to modify the already existing progress bar callback, no?
It can be achieved via a custom metric. Take the learning rate as an example:
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler, TensorBoard

def get_lr_metric(optimizer):
    # A metric that simply reports the optimizer's current learning rate
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)

optimizer = Adam(lr=0.001)
lr_metric = get_lr_metric(optimizer)
model.compile(loss='binary_crossentropy', optimizer=optimizer,
              metrics=['acc', lr_metric])

# Reduce the learning rate by half every 2 epochs
cbks = [LearningRateScheduler(lambda epoch: 0.001 * 0.5 ** (epoch // 2)),
        TensorBoard(write_graph=False)]

X = np.random.rand(1000, 50)
Y = np.random.randint(2, size=1000)
model.fit(X, Y, epochs=10, callbacks=cbks)
The LR will be printed in the progress bar:
Epoch 1/10
1000/1000 [==============================] - 0s 103us/step - loss: 0.8228 - acc: 0.4960 - lr: 0.0010
Epoch 2/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7305 - acc: 0.4970 - lr: 0.0010
Epoch 3/10
1000/1000 [==============================] - 0s 62us/step - loss: 0.7145 - acc: 0.4730 - lr: 5.0000e-04
Epoch 4/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7129 - acc: 0.4800 - lr: 5.0000e-04
Epoch 5/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7124 - acc: 0.4810 - lr: 2.5000e-04
Epoch 6/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7123 - acc: 0.4790 - lr: 2.5000e-04
Epoch 7/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7119 - acc: 0.4840 - lr: 1.2500e-04
Epoch 8/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7117 - acc: 0.4880 - lr: 1.2500e-04
Epoch 9/10
1000/1000 [==============================] - 0s 59us/step - loss: 0.7116 - acc: 0.4880 - lr: 6.2500e-05
Epoch 10/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7115 - acc: 0.4880 - lr: 6.2500e-05
Then, you can visualize the LR curve in TensorBoard.
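In newer, TF2-style code, the same idea can be written as a small callback instead of a metric. This is a minimal sketch (the class name LRLogger is made up): it injects the current learning rate into logs, where the progress bar and any TensorBoard callback placed after it in the callbacks list will pick it up:
import tensorflow as tf

class LRLogger(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Read the optimizer's learning-rate variable as a Python float
        logs["lr"] = float(tf.keras.backend.get_value(
            self.model.optimizer.learning_rate))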
Another way (in fact, the encouraged one) to pass custom values to TensorBoard is by subclassing the keras.callbacks.TensorBoard class. This allows you to apply custom functions to obtain the desired metrics and pass them directly to TensorBoard.
Here is an example for the learning rate of the Adam optimizer:
import numpy as np
from keras import backend as K
from keras.callbacks import TensorBoard

class SubTensorBoard(TensorBoard):
    def __init__(self, *args, **kwargs):
        super(SubTensorBoard, self).__init__(*args, **kwargs)

    def lr_getter(self):
        # Get the optimizer's variables
        decay = self.model.optimizer.decay
        lr = self.model.optimizer.lr
        iters = self.model.optimizer.iterations  # only this one is not constant
        beta_1 = self.model.optimizer.beta_1
        beta_2 = self.model.optimizer.beta_2
        # Recompute Adam's effective learning rate at the current iteration
        lr = lr * (1. / (1. + decay * K.cast(iters, K.dtype(decay))))
        t = K.cast(iters, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(beta_2, t)) / (1. - K.pow(beta_1, t)))
        return np.float32(K.eval(lr_t))

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        logs.update({"lr": self.lr_getter()})
        super(SubTensorBoard, self).on_epoch_end(epoch, logs)
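Usage is then a drop-in replacement for the stock TensorBoard callback (the log_dir path here is just a placeholder):
model.fit(X, Y, epochs=10,
          callbacks=[SubTensorBoard(log_dir='./logs', write_graph=False)])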
I've come to this question because I wanted to log more variables in the Keras progress bar. This is the way I did it after reading the answers here:
import tensorflow as tf

class UpdateMetricsCallback(tf.keras.callbacks.Callback):
    def on_batch_end(self, batch, logs):
        logs.update({'my_batch_metric': 0.1, 'my_other_batch_metric': 0.2})

    def on_epoch_end(self, epoch, logs):
        logs.update({'my_epoch_metric': 0.1, 'my_other_epoch_metric': 0.2})

model.fit(...,
          callbacks=[UpdateMetricsCallback()]
)
I hope it helps others.
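As a hypothetical check (assuming a compiled model and data such as the X, Y from the first answer): since Keras appends its History callback after user callbacks, the injected keys should also show up in the training history, not only in the progress bar.
history = model.fit(X, Y, epochs=2, callbacks=[UpdateMetricsCallback()])
# Expected to include 'my_epoch_metric' and 'my_other_epoch_metric'
print(history.history.keys())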
I'm trying to create a small transformer model with Keras to model stock prices, based on this tutorial from the Keras docs. The problem is, my test loss is massive and barely changes between epochs, unsurprisingly resulting in severe underfitting, with my outputs all being the same arbitrary value.
My code is below:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def transformer_encoder_block(inputs, head_size, num_heads, filters, dropout=0):
    # Normalization and attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs

    # Feed-forward part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=filters, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res

data = ...

input = np.array(
    keras.preprocessing.sequence.pad_sequences(data["input"], padding="pre", dtype="float32"))
output = np.array(
    keras.preprocessing.sequence.pad_sequences(data["output"], padding="pre", dtype="float32"))
# Input shape: (723, 36, 22)
# Output shape: (723, 36, 1)

# Train data
train_features = input[100:]
train_labels = output[100:]
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=3)

# Test data
test_features = input[:100]
test_labels = output[:100]
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes=3)

inputs = keras.Input(shape=(None, 22), dtype="float32", name="inputs")
# Ignore padding in inputs
x = layers.Masking(mask_value=0)(inputs)
x = transformer_encoder_block(x, head_size=64, num_heads=16, filters=3, dropout=0.2)
# Multiclass = softmax (decrease, no change, increase)
outputs = layers.TimeDistributed(layers.Dense(3, activation="softmax", name="outputs"))(x)

# Create model
model = keras.Model(inputs=inputs, outputs=outputs)
# Compile model
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
              metrics=['accuracy'])
# Train model
history = model.fit(train_features, train_labels, epochs=10, batch_size=32)

# Evaluate on the test data
test_loss = model.evaluate(test_features, test_labels, verbose=0)
print("Test loss:", test_loss)
out = model.predict(test_features)
After padding, input has shape (723, 36, 22), and output has shape (723, 36, 1) (before converting output to one-hot, after which there are 3 output classes).
Here's an example output for ten epochs (trust me, more than ten doesn't make it better):
Epoch 1/10
20/20 [==============================] - 2s 62ms/step - loss: 10.7436 - accuracy: 0.3335
Epoch 2/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7083 - accuracy: 0.3354
Epoch 3/10
20/20 [==============================] - 1s 60ms/step - loss: 10.6555 - accuracy: 0.3392
Epoch 4/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7846 - accuracy: 0.3306
Epoch 5/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7600 - accuracy: 0.3322
Epoch 6/10
20/20 [==============================] - 1s 59ms/step - loss: 10.7074 - accuracy: 0.3358
Epoch 7/10
20/20 [==============================] - 1s 59ms/step - loss: 10.6569 - accuracy: 0.3385
Epoch 8/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7767 - accuracy: 0.3314
Epoch 9/10
20/20 [==============================] - 1s 61ms/step - loss: 10.7346 - accuracy: 0.3341
Epoch 10/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7093 - accuracy: 0.3354
Test loss: [10.073813438415527, 0.375]
4/4 [==============================] - 0s 22ms/step
Using the same data on a simple LSTM model with the same shape yielded a desirable prediction with a steadily decreasing loss.
Tweaking the learning rate appears to have no effect, nor does stacking more transformer_encoder_block()s.
If anyone has any suggestions for how I can solve this, please let me know.
I am training an image captioning model. It consists of two other models, a BERT and an Xception model, which I train in parallel. Training seems fine until around epoch 10, then the loss starts increasing. The code and parameters of this model are as follows.
import math
import os

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras
from tensorflow.keras.callbacks import LearningRateScheduler

num_epochs = 20  # In practice, train for at least 30 epochs
batch_size = 1

vision_encoder = create_vision_encoder(num_projection_layers=1, projection_dims=256, dropout_rate=0.1)
text_encoder = create_text_encoder(num_projection_layers=1, projection_dims=256, dropout_rate=0.1)
dual_encoder = DualEncoder(text_encoder, vision_encoder, temperature=0.05)
dual_encoder.compile(
    optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001),
    # run_eagerly=True
)

def step_decay(epoch):
    initial_lrate = 0.001
    drop = 0.005
    epochs_drop = 10.0
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate

lrate = LearningRateScheduler(step_decay)
callbacks_list = [lrate]
print(f"Number of GPUs: {len(tf.config.list_physical_devices('GPU'))}")
print(f"Number of examples (caption-image pairs): {train_example_count}")
print(f"Batch size: {batch_size}")
print(f"Steps per epoch: {int(np.ceil(train_example_count / batch_size))}")
train_dataset = get_dataset(os.path.join(tfrecords_dir, "train-*.tfrecord"), batch_size)
valid_dataset = get_dataset(os.path.join(tfrecords_dir, "valid-*.tfrecord"), batch_size)
# Create a reduce-learning-rate-on-plateau callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
history = dual_encoder.fit(
    train_dataset,
    epochs=num_epochs,
    validation_data=valid_dataset,
    callbacks=[reduce_lr, early_stopping, callbacks_list],
)
print("Training completed. Saving vision and text encoders...")
vision_encoder.save("/content/drive/MyDrive/vision_encoder")
text_encoder.save("/content/drive/MyDrive/text_encoder")
print("Models are saved.")
Number of GPUs: 1
Number of examples (caption-image pairs): 3500
Batch size: 1
Steps per epoch: 3500
Epoch 1/20
3500/3500 [==============================] - 217s 62ms/step - loss: 5.1028e-04 - val_loss: 1.9643e-04 - lr: 0.0010
Epoch 2/20
3500/3500 [==============================] - 218s 62ms/step - loss: 8.8274e-05 - val_loss: 3.3228e-05 - lr: 0.0010
Epoch 3/20
3500/3500 [==============================] - 220s 63ms/step - loss: 0.3582 - val_loss: 4.2012e-04 - lr: 0.0010
Epoch 4/20
3500/3500 [==============================] - 216s 62ms/step - loss: 9.6259e-04 - val_loss: 3.7130e-05 - lr: 0.0010
Epoch 5/20
3500/3500 [==============================] - 213s 61ms/step - loss: 1.7488e-05 - val_loss: 6.3365e-06 - lr: 2.0000e-04
Epoch 6/20
3500/3500 [==============================] - 208s 59ms/step - loss: 2.9985e-06 - val_loss: 1.0982e-06 - lr: 0.0010
Epoch 7/20
3500/3500 [==============================] - 207s 59ms/step - loss: 1.0761 - val_loss: 0.0212 - lr: 0.0010
Epoch 8/20
3500/3500 [==============================] - 211s 60ms/step - loss: 0.0062 - val_loss: 4.6654e-05 - lr: 2.0000e-04
Epoch 9/20
3499/3500 [============================>.] - ETA: 0s - loss: 2.2375e-05
Epoch 10/20
3500/3500 [==============================] - 210s 60ms/step - loss: 234.2512 - val_loss: 309.9704 - lr: 5.0000e-06
Epoch 11/20
3500/3500 [==============================] - 211s 60ms/step - loss: 310.0370 - val_loss: 309.7400 - lr: 1.0000e-06
Training completed. Saving vision and text encoders...
WARNING:absl:Found untraced functions such as restored_function_body, restored_function_body, restored_function_body, restored_function_body, restored_function_body while saving (showing 5 of 124). These functions will not be directly callable after loading.
Models are saved.
Hi, your loss is high because of the learning rate, which drops to 5e-06 and 1e-06 in those last epochs. Try adjusting it, and consider keeping it somewhere between 1e-4 and 0.01: if the learning rate is very small and the model starts from a point where the loss is huge, it will be difficult for the model to adjust its coefficients.
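For reference, here is a quick check of what the question's step_decay schedule produces (a sketch that just restates the formula above); it confirms the collapse to 5e-06 right at the tenth epoch:
import math

def step_decay(epoch):
    return 0.001 * math.pow(0.005, math.floor((1 + epoch) / 10.0))

for epoch in (0, 8, 9, 19):
    print(epoch, step_decay(epoch))
# ~0.001, ~0.001, ~5e-06, ~2.5e-08
One hedged way to follow the advice above is a gentler drop with a floor (both values here are guesses, not tuned):
def step_decay_floored(epoch):
    lrate = 0.001 * math.pow(0.5, math.floor((1 + epoch) / 10.0))
    return max(lrate, 1e-4)  # never go below 1e-4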
I am trying to improve my model training performance following the Better performance with the tf.data API guideline. However, I have observed that the performance using .cache() is almost the same, or even worse, compared to the same settings without .cache().
import tensorflow as tf

datafile_list = load_my_files()
RAW_BYTES = 403 * 4
BATCH_SIZE = 32

raw_dataset = tf.data.FixedLengthRecordDataset(filenames=datafile_list,
                                               record_bytes=RAW_BYTES,
                                               num_parallel_reads=10,
                                               buffer_size=1024 * RAW_BYTES)
raw_dataset = raw_dataset.map(tf.autograph.experimental.do_not_convert(decode_and_prepare),
                              num_parallel_calls=tf.data.AUTOTUNE)
raw_dataset = raw_dataset.cache()
raw_dataset = raw_dataset.shuffle(buffer_size=1024)
raw_dataset = raw_dataset.batch(BATCH_SIZE)
raw_dataset = raw_dataset.prefetch(tf.data.AUTOTUNE)
The files in datafile_list hold 9.92 GB in total, which easily fits in the system's total physical RAM (100 GB). System swap is disabled.
By training the model using the dataset:
model = build_model()
model.fit(raw_dataset, epochs=5, verbose=2)
results in:
Epoch 1/5
206247/206247 - 126s - loss: 0.0043 - mae: 0.0494 - mse: 0.0043
Epoch 2/5
206247/206247 - 125s - loss: 0.0029 - mae: 0.0415 - mse: 0.0029
Epoch 3/5
206247/206247 - 129s - loss: 0.0027 - mae: 0.0397 - mse: 0.0027
Epoch 4/5
206247/206247 - 125s - loss: 0.0025 - mae: 0.0386 - mse: 0.0025
Epoch 5/5
206247/206247 - 125s - loss: 0.0024 - mae: 0.0379 - mse: 0.0024
This result is frustrating. By the docs:
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
And from this guide:
When iterating over this dataset, the second iteration will be much faster than the first one thanks to the caching.
However, the elapsed time taken by each epoch is almost the same. In addition, during training both CPU and GPU usage are very low (see the images below).
By commenting out the line raw_dataset = raw_dataset.cache(), the results do not show any notable difference:
Epoch 1/5
206067/206067 - 129s - loss: 0.0042 - mae: 0.0492 - mse: 0.0042
Epoch 2/5
206067/206067 - 127s - loss: 0.0028 - mae: 0.0412 - mse: 0.0028
Epoch 3/5
206067/206067 - 134s - loss: 0.0026 - mae: 0.0393 - mse: 0.0026
Epoch 4/5
206067/206067 - 127s - loss: 0.0024 - mae: 0.0383 - mse: 0.0024
Epoch 5/5
206067/206067 - 126s - loss: 0.0023 - mae: 0.0376 - mse: 0.0023
As pointed out in the docs, my expectation was that using cache would result in much faster training. I would like to know what I am doing wrong.
Attachments
GPU usage during training using cache:
GPU usage during training WITHOUT cache:
System Stats (Memory, CPU etc) during training using cache:
System Stats (Memory, CPU etc) during training WITHOUT cache:
Just a small observation using Google Colab. According to the docs:
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
And
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
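As a tiny illustration of those two notes (the dataset and map function here are arbitrary):
import tensorflow as tf

ds = tf.data.Dataset.range(5).map(lambda x: x * 2).cache()
print(list(ds.as_numpy_iterator()))  # a full pass finalizes the cache
# Shuffle AFTER cache, so every epoch still sees a fresh order:
print(list(ds.shuffle(5).as_numpy_iterator()))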
I did notice a few differences when using caching and iterating over the dataset beforehand. Here is an example.
Prepare data:
import random
import struct

import numpy as np
import tensorflow as tf

RAW_N = 2 + 20*20 + 1

bytess = random.sample(range(1, 5000), RAW_N*4)
with open('mydata.bin', 'wb') as f:
    f.write(struct.pack('1612i', *bytess))

def decode_and_prepare(register):
    register = tf.io.decode_raw(register, out_type=tf.float32)
    inputs = register[2:402]
    label = tf.random.uniform(()) + register[402:]
    return inputs, label

raw_dataset = tf.data.FixedLengthRecordDataset(filenames=['/content/mydata.bin']*7000,
                                               record_bytes=RAW_N*4)
raw_dataset = raw_dataset.map(decode_and_prepare)
Train the model without caching and without iterating beforehand:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 3ms/step - loss: 0.1425
Epoch 2/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41be037d0>
Train the model with caching but without iterating beforehand:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 2ms/step - loss: 0.1428
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41fa87810>
Train the model with caching and with iterating beforehand:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
_ = list(train_ds.as_numpy_iterator()) # iterate dataset beforehand
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 3s 3ms/step - loss: 0.1427
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41ac9c850>
Conclusion: Caching and the prior iteration of the dataset seem to have an effect on training time, but note that in this example only 7000 files were used.
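To make the effect measurable in isolation, here is a minimal, hypothetical timing sketch (slow_square and its sleep just stand in for expensive preprocessing); the second full pass should be much faster because it reads from the finalized cache:
import time
import tensorflow as tf

def slow_square(x):
    time.sleep(0.01)  # simulate expensive preprocessing
    return x * x

ds = tf.data.Dataset.range(100).map(
    lambda x: tf.py_function(slow_square, [x], tf.int64)
).cache()

for label in ("first pass (fills cache)", "second pass (reads cache)"):
    start = time.perf_counter()
    for _ in ds:
        pass
    print(f"{label}: {time.perf_counter() - start:.2f}s")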
Why does the error of my NN not converge to zero when my input reveals the result? I always set input[2] to the correct result, so the NN should set all weights to 0 except the one for that input.
from random import random
import numpy
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import keras

datax = []
datay = []
for i in range(100000):
    input = []
    for j in range(1000):
        input.append(random())
    yval = random()
    # the NN should find out that input[2] is always the correct output
    input[2] = yval
    datax.append(input)
    datay.append(yval)

datax = numpy.array(datax)
datay = numpy.array(datay)

model = Sequential()
model.add(Dense(10))
model.add(Dense(10))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam())
model.fit(datax, datay, epochs=100, batch_size=32, verbose=1)
It oscillates around 1e-05 but never gets much better than that:
Epoch 33/100
3125/3125 [==============================] - 4s 1ms/step - loss: 1.2802e-04
Epoch 34/100
3125/3125 [==============================] - 4s 1ms/step - loss: 3.7720e-05
Epoch 35/100
3125/3125 [==============================] - 4s 1ms/step - loss: 4.0858e-05
Epoch 36/100
3125/3125 [==============================] - 4s 1ms/step - loss: 8.5453e-05
Epoch 37/100
3125/3125 [==============================] - 5s 1ms/step - loss: 5.5722e-05
Epoch 38/100
3125/3125 [==============================] - 5s 1ms/step - loss: 3.6459e-05
Epoch 39/100
3125/3125 [==============================] - 5s 1ms/step - loss: 1.3339e-05
Epoch 40/100
3125/3125 [==============================] - 5s 1ms/step - loss: 5.8943e-05
...
Epoch 100/100
3125/3125 [==============================] - 4s 1ms/step - loss: 1.5929e-05
The step of the gradient descent method is the gradient multiplied by the learning rate, so in theory you cannot exactly reach the minimum of the loss function.
Try a decaying learning rate, though (decaying towards zero). If you are lucky, I think reaching the minimum could be possible because of the discrete nature of float types.
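A minimal sketch of that suggestion, reusing the model above (the schedule itself is a made-up example that halves the learning rate every 10 epochs, so the step size shrinks toward zero):
from tensorflow.keras.callbacks import LearningRateScheduler

decay_cb = LearningRateScheduler(lambda epoch: 1e-3 * 0.5 ** (epoch // 10))
model.fit(datax, datay, epochs=100, batch_size=32, verbose=1,
          callbacks=[decay_cb])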
I'm trying to train a product detection model with approximately 100,000 training images and 10,000 test images. However, no matter what optimizer I use in my model (I have tried Adam, and SGD with multiple learning rates), my loss and accuracy do not improve. Below is my code.
First I read the training images:
import os

import cv2
import numpy as np

train_images = []  # (assumed to be initialized earlier)
idx = 0            # (assumed to be initialized earlier)

for x in train_data.category.tolist():
    if x < 10:
        x = "0" + str(x)
        path = os.path.join(train_DATADIR, x)
    else:
        x = str(x)
        path = os.path.join(train_DATADIR, x)
    img_array = cv2.imread(os.path.join(path, str(train_data.filename[idx])), cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (100, 100))
    train_images.append(new_array)
    idx += 1
    print(f'{idx}/105392 - {(idx/105392)*100:.2f}%')

narray = np.array(train_images)
Then I save the train_images data to a binary file:
np.save(DIR_PATH + 'train_images_bitmap.npy', narray)
Then I divide train_images by 255.0:
train_images = train_images / 255.0
and declare my model with a 100x100 input, as the images are resized to 100x100:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(100, 100)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(42)
])
Then I compile the model. I tried Adam, and SGD with lr from 0.01 up to 0.2 and as low as 0.001:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
Next I fit the model, and I also set up a checkpoint callback:
model.fit(train_images, train_labels, epochs=2000)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='val_acc',
                                                 mode='max', save_best_only=True,
                                                 save_weights_only=True, verbose=1)
But the output over the epochs wasn't improving. How can I improve the loss and accuracy? Below is the output for the epochs:
Epoch 6/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0249
Epoch 7/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0248
Epoch 8/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0255
Epoch 9/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0251
Epoch 10/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0254
Epoch 11/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0254
Epoch 12/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0243
Epoch 13/2000
3294/3294 [==============================] - 12s 3ms/step - loss: 3.7210 - accuracy: 0.0238
Epoch 14/2000
3294/3294 [==============================] - 11s 3ms/step - loss: 3.7210 - accuracy: 0.0251
Epoch 15/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7209 - accuracy: 0.0253
Epoch 16/2000
3294/3294 [==============================] - 11s 3ms/step - loss: 3.7210 - accuracy: 0.0243
Epoch 17/2000
3294/3294 [==============================] - 12s 4ms/step - loss: 3.7210 - accuracy: 0.0247
Epoch 18/2000
3294/3294 [==============================] - 12s 3ms/step - loss: 3.7210 - accuracy: 0.0247
I don't think the choice of optimizer is the main problem. With only a little experience in the matter, I can only suggest some things:
For images, I would try using a 2D convolution layer before the dense layers.
Try adding a dropout layer to reduce the possibility of overfitting.
The first layer is 100*100, and a reduction to 128 is perhaps too aggressive (I don't know, but that's at least my intuition). Try increasing from 128 to a larger number, or even add an intermediate layer :)
Perhaps something like:
model = Sequential()
model.add(Conv2D(...))
model.add(MaxPooling2D(...))
model.add(Dropout(...))
model.add(Flatten(...))
model.add(Dense(...))
model.compile(...)
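For concreteness, here is one possible fleshed-out version of that sketch. All layer sizes are illustrative guesses, not tuned values, and it assumes the grayscale images get an explicit channel axis first (e.g. via np.expand_dims(train_images, -1)):
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    # 100x100 grayscale input, channel axis added to the data beforehand
    Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 1)),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(256, activation='relu'),   # larger than 128, per the advice above
    Dense(42)                        # 42 classes, logits as in the question
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])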