I tried the code below, but I don't think it is working very well. Any ideas?
The code:
import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = load_data_MEB()
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print('Test accuracy:', test_acc)
The project is the recognition of nanocrystals (present in cement) in SEM images.
We have a database of 1000 SEM pictures in which the crystals are visible, and we need a program in Python or any other programming language (a CNN, but ideally an FFNN) to recognize these crystals and eventually their shape (square or triangular).
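As a rough starting point for the SEM project, a small CNN sketch could look like the following. Note that everything here is an assumption rather than part of the original code: load_sem_patches() is a hypothetical loader, and the 64x64 patch size and the two shape classes would need to match the real data.
import tensorflow as tf

# load_sem_patches() is a hypothetical loader returning (N, 64, 64, 1) grayscale
# crops of single crystals and integer labels: 0 = square, 1 = triangular
(x_train, y_train), (x_test, y_test) = load_sem_patches()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))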
Related
I am simply using the MNIST dataset to implement a basic ML application. My code is:
import tensorflow as tf
import numpy as np
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
print('Before saving')
model.evaluate(x_test, y_test, verbose=2)
model.save('model.h5')
# load model again
loaded_model = tf.keras.models.load_model('model.h5')
# evaluate on the same data
print('After loading')
loaded_model.evaluate(x_test, y_test, verbose=2)
The accuracies on the same dataset are different before saving and after loading the model.
This is a known issue: https://github.com/tensorflow/tensorflow/issues/42045
Compile the model with metrics='sparse_categorical_accuracy' instead of just 'accuracy'.
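For example, the compile call from the question would then look something like this:
model.compile(optimizer='adam',
              loss=loss_fn,
              # naming the metric explicitly avoids the accuracy mismatch after load_model
              metrics=['sparse_categorical_accuracy'])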
Using Keras, I am trying to loop a training session 10 times with different splits of the data. However, after every loop my accuracy increases a lot, probably because the model doesn't reset and sees new data in new groups (data it trained on might appear in the test set in the next loop).
I expected model.fit to start over each time, as per an answer here saying it does so, but I can't get it to. I then tried K.clear_session() at the start of the loop, as per example 1 here, but it does nothing. I could save an untrained model the first time and reload it at the start of each loop, but this seems like bad practice. What can I do instead, or what am I doing wrong?
import tensorflow
from tensorflow import keras
from tensorflow.keras import backend as K

# inputs, outputs, the hp* hyper-parameters, callbacks_list and
# getTrainValAndTestSet() are defined elsewhere in the script
for i in range(0, 10):
    print("Starting loop " + str(i))
    K.clear_session()
    model = keras.Model(inputs=inputs, outputs=outputs, name="SchoolProject")
    model.compile(loss=tensorflow.keras.losses.binary_crossentropy,
                  optimizer=tensorflow.keras.optimizers.Adam(lr=hpInitialLearningRate),
                  metrics=['accuracy'])
    trainData, valData, testData, trainTruth, valTruth, testTruth = getTrainValAndTestSet()
    model.fit(trainData, trainTruth, epochs=hpEpochs, verbose=1,
              callbacks=callbacks_list, validation_data=(valData, valTruth))
    score = model.evaluate(testData, testTruth, verbose=1)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])
    testAccList.append(score[1])
    print("Ending loop " + str(i))
The easiest way would be to define your model inside the loop. Here's an example. You'll see that in every iteration the accuracy starts near random before improving.
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
for i in range(5):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=16, epochs=1, validation_split=0.1)
Resetting the weights manually is a little more complicated.
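A rough sketch of such a manual reset (assuming the model only contains standard layers with kernel and bias variables, e.g. Dense or Conv2D) could look like this:
def reset_weights(model):
    # re-run each built layer's own initializers on its kernel and bias variables
    for layer in model.layers:
        if not layer.built:
            continue
        if hasattr(layer, 'kernel_initializer') and layer.kernel is not None:
            layer.kernel.assign(layer.kernel_initializer(layer.kernel.shape))
        if getattr(layer, 'use_bias', False) and layer.bias is not None:
            layer.bias.assign(layer.bias_initializer(layer.bias.shape))
Alternatively, tf.keras.models.clone_model(model) returns a copy of the same architecture with freshly initialized weights, which you can then compile and train.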
Here is my code. I don't know why it gives me 0.3% accuracy.
Can anyone tell me what the problem with this code is?
import tensorflow as tf

def train_mnist():
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(x_train, y_train, epochs=5)
    return history.epoch, history.history['acc'][-1]

train_mnist()
Thanks in advance.
This will work! Try this:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
The problem seems to be your loss function.
Try this:
Method 1
You could use categorical_crossentropy as the loss, but then the last layer should be
tf.keras.layers.Dense(10,activation='softmax')
and then
model.compile(optimizer='adam',
              loss="categorical_crossentropy",
              metrics=["accuracy"])
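Note that categorical_crossentropy expects one-hot targets, so with the integer MNIST labels you would also need something like:
from tensorflow.keras.utils import to_categorical

# convert the integer labels (0-9) to one-hot vectors for categorical_crossentropy
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)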
Method 2
In your case, with sparse_categorical_crossentropy, the loss needs to be defined as
tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, name='sparse_categorical_crossentropy')
To understand the difference between these two, see this.
There is a lot already out there about saving models, but I'm struggling to work out how I can save my model only when it improves upon val_accuracy. My model looks like this:
model = keras.Sequential([
    keras.layers.Embedding(numberOfWords,
                           embedding_vector_length, input_length=1000),
    keras.layers.LSTM(128),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=200, batch_size=32,
          validation_data=(x_test, y_test))
During training, I want to save the model after the first epoch. Then, after every epoch, if val_accuracy has been improved upon, I want to overwrite the old model with the new one.
How do I do this?
You just have to define a callback list and pass it into the model.fit call: Keras_fit. In this example it saves only the best model so far, so it overwrites the old file, storing it in HDF5 format. Hope that solves your problem :)
from tensorflow.keras.callbacks import ModelCheckpoint

filepath = "weights.best.hdf5"
# with tf.keras 2.x and metrics=['accuracy'], the validation metric is named 'val_accuracy'
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1,
                             save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(x_train, y_train, epochs=200, callbacks=callbacks_list, batch_size=32,
          validation_data=(x_test, y_test))
I'm trying to find a way to visualize which numbers in the mnist dataset a model was able to correctly identify and which ones it wasn't.
What I can't find out is whether such a visualization is possible in TensorBoard, or whether I would need to use or create something else to achieve it.
I'm currently working from the basic tutorial provided for tensorflow 2.0 with tensorboard added.
import datetime
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x_train,
          y_train,
          epochs=5,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_callback])
model.evaluate(x_test, y_test)
It appears the What-If Tool is what I was looking for; it lets you visually sort the test data depending on whether it was correctly or incorrectly identified by the model.
If you want to test it out, here is their demo, and they have multiple other demos on the tool's site.
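As a simpler complement (not part of the original answer), you can also find the correctly and incorrectly classified test digits directly with NumPy, using the model trained above:
import numpy as np
import matplotlib.pyplot as plt

# predicted class = index of the highest softmax probability
preds = np.argmax(model.predict(x_test), axis=1)
wrong = np.where(preds != y_test)[0]
print(f"{len(wrong)} of {len(y_test)} test digits were misclassified")

# show the first few misclassified digits with true -> predicted labels
for i, idx in enumerate(wrong[:5]):
    plt.subplot(1, 5, i + 1)
    plt.imshow(x_test[idx], cmap='gray')
    plt.title(f"{y_test[idx]} -> {preds[idx]}")
    plt.axis('off')
plt.show()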