Accuracy and validation accuracy of pretrained model do not change - python

I have 600,000 images that I want to classify using Keras. I am trying pretrained models on greyscale images, reusing the architectures of pre-trained networks like ResNet50, InceptionV3, etc. But the accuracy and validation accuracy of the model do not change; they are stuck at 67%. I tried changing the network, running more epochs, and swapping in a different pretrained model, but I always get the same result: 67% accuracy and validation accuracy. I don't understand why I keep getting the same result. Please recommend some ideas on how I can solve this problem. This is my code. Here steps_per_epoch = number of images / batch size, and the batch size is 128. There are 479,369 images in the training dataset and 136,962 in the validation dataset. This is the output of the code.

I think you are using a pre-trained model, which is why it keeps showing the same accuracy. My suggestion is to set the pre-trained model aside, try a custom model, and then see whether the results change.
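As a starting point, here is a minimal transfer-learning sketch, assuming TensorFlow 2.x Keras; num_classes and the dummy arrays are placeholders, not values from the question. One thing worth checking first: if 67% happens to be the majority-class frequency in your data, a stuck accuracy usually means the model is predicting a single class for everything.

import numpy as np
from tensorflow import keras

num_classes = 2                      # placeholder: set to your class count
# ImageNet backbones expect 3-channel input; if your files are greyscale, load
# them with color_mode="rgb" (keras.utils.image_dataset_from_directory) so the
# single channel is replicated. Dummy data stands in for the real dataset here.
x = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, num_classes, 8)

base = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False               # freeze the pretrained backbone first

inputs = keras.Input(shape=(224, 224, 3))
h = keras.applications.resnet50.preprocess_input(inputs)  # match ImageNet preprocessing
outputs = keras.layers.Dense(num_classes, activation="softmax")(base(h, training=False))
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, batch_size=4, epochs=1)

Skipping the backbone's preprocess_input, or feeding single-channel tensors, are both common causes of a training curve that never moves.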

Related

Does it make sense to train a pre-trained architecture (ResNet) with specific images to further train and evaluate with my own specific imagery?

I was wondering whether it is useful to train a pre-trained ResNet (pre-trained on ImageNet) with images that are closer to my classification problem. I want to use 50,000 labeled images of trees from a paper to update the weights of the pre-trained ResNet. Then I would like to use these weights to re-train and evaluate the ResNet, hopefully better fitted this way, on my own set of tree images.
I have already used the pre-trained ResNet on my own images with moderate success. Given the small dataset size (~5,000 images), I thought it might be smart to further train the pre-trained ResNet on more similar data first.
Any suggestions or experiences you want to share?
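For concreteness, this is a sketch of the two-stage fine-tuning the question describes, not an established recipe. It assumes TensorFlow 2.x Keras; the class counts and the dummy arrays standing in for the 50k paper images and the ~5k own images are all hypothetical.

import numpy as np
from tensorflow import keras

paper_classes, my_classes = 20, 5    # hypothetical class counts
x_paper = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
y_paper = np.random.randint(0, paper_classes, 8)
x_own = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
y_own = np.random.randint(0, my_classes, 8)

base = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
inputs = keras.Input(shape=(224, 224, 3))
features = base(keras.applications.resnet50.preprocess_input(inputs))

# Stage 1: adapt the ImageNet backbone to the intermediate tree dataset.
stage1 = keras.Model(inputs, keras.layers.Dense(paper_classes, activation="softmax")(features))
stage1.compile(optimizer=keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
stage1.fit(x_paper, y_paper, epochs=1)

# Stage 2: keep the adapted backbone, attach a fresh head for the target task,
# and fine-tune on the small own dataset with a lower learning rate.
stage2 = keras.Model(inputs, keras.layers.Dense(my_classes, activation="softmax")(features))
stage2.compile(optimizer=keras.optimizers.Adam(1e-5),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
stage2.fit(x_own, y_own, epochs=1)

The lower learning rate in stage 2 is deliberate: it keeps the small target set from overwriting the features learned on the larger, similar dataset.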

Get predictions from Keras/Tensorflow once model is trained

I am working on a project involving neural machine translation (translating English to French).
I have worked through some examples online, and have now finished the model. Once a model is trained using Keras, how do I then get a prediction of a translation without training the entire model again, because with the large dataset I am using, each epoch takes some time and of course, I can't train the model every time I want a translation.
So what is the correct way of then generating predictions on new inputs without training the whole model again?
Thanks
You need to save your model and its weights when the fit ends, using:
model.save(model_name)
At any time, you can load your trained model back with
model = keras.models.load_model(model_name)
and then perform predictions as
y_pred = model.predict(x_test)
Hope this will be helpful
You can use the .predict() function: pass your new inputs into it and it gives you predictions back. The docs for this function are in the Keras documentation.
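Putting the two answers together, here is a minimal end-to-end sketch, assuming TensorFlow 2.x Keras; the toy arrays and the tiny Dense network are placeholders standing in for the translation model and data.

import numpy as np
from tensorflow import keras

# Hypothetical toy data standing in for the real features/labels.
x_train = np.random.rand(100, 8)
y_train = np.random.rand(100, 1)

model = keras.Sequential([keras.layers.Dense(16, activation="relu", input_shape=(8,)),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=2, verbose=0)

model.save("my_model.h5")                          # persist architecture + weights

restored = keras.models.load_model("my_model.h5")  # later, in a fresh session
y_pred = restored.predict(np.random.rand(5, 8))    # predict without retraining
print(y_pred.shape)

The saved file contains both the architecture and the weights, so the fresh session needs no access to the training code or data.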

Model accuracy is not changing from 0.5% (0.0050)

I am training a CNN model for image classification using Keras. I am using the VGG19 model and a custom dataset with 200 classes, uniformly distributed across 90,000 training images, 10,000 validation images and 10,000 test images. Even at 200 epochs of training, the accuracy stays at a constant 0.0050, and the loss at 5.2988. I am using Kaggle's TPU instance to run this model.
How can I make the model more accurate? Can you suggest any different pretrained models for this purpose?
Your CNN model is behaving like a random model.
I know this because, with 200 classes, the probability of getting the correct class at random is 1/200 = 0.0050, which is exactly the accuracy you have. The loss agrees: the cross-entropy of a uniform guess over 200 classes is ln(200) ≈ 5.2983, essentially your 5.2988.
Since you are using VGG19 for transfer learning, one likely culprit is that you have frozen the wrong layers. This is easy to get wrong when you build the model with the Keras functional API instead of Sequential().
If you are using the functional API, you have to do
model = Model(inputs=input_layer, outputs=output_layer)  # not required with Sequential()
print(model.layers)  # works for both the functional API and Sequential(); use it to check your layers
Then you freeze the required layer with
model.layers[index_of_layer_to_freeze].trainable = False
If you are not freezing any layers, then try a lower learning rate, since VGG19 is very sensitive to the learning rate (0.00001 or less, depending on the setup).
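As a concrete illustration of that advice, here is a minimal sketch of freezing a VGG19 backbone and training only a new 200-class head. It assumes TensorFlow 2.x Keras; train_ds and val_ds are hypothetical placeholders.

from tensorflow import keras

base = keras.applications.VGG19(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3), pooling="avg")
for layer in base.layers:
    layer.trainable = False          # freeze the whole pretrained backbone

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.vgg19.preprocess_input(inputs)
outputs = keras.layers.Dense(200, activation="softmax")(base(x))
model = keras.Model(inputs, outputs)

print(model.layers)                  # sanity-check which layers you actually have

# Low learning rate, as suggested above, since VGG19 is sensitive to it.
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: placeholders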

Why does validation accuracy remain at 75% while train accuracy is 100%?

I used my own dataset to train a model using the retrain.py file from the TensorFlow site. However, with my first set of images, I am seeing test accuracy of 100% while validation accuracy is at 70%. I see that the validation cross-entropy is increasing, which indicates overfitting. I am new to this field and got to this stage by following online tutorials.
I have not enabled random brightness, crop and flip for training yet. I am trying to understand why this behaviour occurs. I tried the flower example and it worked as expected: cross-entropy reached its lowest point instead of increasing as it does with my dataset.
Could someone explain what's going on inside the CNN here?
Your model has over-fitted on the training data. If it's a large model, you should consider using transfer learning, where the model is first trained on a large dataset like ImageNet and then fine-tuned on your data. You can also try adding some form of regularization to prevent overfitting, especially Dropout and L2 regularization.
This simply means your model is overfitting. Overfitting means your model is not generalizing well to unseen data (i.e. the validation data). What you can do is add some form of regularization (L2 is normally used). This penalizes weights that take on very large values, which would otherwise lead to overfitting. It also discourages the model from fitting outliers, which again causes less generalization and more overfitting.
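For reference, a minimal sketch of adding Dropout and L2 regularization to a Keras model, assuming TensorFlow 2.x; the layer sizes and input shape are arbitrary placeholders.

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    # L2 penalizes large weights; the factor 1e-4 is a typical starting point.
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),   # randomly drops half the activations during training
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])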

How to Improve the model accuracy in face detection using MTCNN with tensorflow

I am doing face detection using TensorFlow with MTCNN. I got face detection working and can count the number of detected faces, but the detector misses some of the faces.
How can I resolve that, and how can I improve the model's accuracy or confidence score?
I am wondering whether I can change the hyperparameters, e.g. the learning rate, to improve the model's accuracy,
or whether I need to train again from scratch with different hyperparameters.
Can I train this model on another dataset? Is that going to increase the model accuracy?
Thanks in advance
There are a few ways that I am aware of to improve your model's accuracy:
First, you need to identify whether you are lacking accuracy on your training set or on your test set. If you are getting low accuracy on your training set, then you should change the hyperparameters, add more layers to your model, or perhaps change your data (if it is all noise, the accuracy you are already getting may be the best achievable).
Second, if you are lacking accuracy on your test set, then try adding more data; you are probably overfitting your model.
NOTE: You should always fine-tune your model, no matter what the data is.
Hope this helps :)
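Separately from retraining, missed faces can sometimes be recovered just by relaxing the detector's settings. Below is a sketch assuming the ipazc/mtcnn Python package (pip install mtcnn, the 0.1.x API); that package choice is an assumption, since the question does not say which MTCNN implementation is in use, and the image is a stand-in.

import numpy as np
from mtcnn import MTCNN

# Lower the three cascade-stage thresholds (defaults are roughly [0.6, 0.7, 0.7])
# and the minimum face size, so smaller / lower-confidence faces are kept.
detector = MTCNN(min_face_size=10, steps_threshold=[0.5, 0.6, 0.6])

img = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in; use your RGB image here
faces = detector.detect_faces(img)
for f in faces:
    print(f["box"], round(f["confidence"], 3))  # bounding box and confidence score

Lower thresholds trade precision for recall: you will find more faces but also more false positives, so tune them on a held-out set of your own images.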
