Predictions from a SavedModel (TensorFlow 2) - Python

I saved an image classifier that I trained on two different classes, and I want to classify a new image with it. Once I have my model loaded, what TF function do I call to return the softmax prediction of the final layer after feeding it an image?
Thank you

You should run model.predict(image_to_classify). If you just want the index of the predicted class rather than the probabilities, run np.argmax(model.predict(image_to_classify)).
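For context, a minimal end-to-end sketch; the model path, image file, and input size below are placeholders that depend on how the classifier was trained and saved:

import numpy as np
import tensorflow as tf

# Load the SavedModel (placeholder path).
model = tf.keras.models.load_model('path/to/saved_model')

# Preprocess the image the same way as during training; 224x224 RGB is only an assumption here.
img = tf.keras.preprocessing.image.load_img('new_image.jpg', target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
img = np.expand_dims(img, axis=0)  # predict() expects a batch dimension

probs = model.predict(img)          # softmax probabilities, shape (1, num_classes)
predicted_class = np.argmax(probs)  # index of the most likely class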

Related

If we expand or reduce the layers of the same model, can we still train from a pretrained model in PyTorch?

Suppose a pretrained model such as ResNet101 was trained on the ImageNet dataset, and then I change some layers inside it. Can I still use the pretrained weights on a different ABC dataset?
Let's say this is a ResNet34 model,
pretrained on ImageNet and saved as a ResNet.pt file.
If I changed some layers inside it, let's say by making it deeper by introducing some layers in conv4_x (check the image):
model = Resnet34()  # I have changed some layers inside this Resnet34()
optimizer = optim.Adam(model.parameters(), lr=0.00005)
model.load_state_dict(torch.load('Resnet.pt')['state_dict'])  # this is the pretrained checkpoint of the ResNet before the changes
optimizer.load_state_dict(torch.load('Resnet.pt')['optimizer'])
Can I do this, or is there another method?
You can do anything you like - the question is: would it be better than training from scratch?
Here are a few issues you might encounter:
1. A mismatch between the weights saved in ResNet.pt (the trained weights of the original ResNet34) and the state_dict of your modified model.
You would probably need to manually make sure the old weights are correctly assigned to the original layers and that only the new layers are left uninitialized (see the sketch after this list).
2. Initializing the weights of the new layers.
Since you are training a ResNet, you can take advantage of the residual connections and initialize the new layers' weights so that they initially make no contribution to the predicted value and simply pass the input through to the output via the residual link.
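A minimal sketch covering both points, assuming Resnet34 is the asker's modified class and the checkpoint layout from the question; strict=False and the new_block name are illustrative, not part of the original answer:

import torch
import torch.nn as nn
import torch.optim as optim

model = Resnet34()  # the modified architecture, assumed defined as in the question
checkpoint = torch.load('Resnet.pt')

# strict=False loads every weight whose name and shape still match the old architecture
# and skips the rest, so only the new layers remain randomly initialized
missing, unexpected = model.load_state_dict(checkpoint['state_dict'], strict=False)
print('left uninitialized:', missing)

# point 2: zero-init the final BatchNorm of each new residual block so the block
# initially contributes nothing and the residual link passes the input straight through
nn.init.zeros_(model.new_block.bn2.weight)  # 'new_block' is a hypothetical layer name

# rebuild the optimizer for the new parameter set instead of loading the old optimizer
# state, which no longer matches the modified model
optimizer = optim.Adam(model.parameters(), lr=0.00005)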

How to do transfer learning of a ResNet50 model with my own dataset?

I am trying to build a face verification system using Keras and a ResNet50 model with VGGFace weights. I am trying to achieve this through the following steps:
given two images, I first extract the faces using MTCNN and compute their embeddings
then I calculate the cosine distance between the two embedding vectors; the distance ranges from 0 to 1 (note that the lower the distance, the more likely the two faces belong to the same person) - a small sketch of this computation follows
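For reference, the distance computation as a sketch with placeholder embeddings:

import numpy as np
from scipy.spatial.distance import cosine

embedding_a = np.random.rand(2048)  # placeholder face embeddings
embedding_b = np.random.rand(2048)

# scipy's cosine() returns 1 - cosine similarity, so 0 means the embeddings point the same way
distance = cosine(embedding_a, embedding_b)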
Using the pre-trained ResNet50 model I get fairly good results. But since the model was trained mostly on European data, and I want face verification for the Indian subcontinent, I cannot rely on it. I want to train it on my own dataset. I have 10000 classes, with each class containing 2 images. With image augmentation I can create 10-15 images per class from those two images.
Here is the sample code I am using for training:
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input

base_model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3))
base_model.layers.pop()  # note: pop() does not change base_model.output; include_top=False already drops the classifier
base_model.summary()

# freeze the pretrained backbone
for layer in base_model.layers:
    layer.trainable = False

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)  # we add dense layers so that the model can learn more complex functions and classify for better results
x = Dense(1024, activation='relu')(x)  # dense layer 2
x = Dense(512, activation='relu')(x)   # dense layer 3
preds = Dense(8322, activation='softmax')(x)  # final layer with softmax activation

model = Model(inputs=base_model.input, outputs=preds)
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)  # included in our dependencies
train_generator = train_datagen.flow_from_directory(
    '/Users/imac/Desktop/Fayed/Facematching/testenv/facenet/Dataset/train',  # this is where you specify the path to the main data folder
    target_size=(224, 224),
    color_mode='rgb',
    batch_size=32,
    class_mode='categorical',
    shuffle=True)

step_size_train = train_generator.n // train_generator.batch_size  # integer number of steps per epoch
model.fit_generator(generator=train_generator,
                    steps_per_epoch=step_size_train,
                    epochs=10)
model.save('directory')
As far as the code is concerned, what I understand is that I disable the last layer, add 4 layers, train them, and save the model to a directory.
I then load the model using:
model = load_model('directory of my saved model')
model.summary()
yhat = model.predict(samples)
I predict the embeddings of two images and then calculate the cosine distance. But the problem is that the predictions get worse with my trained model. For two images of the same person, the pre-trained model gives a distance of 0.3, whereas my trained model shows a distance of 1.0. Although the training loss decreases and the accuracy increases with each epoch, that doesn't reflect in my prediction output. I want to improve on the pre-trained model's predictions.
How can I achieve that with my own data?
N.B.: I am relatively new to machine learning and don't know a lot about model layers.
What I would suggest is to go with a triplet or Siamese approach with this many classes. Use MTCNN to extract the faces, then use the FaceNet architecture to generate 512-dimensional embedding vectors, and visualize them using a t-SNE plot. Every face will be assigned a small embedding cluster. Go through this link for Keras code to generate face embeddings: Link.
Then try triplet semi-hard and hard loss on your dataset to cluster the faces into 10000 classes. It might help. Go through this detailed blog on triplet loss: Triplets. Some repositories with code to go through: Code.
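For illustration, a minimal sketch of the triplet approach using TensorFlow Addons' TripletSemiHardLoss; the backbone, embedding size, and margin are placeholder choices, not part of the original answer:

import tensorflow as tf
import tensorflow_addons as tfa

# Any backbone ending in a pooled feature vector works; ResNet50 and 512 dims are assumptions.
backbone = tf.keras.applications.ResNet50(include_top=False, pooling='avg',
                                          input_shape=(224, 224, 3))
embeddings = tf.keras.layers.Dense(512, activation=None)(backbone.output)
embeddings = tf.keras.layers.Lambda(
    lambda x: tf.math.l2_normalize(x, axis=1))(embeddings)  # triplet loss expects L2-normalized embeddings
model = tf.keras.Model(backbone.input, embeddings)

# TripletSemiHardLoss mines semi-hard triplets within each batch, so labels are plain
# integer class ids and no explicit (anchor, positive, negative) construction is needed.
model.compile(optimizer='adam', loss=tfa.losses.TripletSemiHardLoss(margin=0.2))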

Keras: Extract features from siamese network for a single input

I have a Siamese network trained as a Model in Keras, like this. This model expects two inputs, calculates the distance between them, and generates a similarity score. But now I only want to extract the features and use some other technique to find similar images. Since a Siamese network is basically one single (shared) CNN through which both images are passed, and I no longer need to calculate the similarity between two images, can I pass a single image at a time and get the features from the aforementioned trained CNN?
I tried intermediate_layer_model = Model(inputs=model.input[0], outputs=model.get_layer(layer_name).output) as mentioned here, but it throws a ValueError because the graph expects 2 inputs.
Here is a screenshot of my model.layers:
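One way this is commonly resolved, sketched under the assumption that the shared CNN sits inside the Siamese model as a nested Model layer; the file path, layer name, and image shape below are hypothetical:

import numpy as np
from keras.models import load_model

siamese = load_model('siamese_model.h5')  # placeholder path; custom distance layers may need custom_objects

# In a typical Siamese setup the shared CNN is a single nested Model that both inputs
# pass through; check siamese.summary() for its actual name.
shared_cnn = siamese.get_layer('sequential_1')  # hypothetical layer name

single_image = np.zeros((1, 105, 105, 1))  # one image with a batch dimension; shape is a placeholder
features = shared_cnn.predict(single_image)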

Keras Conv2D output dimension. Multiclass instance segmentation

I'm trying to create a Keras instance segmentation CNN model with a U-Net architecture.
The Keras CNN model I want to use/modify is: model.py
The CNN model should be able to detect 3 different object classes: (main root, secondary root, stem).
I've converted the annotation polygons (3 classes) to bitmap masks with a modified version of the balloon.py file from the samples folder in GitHub: Mask RCNN Instance segmentation package
My annotation-to-bitmap-mask file: annotation.py
Visualization of the bitmaps of the 3 classes:
I want a final Conv2D output of 3 feature maps (my classes), a final softmax activation, and a categorical_crossentropy loss. I don't know how to "compare" my annotations to the model's predictions. Do I have to use some kind of keras.losses.categorical_crossentropy(y_true, y_pred) function after the last Conv2D layer (conv9) in my model.py file?
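To make that concrete, a minimal runnable sketch of the tail end of such a U-Net; the input shape and the stand-in conv9 block are placeholders for the real layers in model.py:

from keras.layers import Input, Conv2D
from keras.models import Model

# Placeholder for the last decoder feature map from model.py; in the real model this would be conv9.
inputs = Input((256, 256, 1))
conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)

# One feature map per class with a pixel-wise softmax over the 3 classes.
outputs = Conv2D(3, (1, 1), activation='softmax')(conv9)

model = Model(inputs, outputs)
# categorical_crossentropy compares the one-hot (H, W, 3) masks passed to fit() against the
# softmax output pixel by pixel; no manual comparison after the last layer is needed.
model.compile(optimizer='adam', loss='categorical_crossentropy')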

How to do prediction using a saved model that was trained with the Estimator and Dataset API?

I have trained a CNN model using tf.estimator and tf.data.TFRecordDataset, defining the model in a model_fn function and the input in an input_fn function, and using a one-shot iterator to get one batch of examples at a time.
Now I have the trained model files (ckpt, meta, index) in a directory. What I want to do is predict an image's label with the trained model, without training and evaluating again. The image can be a numpy array, but not a TFRecords file (which is what was used during training).
I can't find an effective solution after trying all day. I can only get the values of the weights and biases, and I don't know how to make my prediction image compatible with the model.
FYI, my training code is here.
A similar question is Prediction from model saved with tf.estimator.Estimator in Tensorflow, but it has no accepted answer, and my model input uses the Dataset API.
So I really need help. Thanks.
I have answered a similar question here.
To make predictions using a custom input, you need to use the built-in predict method of Estimators:
estimator = tf.estimator.Estimator(model_fn, ...)
predict_input_fn = ...  # define this using tf.data
predict_results = estimator.predict(predict_input_fn)
for idx, prediction in enumerate(predict_results):
    print(idx)
    for key in prediction:
        print("...{}: {}".format(key, prediction[key]))
