Why are some TensorFlow Hub models not fine-tunable? - python

I am just learning image classification with TensorFlow and found that there is a TensorFlow Hub where we can use a lot of models for a lot of classification tasks. For example, I want to build a food classifier and develop the model so that it covers foods in my country and has higher accuracy on some specific foods. I tried to use and fine-tune this model: https://tfhub.dev/google/aiy/vision/classifier/food_V1/1, but why does it say that the model is not fine-tunable?
What makes a model fine-tunable or not fine-tunable?
Thank you.

The publisher/creator of the model makes the decision on whether the model is fine-tunable or not. Making a model fine-tunable requires the model creator to make sure that the TF computation graph supports fine-tuning. For example, if the model contains dropout or batch normalization, the computation graphs for fine-tuning and for inference-only are different. The publisher/creator has to make sure that the model is exported correctly to support both of these cases. Sometimes publishers choose not to go through these steps and mark the model as non fine-tunable.
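For illustration, here is a minimal sketch of what fine-tuning a Hub model looks like when the publisher does support it; the MobileNetV2 feature-vector URL and the 10-class head are just assumptions, not part of your setup. For a non fine-tunable model like the food classifier above, you would keep trainable=False and only train the layers you add on top.

import tensorflow as tf
import tensorflow_hub as hub

# A model exported with a training graph can be loaded with trainable=True.
feature_layer = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    trainable=True,  # only meaningful if the publisher exported fine-tuning support
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    feature_layer,
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical head: 10 food classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])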

Related

How to use a machine learning model that evaluates U-Net segmentation (or any other model's) predictions?

I'm currently using a U-Net model with different backbones (ResNet, MobileNet, etc.) to segment CT scan images.
After I train the models and get the prediction masks, I need another model that will take the test-set predictions and divide them into two groups: good predictions and bad predictions. The images below show two examples, where one is considered a good prediction and the other a bad prediction.
Is there any pre-trained model that I can use directly? If not, what kind of model should I use?
I just want to mention that I'm new to this stuff.

How to train a pre-trained model (MNIST) in TensorFlow

I have a project with Fashion MNIST, which predicts clothes from uploaded images, and I want to make some improvements to it. Is it possible to modify my project so that it trains automatically after each upload and prediction?
You can train your model manually by using the transfer learning technique (transfer learning is a method of reusing an already trained model for another task):
1. Instantiate a base model and load pre-trained weights into it.
2. Freeze all layers in the base model by setting trainable = False.
3. Create a new model on top of the output of one (or several) layers from the base model.
4. Train your new model on your new dataset.
Please refer to this gist for a working code example. Thank you.
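A minimal Keras sketch of those steps, assuming a MobileNetV2 base and a hypothetical 10-class dataset (Fashion MNIST images would first need to be resized and converted to 3 channels):

import tensorflow as tf

# 1. Instantiate a base model and load pre-trained weights into it.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

# 2. Freeze all layers in the base model.
base_model.trainable = False

# 3. Create a new model on top of the base model's output.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)  # keep batch norm in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # 10 clothing classes
model = tf.keras.Model(inputs, outputs)

# 4. Train the new head on your new dataset.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds / val_ds: your data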

Can I train my pretrained model with a totally different architecture?

I have trained a pretrained ResNet18 model with my custom dataset in PyTorch and wondered whether I could transfer my model file to train another one with a different architecture, e.g. ResNet50. I know I have to save my model accordingly (explained well in another post here), but this is a question I have never thought about before.
I was planning to use more advanced models like VisionTransformers (ViT) but I couldn't figure out whether I had to start with a pretrained ViT already or I could just take my previous model file and use it as the pretrained model to train a ViT.
Example Scenario: ResNet18 --> ResNet50 --> Inception v3 --> ViT
My best guess is that it's not possible due to the different numbers of weights, neurons and layer structures, but I would love to hear if I'm missing a crucial point here. Thanks!
Between models that only differ in the number of layers (ResNet-18 and ResNet-50), it has been done to initialize some layers of the larger model from the weights of the smaller model's layers. Conversely, you can truncate a larger model by taking a subset of regularly spaced layers and initialize a smaller model. In both cases, you need to retrain everything at the end if you hope to achieve semi-decent performance.
The whole point of using architectures that vastly differ (vision transformers vs CNNs) is to learn different features from the inputs and unlock new levels of semantic understanding. Recent models like BeiT also use new self-supervised training schemes that have nothing to do with the classic ImageNet pretraining. Using trained weights from another model would go against the point.
Having said that, if you want to use a ViT, why not start from the pretrained weights available on HuggingFace and fine-tune it on the data you used to train your ResNet50?
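As a rough sketch of that suggestion (the checkpoint name and number of labels are assumptions, not something taken from your setup):

from transformers import ViTForImageClassification, ViTImageProcessor

# Start from publicly available pretrained ViT weights rather than your ResNet checkpoint.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",  # example pretrained checkpoint
    num_labels=10,                        # hypothetical: number of classes in your custom dataset
)
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Fine-tune on the dataset you used for the ResNet, e.g. with the Trainer API or a
# standard PyTorch training loop over processor(images, return_tensors="pt").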

Training a new dataset on bert

I am new to BERT
I have an Amazon review dataset, where I want to predict the star rating based on the review.
I know I can use a pretrained bert model as shown here
But I want to train the BERT model on my own dataset. Is that what's being done here? And can I apply this type of 'fine-tuning' on a pretrained model with any dataset to get more accurate results, or do I have to do something else to train the model from scratch?
And if I do want to train a model from scratch, where would I start?
First of all, what is pre-training? This procedure helps the model learn syntactic <==> semantic (it's a spectrum) features of the language using an enormous amount of raw text (~40 GB) and processing power. Objective function: causal language modeling and masked language modeling.
What about fine-tuning a pre-trained model? Suppose there is a model which has knowledge about the general aspects of the English language (POS, dependency trees, subj ... a little of everything). Fine-tuning helps us direct the model's focus onto the most important features in our dataset; let's say in your dataset some syntactic feature is the game-changer, and the model should be careful about it!
Objective function: based on the downstream task.
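A minimal sketch of that fine-tuning for your star-rating task, assuming the HuggingFace transformers library (the checkpoint name and the example review are illustrative):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)  # 5 classes for 1-5 star ratings

# One illustrative training example; in practice, iterate over a DataLoader.
inputs = tokenizer("Great product, arrived on time!", return_tensors="pt")
labels = torch.tensor([4])              # 0-indexed label for a 5-star review

outputs = model(**inputs, labels=labels)
loss = outputs.loss                     # cross-entropy on the downstream objective
loss.backward()                         # an optimizer.step() would follow in a real loop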
Training from scratch isn't feasible for most of us, but there is an approach to continue the pre-training phase using your own (task-specific) corpus/corpora without damaging the model's knowledge (hopefully)!
Objective function: causal language modeling and masked language modeling.
Here is an article about this approach and its effectiveness, and you can take inspiration from SciBERT and COVID-BERT. As you'd expect, they use pre-trained BERT as a starting point and continue pre-training on a domain-specific corpus!
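A minimal sketch of that continued pre-training (domain-adaptive masked-LM) with transformers; the corpus file name and hyperparameters are assumptions:

from datasets import load_dataset
from transformers import (BertForMaskedLM, BertTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical raw-text corpus of Amazon reviews, one review per line.
dataset = load_dataset("text", data_files={"train": "reviews.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-amazon-reviews",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset, data_collator=collator)
trainer.train()  # continues pre-training with the masked-LM objective on your corpus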

Pretrained CNN (TensorFlow/Darknet/Caffe) weights for human/vehicle detection only

I am making use of TensorFlow's pretrained weights from the TensorFlow detection model zoo, which are primarily trained on the COCO dataset, covering about 80 different classes including humans. Making use of these models therefore comes with a higher computational trade-off, so are there any publicly available pre-trained weights that focus on only one class, in this case either humans or vehicles (cars)?
If there are no such models available, then how is it possible to fine-tune or customize existing models like
"ssd_inception_v2_coco_2018_01_28", which performs pretty well with an mAP of 32 and is computationally efficient as well? How can such models be utilized to detect only humans but not any other objects?
Have you tried the model person-vehicle-bike-detection-crossroad-1016?
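If you would rather keep a COCO-trained detector from the model zoo, one common workaround is simply to discard every detection that isn't the class you care about. A minimal sketch under the assumption of the standard COCO label map (where "person" is class id 1); note this only cleans up the output, it does not reduce compute, so for real speed-ups you would fine-tune or use a single-class model such as the one suggested above.

import numpy as np

def keep_persons(classes, scores, boxes, score_threshold=0.5, person_id=1):
    """Keep only person detections above the score threshold."""
    classes = np.asarray(classes).astype(int)
    scores = np.asarray(scores)
    boxes = np.asarray(boxes)
    mask = (classes == person_id) & (scores >= score_threshold)
    return classes[mask], scores[mask], boxes[mask]

# Example with three detections (person, car, person); only the confident person is kept.
classes, scores, boxes = keep_persons(
    classes=[1, 3, 1],
    scores=[0.9, 0.8, 0.3],
    boxes=[[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.6, 0.6], [0.0, 0.0, 0.1, 0.1]],
)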
