Faster RCNN transfer learning Keras - python

I have implemented a Faster R-CNN in Keras on my own custom dataset, following this very useful guide:
https://medium.com/analytics-vidhya/a-practical-implementation-of-the-faster-r-cnn-algorithm-for-object-detection-part-2-with-cac45dada619
I would like to ask about some details of that implementation, for those of you who have already had a similar experience with it:
Once the code is running, it prints: "Could not load pretrained model weights. Weights can be found in the keras application folder https://github.com/fchollet/keras/tree/master/keras/applications". Does this mean that I am not exploiting transfer learning?
Do you know anything about this?
Thanks in advance
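
In case it helps: that warning usually means the script could not find the base network's ImageNet weights file locally, so the base network starts from random weights and no transfer learning actually happens. Below is a minimal sketch, assuming the guide's training script looks for the standard VGG16 weights file by name (the filename and the exact path handling are assumptions about that particular repo):

```python
# Hedged sketch: fetch the ImageNet VGG16 weights through Keras and save them
# under the filename the Faster R-CNN training script is assumed to look for.
from tensorflow.keras.applications import VGG16

# Keras downloads and caches the ImageNet weights on first use (~/.keras/models/)
vgg = VGG16(weights="imagenet", include_top=True)

# Assumed filename; check the weight path your training script actually uses
vgg.save_weights("vgg16_weights_tf_dim_ordering_tf_kernels.h5")
```

Once the script can load that file without the warning, the VGG base layers start from the pretrained ImageNet weights, which is the transfer-learning part of the setup.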

Related

How to train GANs properly

I started experimenting with GANs, and on the internet there are a lot of options. The thing I worry about now is which one I should use, Keras fit or Keras train_on_batch: which one is the proper way to train the model? Thanks.
According to TensorFlow's documentation, the combination of tf.data and model.train_on_batch() is the fastest way to train a model.
You can check more about tf.data in the link below:
https://www.tensorflow.org/guide/data
I have already implemented GANs with Keras. My implementations might help you get an idea of how to train those models:
https://github.com/kochlisGit/Keras-GAN
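
As an illustration of the train_on_batch approach, here is a minimal sketch of a typical GAN training step; the generator/discriminator architectures and hyperparameters are placeholders, not taken from the repository above:

```python
import numpy as np
from tensorflow import keras

latent_dim = 100  # size of the generator's noise input (placeholder value)

# Placeholder architectures: swap in your own generator/discriminator
generator = keras.Sequential([
    keras.layers.Dense(784, activation="tanh", input_shape=(latent_dim,)),
])
discriminator = keras.Sequential([
    keras.layers.Dense(1, activation="sigmoid", input_shape=(784,)),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: the discriminator is frozen while the generator is trained
discriminator.trainable = False
gan_input = keras.Input(shape=(latent_dim,))
gan = keras.Model(gan_input, discriminator(generator(gan_input)))
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    # 1) Train the discriminator on a real batch and a generated (fake) batch
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake_images = generator.predict_on_batch(noise)
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # 2) Train the generator through the combined model with "real" labels
    noise = np.random.normal(size=(batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

Batches of real_images can then come straight from a tf.data.Dataset iterator, which is what the tf.data guide linked above covers.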

How to compile a model based on keras.applications (for transfer learning) for the Google AIY Vision Kit

I've been trying for 2 years (not continuously; moving to other things and then going back) to compile and run a keras.applications-based model on the Vision Kit. I have unsuccessfully tried lots of approaches (I've even forgotten some), and asked questions in forums, in the official project repo, on Stack Overflow, etc., with no luck.
For example:
https://github.com/google/aiyprojects-raspbian/issues/698,
https://github.com/google/aiyprojects-raspbian/issues/518,
How to remove layers from a keras model in order to use as baseline for creating another model
https://github.com/keras-team/keras/issues/6464#issuecomment-670829431
After asking many different questions, I thought that posting my use case directly and asking about exactly that might be more promising:
I need to compile and run a keras.applications-based model on the AIY Vision Kit. I know the device is limited, so I'm trying to use a model that the documentation says is supported on the Vision Kit: MobileNetV2. I'm doing transfer learning by freezing some layers of keras.applications.MobileNetV2, removing others, and then adding custom trainable layers (a sketch of this setup is included after this question). For testing I'm also trying VGG16. I've had many issues in the past, but the most recent ones are:
Even when using just the first layers of the pre-trained model and discarding the rest (the exported .pb file is small, around 2.5 MB), I get (for VGG16; this happens on my computer at compilation time, not on the Raspberry Pi device): Not enough on-device memory to run model.
For MobileNetV2, even though the documentation says it is supported, I get:
Check failed: other_op->type == OperatorType::kTensorFlowMerge
Any suggestions for my case? Or is it simply impossible to run a keras.applications-based model on the Vision Kit? If it is impossible, is it possible to combine the TensorFlow for Poets MobileNet .pb file with Keras output layers and compile that one?
I would really appreciate some help with this, or at least a definitive "no, it's not possible" so I don't keep pursuing something that just isn't possible.
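
Not an answer to the compiler errors, but for reference, here is a minimal sketch of the transfer-learning setup described in the question (truncating keras.applications.MobileNetV2, freezing the kept layers, and adding custom trainable layers); the cut-point layer name and the classification head are assumptions:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

# Pretrained backbone; the input size must match what the device pipeline feeds it
base = MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights="imagenet")

# Truncate the backbone at an intermediate layer (this layer name is an assumption;
# run base.summary() to pick a valid cut point for your Keras version)
cut = base.get_layer("block_13_expand_relu").output
backbone = Model(inputs=base.input, outputs=cut)
backbone.trainable = False  # freeze the kept pretrained layers

# Add custom trainable layers on top
x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(5, activation="softmax")(x)  # 5 classes, as an example

model = Model(inputs=backbone.input, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```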

Brief explanation of the TensorFlow object detection working mechanism

I've searched Google for the working mechanism of TensorFlow object detection, and for how TensorFlow trains models with a dataset. The results suggest how to implement it rather than explaining how it works.
Can anyone explain how a dataset is fitted into a model during training?
You can't "simply" understand how TensorFlow works without a good background in Artificial Intelligence and Machine Learning.
I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.

How to use a trained neural network in a different platform/technology?

Say I have trained a simple neural network using TensorFlow and Python on my laptop, and I want to use this model on my phone in a C++ app.
Is there any compatibility format I can use? What is the minimal framework needed to run neural networks (not to train them)?
Update: I'm also interested in TensorFlow-to-non-TensorFlow compatibility. Do I need to build it from scratch, or are there any best practices?
Yes, if you are using iOS or Android. Depending on your specific needs, you have a choice between TensorFlow for Mobile and TensorFlow Lite:
https://www.tensorflow.org/mobile/
In particular, to load pre-trained models
https://www.tensorflow.org/mobile/prepare_models
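
Those pages describe the older TensorFlow Mobile workflow; with current TensorFlow the usual route is to convert the trained model to a .tflite file and run it with the TensorFlow Lite interpreter (which has a C++ API) on the phone. A minimal conversion sketch, with placeholder file names:

```python
import tensorflow as tf

# Load the trained Keras model (the file name is a placeholder)
model = tf.keras.models.load_model("my_model.h5")

# Convert it to the TensorFlow Lite flat-buffer format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# This .tflite file can be loaded by the TFLite interpreter on Android/iOS (Java, Swift, C++)
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```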
Technically you don't need a framework at all. A conventional fully connected neural network is simple enough that you can implement it in straight C++. It's about 100 lines of code for the matrix multiplication and a dozen or so for the non-linear activation function.
The biggest part is figuring out how to parse a serialized TensorFlow model, especially given that there are quite a few ways to serialize one. You will probably want to freeze your TensorFlow model first; this bakes the weights from the latest training checkpoint into the graph as constants.
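
To illustrate how small that from-scratch forward pass is, here is a minimal NumPy sketch of a fully connected network; the same matmul-plus-activation logic ports directly to C++ once the weights have been exported:

```python
import numpy as np

def relu(x):
    # Non-linear activation: a handful of lines in any language
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Forward pass of a fully connected network: alternating matmul + activation."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    # Final layer without the non-linearity (apply softmax/sigmoid as needed)
    return x @ weights[-1] + biases[-1]

# Toy example with random weights for a 4 -> 8 -> 2 network
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
biases = [np.zeros(8), np.zeros(2)]
print(forward(rng.normal(size=(1, 4)), weights, biases))
```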

Transfer learning with the Inception model in TensorFlow (Python)

How can I load a .pb protobuf model and then tweak the network as needed (especially the outer layers) in order to train a new model for completely different classes, effectively doing transfer learning?
I want to do things like training the outer layers with a bigger learning rate than the inner layers, among other things, so I need a way not only to load the graph with the variables, but to alter the network's structure and hyperparameters too.
If anyone has an example to follow with the inception model, it would be amazing!
My question is very similar to this one.
I've searched all over the internet (TF docs, GitHub, Stack Overflow, Google...) but I can't seem to find something useful for a novice.
Thanks a lot!
This is the updated tutorial from the official TensorFlow website: https://www.tensorflow.org/hub/tutorials/image_retraining
They use the pre-trained Inception V3 model and everything works fine. You can change the dataset folder to point to your own dataset.
tf.import_graph_def() is the function for loading a GraphDef:
https://www.tensorflow.org/versions/0.6.0/api_docs/python/framework.html#import_graph_def
Once imported, you can hopefully make the modifications to the graph that you need. It would be easier, though, to modify the Python code that generated the graph in the first place, if you have access to it.
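
For reference, a minimal sketch of loading a frozen .pb with tf.import_graph_def() and attaching new layers for transfer learning; the file name is a placeholder, and the bottleneck tensor name matches the classic Inception V3 retraining example but should be verified against your own graph:

```python
import tensorflow.compat.v1 as tf  # graph-mode API; in TF 1.x just `import tensorflow as tf`
tf.disable_eager_execution()

# Load the frozen GraphDef (the file name is a placeholder)
with tf.gfile.GFile("classify_image_graph_def.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Bottleneck tensor name used by the classic Inception V3 retraining script;
    # inspect [n.name for n in graph_def.node] to find the right one for your graph
    bottleneck = graph.get_tensor_by_name("pool_3/_reshape:0")
    # New trainable layers for your own classes are built on top of the bottleneck;
    # they can be given their own (larger) learning rate while the imported part stays fixed
    logits = tf.layers.dense(bottleneck, units=5, name="new_output")  # 5 classes, as an example
```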
