I've downloaded the tflite model from https://www.tensorflow.org/lite/models/pose_estimation/overview and I want to use it to estimate poses in images on my laptop with Python. How do I load the model and run inference with it?
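Here is a minimal sketch of loading a .tflite model with the TF Lite Python interpreter and running it on one image. The file name, the 257x257 input size, and the [-1, 1] normalization are assumptions based on the PoseNet model from that page; check the model card for the exact input and output details.

import numpy as np
import tensorflow as tf
from PIL import Image

# Assumed file name; use the path of the .tflite file you downloaded.
MODEL_PATH = "posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite"

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the image to the model's expected input size (assumed 257x257 RGB)
# and normalize pixel values to [-1, 1] (assumption; verify for your model).
height, width = input_details[0]['shape'][1:3]
img = Image.open("person.jpg").convert("RGB").resize((width, height))
input_data = (np.asarray(img, dtype=np.float32) - 127.5) / 127.5
input_data = np.expand_dims(input_data, axis=0)

# Run inference.
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# For PoseNet, the first output holds keypoint heatmaps and the second holds offsets.
heatmaps = interpreter.get_tensor(output_details[0]['index'])
offsets = interpreter.get_tensor(output_details[1]['index'])
print(heatmaps.shape, offsets.shape)

From the heatmaps and offsets you can then recover keypoint coordinates as described on the pose estimation overview page.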
Related
I have trained my model for doors in YOLOv3, but now I need it in TensorFlow Lite. The problem is that to train my model for TensorFlow I need annotation files in ".csv" or ".xml" format, but the ones I have are "*.txt". I did find software to create annotation files manually by drawing rectangles on pictures, but I cannot do that for thousands of images due to time constraints.
Can anyone guide me on how to handle this situation?
I have followed the link below, but the resulting model did not work.
https://medium.com/analytics-vidhya/yolov3-to-tensorflow-lite-conversion-4602cec5c239
I think it would be best to train a TensorFlow implementation on your data; converting the TensorFlow model to tflite should then be easy.
Here is YOLOv3 in TF: https://github.com/YunYang1994/tensorflow-yolov3
Then use the official TensorFlow tooling to convert to tflite: https://www.tensorflow.org/lite/convert
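As a rough sketch of that conversion step (assuming you export the TensorFlow YOLOv3 implementation as a SavedModel; the directory and file names here are placeholders):

import tensorflow as tf

# Placeholder: the directory produced by exporting the TensorFlow YOLOv3 model as a SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

with open("yolov3.tflite", "wb") as f:
    f.write(tflite_model)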
I have used a combination of MTCNN (for face detection) and a Facenet model trained on different faces, and have generated weights (face embeddings) into an .npz file. I have used the Keras API to load the model, train it, and use it for inference for face recognition. This whole setup is working fine.
Now I want to use the same weights for face recognition in an Android app using the Firebase AutoML custom model implementation, which supports only TensorFlow Lite models. So I want to convert the trained Facenet weights (face embeddings in '.npz' file format) into a TensorFlow Lite (.tflite) model.
But I am not able to find any solution for this; there are only options to convert a Facenet frozen model ('.pb' file) into tflite. Click here for details.
Please help if you have any idea about this conversion.
Thanks
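One possible route (a sketch, not from the thread): the .npz file holds embeddings rather than the network, so what you would convert is the Keras Facenet model you already load for inference, and the stored embeddings are then compared against embeddings computed on-device. The 'facenet_keras.h5' file name is a placeholder.

import tensorflow as tf

# Placeholder: the Keras Facenet model you already use to generate embeddings.
model = tf.keras.models.load_model("facenet_keras.h5")

# Convert the Keras model to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("facenet.tflite", "wb") as f:
    f.write(tflite_model)

# The .npz embeddings themselves need no conversion: ship them with the app
# and compare them to embeddings produced by the .tflite model.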
I've just started with TensorFlow. I wrote a program that uses the Fashion_MNIST dataset to train the model and then predicts the labels using 'test_images', and it's working well so far.
But I am curious how I can use my own image of a shoe or shirt for prediction, because all the test images have shape 28*28. How can I do this?
The task you are engaged in is data preparation and preprocessing. Given a directory of images, one of the things you must do is tag the images; for this task I recommend labelImg.
If you also need the input to have a specific size, as in the example you give, you can use digital image processing software. The OpenCV library has resizing tools that work for this.
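For example, a sketch of that preprocessing with OpenCV (the photo name is a placeholder and `model` is assumed to be the Keras model you already trained on Fashion_MNIST):

import numpy as np
import cv2

# Placeholder: your own photo of a shoe or shirt.
img = cv2.imread("my_shoe.jpg", cv2.IMREAD_GRAYSCALE)

# Resize to the 28x28 input the Fashion_MNIST model was trained on.
img = cv2.resize(img, (28, 28))

# Fashion_MNIST items are light on a dark background; a typical photo is the
# opposite, so invert, then scale pixel values to [0, 1] as in training.
img = 255 - img
img = img.astype("float32") / 255.0

# Add a batch dimension and predict with your already-trained model.
prediction = model.predict(img.reshape(1, 28, 28))
print(prediction.argmax())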
My YOLO model works fine for detecting objects such as bottles, people, cellphones, backpacks, et cetera. But I want my model to detect a ring, a bracelet, or a helmet (objects which are not present in the current YOLO model). Can I build a custom object detection YOLO model without a GPU? What are the risks involved (if any)?
My system is Windows 10 Home Single Language with 8 GB RAM.
Re-compiling darknet.exe to run on the CPU is terribly slow. I've tried it before; it's totally impractical.
I recommend you look into the Intel OpenVINO toolkit:
https://software.intel.com/en-us/openvino-toolkit
and
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html
The OpenVINO toolkit can load and run models from most major frameworks on Intel CPUs and integrated GPUs.
You can still use regular NVIDIA cards to train your custom objects with Darknet YOLO.
Then use third-party converter tools (which can easily be found on GitHub) to convert the YOLO weight files you trained into a TensorFlow PB file.
Then use Intel's Model Optimizer to transform the PB file and label file into so-called "Intermediate Representation" files (*.bin, *.xml, *.labels, and *.mapping), which can later be loaded and run on Intel CPUs or integrated GPUs.
The Model Optimizer automatically optimizes the YOLO convolutional network by removing unused nodes, improving overall inference speed, which is much faster than simply running the YOLO weights with a re-compiled CPU version of darknet.exe.
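To give a feel for the last step, here is a rough sketch of loading the generated IR files with the pre-2022 openvino.inference_engine Python API (file names are placeholders and details vary between OpenVINO releases):

import numpy as np
from openvino.inference_engine import IECore

# Placeholders: the IR files produced by the Model Optimizer.
ie = IECore()
net = ie.read_network(model="yolov3.xml", weights="yolov3.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Assume a single input blob with an NCHW shape.
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# Dummy input of the right shape; in practice resize your frame and
# transpose it from HWC to CHW before feeding it.
frame = np.zeros((n, c, h, w), dtype=np.float32)

results = exec_net.infer(inputs={input_name: frame})
for name, blob in results.items():
    print(name, blob.shape)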
Yes, you can do that.
Just change the following lines in the Makefile in the darknet folder:
GPU=1
CUDNN=1 (for GPU)
to:
GPU=0
CUDNN=0 (for CPU)
Yes, you can also train your YOLO model to detect custom objects. Just follow this blog - Link
I have retrained the Inception model on my dataset of traffic signs. It works fine, but when I check another image, e.g. a panda, the result is the name of a traffic sign with some probability. I don't understand why it does this. I need both the original TensorFlow dataset classes and my own categories too.
My steps:
I installed Python 3.5.2 on Windows 7.
I installed TensorFlow with:
pip install tensorflow
I downloaded these two files: retrain.py to train on my data, and label_image.py to check an image.
Files downloaded from:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/image_retraining
You have misunderstood the fundamentals of transfer learning with respect to this image retraining program.
In the image retraining program you are referencing, you take the Inception CNN model that has already been pretrained on the ImageNet dataset. You then retrain the final classification layers on your NEW classes and data.
The transfer learning works because you retain all the learnt feature-extraction filters etc. in the early layers and simply reclassify the activations of those layers into new classes based on your new dataset. This means you are replacing the classification part with a new one. AFAIK there is no way to simply add classes to a CNN model via transfer learning, because the softmax layer (for example) has already been trained with a classification distribution over a fixed set of classes.
To achieve what you are suggesting, you would need to retrain the final layers of Inception on the original dataset PLUS your additional data. This would take a long time due to the size of ImageNet.
I would re-evaluate whether you actually need all of these classes in your application, or whether it is sufficient to have just your traffic signs etc.
You can learn more about the program in the TensorFlow tutorial here.
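To make the idea concrete, here is a sketch in Keras of the same pattern (not the retrain.py script itself): the pretrained convolutional base is kept frozen and only a new classification head is trained on your own classes, which is why the original ImageNet classes disappear from the output.

import tensorflow as tf

NUM_CLASSES = 5  # e.g. the number of traffic-sign classes in your dataset

# Pretrained Inception feature extractor, without its original ImageNet head.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # keep the learnt feature-extraction filters

# New classification head: its softmax only knows about YOUR classes,
# which is why a panda can only ever be scored as one of the traffic signs.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=...)  # your traffic-sign data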