I have trained a custom object detector using YOLOv4 and I have the following files:
yolov4-custom_best.weights
yolov4-custom.cfg
obj.names
'obj.names' has the names of the classes of the custom object.
In order to deploy my custom object detector to a web application, I need the files in a TensorFlow model format (e.g. object_detector.h5).
Can someone help?
To get the model file in .h5 format you have to save the model. Assuming you already have the network loaded as a Keras model, you can save it with
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
You can also save the model by using tf.keras.models.save_model()
tf.keras.models.save_model(model, filepath, save_format='h5')
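For illustration, a minimal, runnable round trip (the tiny Sequential model below is a stand-in; in practice it would be the Keras model you built from the converted YOLOv4 weights, which is a separate conversion step):
import tensorflow as tf

# Stand-in model; replace with the Keras model built from your converted weights.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

model.save('object_detector.h5')  # '.h5' -> saved as HDF5

# Later, e.g. inside the web application:
restored = tf.keras.models.load_model('object_detector.h5')
restored.summary()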
For more details, please refer to this documentation. Thank you!
I've used TensorFlow Lite to make my own custom android.tflite model as described in this TFLite Colab demo.
I was going to test my object detection model using tflite_support, but this module is not Windows compatible.
Can I use my .tflite model with "regular" tensorflow? If so, how do I load the .tflite model? Following this example, we use tf.saved_model.load(model_dir), which I believe will not load our android.tflite file.
Is there a function for loading a .tflite file? Or, while I'm making my model with tensorflow_model_maker, do I need to specify a different format for the model?
Additionally, I see that tflite has a way to convert normal tensorflow models to tflite models, but I don't see that it has a means of doing the reverse:
tf.lite.TFLiteConverter(
    funcs, trackable_obj=None
)
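For what it's worth, the closest thing I've found is tf.lite.Interpreter, which executes a .tflite file directly rather than loading it back as a regular TensorFlow model; a minimal sketch of what I mean (the file name is mine):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="android.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result.shape)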
I have successfully trained a convolutional neural network model using Google Colab in a file named model_prep.py. The model achieves 92% accuracy. Now that I'm happy with the model, I have used PyTorch to save it.
torch.save(model, '/content/drive/MyDrive/myModel.pt')
My understanding is that once the model has been fully trained, I can use PyTorch to save it and then load it into future projects for predictions on new data. Therefore I created a separate test.py file where I loaded the trained model like so,
model = torch.load('/content/drive/MyDrive/myModel.pt')
model.eval()
But within the new test.py file, I receive this error message:
AttributeError: Can't get attribute 'ResNet1D' on <module '__main__'>
This error does not occur when loading the model in the same notebook where it was trained (model_prep.py); it only occurs when loading the model into a separate notebook that does not contain the model architecture. How do I go about this problem? I would like to load the trained model into a new, separate file to run it on new data. Can someone suggest a solution?
In the future, I would like to create a GUI using tkinter and deploy the trained model to check predictions using new data within the tkinter file. Is this possible?
I was facing the same error as well. What this is trying to say is that you should create an instance of your model by calling its class, and then do torch.load().
If you go to PyTorch's tutorial on Saving and Loading Models, in the load section you can clearly see the line # Model class must be defined somewhere.
Hence I would recommend that in your test.py file you define the model class exactly as you did in train.py (guessing this is the file where you created your model), and then load it as shown below.
model = ModelClass()  # the class definition must be in scope before loading
model = torch.load(PATH, map_location=torch.device('cpu'))  # <--- map_location needed if the current device is CPU
model.eval()  # <--- switch to evaluation mode so layers like dropout and batch norm behave correctly at inference
As stated in the PyTorch documentation (here): "Saving a model in this way will save the entire module using Python’s pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is that pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors."
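For reference, the alternative those docs recommend is saving only the state_dict, which avoids pickling the class; a minimal sketch (ResNet1D stands for the asker's own model class, constructed with the same arguments as in training):
import torch

# In model_prep.py: save only the learned parameters.
torch.save(model.state_dict(), '/content/drive/MyDrive/myModel_state.pt')

# In test.py: rebuild the architecture first, then load the weights into it.
model = ResNet1D()  # the class definition must be available here too
model.load_state_dict(torch.load('/content/drive/MyDrive/myModel_state.pt',
                                 map_location=torch.device('cpu')))
model.eval()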
I fixed this with the TorchScript approach. We save the model with
model_scripted = torch.jit.script(model)  # Export to TorchScript
model_scripted.save('model_scripted.pt')  # Save
and to load it
model = torch.jit.load('model_scripted.pt')
model.eval()
More details here
I am using TensorFlow 1.15 and Python 3.7, and I am a beginner.
I trained a TensorFlow model with my own dataset on Google Cloud as described here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_training_and_evaluation.md
After training, my Google Cloud bucket listed the model.ckpt files. I saved the model as described here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md . Doing this generated some files: checkpoint, frozen_inference_graph.pb, model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta, pipeline.config, and a folder "saved_model" which contains a file saved_model.pb and an empty variables folder. So far so good.
Now I wanted to use these files to make predictions using this notebook: https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/object_detection_tutorial.ipynb but I got stuck at the "load object detection model" section; I always get this:
OSError: SavedModel file does not exist at: home/user/models/research/exported_graphs/saved_model/{saved_model.pbtxt|saved_model.pb}
What am I doing wrong? I tried all the possibilities and read dozens of Stack Overflow articles, but I can't find a usable solution. Is there any other way to use the model.ckpt files generated by training to produce a .h5 file / make predictions?
Thank you very much in advance!
First, upload the saved_model directory to Google Cloud, and use the code snippet below to load it.
import pathlib
import tensorflow as tf

def load_model(model_name):
    # path to the directory that contains the exported saved_model folder
    model_dir = "model_directory"
    model_dir = pathlib.Path(model_dir) / "saved_model"
    model = tf.saved_model.load(str(model_dir))
    return model
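A hypothetical usage sketch (the directory name is a placeholder, and this assumes a TF2-style runtime, which the snippet above already relies on):
detection_model = load_model("my_model")
print(detection_model.signatures)  # an exported detector should list 'serving_default'
infer = detection_model.signatures["serving_default"]
# predictions are then made by calling infer(...) with an input image tensor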
I'm trying to convert TFLite Face Mesh model to MLModel (Apple).
TFLite model description:
https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view
TFLite actual .tflite file:
https://github.com/google/mediapipe/blob/master/mediapipe/models/face_landmark.tflite
Looking at CoreMLTools provided by Apple (https://coremltools.readme.io/docs/introductory-quickstart), it seems like it's possible, but all the sample code demonstrates conversion from Keras and not from TFLite (although it's clearly supported).
How does one convert TFLite model to MLModel model?
As far as I know, there is no direct conversion from TFLite to Core ML. Someone could create such a converter but apparently no one has.
Two options:
Do it yourself. There is a Python API to read the TFLite file (flatbuffers) and an API to write Core ML files (NeuralNetworkBuilder in coremltools). Go through the layers of the TFLite model one by one, add them to the NeuralNetworkBuilder, and then save as a .mlmodel file (see the sketch after the next option).
Let TFLite do this for you. When you use the CoreMLDelegate in TFLite, it actually performs the model conversion on-the-fly and saves a .mlmodel file (or the compiled version, .mlmodelc). Then it uses Core ML to run this model. You can write some code to load the model with TFLite using the CoreMLDelegate, then grab the .mlmodel file that this created from the app bundle and use that.
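If you go the do-it-yourself route, a minimal NeuralNetworkBuilder sketch (shapes and names here are placeholders; a real converter would emit one Core ML layer per parsed TFLite op):
import coremltools
import coremltools.models.datatypes as datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder

# Declare the model's input/output interface.
input_features = [('input', datatypes.Array(64))]
output_features = [('output', datatypes.Array(64))]
builder = NeuralNetworkBuilder(input_features, output_features)

# For each TFLite layer you parse, append the Core ML equivalent, e.g.:
builder.add_activation(name='relu_1', non_linearity='RELU',
                       input_name='input', output_name='output')

mlmodel = coremltools.models.MLModel(builder.spec)
mlmodel.save('converted.mlmodel')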
I trained a model in Google Cloud ML and saved it in the SavedModel format. I've attached the directory for the saved model below.
https://drive.google.com/drive/folders/18ivhz3dqdkvSQY-dZ32TRWGGW5JIjJJ1?usp=sharing
I am trying to load the model into R using the following code, but it is returning <tensorflow.python.training.tracking.tracking.AutoTrackable> with an object size of 552 bytes, which is definitely not correct. If anyone can properly load the model, I would love to know how you did it. It should also be possible to load it into Python, I assume; that could work too. The model was trained on GPU; I'm not sure which TensorFlow version. Thank you very much!
library(keras)
list.files("/path/to/inceptdual400OG")
og400 <- load_model_tf("/path/to/inceptdual400OG")
Since the shared model is not available anymore (it says that it is in the trash folder) and the framework is not specified in the question, I can't tell which framework you used to save the model in the first place. I would suggest trying the Keras load function or the TensorFlow load function, depending on which type of saved model file you have.
Bear in mind to pass the argument compile = FALSE if you have the model already compiled.
Remember to import the latest libraries if you trained your model with TF >= 2.0, because of dependency incompatibilities between {tensorflow} and {keras}; the output of rsconnect::appDependencies() would be worth checking.
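If the R route keeps failing, a quick sanity check from Python (a minimal sketch; the path is the one from the question) can tell you whether the export itself is intact:
import tensorflow as tf

model = tf.saved_model.load("/path/to/inceptdual400OG")
print(model.signatures)  # a healthy export should list e.g. 'serving_default'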