I've frozen my model and got a .pb file. Then I quantized the model using TocoConverter on Linux, as it's not supported on Windows. I got quantized_model.tflite. I can load it and get predictions on Linux, but I'm having trouble doing the same on Windows, which my project requires.
I've tried to load it with tf.contrib.lite.Interpreter, using this code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.contrib.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
# change the following line to feed into your own data.
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
ImportError: No module named 'tensorflow.contrib.lite.python.interpreter'
But it failed with the error above. I always get errors like this on Windows when trying to use anything from tf.contrib.lite. Is there a way to load the model on Windows? Or can you advise an alternative way to quantize a model on Windows?
TOCO is currently not supported in the Windows CMake build. This is what I remember reading somewhere.
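If your TensorFlow build on Windows exposes the newer tf.lite namespace (it replaced tf.contrib.lite around TF 1.13; whether your installed version has it is an assumption), the same loading code may work without touching tf.contrib. A minimal sketch:
import numpy as np
import tensorflow as tf
# Same flow as the snippet above, but through tf.lite instead of tf.contrib.lite.
interpreter = tf.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_data = np.array(np.random.random_sample(input_details[0]['shape']), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))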
I have this model: https://github.com/williamyang1991/DualStyleGAN and I'm trying to convert it to CoreML. So far I've created a copy of the original Colab notebook and appended two blocks at the end:
!pip install coremltools
import coremltools as ct
and
#@title Convert inverted image.
inverted_latent = torch.Tensor(result_latents[0][4]).cuda().unsqueeze(0).unsqueeze(1)
with torch.no_grad():
    net.eval()
    [sampled_src, sampled_dst] = net(inverted_latent, input_is_latent=True)[0]
    traced_model = torch.jit.trace(net, inverted_latent)
mlmodel = ct.convert(traced_model, inputs=[ct.ImageType(name="input", shape=inverted_latent.shape, bias=[-1, -1, -1], scale=2.0/255.0)])
mlmodel.save("modelsaved.mlmodel")
To run it, you should put an image containing a face into /content, and in /usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py replace the round method on lines 545 and 546 with np.round.
But then it fails at
mlmodel = ct.convert(...
with:
RuntimeError: PyTorch convert function for op 'pythonop' not implemented.
I suppose there is a way to rewrite this module with methods that can be converted, am I right? But I can't figure out how to find the source of this module.
So my question is:
If I'm thinking in the right direction, how can I find the source of the module?
And if I'm wrong, please advise me on the right way to do it.
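For reference, one way to see which module produces the unconvertible op is to walk the traced graph and look for PythonOp nodes; a minimal sketch, assuming the torch.jit.trace call above succeeds (printing a node typically includes the source location that created it):
# Walk the traced graph and report any PythonOp nodes, which come from
# custom autograd Functions that coremltools cannot translate.
for node in traced_model.graph.nodes():
    if 'PythonOp' in node.kind():
        print(node)  # the printed node includes the originating Python source location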
The code starts by loading the model into PyTorch. It then converts the model into CoreML format and saves it to a .mlmodel file. The code below will take the existing PyTorch model and convert it into a CoreML model with input and output features.
The output is saved in the file MyCoreMLModel.mlmodel, which can be opened in Xcode or any other development environment that supports CoreML models.
import torch
import coremltools

model = torch.load('MyPyTorchModel.pt')
coreml_model = coremltools.converters.pytorch.from_pytorch(
    model,
    input_features=['input'],
    output_features=['output'])
coreml_model.save('MyCoreMLModel.mlmodel')
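Note that recent coremltools releases route PyTorch conversion through the unified ct.convert API on a TorchScript model (as in the question above) rather than a dedicated pytorch converter module. A minimal sketch assuming coremltools 4+ and a hypothetical 1x3x224x224 input:
import torch
import coremltools as ct

model = torch.load('MyPyTorchModel.pt')
model.eval()
example = torch.rand(1, 3, 224, 224)  # hypothetical input shape
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="input", shape=example.shape)])
mlmodel.save('MyCoreMLModel.mlmodel')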
I have read the CoreML guide, which shows how to convert a .pb model to .mlmodel using coremltools. However, I get the error below when trying to follow the guide, which means the input shape must be static.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert to an mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct
def mlmodel_image(pb):
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
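One workaround worth trying when an unbounded RangeDim trips up ops like ResizeBilinear is to enumerate a fixed set of supported shapes instead of a continuous range; a sketch, assuming coremltools 4+ (the listed shapes are hypothetical):
import coremltools as ct

# Each enumerated shape is compiled statically, so ops that need a
# known output size (like ResizeBilinear) can still be converted.
input_shape = ct.EnumeratedShapes(
    shapes=[(1, 720, 1280, 3), (1, 360, 640, 3)],
    default=(1, 720, 1280, 3))
model_input = ct.ImageType(shape=input_shape)
mlmodel = ct.convert('model.pb', inputs=[model_input], source='TensorFlow')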
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
And take a look at my article with the attached Google Colab notebook in PyTorch.
There are instructions on how to run Style Transfer on iOS with maximum performance.
I'm working with neural networks and I need to use a Raspberry Pi v2.
When I try to install TensorFlow 2.x it fails, and I can only install TensorFlow 1.14. For this reason I found the tflite library, which in theory helps me with a lite version of TF.
Here is an image that shows I can't install it.
First of all, I convert my Keras model (model.h5) into a .tflite model.
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
Up to here, everything is OK. The problem is when I want to use this model. With TensorFlow I know how to do it:
from tensorflow import keras
def importModel(myPath):
    with open(myPath + 'model/model.json', 'r') as file:
        model_json = file.read()
    model = keras.models.model_from_json(model_json)
    model.load_weights(myPath + 'model/model.h5')
    return model, scaler  # note: scaler comes from elsewhere in my code
But I really don't understand how to do it with tflite. Can somebody help me, please?
You can find this in the official documentation
import numpy as np
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
And if you have trouble installing TensorFlow 2.x on your Raspberry Pi, it might be because you are not using the latest version of Python 3.
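As an aside, if a full TensorFlow install keeps failing on the Pi, the interpreter is also shipped as a standalone tflite_runtime package (assuming a wheel exists for your Pi's Python version); the loading code only changes in the import:
from tflite_runtime.interpreter import Interpreter

# Same interpreter API as tf.lite.Interpreter, without the full TensorFlow dependency.
interpreter = Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()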
I trained a Pipeline model, which uses CountVectorizer, TfidfTransformer, OneVsRestClassifier and also a GridSearchCV.
Now I want to save it into a tflite file, to use it on my Android app.
For a Sequential model (where my tflite file was created successfully), I did:
sequential_model = Sequential()
...
# train and fit the model
...
h5_file = "h5_model.h5"
tflite_file = "tflite_model.tflite"
sequential_model.save(h5_file)
converter = tf.lite.TFLiteConverter.from_keras_model_file(h5_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
All good to save Sequential model into a tflite file.
Well, Pipeline has no attribute "save", unlike a Sequential model, so I tried saving the Pipeline model with joblib and then with pickle, but neither of them worked.
Let's say that pipeline_model is my trained model (the one described in the first sentence).
import joblib

pb_file = 'pipeline_model.pb'
# I also tried with other extensions, like h5, hdf5, sav, pkl
joblib.dump(pipeline_model, pb_file)
# or with the pickle equivalent and a pkl extension
# pickle.dump(pipeline_model, open(pb_file, 'wb'))
Now the pb file is created and I want to create a tflite one. Since it's not a Keras model, I can't use from_keras_model_file, so I tried instead with from_saved_model.
pb_file = 'pipeline_model.pb'
tflite_file = "tflite_model.tflite"
converter = tf.lite.TFLiteConverter.from_saved_model(pb_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
It generates the error on line of converter = ...:
OSError: SavedModel file does not exist at: pb_file.pb/{saved_model.pbtxt|saved_model.pb}
I tried running it on Kaggle, Colab, PyCharm IDE, with both versions of tensorflow (1 and 2), with different file extensions and nothing seems to work.
I also noticed that TFLiteConverter contains the methods from_frozen_graph and from_session, but these two require an extra parameter, so I don't think they could be the solution.
So, how can I obtain my tflite file from the trained Pipeline model? Please, if you find any solution, tell me the library versions that you used, since there could be a different behaviour on different libs.
I'm trying to convert this CPM-TF model to TFLite, but to use the TocoConverter, I need to specify input and output tensors.
https://github.com/timctho/convolutional-pose-machines-tensorflow
I ran the included run_freeze_model.py and got the cpm_hand_frozen.pb (GraphDef?) file.
From this post I copied the code snippet for converting the ProtoBuf file with known inputs and outputs. But looking through the model definition code, I have some trouble finding the correct answers for the in- and outputs.
Tensorflow Convert pb file to TFLITE using python
import tensorflow as tf
import numpy as np
from config import FLAGS

path_to_frozen_graphdef_pb = 'frozen_models/cpm_hand_frozen.pb'

def main(argv):
    input_tensors = [1, FLAGS.input_size, FLAGS.input_size, 3]
    output_tensors = np.zeros(FLAGS.num_of_joints)
    frozen_graph_def = tf.GraphDef()
    with open(path_to_frozen_graphdef_pb, 'rb') as f:
        frozen_graph_def.ParseFromString(f.read())
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, input_tensors, output_tensors)

if __name__ == '__main__':
    tf.app.run()
I'm quite new to TensorFlow, but I think the input should be defined as
[1, FLAGS.input_size, FLAGS.input_size, 3]
Found that here: https://github.com/timctho/convolutional-pose-machines-tensorflow/blob/master/models/nets/cpm_hand.py#L23
Not sure what the 1 represents (presumably the batch size), but None does not work, and I guess the other parameters are the image size and color channels.
However, with that input, it returns an error:
AttributeError: 'int' object has no attribute 'dtype'
I have no clue what the output should be, other than that it should be an array.
UPDATE 1
Looking through the TF docs, it appears that I need to define the input as a tensor (obvious!).
https://www.tensorflow.org/lite/convert/python_api
input_tensors = tf.placeholder(name="img", dtype=tf.float32, shape=(1,FLAGS.input_size, FLAGS.input_size, 3))
This does not return an error, but I still need to figure out if the input is correct and what the output should look like.
UPDATE 2
Alright, so I finally got it to spit out the tflite model with this code snippet
import os
import tensorflow as tf
from config import FLAGS

def tflite_converter():
    graph_def_file = os.path.join('frozen_models', '{}_frozen.pb'.format('cpm_hand'))
    input_arrays = ['input_placeholer']  # (sic) matches the node name in the pretrained graph
    output_arrays = [FLAGS.output_node_names]
    converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
    tflite_model = converter.convert()
    open('{}.tflite'.format('cpm_hand'), 'wb').write(tflite_model)
I hope that I did it correctly. I will try to do inference on the model on Android.
I also think there is a misspelling in the input tensor name: input_placeholder. It appears to be corrected in the code itself, but from printing out all node names of the pretrained model, the spelling input_placeholer is what's actually present.
Node names can be seen here: https://github.com/timctho/convolutional-pose-machines-tensorflow/issues/59
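For reference, a minimal sketch of how the node names can be printed from the frozen graph (assuming TF 1.x, where GraphDef lives at the top level):
import tensorflow as tf

# Parse the frozen graph and list every node name, to find the exact
# spelling of the input and output tensors.
graph_def = tf.GraphDef()
with open('frozen_models/cpm_hand_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name)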
My setup is:
Ubuntu 18.04
CUDA 9.1 & cuDNN 7.0
Python 3.6.5
TensorFlow GPU 1.6
Inference works like a charm, so there should be no issue on the setup itself.