I have this model: https://github.com/williamyang1991/DualStyleGAN and I am trying to convert it to CoreML. So far I have made a copy of the original Colab notebook and appended two blocks at the end:
!pip install coremltools
import coremltools as ct
and
#@title Convert inverted image.
inverted_latent = torch.Tensor(result_latents[0][4]).cuda().unsqueeze(0).unsqueeze(1)
with torch.no_grad():
    net.eval()
    [sampled_src, sampled_dst] = net(inverted_latent, input_is_latent=True)[0]
    traced_model = torch.jit.trace(net, inverted_latent)

mlmodel = ct.convert(traced_model, inputs=[ct.ImageType(name="input", shape=inverted_latent.shape, bias=[-1, -1, -1], scale=2.0/255.0)])
mlmodel.save("modelsaved.mlmodel")
To run it, you should put an image containing a face into /content, and in /usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py replace the round calls on lines 545 and 546 with np.round.
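Instead of editing the installed file, the same effect can likely be achieved with a runtime monkeypatch from the notebook; a minimal sketch, assuming the call sites in that module use the bare builtin round (a module-level global shadows a builtin at lookup time):

import numpy as np
import torchvision.transforms.functional as TF

# Shadow the builtin round inside the torchvision module instead of
# editing the installed source file.
TF.round = np.round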
But then it fails at
mlmodel = ct.convert(...
with:
RuntimeError: PyTorch convert function for op 'pythonop' not implemented.
I assume the module needs to be rewritten using operations that can be converted; am I right? But I can't figure out how to find the source of this module.
So my questions are:
If I'm thinking in the right direction, how can I find the source of the module?
And if I'm wrong, please advise me on the right way to do it.
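You are thinking in the right direction: 'pythonop' means the traced graph still contains a custom torch.autograd.Function (in StyleGAN2-derived code this is typically the fused_act/upfirdn2d CUDA extensions), which coremltools cannot translate. One way to locate the culprits is to walk the traced graph and print every PythonOp node; a minimal sketch, reusing the traced_model from the snippet above:

for node in traced_model.graph.nodes():
    # prim::PythonOp marks a call into a custom autograd Function,
    # which is exactly what the converter chokes on.
    if node.kind() == "prim::PythonOp":
        print(node)

Each printed node names the Python class behind it, which you can then grep for in the repo (for DualStyleGAN, look under model/stylegan/op) and replace with plain PyTorch equivalents before tracing again.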
The code below loads the model into PyTorch, converts it into CoreML format, and saves it to a .mlmodel file with named input and output features.
The output is saved to the file MyCoreMLModel.mlmodel, which can be opened in Xcode or any other development environment that supports CoreML models.
import torch
import coremltools as ct

# Load the trained PyTorch model and put it in inference mode.
model = torch.load('MyPyTorchModel.pt')
model.eval()

# coremltools converts TorchScript, so trace the model with an example
# input first (adjust the shape to whatever your model expects).
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

coreml_model = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name='input', shape=example_input.shape)],
)
coreml_model.save('MyCoreMLModel.mlmodel')
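As a quick smoke test of the saved model (prediction only works on macOS, and the input name and shape here are the hypothetical ones from the snippet above):

import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel('MyCoreMLModel.mlmodel')
# Feed a random tensor matching the traced input shape and inspect the outputs.
out = mlmodel.predict({'input': np.random.rand(1, 3, 224, 224).astype(np.float32)})
print(out.keys())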
I am trying to replicate the code from this page.
At my workplace we have access to the transformers and pytorch libraries but cannot connect to the internet from our Python environment. Could anyone help with how we could get the script working after manually downloading files to my machine?
My specific questions are:
Should I go to the location bert-base-uncased at main and download all the files? Do I have to put them in a folder with a specific name?
How should I change the code below?
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Tokenize our sentence with the BERT tokenizer.
tokenized_text = tokenizer.tokenize(marked_text)
And how should I change the code below?
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased',
                                  output_hidden_states = True, # Whether the model returns all hidden-states.
                                  )
Please let me know if anyone has done this…thanks
###update1
I went to the link and manually downloaded all the files to a folder, then specified the path of that folder in my code. The tokenizer works, but this line fails: model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True). Any idea what I should do? I noticed that the 4 big files have very strange names when downloaded... should I rename them to the same names shown on the page above? Do I need to download any other files?
The error message is: OSError: unable to load weights from pytorch checkpoint file for bert-base-uncased2/ at bert-base-uncased/pytorch_model.bin. If you tried to load a pytorch model from a TF 2.0 checkpoint, please set from_tf=True.
Clone the model repo to download all the files:
git lfs install
git clone https://huggingface.co/bert-base-uncased
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
git usage:
download git from here: https://git-scm.com/downloads
paste these into your CLI (terminal):
a. git lfs install
b. git clone https://huggingface.co/bert-base-uncased
wait for the download; it will take a while (you can monitor your network traffic to check progress)
find the download directory by running cd (Windows) or pwd (Linux/macOS) in your CLI and note the full path (e.g. "C:/Users/........./bert-base-uncased")
use it as:
from transformers import BertModel, BertTokenizer
model = BertModel.from_pretrained("C:/Users/........./bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("C:/Users/........./bert-base-uncased")
Manual download, without git:
Download all the files from here https://huggingface.co/bert-base-uncased/tree/main
Put them in a folder named "yourfoldername"
use it as:
model = BertModel.from_pretrained("C:/Users/........./yourfoldername")
tokenizer = BertTokenizer.from_pretrained("C:/Users/........./yourfoldername")
For the model only (manual download, without git):
just click the download button here and download only the pytorch pretrained model; it's about 420 MB:
https://huggingface.co/bert-base-uncased/blob/main/pytorch_model.bin
download the config.json file from here: https://huggingface.co/bert-base-uncased/tree/main
put both of them in a folder named "yourfilename"
use it as:
model = BertModel.from_pretrained("C:/Users/........./yourfilename")
Answering "###update1" for the error: 'OSErrr: unable to load weights from pytorch checkpoint file for bert-base-uncased2/ at bert-base-uncased/pytorch_model.bin If you tried to load a pytroch model from a TF 2 checkpoint, please set from_tf=True'
Please try this methods from -> https://huggingface.co/transformers/model_doc/bert.html
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained("C:/Users/........./bert-base-uncased")
model = BertForMaskedLM.from_pretrained("C:/Users/........./bert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
If this works, we know there is nothing wrong with the filesystem or folder names.
If it works, next try to get the hidden state (note that BertModel already returns the hidden state, as the docs explain: "The bare Bert Model transformer outputting raw hidden-states without any specific head on top.", so you don't need output_hidden_states = True):
from transformers import BertTokenizer, BertModel
import torch
tokenizer = BertTokenizer.from_pretrained("C:/Users/........./bert-base-uncased")
model = BertModel.from_pretrained("C:/Users/........./bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
If that does not work, try loading the PyTorch checkpoint directly with one of these methods:
# Load all tensors onto the CPU
torch.load("C:/Users/........./bert-base-uncased/pytorch_model.bin", map_location=torch.device('cpu'))
# Load all tensors onto GPU 1
torch.load("C:/Users/........./bert-base-uncased/pytorch_model.bin", map_location=lambda storage, loc: storage.cuda(1))
If the plain torch.load also fails, there is likely a version compatibility problem between your installed PyTorch and the released BERT checkpoint, or your pytorch_model.bin file did not download completely. Also double-check that your Python and PyTorch versions are compatible with each other.
I have read the CoreML guide which shows how to convert a .pb model to .mlmodel using coremltools. However, I get the error below when following the guide, which means the input shape must be known statically:
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert a model to .mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct

def mlmodel_image(pb):
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also look at my article with an attached Google Colab notebook in PyTorch.
It has instructions on how to run Style Transfer on iOS with maximum performance.
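If fully flexible RangeDim shapes keep tripping the static-size requirement of ResizeBilinear, one workaround worth trying is ct.EnumeratedShapes, which keeps every candidate shape static; a minimal sketch, assuming coremltools 4+ and a hypothetical frozen_graph.pb:

import coremltools as ct

# Each enumerated shape is fully static, so ops that need a known output
# size at conversion time can still be resolved; the converted model then
# accepts exactly these input sizes at runtime.
input_shape = ct.EnumeratedShapes(
    shapes=[(1, 360, 640, 3), (1, 720, 1280, 3)],
    default=(1, 720, 1280, 3),
)
model_input = ct.ImageType(shape=input_shape)
mlmodel = ct.convert('frozen_graph.pb', inputs=[model_input], source='TensorFlow')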
I need to change the input size of an ONNX model from [1024,2048,3] to [1,1024,2048,3].
For this, I've tried using update_inputs_outputs_dims from ONNX:
import onnx
from onnx.tools import update_model_dims
model = onnx.load("./0818_pspnet_1.0_713_resnet_v1/pspnet_citysc.onnx")
updated_model = update_model_dims.update_inputs_outputs_dims(model, {"inputs:0":[1,1024,2048,3]}, {"predictions:0":[1, 1025, 2049, 1]})
onnx.save(updated_model, 'pspnet_citysc_upd.onnx')
However, this is the error I end up with.
ValueError: Unable to set dimension value to 1 for axis 0 of inputs:0. Contradicts existing dimension value 1024.
The ONNX model is exported from a Tensorflow frozen graph of PSPNet. If the above approach does not work, would I need to modify the frozen graph?
Any help is greatly appreciated.
You can use the make_dynamic_shape_fixed tool from onnxruntime:
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param batch --dim_value 1 model.onnx model.fixed.onnx
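Note that update_inputs_outputs_dims maps the new dims onto the existing axes positionally, so it can resize axes but not prepend a new one; that is why asking for 1 at axis 0 contradicts the existing 1024. A small inspection sketch (assuming only the model path from the question) to confirm the declared rank before and after any edit:

import onnx

model = onnx.load("./0818_pspnet_1.0_713_resnet_v1/pspnet_citysc.onnx")

# Print each declared input name with its dims; if the input is rank 3,
# no dim-renaming tool can turn it into rank 4.
for inp in model.graph.input:
    dims = inp.type.tensor_type.shape.dim
    print(inp.name, [d.dim_value if d.dim_value else d.dim_param for d in dims])

If a real batch axis is needed, re-exporting the frozen graph with a [1, 1024, 2048, 3] placeholder is usually the more robust route, since patching the declared shape alone leaves the downstream rank-3 ops unchanged.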
I trained a Pipeline model, which uses CountVectorizer, TfidfTransformer, OneVsRestClassifier and also a GridSearchCV.
Now I want to save it into a tflite file, to use it on my Android app.
For a Sequential model (where my tflite file was created successfully), I did:
sequential_model = Sequential()
...
# train and fit the model
...
h5_file = "h5_model.h5"
tflite_file = "tflite_model.tflite"
sequential_model.save(h5_file)
converter = tf.lite.TFLiteConverter.from_keras_model_file(h5_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
All good to save Sequential model into a tflite file.
Well, Pipeline has no attribute "save", unlike a Sequential model, so I tried saving the Pipeline model with joblib and then with pickle, but neither worked.
Let's say that pipeline_model is my trained model (the one described in the first sentence).
import joblib

pb_file = 'pipeline_model.pb'
# I also tried other extensions, like h5, hdf5, sav, pkl
joblib.dump(pipeline_model, pb_file)
# or the pickle equivalent with a .pkl extension
# pickle.dump(pipeline_model, open(pb_file, 'wb'))
Now the pb file is created and I want to create a tflite one. Since it's not a Keras model, I can't use from_keras_model_file, so I tried instead with from_saved_model.
pb_file = 'pipeline_model.pb'
tflite_file = "tflite_model.tflite"
converter = tf.lite.TFLiteConverter.from_saved_model(pb_file)
tflite_model = converter.convert()
open(tflite_file, "wb").write(tflite_model)
It generates this error on the converter = ... line:
OSError: SavedModel file does not exist at: pb_file.pb/{saved_model.pbtxt|saved_model.pb}
I tried running it on Kaggle, Colab, PyCharm IDE, with both versions of tensorflow (1 and 2), with different file extensions and nothing seems to work.
I also noticed that TFLiteConverter contains the methods from_frozen_graph and from_session, but these two requires an extra parameter, so I don't think these could be the solution.
So, how can I obtain my tflite file from the trained Pipeline model? Please, if you find any solution, tell me the library versions that you used, since there could be a different behaviour on different libs.
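One thing worth noting: tf.lite.TFLiteConverter.from_saved_model expects a SavedModel directory (the folder written by tf.saved_model.save, containing saved_model.pb plus a variables/ subfolder), not a joblib/pickle dump, and TFLite can only convert TensorFlow graphs, so a scikit-learn Pipeline would have to be re-implemented in TF/Keras first. For reference, a minimal sketch of the expected call, assuming a TF model already exported to a hypothetical export_dir:

import tensorflow as tf

# from_saved_model takes the SavedModel directory, not a path to a .pb file.
converter = tf.lite.TFLiteConverter.from_saved_model('export_dir')
tflite_model = converter.convert()
with open('tflite_model.tflite', 'wb') as f:
    f.write(tflite_model)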
I've frozen my model and got a .pb file. Then I quantized the model using tocoConverter on Linux, as it's not supported on Windows. I got quantized_model.tflite. I can load it and get predictions on Linux, but I have issues doing the same on Windows, as my project requires.
I've tried to load it using tf.contrib.lite.Interpreter with this code:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter=tf.contrib.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
# change the following line to feed into your own data.
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'],input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
But it failed with the error ImportError: No module named 'tensorflow.contrib.lite.python.interpreter'. I always get these errors on Windows when trying to use anything from tf.contrib.lite. Maybe there is a way to load this on Windows? Or can you advise alternative options to quantize a model on Windows?
toco is currently not supported in the Windows cmake build. This is what I remember reading somewhere.
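If upgrading TensorFlow is an option, the interpreter has since moved out of contrib and does ship in the Windows wheels; a minimal sketch, assuming TF 1.14+ (or 2.x):

import numpy as np
import tensorflow as tf

# tf.lite.Interpreter replaces tf.contrib.lite.Interpreter in newer releases.
interpreter = tf.lite.Interpreter(model_path="quantized_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data shaped like the model input, run, and read the output back.
input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))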