I followed the TensorFlow tutorial at https://www.tensorflow.org/tutorials/keras/text_classification and saved the model.
I was able to import it into Go successfully using the tfgo library:
package main

import (
    "fmt"

    tg "github.com/galeone/tfgo"
    tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
    model := tg.LoadModel("movie_reviews", []string{"serve"}, nil)
    root := tg.NewRoot()
    t := tg.NewTensor(root, tg.Const(root, [3]int32{1, 2, 3}))
    fake, _ := tf.NewTensor([3]int32{1, 2, 3})
    model.Exec([]tf.Output{t.Output}, map[tf.Output]*tf.Tensor{
        model.Op("input", 0): fake,
    })
    fmt.Println(model)
}
But now I don't know how to interact with it. In Python you have all these model methods, e.g. predict, evaluate, etc.
With the Go bindings it seems you need to know the exact operation name in order to interact with it?
How would I find that out?
Yes, using the Go bindings and tfgo you have to know the exact operation name.
Getting all the names under the "serve" tag is straightforward using the saved_model_cli tool, which ships with the TensorFlow Python package:
saved_model_cli show --all --dir <path of your SavedModel>
It will give you all the information needed for every tag available in the SavedModel. In your case, look at the signature_def with the key "serve" or "serving_default".
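For a model like the tutorial's, the relevant part of the output looks roughly like this (the signature/tensor names and dtypes below are illustrative; your actual names will differ):

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_1'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: serving_default_input_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

The part before the colon in each name field (e.g. serving_default_input_1) is the operation name you pass to model.Op, and the digit after the colon is the output index.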
Disclaimer: I'm the author of the tfgo package. I've also covered this topic in chapter 10 of the book Hands-On Neural Networks with TensorFlow 2.0, in the section dedicated to the SavedModel serialization format.
Related
I am referring to the models that can be found here: https://pytorch.org/docs/stable/torchvision/models.html#torchvision-models
As @dennlinger mentioned in his answer, torch.utils.model_zoo is called internally when you load a pre-trained model.
More specifically, the method torch.utils.model_zoo.load_url() is called every time a pre-trained model is loaded. Its documentation mentions:
The default value of model_dir is $TORCH_HOME/models where
$TORCH_HOME defaults to ~/.torch.
The default directory can be overridden with the $TORCH_HOME
environment variable.
This can be done as follows:
import torch
import torchvision
import os
# Suppose you want to load the pre-trained ResNet model into the directory models\resnet
os.environ['TORCH_HOME'] = 'models\\resnet'  # setting the environment variable
resnet = torchvision.models.resnet18(pretrained=True)
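On Linux/macOS the same idea applies with a POSIX path, e.g.:

os.environ['TORCH_HOME'] = 'models/resnet'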
I came across the above solution by raising an issue in PyTorch's GitHub repository:
https://github.com/pytorch/vision/issues/616
This led to an improvement in the documentation, i.e. the solution mentioned above.
Yes, you can simply copy the URLs and use wget to download them to the desired path. Here's an illustration:
For AlexNet:
$ wget -c https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth
For Google Inception (v3):
$ wget -c https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth
For SqueezeNet:
$ wget -c https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth
For MobileNetV2:
$ wget -c https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
For DenseNet201:
$ wget -c https://download.pytorch.org/models/densenet201-c1103571.pth
For MNASNet1_0:
$ wget -c https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth
For ShuffleNetv2_x1.0:
$ wget -c https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth
If you want to do it in Python, then use something like:
In [11]: from six.moves import urllib
# resnet 101 host url
In [12]: url = "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth"
# download and rename the file to `resnet_101.pth`
In [13]: urllib.request.urlretrieve(url, "resnet_101.pth")
Out[13]: ('resnet_101.pth', <http.client.HTTPMessage at 0x7f7fd7f53438>)
P.S.: You can find the download URLs in the respective Python modules of torchvision.models.
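For instance, torchvision releases of that era expose a module-level model_urls dict in each model module (newer versions moved this into weights enums), so a quick way to look one up is:

from torchvision.models import resnet

# Older torchvision versions keep a dict of architecture name -> download URL
print(resnet.model_urls['resnet101'])
# -> https://download.pytorch.org/models/resnet101-5d3b4d8f.pth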
There is a script available that will output a list of URLs across the entire package.
From within the pytorch/vision repository, execute the following:
python scripts/collect_model_urls.py .
# ...
# https://download.pytorch.org/models/swin_v2_b-781e5279.pth
# https://download.pytorch.org/models/swin_v2_s-637d8ceb.pth
# https://download.pytorch.org/models/swin_v2_t-b137f0e2.pth
# https://download.pytorch.org/models/vgg11-8a719046.pth
# https://download.pytorch.org/models/vgg11_bn-6002323d.pth
# ...
TL;DR: No, it is not possible directly, but you can easily adapt it.
I think what you want to do is to look at torch.utils.model_zoo, which is internally called when you load a pre-trained model:
If we look at the code for the pre-trained models, for example AlexNet here, we can see that it simply calls the previously mentioned model_zoo function, but without a saved location. You can either modify the PyTorch source to specify this (that would actually be a great addition IMO, so maybe open a pull request for it), or simply adapt the code in the second link to your own liking (save it to a custom location under a different name), and then manually insert the relevant location there. A sketch of the second route follows below.
If you want to update PyTorch regularly, I would heavily recommend the second method, since it doesn't involve directly altering PyTorch's code base, which could otherwise throw errors during updates.
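A minimal sketch of that adapted route, assuming torchvision's ResNet-18 and its published checkpoint URL (the model_dir value is your own choice):

import torch.utils.model_zoo as model_zoo
import torchvision.models as models

# Build the architecture without weights, download the state dict to a
# custom directory, and load it manually
model = models.resnet18(pretrained=False)
state_dict = model_zoo.load_url(
    'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    model_dir='./my_models')
model.load_state_dict(state_dict)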
I've found many topics related to this on the Internet, but no solutions.
Suppose I want to download any PMML model from this examples list and run it in Python (preferably Python 3). Is there any way to do this?
I'm looking for a way to import a PMML that was deployed OUTSIDE Python and import it to use it with this language.
You could use PyPMML to apply PMML in Python, for example:
from pypmml import Model
model = Model.fromFile('DecisionTreeIris.pmml')
result = model.predict({
    "Sepal_Length": 5.1,
    "Sepal_Width": 3.5,
    "Petal_Length": 1.4,
    "Petal_Width": 0.2
})
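The result is a plain dict keyed by the model's output fields, so you can inspect it directly (the exact field names depend on the PMML file):

print(result)  # e.g. a dict such as {'predicted_class': 'setosa', ...}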
For more info about other PMML libraries, feel free to see:
https://github.com/autodeployai
After some research I found the solution to this: the 'openscoring' library.
Using it is very simple:
import subprocess

from openscoring import Openscoring

# Start the Openscoring server (a Java process) from Python
p = subprocess.Popen('java -jar openscoring-server-executable-1.4.3.jar',
                     shell=True)

# Named 'client' to avoid shadowing Python's os module
client = Openscoring("http://localhost:8080/openscoring")

# Deploy a PMML document DecisionTreeIris.pmml as an Iris model
client.deployFile("Iris", "DecisionTreeIris.pmml")

# Evaluate the Iris model with a data record
arguments = {
    "Sepal_Length": 5.1,
    "Sepal_Width": 3.5,
    "Petal_Length": 1.4,
    "Petal_Width": 0.2
}
result = client.evaluate("Iris", arguments)
print(result)
This returns the value of the target variable in a dictionary. There is no need to go outside of Python to use PMML models anymore (you just have to run the server with Java, which can also be done from Python, as shown above).
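One housekeeping note: since the server was started from Python, you can also stop it from Python when you are done, using the subprocess handle p from above:

p.terminate()  # stop the Openscoring server launched earlier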
Isn't this like trying to host H2O models in Python apps? It looks like a bridge from Python to Java is required here too. Such bridges are not stable at all; I've been there and tested them. Just a general suggestion: do not mix languages between ML algos and app code. Train in Python, serve in Python, re-validate also in Python. Legacy R and H2O models can always be re-fitted in Python.
I am new to TensorRT and I am not very familiar with C either. May I ask if there is any example of importing a Caffe model (via the caffe parser) while also using a plugin, in Python? Plugin library example: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/_nv_infer_plugin_8h_source.html
I saw an example doing something like the below. Is it necessary to modify the PluginFactory class, or has that already been done by the Python plugin API?
import tensorrt as trt  # imported as trt, since trt.* is used below
import tensorrtplugins
from tensorrt.plugins import _nv_infer_plugin_bindings as nvinferplugin
from tensorrt.parsers import caffeparser

plugin_factory = tensorrtplugins.FullyConnectedPluginFactory()
parser = caffeparser.create_caffe_parser()
parser.set_plugin_factory(plugin_factory)

engine = trt.utils.caffe_to_trt_engine(G_LOGGER,
                                       MODEL_PROTOTXT,
                                       CAFFE_MODEL,
                                       1,
                                       1 << 20,
                                       OUTPUT_LAYERS,
                                       trt.infer.DataType.FLOAT,
                                       plugin_factory)
P.S.: I am trying to convert YOLOv2 to the TensorRT format, so some layers (e.g. kYOLOREORG and kPRELU) can only be supported through the plugin.
Another way to do this would be to add the plugin while constructing the network, via the method network.add_plugin_ext()? However, I am not sure how to specify the previous layer that is going to be imported later.
Thank you so much for your answer. Your help will be much appreciated!
I trained my model in Amazon SageMaker and downloaded it to my local computer. Unfortunately, I don't have any idea how to run the model locally.
The model is in a directory with files like:
image-classification-0001.params
image-classification-0002.params
image-classification-0003.params
image-classification-0004.params
image-classification-0005.params
image-classification-symbol.json
model-shapes.json
Would anyone know how to run this locally with Python, or be able to point me to a resource that could help? I am trying to avoid calling the model using the Amazon API.
Edit: The model I used was created with code very similar to this example.
Any help is appreciated, I will award the bounty to whoever is most helpful, even if they don't completely solve the question.
This is not a complete answer, as I do not have SageMaker set up (and I do not know MXNet), so I cannot practically test this approach. Treat it as a probable pointer/approach to solving the issue rather than a complete answer.
The Assumption -
You mentioned that your model is very similar to the notebook link you provided. If you read the text in the notebook carefully, you will see at some point there is something like this -
"In this demo, we are using Caltech-256 dataset, which contains 30608 images of 256 objects. For the training and validation data, we follow the splitting scheme in this MXNet example."
See the mention of MXNet there? Let us assume that you did not change a lot and hence your model is built using MXNet as well.
The Approach -
Assuming what I just mentioned, if you search the documentation of the AWS SageMaker Python SDK, you will see a section about serialization of the modules, which itself starts with another assumption -
"If your train function returns a Module object, it will be serialized by the default Module serialization system, unless you've specified a custom save function."
Assuming that this is true for your case, further reading in the same document tells us that "model-shapes.json" is a JSON-serialized representation of your model's shapes, "model-symbol.json" is the serialization of the module symbols (created by calling the 'save' function on the 'symbol' property of the module), and finally "module.params" is the serialized (I am not sure whether in text or binary format) form of the module parameters.
Equipped with this knowledge, we go and look into the documentation of MXNet. And voila! We see here how to save and load models with MXNet. Since you already have those saved files, you just need to load them into a local installation of MXNet and then run them to predict the unknown.
I hope this will help you to find a direction to solve your problem.
Bonus -
I am not sure if this can do the same job (it was also mentioned by @Seth Rothschild in the comments), but it should: the AWS SageMaker Python SDK also has a way to load models from saved ones.
Following SRC's advice, I was able to get it to work by following the instructions in this question and this doc, which describe how to load an MXNet model.
I loaded the model like so:
import mxnet as mx

# Load the checkpoint saved after epoch 5 (image-classification-0005.params)
lenet_model = mx.mod.Module.load('model_directory/image-classification', 5)

image_l = 64
image_w = 64
lenet_model.bind(for_training=False,
                 data_shapes=[('data', (1, 3, image_l, image_w))],
                 label_shapes=lenet_model._label_shapes)
Then I predicted using the slightly modified helper functions from the previously linked documentation:
import mxnet as mx
import matplotlib.pyplot as plt
import cv2
import numpy as np
from mxnet.io import DataBatch

def get_image(url, show=False):
    # download the image and optionally display it
    fname = mx.test_utils.download(url)
    img = cv2.imread(fname)
    if img is None:
        return None
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    if show:
        plt.imshow(img)
        plt.axis('off')
    # convert into format (batch, RGB, width, height)
    img = cv2.resize(img, (64, 64))
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 1, 2)
    img = img[np.newaxis, :]
    return img

def predict(url, labels):
    img = get_image(url, show=True)
    # compute the predicted probabilities
    lenet_model.forward(DataBatch([mx.nd.array(img)]))
    prob = lenet_model.get_outputs()[0].asnumpy()
    # print the top-5 classes
    prob = np.squeeze(prob)
    a = np.argsort(prob)[::-1]
    for i in a[0:5]:
        print('probability=%f, class=%s' % (prob[i], labels[i]))
Finally, I called the prediction with this code:
labels = ['a', 'b', 'c', 'd', 'e', 'f']
predict('https://eximagesite/img_tst_a.jpg', labels)
If you want to host your trained model locally, and you are using Apache MXNet as your model framework (as in the above example), the simplest way is to use MXNet Model Server: https://github.com/awslabs/mxnet-model-server
Once you've installed it locally, you can start serving with:
mxnet-model-server \
--models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model
and then call the local endpoint with an image:
curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/squeezenet/predict -F "data=@kitten.jpg"
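If you would rather call the endpoint from Python than curl, here is a minimal sketch using the requests library, mirroring the curl call above (same endpoint and form field):

import requests

# POST the downloaded image to the local Model Server endpoint
with open('kitten.jpg', 'rb') as f:
    response = requests.post('http://127.0.0.1:8080/squeezenet/predict',
                             files={'data': f})
print(response.json())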
I'm trying to make a simple speech recognition program in Python using Sphinx. I installed it using pip in CMD, then I installed PocketSphinx in the same way. The tutorial I'm following says I need to include the model directories for PocketSphinx, but I don't know where the directory is. How do I find it, and am I doing something wrong?
If you are using pocketsphinx-python installed via pip, and following example code similar to that provided on the package's GitHub page, you may find a few code changes are needed.
Here's what's currently in the README (as of March 11, 2018):
from os import path  # the README relies on this import; added for completeness

from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *

MODELDIR = "pocketsphinx/model"
DATADIR = "pocketsphinx/test/data"

# Create a decoder with a certain model
config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us/en-us'))
config.set_string('-lm', path.join(MODELDIR, 'en-us/en-us.lm.bin'))
config.set_string('-dict', path.join(MODELDIR, 'en-us/cmudict-en-us.dict'))
This not-yet-accepted pull request describes some changes which may help for those of us using pip and working on our python code outside the downloaded module's directory (at least in a *nix/Mac environment, I haven't tested on Windows). Here's a diff snippet; the key idea is to use path.dirname(pocketsphinx.__file__) to get the base directory in which to look for the model directory:
-MODELDIR = "pocketsphinx/model"
-DATADIR = "pocketsphinx/test/data"
+import pocketsphinx;
+POCKETSPHINXDIR = path.dirname(pocketsphinx.__file__)
+MODELDIR = path.join(POCKETSPHINXDIR, "model")
+DATADIR = path.join(POCKETSPHINXDIR, "data")
(Small note: I took liberty to fix a small typo in the spelling of POCKETSPHINXDIR, so this code isn't exactly the same as the pull request)
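With those paths in place, the README example continues by actually decoding a raw audio file bundled with the package, roughly like this (taken from the same README, so the decoding calls are the package's own API):

decoder = Decoder(config)

# Decode streaming data from the bundled test recording
decoder.start_utt()
stream = open(path.join(DATADIR, 'goforward.raw'), 'rb')
while True:
    buf = stream.read(1024)
    if buf:
        decoder.process_raw(buf, False, False)
    else:
        break
decoder.end_utt()
print('Best hypothesis segments:', [seg.word for seg in decoder.seg()])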
Go to the location where your Python is installed and look for the following path inside it (this location is according to a Windows installation):
Lib\site-packages\speech_recognition\pocketsphinx-data
The default model is en-US; however, there are a few other language models that can be downloaded from here:
https://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/
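If you would rather locate that directory programmatically than browse for it, a small sketch (assuming the pip-installed speech_recognition package):

import os
import speech_recognition

# The bundled pocketsphinx data lives next to the installed package
print(os.path.join(os.path.dirname(speech_recognition.__file__),
                   'pocketsphinx-data'))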
It may be late to answer now, but for newcomers: the Python module has some convenience methods:
from pocketsphinx import get_model_path, get_data_path
print(get_model_path())
print(get_data_path())
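These pair naturally with the decoder setup shown earlier; a short sketch (the en-us file names below match the pip package's bundled model, but verify them against your own get_model_path() listing):

from os import path
from pocketsphinx import get_model_path
from pocketsphinx.pocketsphinx import Decoder

model_path = get_model_path()
config = Decoder.default_config()
config.set_string('-hmm', path.join(model_path, 'en-us'))
config.set_string('-lm', path.join(model_path, 'en-us.lm.bin'))
config.set_string('-dict', path.join(model_path, 'cmudict-en-us.dict'))
decoder = Decoder(config)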