I've found many topics related to this on the Internet, but no solutions.
Suppose I want to download any PMML model from this examples list and run it in Python (preferably Python 3). Is there any way to do this?
I'm looking for a way to take a PMML model that was deployed OUTSIDE Python and import it for use with this language.
You could use PyPMML to apply PMML in Python, for example:
from pypmml import Model

model = Model.fromFile('DecisionTreeIris.pmml')
result = model.predict({
    "Sepal_Length": 5.1,
    "Sepal_Width": 3.5,
    "Petal_Length": 1.4,
    "Petal_Width": 0.2
})
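PyPMML is published on PyPI, so it can typically be installed with pip before running the snippet above:
pip install pypmml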
For more info about other PMML libraries, feel free to see:
https://github.com/autodeployai
After some research I found the solution to this: the 'openscoring' library.
Using it is very simple:
import subprocess

from openscoring import Openscoring

# Start the Openscoring server (a Java process) from Python
p = subprocess.Popen('java -jar openscoring-server-executable-1.4.3.jar',
                     shell=True)

os = Openscoring("http://localhost:8080/openscoring")

# Deploying a PMML document DecisionTreeIris.pmml as an Iris model:
os.deployFile("Iris", "DecisionTreeIris.pmml")
# Evaluating the Iris model with a data record:
arguments = {
    "Sepal_Length": 5.1,
    "Sepal_Width": 3.5,
    "Petal_Length": 1.4,
    "Petal_Width": 0.2
}
result = os.evaluate("Iris", arguments)
print(result)
This returns the value of the target variable in a dictionary. There is no need to go outside of Python to use PMML models anymore (you just have to run the server with Java, which can be started from Python as I showed above).
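When you are done, the Java server process started from Python can be stopped again; a minimal sketch (note that with shell=True you may need to stop the java process itself rather than just the shell, depending on your platform):
# p is the subprocess.Popen handle that launched the Openscoring server
p.terminate()
p.wait()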
Isn't this like trying to host H2O models in Python apps? It looks like a bridge from Python to Java is required here too. Such bridges are not stable at all; I've been there and tested them. Just a general suggestion: do not mix languages between ML algorithms and application code. Train in Python, serve in Python, re-validate in Python. Legacy R and H2O models can always be re-fitted in Python.
Imagine I have a .mzn file called abc.mzn, and it is as follows:
array[1..3] of int: a;
output [show(a)];
Now I have a .dzn file called cde.dzn, and it is as follows:
a=[1,2,3];
I will run the minizinc Python package as below:
from minizinc import Instance, Model, Solver

x = Solver.lookup("gecode")
M1 = Model("./abc.mzn")
instance1 = Instance(x, M1)
instance1["a"] = [1, 2, 3]
result = instance1.solve()
print(result)
The above code works fine and there is no problem with it. I am keen to use the dzn module in this Python code instead of the Instance module, and to get rid of manually assigning the line below.
As you can see, we need to manually assign the values for all parameters on the instance:
instance1["a"] = [1, 2, 3]
Is there any way that we can use the .dzn file to assign values (using the dzn module)? I noted that the package already ships a dzn module.
Can we do it in the below manner, or how would we get the results?
from minizinc import dzn, Model, Solver

M1 = Model("./abc.mzn")
D1 = dzn("./cde.dzn")  # etc.
The DZN module in MiniZinc Python is meant to be used through the .add_file method of Instance/Model. Using this method you can add data files (.dzn/.json) or additional model files (.mzn) to your MiniZinc model or instance.
So your example would become:
from minizinc import Model
M1 = Model("./abc.mzn")
M1.add_file("./cde.dzn")
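To actually solve the model with that data, a minimal sketch continuing from the snippet above (assuming the Gecode solver is installed locally):
from minizinc import Instance, Solver

# Pick a locally installed solver (Gecode assumed here) and solve the instance
gecode = Solver.lookup("gecode")
instance1 = Instance(gecode, M1)
result = instance1.solve()
print(result)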
The code below lets me get 5 suggestions for the masked token, but I'd like to get 10 suggestions - does anyone know if this is possible with Hugging Face?
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill("I am going to guess <mask> in this sentence")
I would like to add that the parameter has since been renamed to top_k.
It can be passed to each individual call of nlp_fill as well as to the pipeline() call.
Again, this is an unfortunate shortcoming of the "under construction" documentation.
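For example, with a recent transformers release (where the parameter is named top_k):
from transformers import pipeline

# Set the default number of suggestions when building the pipeline
nlp_fill = pipeline('fill-mask', top_k=10)

# ...or override it for a single call
nlp_fill("I am going to guess <mask> in this sentence", top_k=10)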
If you look closely at the parameters of the FillMaskPipeline (which is what pipeline('fill-mask') constructs, see here),
you will find that it has a topk=5 parameter, which you can simply set to a value of your liking by specifying it in the pipeline constructor:
from transformers import pipeline
nlp_fill = pipeline('fill-mask', topk=10)
I followed the TensorFlow tutorial at https://www.tensorflow.org/tutorials/keras/text_classification and saved the model.
I was able to successfully import it into Go using the tfgo library:
package main

import (
	"fmt"

	tg "github.com/galeone/tfgo"
	tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
	model := tg.LoadModel("movie_reviews", []string{"serve"}, nil)
	root := tg.NewRoot()
	t := tg.NewTensor(root, tg.Const(root, [3]int32{1, 2, 3}))
	fake, _ := tf.NewTensor([3]int32{1, 2, 3})
	model.Exec([]tf.Output{t.Output}, map[tf.Output]*tf.Tensor{
		model.Op("input", 0): fake,
	})
	fmt.Println(model)
}
But now I don't know how to interact with it. In Python you have all these model methods, e.g. predict, evaluate, etc.
With the Go binding it seems you need to know the exact operation name in order to interact with it?
How would I find out about that?
Yes, using the Go bindings and tfgo you have to know the exact operation name.
Getting all the names under the "serve" tag is straightforward, by the way, using the saved_model_cli CLI tool shipped together with the TensorFlow Python package.
saved_model_cli show --all --dir <path of your SavedModel>
It will give you all the information needed, for every tag available in the SavedModel. In your case, you have to take a look at the signature_def with the key "serve" or "serving_default".
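If you prefer to stay in Python, a quick sketch (assuming TensorFlow 2.x) that inspects the same signature information programmatically:
import tensorflow as tf

# Load the SavedModel exported by the tutorial and list its serving signatures
loaded = tf.saved_model.load("movie_reviews")
print(list(loaded.signatures.keys()))  # e.g. ['serving_default']

# Inspect the input and output tensors of the default signature
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)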
Disclaimer: I'm the author of the tfgo package. I've also covered this topic in chapter 10 of the book Hands-On Neural Networks with TensorFlow 2.0, in the section dedicated to the SavedModel serialization format.
I am new to TensorRT and I am not so familiar with the C language either. May I ask if there is any example of importing a caffe model (caffeparser) while at the same time using a plugin, with Python? Plugin library example: "https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/_nv_infer_plugin_8h_source.html".
I saw an example doing something like the below. Is it necessary to modify the pluginfactory class? Or has it already been done with the Python plugin API?
import tensorrt as trt
import tensorrtplugins
from tensorrt.plugins import _nv_infer_plugin_bindings as nvinferplugin
from tensorrt.parsers import caffeparser

plugin_factory = tensorrtplugins.FullyConnectedPluginFactory()
parser = caffeparser.create_caffe_parser()
parser.set_plugin_factory(plugin_factory)
engine = trt.utils.caffe_to_trt_engine(G_LOGGER,
                                       MODEL_PROTOTXT,
                                       CAFFE_MODEL,
                                       1,
                                       1 << 20,
                                       OUTPUT_LAYERS,
                                       trt.infer.DataType.FLOAT,
                                       plugin_factory)
P.S.: I am trying to convert YOLO2 to the TensorRT format. Therefore, some layers (e.g. kYOLOREORG and kPRELU) can only be supported by the plugin.
Another way to do so is to add the plugin while constructing the network, via the method network.add_plugin_ext(). However, I am not so sure how to specify the previous layer that is going to be imported later.
Thank you so much for your answer. Your help will be much appreciated!
I have a simple LinearModel with two sparse and two real-valued features. I trained it and now I want to export it with export_savedmodel. Referencing a few sources, I came up with something along the lines of:
feature_spec = create_feature_spec_for_parsing(
    [
        real_valued_column_1, real_valued_column_2,
        sparse_column_1, sparse_column_2
    ]
)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
my_estimator.export_savedmodel('my_model/', serving_input_fn=input_receiver_fn)
where:
real_valued_column_1 = tf.contrib.layers.real_valued_column(
    'avg_consumption_h')
sparse_column_1 = tf.contrib.layers.sparse_column_with_integerized_feature("sparse_1", bucket_size=24)
Unfortunately I get ValueError: A default input_alternative must be provided. on export_savedmodel. I dug a little into the TensorFlow codebase, and it seems that build_parsing_serving_input_receiver_fn always returns a ServingInputReceiver, but the method that extracts input_alternatives always creates them empty if the serving_input_fn passed to export_savedmodel is not of the type InputFnOps.
Is build_parsing_serving_input_receiver_fn somehow deprecated, is something wrong in the process of extracting the input_alternative, or am I misunderstanding the process completely and doing something wrong?
I'm using Python 3.6 with TensorFlow 1.2; my model is a simple tf.contrib.learn.LinearRegressor.
You can try the following:
from tensorflow.contrib.learn.python.learn.utils.input_fn_utils import build_parsing_serving_input_fn
input_receiver_fn = build_parsing_serving_input_fn(feature_spec)
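Putting it together with the columns from the question, a sketch of the full export (assuming TensorFlow 1.2 and a tf.contrib.learn estimator) could look like:
from tensorflow.contrib.layers import create_feature_spec_for_parsing
from tensorflow.contrib.learn.python.learn.utils.input_fn_utils import build_parsing_serving_input_fn

# Build the feature spec from the same columns used to train the estimator
feature_spec = create_feature_spec_for_parsing(
    [real_valued_column_1, real_valued_column_2,
     sparse_column_1, sparse_column_2])

# build_parsing_serving_input_fn returns the InputFnOps-based serving input fn
# that tf.contrib.learn estimators expect in export_savedmodel
serving_input_fn = build_parsing_serving_input_fn(feature_spec)
my_estimator.export_savedmodel('my_model/', serving_input_fn=serving_input_fn)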