I am working on a Python desktop app that does some predictions. Right now I train my sklearn model using a Python script and save the model's parameters as a dictionary in a YAML file. I then bundle this YAML file into my Python app, and when the app runs, the model is recreated from the parameters in that dictionary. I realized that people who have a different version of sklearn get an error. I tried saving my model in a pickle file instead, but that produced a warning when the app ran on a machine with a different version of sklearn.
There is no guarantee that a given sklearn model will be compatible across sklearn versions; the implementation or the internal API may change between releases. See the scikit-learn documentation on model persistence for more information.
If you only target a single version, the best way is indeed to pickle the fitted model rather than save its parameters in a YAML file. It's even better to use joblib to do so.
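A minimal sketch of the joblib approach (the estimator and data below are just placeholders):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from joblib import dump, load

    # Toy data and estimator, purely illustrative.
    X, y = make_classification(n_samples=100, random_state=0)
    model = LogisticRegression().fit(X, y)

    # Persist the fitted estimator to disk.
    dump(model, "model.joblib")

    # Later, inside the app -- reliable only under the same sklearn version:
    restored = load("model.joblib")
    print(restored.predict(X[:5]))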
I realized that people who have a different version of sklearn get an error.
In that case, have users run the app in isolated Python environments (virtualenvs) pinned to the exact sklearn version the model was trained with.
Alternatively, you can generate plain Python code from a trained model. This way you eliminate any possibility of object incompatibility. Here is a tool that can help with that: https://github.com/BayesWitnesses/m2cgen
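For example, a sketch of the m2cgen route (estimator and file name are illustrative):

    import m2cgen as m2c
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Any estimator supported by m2cgen works similarly.
    X, y = make_classification(n_samples=100, random_state=0)
    model = LogisticRegression().fit(X, y)

    # Generate pure-Python scoring code with no sklearn dependency,
    # which you can bundle into the app instead of the model object.
    with open("model_scoring.py", "w") as f:
        f.write(m2c.export_to_python(model))

The generated module exposes a plain score(input) function, so the app no longer needs sklearn at prediction time.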
I would like to use a Modelica model, which is based on an external library, for co-simulation in another environment such as ANSYS, Abaqus, etc. The model should be able to interact with another model. What is the easiest way to co-simulate a Modelica model?
For this purpose, the FMU export in OpenModelica seems to be the right approach. The problem is that the model is based on an external library, which causes various issues (also stated here: https://openmodelica.org/forum/default-topic/2180-libraries-not-included-in-fmu).
In my case, I tried loading the FMU with the FMPy GUI (python -m fmpy.gui), which resulted in the error:
"Failed to load path/to/haeger_model_win64.fmu. The unit 'ml' of variable 'C_a.V' is not defined."
What works for me now: loading the .mo file with OMPython and simulating in Python. That is limiting, though, because I do not know how to interact with the model; I can only simulate it for preset parameters.
You can find the model (haeger_model.mo), the external library (HumanLib.mo) and FMU exports of haeger_model here: https://github.com/xi2pi/LPModelica
The simulation with OMPython works just like explained in the tutorial (https://www.openmodelica.org/doc/OpenModelicaUsersGuide/latest/ompython.html).
The problem is that I do not know how to interact with the model when simulating in Python. An approach with an FMU is preferred.
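To illustrate, my current OMPython workflow is roughly the following; setParameters/getSolutions are how I would expect interaction to work, though the exact signatures differ between OMPython versions:

    from OMPython import ModelicaSystem

    # Load the model together with the external library it depends on.
    mod = ModelicaSystem("haeger_model.mo", "haeger_model", ["HumanLib.mo"])

    # "Interaction": change a parameter, re-simulate, read results back.
    mod.setParameters("C_a.V=0.5")  # illustrative value
    mod.simulate()
    time, volume = mod.getSolutions(["time", "C_a.V"])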
I am building an app that allows a user to retrain Inception through a GUI. I am currently basically going through TensorFlow's image retraining example (retrain.py) and building a GUI as a front-end. I don't know where the actual trained neural network is stored or how to train multiple instances of it. The output_graph argument defaults to /tmp/output_graph.pb, but I don't know whether that is the trained network itself or something else (I don't know a lot about TensorFlow). I also want everything to be stored only within the directory my app runs from. I am currently cloning the entire TensorFlow GitHub repository, and I'm wondering if all I need is the retrain.py and label_image.py files. Here are my questions:
1. What is the output graph?
2. How do I store multiple instances of retrained inception that are each trained for specific sets of categories?
3. Do I need to clone the whole repository?
I am running on Windows with Python 3.5.2 (64-bit) and TensorFlow 1.5.0.
The output graph is where the network itself is stored: retrain.py writes a protocol buffer (.pb) file containing both the graph structure and the trained weights.
Make a directory for your instance of Inception and create a bottleneck directory inside it. Then pass the additional arguments --output_graph /path/to/main/dir/output_graph.pb --bottleneck_dir /path/to/main/dir/bottleneck_dir --output_labels /path/to/main/dir/output_labels.txt when you run retrain.py.
Then, when you run label_image.py, pass additional arguments that look like this: --graph /path/to/main/dir/output_graph.pb --labels /path/to/main/dir/output_labels.txt, plus the other arguments you would have used based on the TensorFlow tutorial.
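Since your front-end is already Python, one way to script this per category set is a sketch like the following (directory layout and paths are just examples):

    import os
    import subprocess

    def retrain_instance(instance_dir, image_dir):
        # Each instance gets its own graph, labels, and bottleneck cache.
        os.makedirs(os.path.join(instance_dir, "bottleneck_dir"), exist_ok=True)
        subprocess.check_call([
            "python", "retrain.py",
            "--image_dir", image_dir,
            "--output_graph", os.path.join(instance_dir, "output_graph.pb"),
            "--output_labels", os.path.join(instance_dir, "output_labels.txt"),
            "--bottleneck_dir", os.path.join(instance_dir, "bottleneck_dir"),
        ])

    # One directory per trained instance, kept inside the app's own folder.
    retrain_instance("instances/animals", "training_images/animals")
    retrain_instance("instances/vehicles", "training_images/vehicles")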
No, you only need retrain.py and label_image.py.
I am using an open-source Matlab toolbox for brain-computer interface (BCI). I want to send the brain imaging data over to Tensorflow for classification and get the results back to Matlab. Is there any way to pass data structures from Matlab to Tensorflow and get the results back into Matlab?
In case someone lands here with a similar question, I'd like to suggest a Matlab package I am currently writing. It's called tensorflow.m and it's available on GitHub. There's no stable release yet, but simple functionality like importing a frozen graph and running inference is already possible (see the examples) - this is all you'd need to classify the images entirely in Matlab.
The advantage is that you don't need any expensive toolbox or a Python/TensorFlow installation on your machine. Matlab's Python interface also seems rather adventurous, while tensorflow.m is pure Matlab/C.
I'd be glad if the package is of use to someone looking for a similar solution; even more so if you extend or implement something and open a PR.
So far the best way I've found is to run your Python module in Matlab through Matlab's now built-in mechanism for connecting to Python:
I wrote my Python script in a .py file, and in there I imported TensorFlow and used it in different functions. You can then return the results to Matlab by calling
results = py.myModule.myFunction(arg1,arg2,...,argN)
More detailed instructions for calling user-defined Python modules in Matlab can be found at the following link:
http://www.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html
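For the Python side, myModule.py might look roughly like this (module name, graph path, and tensor names are hypothetical; this uses the TF 1.x API current at the time):

    # myModule.py -- called from Matlab as py.myModule.classify(...)
    import numpy as np
    import tensorflow as tf

    def classify(data):
        # Matlab arrays arrive as list-like objects; convert to numpy first.
        x = np.asarray(data, dtype=np.float32).reshape(1, -1)

        with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        with tf.Graph().as_default() as graph:
            tf.import_graph_def(graph_def, name="")
            with tf.Session(graph=graph) as sess:
                # Tensor names are placeholders; use the ones in your graph.
                out = sess.run("output:0", feed_dict={"input:0": x})

        # Plain Python lists convert cleanly back to Matlab types.
        return out.tolist()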
While I admire, and am somewhat baffled by, the documentation's commitment to mediating everything related to TensorFlow Serving through Bazel, my understanding of it is tenuous at best. I'd like to minimize my interaction with it.
I'm implementing my own TF Serving server by adapting code from the Inception + TF Serving tutorial. I find the BUILD files intimidating enough as it is, and rather than slogging through a lengthy debugging process, I decided to simply edit BUILD to refer to the .cc file, in lieu of also building the python stuff which (as I understand it?) isn't strictly necessary.
However, my functional installation of TF Serving can't be imported into python. With normal TensorFlow you build a .whl file and install it that way; is there something similar you can do with TF Serving? That way I could keep the construction and exporting of models in the realm of the friendly python interactive shell rather than editing it, crossing all available fingers, building in bazel, and then /bazel-bin/path/running/whatever.
Simply adding the directory to my PYTHONPATH has so far been unsuccessful.
Thanks!
You're close; you need to update the environment the way they do in this script:
.../serving/bazel-bin/tensorflow_serving/example/mnist_export
I printed out the environment update and applied it manually:
export PYTHONPATH=...
Then I was able to import tensorflow_serving.
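Equivalently, you can apply the same path update from inside Python before importing (the path below is a placeholder for whatever the script's environment update prints on your machine):

    import sys

    # Placeholder: substitute the directory printed by the mnist_export script.
    sys.path.append("/path/to/serving/bazel-bin/...")

    import tensorflow_serving  # should now resolve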
What is, in your opinion, the best way to get data (measurement data, for example) into Modelica (Dymola)? Is it possible to import data from Python into Modelica (for example, into a combi-time-table)?
My idea would be as follows:
pre-processing of measured data in Python
load the data from Python into Modelica (combi-time-table)
run simulation studies (scripted in Python)
I would appreciate any suggestions.
That's probably a matter of opinion. But since you have to do much of your data pre- and post-processing in Python, I would definitely export my (plant) model from Dymola as a co-simulation FMU and run it in Python.
In Dymola you can export FMUs and 'execute' them on the same PC that holds the Dymola license file. If you need to run the FMU on another PC, you'll have to buy a special binary export license.
There is a free Python package called PyFMI (www.pyfmi.org) which makes it easy to run an FMU in Python. See the examples at http://www.jmodelica.org/page/4924.
PyFMI can be a bit tricky to get up and running (with the right Python package dependencies and so on), so if you are not an experienced Python user, I would suggest downloading the installer for JModelica.org, which will do much of the setting up for you.
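A minimal PyFMI sketch (FMU, parameter, and variable names are illustrative):

    from pyfmi import load_fmu

    # Load the co-simulation FMU exported from Dymola.
    model = load_fmu("PlantModel.fmu")

    # Interact before simulating, e.g. set a parameter or start value.
    model.set("gain.k", 2.0)

    res = model.simulate(final_time=10.0)
    print(res["time"][-1], res["y"][-1])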
Best regards,
Rene Just Nielsen
As Rene Just Nielsen pointed out, this is primarily opinion based.
To give another way of accomplishing your goal, try the DymolaInterface. You could either set the table parameter via the interface in Python, or use a .txt file which you create and alter in Python while Modelica just knows the path to the file. The interface ships with your Dymola installation under Modelica\Library\python_interface\dymola.egg, where you will also find documentation for the functions.
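For example, a rough sketch using that interface (model, parameter, and variable names are illustrative; the exact call signatures are documented alongside the egg):

    from dymola.dymola_interface import DymolaInterface

    # Requires dymola.egg from the Dymola installation on the Python path.
    dymola = DymolaInterface()
    try:
        ok, finals = dymola.simulateExtendedModel(
            "MyPackage.MyModel",
            stopTime=10.0,
            initialNames=["gain.k"],   # parameter to set before the run
            initialValues=[2.0],
            finalNames=["y"],          # values read back after the run
        )
        print(ok, finals)
    finally:
        dymola.close()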
Another Python package for using FMUs is FMPy. I imagine both FMPy and PyFMI have their pros and cons.
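A similar sketch with FMPy (names again illustrative):

    from fmpy import simulate_fmu

    # One call covers loading, parameterization, and simulation.
    result = simulate_fmu(
        "PlantModel.fmu",
        start_values={"gain.k": 2.0},
        stop_time=10.0,
    )
    print(result["time"][-1])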
The last option, which does not require any external Python package, is to use mos-files to execute simulations and .txt files to read in the data. If the task you described is the only thing you want to accomplish, mos-scripts are quite sufficient.
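For the .txt route, the text format that Modelica's CombiTimeTable reads can be written from Python in a few lines (the table name must match the tableName parameter of the table in your model; the data here is made up):

    import numpy as np

    # Column 0 is time; the remaining columns are the signals.
    data = np.array([[0.0, 0.0],
                     [1.0, 1.5],
                     [2.0, 4.0]])

    # Header format expected by Modelica's table blocks.
    with open("measurements.txt", "w") as f:
        f.write("#1\n")
        f.write("double tab1({},{})\n".format(*data.shape))
        np.savetxt(f, data)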