I am a TensorFlow enthusiast and I am trying to export a model (developed in Python, then frozen and optimized with the TensorFlow tools) for use (inference only) within a C++ project.
What I have experienced is that, even after following all the prescriptions found in issues already opened by other users, the C++ executable I obtain after compiling the source is about 10x slower at inference (I mean session->run) than the same operation in Python.
I am aware of different issues opened on this topic. Following those I built the C++ project using the following command:
bazel build -c opt --copt=-mfma --copt=-mfpmath=both //tensorflow/project:project
I also tried using the same batch size for the inference tensor as was used during training, but I still see the same ~10x slowdown in the session->run operation.
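One thing worth ruling out before blaming the C++ build is the measurement itself: the first few calls to session->run pay one-time costs (graph pruning, memory allocation), so comparing a cold C++ run against a warm Python run exaggerates the gap. A minimal, framework-agnostic timing sketch (the `run_inference` callable is a placeholder for your own wrapper around session.run; the helper name and defaults are my own):

```python
import time

def benchmark(run_inference, n_warmup=5, n_runs=50):
    """Time a zero-argument inference callable, excluding warm-up calls.

    The first session->run / session.run invocations typically include
    one-time costs, so they are executed but not timed.
    """
    for _ in range(n_warmup):  # absorb one-time setup costs
        run_inference()
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "min_s": timings[0],
        "median_s": timings[len(timings) // 2],
        "mean_s": sum(timings) / len(timings),
    }
```

Comparing medians (rather than a single run) on both the Python and C++ sides makes the 10x figure much more trustworthy.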
I am aware that, in principle, the C++ implementation should be faster than the Python one (simply because Python is a higher-level language than C++), so this effect seems counterintuitive to me. My question is whether I am doing something wrong, or whether this is just how TensorFlow behaves.
Another question: googling around the web, I found claims that freezing a graph slows down inference (I might be wrong on that), but I couldn't figure out an alternative way of loading a graph in C++ code other than the frozen one (anyway, freezing the graph or not has no effect on the Python performance). Perhaps somebody could also explain what other options are available at the moment.
Thank you very much in advance for all your kind suggestions, and thank you for the outstanding job with TensorFlow.
I figured out that the problem is related to the frozen graph. In Python I was using the checkpoint-saved model, while in the C++ code I used the frozen one (my mistake, sorry). It seems that freezing the graph dramatically slows down inference: after switching the Python code to the frozen model as well, the Python inference takes the same time as the C++ one.
Related
If I want to take maximum advantage of the computing power of my 4 GPU machine learning desktop, should I use PyStan or PyMC3 for probabilistic programming in Python?
Honestly, GPU support for both is rather shoddy. Both are works in progress. I'd look elsewhere.
If you're really trying to create a performance-optimized model, my recommendation would be to use TensorFlow Probability, since it has first-class GPU support and gives fairly fine-grained control over execution.
There's also NumPyro, which I haven't used personally but looks promising.
Using the Python API, whether I run with a build or in the Editor, the simulation is much slower than when using the provided mlagents-learn methods. I'm running something similar to this, using a PyTorch implementation of DDPG and CUDA 9.0. Is this expected behaviour?
Go to the Academy's properties and make sure that the Time Scale value in the Inference Configuration equals the one in the Training Configuration.
For more info, check the official ML-Agents documentation at Learning-Environment-Design-Academy.
To use deep reinforcement learning models you have to use unityagents 0.4. And in version 0.4, how are you invoking mlagents-learn? It supports different commands.
I want to integrate MATLAB and TensorFlow. I can run TensorFlow natively in Python, but I am required to use MATLAB for image processing. Can someone please help me out with this?
could this work?
A MATLAB implementation of the TensorFlow Neural Networks Playground.
https://github.com/StackOverflowMATLABchat/NeuralNetPlayground
I once used a MEX function for inference via the C++ API of TensorFlow. That's pretty straightforward, though I had to statically link the required TensorFlow libs built from source.
Similar to #HansQ's suggestion, I am currently writing a somewhat extensive wrapper called tensorflow.m for Matlab, see its repo on GitHub. I cannot provide a stable release yet, but simple functionality like importing a frozen graph and running an inference is already possible (see the examples).
I'd be glad if the package can be of use for someone looking for similar solutions; even more so, in case you extend/implement something and open a PR.
I know that TensorFlow is built on a C++ engine, but I haven't found any C++ source code in my installation directory (I installed via pip). Inspecting the Python code, I get the sense that the Python level is just a wrapper and the essence of the algorithms is not there. For example, in tensorflow/python/ops/gradients.py, the gradients() function calls python_grad_func() to compute the gradients, which is a class method of DeFun.
My question is: are all the essential parts of TensorFlow written in C++, with the Python layer serving only as an API?
This is mostly correct, though there's a lot of sophisticated stuff implemented in Python. Instead of saying "algorithms" in C++, what I'd say is that the core dataflow execution engine and most of the ops (e.g., matmul, etc.) are in C++. A lot of the plumbing, as well as some functionality like defining gradients of functions, is in Python.
For more information and discussion about why it's this way, see this StackOverflow answer.
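The layering described above (a thin Python API over compiled kernels) is easy to see in miniature. This is not TensorFlow code, just a toy sketch of the same wrapper pattern: the Python function is pure plumbing, while the actual computation runs in compiled C code (here libm's sqrt, standing in for a C++ op kernel):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the "libm.so.6" fallback is an
# assumption that holds on common Linux systems.
_libm_path = ctypes.util.find_library("m") or "libm.so.6"
_libm = ctypes.CDLL(_libm_path)
_libm.sqrt.restype = ctypes.c_double
_libm.sqrt.argtypes = [ctypes.c_double]

def c_sqrt(x: float) -> float:
    """Python 'op' that delegates straight to a compiled kernel."""
    return _libm.sqrt(x)
```

In TensorFlow the split is the same in spirit: Python builds and wires up the graph, and registered C++ kernels do the numeric work when the session runs.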
I am using an open-source Matlab toolbox for brain-computer interface (BCI). I want to send the brain imaging data over to Tensorflow for classification and get the results back to Matlab. Is there any way to pass data structures from Matlab to Tensorflow and get the results back into Matlab?
In case someone lands here with a similar question, I'd like to suggest a Matlab package I am currently writing. It's called tensorflow.m and it's available on GitHub. There's no stable release yet, but simple functionality like importing a frozen graph and running an inference is already possible (see the examples); that is all you'd need to classify the images entirely within Matlab.
The advantage is that you don't need any expensive toolbox nor a Python/Tensorflow installation on your machine. The Python interface of Matlab also seems to be rather adventurous, while tensorflow.m is pure Matlab/C.
I'd be glad if the package can be of use for someone looking for similar solutions; even more so, in case you extend/implement something and open a PR.
So far the best way I have found is to run your Python module from Matlab through Matlab's now built-in mechanism for calling Python:
I wrote my Python script in a .py file, imported tensorflow in there, and used it in different functions. You can then return the results to Matlab by calling
results = py.myModule.myFunction(arg1,arg2,...,argN)
More detailed instructions for calling user-defined Python modules from Matlab can be found at the following link:
http://www.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html
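For completeness, the Python side of such a module is just an ordinary .py file on Matlab's Python path. A minimal sketch, where myModule/myFunction are the placeholder names from the call above and the doubling body is purely illustrative (a real module would import tensorflow and run the model there):

```python
# myModule.py -- hypothetical Python side of the Matlab call
#   results = py.myModule.myFunction(arg1, arg2, ..., argN)
# Matlab passes numeric scalars in as Python floats and hands the
# returned list back as a py.list, which can be converted with cell()
# on the Matlab side.

def myFunction(*args):
    """Toy stand-in for the real inference entry point."""
    # Placeholder computation so the Matlab <-> Python round trip
    # can be exercised end to end before wiring in TensorFlow.
    return [float(a) * 2.0 for a in args]
```

Keeping the entry point a plain function with scalar-friendly inputs and a list output avoids most of the type-conversion friction between Matlab and Python.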