Unity ML-Agents Running Very Slowly - python

Using the Python API, whether I run with a build or in the Editor, the simulation is much slower than when using the provided mlagents-learn workflow. I'm running something similar to this, using a PyTorch implementation of DDPG and CUDA 9.0. Is this expected behaviour?

Go to the Academy's properties and make sure that the Time Scale value in the Inference Configuration equals the one in the Training Configuration.
For more information, check the official ML-Agents documentation at Learning-Environment-Design-Academy.
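As far as I know, the 0.4-era Python API offers no way to change the time scale from Python, but in newer ML-Agents releases (the mlagents_envs package with side channels) you can set it when connecting. A minimal sketch, assuming a recent mlagents_envs and a build named "MyBuild" (both illustrative, not from the question):

    from mlagents_envs.environment import UnityEnvironment
    from mlagents_envs.side_channel.engine_configuration_channel import (
        EngineConfigurationChannel,
    )

    # The engine-configuration side channel lets Python override Time Scale
    channel = EngineConfigurationChannel()
    env = UnityEnvironment(file_name="MyBuild", side_channels=[channel])
    channel.set_configuration_parameters(time_scale=20.0)  # run at 20x real time
    env.reset()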

To use your own deep reinforcement learning models you have to use unityagents 0.4. But if you are on version 0.4, how are you using mlagents-learn? It supports different commands.

Related

PyStan or PyMC3 to take advantage of a machine learning desktop?

If I want to take maximum advantage of the computing power of my 4 GPU machine learning desktop, should I use PyStan or PyMC3 for probabilistic programming in Python?
Honestly, GPU support for both is rather shoddy. Both are works in progress. I'd look elsewhere.
If you're really trying to create a performance-optimized model, my recommendation would be to use TensorFlow Probability, since it has first-class GPU support and gives fairly fine-grained control over execution.
There's also NumPyro, which I haven't used personally but looks promising.
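To make the TensorFlow Probability suggestion concrete, here is a minimal sketch of a posterior sampled with HMC; the model, data, and sampler settings are invented for illustration. The sampler's ops run on a GPU automatically if TensorFlow can see one:

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # Toy data, assumed drawn from Normal(mu, 1) with unknown mu
    observed = tf.constant([1.2, 0.8, 1.5, 0.9])

    def target_log_prob(mu):
        prior = tfd.Normal(0., 10.).log_prob(mu)
        likelihood = tf.reduce_sum(tfd.Normal(mu, 1.).log_prob(observed))
        return prior + likelihood

    kernel = tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob,
        step_size=0.1,
        num_leapfrog_steps=3)

    # Wrapping the call below in tf.function speeds it up considerably
    samples = tfp.mcmc.sample_chain(
        num_results=1000,
        num_burnin_steps=500,
        current_state=tf.constant(0.),
        kernel=kernel,
        trace_fn=None)

    print("posterior mean estimate:", tf.reduce_mean(samples).numpy())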

Tensorflow C++ API is slow compared to Python

I am a Tensorflow enthusiast and I am trying to export a model (developed in Python, then frozen and optimized with the Tensorflow tools) for use (inference only) within a C++ project.
What I have experienced is that, even after following all the prescriptions found in issues opened by other users, the C++ executable I obtain after compiling the source is slower in the inference operation (I mean session->run) by a factor of 10 compared to the same operation in Python inference code.
I am aware of different issues opened on this topic. Following those I built the C++ project using the following command:
    bazel build -c opt --copt=-mfma --copt=-mfpmath=both //tensorflow/project:project
I also tried using the same batch size for the inference tensor as was used for training, but I still see the same factor-of-10 slowdown in the session->run operation.
I am aware that, in principle, the C++ implementation should be faster than the Python one (if only because Python is higher-level than C++), so this effect seems counterintuitive to me. My question is whether I am doing something wrong or whether this is just a characteristic of Tensorflow.
Another question: googling around the web, I found claims that freezing a graph slows down the inference process (I might be wrong on that), but I couldn't figure out an alternative way of loading a graph in C++ code other than the frozen one (in any case, freezing the graph or not seemed to have no effect on Python's performance). Perhaps somebody could also explain whether other options are available at the moment.
Thank you very much in advance for all your kind suggestions, and thank you for the outstanding job with Tensorflow.
I figured out that the problem is related to the frozen graph. I was using the checkpoint-saved model in the Python case while using the frozen one in the C++ code (my mistake, sorry). It seems that freezing the graph dramatically slows down the inference process: after switching the Python inference code to the frozen model as well, it needs the same inference time as the C++ one.
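For anyone who wants to reproduce the comparison on the Python side, here is a minimal timing sketch for a frozen graph, assuming the TF 1.x graph-mode API that the question uses; the file name, tensor names, and input shape are placeholders:

    import time
    import numpy as np
    import tensorflow as tf

    # Load the frozen GraphDef from disk
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")

    input_t = graph.get_tensor_by_name("input:0")    # adjust to your model
    output_t = graph.get_tensor_by_name("output:0")  # adjust to your model
    batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy input

    with tf.Session(graph=graph) as sess:
        sess.run(output_t, feed_dict={input_t: batch})   # warm-up run
        start = time.time()
        sess.run(output_t, feed_dict={input_t: batch})
        print("frozen-graph inference: %.4f s" % (time.time() - start))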

What is the best way to save sklearn model?

I am working on a Python desktop app that does some predictions. Right now I train my sklearn model with a Python script and save the model's parameters as a dictionary in a YAML file, which is bundled into the app. When the app runs, the model is recreated from the parameters in that dictionary. I realized that people who have a different version of sklearn get an error. I tried saving my model in a pickle file instead, but that produced warnings when the app ran on a machine with a different sklearn version.
There is no guarantee that a given sklearn model is compatible between versions of sklearn; the implementation or the internal API may change between releases. See the sklearn documentation on model persistence for more information.
If you stay on one version, the best way is indeed to pickle the model, not to save its parameters in a YAML file. It's even better to use joblib to do so, as in the sketch below.
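A minimal sketch of the joblib approach; the toy model and file name are just for illustration:

    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Persist the fitted model; record the sklearn version alongside it
    joblib.dump(model, "model.joblib")

    # Later, in the app (ideally with the same sklearn version installed)
    restored = joblib.load("model.joblib")
    print(restored.predict(X[:5]))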
I realized, that people who have a different version of sklearn get an error.
In this case, create isolated Python environments (e.g. with virtualenv) and pin the sklearn version the model was trained with.
Alternatively, you can generate plain Python code from a trained model. This way you eliminate any possibility of object incompatibility. Here is a tool that can help with that: https://github.com/BayesWitnesses/m2cgen
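A minimal sketch of what using m2cgen looks like; the toy model is invented for the example, and you should check the repo for the list of supported estimators:

    import numpy as np
    import m2cgen as m2c
    from sklearn.linear_model import LinearRegression

    X = np.random.rand(20, 3)
    y = X @ np.array([1.0, 2.0, 3.0])
    model = LinearRegression().fit(X, y)

    # Generate plain Python source with no sklearn dependency at runtime
    code = m2c.export_to_python(model)
    print(code)  # ship this generated source instead of a pickled object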

How will I integrate MATLAB to TensorFlow?

I want to integrate MATLAB and TensorFlow. I can run TensorFlow natively in Python, but I am required to use MATLAB for image processing. Can someone please help me out with this one?
Could this work? It's a MATLAB implementation of the TensorFlow Neural Networks Playground:
https://github.com/StackOverflowMATLABchat/NeuralNetPlayground
I once used a mex function for inference via the C++ API of TensorFlow. That's pretty straightforward. I had to statically link the required TensorFlow libs, built from source, though.
Similar to @HansQ's suggestion, I am currently writing a somewhat extensive wrapper called tensorflow.m for Matlab; see its repo on GitHub. I cannot provide a stable release yet, but simple functionality like importing a frozen graph and running an inference is already possible (see the examples).
I'd be glad if the package can be of use for someone looking for similar solutions; even more so, in case you extend/implement something and open a PR.

Connecting Matlab to Tensorflow

I am using an open-source Matlab toolbox for brain-computer interface (BCI). I want to send the brain imaging data over to Tensorflow for classification and get the results back to Matlab. Is there any way to pass data structures from Matlab to Tensorflow and get the results back into Matlab?
In case someone lands here with a similar question, I'd like to suggest a Matlab package I am currently writing. It's called tensorflow.m and it's available on GitHub. There's no stable release yet, but simple functionality like importing a frozen graph and running an inference is already possible (see the examples); that is all you'd need to classify the images entirely within Matlab.
The advantage is that you don't need any expensive toolbox nor a Python/Tensorflow installation on your machine. Matlab's Python interface also seems rather adventurous, while tensorflow.m is pure Matlab/C.
I'd be glad if the package can be of use for someone looking for similar solutions; even more so, in case you extend/implement something and open a PR.
So far the best way I found is to run your Python module from Matlab through Matlab's now built-in mechanism for calling Python:
I wrote my Python code in a .py file, imported tensorflow there, and used it in different functions. You can then return the results to Matlab by calling
    results = py.myModule.myFunction(arg1,arg2,...,argN)
More detailed instructions for calling user-defined Python modules from Matlab can be found at the following link:
http://www.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html
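For completeness, a hypothetical sketch of what the Python side of this could look like; myModule and myFunction match the call above, while the Keras model and the argument handling are purely illustrative (how Matlab converts arrays to Python types varies by Matlab version):

    # myModule.py
    import numpy as np
    import tensorflow as tf

    def myFunction(data, model_path):
        # `data` arrives from Matlab; pass it as a list of numbers so that
        # np.asarray can consume it directly
        x = np.asarray(data, dtype=np.float32).reshape(1, -1)
        model = tf.keras.models.load_model(str(model_path))  # assumes a Keras model
        preds = model.predict(x)
        return preds.tolist()  # plain lists convert cleanly back to Matlab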
