I want to integrate MATLAB and TensorFlow. Although I can run TensorFlow natively in Python, I am required to use MATLAB for image processing. Can someone please help me out with this?
Could this work?
A MATLAB implementation of the TensorFlow Neural Networks Playground:
https://github.com/StackOverflowMATLABchat/NeuralNetPlayground
I once used a mex function for inference via the C++ API of TensorFlow. That's pretty straightforward. I had to link the required TensorFlow libs statically, built from source, though.
Similar to @HansQ's suggestion, I am currently writing a somewhat extensive wrapper for Matlab called tensorflow.m; see its repo on GitHub. I cannot provide a stable release yet, but simple functionality like importing a frozen graph and running inference is already possible (see the examples).
I'd be glad if the package is of use to someone looking for a similar solution; even more so if you extend/implement something and open a PR.
I am a Tensorflow enthusiast, and I am trying to export a model (developed in Python, then frozen and optimized with the Tensorflow tools) for use (inference only) within a C++ project.
What I have experienced is that, even after following all the prescriptions found in issues already opened by other users, the C++ executable I obtain after compiling the source is about 10x slower at inference (I mean session->run) than the same operation in Python inference code.
I am aware of different issues opened on this topic. Following those I built the C++ project using the following command:
bazel build -c opt --copt=-mfma --copt=-mfpmath=both //tensorflow/project:project
I also tried using the same batch size for the inference tensor as was used for training, but I still see the same 10x slowdown in the session->run operation.
I am aware that, in principle, the C++ implementation should be faster than the Python one (if only because Python sits at a higher level than C++), so this effect strikes me as counterintuitive. My question is whether I am doing something wrong or whether this is simply how Tensorflow behaves.
Another question: googling around the web, I found claims that freezing a graph slows down the inference process (I might be wrong about that), but I couldn't figure out an alternative way of loading a graph in C++ code other than the frozen one (in any case, freezing the graph or not has no effect on Python's performance). Perhaps somebody could also explain whether other options are available at the moment.
Thank you very much in advance for all your kind suggestions, and thank you for the outstanding job with Tensorflow.
I figured out that the problem is related to the frozen graph. In Python I was using the checkpoint-saved model, while the C++ code used the frozen one (my mistake, sorry). It seems that freezing the graph dramatically slows down the inference process: after switching the Python inference code to the frozen model as well, it needs the same inference time as the C++ code.
I am currently working on a Raspberry Pi project in which I am trying to compute an FFT on some numpy arrays containing measurement data. I would like to do this on the GPU to free up resources on the CPU. I found the GPU_FFT library by Andrew Holme, which apparently allows exactly that:
http://www.aholme.co.uk/GPU_FFT/Main.htm
However, I do not know how exactly to use this library (I have read the included instructions), as I have no knowledge of the interaction between Python and C (I have never used C before). Furthermore, I could not find any instructions on how to use GPU_FFT on the internet.
Is there any documentation/explanation that I could not find myself, or can I use other Python libraries like PyFFT?
Have you tried using the Mathematica package on the Raspberry Pi? I haven't, but they advertise that their code is very fast and uses the GPU.
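For a CPU-side point of reference before offloading anything to the GPU, the FFT of a numpy measurement array is already a one-liner with numpy.fft. A minimal sketch (the signal below is made up for illustration; real measurement data would replace it):

```python
import numpy as np

# Hypothetical measurement data: a 1 kHz sine sampled at 8 kHz.
fs = 8000                      # sampling rate in Hz
t = np.arange(1024) / fs       # time axis, 1024 samples
samples = np.sin(2 * np.pi * 1000 * t)

# Real-input FFT and the matching frequency axis, computed on the CPU.
spectrum = np.fft.rfft(samples)
freqs = np.fft.rfftfreq(samples.size, d=1 / fs)

# The dominant bin should sit at the 1 kHz tone.
peak_freq = freqs[np.argmax(np.abs(spectrum))]
```

This is useful as a correctness baseline: whatever GPU_FFT (or PyFFT) returns for the same input should match `spectrum` up to numerical precision.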
I know that Tensorflow is written around a C++ engine, but I haven't found any C++ source code in my installation directory (I installed via pip). When I inspect the Python code, I get the sense that the Python level is just a wrapper and the essence of the algorithms is not present there. For example, in tensorflow/python/ops/gradients.py, the gradients() function calls python_grad_func() to compute the gradients, which is a class method of Defun.
My question is: are all the essential parts of Tensorflow written in C++, with the Python layer only serving as an API?
This is mostly correct, though there's a lot of sophisticated stuff implemented in Python. Instead of saying "algorithms" in C++, what I'd say is that the core dataflow execution engine and most of the ops (e.g., matmul, etc.) are in C++. A lot of the plumbing, as well as some functionality like defining gradients of functions, is in Python.
For more information and discussion about why it's this way, see this StackOverflow answer.
I am using an open-source Matlab toolbox for brain-computer interface (BCI). I want to send the brain imaging data over to Tensorflow for classification and get the results back to Matlab. Is there any way to pass data structures from Matlab to Tensorflow and get the results back into Matlab?
In case someone lands here with a similar question, I'd like to suggest a Matlab package I am currently writing, called tensorflow.m; it's available on GitHub. There's no stable release yet, but simple functionality like importing a frozen graph and running inference is already possible (see the examples) - this is all you'd need to classify the images entirely within Matlab.
The advantage is that you don't need an expensive toolbox or a Python/Tensorflow installation on your machine. Matlab's Python interface also seems rather adventurous, while tensorflow.m is pure Matlab/C.
I'd be glad if the package is of use to someone looking for a similar solution; even more so if you extend/implement something and open a PR.
So far, the best way I have found is to run your Python module in MATLAB through MATLAB's now built-in mechanism for connecting to Python:
I wrote my Python script in a .py file, imported tensorflow in it, and used it in different functions. You can then return the results to MATLAB by calling
results = py.myModule.myFunction(arg1,arg2,...,argN)
More detailed instructions for calling user-defined Python modules in MATLAB can be found at the following link:
http://www.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html
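For illustration, a minimal sketch of what such a .py file could look like. Note that myModule and myFunction are placeholder names, and the body uses a NumPy stand-in where the real module would import tensorflow and run the session:

```python
# myModule.py - importable from MATLAB as py.myModule, assuming the file's
# folder is on MATLAB's Python search path.
import numpy as np

def myFunction(arg1, arg2):
    # In the real module, this is where you would import tensorflow,
    # feed arg1/arg2 into the graph, and call session.run().
    data = np.asarray(arg1, dtype=float) * float(arg2)
    # Return a plain list so MATLAB can convert it easily on its side.
    return data.tolist()
```

From MATLAB, `results = py.myModule.myFunction([1 2 3], 2)` then comes back as a Python list that you can convert to a MATLAB array on the MATLAB side.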
I use Python for a named entity recognition task and I want to use a CRF. I am looking for a Python implementation of CRFs. I've found that three CRF implementations (StanfordNER, CRF++ and CRFsuite) have Python bindings, but only with CRFsuite can I train a model in Python; with CRF++ and StanfordNER, I can only run existing models from Python.
On the other hand, I have had many problems building the CRFsuite bindings for Python on Windows. I found the description on the CRFsuite website incomplete - or maybe I just can't understand it well!
I just looked through similar questions about Python implementations of CRFs and noticed that people mention it is easy to access CRFsuite from Python, like in these two examples:
1- code for CRF implementation in C++ or Matlab for images,
2- Implementation of CRF in python
I would appreciate it if anybody who has managed to build these bindings on Windows could share the instructions with me. Or is there any other suggestion for using CRFs in Python?
Thanks