Possible Duplicate:
Call python function from MATLAB
Is there a way to call functions in a Python script from MATLAB, and if so, how?
You're trying to do something non-trivial. Matlab has no built-in binding for Python scripts, and I don't know of a really good substitute either. You can choose between two approaches; maybe one can help you a bit (though both are non-trivial, just like your task):
Convert to Python
Nowadays, Python is a valid substitute for most use cases of Matlab. There are libraries for numerical calculations, scientific routines, a great plotting library, and a fantastic community that will try to help you with all your problems. And it's free.
If you only have a couple of Matlab scripts running, or you have just started using Matlab, I'd recommend considering a change of platform. To convert your existing Matlab scripts, you could try the Open Matlab-Python-Converter.
Create a webservice using Python and access it from Matlab
The cleanest way to execute Python code from Matlab is to set up a webservice with Python and call this webservice from Matlab. This sounds complicated, but it isn't. For small projects, I'd recommend XML-RPC. You can look at my other post for an example of a small XML-RPC server. If your methods already exist, it's easy to adapt them to your needs: just import those methods and offer them as webservice calls.
On the Matlab side, you must rely on third-party tools for connecting to an XML-RPC server, e.g., the Apache XML-RPC jars.
It might become complicated when you want to pass in variables other than primitives, e.g., arrays. I have no experience with how well this works.
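As a rough sketch of the server side (not from the original linked post; the `add` method is a placeholder for your own functions), Python's standard library is enough to stand up such a service. The server is started on a background thread here only so the example can also demonstrate a client round trip in one file:

```python
# Minimal XML-RPC service sketch using only the standard library.
# add() is a placeholder method; in practice, register your own functions.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b

# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# From Matlab you would issue the same call through an XML-RPC client
# library (e.g. the Apache jars); here Python's own client shows the round trip.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)  # -> 5
server.shutdown()
```

XML-RPC handles primitives and flat lists well, which matches the caveat above about arrays and richer types.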
Related
General Python question:
I have built a script using the numpy and pandas libraries. I have now been told that I cannot use any libraries, only base Python, because apparently open-source libraries are not approved.
Does this restriction make sense? Isn't base Python just as open source as the pandas/numpy libraries?
Is it possible to convert pandas/numpy code to base Python? Does this sound like a simple exercise, or does it require learning a lot of new functions? The majority of the code reads tables, uses if/then-type statements, and looks up values from other tables to generate and populate new tables.
I'm only going to address the second point. Reimplementing all of numpy/pandas would certainly be a very large and useless task. But you're not reimplementing all of it, you only need some parts, and if it's only a few functions, then it's certainly possible.
I'd start from a working script, replace arrays with Python lists, and implement the needed functions one by one. For SO specifically, I suspect you're better off asking specific questions, e.g. how to implement an analog of function X in pure Python.
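As a rough illustration (the table names and fields below are invented, not from the question), the table-lookup pattern described above often maps to plain dicts and list comprehensions:

```python
# Hypothetical example: replacing a pandas merge/lookup with base Python.
# 'rates' and 'orders' stand in for tables that pandas would have read.
rates = [{"code": "A", "rate": 2}, {"code": "B", "rate": 3}]
orders = [{"code": "A", "amount": 100}, {"code": "B", "amount": 50}]

# Index the lookup table by its key, then build the new table row by row.
rate_by_code = {row["code"]: row["rate"] for row in rates}
fees = [
    {"code": o["code"], "fee": o["amount"] * rate_by_code[o["code"]]}
    for o in orders
]
# fees == [{'code': 'A', 'fee': 200}, {'code': 'B', 'fee': 150}]
```

For row-oriented if/then logic like the question describes, this style covers most of it; what you lose is vectorized performance, not expressiveness.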
The standard way to test VHDL code logic is to write a test bench in VHDL and use a simulator like ModelSim, which I have done numerous times.
I have heard that instead of writing test benches in VHDL, engineers are now using Python to test their VHDL code.
Questions:
How is this done?
Is this done by writing a test bench in Python and then compiling this Python file or linking into Modelsim?
Is this done in Python using a module like myHDL and then linking/importing your VHDL file into Python? If so, how is the timing diagram generated?
When writing a test bench in Python can you use standard Python coding/modules or just a module like myHDL?
For example if I want to test a TCP/IP stack in VHDL, can I use the socket module in Python to do this (i.e. import socket)?
Is there a reference, paper, or tutorial that shows how to do this? I've checked the Xilinx, Altera, and Modelsim websites but could not find anything.
The only things I find online about using Python for FPGAs are a few packages, with myHDL being the most referenced.
Why Python is good for this
Python is an efficient language for writing reference models in, or for processing and verifying the output files of your testbench. Many things take much less time to program in Python than in a VHDL testbench.
One key feature of Python that makes it so suitable is its automatic use of big integers. If your VHDL uses bit widths greater than 64, interfacing with that in other languages can be tedious. Also, when verifying arithmetic functionality, Python's Decimal package, for example, is a convenient tool for building high-precision references.
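A small illustration of both points (the 96-bit width is just an arbitrary example):

```python
# Python ints are arbitrary precision, so a >64-bit HDL vector is just an int.
from decimal import Decimal, getcontext

wide = (1 << 96) - 1           # a 96-bit all-ones vector, no special library
assert wide.bit_length() == 96

# Decimal gives a configurable-precision reference for arithmetic checks.
getcontext().prec = 50          # 50 significant digits
x = Decimal(1) / Decimal(7)     # 0.142857142857... to 50 digits
```

The same computation against a 64-bit-limited language would need a bignum library on the checker side.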
How can this be done?
This is an example that does not rely on any other software tools.
A straightforward practical way to interface Python is via input and output files. You can read and write files in your VHDL testbench as it is running in your simulator. Usually each line in the file represents a transaction.
Depending on your needs, you can either have the VHDL testbench compare your design outputs to a reference, or read the output file back into a Python program. If using files is too slow, at least on Linux it's easy to set up a pipe with mkfifo that you can use as if it were a file, with your Python program and the simulator reading from and writing to it in sync.
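The Python side of this file-based handshake can be sketched as follows. The file names, the one-transaction-per-line format, and the adder DUT are all assumptions for illustration:

```python
# Hypothetical sketch of the file-based approach: Python writes a stimulus
# file, the simulator (run separately) produces a results file, and Python
# checks it against a reference model.
import random

def write_stimulus(path, pairs):
    """Write one 'a b' operand pair per line for the VHDL testbench to read."""
    with open(path, "w") as f:
        for a, b in pairs:
            f.write(f"{a} {b}\n")

def check_results(path, pairs):
    """Compare the DUT's output file against a Python reference model."""
    expected = [a + b for a, b in pairs]  # reference model for an adder DUT
    with open(path) as f:
        actual = [int(line) for line in f]
    return actual == expected

pairs = [(random.randint(0, 255), random.randint(0, 255)) for _ in range(10)]
write_stimulus("stimulus.txt", pairs)
# ...invoke the simulator here (e.g. via subprocess), then:
# ok = check_results("results.txt", pairs)
```

Swapping the paths for mkfifo pipes, as suggested above, needs no changes to this code on the Python side.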
Using standard Python modules
With the method I described, you'll have to find a way to separate whatever that module is doing into a stream of data or raw transactions. Unfortunately for your example of a TCP/IP stack, I see no straightforward way to do this and it would probably require significant clarification to discuss further.
More information
Several useful things are linked to in the comments above, especially the link posted by user1155120: https://electronics.stackexchange.com/questions/106784/can-you-interface-a-modelsim-testbench-with-an-external-stimuli
Maybe it is better to do the whole design in some other language like SystemC/myHDL/HWToolkit, test it, and only then export it to your target language.
In order to simulate HDL and drive it from regular Python code, you have to use a framework like cocotb, PyCoRAM, or others. Still, it gets complicated really fast.
One of the best approaches I have found is to use a simulator plus agents with queues/abstract memory, and in your code to work only with these queues, not with the simulator or the model itself. It makes writing testbenches/verification simple.
In the Python hardware world, only HWToolkit currently uses this approach:
tx_rx_test.py
You could write I/O from Python to text files and use your traditional file-I/O VHDL methods to stimulate the DUT.
But you are better off performing verification in a native environment using a language that was developed for functional verification. Using SystemVerilog and the ModelSim superset packaged as "Questa Prime" (if you have access to it) is much better.
Assuming you want your Python file(s) to be part of your real-time simulation, in terms of transfer functions and/or stimulus generation, they will need to be compiled into shared-object files. Everything else is post-processing and is not simulator dependent, e.g. writing output files for later comparison. Non-native simulations using shared-object files require access via VPI/VLI/FLI/PLI/DPI (depending on what language/tool you are using), which introduces the overhead of passing the kernel back and forth across the interface, i.e. stop sim, work externally, run sim, stop sim, etc.
SystemVerilog (IEEE 1800) was developed for IC designers, by IC designers, to address functional verification. While I am all for reuse in terms of .so files from system-engineering simulations, if you are writing the testbench (which you shouldn't write your own TB, but that is another discussion), you would be better off using SV, given all the built-in functions for constrained stimulus generation, functional coverage, and reporting via UCDB.
You should have a reference manual in your ModelSim install describing the use of the programming interface. You can learn more here: https://verificationacademy.com/ but it is highly SV-centric. Unfortunately, VHDL is not a verification language; even with the additions made to be more like SV, it is still better categorized as a modeling language.
Check out PyVHDL. It integrates Python and VHDL with an Eclipse GUI with editing, simulation and VCD generation & display.
GUI Image: https://i.stack.imgur.com/BvQWN.jpg
I am new to object-oriented programming and I am struggling to find a good tutorial on how to make a library in C++ that I can import into Python.
At the moment I am just trying to make a simple example that adds two numbers. I am confused about the process. Essentially I want to be able to do something like this in Python:
import MyCPPcode
MyCPPcode.Add(5,3) #function prints 5+3=8
I am not requesting a full example with code, just the steps that I need to take.
Do I need to make a .dll or a static library? I am using MS Visual Studio 2013.
Also, does the process tailor the C++ library code for Python in any way or will this library be available for other languages as well?
While I cannot guide you through the whole process, because I do not know Python too well, here is what I know:
It is absolutely possible, though not something for someone who is new to object-oriented programming: it's called the Python C/C++ API. If you search for it in the Python documentation, there are several chapters about it.
While the example function you're showing might look like that from Python, the process is a lot more verbose on the C++ side (behind the scenes). There are tools that address this, for example Cython, but if you want to learn, I'd suggest going with the pure Python API.
As for availability to other languages... Well, the internal functions (i.e. adding two numbers) are of course general C++, so you can reuse them in other projects; but yes, the created library will be made to work with Python, and not with something else.
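A side note not in the original answer: if the DLL exports plain extern "C" functions, the ctypes module can call it without any Python-API glue, and the same DLL stays usable from other languages. The snippet below loads the system C library only to demonstrate the mechanism; for your own MyCPPcode you would compile a .dll/.so and load that instead:

```python
# Sketch of the ctypes alternative: load a C shared library and call into it.
# We use the C runtime's abs() here purely as a stand-in for your own export.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.argtypes = [ctypes.c_int]  # declare the C signature explicitly
libc.abs.restype = ctypes.c_int
result = libc.abs(-8)  # -> 8
```

The trade-off versus the Python C API is that ctypes does the marshalling at call time, so it is simpler to set up but slower per call.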
I have a platform for Python scripting, and I would like to call Matlab functions from it. I have found several threads tackling the issue, among them these two:
How do I interact with MATLAB from Python?
Running m-files from Python
However, the threads are either not recent or not very detailed.
Is mlabwrap reliable?
What would you advocate as a solution for calling Matlab functions / .m files from a Python script?
Using win32com from Python to call a Matlab session: is this a good idea? Could you point to more documentation or examples on this topic?
It looks like the SourceForge link is not up to date; the last update was in 2010:
http://sourceforge.net/projects/mlabwrap/
Could you hint at a more recent version?
Thanks
I would still recommend mlabwrap as the solution for this.
I use mlabwrap on a regular (weekly?) basis, on both Linux and Windows, across several different versions of Python, and several different versions of Matlab. To answer your specific questions:
mlabwrap will reliably perform across platforms, Python, and Matlab versions. It does however have limitations, and it will reliably fail when pushed past those limitations. Usually, these can be worked around.
See my answer here for more information on calling Matlab functions vs. Matlab scripts through mlabwrap. This answer also describes how to workaround one of the primary limitations of mlabwrap, which is that not all Matlab objects can be directly converted into Python objects.
I don't know anything about calling Matlab using win32com.
I have used mlabwrap in what I'll term a 'Python-primary' style, where the majority of the programming is in Python, using Matlab as a library for specific mathematical functions that aren't available in scipy/numpy, and in a 'Matlab-primary' style, where the majority of the programming is in Matlab and the final results are imported into Python for use in some external process.
For Python-primary, the thing to keep in mind is that not all Matlab functions will return Python-readable data. mlabwrap will return a MLabObjectProxy object from these functions. These commonly occur when you use Matlab functions to create objects that are passed into other Matlab functions to actually process the data. For example, you can use the digital signal processing toolbox to create a Welch spectrum object, which you can then use to get the power spectrum of your data. Theoretically, you can pass these MLabObjectProxies into Matlab functions that require them. In my experience, the more you pass them back and forth, the more likely you are to find a bug in mlabwrap. What you can do instead is write a simple Matlab wrapper function that obtains the object, processes the data, and then returns appropriate output as an array.
You can also get around problems with the MLabObjectProxies by using the low-level commands in mlabwrap. For example, if I have a matlab_struct that is a struct array with field matlab_struct.labels, and I only want the labels on the Python side, I can do the following:
# place matlab_struct into the Matlab workspace
mlab._set('matlab_struct', matlab_struct)
# convert the labels into a cell array
matlab_struct_labels = mlab.eval('{matlab_struct.labels}')
The main low-level commands available are mlab._set('variable_name', variable), mlab.eval('command string'), and mlab._get('variable_name').
If I'm doing a lot of heavy-duty processing in Matlab, say in a toolbox or plugin that isn't available elsewhere, I'll write what I call 'Matlab-primary' code, where I try to avoid passing data back and forth through mlabwrap, instead manipulating variables in the Matlab workspace by calling .m scripts, saving the resulting output to a data file, and importing that into my Python code.
Good luck!
mlabwrapper
is a great solution for Python <-> MATLAB bridging. If something is not working for you, just report concrete problems on SO :)
Note that mlabwrap as a project has been around for quite a while: http://mlabwrap.sourceforge.net/
I had problems with mlabwrap.cpp recently, for which I found the following GitHub fork: a copy of mlabwrap v1.1-pre (http://mlabwrap.sourceforge.net/) patched as described in http://sourceforge.net/mailarchive/message.php?msg_id=27312822, fixing the error:

mlabraw.cpp:225: error: invalid conversion from ‘const mwSize*’ to ‘const int*’

Also note that on Ubuntu you need to sudo apt-get install csh. For details, see http://github.com/aweinstein/mlabwrap
mlab
After spending more time, I made a github mirror to update, bugfix and maintain the wrapper https://github.com/ewiger/mlab (patches and pull-requests are welcomed!)
It can be pip installed, i.e.
pip install mlab
I have excluded the cpp implementation for now. Currently it works as follows:
On Linux/Mac, the library creates a pipe connection with a MATLAB instance. The rest is serialization (partially pointed out by @brentlance), which is done using numpy.
On Windows, the library uses DCOM to communicate. (But I still need to fix version lookup via the registry.)
When to use mlab?
I would recommend calling very high-level user functions in MATLAB (mostly returning logical results or very standard built-in types such as matrices) to minimize any communication with MATLAB. This approach is perfect for legacy code, but it might require writing some wrapper interfaces to simplify function declarations.
Overall, the code is a bit cumbersome and is a patchwork from many contributors. The core part (now matlabpipe and matlabcom) seems to do the job pretty well. Ultimately, I would not recommend mlab for a full-scale production application unless you are willing to spend time testing, bug-reporting, bug-fixing, and feature-requesting for all of your use cases.
To answer your questions:
We used mlabwrap, and it was reliable. The only difficulty was compiling it; we had a few issues with that. However, once compiled, it runs well.
I would recommend matlab_wrapper, because it's much easier to install (pure Python, no compilation), supports various data structures (numeric, logical, struct, cell arrays) and runs on GNU/Linux, Windows, OSX.
Using win32com to communicate with MATLAB should work, but it's quite low-level and not portable.
Disclaimer: I'm the author of matlab_wrapper.
Since MATLAB R2014b, there is now a MATLAB Engine API for Python. See the MATLAB documentation for more information. It does, however, require starting the MATLAB engine, which can take some time.
This question already has answers here:
How do Rpy2, pyrserve and PypeR compare?
I am quite new to R and pretty much used to Python, so I am not comfortable writing R code. I am looking for a Python interface to R that lets me use R packages in a Pythonic way.
I have done some Google research and found a few packages that can do that:
Rpy2
PypeR
pyRserve
But I'm not sure which one is better. Which has more contributors and is more actively used?
Please note that my main requirement is a Pythonic way of accessing R packages.
As pointed out by @lgautier, there is already another answer on this subject. I leave my answer here as it adds the experience of approaching R as a novice who knew Python first.
I use both Python and R and sympathise with your need as a newcomer to R.
Since any answer you get will be subjective, I summarise a few points from my experience:
I use rpy2 as my interface and find it is 'Pythonic', stable, predictable, and effective enough for my needs. I have not used the other packages so this is not a comment on them, rather on the merits of rpy2 itself.
BUT do not expect that there will be an easy way of using R in Python without learning both. I find that adding an interface between the two languages makes coding easy when you know both, but a nightmare to debug for someone who is deficient in one of the languages.
My advice:
For most applications, Python has packages that allow you to do most of the things that you want to do in R, from data wrangling to plotting. Check out SciPy, NumPy, pandas, BioPython, matplotlib and other scientific packages, or even the full Anaconda or Enthought python distributions. This allows you to stay within the Python environment and provides you most of the power that you need.
At the same time, you will want R's vast range of specialised packages, so spend some time learning it in an interactive environment. I found it almost impossible to master even basic R on the command line, but RStudio and the tutorials at Quick-R and Learn-R got me going very fast.
Once you know both, then you will do magic with rpy2 without the horrors of cross-language debugging.
New Resources
Update on 29 Jan 2015
This answer has proved popular and so I thought it would be useful to point out two more recent resources:
Ralph Heinkel gave a great talk on this subject at EuroPython 2014. The video on Combining the powerful worlds of Python and R is available on the EuroPython YouTube channel. Quoting him:
The triplet R, Rserve, and pyRserve allows the building up of a network bridge from Python to R: Now R-functions can be called from Python as if they were implemented in Python, and even complete R scripts can be executed through this connection.
It is now possible to combine R and Python using rmagic in IPython/Jupyter greatly easing the work of producing reproducible research and notebooks that combine both languages.
A question about comparing rpy2, pyrserve, and pyper with each other was answered on the site earlier.
Regarding the number of contributors, I'd say that all three have a relatively small number. A site like Ohloh can give a more detailed answer.
How actively a package is used is tricky to determine. One indication might be the number of downloads; others might be the number of posts on mailing lists, the number of questions on a site like Stack Overflow, the number of other packages using or citing it, or the number of CVs or job openings mentioning the package. As much as I believe that I could give a fair evaluation, I might also be seen as having a conflict of interest. ;-)
All three have their pros and cons. I'd say you should base your choice on that.
My personal experience has been with Rpy, not Rpy2. I used it for a while, but dropped it in favor of using system commands. A typical case for me was running a FORTRAN model using Python scripts and post-processing with R. In my experience, the easiest solution was to create a command-line tool using R, which is quite straightforward (at least under Linux). The command-line tool could be executed in the root of the model run, and the script would produce a set of R objects and plots in an Routput directory. The advantage of disconnecting R and Python in this way was that I could easily debug the R code separately from the Python code.
I think Rpy really shines when a lot of back and forth communication between R and Python is needed. But if the functionality is nicely separable, and the overhead of disk i/o is not too bad, I would stick to system calls. See ?system for more information regarding system calls, and Rscript for running R scripts as a command line tool.
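The Python side of this system-call approach can be sketched as below. The script name and arguments are placeholders, and Rscript is assumed to be on PATH; the command construction is separated out so it can be inspected without R installed:

```python
# Sketch: drive an R script as a command-line tool from Python, instead of
# bridging through Rpy. Assumes 'Rscript' is on PATH.
import subprocess

def r_command(script, *args):
    """Build the Rscript invocation as an argument list."""
    return ["Rscript", script, *map(str, args)]

def run_r_script(script, *args):
    """Run the R script and return its stdout; raises on a non-zero exit."""
    proc = subprocess.run(r_command(script, *args),
                          capture_output=True, text=True, check=True)
    return proc.stdout
```

The R script reads its inputs via commandArgs() and writes results to files or stdout, keeping the two languages fully decoupled, as the answer above recommends.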
Regarding your wish to write R code in a Python way: this is not possible, as all the solutions require you to write R code in R syntax. For Rpy this still means R syntax, just slightly altered (no '.' in names, for example). I agree with @gauden that there is no shortcut to using R through Rpy.