I work on Ubuntu 10.04 and used Cython to compile my Python code.
I then copied two of my binaries (one using numpy, one without) to another distribution with a supported kernel, etc. The one questionable thing I did is that I used the Python that comes with that distribution (2.6) and copied the numpy libraries over from my Ubuntu machine.
When I run the one without numpy, it works. When I run the one with the 'from numpy import ...' I get an error like: undefined symbol: _PyUnicodeUCS4_IsWhitespace.
I thought numpy was simply compiled for UCS4, whereas the Python version on the new distribution is UCS2. But to my surprise, when I run the same Python code with the numpy import as a plain script (not compiled), it works.
So basically: if I open 'python' and import the numpy libraries, it works and I can use them. But if I use the compiled version, I get that UCS4 error.
Any ideas?
(The new distribution is not really under my control, and I can't just compile whatever I want on it.)
Thanks.
Well, it goes like this:
When the Python interpreter runs and imports the numpy library, it resolves that symbol from the libpython.so that Python itself was compiled with (I guess). That is why it works in the interpreter: the request for that Unicode function doesn't come from numpy but from Python, so it uses the UCS2 functions it was compiled with (probably).
But when the compiled version runs and tries to load that function, it can't find it, because it is looking for a UCS4 version.
I did a small check: grep "_PyUnicode" in libpython on the first distribution and on the second, and there was the difference: one printed UCS4 functions and the other printed UCS2 functions.
So the "easy" solution here, I guess, is to compile a UCS2 build of Python on my first distribution and then have Cython compile against it. I believe that will do the job.
Related
Is it possible to run MATLAB functions from within Python?
I searched the internet and could only find PyMat. The bad thing is that the compiled version only supports Python 2.2, and I am using 2.6. So I downloaded the source code to compile it myself, but I couldn't: VC++ Express seems not to have the necessary functionality to compile it. Does anyone have a compiled version for PC?
Or are there any substitutes for PyMat?
Thanks
I know this is an old question and has been answered. But I was looking for the same thing (for the Mac) and found that there are quite a few options with different methods of interacting with matlab and different levels of maturity. Here's what I found:
pymat
A low level interface to Matlab using the matlab engine (libeng) for communication (basically a library that comes with matlab). The module has to be compiled and linked with libeng.
http://pymat.sourceforge.net
Last updated: 2003
pymat2
A somewhat short lived continuation of the pymat development. Seems to work on windows (including 64bit), linux and mac (with some changes).
https://code.google.com/p/pymat2/
Last updated: 2012
mlabwrap
A high level interface that also comes as a module which needs compilation and linking against libeng. It exposes Matlab functions to python so you can do fun stuff like
mlab.plot(x, y, 'o')
http://mlabwrap.sourceforge.net
Last updated: 2009
mlab
A repackaging effort of mlabwrap. Basically it replaces the c++ code that links against 'libeng' in mlabwrap with a python module (matlabpipe) that communicates with matlab through a pipe. The main advantage of this is that it doesn't need compilation of any kind.
Unfortunately the package currently has a couple of bugs and doesn't seem to work on the mac at all. I reported a few of them but gave up eventually. Also, be prepared for lots of trickery and a bunch of pretty ugly hacks if you have to go into the source code ;-) If this becomes more mature it could be one of the best options.
https://github.com/ewiger/mlab
last update: 2013
pymatlab
A newer package (2010) that also interacts with Matlab through libeng. Unlike the other packages, this one loads the engine library through ctypes, so no compilation is required. It's not without flaws, but it is still being maintained, and the (64-bit Mac specific) issues I found should be easy enough to fix.
(edit 2014-05-20: it seems those issues have already been fixed in the source so things should be fine with 0.2.4)
http://pymatlab.sourceforge.net
last update: 2014
python-matlab-bridge
Also a newer package that is still actively maintained. It communicates with Matlab through some sort of socket. Unfortunately the exposed functions are a bit limited; I couldn't figure out how to invoke a function that takes structs as parameters. It requires zmq, pyzmq and IPython, which are easy enough to install.
http://arokem.github.io/python-matlab-bridge
last update: 2014
Another option is Mlabwrap:
Mlabwrap is a high-level python to Matlab® bridge that lets Matlab look like a normal python library.
It works well with numpy arrays. An example from the home page:
>>> from mlabwrap import mlab; from numpy import *
>>> xx = arange(-2*pi, 2*pi, 0.2)
>>> mlab.surf(subtract.outer(sin(xx),cos(xx)))
PyMat looks like it's been abandoned.
I'm assuming you are on Windows, so you could always take the simplest approach and use Matlab's COM interface:
>>> import win32com.client
>>> h = win32com.client.Dispatch('matlab.application')
>>> h.Execute ("plot([0 18], [7 23])")
>>> h.Execute ("1+1")
u'\nans =\n\n 2\n\n'
More info here
There is a python-matlab bridge which is unique in the sense that Matlab runs in the background, so you don't incur the startup cost each time you call a Matlab function.
https://github.com/jaderberg/python-matlab-bridge
It's as easy as downloading it and running the following code:
from pymatbridge import Matlab
mlab = Matlab(matlab='/Applications/MATLAB_R2011a.app/bin/matlab')
mlab.start()
res = mlab.run('path/to/yourfunc.m', {'arg1': 3, 'arg2': 5})
print res['result']
where the contents of yourfunc.m would be something like this:
%% MATLAB
function lol = yourfunc(args)
arg1 = args.arg1;
arg2 = args.arg2;
lol = arg1 + arg2;
end
See this page: An Open-Source MATLAB®-to-Python® Compiler
I would like to add one more option to the excellent summary by Lukas:
matlab_wrapper
The advantage of matlab_wrapper is that it is a pure Python library, so you will not need to compile anything. It works on GNU/Linux, Windows and OSX.
https://github.com/mrkrd/matlab_wrapper
Disclaimer: I'm the author of matlab_wrapper
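Basic usage looks roughly like this (a sketch from memory; check the README on GitHub for the exact, current API, in particular the names MatlabSession, put, eval and get):
import matlab_wrapper
# Start a MATLAB session through the engine library; nothing needs compiling.
matlab = matlab_wrapper.MatlabSession()
matlab.put('a', 12.3)        # copy a Python value into the MATLAB workspace
matlab.eval('b = a * 2')     # run MATLAB code
b = matlab.get('b')          # copy the result back into Python
print(b)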
You can use the official Matlab engine by installing Matlab and then building the Python engine from its extern files. You can check the guide website below.
--- Thanks for the advice in the first comment of this answer ---
The essential steps, in brief, are (on Windows; other platforms are covered in the URL below):
1. Download and install Matlab; the version must be R2014b or later.
2. Open a PowerShell window as administrator, then:
cd "matlabroot\extern\engines\python"
3. Use the command below to install:
python setup.py install
Running as administrator is essential, or the build will fail.
For more information, see the official getting-started page below:
http://cn.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html
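Once the install succeeds, a short session is enough to verify it (start_matlab, the function-call syntax and quit are part of the documented engine API):
import matlab.engine
eng = matlab.engine.start_matlab()   # launches a MATLAB session in the background
print(eng.sqrt(4.0))                 # calls MATLAB's sqrt, should print 2.0
eng.quit()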
Newer versions of Matlab seem to provide a module that allows you to call Matlab functions from within Python. See here and here.
Two more options for you to consider:
1. Follow the official MATLAB docs: Create a Python Application with MATLAB Code. This creates a Python library, including the MATLAB Runtime, which you can call from within your Python code.
2. Run your MATLAB code in GNU Octave and call it from Python using Oct2Py (a minimal sketch follows below).
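For the Octave route, a minimal sketch with oct2py could look like the following; the path and function name are placeholders, while addpath and the attribute-style function call follow oct2py's documented usage:
from oct2py import octave
# Make your .m files visible to the embedded Octave session, then call them
# as if they were Python functions.
octave.addpath('/path/to/your/mfiles')   # placeholder path
out = octave.yourfunc(3, 5)              # runs yourfunc.m in Octave
print(out)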
This is the solution from Mathworks.
In your current folder, create a MATLAB script in a file named triarea.m.
function a = triarea(b,h)
a = 0.5*(b.* h);
Meanwhile, run the Python code as follows:
import matlab.engine
eng = matlab.engine.start_matlab()
eng.addpath('your/code/folders/')
ret = eng.triarea(1.0,5.0)
print(ret)
>>> 2.5
Matlab already provides a Python module for the engine; to install it, do the following:
cd matlab_root_folder/extern/engines/python
python setup.py install
You are all done!
Tip: be careful about data types; the engine is not friendly with numpy arrays, so you need to convert the data first.
mat_array = matlab.double(my_numpy_array.tolist())
eng.my_matlab_function(mat_array)
I am cross-compiling for an embedded device using Yocto, so using pip install is not appropriate.
My build works, but it keeps defaulting to the UCS2 character type, which causes an error:
numpy.core.multiarray failed to import.
Caveat: I haven't really tried this...
As far as I can see, building numpy with UCS4 support means that you have to compile Python with UCS4 support. Thus, you would need to add
EXTRA_OECONF += "--enable-unicode=ucs4"
in a python_xxx.bbappend, depending on which Python (2 or 3) and which OE release you're using.
Whether you'll run into any other issues after this, I don't know...
When I import a module I built, I get this boost-python related error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(./myMod.so, 2): Symbol not found: __ZN5boost6python7objects15function_objectERKNS1_11py_functionERKSt4pairIPKNS0_6detail7keywordES9_E
Referenced from: ./myMod.so
Expected in: flat namespace
in ./myMod.so
What does this actually mean? Why was this error raised?
Description
The problem was caused by mixing objects compiled with libc++ and objects compiled with libstdc++.
In our case, the library myMod.so (compiled with libstdc++) needs a boost-python that was also compiled with libstdc++ (call it boost-python-libstdc++ from now on). When the installed boost-python is boost-python-libstdc++, it will work fine. Otherwise, on a computer whose boost-python was compiled with libc++ (or another C++ standard library), it will have a problem loading and running.
In our case, this happens because the libc++ developers intentionally changed the names of all of their symbols to prevent you (and save you) from mixing code from their library with code from a different one: myMod.so needs a function that takes an argument of type std::pair, and in libc++ that type's name is std::__1::pair rather than std::pair. Therefore, the symbol was not found.
To understand why mixing two versions of the same API is bad, consider this situation: there are two libraries, Foo and Bar. They both have a function that takes a std::string and uses it for something, but they were built against different C++ standard libraries. When a std::string created by Foo is passed to Bar, Bar will treat it as an instance of its own C++ library's std::string, and then bad things can happen (they are completely different objects).
Note: in some cases there would be no problem with two or more different versions of the same API in completely different parts of a program. There will be a problem if they pass that API's objects between them. However, checking for that can be very hard, especially if they pass the API's objects only as members of other objects. Also, a library's initialization function can do things that should not happen twice; another version may do those things again.
How to solve that?
You can always recompile your libraries and make them match each other.
You can link boost-python to your library as a static library. Then it will work on almost every computer (even one that doesn't have boost-python installed). See more about that here.
Summary
myMod.so needs a particular build of boost-python, one compiled against a specific C++ standard library. Therefore, it will not work with any other version.
In my case I was receiving:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/xmlsec.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_xmlSecDSigNs'
BACKGROUND:
M1 MacBook Pro with Monterey
I was working in a Python virtualenv (using pyenv) to use an earlier version of Python 3.8 (3.8.2), while my system had 3.8.10 installed natively.
While the 3.8.2 virtualenv was activated, I noticed the path in dlopen() was pointing to the package in the native Python install, NOT the virtualenv install.
SOLUTION:
In my case, I did not need the native 3.8 version at all, so I simply removed it, and that solved the problem.
I encountered the same problem:
Expected in: flat namespace
Adding the linker flag below fixes the problem:
-lboost_python37
Change the library name to match the boost_python version installed on your OS.
By the way, my OS is macOS High Sierra, and I used brew to install boost_python3.
"Symbol not found" means the definition of a declared function or variable was not found. When a header file of a shared object is compiled with your program, the linker adds symbols for the declared functions and objects to your compiled program. When your program is loaded by the OS's loader, those symbols are resolved so that their definitions can be loaded. Only at this point, if an implementation is missing, does the loader complain that it couldn't find the definition: perhaps it failed to resolve the actual path to the library, or the library itself wasn't compiled with the implementation/source file where the definition of the function or object resides. There is a good article on this in Linux Journal: http://www.linuxjournal.com/article/6463.
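As a side note, you can reproduce this kind of load failure outside the import machinery with ctypes, which sometimes makes debugging quicker (myMod.so is the module from the question above):
import ctypes
try:
    # ctypes.CDLL calls dlopen() under the hood, so a missing symbol in the
    # library (or in one of its dependencies) shows up here as an OSError.
    ctypes.CDLL("./myMod.so")
except OSError as exc:
    print("load failed:", exc)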
In my case I was simply failing to include all the required sources (C++ files) when compiling with Cython.
From the string after "Symbol not found" you can tell which library you are missing.
One of the solutions I found was to uninstall the module and reinstall it using the no-binary flag, which forces pip to compile it from source instead of installing a precompiled wheel.
pip install --no-binary :all: <name-of-module>
Found this solution here
Here's what I've learned (OS X):
If this is supposed to work (i.e. it works on another computer), you may be experiencing clang/gcc issues. To debug this, use otool -l on the .so file that is raising the error, or on a suspect library (in my example it's a boost-python dylib file), and examine the contents. Anything in the /System/ folder is built with clang and should be installed somewhere else with the gcc compiler. Never delete anything in the /System folder.
.so files are dynamic libraries (so = shared object). On Windows they are called .dll (dynamic-link library). They contain compiled code with functions available to any executable that links against them.
What is important to notice here is that those .so files are not Python files. They were probably compiled from C or C++ code and contain public functions that can be used from Python code (see the documentation on Extending Python with C or C++).
In your case, well, you have a corrupt .so. Try reinstalling the affected libraries, or Python, or both.
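To make the idea concrete, here is a tiny, unrelated illustration of Python loading a shared library and calling a C function from it with only the standard library (find_library may need adjusting on some platforms):
import ctypes
import ctypes.util
# Locate and load the C math library (libm.so.* on Linux, libm.dylib on macOS).
libm = ctypes.CDLL(ctypes.util.find_library("m"))
# Declare the signature of cos() so ctypes converts the argument and result.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double
print(libm.cos(0.0))  # prints 1.0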
Problem
I had this same issue when running puma as part of a Rails app:
LoadError:
dlopen(/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle, 0x0009): symbol not found in flat namespace '_ERR_load_crypto_strings'
/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle
Solution
It was solved just by installing the puma gem again: gem install puma
I just successfully installed PyEphem using pip in a pyenv. However, on import I receive:
ImportError: /python2.7/site-packages/ephem/_libastro.so: undefined symbol: PyUnicodeUCS2_AsUTF8String
Looking around, I've seen it mentioned that some modules are built "against Python" with regard to Unicode, and recompiling is suggested. I'm quite new to Python and Ubuntu 14.04, and although I believe this is the answer to my issue, I do not know what recompiling means or how to do it.
The symbol PyUnicode_AsUTF8String(value) is used once in _libastro.c and is defined on my system in the file:
/usr/include/python2.7/unicodeobject.h
There it can be aliased in one of two ways:
#ifndef Py_UNICODE_WIDE
# ...
# define PyUnicode_AsUTF8String PyUnicodeUCS2_AsUTF8String
# ...
#else
# ...
# define PyUnicode_AsUTF8String PyUnicodeUCS4_AsUTF8String
Your error message makes it sound as though your system Python is compiled to use 4-byte-wide Unicode strings (which is why the linker cannot find a UCS2 version of this function inside of it), but that the version of PyEphem that was auto-compiled on your system when you ran pip install somehow got confused, unset Py_UNICODE_WIDE, and thus generated C code that expected a UCS2 symbol.
Do you have several compiled versions of Python on your system, where the Unicode setting of one version could accidentally be affecting how this compile for your system Python takes place?
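If you do have several interpreters installed, one quick way to spot such a mismatch is to ask each of them which Unicode build it is; the interpreter paths below are placeholders for whatever is installed on your machine:
import subprocess
# hex(sys.maxunicode) prints 0x10ffff on a UCS4 ("wide") build and 0xffff on a
# UCS2 ("narrow") build; the interpreter that imports _libastro.so must match
# the build the module was compiled against.
for interpreter in ["/usr/bin/python2.7", "/home/me/.pyenv/versions/2.7.6/bin/python"]:
    out = subprocess.check_output(
        [interpreter, "-c", "import sys; print(hex(sys.maxunicode))"])
    print(interpreter, out.strip())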
When I try to use F2PY, I get the error:
Failed to import Numeric: No module named Numeric
I know that Numeric is dead and that we should use numpy instead. But the files
/usr/local/lib/python2.7/dist-packages/f2py2e/src/fortranobject.h and
/usr/local/lib/python2.7/dist-packages/f2py2e/f2py2e.py both use the Numeric package. I tried to replace it with numpy, but I was not successful.
I used to use f2py without any problem, but after I formatted my computer and installed a fresh copy of Ubuntu, I have this problem.
I also tried the --2d-numpy option for f2py, like this:
f2py -c --fcompiler=intel --2d-numpy -m processoutput processoutput.f
But it didn't work, and it is still looking for Numeric.
Thank you for your help.
I ran into a similar situation using MSYS under Windows, and indeed I was trying to use an outdated version of f2py. The newer version is included with numpy (and doesn't need to be installed separately) and can be found in the site-packages/numpy/f2py directory. Although my setup is a bit different, I was able to compile from Python using this script:
import numpy.f2py.f2py2e as f2py2e
import sys
sys.argv += "-c -m hello hello.f".split()
f2py2e.main()
You can download old versions of Numeric here: http://sourceforge.net/projects/numpy/files/Old%20Numeric/24.2/
If you install that, I think f2py will be satisfied.