What does "Symbol not found / Expected in: flat namespace" actually mean? - python

When I import a module I built, I get this boost-python related error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(./myMod.so, 2): Symbol not found: __ZN5boost6python7objects15function_objectERKNS1_11py_functionERKSt4pairIPKNS0_6detail7keywordES9_E
Referenced from: ./myMod.so
Expected in: flat namespace
in ./myMod.so
What does this actually mean? Why was this error raised?

Description
The problem was caused by mixing objects compiled with libc++ with objects compiled with libstdc++.
In our case, the library myMod.so (compiled with libstdc++) needs a boost-python that was also compiled with libstdc++ (call it boost-python-libstdc++ from now on). With boost-python-libstdc++ it works fine. Otherwise, on a computer whose boost-python was compiled with libc++ (or another C++ standard library), it will fail to load and run.
In our case, this happens because the libc++ developers intentionally changed the names of all of their symbols to prevent you (and save you) from mixing code built against their library with code built against a different one: myMod.so needs a function that takes an argument of the type std::pair. In libc++, that type's name is std::__1::pair. Therefore, the symbol was not found.
To understand why mixing two versions of the same API is bad, consider this situation: there are two libraries, Foo and Bar. Both have a function that takes a std::string and uses it for something, but they were built against different C++ standard libraries. When a std::string created by Foo is passed to Bar, Bar assumes it is an instance of its own standard library's std::string, and bad things can happen (they are completely different objects).
Note: In some cases there is no problem having two or more different versions of the same API in completely separate parts of a program. There is a problem when they pass that API's objects between them. Checking for that can be very hard, especially when the API object is passed only as a member of another object. Also, a library's initialization function can do things that must not happen twice; another version may do those things again.
How to solve that?
You can always recompile your libraries and make them match each other.
You can link boost-python into your library as a static library. Then it will work on almost every computer (even one that doesn't have boost-python installed). See more about that here.
Summary
myMod.so needs a version of boost-python that was compiled with a specific C++ standard library. Therefore, it will not work with any other version.
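A rough way to check what a module expects, assuming the usual nm and c++filt developer tools are available (as on most macOS and Linux setups):
nm -u ./myMod.so | c++filt
nm -u lists the undefined symbols (the ones the loader must resolve at import time) and c++filt demangles them. Demangled names containing std::__1:: indicate a libc++ build; plain std:: types such as std::pair indicate libstdc++.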

In my case I was receiving:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/xmlsec.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_xmlSecDSigNs'
BACKGROUND:
M1 MacBook Pro with Monterey
I was working in a Python virtualenv (using pyenv) to use an earlier version of Python 3.8 (3.8.2), while my system had 3.8.10 installed natively.
While I was in the activated 3.8.2 virtualenv, I noticed the path in dlopen() was pointing to the package in the native Python install, NOT the virtualenv install.
SOLUTION:
In my case, I did not need the native 3.8 version at all, so I simply removed it, and this solved the problem.
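To confirm which interpreter (and therefore which site-packages) a shell is actually resolving, something like the following helps (standard tooling, nothing project-specific):
which python
python -c "import sys; print(sys.prefix)"
If sys.prefix points at the native framework install rather than the virtualenv, the wrong copy of the compiled extension is the one being loaded.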

I encountered the same problem:
Expected in: flat namespace
Adding the linker flag fixed the problem:
-lboost_python37
Change the dynamic library name to the one installed on your OS.
By the way, my OS is macOS High Sierra and I used brew to install boost_python3.
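For reference, a minimal sketch of the kind of link line this refers to (file names are hypothetical, the Python-specific flags are omitted, and the library path assumes a Homebrew prefix of /usr/local):
c++ -shared -o myMod.so myMod.o -L/usr/local/lib -lboost_python37
The important part is that the -lboost_pythonXY suffix matches the library file that actually exists on the machine (e.g. libboost_python37.dylib).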

Symbol not found means the definition of the declared function or variable was not found. When a header file of a shared object is compiled with your program, the linker adds symbols for the declared functions and objects to your compiled program. When your program is loaded by the OS's loader, those symbols are resolved so that their definitions can be loaded. It is only at this point that, if an implementation is missing, the loader complains it couldn't find the definition; this can happen because it failed to resolve the actual path to the library, or because the library itself wasn't compiled with the implementation/source file where the definition of the function or object resides. There is a good article on this in the Linux Journal: http://www.linuxjournal.com/article/6463.
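On Linux, a quick way to see whether a shared object has symbols the loader will not be able to resolve (assuming the standard glibc tooling) is:
ldd -r ./myMod.so
The -r flag performs the data and function relocations and reports every undefined symbol, rather than only listing the dependent libraries.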

In my case, I was simply failing to include all the required sources (C++ files) when compiling with Cython.
From the string after "Symbol not found" you can work out which library or source file you are missing.
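A minimal sketch of what including all sources looks like in a Cython build (the module and file names here are hypothetical):
# setup.py
from setuptools import setup, Extension
from Cython.Build import cythonize

ext = Extension(
    "myMod",
    sources=["myMod.pyx", "impl_a.cpp", "impl_b.cpp"],  # list every .cpp whose symbols are used
    language="c++",
)

setup(ext_modules=cythonize([ext]))
Leaving one of the .cpp files out of sources usually still compiles and links, since extension modules allow undefined symbols at link time; the missing definitions only surface when Python imports the module.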

One of the solutions I found was to uninstall and reinstall the module using the --no-binary flag, which forces pip to compile it from source instead of installing a precompiled wheel.
pip install --no-binary :all: <name-of-module>
Found this solution here

Here's what I've learned (OS X):
If this is supposed to work (i.e. it works on another computer), you may be running into clang/gcc issues. To debug this, use otool -l on the .so file which is raising the error, or on a suspect library (in my example it's a boost-python dylib file), and examine the contents. Anything in the /System/ folder is built with clang; anything built with the gcc compiler should be installed somewhere else. Never delete anything in the /System folder.
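For example (the path is illustrative), otool -L ./myMod.so lists each dylib the module was linked against together with its full path, so dependencies resolving under /System/... versus /usr/local/... or /opt/local/... stand out immediately.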

.so files are dynamic libraries (so = shared object). On Windows they are called .dll (dynamic-link library). They contain compiled code with functions available for use by any executable that links against them.
What is important to notice here is that those .so files are not Python files. They were probably compiled from C or C++ code and contain public functions that can be used from Python code (see the documentation on Extending Python with C or C++).
In your case, well, you have a corrupt .so. Try reinstalling the affected libraries, or Python, or both.
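As a small illustration of how Python treats these files, the interpreter keeps a list of the extension-module suffixes it is willing to import (the exact output depends on platform and Python version):
python -c "import importlib.machinery as m; print(m.EXTENSION_SUFFIXES)"
On a macOS CPython 3.8 build this typically includes '.cpython-38-darwin.so', i.e. exactly the kind of compiled, non-Python file discussed here.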

Problem
I had this same issue when running puma as part of a Rails app:
LoadError:
dlopen(/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle, 0x0009): symbol not found in flat namespace '_ERR_load_crypto_strings'
/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle
Solution
It was solved just by installing the puma gem again: gem install puma

Related

Troubleshooting using Pythonnet and setting the Runtime.PythonDLL property

I am trying to use an assembly for .NET Framework 4.8 via Pythonnet. I am using version 3.0.1 with Python 3.10. The Pythonnet documentation states:
You must set Runtime.PythonDLL property or PYTHONNET_PYDLL environment variable starting with version 3.0, otherwise you will receive BadPythonDllException (internal, derived from MissingMethodException) upon calling Initialize. Typical values are python38.dll (Windows), libpython3.8.dylib (Mac), libpython3.8.so (most other Unix-like operating systems).
However, the documentation unfortunately does not state how the property is set, and I do not understand how to do this.
When I try:
import clr
from pythonnet import load
load('netfx')
clr.AddReference(r'path\to\my.dll')
unsurprisingly, the following error comes up:
Failed to initialize pythonnet: System.InvalidOperationException: This property must be set before runtime is initialized
at Python.Runtime.Runtime.set_PythonDLL(String value)
at Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)
at Python.Runtime.Runtime.set_PythonDLL(String value)
at Python.Runtime.Loader.Initialize(IntPtr data, Int32 size)
[...]
in load
raise RuntimeError("Failed to initialize Python.Runtime.dll")
RuntimeError: Failed to initialize Python.Runtime.dll
The question now is where and how the Runtime.PythonDLL property or the PYTHONNET_PYDLL environment variable should be set.
Thanks,
Jens
I believe this is because import clr internally calls pythonnet.load, and the version of pythonnet you are using does not print any warning in this situation.
The right way is to call load before you call import clr for the first time.
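A minimal sketch of that ordering, reusing the runtime name and DLL path from the question:
from pythonnet import load

load("netfx")                          # select the runtime first
import clr                             # importing clr afterwards no longer triggers a default load

clr.AddReference(r"path\to\my.dll")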
The way I understand your use case, it is about embedding .NET in Python; however, the way I understand the requirement to "set Runtime.PythonDLL property", it applies to embedding Python in .NET, which was my use case. Anyhow, maybe the following will be useful.
Hidden at the bottom of the main GitHub README.md is a link to the wiki (and of course the tab at the top of the GitHub repo), where thankfully there is a lot more detailed information and links to useful articles.
The main README.md asserts Runtime.PythonDLL "must be set", yet the example code there does not illustrate doing so. Furthermore, the documentation on the official website asserts that Python.Runtime.dll must be "referenced", which further confuses things.
In my experience, Python.Runtime.dll was automatically referenced when installing the pythonnet NuGet package via Visual Studio. Perhaps Python.Runtime.dll does not get referenced automatically in earlier versions, or when Python.NET is installed in ways other than via NuGet?
To answer your question "how to set the Runtime.PythonDLL property?": my understanding is that this is done by assigning a string path to that property before the other usual setup:
Runtime.PythonDLL = @"C:\Users\<username>\AppData\Local\Programs\Python\Python310\python310.dll"
In my case, I found this path by using where python on Windows (the equivalent of which in bash, or Get-Command in PowerShell):
C:\>where python
C:\Users\<username>\AppData\Local\Programs\Python\Python310\python.exe

ImportError: DLL load failed while importing ON WINDOWS

I fixed a super-annoying case of "ImportError: DLL load failed while importing" in a way that generally applies to Windows, so let me share it with the group. Here is the question:
I installed FINUFFT via pip install finufft. When I import finufft, I get this error:
ImportError: DLL load failed while importing _finufft: The specified module could not be found.
How do I fix it?
Read to the end before doing anything.
The error means that a DLL cannot find another DLL that it was linked with. But which other DLL?
Download Dependencies.
Locate your problematic DLL. In this specific case: Locate the folder ...\Lib\site-packages\finufft\ of the FINUFFT installation that you want to fix. ...\ is the path of your standard python installation or of your python virtual environment.
Start DependenciesGui.exe and use it to open the problematic DLL, e.g. ...\finufft\_finufft.cp38-win_amd64.pyd. (A .pyd is a regular DLL with some specific entry points for python.)
On the left, you will see a complete list of the problematic DLL's direct dependencies, whose own dependencies you can in turn unfold with a mouse click. Apart from typical Windows DLLs like kernel32.dll and MSVCRT.dll, and apart from the FFTW DLLs, which should already be in the FINUFFT folder, there will also be some - possibly missing - MinGW runtime DLLs. For me, they were libgcc_s_seh-1.dll, libgomp-1.dll and libstdc++-6.dll. By checking their direct dependencies, I also discovered libwinpthread-1.dll as missing.
[See EDIT below!!!] I found those DLLs in Anaconda (...\Anaconda3\Library\mingw-w64\bin\), but you can probably also get them from cygwin (...\cygwin64\bin\), git (...\Git\mingw64\bin\) or anything else that downloads mingw64 and its packages on Windows.
To solve the problem, copy the respective DLLs into ...\Lib\site-packages\finufft\ and give them the exact filenames that the FINUFFT DLL expects according to Dependencies. This works because of the Windows DLL search order.
Now import finufft should work in the specific Python environment whose FINUFFT installation you fixed. Clearly, this method can be applied any time DLL dependencies are missing.
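As a command-line alternative to the GUI (assuming the Visual Studio build tools, which include dumpbin, are installed; the path placeholder is the same as above), the same direct-dependency list can be printed with:
dumpbin /dependents ...\Lib\site-packages\finufft\_finufft.cp38-win_amd64.pyd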
EDIT - a correction to my answer from @CristiFati: if possible, DLLs and similar binaries should always be built with the same toolchain. So if you don't compile them yourself, get them from as few different places as possible, i.e. don't mix regular Python, Anaconda, Cygwin, etc. if you can avoid it. Of course, the Windows DLLs will have a different origin from the MinGW DLLs.

How should I use cx_freeze with a MacPorts library?

I'm currently using the Python 3.4 Mac OS X build from Python.org. I'm using a Python module that depends on a library I built in MacPorts. The script does not run out of the box:
Traceback (most recent call last):
File "magnetx.py", line 6, in <module>
import yara
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so, 2): Library not loaded: /usr/local/lib/libyara.3.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so
Reason: image not found
I can fix this if I set an environment variable
export DYLD_FALLBACK_LIBRARY_PATH="/opt/local/lib:$DYLD_FALLBACK_LIBRARY_PATH"
Unfortunately, it does not satisfy cx_freeze. It keeps looking in /usr/local/lib, when it should be looking in /opt/local/lib.
copying /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so -> build/exe.macosx-10.6-intel-3.4/yara.so
copying /usr/local/lib/libyara.3.dylib -> build/exe.macosx-10.6-intel-3.4/libyara.3.dylib
error: [Errno 2] No such file or directory: '/usr/local/lib/libyara.3.dylib'
I could probably build Python in MacPorts, but that seems like it should be unnecessary. Any ideas on how to fix this?
On OS X, dependent libraries are referenced using absolute paths. The path that gets copied into your binary depends on the so-called "install name" of the library you link against at build time. In your case, the yara.so does not reference the library you would like it to load. Let's explore a couple of reasons why this could be the case, and a couple of ways to fix that:
I've verified that libyara.dylib as installed by MacPorts (on my system) has an install name of /opt/local/lib/libyara.0.dylib. Sometimes, build systems that don't use a cross-platform library build tool and don't expect the peculiarities of OS X mess this up (and use relative paths or /usr/local/lib). If this was the case, it would be a bug in the software's build system, which could be manually fixed using install_name_tool(1)'s -id flag (before linking against the library).
Your copy of yara.so may have been built against a different version of libyara.dylib that resides in /usr/local/lib. That would explain why your yara.so does not contain the correct absolute path to the MacPorts copy of libyara.dylib, but it would also prevent the error you're seeing from happening in the first place, unless you had a copy in /usr/local/lib at build time and deleted it later on. As you've already seen, you can instruct OS X' loader to also search different paths using the DYLD_* series of environment variables. My take on why this doesn't work for cx_freeze is that it doesn't pay attention to the DYLD_* series of variables.
If you are sure that the copy of libyara.dylib yara.so expects to find in /usr/local/lib is binary-compatible with the one in /opt/local/lib, you can manually modify the library load commands in yara.so to point to the latter path using install_name_tool(1)'s -change old new parameter, e.g. install_name_tool -change /usr/local/lib/libyara.3.dylib /opt/local/lib/libyara.0.dylib /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so. This is essentially modifying the binary with the change the loader did for you when you set DYLD_FALLBACK_LIBRARY_PATH. Since the library major version numbers seem to be different, this may not be a safe assumption.
If you don't know whether yara.so is compatible with MacPorts' build of libyara.0.dylib, you can and should recompile yara.so. If the re-compile went right, you should be able to check the library load commands using otool -L yara.so and see the paths beginning with /opt/local in there (provided that otool -D /opt/local/lib/libyara.0.dylib correctly points to itself).
Edit: I've just re-checked and noticed that my MacPorts build's library version number differs from the one your system expects. That sounds a lot like case number 2 to me.

import error: ephem/_libastro.so undefined symbol: PyUnicodeUCS2_AsUTF8String

I just successfully installed PyEphem using pip in a pyenv. However, on import I receive:
ImportError: /python2.7/site-packages/ephem/_libastro.so: undefined symbol: PyUnicodeUCS2_AsUTF8String
In looking around, I've seen it mentioned that some modules are built "against Python" with regard to Unicode, and recompiling is suggested. I'm quite new to Python and Ubuntu 14.04, and although I believe this is the answer to my issue, I do not know what recompiling means or how to do it.
The symbol PyUnicode_AsUTF8String(value) is used once in _libastro.c and is defined on my system in the file:
/usr/include/python2.7/unicodeobject.h
There it can be aliased one of two ways:
#ifndef Py_UNICODE_WIDE
# ...
# define PyUnicode_AsUTF8String PyUnicodeUCS2_AsUTF8String
# ...
#else
# ...
# define PyUnicode_AsUTF8String PyUnicodeUCS4_AsUTF8String
Your error message makes it sound as though your system Python is compiled to use 4-byte-wide Unicode strings (hence why the linker cannot find a UCS2 version of this function inside it), but that the version of PyEphem that was auto-compiled on your system when you ran pip install somehow got confused, did not see Py_UNICODE_WIDE set, and thus generated C code that expected a UCS2 symbol.
Do you have several compiled versions of Python on your system, where the Unicode setting of one version could accidentally be affecting how this compile for your system Python takes place?
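A quick way to see which Unicode build a given Python 2 interpreter is (a wide/UCS4 build reports 1114111, a narrow/UCS2 build reports 65535):
python -c "import sys; print(sys.maxunicode)"
If the interpreter reports the wide value while the compiled _libastro.so is asking for the UCS2 symbol, that matches the mismatch described above.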

Why do no Python DLLs built with MSVC load with mod_wsgi?

I recently updated from Python 2.5 to 2.7 (I tried 2.6 during my hassles), and while everything works fine from the command line or in the Django runserver, mod_wsgi cannot load any module that contains DLLs (.pyd) built with MSVC.
For example, if I build my own version of pycrypto or lxml, I get the following error, but only under mod_wsgi:
ImportError at /
DLL load failed: The specified module could not be found.
Even the official PIL binaries fail to import the _imaging C module under mod_wsgi, but that may be another problem.
However, if I use a version of pycrypto built with MinGW from somewhere like http://www.voidspace.org.uk/python/modules.shtml#pycrypto, it imports fine even under mod_wsgi. I don't find this solution satisfactory, though, since the whole reason I updated Python was to avoid needing to hunt for prebuilt binaries, and I can't build them myself because MinGW fails more than half the time for me.
EDIT2:
I noticed this in Python27/Lib/distutils/msvc9compiler.py on lines 680-705:
try:
    # Remove references to the Visual C runtime, so they will
    # fall through to the Visual C dependency of Python.exe.
    # This way, when installed for a restricted user (e.g.
    # runtimes are not in WinSxS folder, but in Python's own
    # folder), the runtimes do not need to be in every folder
    # with .pyd's.
    manifest_f = open(manifest_file)
    try:
        manifest_buf = manifest_f.read()
    finally:
        manifest_f.close()
    pattern = re.compile(
        r"""<assemblyIdentity.*?name=("|')Microsoft\."""\
        r"""VC\d{2}\.CRT("|').*?(/>|</assemblyIdentity>)""",
        re.DOTALL)
    manifest_buf = re.sub(pattern, "", manifest_buf)
    pattern = "<dependentAssembly>\s*</dependentAssembly>"
    manifest_buf = re.sub(pattern, "", manifest_buf)
    manifest_f = open(manifest_file, 'w')
    try:
        manifest_f.write(manifest_buf)
    finally:
        manifest_f.close()
except IOError:
    pass
This probably explains why everything works from the command line but not in mod_wsgi. Commenting all of this out seems to fix the problem, but it doesn't feel like the proper fix. The question now is where to put msvcr90.dll so that Apache can use it. I notice that Apache's bin folder contains msvcr70.dll and msvcr80.dll, but putting the 90 version in there doesn't work.
I've had a similar issue and eventually found a solution here: download/update your Apache server with a build from http://www.apachelounge.com/download/.
While I don't know anything about mod_wsgi, I dare to guess that the most probable reason is missing runtime dependencies. You may want to inspect your MSVC build with Dependency Walker, which ships with MSVC (e.g., in MSVC 2005 it's located at \Common7\Tools\Bin\Depends.Exe). It will show you which DLLs are required by a binary.
As another workaround, it should be possible to build your modules with a statically linked runtime (see Project Properties -> C/C++ -> Code Generation -> Runtime Library -- choose "Multi-threaded" (/MT) rather than "Multi-threaded DLL" (/MD); or, if building from the command line, make sure /MT is used instead of /MD). However, there may be problems if runtime-dependent things (e.g. FILE* objects) cross the module boundary.
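For a command-line build, a minimal sketch might look like this (the module name, source file and Python paths are hypothetical):
cl /MT /LD mymodule.c /I C:\Python27\include C:\Python27\libs\python27.lib /Femymodule.pyd
Here /LD produces a DLL, /MT links the C runtime statically, and /Fe names the output so that Python can import it as mymodule.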
UPD: If you have the correct VC redistributable installed, the reason may be a problem with the SxS configuration (i.e. the manifest of the .pyd itself is wrong, missing, or conflicts with the manifest of the application that loads the .pyd). You can use the sxstrace utility to see what exactly is going on. See Diagnosing SideBySide failures.
Also, did you try statically linking the runtime? Or, better yet, check what the requirements of your host process are.
I was getting this error with zmq. The solution was to embed the python27.dll manifest into the libzmq.pyd file (and this will most likely work for other .pyd/.dll files too). Make sure you use all 64-bit or all 32-bit.
"C:\Program Files (x86)\Windows Kits\8.0\bin\x64\mt.exe" -inputresource:C:\windows\system32\python27.dll;#2 -outputresource:libzmq.pyd;#2
See https://code.google.com/p/pyodbc/issues/detail?id=214
