CMake find_package for Python3 fails upon finding Python 2

While moving to CMake 3.17.3 (from 3.13.3) we have stumbled on a problem with Python. In the root CMakeLists.txt we now have:
find_package(Python3 REQUIRED COMPONENTS Interpreter)
However, this fails with the following:
CMake Error at /home/abadura/cmake/cmake-3.17.3-Linux-x86_64/share/cmake-3.17/Modules/FindPackageHandleStandardArgs.cmake:164 (message):
Could NOT find Python3 (missing: Python3_EXECUTABLE Interpreter)
Reason given by package:
Interpreter: Wrong major version for the interpreter "/build/ltesdkroot/Platforms/LINUX/MB_PS_LFS_REL_2020_07_0068/sdk/bld-tools/x86_64-pc-linux-gnu/bin/python"
Call Stack (most recent call first):
/home/abadura/cmake/cmake-3.17.3-Linux-x86_64/share/cmake-3.17/Modules/FindPackageHandleStandardArgs.cmake:445 (_FPHSA_FAILURE_MESSAGE)
/home/abadura/cmake/cmake-3.17.3-Linux-x86_64/share/cmake-3.17/Modules/FindPython/Support.cmake:2437 (find_package_handle_standard_args)
/home/abadura/cmake/cmake-3.17.3-Linux-x86_64/share/cmake-3.17/Modules/FindPython3.cmake:309 (include)
CMakeLists.txt:20 (find_package)
while the Python found (/build/ltesdkroot/Platforms/LINUX/MB_PS_LFS_REL_2020_07_0068/sdk/bld-tools/x86_64-pc-linux-gnu/bin/python) reports version Python 2.7.15+.
Searching and experimenting I found out that adding:
set(Python3_FIND_STRATEGY VERSION)
before the find_package helps. With this added Python is found:
-- Found Python3: /opt/python/x86_64/3.6.0/bin-wrapped/python3.6 (found version "3.6.0") found components: Interpreter
Now, the problem is that FindPython3 documentation states the following:
Python3_FIND_STRATEGY
This variable defines how lookup will be done. The Python3_FIND_STRATEGY variable can be set to one of the following:
VERSION: Try to find the most recent version in all specified locations. This is the default if policy CMP0094 is undefined or set to OLD.
LOCATION: Stops lookup as soon as a version satisfying version constraints is found. This is the default if policy CMP0094 is set to NEW.
So, the LOCATION should work OK based on my understanding of the description. It should ignore the /build/ltesdkroot/Platforms/LINUX/MB_PS_LFS_REL_2020_07_0068/sdk/bld-tools/x86_64-pc-linux-gnu/bin/python since it doesn't satisfy version constraints (it is Python 2 rather than Python 3) and should continue to find the /opt/python/x86_64/3.6.0/bin-wrapped/python3.6 which does satisfy the constraints. (For completeness, I tried adding explicit 3.6 version in find_package and it didn't change anything.)
Another issue is whether I should set Python3_FIND_STRATEGY or use CMP0094 directly. The CMP0094 documentation has a somewhat scary warning:
Note: The OLD behavior of a policy is deprecated by definition and may be removed in a future version of CMake.
So, I'm not sure which approach (setting Python3_FIND_STRATEGY or activating OLD for CMP0094) is the desired (stable) approach.
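For reference, here is a minimal sketch of the two workarounds being weighed here (only one of them would be used; the explicit 3.6 constraint is optional):
# Variant A: the module-specific variable (if set, it overrides the policy-based default)
set(Python3_FIND_STRATEGY VERSION)
find_package(Python3 3.6 REQUIRED COMPONENTS Interpreter)
# Variant B: set the policy explicitly (OLD is deprecated by definition, per the warning above)
cmake_policy(SET CMP0094 OLD)
find_package(Python3 3.6 REQUIRED COMPONENTS Interpreter)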

Related

Using CMake FindPython() with "Development" component when cross-compiling

I have a CMake toolchain file containing the following
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(target_triplet "arm-linux-gnueabihf")
set(target_root /srv/chroot/raspbian)
set(CMAKE_C_COMPILER ${target_triplet}-gcc CACHE FILEPATH "C compiler")
set(CMAKE_CXX_COMPILER ${target_triplet}-g++ CACHE FILEPATH "C++ compiler")
set(CMAKE_SYSROOT ${target_root})
set(CMAKE_LIBRARY_ARCHITECTURE ${target_triplet})
# Look for the headers and libraries in the target system.
set(CMAKE_FIND_ROOT_PATH ${target_root})
# Setting the root path is not enough to make pkg-config work.
set(ENV{PKG_CONFIG_DIR} "")
set(ENV{PKG_CONFIG_LIBDIR} "${CMAKE_FIND_ROOT_PATH}/usr/lib/${target_triplet}/pkgconfig")
set(ENV{PKG_CONFIG_SYSROOT_DIR} ${CMAKE_FIND_ROOT_PATH})
# Don't look for programs in the root path (these are ARM programs, they won't
# run on the build machine)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# Only look for libraries, headers and packages in the sysroot, don't look on
# the build machine
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
This relies on having a working Raspbian installation under /srv/chroot/raspbian and is supposed to make it possible to easily use its system libraries. This works fine for "simple" libraries after setting PKG_CONFIG_XXX like above, but fails for
find_package(Python3 COMPONENTS Development.Module REQUIRED)
with the following errors:
CMake Error at /usr/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Python3 (missing: Python3_INCLUDE_DIRS Development.Module)
Call Stack (most recent call first):
/usr/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.24/Modules/FindPython/Support.cmake:3217 (find_package_handle_standard_args)
/usr/share/cmake-3.24/Modules/FindPython3.cmake:490 (include)
Python3API/CMakeLists.txt:9 (find_package)
I'm a bit lost in the 3421 lines of the FindPython/Support.cmake module, so I don't understand why it doesn't find the headers; unfortunately, the error is not very helpful, and there doesn't seem to be any way to turn on debugging for this code. It looks like it doesn't search inside the chroot containing the target system at all: it is supposed to use ${CMAKE_LIBRARY_ARCHITECTURE}-python-config if available, and a file with this name does exist in ${target_root}/usr/bin, but somehow it doesn't seem to be found. I've tried setting CMAKE_FIND_ROOT_PATH_MODE_PROGRAM to ONLY, but that doesn't seem to help.
Is it possible to make this work without manually setting Python3_INCLUDE_DIRS and all the other variables? Please note that I really want to use the target root and not install the packages on the host system, as they are not available for it in the versions old enough to ensure compatibility with the system being targeted.
Thanks in advance for any suggestions!
Actually, doing
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ONLY)
find_package(Python3 COMPONENTS Development.Module REQUIRED)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
does work for Python 3. Unfortunately, if you also still need to support Python 2, it doesn't work there, because python2.7-config misbehaves when invoked from outside the sysroot and returns wrong paths containing a duplicated sysroot prefix. I couldn't find any workaround for this and ended up hardcoding the Python 2 include directories.
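For what it's worth, that hardcoding could look roughly like the sketch below; the sysroot layout is an assumption, and Python2_INCLUDE_DIR is the FindPython2 hint variable for the headers (the component mirrors the Python 3 call above):
# Point FindPython2 at the headers inside the Raspbian sysroot (layout assumed)
set(Python2_INCLUDE_DIR ${CMAKE_SYSROOT}/usr/include/python2.7)
find_package(Python2 COMPONENTS Development.Module REQUIRED)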

What does "Symbol not found / Expected in: flat namespace" actually mean?

When I import a module I built, I get this boost-python related error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(./myMod.so, 2): Symbol not found: __ZN5boost6python7objects15function_objectERKNS1_11py_functionERKSt4pairIPKNS0_6detail7keywordES9_E
Referenced from: ./myMod.so
Expected in: flat namespace
in ./myMod.so
What does this actually mean? Why was this error raised?
Description
The problem was caused by mixing objects compiled with libc++ and objects compiled with libstdc++.
In our case, the library myMod.so (compiled with libstdc++) needs a boost-python that was also compiled with libstdc++ (call it boost-python-libstdc++ from now on). When boost-python is boost-python-libstdc++, it will work fine. Otherwise, on a computer whose boost-python was compiled with libc++ (or another C++ standard library), it will have a problem loading and running.
In our case, it happens because the libc++ developers intentionally changed the names of all of their symbols to prevent you (and save you) from mixing code from their library with code from a different one: myMod.so needs a function that takes a std::pair argument, but in libc++ this type's name is std::__1::pair, so the symbol was not found.
To understand why mixing two versions of the same API is bad, consider this situation: there are two libraries, Foo and Bar. They both have a function that takes a std::string and uses it for something, but they use different C++ standard libraries. When a std::string that has been created by Foo is passed to Bar, Bar will think it is an instance of its own C++ library's std::string, and then bad things can happen (they are completely different objects).
Note: In some cases there would be no problem having two or more different versions of the same API in completely different parts of a program. There will be a problem if they pass this API's objects between them. However, checking for that can be very hard, especially if they pass the API objects only as members of other objects. Also, a library's initialization function can do things that should not happen twice; another version may do those things again.
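One way to check this diagnosis is to look at which C++ runtime each binary pulls in; a rough sketch (the library path is a guess, adjust to your system):
otool -L myMod.so                                  # which runtime (libc++ or libstdc++) the module links against
otool -L /usr/local/lib/libboost_python.dylib      # same check for the boost-python dylib being loaded (path is a guess)
nm -u myMod.so | c++filt | head                    # demangled undefined symbols: std::__1::pair means libc++, plain std::pair means libstdc++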
How to solve that?
You can always recompile your libraries and make them match each other.
You can link boost-python to your library as a static library. Then it will work on almost every computer (even one that doesn't have boost-python installed). See more about that here.
Summary
myMod.so needs another version of boost-python, one compiled with a specific C++ standard library. Therefore, it will not work with any other version.
In my case I was receiving:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/xmlsec.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_xmlSecDSigNs'
BACKGROUND:
M1 MacBook Pro with Monterey
I was working with a python virtualenv (using pyenv) to use an earlier version of python3.8 (3.8.2), while my system had 3.8.10 installed natively.
While I was in the activated 3.8.2 virtualenv I noticed the path in dlopen() was pointing to the package in the native python install NOT the virtualenv install.
SOLUTION:
In my case, I did not need the native 3.8 version at all so I simply removed it and this solved the problem.
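If you hit something similar, a quick way to confirm which interpreter and which copy of the package are actually in use (generic commands, nothing here is specific to pyenv):
which python3                                        # should resolve into the virtualenv, not the system install
python3 -c "import sys; print(sys.prefix)"           # prefix of the interpreter actually running
python3 -c "import xmlsec; print(xmlsec.__file__)"   # where the module resolves from (may raise the same ImportError if it is the broken copy)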
I encountered the same problem:
Expected in: flat namespace
Adding the linker flag fixed the problem:
-lboost_python37
Change the library name in the flag to match the one installed on your OS.
By the way, my OS is macOS High Sierra and I used brew to install boost_python3.
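For context, a hypothetical link line using that flag might look like the following; the compiler, paths, and Python/Boost versions are assumptions, so adjust them to whatever brew actually installed:
c++ -std=c++11 -shared -fPIC myMod.cpp -o myMod.so \
    $(python3.7-config --includes) \
    -L/usr/local/lib -lboost_python37 \
    $(python3.7-config --ldflags)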
"Symbol not found" means the definition of the declared function or variable was not found. When a header file of a shared object is compiled with your program, the linker adds symbols for the declared functions and objects to your compiled program. When your program is loaded by the OS's loader, those symbols are resolved so that their definitions get loaded. Only at this point, if the implementation is missing, does the loader complain that it couldn't find the definition, perhaps because it failed to resolve the actual path to the library, or because the library itself wasn't compiled with the implementation/source file where the definition of the function or object resides. There is a good article on this in the Linux Journal: http://www.linuxjournal.com/article/6463
In my case I was just failing to import all the required sources (c++ files) when compiling with Cython.
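When you suspect a library simply lacks the definition, you can also search candidate libraries for the symbol directly; a sketch with hypothetical library and symbol names:
nm -gU /usr/local/lib/libexample.dylib | grep _someMissingSymbol      # macOS: list only exported (defined) symbols
nm -D --defined-only /usr/lib/libexample.so | grep someMissingSymbol  # Linux equivalent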
From the string after "Symbol not found" you can understand which library you are missing.
One of the solutions I found was to uninstall and reinstall it using the no-binary flag, which forces pip to compile the module from source instead of installing a precompiled wheel.
pip install --no-binary :all: <name-of-module>
Found this solution here
Here's what I've learned (osx):
If this is supposed to work (i.e. it works on another computer), you may be experiencing clang/gcc issues. To debug this, use otool -l on the .so file that is raising the error, or on a suspect library (in my example it's a boost-python dylib file), and examine the contents. Anything in the /System/ folder is built with clang, and should be installed somewhere else with the gcc compiler. Never delete anything in the /System folder.
.so files are dynamic libraries (so = shared object). On Windows they are called .dll (dynamic-link library). They contain compiled code which contains functions available for usage to any executable which links them.
What is important to notice here is that those .so are not Python files. They were probably compiled from C or C++ code and contain public functions which can be used from Python code (see documentation on Extending Python with C or C++).
In your case, well, you have a corrupt .so. Try reinstalling the affected libraries, or Python, or both.
Problem
I had this same issue when running puma as part of a Rails app:
LoadError:
dlopen(/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle, 0x0009): symbol not found in flat namespace '_ERR_load_crypto_strings'
/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle
Solution
It was solved just by installing the puma gem again: gem install puma

How should I use cx_freeze with a Macports library?

I'm currently using the Python 3.4 Mac OS X build from Python.org. I'm using a Python module that depends on a library that I built in Macports. The script does not run out-of-the-box:
Traceback (most recent call last):
File "magnetx.py", line 6, in <module>
import yara
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so, 2): Library not loaded: /usr/local/lib/libyara.3.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so
Reason: image not found
I can fix this if I set an environment variable
export DYLD_FALLBACK_LIBRARY_PATH="/opt/local/lib:$DYLD_FALLBACK_LIBRARY_PATH"
Unfortunately, it does not satisfy cx_freeze. It keeps looking in /usr/local/lib, when it should be looking in /opt/local/lib.
copying /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so -> build/exe.macosx-10.6-intel-3.4/yara.so
copying /usr/local/lib/libyara.3.dylib -> build/exe.macosx-10.6-intel-3.4/libyara.3.dylib
error: [Errno 2] No such file or directory: '/usr/local/lib/libyara.3.dylib'
I could probably build Python in Macports, but that seems like it should be unnecessary. Any ideas on how to fix this?
On OS X, dependent libraries are referenced using absolute paths. The path that gets copied into your binary depends on the so-called "install name" of the library you link against at build time. In your case, the yara.so does not reference the library you would like it to load. Let's explore a couple of reasons why this could be the case, and a couple of ways to fix that:
1. I've verified that libyara.dylib as installed by MacPorts (on my system) has an install name of /opt/local/lib/libyara.0.dylib. Sometimes, build systems that don't use a cross-platform library build tool and don't expect the peculiarities of OS X mess this up (and use relative paths or /usr/local/lib). If this were the case, it would be a bug in the software's build system, which could be manually fixed using install_name_tool(1)'s -id flag (before linking against the library).
2. Your copy of yara.so may have been built against a different version of libyara.dylib that resides in /usr/local/lib. That would explain why your yara.so does not contain the correct absolute path to the MacPorts copy of libyara.dylib, but it would also prevent the error you're seeing from happening in the first place, unless you had a copy in /usr/local/lib at build time and deleted it later on. As you've already seen, you can instruct OS X's loader to also search different paths using the DYLD_* series of environment variables. My take on why this doesn't work for cx_freeze is that it doesn't pay attention to the DYLD_* series of variables.
3. If you are sure that the copy of libyara.dylib yara.so expects to find in /usr/local/lib is binary-compatible with the one in /opt/local/lib, you can manually modify the library load commands in yara.so to point to the latter path using install_name_tool(1)'s -change old new parameter, e.g. install_name_tool -change /usr/local/lib/libyara.3.dylib /opt/local/lib/libyara.0.dylib /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so. This essentially bakes into the binary the change the loader did for you when you set DYLD_FALLBACK_LIBRARY_PATH. Since the library major version numbers seem to be different, this may not be a safe assumption.
4. If you don't know whether yara.so is compatible with MacPorts' build of libyara.0.dylib, you can and should recompile yara.so. If the re-compile went right, you should be able to check the library load commands using otool -L yara.so and see paths beginning with /opt/local in there (provided that otool -D /opt/local/lib/libyara.0.dylib correctly points to itself).
Edit: I've just re-checked and noticed that my MacPorts build's library version number differs from the one your system expects. That sounds a lot like case number 2 to me.
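In shell terms, the verification described in point 4 is roughly the following (paths are the ones from this question and the MacPorts install above):
# After a correct rebuild, the load commands should point into /opt/local:
otool -L /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/yara.so
# The MacPorts library's install name should point to itself:
otool -D /opt/local/lib/libyara.0.dylib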

Upgrading OpenCV brew formula: Python not loading properly

I'm trying to update the latest opencv formula (as of writing, this formula installs opencv 2.4.7) to build the latest version of opencv (2.4.8).
The first thing I did was brew edit opencv, change the url to 'https://github.com/Itseez/opencv/archive/2.4.8.tar.gz', and update the checksum. I anticipated that I might have to deal with some build issues, but the problem I see seems to occur while the formula itself is being run.
Specifically, when I run brew upgrade opencv (or unlink and run brew install opencv), I get the following message:
==> Downloading https://github.com/Itseez/opencv/archive/2.4.8.tar.gz
Already downloaded: /Library/Caches/Homebrew/opencv-2.4.8.tar.gz
==> Patching
patching file cmake/OpenCVFindOpenNI.cmake
Warning: Formula#python is deprecated and will go away shortly.
Error: undefined method `incdir' for #<PythonDependency: "python" []>
Please report this bug:
https://github.com/Homebrew/homebrew/wiki/troubleshooting
/usr/local/Library/Formula/opencv.rb:49:in `install'
/usr/local/Library/Homebrew/build.rb:165:in `install'
/usr/local/Library/Homebrew/formula.rb:272:in `brew'
/usr/local/Library/Homebrew/formula.rb:617:in `stage'
/usr/local/Library/Homebrew/resource.rb:63:in `unpack'
/usr/local/Library/Homebrew/extend/fileutils.rb:21:in `mktemp'
/usr/local/Library/Homebrew/resource.rb:60:in `unpack'
/usr/local/Library/Homebrew/resource.rb:53:in `stage'
/usr/local/Library/Homebrew/formula.rb:615:in `stage'
/usr/local/Library/Homebrew/formula.rb:267:in `brew'
/usr/local/Library/Homebrew/build.rb:144:in `install'
/usr/local/Library/Homebrew/build.rb:45:in `main'
/usr/local/Library/Homebrew/build.rb:12
/usr/local/Library/Formula/opencv.rb:80
Python doesn't seem to get loaded. I'm also confused by the deprecation warning; everything I've found suggests that the python formula is the one I should be using. Judging by this link, the depends_on :python dependency seems like it should work (and it obviously did when I tried with the previous 2.4.7.1 formula).
For reference, here are my environment variables:
Apple_PubSub_Socket_Render=/tmp/launch-Ygtqzn/Render
CMD_DURATION=20.7s
COMMAND_MODE=unix2003
DISPLAY=/tmp/launch-a4CGwS/org.x:0
GEM_HOME=/Users/myname/.rvm/gems/ruby-1.9.3-p194
GEM_PATH=/Users/myname/.rvm/gems/ruby-1.9.3-p194:/Users/myname/.rvm/gems/ruby-1.9.3-p194@global
GREP_COLOR=97;45
GREP_OPTIONS=--color=auto
HOME=/Users/myname
LANG=en_CA.UTF-8
LOGNAME=myname
PATH=/usr/local/bin:/Users/myname/.rvm/gems/ruby-1.9.3-p194/bin:/Users/myname/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/myname/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/myname/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin:/Users/myname/.rvm/bin:/usr/local/heroku/bin
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/usr/X11/lib/pkgconfig
PWD=/Users/myname/projects/forks/ruby-opencv
SHELL=/usr/local/bin/fish
SHLVL=1
SSH_AUTH_SOCK=/tmp/launch-lvn79S/Listeners
TERM=xterm-color
TERM_PROGRAM=Apple_Terminal
TERM_PROGRAM_VERSION=273.1
TMPDIR=/var/folders/pv/pvvR8qgvGOCfd5dza+ZbVU+++TI/-Tmp-/
USER=myname
__CF_USER_TEXT_ENCODING=0x1F5:0:0
__fish_bin_dir=/usr/local/Cellar/fish/2.0.0/bin
__fish_datadir=/usr/local/Cellar/fish/2.0.0/share/fish
__fish_help_dir=/usr/local/Cellar/fish/2.0.0/share/doc/fish
__fish_sysconfdir=/usr/local/Cellar/fish/2.0.0/etc/fish
rvm_bin_path=/Users/myname/.rvm/bin
rvm_path=/Users/myname/.rvm
rvm_prefix=/Users/myname
rvm_version=1.25.3:master
And python (homebrewed) version
python --version #=>Python 2.7.6
brew doctor output gives me a warning about a passenger config file (which shouldn't influence the building of opencv) and a warning that opencv is unlinked (I unlinked it to try running brew install opencv)
Thanks in advance for the help.
I had the same error. I don't understand why "incdir" and the other variables are not defined correctly in the formula file, but I solved it by editing the opencv formula file as follows, setting each path directly (around line 50):
DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/include/python2.7
DPYTHON_LIBRARY=/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/libpython2.7.dylib
DPYTHON_EXECUTABLE=/usr/local/bin/python
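These are ordinary CMake cache settings that the formula passes as -D arguments to the cmake command it runs; the equivalent standalone invocation would look roughly like this (the paths are copied from the answer above, so they only apply to that exact Python install):
cmake .. \
  -DPYTHON_EXECUTABLE=/usr/local/bin/python \
  -DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/include/python2.7 \
  -DPYTHON_LIBRARY=/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/libpython2.7.dylib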
I worked around this issue by rolling back to an earlier homebrew version, then installing opencv, and then going back to the current version.
See this gist for detailed instructions: https://gist.github.com/frederikhermans/8561382

Mercurial installation issue

We've installed mercurial 1.4.1 and python 2.6.2 on a solaris 8 box. Now some hg commands work as expected, others fail.
I was able to initialize a repository (hg init) and add a file (hg add) but the committing (hg commit) leads to an error message:
abort: could not import module found!
I need a hint where to look - I'm not a python expert. Is this missing "found" module part of the python distribution or does it belong to mercurial? Any idea how to fix it?
Edit
Thanks for your comments - hg debuginstall runs fine, just reports one problem - I didn't set a username in any of the config files. Can't believe that this causes the actual problems...
Edit
--traceback was a good hint!!
Here's the last line (can't copy&paste):
ImportError: ld.so.1: hg: fatal: relocation error:
file:/usr/local/lib/python2.6/lib-dynload/zlib.so:
symbol inflateCopy: referenced symbol not found
The zlib.so library is present; it was installed with either the python or the mercurial package.
Looks like I'm not the only one: here's the same problem with python 2.5 on solaris 10
You need to install the zlib library for your system (libz.so). The inflateCopy symbol first appeared in zlib 1.2, so an older libz.so found earlier on the library path will trigger exactly this relocation error.
Check your LD_LIBRARY_PATH settings.
If it is pulling libz from an odd place, you will need to fix it so that it pulls from /pkg/local/lib first.
I was seeing this:
ldd /pkg/local/lib/python2.7/lib-dynload/zlib.so
libz.so => /import/wgs/lib/libz.so
But now it's working for me.
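A minimal sketch of that check-and-fix, assuming the good zlib lives under /pkg/local/lib as above:
# Make the correct libz visible to the runtime linker first
export LD_LIBRARY_PATH=/pkg/local/lib:$LD_LIBRARY_PATH
# Re-check where Python's zlib extension resolves libz.so from
ldd /pkg/local/lib/python2.7/lib-dynload/zlib.so | grep libz
# Expected: libz.so => /pkg/local/lib/libz.so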
