I'm currently developing a C++/Python package using pybind11 for the Python bindings. The project is mixed: some parts are written in Python, and others are written in C++ and compiled as an extension module.
The project uses cmake>1.7. Roughly this is the directory structure of the project:
PythonProject
-> python_sources
-> include [C++ headers]
-> src [C++ sources]
--> module.cpp [pybind11 bindings declaration]
--> cpp_sources [implementation of the headers]
The project also depends on Eigen3.
I have set up the project so that, when building wheels, the C++ module is compiled first and the resulting *.so (on macOS) is then copied to python_sources.
On my machine (macOS 11.2 with Xcode 12) I can generate the wheels with
python -m build, or python -m build --sdist if I only want a source distribution.
This works well: the built C++ module is copied correctly to PythonProject/python_sources, so it can be imported as import python_sources.module.
Also, I'm able to test the installation with pip install -e ./ and all tests pass.
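For clarity, the local sequence boils down to roughly the following (a sketch; the module target and build directory names mirror the CI configuration below):

cmake -S . -B build_dependencies
cmake --build build_dependencies --target module -j 2
cp build_dependencies/src/*.so python_sources/
python -m build    # sdist + wheel, with the compiled module included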
The issue is that when running cibuildwheel on GitHub Actions, the C++ module never seems to be built.
I have tried to force building the module and copying it to python_sources, but this does not work either, and the tests do not pass.
This is my cibuildwheel configuration:
- uses: joerick/cibuildwheel@v1.10.0
  env:
    # Python 2.7 on Windows with workaround not supported by scikit-build
    CIBW_SKIP: cp27-win*
    CIBW_BUILD: cp38-${{ matrix.cibw-arch }}
    CIBW_ENVIRONMENT: PYTHONPROJECT_EXTRA_CMAKE_ARGS="-DPREBUILT_DEPENDENCIES=$(python -c 'import os; print(os.getcwd().replace(os.path.sep, '/'))')/build_dependencies"
      CMAKE_BUILD_PARALLEL_LEVEL=2
      CMAKE_OSX_ARCHITECTURES="arm64"
    CIBW_BEFORE_ALL_MACOS: |
      brew install libomp && brew install eigen && brew install openblas
    CIBW_BEFORE_ALL_LINUX: |
      yum install eigen3-devel
    CIBW_BEFORE_BUILD: cmake -S . -B build_dependencies $CMAKE_ARCH && cmake --build build_dependencies --target numerical -j 2 && cp build_dependencies/src/*.so python_sources/
    CIBW_BEFORE_BUILD_MACOS: cmake -S . -B build_dependencies -DCMAKE_OSX_ARCHITECTURES="arm64" $CMAKE_ARCH && cmake --build build_dependencies --target module -j 2 && cp build_dependencies/src/*.so python_sources/
    CIBW_BEFORE_TEST: pip install --upgrade -r tests/requirements.txt
    CIBW_TEST_COMMAND: pytest {project}/tests
I want to be able to build these wheels on GitHub Actions (not necessarily with cibuildwheel, although that is my preferred option), but I am quite lost here and would really appreciate some help on how to set up the CI configuration.
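For reference, this is the kind of setup.py hook that would let each wheel build compile the module itself, instead of relying on CIBW_BEFORE_BUILD (a minimal sketch based on pybind11's cmake_example; the target name module and the package python_sources come from the config above, everything else is an assumption):

# setup.py -- minimal sketch of a CMake-driven build (untested; names are assumptions)
import os
import subprocess
import sys

from setuptools import Extension, setup
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=""):
        # No sources here: CMake produces the extension binary itself.
        super().__init__(name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # Put the built *.so next to the python_sources package in the wheel.
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        cmake_args = [
            f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}",
            f"-DPYTHON_EXECUTABLE={sys.executable}",
        ]
        os.makedirs(self.build_temp, exist_ok=True)
        subprocess.check_call(["cmake", ext.sourcedir] + cmake_args, cwd=self.build_temp)
        subprocess.check_call(["cmake", "--build", ".", "--target", "module"], cwd=self.build_temp)

setup(
    name="PythonProject",
    packages=["python_sources"],
    ext_modules=[CMakeExtension("python_sources.module")],
    cmdclass={"build_ext": CMakeBuild},
)

With something like this, cibuildwheel would only need CIBW_BEFORE_ALL to install Eigen; the compile step then happens inside each isolated wheel build.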
What I need:
I want my Yocto project to build a package for my Python project with all dependencies included. The project has to run out of the box on the resulting read-only SD card image.
The build should simply install all requirements, at their required versions, into the package.
What I tried without luck:
Calling pip in do_install():
"pip/pip3 is not found", even though it's in RDEPENDS.
Still, this would be my preferred way.
With inherit pypi:
When trying inherit pypi, it also tries to fetch my local sources (my Python project) from PyPI, and I always have to copy the requirements into the recipe. This is not my preferred way.
Calling pip in pkg_postinst():
It tries to install the modules on first start and fails, because the system has no internet connection and is read-only. The app must run out of the box, without installation at first boot; this approach does its work too late.
What I'm aiming for:
There should be no need to change anything in the recipes when something changes in requirements.txt.
Background information
I'm working with Yocto Rocko in a Linux environment.
There is no pip installed on the host system; I want to use the one pulled in via RDEPENDS on the target system.
Building the Package (only this recipe) with:
bitbake myproject
Building the whole sdcard image:
bitbake myProject-image-base
The recipe:
myproject.bb (relevant lines):
RDEPENDS_${PN} = "python3 python3-pip"
APP_SOURCES_DIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../../../../app-sources')}"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI = " \
    file://${APP_SOURCES_DIR}/myProject \
    ...
"
inherit allarch # tried also with pypi and setuptools3 for the pypi way.
do_install() { # Line 116
    install -d -m 0755 ${D}/myProject
    cp -R --no-dereference --preserve=mode,links -v ${APP_SOURCES_DIR}/myProject/* ${D}/myProject/
    pip3 install -r ${APP_SOURCES_DIR}/myProject/requirements.txt
    # Tried also: python ${APP_SOURCES_DIR}/myProject/setup.py install
}

# Tried also this, but it's no option because the data MUST be included in the package:
# pkg_postinst_${PN}() {
#     #!/bin/sh -e
#     pip3 install -r /myProject/requirements.txt
# }

FILES_${PN} = "/myProject/*"
Resulting Errors:
I expected the modules listed in requirements.txt to be installed into the myProject package, so that the Python app will run directly on the resulting read-only SD card image.
With pip, I get:
| /*/tmp/work/*/myProject/0.1.0-r0/temp/run.do_install: 116: pip3: not found
| WARNING: exit code 127 from a shell command.
| ERROR: Function failed: do_install ...
When using pypi:
404 Not Found
ERROR: myProject-0.1.0-r0 do_fetch: Fetcher failure for URL: 'https://files.pythonhosted.org/packages/source/m/myproject/myproject-0.1.0.tar.gz'. Unable to fetch URL from any source.
=> But it should not fetch myProject at all, since it exists only locally and nowhere remote.
Any ideas? What would be the best way to get to a ready-to-use SD card image without needing to change recipes when requirements.txt changes?
You should use RDEPENDS_${PN} in the recipe to take care of your app's dependencies.
For example, assuming your Python app needs the aws-iot-device-sdk-python module, you should add it to RDEPENDS in the recipe. In your case, it would look like this:
RDEPENDS_${PN} = "python3 \
                  python3-pip \
                  python3-aws-iot-device-sdk-python \
"
Here's the link showing the Python modules supported by the OpenEmbedded meta-python layer:
https://layers.openembedded.org/layerindex/branch/master/layer/meta-python/
If the modules you need are not there, you will likely need to create recipes for the modules.
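A minimal recipe for such a module usually looks something like this (a sketch; the module name, version, license checksum, and source checksum are placeholders you would fill in):

# python3-somemodule_1.2.3.bb -- hypothetical module name and version
SUMMARY = "Example recipe for a PyPI module"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<fill-in>"

PYPI_PACKAGE = "somemodule"
SRC_URI[sha256sum] = "<fill-in>"

inherit pypi setuptools3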
My newest findings:
Yocto/BitBake seems to suppress interpreting the requirements, because doing so would break automatic dependency resolution, which could lead to conflicts.
Reason: the modules required by setup.py would not be stored as independent packages, but as part of my package. So BitBake does not know about these modules, and they could conflict with other packages that require the same modules in different versions.
What was in my recipe:
MY_INSTALL_ARGS = "--root=${D} \
    --prefix=${prefix} \
    --install-lib=${PYTHON_SITEPACKAGES_DIR} \
    --install-data=${datadir}"

do_install() {
    PYTHONPATH=${PYTHON_SITEPACKAGES_DIR} \
    ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py install ${MY_INSTALL_ARGS}
}
If I execute this outside of bitbake as python3 setup.py install ${MY_INSTALL_ARGS}, everything is installed correctly, but inside the recipe no requirements are installed.
There is a parameter --no-deps, but I didn't find where it is set.
I think there could be one way to make use of the requirements from setup.py:
- Find out where --no-deps is set for easy_install in the openembedded/poky layers, and disable it there.
- Create a separate PYTHON_SITEPACKAGES_DIR.
- Install this separate PYTHON_SITEPACKAGES_DIR in, e.g., the home directory as a private Python modules dir.
This way, no Python module would trigger a conflict.
Since I do not have the time to experiment with this, I will now define one recipe per requirement.
Have you tried installing pip?
Debian
apt-get install python-pip
apt-get install python3-pip
CentOS
yum install python-pip
A few months ago I wrote this skeleton package for linking a C library to Python using NumPy and Cython. When I created it, compilation worked fine and I was able to use the C code from Python.
However, it now seems that my Python, Cython, or NumPy versions have changed in a way that the package won't compile anymore. Was there an API change in one of these packages that I'm not aware of? My .pxd header file isn't found, and I get a number of compilation errors regarding the use of with nogil, such as:
Error compiling Cython file:
------------------------------------------------------------
...
cdef long total
cdef long n = x.shape[0]
# If we want to use "nogil", we have to use "x.data", not "x".
with nogil:
total = sum_array(x.data, n)
^
------------------------------------------------------------
src/cython_wrapper.pyx:32:25: Coercion from Python not allowed without the GIL
Importantly, these errors do not occur with the previous versions of Cython (0.27.1) and NumPy (1.13.3).
Here are the steps to reproduce this:
Old Environment
virtualenv -p python3.6 old
cd old
source bin/activate
pip install cython==0.27.1 numpy==1.13.3
git clone https://github.com/GjjvdBurg/NumPy_C_Extension
cd NumPy_C_Extension
rm src/cython_wrapper.c
python setup.py build_ext -i
# Compilation proceeds without errors
New Environment
virtualenv -p python3.6 new
cd new
source bin/activate
pip install cython==0.28.3 numpy==1.14.5
git clone https://github.com/GjjvdBurg/NumPy_C_Extension
cd NumPy_C_Extension
rm src/cython_wrapper.c
python setup.py build_ext -i
# Compilation fails
Any help would be greatly appreciated!
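Update: a workaround sketch that should avoid the coercion error, assuming sum_array is an external C function that can be declared nogil (the header name and signature below are assumptions, not the repository's actual code):

# cython_wrapper.pyx -- workaround sketch; header name and signature are assumptions
cimport cython
cimport numpy as np

np.import_array()

cdef extern from "sum_array.h":
    long sum_array(long *arr, long n) nogil

@cython.boundscheck(False)
@cython.wraparound(False)
def py_sum_array(np.ndarray[long, ndim=1, mode="c"] x):
    cdef long total
    cdef long n = x.shape[0]
    with nogil:
        # &x[0] is already a C pointer, so no Python coercion is needed
        total = sum_array(&x[0], n)
    return total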
I was trying to build opencv for python3. However, cmake always sets the Python build option to Python 2.7.11, even after I manually specified the include and lib options for python3:
-- Python 2:
-- Interpreter: /home/ryu/anaconda2/bin/python2.7 (ver 2.7.11)
-- Python 3:
-- Interpreter: /usr/bin/python3 (ver 3.4.3)
-- Libraries: /usr/lib/x86_64-linux-gnu/libpython3.4m (ver 3.4.3)
-- numpy: /home/ryu/.local/lib/python3.4/site-packages/numpy/core/include (ver 1.11.0)
-- packages path: lib/python3.4/dist-packages
--
-- Python (for build): /home/ryu/anaconda2/bin/python2.7
Did I miss some cmake option?
OS: Ubuntu 14.04
Thanks!
You can override the Python executable used for the build by passing the PYTHON_DEFAULT_EXECUTABLE argument with the Python executable path during the cmake invocation.
cmake {...} -DPYTHON_DEFAULT_EXECUTABLE=$(which python3) ..
I was struggling with this one for some hours and the answers mentioned above didn't solve the problem straightaway.
Adding to Ivan's answer, I had to include these flags in cmake to make this work:
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=ON \
-D HAVE_opencv_python3=ON \
-D PYTHON_DEFAULT_EXECUTABLE=<path_to_python3>
I'll leave this here in case it is useful for someone else in the future.
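Putting it together, the full invocation from the build directory would look roughly like this (a sketch, assuming an out-of-source build directory and python3 on your PATH):

cmake -D BUILD_NEW_PYTHON_SUPPORT=ON \
      -D BUILD_opencv_python3=ON \
      -D HAVE_opencv_python3=ON \
      -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
      ..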
It took me some hours. I built a Dockerfile with OpenCV for python3.
The key line is
pip install numpy
Full Dockerfile:
FROM python:3.8
RUN apt-get update && apt-get -y install \
    cmake \
    qtbase5-dev \
    libdc1394-22-dev \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev

RUN cd /lib \
    && git clone --branch 4.1.1 --depth 1 https://github.com/opencv/opencv.git \
    && git clone --branch 4.1.1 --depth 1 https://github.com/opencv/opencv_contrib.git

RUN pip install numpy \
    && mkdir /lib/opencv/build \
    && cd /lib/opencv/build \
    && cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local -DWITH_TBB=ON -DWITH_V4L=ON -DWITH_QT=ON -DWITH_OPENGL=ON -DWITH_FFMPEG=ON -DOPENCV_ENABLE_NONFREE=ON -DOPENCV_EXTRA_MODULES_PATH=/lib/opencv_contrib/modules .. \
    && make -j8 \
    && make install
CMD ["bash"]
The main point is to force the build to produce the cv2 module for python3.
To make that happen, python3 must be included in the To be built line (see the cmake output and CMakeCache.txt in the build folder of opencv).
ref https://breakthrough.github.io/Installing-OpenCV/
If there are any errors, ensure that you downloaded all the required packages - the output should help track down what is missing. To ensure the Python module will be built, you should see python2 in the list of configured modules after running cmake
(in my case python3)
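For example, one quick way to inspect that list after configuring (a sketch, run from the build directory):

cmake .. 2>&1 | grep -A 3 "OpenCV modules"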
I've been trying to install opencv on a Pi3, and this solution didn't work for me, as python (for build) was always set to Python2.7. However, I found that changing the order of an elseif statement at the bottom of 'OpenCVDetectPython.cmake' fixed the problem. For me, this file is located at '~/opencv-3.3.1/cmake'.
The original code segment:
if(PYTHON_DEFAULT_EXECUTABLE)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
elseif(PYTHON2INTERP_FOUND) # Use Python 2 as default Python interpreter
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON2_EXECUTABLE}")
elseif(PYTHON3INTERP_FOUND) # Use Python 3 as fallback Python interpreter (if there is no Python 2)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON3_EXECUTABLE}")
endif()
My re-ordered code segment:
if(PYTHON_DEFAULT_EXECUTABLE)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
elseif(PYTHON3INTERP_FOUND) # Use Python 3 as fallback Python interpreter (if there is no Python 2)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON3_EXECUTABLE}")
elseif(PYTHON2INTERP_FOUND) # Use Python 2 as default Python interpreter
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON2_EXECUTABLE}")
endif()
I don't know the reasoning behind it, but cmake is set to default to python2 if python2 exists; swapping the order of these elseif statements makes it default to python3 if that exists.
Disclaimer: I was using the script found at https://gist.github.com/willprice/c216fcbeba8d14ad1138 to download, install and build everything (the script was modified to not create a virtual environment, as I didn't want one, and to use -j1 instead of -j4, since the build failed at around 85% when running with multiple cores).
I don't think the relevant file exists until you have attempted a build.
Changing the options in cmake did nothing for me, no matter what options I modified. The simplest (hacky) solution for me was:
sudo mv /usr/bin/python2.7 /usr/bin/pythonNO-temp
Then you build and install opencv, and afterwards:
sudo mv /usr/bin/pythonNO-temp /usr/bin/python2.7
I am trying to install the Python package "M2Crypto" via requirements.txt, and I receive the following error message:
/usr/include/openssl/opensslconf.h:36: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
error: command 'swig' failed with exit status 1
I tried passing
option_name: SWIG_FEATURES
value: "-cpperraswarn -includeall -I/usr/include/openssl"
But the error persists. Any ideas?
The following config file (placed in .ebextensions) works for me:
packages:
  yum:
    swig: []
container_commands:
  01_m2crypto:
    command: 'SWIG_FEATURES="-cpperraswarn -includeall -D`uname -m` -I/usr/include/openssl" pip install M2Crypto==0.21.1'
Make sure you don't specify M2Crypto in your requirements.txt, though; Elastic Beanstalk will try to install all dependencies before running the container commands.
I have found a solution that gets M2Crypto installed on Beanstalk, but it is a bit of a hack, and it is your responsibility to make sure it is suitable for a production environment. I dropped M2Crypto from my project because this issue is ridiculous; try pycrypto if you can.
Based on (I only added python setup.py test):
#!/bin/bash
python -c "import M2Crypto" 2> /dev/null
if [ "$?" == 1 ]
then
    cd /tmp/
    pip install -d . --use-mirrors M2Crypto==0.21.1
    tar xvfz M2Crypto-0.21.1.tar.gz
    cd M2Crypto-0.21.1
    ./fedora_setup.sh build
    ./fedora_setup.sh install
    python setup.py test
fi
In the environment config file
commands:
  m2crypto:
    command: scripts/m2crypto.sh
    ignoreErrors: True
    test: echo '! python -c "import M2Crypto"' | bash
ignoreErrors is NOT a good idea, but I just used it to test whether the package actually gets installed, and it seems like it does.
Again, this might seem to get the package installed, but I am not sure, because removing ignoreErrors causes a failure. Therefore, I won't mark this as the accepted answer, but it was way too much for a comment.
I have Python 2.7.3 installed on RHEL 6, and when I tried to install pysvn-1.7.6, I got an error. What should I do?
/search/python/pysvn-1.7.6/Import/pycxx-6.2.4/CXX/Python2/Objects.hxx:2912: warning: deprecated conversion from string constant to 'char*'
Compile: pysvn_svnenv.cpp into pysvn_svnenv.o
Compile: pysvn_profile.cpp into pysvn_profile.o
Compile: /search/python/pysvn-1.7.6/Import/pycxx-6.2.4/Src/cxxsupport.cxx into cxxsupport.o
Compile: /search/python/pysvn-1.7.6/Import/pycxx-6.2.4/Src/cxx_extensions.cxx into cxx_extensions.o
Compile: /search/python/pysvn-1.7.6/Import/pycxx-6.2.4/Src/cxxextensions.c into cxxextensions.o
Compile: /search/python/pysvn-1.7.6/Import/pycxx-6.2.4/Src/IndirectPythonInterface.cxx into IndirectPythonInterface.o
Link pysvn/_pysvn_2_7.so
make: *** No rule to make target `egg'. Stop.
error: Not a URL, existing file, or requirement spec: 'dist/pysvn-1.7.6-py2.7-linux-x86_64.egg'
I solved this problem; the reason is that I had made a mistake.
I had executed the following command, which is not in the instructions:
python setup.py install
The installation steps are (Source is the directory name inside the pysvn directory):
cd Source
python setup.py configure
make
cd ../Tests
make
cd Source
mkdir [YOUR PYTHON LIBDIR]/site-packages/pysvn
cp pysvn/__init__.py [YOUR PYTHON LIBDIR]/site-packages/pysvn
cp pysvn/_pysvn*.so [YOUR PYTHON LIBDIR]/site-packages/pysvn
I had the same problem, and I found this solution, which works.
Download the latest epel-release rpm from
http://dl.fedoraproject.org/pub/epel/6/x86_64/
For now:
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Install epel-release rpm:
rpm -Uvh epel-release*rpm
Install pysvn rpm package:
yum install pysvn