cibuildwheel fails to compile with static libraries

I have a Python C extension module which relies on static libraries. Below is my file tree; I haven't included all the files, in order to keep the problem simple.
folder/
├── src/
│   ├── main.c
│   └── other.c
├── include/
│   ├── glfw3native.h
│   └── glfw3.h
├── lib/
│   └── libglfw3.a
└── setup.py
Below is my setup.py file; I have removed some unnecessary lines.
import setuptools

setuptools.setup(
    ext_modules=[
        setuptools.Extension(
            "Module.__init__",
            ["src/main.c", "src/other.c"],
            include_dirs=["include"],
            library_dirs=["lib"],
            libraries=["glfw"],
        )
    ]
)
I can successfully compile my project with the following command.
python setup.py bdist_wheel
Now I want to use cibuildwheel to build my project for multiple platforms.
cibuildwheel --platform linux
For some reason, the build fails when it tries to link the libraries. Even though the library path is specified, it shows the following error.
cannot find -lglfw
Why does this happen when compiling with cibuildwheel?

Because static libraries are different on every system, I need to compile my libraries on the corresponding platform; cibuildwheel builds Linux wheels inside a manylinux container, so a library built on the host machine cannot simply be reused there. In the end, I used the CIBW_BEFORE_ALL variable to execute the build commands for my libraries before the wheels are built.
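As an illustration, a minimal sketch of that approach (the extern/glfw path and the exact CMake invocation are assumptions about how the library sources are vendored, and CMake must be available in the build environment):

CIBW_BEFORE_ALL="cmake -S extern/glfw -B build-glfw -DBUILD_SHARED_LIBS=OFF && cmake --build build-glfw && cmake --install build-glfw" cibuildwheel --platform linux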

Related

Building CMake extension with pip

I'm working on a C++/Python project with the following structure:
foo
├── CMakeLists.txt
├── include
├── source
└── python
    ├── foo
    │   ├── _foo_py.py
    │   └── __init__.py
    ├── setup.py
    └── source
        ├── CMakeLists.txt
        └── _foo_cpp.cpp
foo/source and foo/include contain C++ source files and foo/python/source/_foo_cpp.cpp contains pybind11 wrapper code for this C++ code. Running setup.py is supposed to build the C++ code (by running CMake), create a _foo_cpp Python module in the form of a shared object and integrate it with the Python code in _foo_py.py. I.e. I want to be able to simply call python setup.py install from foo/python to install the foo module to my system. I'm currently using a CMake extension class in setup.py to make this work:
import os
import subprocess

from setuptools import Extension
from setuptools.command.build_ext import build_ext


class CMakeExtension(Extension):
    def __init__(self, name, sourcedir):
        # No regular sources; the actual build is delegated to CMake.
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)


class CMakeBuild(build_ext):
    def run(self):
        try:
            subprocess.check_output(['cmake', '--version'])
        except OSError:
            raise RuntimeError("cmake command must be available")
        for ext in self.extensions:
            self.build_extension(ext)

    def build_extension(self, ext):
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        self._setup(ext)
        self._build(ext)

    def _setup(self, ext):
        cmake_cmd = [
            'cmake',
            ext.sourcedir,
        ]
        subprocess.check_call(cmake_cmd, cwd=self.build_temp)

    def _build(self, ext):
        cmake_build_cmd = [
            'cmake',
            '--build', '.',
        ]
        subprocess.check_call(cmake_build_cmd, cwd=self.build_temp)
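For context, a sketch of how such classes are typically wired into setup() (the extension name and the sourcedir value below are assumptions based on the layout above, not taken from the question):

from setuptools import setup

setup(
    name='foo',
    packages=['foo'],
    # '..' points from foo/python back to the directory with the top-level CMakeLists.txt
    ext_modules=[CMakeExtension('foo._foo_cpp', sourcedir='..')],
    cmdclass={'build_ext': CMakeBuild},
)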
The problem arises when I try to directly call pip in foo/python, e.g. like this:
pip wheel -w wheelhouse --no-deps .
It seems that before running the code in setup.py, pip copies the content of the working directory into a temporary directory. This obviously doesn't include the C++ code and the top-level CMakeLists.txt. That in turn causes CMakeBuild._setup to fail because there is seemingly no way to obtain a path to the foo root directory from inside setup.py after it has been copied to another location by pip.
Is there anything I can do to make this setup work with both python and pip? I have seen some approaches that first run cmake to generate a setup.py from a setup.py.in to inject package version, root directory path etc. but I would like to avoid this and have setup.py call cmake instead of the other way around.

Python package published with poetry is not found after install

Over the last few days I have been working on a Python module. I have used poetry as a package management tool in many other projects, but this is the first time I am publishing a package to PyPI.
I was able to run the poetry build and poetry publish commands. I was also able to install the published package:
$ pip3 install git-profiles
Collecting git-profiles
Using cached https://files.pythonhosted.org/packages/0e/e7/bac9027effd1e34a5b5718f2b35c0b28b3d67f3809e2f2981b6c7b58963e/git_profiles-1.1.0-py3-none-any.whl
Installing collected packages: git-profiles
Successfully installed git-profiles-1.1.0
However, right after the install, I am not able to run my package:
$ git-profiles --help
git-profiles: command not found
My project has the following structure:
git-profiles/
├── src/
│   ├── commands/
│   ├── executor/
│   ├── git_manager/
│   ├── profile/
│   ├── utils/
│   ├── __init__.py
│   └── git_profiles.py
└── tests
I tried to work with different scripts configurations in the pyproject.toml file but I've never been able to make it work after install.
[tool.poetry.scripts]
poetry = "src:git_profiles.py"
or
[tool.poetry.scripts]
git-profile = "src:git_profiles.py"
I don't know if this is a python/pip path/version problem or I need to change something in the configuration file.
If it is helpful, this is the GitHub repository I'm talking about. The package is also published on PyPI.
Poetry's scripts section wraps around setuptools' console-script entry points. As such, the entry point name and the call path you give it need to follow exactly the same rules.
In short, a console script does more or less this from the shell:
import my_lib # the module isn't called src, that's just a folder name
# the right name to import is whatever you put at [tool.poetry].name
my_lib.my_module.function()
Which, if given the name my-lib-call (the name can be the same as your module, but it doesn't need to be) would be written like this:
[tool.poetry.scripts]
my-lib-call = "my_lib.my_module:function"
Adapted to your project structure, the following should do the job:
[tool.poetry.scripts]
git-profile = "git-profiles:main"

Python package setup: setup.py with customisation to handle wrapped fortran

I have a Python package I would like to distribute. I have the package set up and am able to download the tarball, unzip it and install it using:
python setup.py install
which works fine.
I would also like to upload the package to PyPi, and enable it to be installed using pip.
However, the package contains f2py-wrapped Fortran, which needs to be compiled at build time, with the resulting .so files moved to the eventual installation folder. I am confused as to how to do this using:
python3 setup.py sdist
followed by:
pip3 install pkg_name_here.tar.gz
The reason being that when I run
python3 setup.py sdist
the custom commands are being run, part of which tries to move the compiled *.so files to the installation folder, which has not yet been created. An outline of the code I have used is shown here:
import os

from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info

'''
BEGIN CUSTOM INSTALL COMMANDS
These classes are used to hook into setup.py's install process. Depending on the context:
$ pip install my-package
Can yield `setup.py install`, `setup.py egg_info`, or `setup.py develop`
'''


def custom_command():
    import sys
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')


class CustomInstallCommand(install):
    def run(self):
        install.run(self)
        custom_command()


class CustomDevelopCommand(develop):
    def run(self):
        develop.run(self)
        custom_command()


class CustomEggInfoCommand(egg_info):
    def run(self):
        egg_info.run(self)
        custom_command()

'''
END CUSTOM INSTALL COMMANDS
'''

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        'egg_info': CustomEggInfoCommand,
    },
    ...
)
In my instance, custom_command() compiles and wraps the Fortran and copies the lib files to the installation folder.
What I would like to know is whether there is a way of only running these custom commands during the installation with pip, i.e. to avoid custom_command() being run during packaging and only run it during installation.
Update
Following Pierre de Buyl's suggestion I have made some progress, but still do not have this working.
The setup.py file currently looks something like:
def setup_f90_ext(parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join
    config = Configuration('', parent_package, top_path)
    tort_src = [join('PackageName/', 'tort.f90')]
    config.add_library('tort', sources=tort_src,
                       extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                       extra_link_args=['-lgomp'])
    sources = [join('PackageName', 'f90wrap_tort.f90')]
    config.add_extension(name='',
                         sources=sources,
                         extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                         libraries=['tort'],
                         extra_link_args=['-lgomp'],
                         include_dirs=['build/temp*/'])
    return config


if __name__ == '__main__':
    from numpy.distutils.core import setup
    import subprocess
    import os
    import sys

    version_file = open(os.getcwd() + '/PackageName/' + 'VERSION')
    __version__ = version_file.read().strip()

    subprocess.call(cmd, shell=True)  # `cmd` is defined in code omitted from this excerpt

    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Package description',
              'description': 'Description',
              'long_description': open('README.txt').read(),
              }
    config2 = dict(config, **setup_f90_ext(parent_package='PackageName', top_path='').todict())
    setup(**config2)
where f90wrap_tort.f90 is the wrapper file generated by f90wrap, and tort.f90 is the original Fortran.
This file works with python setup.py install, but only if I run the command twice.
The first time I run python setup.py install I get the following error:
gfortran:f90: ./PackageName/f90wrap_tort.f90
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
The reason I put the include_dirs=['build/temp*/'] argument in the extension was because I noticed after running python setup.py install the first time tort_mod was being built and stored there.
What I can't figure out is how to get the linking correct so that this is all done in one step.
Can anyone see what I am missing?
After a bit of googling, I suggest the following:
Use NumPy's distutils
Use the add_library keyword (seen here) for your plain Fortran files. This will build the Fortran files as a library but not try to interface to them with f2py.
Pre-build the f90 wrappers with f90wrap, include them in your package archive and specify those files as source in the extension.
I did not test the whole solution as it is a bit time consuming, but this is what SciPy does for some of their modules, see here.
The documentation of NumPy has an entry on add_library.
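A minimal sketch of that layout, reusing the file names from the question (the extension name, the omitted compile flags and the exact paths are illustrative assumptions, not a tested configuration):

from numpy.distutils.core import setup
from numpy.distutils.misc_util import Configuration


def configuration(parent_package='', top_path=None):
    config = Configuration('PackageName', parent_package, top_path)
    # plain Fortran compiled as a helper library, not interfaced by f2py
    config.add_library('tort', sources=['PackageName/tort.f90'])
    # pre-generated f90wrap wrapper, shipped in the sdist and wrapped by f2py
    config.add_extension('_tort',
                         sources=['PackageName/f90wrap_tort.f90'],
                         libraries=['tort'])
    return config


if __name__ == '__main__':
    setup(configuration=configuration)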
EDIT 1: after building with include_dirs=['build/temp.linux-x86_64-2.7'], I obtain this directory structure on the first build attempt.
build/lib.linux-x86_64-2.7
├── crystal_torture
│   ├── cluster.py
│   ├── dist.f90
│   ├── f90wrap_tort.f90
│   ├── graph.py
│   ├── __init__.py
│   ├── minimal_cluster.py
│   ├── node.py
│   ├── node.pyc
│   ├── pymatgen_doping.py
│   ├── pymatgen_interface.py
│   ├── tort.f90
│   ├── tort.py
│   └── tort.pyc
└── crystal_torture.so

Packaging stub files

Let's say I have a very simple package with the following structure:
.
├── foo
│   ├── bar
│   │   └── __init__.py
│   └── __init__.py
└── setup.py
Content of the files:
setup.py:
from distutils.core import setup

setup(
    name='foobar',
    version='',
    packages=['foo', 'foo.bar'],
    url='',
    license='Apache License 2.0',
    author='foobar',
    author_email='',
    description=''
)
foo/bar/__init__.py:
def foobar(x):
    return x
The remaining files are empty.
I install the package using pip:
cd foobar
pip install .
and can confirm it is installed correctly.
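For example, a quick check from any directory (just an illustrative command, not part of the original question):

python -c "from foo.bar import foobar; print(foobar(42))"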
Now I want to create a separate package with stub files:
.
├── foo
│   ├── bar
│   │   └── __init__.pyi
│   └── __init__.pyi
└── setup.py
Content of the files:
setup.py:
from distutils.core import setup
import sys
import pathlib

setup(
    name='foobar_annot',
    version='',
    packages=['foo', 'foo.bar'],
    url='',
    license='Apache License 2.0',
    author='foobar',
    author_email='',
    description='',
    data_files=[
        (
            'shared/typehints/python{}.{}/foo/bar'.format(*sys.version_info[:2]),
            ["foo/bar/__init__.pyi"]
        ),
    ],
)
foo/bar/__init__.pyi:
def foobar(x: int) -> int: ...
I can install this package and see that it creates anaconda3/shared/typehints/python3.5/foo/bar/__init__.pyi in my Anaconda root, but it doesn't look like it is recognized by PyCharm (I get no warnings). When I place the .pyi file in the main package, everything works OK.
I would be grateful for any hints on how to make this work:
I've been trying to make some sense of PEP 484 - Storing and distributing stub files, but to no avail. Even the pathlib part seems to offend my version of distutils.
PY-18597 and https://github.com/python/mypy/issues/1190#issuecomment-188526651 seem to be related but somehow I cannot connect the dots.
I tried putting stubs in .PyCharmX.X/config/python-skeletons but it didn't help.
Some things that work, but don't resolve the problem:
Putting stub files in the current project and marking as sources.
Adding stub package root to the interpreter path (at least in some simple cases).
So, my questions: how do I create a minimal, distributable package with Python stubs that will be recognized by existing tools? Based on my experiments, I suspect one of the following problems:
I misunderstood the structure which should be created by the package in shared/typehints/pythonX.Y - if this is true, how should I define data_files?
PyCharm doesn't consider these files at all (this seems to be contradicted by some comments in the linked issue).
It is supposed to work just fine, but I made some configuration mistake and am looking for an external problem which doesn't exist.
Are there any established procedures to troubleshoot problems like this?
The problem is that you didn't include the foo/__init__.pyi file in your stub distribution. Even though it's empty, it makes foo a stub-file package and enables the search for foo.bar.
You can modify the data_files in your setup.py to include both
data_files=[
    (
        'shared/typehints/python{}.{}/foo/bar'.format(*sys.version_info[:2]),
        ["foo/bar/__init__.pyi"]
    ),
    (
        'shared/typehints/python{}.{}/foo'.format(*sys.version_info[:2]),
        ["foo/__init__.pyi"]
    ),
],
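With both entries, the installed layout (Python 3.5 shown, as in the question) should look roughly like this:

shared/typehints/python3.5/
└── foo
    ├── __init__.pyi
    └── bar
        └── __init__.pyi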

setup.py sdist flattens package file structure, removes intermediate folders

File structure:
.
|-- rdir
| |-- __init__.py
| |-- core
| | |-- __init__.py
| | |-- rdir_core.py
| | |-- rdir_node.py
| |-- generateHTML
| | |-- __init__.py
| |-- rdir.py
|-- setup.py
Setup.py:
from setuptools import setup, find_packages

setup(
    name="rdir",
    version="0.40",
    description="....",
    author="lhfcws",
    author_email="lhfcws#gmail.com",
    url="...",
    license="MIT",
    packages=["rdir"],
    scripts=["rdir/rdir.py"],
    install_requires=['colorama', 'pyquery'],
)
Commands:
sudo python setup.py install #local install
sudo python setup.py sdist upload #pypi upload
Trying from rdir import rdir from another path, such as the home directory, only meets with:
ImportError: No module named core.rdir_core
Of course it works well if I import rdir in the project directory.
Then I looked into site-packages/rdir.egg-info/ and found that all the .py files had been moved into a flat structure:
EGG-INFO
├── PKG-INFO
├── SOURCES.txt
├── dependency_links.txt
├── not-zip-safe
├── requires.txt
├── scripts
│   ├── __init__.py
│   ├── generate_page.py
│   ├── rdir.py
│   ├── rdir_core.py
│   └── rdir_node.py
└── top_level.txt
I also found that if I just import rdir_core in rdir.py, it compiles correctly. So I guessed there is something wrong with my setup.py, and I read some demos, the setup.py files of some well-known Python projects on GitHub, and some official manuals. I changed my setup.py according to those references, but everything failed. I have no idea, so I have to ask for help.
Is there something wrong with my setup.py, or is there anything I've missed? Or please show me a good example of a setup.py for a project with a multi-folder file structure. Thank you!
By the way, if the above does not give you enough information, please take a look at rdir on GitHub.
The problem is the packages keyword in your setup.py. You should list the sub-packages as well as the top-level package.
packages=['rdir', 'rdir.core', 'rdir.generateHTML'],
Or use find_packages, which you already imported:
packages=find_packages(),
I didn't try the sdist part; maybe it is just collecting all the .py files as scripts.
P.S. You can use python setup.py build to test the resulting folder structure.
Inspired by Ray's answer, I solved it myself.
It is the problem of the scripts option in setup.py.
The project used to have a flat structure, so it worked well before 0.30. However, on my computer the scripts option installs the script to /usr/local/bin and generates a flat structure in the egg.
If I don't remove the old scripts in /usr/local/bin, the Python interpreter will pick up those scripts under /usr/local/bin first and cause the error.
So the solution is:
remove the scripts option from setup.py
set packages=find_packages() (see the sketch after this list)
sudo rm /usr/local/bin/rdir*
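For reference, a sketch of the resulting setup.py after those changes (the unchanged fields are copied from the question):

from setuptools import setup, find_packages

setup(
    name="rdir",
    version="0.40",
    description="....",
    author="lhfcws",
    author_email="lhfcws#gmail.com",
    url="...",
    license="MIT",
    packages=find_packages(),  # picks up rdir, rdir.core and rdir.generateHTML
    install_requires=['colorama', 'pyquery'],
)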
Thanks to you all.
