I'm working on a C++/Python project with the following structure:
foo
├── CMakeLists.txt
├── include
├── source
└── python
    ├── foo
    │   ├── _foo_py.py
    │   └── __init__.py
    ├── setup.py
    └── source
        ├── CMakeLists.txt
        └── _foo_cpp.cpp
foo/source and foo/include contain C++ source files and foo/python/source/_foo_cpp.cpp contains pybind11 wrapper code for this C++ code. Running setup.py is supposed to build the C++ code (by running CMake), create a _foo_cpp Python module in the form of a shared object and integrate it with the Python code in _foo_py.py. I.e. I want to be able to simply call python setup.py install from foo/python to install the foo module to my system. I'm currently using a CMake extension class in setup.py to make this work:
import os
import subprocess

from setuptools import Extension
from setuptools.command.build_ext import build_ext


class CMakeExtension(Extension):
    def __init__(self, name, sourcedir):
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)


class CMakeBuild(build_ext):
    def run(self):
        try:
            subprocess.check_output(['cmake', '--version'])
        except OSError:
            raise RuntimeError("cmake command must be available")
        for ext in self.extensions:
            self.build_extension(ext)

    def build_extension(self, ext):
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        self._setup(ext)
        self._build(ext)

    def _setup(self, ext):
        cmake_cmd = [
            'cmake',
            ext.sourcedir,
        ]
        subprocess.check_call(cmake_cmd, cwd=self.build_temp)

    def _build(self, ext):
        cmake_build_cmd = [
            'cmake',
            '--build', '.',
        ]
        subprocess.check_call(cmake_build_cmd, cwd=self.build_temp)
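For context, the setup() call that wires these classes together looks roughly like this (a minimal sketch; the package metadata and the sourcedir value '..', i.e. the foo root relative to foo/python where setup.py lives, are assumptions about this layout):

from setuptools import setup

setup(
    name='foo',
    version='0.1.0',
    packages=['foo'],
    # sourcedir='..' points CMake at the foo root containing the top-level
    # CMakeLists.txt; this path is an assumption based on the tree above.
    ext_modules=[CMakeExtension('foo._foo_cpp', sourcedir='..')],
    cmdclass={'build_ext': CMakeBuild},
    zip_safe=False,
)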
The problem arises when I try to call pip directly in foo/python, e.g. like this:
pip wheel -w wheelhouse --no-deps .
It seems that before running the code in setup.py, pip copies the content of the working directory into a temporary directory. This obviously doesn't include the C++ code and the top-level CMakeLists.txt. That in turn causes CMakeBuild._setup to fail because there is seemingly no way to obtain a path to the foo root directory from inside setup.py after it has been copied to another location by pip.
Is there anything I can do to make this setup work with both python setup.py install and pip? I have seen some approaches that first run CMake to generate a setup.py from a setup.py.in in order to inject the package version, root directory path, etc., but I would like to avoid this and have setup.py call cmake instead of the other way around.
Related
I am creating documentation with Sphinx. My folder structure looks as follows:
MyProject
├── mypackage
│   ├── __init__.py
│   ├── mycode.py
│   └── etc.
└── docs
    ├── build
    ├── make.bat
    ├── Makefile
    └── source
        ├── conf.py
        ├── index.rst
        ├── _static
        └── _templates
I begin by running make clean and make html in the docs directory. Next, to populate the documentation, I run sphinx-apidoc -o ./source ../mypackage, and all corresponding .rst files are generated as expected. Finally, I run make clean and make html once more to ensure a clean build, as suggested in the Sphinx-RTD-Tutorial. However, on this final build, I get the following output:
Running Sphinx v4.0.2
making output directory... done
[autosummary] generating autosummary for: index.rst, mypackage.rst, mypackage.mycode.rst
Extension error (sphinx.ext.autosummary):
Handler <function process_generate_options at 0x10678dee0> for event 'builder-inited' threw an exception (exception: list.remove(x): x not in list)
make: *** [html] Error 2
Removing the autosummary extension and just running autodoc with the same sequence of commands leads to a similar error:
Exception occurred:
File "/Users/myname/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/sphinx/ext/autodoc/mock.py", line 151, in mock
sys.meta_path.remove(finder)
ValueError: list.remove(x): x not in list
Here is the source code method that the error comes from:
@contextlib.contextmanager
def mock(modnames: List[str]) -> Generator[None, None, None]:
    """Insert mock modules during context::

        with mock(['target.module.name']):
            # mock modules are enabled here
            ...
    """
    try:
        finder = MockFinder(modnames)
        sys.meta_path.insert(0, finder)
        yield
    finally:
        sys.meta_path.remove(finder)
        finder.invalidate_caches()
Does anyone know what might be raising this error or have an idea as to what is happening in this method? Could it have to do with my specification of sys.path in my conf.py file?
[conf.py]
sys.path.insert(0, os.path.abspath('../../mypackage'))
I was able to resolve this error using the autodoc_mock_imports config:
autodoc_mock_imports
This value contains a list of modules to be mocked up. This is useful when some external dependencies are not met at build time and break the building process. You may only specify the root package of the dependencies themselves and omit the sub-modules:

autodoc_mock_imports = ["django"]

Will mock all imports under the django package.

New in version 1.3.

Changed in version 1.6: This config value only requires to declare the top-level modules that should be mocked.
I'm trying to package a series of configuration files along with some source code. I have a directory structure like this (which I cannot change, due to the nature of the team)
.
├── configs
│   ├── machines
│   ├── scope
├── esm_tools
│   ├── __init__.py
├── README.rst
├── setup.cfg
├── setup.py

61 directories, 45 files (Truncated)
From what I understand here (https://docs.python.org/3/distutils/setupscript.html#installing-package-data), I can add some parts to the setup call:
setup(
    # ... other stuff
    include_package_data=True,
    name="esm_tools",
    packages=["configs", "esm_tools"],
    package_dir={"configs": "configs", "esm_tools": "esm_tools"},
    package_data={'configs': ['*']},
    version="4.1.5",
    zip_safe=False,
)
However, I can't access the package data with:
In [1]: import pkg_resources
In [2]: pkg_resources.resource_listdir("esm_tools", "config")
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-2-f0f255c14df6> in <module>
----> 1 pkg_resources.resource_listdir("esm_tools", "config")
/global/AWIsoft/miniconda/4.7.12/lib/python3.7/site-packages/pkg_resources/__init__.py in resource_listdir(self, package_or_requirement, resource_name)
1161 """List the contents of the named resource directory"""
1162 return get_provider(package_or_requirement).resource_listdir(
-> 1163 resource_name
1164 )
1165
/global/AWIsoft/miniconda/4.7.12/lib/python3.7/site-packages/pkg_resources/__init__.py in resource_listdir(self, resource_name)
1439
1440 def resource_listdir(self, resource_name):
-> 1441 return self._listdir(self._fn(self.module_path, resource_name))
1442
1443 def metadata_listdir(self, name):
/global/AWIsoft/miniconda/4.7.12/lib/python3.7/site-packages/pkg_resources/__init__.py in _listdir(self, path)
1608
1609 def _listdir(self, path):
-> 1610 return os.listdir(path)
1611
1612 def get_resource_stream(self, manager, resource_name):
FileNotFoundError: [Errno 2] No such file or directory: '/home/ollie/pgierz/dev/esm_tools/esm_tools/esm_tools/config'
Any help would be greatly appreciated, I'm not sure what I'm doing wrong...
Based on your call pkg_resources.resource_listdir("esm_tools", "config"), I assume you want to remap configs to esm_tools.config in the installed package:
site-packages
├── esm_tools
│   ├── __init__.py
│   ├── config
│   │   ├── machines
│   │   ├── scope
This means you have to do the following things:

1. Tell setuptools to include a subpackage esm_tools.config (even though it doesn't really exist in the source code base and is technically a namespace package; we'll construct it via further configuration).
2. Tell setuptools where the sources for the new esm_tools.config package are located (again, this is just a necessary measure to tell setuptools what to include in the dist; the package itself won't provide any Python sources, since no Python files are located in configs).
3. Tell setuptools to include package data for esm_tools.config from the correct path.
Example:
setup(
    ...,
    packages=['esm_tools', 'esm_tools.config'],           # 1
    package_dir={'esm_tools.config': 'configs'},          # 2
    package_data={'esm_tools.config': ['../configs/*']},  # 3
)
Note that this won't work with editable installs (neither via pip install --editable . nor with python setup.py develop), so you will have to construct more or less ugly local workarounds with symlinks, .pth files, or the like. The wheel dist (built via python setup.py bdist_wheel or pip wheel) will work out of the box; for source dists you'll have to include the configs dir via MANIFEST.in, as package_data isn't read at sdist time:
# MANIFEST.in
...
graft configs
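With the remapping above in place (and the package installed from a wheel rather than as an editable install), the original lookup should resolve; a quick sketch:

import pkg_resources

# Both calls should now find the installed data, since configs/ ends up
# installed as esm_tools/config next to __init__.py.
print(pkg_resources.resource_listdir("esm_tools", "config"))
print(pkg_resources.resource_listdir("esm_tools", "config/machines"))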
I have a package called clana (Github, PyPI) with the following structure:
.
├── clana
│   ├── cli.py
│   ├── config.yaml
│   ├── __init__.py
│   ├── utils.py
│   └── visualize_predictions.py
├── docs/
├── setup.cfg
├── setup.py
├── tests/
└── tox.ini
The setup.py looks like this:
from setuptools import find_packages
from setuptools import setup

requires_tests = [...]
install_requires = [...]

config = {
    "name": "clana",
    "version": "0.3.6",
    "author": "Martin Thoma",
    "author_email": "info@martin-thoma.de",
    "maintainer": "Martin Thoma",
    "maintainer_email": "info@martin-thoma.de",
    "packages": find_packages(),
    "entry_points": {"console_scripts": ["clana=clana.cli:entry_point"]},
    "install_requires": install_requires,
    "tests_require": requires_tests,
    "package_data": {"clana": ["clana/config.yaml"]},
    "include_package_data": True,
    "zip_safe": False,
}

setup(**config)
How to check that it didn't work
Quick
python3 setup.py sdist
open dist/clana-0.3.8.tar.gz # config.yaml is not in this file
The real check
I thought this would make sure that the config.yaml is in the same directory as the cli.py when the package is installed. But when I try this:
virtualenv venv
source venv/bin/activate
pip install clana
cd venv/lib/python3.6/site-packages/clana
ls
I get:
cli.py __init__.py __pycache__ utils.py visualize_predictions.py
The way I upload it to PyPI:
python3 setup.py sdist bdist_wheel && twine upload dist/*
So the config.yaml is missing. How can I make sure it is there?
You can add a file named MANIFEST.in next to setup.py with a list of the files you want to add, wildcards allowed (e.g. include *.yaml or include clana/config.yaml).
The option include_package_data=True then makes the files listed in the manifest part of the built (installed) distribution as well.
In short: add config.yaml to MANIFEST.in, and set include_package_data. One without the other is not enough.
Basically it goes like this:
MANIFEST.in adds files to sdist (source distribution).
include_package_data adds these same files to bdist (built distribution), i.e. it extends the effect of MANIFEST.in to bdist.
exclude_package_data prevents files in sdist from being added to bdist, i.e. it filters the effect of include_package_data.
package_data adds files to bdist, i.e. it adds build artifacts (typically the products of custom build steps) to your bdist and has of course no effect on sdist.
So in your case, the file config.yaml is not installed because it is not added to your bdist (built distribution). There are two ways to fix this, depending on where the file comes from:
either the file is a build artifact (typically it is somehow created during the ./setup.py build phase), then you need to add it to package_data;
or the file is part of your source (typically it is in your source code repository), then you need to add it to MANIFEST.in, set include_package_data, and leave it out of exclude_package_data (this seems to be your case here).
See:
https://stackoverflow.com/a/54953494/11138259
https://setuptools.readthedocs.io/en/latest/setuptools.html#including-data-files
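Putting the second fix together for this package, a minimal sketch (assuming config.yaml sits at clana/config.yaml, as in the tree above) would be a MANIFEST.in next to setup.py:

# MANIFEST.in
include clana/config.yaml

combined with include_package_data=True in setup.py, which is already set in the config dict above.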
Following from the documentation on including data files, if your package has data files such as .yaml files, you may include them like so:
setup(
    ...
    package_data={
        "": ["*.yaml"],
    },
    ...
)
This will allow any file in your package with the file extension .yaml to be included.
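Once the file actually ships with the installed package, it can be located at run time relative to the package; a small sketch using pkg_resources (whether clana itself reads the file this way is an assumption):

import pkg_resources

# Absolute path of config.yaml inside the installed clana package.
config_path = pkg_resources.resource_filename("clana", "config.yaml")
with open(config_path) as f:
    print(f.read())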
I'm struggling to figure out how to copy the wrapper generated by SWIG to the same level as the SWIG shared library. Consider this tree structure:
│   .gitignore
│   setup.py
│
├───hello
├───src
│       hello.c
│       hello.h
│       hello.i
│
└───test
        test_hello.py
and this setup.py:
import os
import sys

from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as _build_py


class build_py(_build_py):
    def run(self):
        self.run_command("build_ext")
        return super().run()


setup(
    name='hello_world',
    version='0.1',
    cmdclass={'build_py': build_py},
    packages=["hello"],
    ext_modules=[
        Extension(
            'hello._hello',
            [
                'src/hello.i',
                'src/hello.c'
            ],
            include_dirs=[
                "src",
            ],
            depends=[
                'src/hello.h'
            ],
        )
    ],
    py_modules=[
        "hello"
    ],
)
When I do pip install ., I get this content in site-packages:
>tree /f d:\virtual_envs\py364_32\Lib\site-packages\hello
D:\VIRTUAL_ENVS\PY364_32\LIB\SITE-PACKAGES\HELLO
        _hello.cp36-win32.pyd

>tree /f d:\virtual_envs\py364_32\Lib\site-packages\hello_world-0.1.dist-info
D:\VIRTUAL_ENVS\PY364_32\LIB\SITE-PACKAGES\HELLO_WORLD-0.1.DIST-INFO
        INSTALLER
        METADATA
        RECORD
        top_level.txt
        WHEEL
As you can see, hello.py (the file generated by SWIG) hasn't been copied into site-packages.
Thing is, I've already tried many answers from the similar posts below:
setup.py: run build_ext before anything else
python distutils not include the SWIG generated module
Unfortunately, the question still remains unsolved.
QUESTION: How can I fix my current setup.py so that the SWIG wrapper is copied to the same level as the .pyd file?
setuptools cannot do this the way you want: it will look for py_modules only where setup.py is located. The easiest way IMHO is keeping the SWIG modules where you want them in the namespace/directory structure: rename src to hello, and add hello/__init__.py (may be empty or simply include everything from hello.hello), leaving you with this tree:
$ tree .
.
├── hello
│   ├── __init__.py
│   ├── _hello.cpython-37m-darwin.so
│   ├── hello.c
│   ├── hello.h
│   ├── hello.i
│   ├── hello.py
│   └── hello_wrap.c
└── setup.py
Remove py_modules from setup.py. The "hello" in the package list will make setuptools pick up the whole package, and include __init__.py and the generated hello.py:
import os
import sys

from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as _build_py


class build_py(_build_py):
    def run(self):
        self.run_command("build_ext")
        return super().run()


setup(
    name='hello_world',
    version='0.1',
    cmdclass={'build_py': build_py},
    packages=["hello"],
    ext_modules=[
        Extension(
            'hello._hello',
            [
                'hello/hello.i',
                'hello/hello.c'
            ],
            include_dirs=[
                "hello",
            ],
            depends=[
                'hello/hello.h'
            ],
        )
    ],
)
This way, .egg-linking the package (python setup.py develop) also works, so you can link the package under development into a venv or similar. This is also the reason setuptools (and distutils) works the way it does: the dev sandbox should be structured so that the code can be run directly from it, without moving modules around.
The SWIG-generated hello.py and the generated extension _hello will then live under hello:
>>> from hello import hello, _hello
>>> print(hello)
<module 'hello.hello' from '~/so56562132/hello/hello.py'>
>>> print(_hello)
<module 'hello._hello' from '~/so56562132/hello/_hello.cpython-37m-darwin.so'>
(as you can see from the extension filename, I am on a Mac right now, but this works exactly the same under Windows)
Also, beyond packaging, there's more useful information about SWIG and Python namespaces and packages in the SWIG manual: http://swig.org/Doc4.0/Python.html#Python_nn72
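As mentioned above, hello/__init__.py can stay empty; if you want from hello import ... to expose the wrapped functions directly, a one-line sketch is enough:

# hello/__init__.py (optional re-export; an empty file also works)
from .hello import *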
I have a Python package I would like to distribute. I have the package set up and am able to download the tarball, unzip it, and install it using:
python setup.py install
which works fine.
I would also like to upload the package to PyPI and enable it to be installed using pip.
However, the package contains f2py-wrapped Fortran, which needs to be compiled at build time, with the resulting .so files moved to the eventual installation folder. I am confused as to how to do this using:
python3 setup.py sdist
followed by:
pip3 install pkg_name_here.tar.gz
The reason being that when I run
python3 setup.py sdist
the custom commands are run, part of which tries to move the compiled *.so files to the installation folder, which has not yet been created. An example of the code outline I have used is here:
import os

from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info

'''
BEGIN CUSTOM INSTALL COMMANDS
These classes are used to hook into setup.py's install process. Depending on the context:
    $ pip install my-package
Can yield `setup.py install`, `setup.py egg_info`, or `setup.py develop`
'''


def custom_command():
    import sys
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')


class CustomInstallCommand(install):
    def run(self):
        install.run(self)
        custom_command()


class CustomDevelopCommand(develop):
    def run(self):
        develop.run(self)
        custom_command()


class CustomEggInfoCommand(egg_info):
    def run(self):
        egg_info.run(self)
        custom_command()

'''
END CUSTOM INSTALL COMMANDS
'''

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        'egg_info': CustomEggInfoCommand,
    },
    ...
)
In my case, custom_command() compiles and wraps the Fortran and copies the lib files to the installation folder.
What I would like to know is whether there is a way of only running these custom commands during the installation with pip, i.e. to avoid custom_command() being run during packaging and only run it during installation.
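For illustration only (this is not the approach taken in the answer below): since sdist triggers egg_info but not install, one hedged workaround is to hook only the install and develop commands, so that packaging alone never calls custom_command():

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        # no 'egg_info' hook, so `setup.py sdist` does not run custom_command()
    },
    ...
)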
Update
Following Pierre de Buyl's suggestion, I have made some progress, but still do not have this working.
The setup.py file currently looks something like:
def setup_f90_ext(parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join

    config = Configuration('', parent_package, top_path)
    tort_src = [join('PackageName/', 'tort.f90')]
    config.add_library('tort', sources=tort_src,
                       extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                       extra_link_args=['-lgomp'])
    sources = [join('PackageName', 'f90wrap_tort.f90')]
    config.add_extension(name='',
                         sources=sources,
                         extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                         libraries=['tort'],
                         extra_link_args=['-lgomp'],
                         include_dirs=['build/temp*/'])
    return config


if __name__ == '__main__':
    from numpy.distutils.core import setup
    import subprocess
    import os
    import sys

    version_file = open(os.getcwd() + '/PackageName/' + 'VERSION')
    __version__ = version_file.read().strip()
    subprocess.call(cmd, shell=True)
    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Package description',
              'description': 'Description',
              'long_description': open('README.txt').read(),  # read('README.txt'),
              }
    config2 = dict(config, **setup_f90_ext(parent_package='PackageName', top_path='').todict())
    setup(**config2)
where f90wrap_tort.f90 is the f90wrap-generated Fortran wrapper file and tort.f90 is the original Fortran.
This file works with python setup.py install if I run the command twice.
The first time I run python setup.py install I get the following error:
gfortran:f90: ./PackageName/f90wrap_tort.f90
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
The reason I put the include_dirs=['build/temp*/'] argument in the extension is that I noticed, after running python setup.py install the first time, that tort_mod was built and stored there.
What I can't figure out is how to get the linking correct so that this is all done in one step.
Can anyone see what I am missing?
After a bit of googling, I suggest the following:
Use NumPy's distutils
Use the add_library keyword (seen here) for your plain Fortran files. This will build the Fortran files as a library but not try to interface to them with f2py.
Pre-build the f90 wrappers with f90wrap, include them in your package archive and specify those files as source in the extension.
I did not test the whole solution as it is a bit time consuming, but this is what SciPy does for some of their modules, see here.
The documentation of NumPy has an entry on add_library.
EDIT 1: after building with the include_dirs=['build/temp.linux-x86_64-2.7'] config, I obtain this directory structure on the first build attempt.
build/lib.linux-x86_64-2.7
├── crystal_torture
│   ├── cluster.py
│   ├── dist.f90
│   ├── f90wrap_tort.f90
│   ├── graph.py
│   ├── __init__.py
│   ├── minimal_cluster.py
│   ├── node.py
│   ├── node.pyc
│   ├── pymatgen_doping.py
│   ├── pymatgen_interface.py
│   ├── tort.f90
│   ├── tort.py
│   └── tort.pyc
└── crystal_torture.so