I have a Python package I would like to distribute. I have the package set up and am able to download the tarball, unzip it, and install it using:
python setup.py install
which works fine.
I would also like to upload the package to PyPI and enable it to be installed using pip.
However, the package contains f2py-wrapped Fortran, which needs to be compiled at build time, with the resulting .so files moved to the eventual installation folder. I am confused as to how to do this using:
python3 setup.py sdist
followed by:
pip3 install pkg_name_here.tar.gz
The reason being that when I run
python3 setup.py sdist
the custom commands are run, part of which tries to move the compiled *.so files to the installation folder, which has not yet been created. An outline of the code I have used is shown here:
import os

from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info
'''
BEGIN CUSTOM INSTALL COMMANDS
These classes are used to hook into setup.py's install process. Depending on the context:
$ pip install my-package
Can yield `setup.py install`, `setup.py egg_info`, or `setup.py develop`
'''
def custom_command():
    import sys
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')

class CustomInstallCommand(install):
    def run(self):
        install.run(self)
        custom_command()

class CustomDevelopCommand(develop):
    def run(self):
        develop.run(self)
        custom_command()

class CustomEggInfoCommand(egg_info):
    def run(self):
        egg_info.run(self)
        custom_command()
'''
END CUSTOM INSTALL COMMANDS
'''
setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        'egg_info': CustomEggInfoCommand,
    },
    ...
)
In my instance, custom_command() compiles and wraps the Fortran and copies the library files to the installation folder.
What I would like to know is whether there is a way of running these custom commands only during installation with pip, i.e. avoiding custom_command() being run during packaging and only running it during installation.
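For illustration, one guard I have considered (only a sketch, I am not sure it is the right approach) is to make custom_command() a no-op while a source archive is being built, since sdist triggers egg_info, which is what runs the hook above during packaging:

import os
import sys

def custom_command():
    # Skip the compile/copy step while `setup.py sdist` is packaging the source
    # archive; during `pip install` the driving command is install/egg_info/develop.
    if 'sdist' in sys.argv:
        return
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')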
Update
Following Pierre de Buyl's suggestion, I have made some progress, but I still do not have this working.
The setup.py file currently looks something like:
def setup_f90_ext(parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join

    config = Configuration('', parent_package, top_path)
    tort_src = [join('PackageName/', 'tort.f90')]

    config.add_library('tort', sources=tort_src,
                       extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                       extra_link_args=['-lgomp'])

    sources = [join('PackageName', 'f90wrap_tort.f90')]

    config.add_extension(name='',
                         sources=sources,
                         extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                         libraries=['tort'],
                         extra_link_args=['-lgomp'],
                         include_dirs=['build/temp*/'])

    return config
if __name__ == '__main__':
    from numpy.distutils.core import setup
    import subprocess
    import os
    import sys

    version_file = open(os.getcwd() + '/PackageName/' + 'VERSION')
    __version__ = version_file.read().strip()

    subprocess.call(cmd, shell=True)  # NOTE: cmd is not defined in this snippet

    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Package description',
              'description': 'Description',
              'long_description': open('README.txt').read(),
              }

    config2 = dict(config, **setup_f90_ext(parent_package='PackageName', top_path='').todict())

    setup(**config2)
where f90wrap_tort.f90 is the f90wrap wrapper file and tort.f90 is the original Fortran.
This file works with python setup.py install, but only if I run the command twice.
The first time I run python setup.py install I get the following error:
gfortran:f90: ./PackageName/f90wrap_tort.f90
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
The reason I put the include_dirs=['build/temp*/'] argument in the extension is that I noticed, after running python setup.py install the first time, that tort_mod was built and stored there.
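As a side note (a sketch I have not verified, assuming the default distutils naming scheme), the 'build/temp*/' glob is passed to gfortran literally rather than expanded, so the actual temporary build directory name would need to be computed, for example:

import os
import sys
import sysconfig

def distutils_build_temp():
    # e.g. 'build/temp.linux-x86_64-2.7'; the exact name depends on the platform
    # and the Python version, and assumes the default distutils build layout
    return os.path.join(
        'build',
        'temp.{}-{}.{}'.format(sysconfig.get_platform(), *sys.version_info[:2]),
    )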
What I can't figure out is how to get the linking correct so that this is all done in one step.
Can anyone see what I am missing?
After a bit of googling, I suggest the following:
Use NumPy's distutils
Use the add_library keyword (seen here) for your plain Fortran files. This will build the Fortran files as a library but not try to interface to them with f2py.
Pre-build the f90 wrappers with f90wrap, include them in your package archive and specify those files as source in the extension.
I did not test the whole solution as it is a bit time consuming, but this is what SciPy does for some of their modules, see here.
The NumPy documentation has an entry on add_library.
EDIT 1: after building with the include_dirs=['build/temp.linux-x86_64-2.7'] config, I obtain this directory structure on the first build attempt.
build/lib.linux-x86_64-2.7
├── crystal_torture
│ ├── cluster.py
│ ├── dist.f90
│ ├── f90wrap_tort.f90
│ ├── graph.py
│ ├── __init__.py
│ ├── minimal_cluster.py
│ ├── node.py
│ ├── node.pyc
│ ├── pymatgen_doping.py
│ ├── pymatgen_interface.py
│ ├── tort.f90
│ ├── tort.py
│ └── tort.pyc
└── crystal_torture.so
Related
I have a Python C extension module which relies on static libraries. Below is my file tree; I haven't included all the files, to keep the problem simple.
folder/
├── src/
| ├── main.c
| └── other.c
├── include/
| ├── glfw3native.h
| └── glfw3.h
├── lib/
| └── libglfw3.a
└── setup.py
Below is my setup.py file; I have removed some unnecessary lines.
import setuptools

setuptools.setup(
    ext_modules=[
        setuptools.Extension(
            "Module.__init__", ["src/main.c", "src/other.c"],
            include_dirs=["include"],
            library_dirs=["lib"],
            libraries=["glfw"])
    ])
I can successfully compile my project with the following command.
python setup.py bdist_wheel
Now I want to use cibuildwheel to compile my project for multiple platforms.
cibuildwheel --platform linux
For some reason, the build fails when it tries to link the libraries. Even though the library path is specified, it shows the following error:
cannot find -lglfw
Why does this happen when compiling with cibuildwheel?
Because static binaries are different on every system, I needed to compile my libraries on the corresponding platform. In the end, I used the CIBW_BEFORE_ALL variable to execute the build commands for my libraries before the wheels are built.
I'm testing a trivial installable package with Python setuptools. I have read a handful of tutorials and official documentation, so I thought I had a good background. It seems I was wrong.
The minimal package I'm playing with has the following structure:
myapp
├── config
│ └── text.txt
├── MANIFEST.in
├── myapp
│ ├── __init__.py
│ ├── lib.py
│ └── __main__.py
└── setup.py
With the following contents:
__init__.py (this is just to make the package usable as a library when imported):
import myapp.lib
__main__.py (I want to make this application runnable from the terminal with -m or, better, with an entry point):
import sys

import myapp
import myapp.lib

def main():
    print("main function called")
    print(f"current path: {sys.path}")
    myapp.lib.funct()

if __name__ == "__main__":
    main()
myapp/lib.py (this is the module with the logic, and it relies on external data):
from pathlib import Path

textfile = Path(__file__).parent.parent / 'config/text.txt'

def funct():
    print("hello from function in lib")
    with open(textfile, 'r') as f:
        print(f.read())
config/text.txt (this is the data):
some text constant as part of the module
To package this module, I have crafted the following setup.py
from setuptools import setup

setup(
    name="myapp",
    version="1.0.0",
    description="A simple test",
    author="me",
    url='www.me.com',
    author_email="me@email.com",
    classifiers=[
        "Programming Language :: Python :: 3",
    ],
    packages=["myapp"],
    include_package_data=True,
    entry_points={"console_scripts": ["myapp=myapp.__main__:main"]},
)
As well as a MANIFEST.in file:
include config/*
OK, these are the ingredients. Now, I create a distribution with python setup.py sdist. This creates a file dist/myapp-1.0.0.tar.gz, which does include the data file (along with plenty of metadata I don't fully understand):
> tar tf myapp-1.0.0.tar.gz
myapp-1.0.0/
myapp-1.0.0/MANIFEST.in
myapp-1.0.0/PKG-INFO
myapp-1.0.0/config/
myapp-1.0.0/config/text.txt
myapp-1.0.0/myapp/
myapp-1.0.0/myapp/__init__.py
myapp-1.0.0/myapp/__main__.py
myapp-1.0.0/myapp/lib.py
myapp-1.0.0/myapp.egg-info/
myapp-1.0.0/myapp.egg-info/PKG-INFO
myapp-1.0.0/myapp.egg-info/SOURCES.txt
myapp-1.0.0/myapp.egg-info/dependency_links.txt
myapp-1.0.0/myapp.egg-info/entry_points.txt
myapp-1.0.0/myapp.egg-info/top_level.txt
myapp-1.0.0/setup.cfg
myapp-1.0.0/setup.py
Now, the problem comes when I try to install the package with pip install myapp-1.0.0.tar.gz. It does not copy the config/text.txt anywhere. In my site-packages folder, I see the directories myapp and myapp-1.0.0.dist-info, but there is no text file inside any of them.
There are various things I don't understand:
is the definition of textfile in myapp/lib.py correct/sensible? It feels cumbersome, almost like hardcoding, to have to use a module on purpose just to build a simple path to this data file (a sketch of an alternative follows this list). I feel I'm overdoing this, but I have not found this part of the packaging step well documented, which is surprising, as this must be a very common problem.
given that the data file IS in the tar file, why did pip not copy it to site-packages?
what is this myapp-1.0.0.dist-info folder in my site-packages?
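For the first point, the pattern I am considering instead (only a sketch, assuming text.txt is moved inside the package to myapp/config/, declared via package_data={'myapp': ['config/*.txt']}, and Python 3.9+ for importlib.resources.files) would look like this:

from importlib import resources

def funct():
    print("hello from function in lib")
    # Resolve the data file relative to the installed package rather than
    # building a path from __file__; assumes the file ships inside 'myapp'.
    text = (resources.files("myapp") / "config" / "text.txt").read_text()
    print(text)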
I'm working on a C++/Python project with the following structure:
foo
├── CMakeLists.txt
├── include
├── source
└── python
├── foo
│ ├── _foo_py.py
│ └── __init__.py
├── setup.py
└── source
├── CMakeLists.txt
└── _foo_cpp.cpp
foo/source and foo/include contain C++ source files and foo/python/source/_foo_cpp.cpp contains pybind11 wrapper code for this C++ code. Running setup.py is supposed to build the C++ code (by running CMake), create a _foo_cpp Python module in the form of a shared object and integrate it with the Python code in _foo_py.py. I.e. I want to be able to simply call python setup.py install from foo/python to install the foo module to my system. I'm currently using a CMake extension class in setup.py to make this work:
import os
import subprocess

from setuptools import Extension
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir):
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext):
    def run(self):
        try:
            subprocess.check_output(['cmake', '--version'])
        except OSError:
            raise RuntimeError("cmake command must be available")
        for ext in self.extensions:
            self.build_extension(ext)

    def build_extension(self, ext):
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        self._setup(ext)
        self._build(ext)

    def _setup(self, ext):
        cmake_cmd = [
            'cmake',
            ext.sourcedir,
        ]
        subprocess.check_call(cmake_cmd, cwd=self.build_temp)

    def _build(self, ext):
        cmake_build_cmd = [
            'cmake',
            '--build', '.',
        ]
        subprocess.check_call(cmake_build_cmd, cwd=self.build_temp)
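For context, these classes are wired into setup() roughly like this (a sketch; the extension name and the relative sourcedir are assumptions and may differ from my actual setup.py):

from setuptools import setup

setup(
    name='foo',
    packages=['foo'],
    # '..' points at the top-level CMakeLists.txt when setup.py lives in foo/python
    ext_modules=[CMakeExtension('foo._foo_cpp', sourcedir='..')],
    cmdclass={'build_ext': CMakeBuild},
)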
The problem arises when I try to directly call pip in foo/python, e.g. like this:
pip wheel -w wheelhouse --no-deps .
It seems that before running the code in setup.py, pip copies the content of the working directory into a temporary directory. This obviously doesn't include the C++ code and the top-level CMakeLists.txt. That in turn causes CMakeBuild._setup to fail because there is seemingly no way to obtain a path to the foo root directory from inside setup.py after it has been copied to another location by pip.
Is there anything I can do to make this setup work with both python and pip? I have seen some approaches that first run cmake to generate a setup.py from a setup.py.in to inject package version, root directory path etc. but I would like to avoid this and have setup.py call cmake instead of the other way around.
Over the last few days, I have been working on a Python module. I have used Poetry as a package-management tool in many other projects, but this is my first time publishing a package to PyPI.
I was able to run the poetry build and poetry publish commands. I was also able to install the published package:
$ pip3 install git-profiles
Collecting git-profiles
Using cached https://files.pythonhosted.org/packages/0e/e7/bac9027effd1e34a5b5718f2b35c0b28b3d67f3809e2f2981b6c7b58963e/git_profiles-1.1.0-py3-none-any.whl
Installing collected packages: git-profiles
Successfully installed git-profiles-1.1.0
However, right after the install, I am not able to run my package:
$ git-profiles --help
git-profiles: command not found
My project has the following structure:
git-profiles/
├── src/
│ ├── commands/
│ ├── executor/
│ ├── git_manager/
│ ├── profile/
│ ├── utils/
│ ├── __init__.py
│ └── git_profiles.py
└── tests
I tried different scripts configurations in the pyproject.toml file, but I have never been able to make it work after installing.
[tool.poetry.scripts]
poetry = "src:git_profiles.py"
or
[tool.poetry.scripts]
git-profile = "src:git_profiles.py"
I don't know whether this is a Python/pip path or version problem, or whether I need to change something in the configuration file.
If it is helpful, this is the GitHub repository I'm talking about. The package is also published on PyPI.
Poetry's scripts section wraps around setuptools' console-script definitions. As such, the entry point name and the call path you give it need to follow exactly the same rules.
In short, a console script does more or less this from the shell:
import my_lib # the module isn't called src, that's just a folder name
# the right name to import is whatever you put at [tool.poetry].name
my_lib.my_module.function()
Which, if given the name my-lib-call (the name can be the same as your module, but it doesn't need to be) would be written like this:
[tool.poetry.scripts]
my-lib-call = "my_lib.my_module:function"
Adapted to your project structure, the following should do the job:
[tool.poetry.scripts]
git-profile = "git-profiles:main"
I am making a simple proof-of-concept example for my thesis on Python package distribution. I am stuck on an example where I want to install a simple package.
Folder and files layout is the following:
baseApp/
├── baseApp
│ ├── app.py
│ └── __init__.py
├── __init__.py
└── setup.py
File setup.py contains:
from setuptools import setup, find_packages

setup(
    name='BaseApp',
    version='1.0',
    packages="baseApp",
    entry_points={
        'console_scripts': [
            'baseApp=baseApp.app:main'
        ]
    }
)
File app.py is a simple file with one function:
def main():
    print("main function")
My idea was to install this package using pip, but running pip install ./baseApp always gives this error message:
running install
running bdist_egg
running egg_info
creating BaseApp.egg-info
writing BaseApp.egg-info/PKG-INFO
writing dependency_links to BaseApp.egg-info/dependency_links.txt
writing entry points to BaseApp.egg-info/entry_points.txt
writing top-level names to BaseApp.egg-info/top_level.txt
writing manifest file 'BaseApp.egg-info/SOURCES.txt'
error: package directory 'b' does not exist
Even trying to navigate into the folder and then running python setup.py install gives me the same error. What bothers me the most is that I don't understand what the error message is trying to tell me. I don't see any directory called b, nor a reason why there should be one.
I also tried using a virtual environment and the system distribution, but both resulted in the same error message.
My question is what causes this behaviour and is there any easy way to solve it or am I missing something?
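One likely culprit (a hedged observation, not a confirmed fix): setuptools expects packages to be a list of package names, and a bare string is iterated character by character, so the first "package" it looks for is a directory named 'b'. A minimal sketch of the corrected call:

from setuptools import setup, find_packages

setup(
    name='BaseApp',
    version='1.0',
    packages=['baseApp'],  # a list of package names, not a bare string
    # or: packages=find_packages(),
    entry_points={
        'console_scripts': [
            'baseApp=baseApp.app:main',
        ],
    },
)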
In my case I had the same problem because the package had not been installed completely, so I installed it again and then everything worked.