I'm struggling to figure out how to copy the wrapper generated by SWIG to the same level as the SWIG shared library. Consider this tree structure:
│   .gitignore
│   setup.py
│
├───hello
├───src
│       hello.c
│       hello.h
│       hello.i
│
└───test
        test_hello.py
and this setup.py:
import os
import sys

from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as _build_py

class build_py(_build_py):
    def run(self):
        self.run_command("build_ext")
        return super().run()

setup(
    name='hello_world',
    version='0.1',
    cmdclass={'build_py': build_py},
    packages=["hello"],
    ext_modules=[
        Extension(
            'hello._hello',
            [
                'src/hello.i',
                'src/hello.c'
            ],
            include_dirs=[
                "src",
            ],
            depends=[
                'src/hello.h'
            ],
        )
    ],
    py_modules=[
        "hello"
    ],
)
When I do pip install ., I get this content in site-packages:
>tree /f d:\virtual_envs\py364_32\Lib\site-packages\hello
D:\VIRTUAL_ENVS\PY364_32\LIB\SITE-PACKAGES\HELLO
        _hello.cp36-win32.pyd

>tree /f d:\virtual_envs\py364_32\Lib\site-packages\hello_world-0.1.dist-info
D:\VIRTUAL_ENVS\PY364_32\LIB\SITE-PACKAGES\HELLO_WORLD-0.1.DIST-INFO
        INSTALLER
        METADATA
        RECORD
        top_level.txt
        WHEEL
As you can see, hello.py (the file generated by SWIG) hasn't been copied into site-packages.
The thing is, I've already tried many answers from the similar posts below:
setup.py: run build_ext before anything else
python distutils not include the SWIG generated module
Unfortunately, the question still remains unsolved.
QUESTION: How can I fix my current setup.py so the SWIG wrapper is copied to the same level as the .pyd file?
setuptools cannot do this the way you want: it looks for py_modules only where setup.py is located. The easiest way, IMHO, is to keep the SWIG modules where you want them in the namespace/directory structure: rename src to hello and add hello/__init__.py (it may be empty, or simply include everything from hello.hello), leaving you with this tree:
$ tree .
.
├── hello
│   ├── __init__.py
│   ├── _hello.cpython-37m-darwin.so
│   ├── hello.c
│   ├── hello.h
│   ├── hello.i
│   ├── hello.py
│   └── hello_wrap.c
└── setup.py
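If you go for the non-empty variant, a minimal hello/__init__.py could simply re-export the SWIG-generated wrapper (a sketch; the star import is one possible style, not a requirement):

# hello/__init__.py
# re-export the SWIG-generated wrapper module hello/hello.py,
# so that `from hello import <name>` works directly
from .hello import *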
Remove py_modules from setup.py. The "hello" entry in the packages list will make setuptools pick up the whole package, including __init__.py and the generated hello.py:
import os
import sys

from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as _build_py

class build_py(_build_py):
    def run(self):
        self.run_command("build_ext")
        return super().run()

setup(
    name='hello_world',
    version='0.1',
    cmdclass={'build_py': build_py},
    packages=["hello"],
    ext_modules=[
        Extension(
            'hello._hello',
            [
                'hello/hello.i',
                'hello/hello.c'
            ],
            include_dirs=[
                "hello",
            ],
            depends=[
                'hello/hello.h'
            ],
        )
    ],
)
This way, .egg-linking the package also works (python setup.py develop), so you can link the package under development into a venv or similar. This is also the reason setuptools (and distutils) works this way: the dev sandbox should be structured so the code can run directly from it, without moving modules around.
The SWIG-generated hello.py and the generated extension _hello will then live under hello:
>>> from hello import hello, _hello
>>> print(hello)
<module 'hello.hello' from '~/so56562132/hello/hello.py'>
>>> print(_hello)
<module 'hello._hello' from '~/so56562132/hello/_hello.cpython-37m-darwin.so'>
(as you can see from the extension filename, I am on a Mac right now, but this works exactly the same under Windows)
Also, beyond packaging, there's more useful information about SWIG and Python namespaces and packages in the SWIG manual: http://swig.org/Doc4.0/Python.html#Python_nn72
Related
I'm making a Python package using setuptools, and I'm having trouble making all nested folders in my source code available for import after installing the package. The directory I'm working in has a structure like the one illustrated below.
├── setup.py
└── src
    └── foo
        ├── a
        │   ├── aa
        │   │   └── aafile.py
        │   └── afile.py
        ├── b
        │   └── bfile.py
        └── __init__.py
Currently, I can't import submodules, such as from foo.a import aa or from foo.a.aa import some_method, unless I explicitly pass the names of the submodules to setuptools. That is, setup.py needs to contain something like
from setuptools import setup

setup(
    version="0.0.1",
    name="foo",
    py_modules=["foo", "foo.a", "foo.a.aa", "foo.b"],
    package_dir={"": "src"},
    packages=["foo", "foo.a", "foo.a.aa", "foo.b"],
    include_package_data=True,
    # more arguments go here
)
This makes organizing the code pretty cumbersome. Is there a simple way to just allow users of the package to import any submodule contained in src/foo?
You'll want setuptools.find_packages() – though all in all, you might want to consider tossing setup.py altogether in favor of a PEP 517 style build with no arbitrary Python but just pyproject.toml (and possibly setup.cfg).
from setuptools import setup, find_packages

setup(
    version="0.0.1",
    name="foo",
    package_dir={"": "src"},
    packages=find_packages(where='src'),
    include_package_data=True,
)
Every package/subpackage must contain an __init__.py file (at least an empty one) to be considered a package.
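Assuming each of those directories contains an __init__.py, a quick interactive check (a hypothetical session; the exact ordering may differ) should list all the subpackages:

>>> from setuptools import find_packages
>>> find_packages(where='src')
['foo', 'foo.a', 'foo.a.aa', 'foo.b']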
If you want the whole package/subpackage tree to be importable with just one import foo, consider filling your __init__.py files with imports of the corresponding subpackages.
# src/foo/__init__.py
import foo.a
import foo.b

# src/foo/a/__init__.py
import foo.a.aa

# src/foo/b/__init__.py
import foo.b.bfile
Otherwise, leave the __init__.py files empty and the user will need to manually load the subpackage/submodule they want.
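For example, with empty __init__.py files and the tree above, imports would work like this (a sketch):

import foo                   # does not implicitly load foo.a or foo.b
from foo.a import aa         # loads the subpackage on demand
from foo.a.aa import aafile  # loads an individual module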
I am trying to call a custom module installed in my virtualenv. The module, which I built myself, has the following structure:
test
├── README.md
├── setup.py
├── src
│   ├── __init__.py
│   └── module_x
│       └── abc.py
└── tests
setup.py looks like the following:
from setuptools import find_packages, setup

with open('README.md', 'r') as f:
    long_description = f.read()

setup(
    name='test',
    version='0.1.0',
    author='me',
    description='description',
    long_description=long_description,
    long_description_content_type='text/markdown',
    packages=find_packages('src'),
    package_dir={'': 'src'},
    install_requires=[''],
    entry_points={
        'console_scripts': [
            'test=module_x.abc:main'
        ],
    }
)
Inside abc.py, I am using the following code:
class RandomClass:
    def __init__(self, msg):
        self.msg = msg
        self.info()

    def info(self):
        print(self.msg)

def main(msg):
    RandomClass(msg)
src/__init__.py looks like
from .module_x.abc import main
The egg was installed using pip install -e .
UPDATED
How can I call the module in a new Python script like from test import main? Currently it works only as from test.src import main, and I don't want to have to know the whole structure of the package. In the future, module_y, module_z, etc. will also exist. My assumption is that I would need to modify something in setup.py.
I have a Python package I would like to distribute. I have the package set up and am able to download the tarball, unzip it, and install it using:
python setup.py install
which works fine.
I would also like to upload the package to PyPI and enable it to be installed using pip.
However, the package contains f2py-wrapped Fortran, which needs to be compiled at build time, with the resulting .so files moved to the eventual installation folder. I am confused as to how to do this using:
python3 setup.py sdist
followed by:
pip3 install pkg_name_here.tar.gz
The reason being that when I run
python3 setup.py sdist
the custom commands are run, part of which tries to move the compiled *.so files to the installation folder, which has not yet been created. An outline of the code I have used is shown in this example:
import os

from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info

'''
BEGIN CUSTOM INSTALL COMMANDS
These classes are used to hook into setup.py's install process. Depending on the context:
$ pip install my-package
Can yield `setup.py install`, `setup.py egg_info`, or `setup.py develop`
'''

def custom_command():
    import sys
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')

class CustomInstallCommand(install):
    def run(self):
        install.run(self)
        custom_command()

class CustomDevelopCommand(develop):
    def run(self):
        develop.run(self)
        custom_command()

class CustomEggInfoCommand(egg_info):
    def run(self):
        egg_info.run(self)
        custom_command()

'''
END CUSTOM INSTALL COMMANDS
'''

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        'egg_info': CustomEggInfoCommand,
    },
    ...
)
In my instance, custom_command() compiles and wraps the Fortran and copies the lib files to the installation folder.
What I would like to know is whether there is a way of running these custom commands only during installation with pip, i.e. avoiding running custom_command() during packaging and only running it during installation.
Update
Following Pierre de Buyl's suggestion I have made some progress, but still do not have this working.
The setup.py file currently looks something like:
def setup_f90_ext(parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join

    config = Configuration('', parent_package, top_path)
    tort_src = [join('PackageName/', 'tort.f90')]

    config.add_library('tort', sources=tort_src,
                       extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                       extra_link_args=['-lgomp'])

    sources = [join('PackageName', 'f90wrap_tort.f90')]

    config.add_extension(name='',
                         sources=sources,
                         extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                         libraries=['tort'],
                         extra_link_args=['-lgomp'],
                         include_dirs=['build/temp*/'])

    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    import subprocess
    import os
    import sys

    version_file = open(os.getcwd() + '/PackageName/' + 'VERSION')
    __version__ = version_file.read().strip()

    subprocess.call(cmd, shell=True)  # cmd is defined elsewhere (elided here)

    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Package description',
              'description': 'Description',
              'long_description': open('README.txt').read(),
              }

    config2 = dict(config, **setup_f90_ext(parent_package='PackageName', top_path='').todict())
    setup(**config2)
where f90wrap_tort.f90 is the f90wrap-generated Fortran file, and tort.f90 is the original Fortran.
This file works with python setup.py install if I run the command twice.
The first time I run python setup.py install I get the following error:
gfortran:f90: ./PackageName/f90wrap_tort.f90
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
The reason I put the include_dirs=['build/temp*/'] argument in the extension was that I noticed, after running python setup.py install the first time, that tort_mod was being built and stored there.
What I can't figure out is how to get the linking correct so that this is all done in one step.
Can anyone see what I am missing?
After a bit of googling, I suggest the following:
Use NumPy's distutils
Use the add_library keyword (seen here) for your plain Fortran files. This will build the Fortran files as a library but not try to interface to them with f2py.
Pre-build the f90 wrappers with f90wrap, include them in your package archive, and specify those files as sources in the extension.
I did not test the whole solution as it is a bit time consuming, but this is what SciPy does for some of their modules, see here.
The NumPy documentation has an entry on add_library.
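Putting those pieces together, a skeleton of the suggested approach might look roughly like this (an untested sketch; the package name mypkg and extension name _tort are placeholders standing in for the question's names):

from numpy.distutils.core import setup
from numpy.distutils.misc_util import Configuration

def configuration(parent_package='', top_path=None):
    config = Configuration('mypkg', parent_package, top_path)
    # plain Fortran sources: compiled into a helper library, not passed to f2py
    config.add_library('tort', sources=['mypkg/tort.f90'])
    # pre-generated f90wrap wrapper: the only f2py-processed source,
    # linked against the helper library built above
    config.add_extension('_tort',
                         sources=['mypkg/f90wrap_tort.f90'],
                         libraries=['tort'])
    return config

if __name__ == '__main__':
    setup(configuration=configuration)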
EDIT 1: after building with the include_dirs=['build/temp.linux-x86_64-2.7'] config, I obtain this directory structure on the first build attempt.
build/lib.linux-x86_64-2.7
├── crystal_torture
│   ├── cluster.py
│   ├── dist.f90
│   ├── f90wrap_tort.f90
│   ├── graph.py
│   ├── __init__.py
│   ├── minimal_cluster.py
│   ├── node.py
│   ├── node.pyc
│   ├── pymatgen_doping.py
│   ├── pymatgen_interface.py
│   ├── tort.f90
│   ├── tort.py
│   └── tort.pyc
└── crystal_torture.so
Let's say I have a very simple package with the following structure:
.
├── foo
│   ├── bar
│   │   └── __init__.py
│   └── __init__.py
└── setup.py
Content of the files:
setup.py:
from distutils.core import setup

setup(
    name='foobar',
    version='',
    packages=['foo', 'foo.bar'],
    url='',
    license='Apache License 2.0',
    author='foobar',
    author_email='',
    description=''
)
foo/bar/__init__.py:
def foobar(x):
    return x
The remaining files are empty.
I install the package using pip:
cd foobar
pip install .
and can confirm it is installed correctly.
Now I want to create a separate package with stub files:
.
├── foo
│ ├── bar
│ │ └── __init__.pyi
│ └── __init__.pyi
└── setup.py
Content of the files:
setup.py:
from distutils.core import setup
import sys
import pathlib

setup(
    name='foobar_annot',
    version='',
    packages=['foo', 'foo.bar'],
    url='',
    license='Apache License 2.0',
    author='foobar',
    author_email='',
    description='',
    data_files=[
        (
            'shared/typehints/python{}.{}/foo/bar'.format(*sys.version_info[:2]),
            ["foo/bar/__init__.pyi"]
        ),
    ],
)
foo/bar/__init__.pyi:
def foobar(x: int) -> int: ...
I can install this package and see that it creates anaconda3/shared/typehints/python3.5/foo/bar/__init__.pyi in my Anaconda root, but it doesn't look like it is recognized by PyCharm (I get no warnings). When I place the pyi file in the main package, everything works OK.
I would be grateful for any hints on how to make this work:
I've been trying to make some sense of PEP 484 - Storing and distributing stub files, but to no avail. Even the pathlib part seems to offend my version of distutils.
PY-18597 and https://github.com/python/mypy/issues/1190#issuecomment-188526651 seem to be related but somehow I cannot connect the dots.
I tried putting stubs in .PyCharmX.X/config/python-skeletons, but it didn't help.
Some things that work, but don't resolve the problem:
Putting stub files in the current project and marking as sources.
Adding stub package root to the interpreter path (at least in some simple cases).
So the questions: How can I create a minimal, distributable package with Python stubs which will be recognized by existing tools? Based on my experiments, I suspect one of two problems:
I misunderstood the structure which should be created by the package in shared/typehints/pythonX.Y - if this is true, how should I define data_files?
PyCharm doesn't consider these files at all (this seems to be contradicted by some comments in the linked issue).
It is supposed to work just fine, but I made some configuration mistake and am looking for an external problem which doesn't exist.
Are there any established procedures to troubleshoot problems like this?
The problem is that you didn't include the foo/__init__.pyi file in your stub distribution. Even though it's empty, it makes foo a stub-file package and enables the search for foo.bar.
You can modify the data_files in your setup.py to include both:
data_files=[
    (
        'shared/typehints/python{}.{}/foo/bar'.format(*sys.version_info[:2]),
        ["foo/bar/__init__.pyi"]
    ),
    (
        'shared/typehints/python{}.{}/foo'.format(*sys.version_info[:2]),
        ["foo/__init__.pyi"]
    ),
],
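With both entries, the installed layout should look like this (the python3.5 part depends on the interpreter used to install):

shared/typehints/python3.5/foo/__init__.pyi
shared/typehints/python3.5/foo/bar/__init__.pyi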
I have a project named myproj structured like
/myproj
    __init__.py
    module1.py
    module2.py
    setup.py
My setup.py looks like this:
from distutils.core import setup

setup(name='myproj',
      version='0.1',
      description='Does projecty stuff',
      author='Me',
      author_email='me@domain.com',
      packages=[''])
But this places module1.py and module2.py in the install directory.
How do I specify the setup such that the directory /myproj and all of its contents are dropped into the install directory?
In your myproj root directory for this project, you want to move module1.py and module2.py into a directory named myproj under it and, if you wish to maintain Python < 3.3 compatibility, add an __init__.py there.
├── myproj
│   ├── __init__.py
│   ├── module1.py
│   └── module2.py
└── setup.py
You may also consider using setuptools instead of just distutils. setuptools provides a lot more helper methods and additional attributes that make setting up this file a lot easier. This is the bare minimum setup.py I would construct for the above project:
from setuptools import setup, find_packages

setup(name='myproj',
      version='0.1',
      description="My project",
      author='me',
      author_email='me@example.com',
      packages=find_packages(),
      )
Running the installation, you should see lines like this:
copying build/lib.linux-x86_64-2.7/myproj/__init__.py -> build/bdist.linux-x86_64/egg/myproj
copying build/lib.linux-x86_64-2.7/myproj/module1.py -> build/bdist.linux-x86_64/egg/myproj
copying build/lib.linux-x86_64-2.7/myproj/module2.py -> build/bdist.linux-x86_64/egg/myproj
This signifies that the setup script has picked up the required source files. Run the Python interpreter (preferably outside this project directory) to ensure that those modules can be imported (not via relative import).
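For example, a quick sanity check could look like this (a hypothetical session; the installation path will differ on your machine):

>>> import myproj.module1
>>> myproj.module1
<module 'myproj.module1' from '.../site-packages/myproj/module1.py'>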
On the other hand, if you wish to provide those modules at the root level, you definitely need to declare py_modules explicitly.
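For illustration, if module1.py and module2.py were kept next to setup.py instead of inside a myproj/ directory, a sketch of that alternative would be:

from setuptools import setup

# top-level modules are not discovered by find_packages();
# they must be listed individually via py_modules
setup(name='myproj',
      version='0.1',
      py_modules=['module1', 'module2'])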
Finally, the Python Packaging User Guide is a good resource for more specific questions anyone may have about building distributable python packages.