I am making a simple proof-of-concept example for my thesis about Python package distribution. I am stuck on an example where I want to install a simple package.
Folder and files layout is the following:
baseApp/
├── baseApp
│ ├── app.py
│ └── __init__.py
├── __init__.py
└── setup.py
File setup.py contains:
from setuptools import setup, find_packages

setup(
    name='BaseApp',
    version='1.0',
    packages="baseApp",
    entry_points={
        'console_scripts': [
            'baseApp=baseApp.app:main'
        ]
    }
)
File app.py is a simple file with one function:

def main():
    print("main function")
My idea was to install this package using pip, but running pip install ./baseApp always gives an error message:
running install
running bdist_egg
running egg_info
creating BaseApp.egg-info
writing BaseApp.egg-info/PKG-INFO
writing dependency_links to BaseApp.egg-info/dependency_links.txt
writing entry points to BaseApp.egg-info/entry_points.txt
writing top-level names to BaseApp.egg-info/top_level.txt
writing manifest file 'BaseApp.egg-info/SOURCES.txt'
error: package directory 'b' does not exist
Even navigating into the folder and then running python setup.py install gives me the same error. What bothers me the most is that I don't understand what the error message is trying to tell me: I don't see any directory called b, nor a reason why there should be one.
I also tried using a virtual environment and the system distribution, but both resulted in the same error message.
My question is what causes this behaviour and is there any easy way to solve it or am I missing something?
In my case I had the same problem because the package was not installed completely, so I installed it again and then everything worked.
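For reference, the 'b' in the error message comes from packages="baseApp": setuptools expects a list of package names and iterates the value, so a bare string is consumed character by character, and the first "package" it looks for is a directory named b. A minimal sketch of the behaviour and the fix, using only the names from the question:

```python
# setuptools iterates the `packages` argument, so a bare string
# is treated as a sequence of one-character package names:
packages = "baseApp"
print(list(packages))   # -> ['b', 'a', 's', 'e', 'A', 'p', 'p']
# setuptools then looks for a package directory named 'b' and fails.

# The fix is to pass a list (or use find_packages()):
packages = ["baseApp"]
print(list(packages))   # -> ['baseApp']
```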
I have a small team of people working on a small new Python project.
I created a small package called Environment.
This has two files:
Setup.py
from setuptools import setup, find_packages
setup(name='Environment', version='1.0', packages=find_packages())
root.py
ROOT_DIR = "STATIC_VALUE_15"
When I use pip install -e . and then run the project within the Anaconda environment I use for this project, I can run files that use this Environment package. However, when other people use the same branch and follow the same steps, they get the error:
Exception has occurred: ModuleNotFoundError
They are also using their conda environment and running the project the same way in VS Code.
When anyone runs pip list, the path shown for this package is correct for all of us.
Are there any suggestions for things that I could try and test to get it to work on their system, or why it works on mine and not theirs?
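One thing worth checking (an assumption, since the full layout isn't shown): find_packages() only discovers directories that contain an __init__.py. If root.py sits next to setup.py rather than inside such a directory, nothing gets installed, and imports may still succeed on one machine only because the project directory happens to be on sys.path there. A quick diagnostic sketch, run from the project root:

```python
# Diagnostic: print what find_packages() actually discovers.
# An empty list means no package directory (with __init__.py) was
# found, so `pip install -e .` installs no importable package.
from setuptools import find_packages

discovered = find_packages()
print(discovered)
```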
Setup.py is deprecated, use pyproject.toml instead.
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "environment"
version = "1.0.0"
You also need to follow this file hierarchy:
package_project/
└── src/
└── environment/
├── __init__.py
└── root.py
You may read Packaging Python Projects. There is plenty of room to make mistakes; I struggled a lot in the past because I forgot a file or didn't follow the file hierarchy, and if the build fails, you can sometimes still run or import the code because of your Python environment.
If you want to test whether it works for someone else without having someone else, you can build the package and install the .whl in another environment where the sources are not easily accessible to Python.
I have a Python package that at first appears to install just fine, but calling one of the entry points raises a ModuleNotFoundError. The module is otherwise found just fine, both with import package from the interactive interpreter and with python -m package.etc. But if I try to call the entry point directly (like python -m package.etc.main) it raises an AttributeError saying that the module has no attribute __path__.
I can see the package if I do pip list.
The project is currently set up with the "template" pyproject.toml and only setup.cfg, but the behaviour is essentially the same (the traceback looks slightly different but the error is the same) when using setup.py over pyproject.toml, both with pip but also if I invoke setup.py directly. The structure of the project is:
package
├── __init__.py
├── cli
│ ├── __init__.py
│ └── entry.py
└── file.py
I get the same behaviour if doing this in a virtual environment as when I do it with a userspace (--user) install.
Modifying the environment variable ${PYTHONPATH} fixes the issue, and installing the package in editable mode works just fine.
Turns out that the issue was that I had something like:
[options]
packages = find:

[options.packages.find]
include =
    README.md
in my setup.cfg. It appears that the include declaration is exclusive, which led to the package not being included in the installation. The editable install still worked, presumably because editable mode only sets up links or appends the source directories to some path.
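For comparison, a setup.cfg fragment that avoids the pitfall (a sketch; "package" stands in for the real package name): include under [options.packages.find] restricts discovery to the listed patterns only, so either drop the section or make the patterns match real packages, and ship README.md via MANIFEST.in instead:

```ini
[options]
packages = find:

# Either omit [options.packages.find] entirely, or make the
# include patterns match actual packages:
[options.packages.find]
include =
    package
    package.*
```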
I've coded my python project and have succeeded in publishing it to test pypi. However, now I can't figure out how to correctly configure it as a console script. Upon running my_project on the command line, I get the following stack trace:
Traceback (most recent call last):
File "/home/thatcoolcoder/.local/bin/my_project", line 5, in <module>
from my_project.__main__ import main
ModuleNotFoundError: No module named 'my_project'
Clearly, it's created a script to run but the script is then failing to import my actual code.
Folder structure:
pyproject.toml
setup.cfg
my_project
├── __init__.py (empty)
├── __main__.py
Relevant sections of setup.cfg:
[metadata]
name = my-project
version = 1.0.5
...

[options]
package_dir =
    = my_project
packages = find:
...

[options.packages.find]
where = my_project

[options.entry_points]
console_scripts =
    my_project = my_project.__main__:main
pyproject.toml (probably not relevant)
[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
__main__.py:
from my_project import foo

def main():
    foo.bar()

if __name__ == '__main__':
    main()
To build and upload, I'm running the following (python is Python 3.10):
python -m build
python -m twine upload --repository testpypi dist/*
Then to install and run:
pip install -i https://test.pypi.org/pypi/ --extra-index-url https://pypi.org/simple my-project --upgrade
my_project
How can I make this console script work?
Also, this current method of setting console_scripts only allows it to be run as my_project; is it possible to also make it work by python -m my_project? Or perhaps this will work once my main issue is fixed.
It's funny, but I had the same frustration when trying to install scripts on multiple platforms (as Python calls them: posix and nt).
So I wrote setup-py-script in 2020. It's up on github now.
It installs scripts that use their own modules as a self-contained zip-file. (This method was inspired by youtube-dl.) That means no more leftover files when you delete a script but forget to remove the module et cetera.
It does not require root or administrator privileges; installation is done in user-accessible directories.
You might have to structure your project slightly differently; the script itself is not in the module directory. See the project README.
I finally got back to this problem today and it appears that I was using an incorrect source layout, which caused the pip module installation to not work. I switched to a directory structure like this one:
├── src
│ └── mypackage
│ ├── __init__.py
│ └── mod1.py
├── setup.py
└── setup.cfg
and modified the relevant parts of my setup.cfg:
[options]
package_dir =
    =src
packages = find:

[options.packages.find]
where = src
Then I can run it like python -m mypackage. This also made the console scripts work. It works on Linux but I presume it also works on other systems.
I have a Python package I would like to distribute. I have the package set up and am able to download the tarball, unzip it, and install it using:
python setup.py install
which works fine.
I would also like to upload the package to PyPi, and enable it to be installed using pip.
However, the package contains f2py-wrapped Fortran, which needs to be compiled at build time, with the resulting .so files moved to the eventual installation folder. I am confused as to how to do this using:
python3 setup.py sdist
followed by:
pip3 install pkg_name_here.tar.gz
The reason being that when I run
python3 setup.py sdist
the custom commands are being run, part of which tries to move the compiled *.so files to the installation folder, which has not yet been created. An outline of the code I have used is shown here:
import os
from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info

'''
BEGIN CUSTOM INSTALL COMMANDS
These classes are used to hook into setup.py's install process. Depending on the context:
$ pip install my-package
Can yield `setup.py install`, `setup.py egg_info`, or `setup.py develop`
'''

def custom_command():
    import sys
    if sys.platform in ['darwin', 'linux']:
        os.system('./custom_command.sh')

class CustomInstallCommand(install):
    def run(self):
        install.run(self)
        custom_command()

class CustomDevelopCommand(develop):
    def run(self):
        develop.run(self)
        custom_command()

class CustomEggInfoCommand(egg_info):
    def run(self):
        egg_info.run(self)
        custom_command()

'''
END CUSTOM INSTALL COMMANDS
'''

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
        'develop': CustomDevelopCommand,
        'egg_info': CustomEggInfoCommand,
    },
    ...
)
In my instance custom_command() compiles and wraps the Fortran and copies the lib files to the installation folder.
What I would like to know is whether there is a way of running these custom commands only during installation with pip, i.e. avoiding custom_command() being run during packaging and running it only at install time.
Update
Following Pierre de Buyl's suggestion I have made some progress, but still do not have this working.
The setup.py file currently looks something like:
def setup_f90_ext(parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join
    config = Configuration('', parent_package, top_path)
    tort_src = [join('PackageName/', 'tort.f90')]
    config.add_library('tort', sources=tort_src,
                       extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                       extra_link_args=['-lgomp'])
    sources = [join('PackageName', 'f90wrap_tort.f90')]
    config.add_extension(name='',
                         sources=sources,
                         extra_f90_compile_args=['-fopenmp -lgomp -O3'],
                         libraries=['tort'],
                         extra_link_args=['-lgomp'],
                         include_dirs=['build/temp*/'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    import subprocess
    import os
    import sys

    version_file = open(os.getcwd() + '/PackageName/' + 'VERSION')
    __version__ = version_file.read().strip()
    subprocess.call(cmd, shell=True)
    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Package description',
              'description': 'Description',
              'long_description': open('README.txt').read(),
              }
    config2 = dict(config, **setup_f90_ext(parent_package='PackageName', top_path='').todict())
    setup(**config2)
where f90wrap_tort.f90 is the f90wrap-generated Fortran file and tort.f90 is the original Fortran.
This file works with python setup.py install if I run the command twice.
The first time I run python setup.py install I get the following error:
gfortran:f90: ./PackageName/f90wrap_tort.f90
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
f951: Warning: Nonexistent include directory ‘build/temp*/’ [-Wmissing-include-dirs]
./PackageName/f90wrap_tort.f90:4:8:
use tort_mod, only: test_node
1
Fatal Error: Can't open module file ‘tort_mod.mod’ for reading at (1): No such file or directory
The reason I put the include_dirs=['build/temp*/'] argument in the extension is that, after running python setup.py install the first time, I noticed tort_mod was being built and stored there.
What I can't figure out is how to get the linking correct so that this is all done in one step.
Can anyone see what I am missing?
After a bit of googling, I suggest the following:
Use NumPy's distutils
Use the add_library keyword (seen here) for your plain Fortran files. This will build the Fortran files as a library but not try to interface to them with f2py.
Pre-build the f90 wrappers with f90wrap, include them in your package archive and specify those files as source in the extension.
I did not test the whole solution as it is a bit time consuming, but this is what SciPy does for some of their modules, see here.
The documentation of NumPy has an item on add_library.
EDIT 1: after building with the include_dirs=['build/temp.linux-x86_64-2.7']) config, I obtain this directory structure on the first build attempt.
build/lib.linux-x86_64-2.7
├── crystal_torture
│ ├── cluster.py
│ ├── dist.f90
│ ├── f90wrap_tort.f90
│ ├── graph.py
│ ├── __init__.py
│ ├── minimal_cluster.py
│ ├── node.py
│ ├── node.pyc
│ ├── pymatgen_doping.py
│ ├── pymatgen_interface.py
│ ├── tort.f90
│ ├── tort.py
│ └── tort.pyc
└── crystal_torture.so
I read a lot of answers on this question, but no solution works for me.
Project layout:
generators_data\
en_family_names.txt
en_female_names.txt
__init__.py
generators.py
setup.py
I want to include "generators_data" with its content in the installation. My setup.py:
from distutils.core import setup

setup(name='generators',
      version='1.0',
      package_data={'generators': ['generators_data/*']}
      )
I tried
python setup.py install
got
running install
running build
running install_egg_info
Removing c:\Python27\Lib\site-packages\generators-1.0-py2.7.egg-info
Writing c:\Python27\Lib\site-packages\generators-1.0-py2.7.egg-info
but the generators_data directory doesn't appear in "c:\Python27\Lib\site-packages\". Why?
The code you posted contains two issues: setup.py should be sibling to the package you want to distribute, not inside it, and you need to list packages in setup.py.
Try this layout:
generators/ # project root, the directory you get from git clone or equivalent
setup.py
generators/ # Python package
__init__.py
# other modules
generators_data/
names.txt
And this setup.py:
from distutils.core import setup

setup(name='generators',
      version='1.0',
      packages=['generators'],
      package_data={'generators': ['generators_data/*']},
      )
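Once the package installs with its data files, they can be read without hard-coding site-packages paths via importlib.resources (Python 3.9+). A small sketch; the generators call at the end is hypothetical and assumes the layout above:

```python
from importlib import resources

def read_data(package: str, *parts: str) -> str:
    """Read a bundled data file from an installed package."""
    node = resources.files(package)
    for part in parts:
        node = node / part
    return node.read_text()

# Hypothetical call once 'generators' is installed:
# names = read_data("generators", "generators_data", "en_family_names.txt")
```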