I am currently working on a package and am confused by setuptools. The package has many dependencies, and with these dependencies multiple scripts can be executed via the CLI, e.g.:
> main_pkg
> main_pkg_which_needs_dep1
> main_pkg_which_needs_dep2
> ...
Not all scripts need to be available on a system, only the relevant ones. So I thought I could simply modify my setup.py as follows:
...
entry_points=dict(console_scripts=[
    'main_pkg = main_pkg.main_pkg:main',
    'main_pkg_which_needs_dep1 = main_pkg.main_pkg:main_dep1 [dep1]',
    ...
]),
...
extras_require={
    "dep1": ["psycopg"],
    "dep2": ["apsw"],
    "dep3": ["numpy"],
    ...
},
I assumed that if someone executes pip install main_pkg, only main_pkg would be available on the CLI (and accordingly, after pip install main_pkg[dep1], both main_pkg and main_pkg_which_needs_dep1 would be available).
However, executing pip install main_pkg makes all the other console_scripts available on the CLI as well, and they fail when executed (e.g. main_pkg_which_needs_dep1) due to missing dependencies.
Is this behaviour expected by setuptools?
From the documentation I am reading the following:
It is up to the installer to determine how to handle the situation where PDF was not indicated (e.g. omit the console script, provide a warning when attempting to load the entry point, assume the extras are present and let the implementation fail later).
Also, looking here, the documentation mentions the following:
In this case, the hello-world script is only viable if the pretty-printer extra is indicated, and so a plugin host might exclude that entry point (i.e. not install a console script) if the relevant extra dependencies are not installed.
Am I understanding the documentation correctly that the installer (the plugin host, i.e. pip?) has to handle this case, which is currently not working?
Or do I have to modify setup.py further to achieve such behaviour?
Thanks in advance!
I ran into this same problem. Based on this thread: https://github.com/pypa/pip/issues/9726, it does not look like you can optionally install console scripts.
However, this comment: https://github.com/pypa/pip/issues/9726#issuecomment-826381705 proposes a solution that may help you. I'll copy-paste it below.
Have myscript with the extra [cli] depend on the myscript-cli package, and have myscript-cli depend on myscript but contain the entry point for the console_script of the main package.
If you install myscript[cli], it requires the myscript-cli package, which then gets installed and contains the entry point you wanted. This makes myscript[cli] or myscript-cli install both packages, but permits a plain myscript install that will not require the -cli package and thus will not provide the entry point.
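To make the proposed split concrete, here is a minimal sketch of the two setup.py files (all names are hypothetical, chosen to match the quoted comment): the core package only declares an extra that pulls in the companion package, and the companion package owns the console script and depends on the core.
# setup.py of the core package "myscript" (hypothetical name)
from setuptools import setup

setup(
    name='myscript',
    version='0.1',
    packages=['myscript'],
    extras_require={
        # pip install myscript[cli] pulls in the entry-point package below
        'cli': ['myscript-cli'],
    },
)
# setup.py of the companion package "myscript-cli" (hypothetical name)
from setuptools import setup

setup(
    name='myscript-cli',
    version='0.1',
    install_requires=['myscript'],
    entry_points={
        'console_scripts': ['myscript = myscript.main:main'],
    },
)
With this layout, pip install myscript gives you the library without any console script, while pip install myscript[cli] (or pip install myscript-cli) installs both packages and provides the entry point.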
Related
I saw this nice explanation video (link) about packaging using pip, and I have two questions:
The first one is:
I wrote some code that I want to share with my colleagues, but I do not aim to share it via PyPI. I want to share it internally, so everyone can install it within their own environment.
I actually don't need to create a wheel file with python setup.py bdist_wheel, right? I create the setup.py file, I can install the package with pip install -e . (for editable use), and everyone else can do so as well after cloning the repository. Is this right?
My second question is more technical:
I create the setup.py file:
from setuptools import setup
setup(
    name='helloandbyemate',
    version='0.0.1',
    description="Say hello in slang",
    py_modules=['hellomate'],
    package_dir={"": "src"}
)
To test it, I wrote a file hellomate.py which contains a function that prints hello, mate!. I put this file in src/. In the setup.py file I list only this module in py_modules. There is another module in src/ called byemate.py. When I install the package, the module byemate.py is installed as well, although I only put hellomate in py_modules. Does anyone have an explanation for this behaviour?
I actually needn't to create a wheel file ... everyone else can do it so as well, after cloning the repository. Is this right?
This is correct. However, the installation from source is slower, so you may want to publish wheels to an index anyway if you would like faster installs.
When I install the whole module, it installs the module byemate.py as well, although I only put hellomate in the list of py_modules. Has anyone an explanation for this behaviour?
Yes, this is an artifact of the "editable" installation mode. It works by putting the src directory onto the sys.path, via a line in the path configuration file .../lib/pythonX.Y/site-packages/easy-install.pth. This means that the entire source directory is exposed and everything in there is available to import, whether it is packaged up into a release file or not.
The benefit is that source code is "editable" without reinstallation (adding/removing/modifying files in src will be reflected in the package immediately)
The drawback is that the editable installation is not exactly the same as a "real" installation, where only the files specified in the setup.py will be copied into site-packages directly
If you don't want other files such as byemate.py to be available to import, use a regular install pip install . without the -e option. However, local changes to hellomate.py won't be reflected until the installation step is repeated.
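A quick way to see which modules actually ended up importable after either kind of install is a small check like the one below (a sketch only, using the module names from the question):
# Sketch: report whether each module can currently be found on sys.path.
import importlib.util

for name in ("hellomate", "byemate"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "not importable")
After a plain editable install both modules resolve (from src/), whereas after a regular pip install . only hellomate does.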
Strict editable installs
It is possible to get a mode of installation where byemate.py is not exposed at all, but live modifications to hellomate.py are still possible. This is the "strict" editable mode of setuptools. However, it is not possible using setup.py; you have to use a modern build-system declaration in pyproject.toml:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "helloandbyemate"
version = "0.0.1"
description = "Say hello in slang"
[tool.setuptools]
py-modules = ["hellomate"]
include-package-data = false
[tool.setuptools.package-dir]
"" = "src"
Now you can perform a strict install with:
pip install -e . --config-settings editable_mode=strict
I am aware of this popular topic; however, I am running into a different outcome when installing a Python app using pip with git+https versus python setup.py.
I am building a Docker image. In an image containing several other Python apps, I am trying to install this custom webhook.
Using git+https
RUN /venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=webhooks/sentry
This seems to install the webhook the right way, as the relevant endpoint is later discoverable.
What is more, when I exec into the running container and search for the relevant files, I see the following:
./venv/lib/python3.7/site-packages/sentry_sdk
./venv/lib/python3.7/site-packages/__pycache__/alerta_sentry.cpython-37.pyc
./venv/lib/python3.7/site-packages/sentry_sdk-0.15.1.dist-info
./venv/lib/python3.7/site-packages/alerta_sentry.py
./venv/lib/python3.7/site-packages/alerta_sentry-5.0.0-py3.7.egg-info
In my second approach I just copy this directory locally, and in my Dockerfile I do:
COPY sentry /app/sentry
RUN /venv/bin/python /app/sentry/setup.py install
This does not install the webhook appropriately, and what is more, in the respective container I see a different file layout:
./venv/lib/python3.7/site-packages/sentry_sdk
./venv/lib/python3.7/site-packages/sentry_sdk-0.15.1.dist-info
./venv/lib/python3.7/site-packages/alerta_sentry-5.0.0-py3.7.egg
./alerta_sentry.egg-info
./dist/alerta_sentry-5.0.0-py3.7.egg
(the sentry_sdk-related files must be irrelevant)
Why does the second approach fail to install the webhook appropriately?
Should these two options yield the same result?
What finally worked was the following:
RUN /venv/bin/pip install /app/sentry/
I don't know the subtle differences between these two installation modes.
I did notice, however, that /venv/bin/python /app/sentry/setup.py install did not produce an alerta_sentry.py but only the .egg file, i.e. ./venv/lib/python3.7/site-packages/alerta_sentry-5.0.0-py3.7.egg.
On the other hand, /venv/bin/pip install /app/sentry/ unpacked (?) the .egg, creating ./venv/lib/python3.7/site-packages/alerta_sentry.py.
I also don't know why the second installation option (i.e. the one creating the .egg file) did not work at runtime.
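For anyone debugging something similar, one way to check at runtime whether the installed distribution and its module are actually visible to the interpreter is a sketch like this (the distribution and module names are taken from the file listings above):
# Sketch: check whether the distribution metadata and the module are discoverable.
import importlib.util
import pkg_resources

try:
    dist = pkg_resources.get_distribution("alerta-sentry")
    print(dist.project_name, dist.version, dist.location)
except pkg_resources.DistributionNotFound:
    print("distribution metadata not visible on sys.path")

print(importlib.util.find_spec("alerta_sentry"))  # None if the module cannot be imported
If the zipped .egg ends up off sys.path, both checks come up empty; that would be one possible explanation for the runtime discovery failure.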
(Before responding with a 'see this link' answer, know that I've been searching for hours and have probably read it all. I've done my due diligence; I just can't seem to find the solution.)
That said, I'll start with my general setup and give details after.
Setup: On my desktop, I have a project that I am running in PyCharm, Python 3.4, using a virtual environment. In the cloud (AWS), I have an EC2 instance running Ubuntu. I'm not using a virtual environment in the cloud. The cloud machine has both Python 2.7 and Python 3.5 installed.
[Edit] I've switched to a virtual environment in my cloud environment and am installing from the setup distribution (still broken).
Problem: On my desktop, both within PyCharm and from the command line (within the virtual environment, using workon project), I can run a particular file called "do_daily.py" without any issues. However, if I try to run the same file on the cloud server, I get the famous import error.
[Edit] Running directly from the command line on the remote server:
python3 src/do_daily.py
File "src/do_daily.py", line 3, in <module>
from src.db_backup import dev0_backup as dev0bk
ImportError: No module named 'src.db_backup'
Folder structure: My folder structure for the specific import is (among other things):
+ project
+ src
- __init__.py
- do_daily.py
+ db_backup
- __init__.py
- dev0_backup.py
Python Path: (echo $PYTHONPATH)
/home/ubuntu/automation/Project/src/tg_servers:/home/ubuntu/automation/Project/src/db_backup:/home/ubuntu/automation/Project/src/aws:/home/ubuntu/automation/Project/src:/home/ubuntu/automation/Project
Other stuff:
print(sys.executable) = /usr/bin/python3
print(sys.path) = gives me all the above plus a bunch of default paths.
I have run out of ideas and would appreciate any help.
Thank you,
SteveJ
SOLUTION
Clearly the accepted answer was the most comprehensive and represents the best approach to the problem. However, for those seeing this later: I was able to solve my specific problem a little more directly.
(From within the virtual environment), both add2virtualenv and creating .pth files worked. What I was missing is that I had to add all the packages: src, db_backup, pkgx,y,z, etc.
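For reference, the .pth approach boils down to dropping a text file into the virtual environment's site-packages that lists one directory per line to be added to sys.path. A throwaway sketch (the paths are the ones from the question, and the file name is arbitrary):
# Sketch: write a .pth file into the active environment's site-packages.
# Every line in a .pth file is a directory that gets appended to sys.path.
import pathlib
import sysconfig

site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
(site_packages / "project.pth").write_text(
    "/home/ubuntu/automation/Project\n"
    "/home/ubuntu/automation/Project/src\n"
    "/home/ubuntu/automation/Project/src/db_backup\n"
)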
I have created a github repository (https://github.com/thebjorn/pyimport.git), and tested the code on a freshly created AWS/Ubuntu instance.
First the installs and updates I did (installing and updating pip3):
ubuntu#:~$ sudo apt-get update
ubuntu#:~$ sudo apt install python3-pip
ubuntu#:~$ pip3 install -U pip
Then get the code:
ubuntu#:~$ git clone https://github.com/thebjorn/pyimport.git
My version of do_daily.py imports dev0_backup, contains a function that tells us it was called, and a __main__ section (for calling with -m or filename):
ubuntu@ip-172-31-29-112:~$ cat pyimport/src/do_daily.py
from __future__ import print_function
from src.db_backup import dev0_backup as dev0bk
def do_daily_fn():
    print("do_daily_fn called")

if __name__ == "__main__":
    do_daily_fn()
The setup.py file points directly to the do_daily_fn:
ubuntu@ip-172-31-29-112:~$ cat pyimport/setup.py
from setuptools import setup
setup(
    name='pyimport',
    version='0.1',
    description='pyimport',
    url='https://github.com/thebjorn/pyimport.git',
    author='thebjorn',
    license='MIT',
    packages=['src'],
    entry_points={
        'console_scripts': """
            do_daily = src.do_daily:do_daily_fn
        """
    },
    zip_safe=False
)
Install the code in dev mode:
ubuntu#:~$ pip3 install -e pyimport
I can now call do_daily in a number of ways (notice that I haven't done anything with my PYTHONPATH).
The console_scripts entry in setup.py makes it possible to call do_daily by just typing its name:
ubuntu#:~$ do_daily
do_daily_fn called
Installing the package (in dev mode or otherwise) makes the -m flag work out of the box:
ubuntu#:~$ python3 -m src.do_daily
do_daily_fn called
You can even call the file directly (although this is by far the ugliest way and I would recommend against it):
ubuntu#:~$ python3 pyimport/src/do_daily.py
do_daily_fn called
Your PYTHONPATH should contain /home/ubuntu/automation/Project and likely nothing below it.
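As a quick sanity check (a sketch only, using the path from the question), the dotted import resolves once just the project root is on the path:
# Sketch: with the project root (and nothing deeper) on sys.path,
# src.db_backup.dev0_backup resolves through a normal absolute import.
import sys
import importlib.util

sys.path.insert(0, "/home/ubuntu/automation/Project")
print(importlib.util.find_spec("src.db_backup.dev0_backup"))  # a ModuleSpec, not None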
There is every reason to use a virtualenv in production and never to install any packages into the system Python explicitly. The system Python is for running the OS-provided software written in Python. Don't mix it with your deployments.
A few questions here.
From which directory are you running your program?
Did you try to import the db_backup module inside of src/__init__.py?
Until now, my project had only .cpp files that were compiled into different binaries, and I managed to configure CPack to build a proper Debian package without any problems.
Recently I wrote a couple of Python applications and added them to the project, as well as some custom modules that I would also like to incorporate into the package.
After writing a setup.py script, I'm wondering how to add these files to the CPack configuration in such a way that setup.py gets executed automatically when the user installs the package on the system with dpkg -i package.deb.
I'm struggling to find relevant information on how to configure CPack to install custom Python applications/modules. Has anyone tried this?
I figured out a way to do it, but it's not very simple. I'll do my best to explain the procedure, so please be patient.
The idea of this approach is to use postinst and prerm to install and remove the python application from the system.
In the CMakeLists.txt that defines the project, you need to state that CPack is going to be used to generate a .deb package. There are some variables that need to be filled with info related to the package itself, but one named CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA is very important because it's used to specify the location of postinst and prerm, which are standard scripts of the Debian packaging system that are automatically executed by dpkg when the package is installed/removed.
At some point of your main CMakeLists.txt you should have something like this:
add_subdirectory(name_of_python_app)
set(CPACK_COMPONENTS_ALL_IN_ONE_PACKAGE 1)
set(CPACK_PACKAGE_NAME "fake-package")
set(CPACK_PACKAGE_VENDOR "ACME")
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "fake-package - brought to you by ACME")
set(CPACK_PACKAGE_VERSION "1.0.2")
set(CPACK_PACKAGE_VERSION_MAJOR "1")
set(CPACK_PACKAGE_VERSION_MINOR "0")
set(CPACK_PACKAGE_VERSION_PATCH "2")
set(CPACK_SYSTEM_NAME "i386")
set(CPACK_GENERATOR "DEB")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "ACME Technology")
set(CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.3.1-6), libgcc1 (>= 1:3.4.2-12), python2.6, libboost-program-options1.40.0 (>= 1.40.0)")
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_SOURCE_DIR}/name_of_python_app/postinst;${CMAKE_SOURCE_DIR}/name_of_python_app/prerm;")
set(CPACK_SET_DESTDIR "ON")
include(CPack)
Some of these variables are optional, but I'm filling them with info for educational purposes.
Now, let's take a look at the scripts:
postinst:
#!/bin/sh
# postinst script for fake_python_app
set -e
cd /usr/share/pyshared/fake_package
sudo python setup.py install
prerm:
#!/bin/sh
# prerm script
#
# Removes all files installed by: ./setup.py install
sudo rm -rf /usr/share/pyshared/fake_package
sudo rm /usr/local/bin/fake_python_app
If you noticed, the postinst script enters /usr/share/pyshared/fake_package and executes the setup.py lying there to install the app on the system. Where does this file come from, and how does it end up there? This file is created by you and will be copied to that location when your package is installed on the system. This action is configured in name_of_python_app/CMakeLists.txt:
install(FILES setup.py
    DESTINATION "/usr/share/pyshared/fake_package"
)
install(FILES __init__.py
    DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_python_app
    DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_module_1.py
    DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_module_2.py
    DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
As you can probably tell, besides the Python application I want to install, there are also two custom Python modules that I wrote which also need to be installed. Below I describe the contents of the most important files:
setup.py:
#!/usr/bin/env python
from distutils.core import setup
setup(name='fake_package',
      version='1.0.5',
      description='Python modules used by fake-package',
      py_modules=['fake_package.fake_module_1', 'fake_package.fake_module_2'],
      scripts=['fake_package/fake_python_app']
)
__init__.py: an empty file.
fake_python_app: your Python application that will be installed in /usr/local/bin.
And that's pretty much it!
A setup.py file is the equivalent of the configure && make && make install dance for a standard Unix source distribution, and as such is inappropriate to run as part of a distribution's package install process. See this discussion of the different ways to include Python modules in a .deb package.
I'm trying to install the PyMySQL module for Python so that I can set up Django (see this previous question).
I can't get easy_install.exe PyMySQL-0.3-py2.6.egg to run for the life of me. Every time I get the error easy_install.exe is not recognized as an internal or external command... I've tried adding various directories to my system path, including:
C:\Python27\Lib\site-packages\;
C:\Python27\Scripts\;
C:\Python27\Scripts\easy_install.exe
C:\Python27\Scripts\easy_install.exe PyMySQL-0.3-py2.6.egg
What am I missing that is keeping this from executing? (Note: I'm on Windows 7.)
You have to install setuptools first
[edit]
Uh,
C:\Users\Robus>easy_install
Yada yada, not found
C:\Python26\Scripts>easy_install
error: No urls, filenames, or requirements specified (see --help)
C:\Python26>
The next best thing I can think of is this: do you, by any chance, have more than one version of Python installed? In that case setuptools might have been installed somewhere else.
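If there is more than one Python on the machine, a quick way to see which interpreter and which Scripts directory are actually being used is a small check like this (a sketch only):
# Sketch: print the interpreter in use and where its scripts live on Windows.
import os
import sys

print(sys.executable)                       # the python.exe that is running
print(os.path.join(sys.prefix, "Scripts"))  # where easy_install.exe should live
If that Scripts directory is not the one on your PATH, the wrong installation is being picked up.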