pip install wheel with custom entry points

In a Python virtualenv on Windows, I've installed a custom package that overrides setuptools.command.easy_install.get_script_args to create a new custom type of entry point, 'custom_entry'.
I have another package that I want to prepare with setuptools exposing a custom entry point.
If I prepare an egg distribution of this package and install it with my modified easy_install.exe, this creates the custom entry points correctly.
However, if I prepare a wheel distribution and install it with a modified pip.exe, the custom entry points do not get added.
Why does pip not follow the same install procedure as easy_install?
Reading the source for pip, it seems that the function get_entrypoints in wheel.py excludes all entry points other than console_scripts and gui_scripts. Is this correct?
If so, how should I install custom entry points for pip installations?
---- Edit
It looks like I should provide more details.
In my first package, custom-installer, I'm overriding (monkey-patching, really) easy_install.get_script_args, in custom_install.__init__.py:
from setuptools.command import easy_install

_GET_SCRIPT_ARGS = easy_install.get_script_args

def get_script_args(dist, executable, wininst):
    for script_arg in _GET_SCRIPT_ARGS(dist, executable, wininst):
        # replicate existing behaviour; handles console_scripts and other entry points
        yield script_arg
    for group in ['custom_entry']:
        for name, _ in dist.get_entry_map(group).items():
            script_text = (
                ## some custom stuff
            )
            ## do something else
            ## yield some other stuff, to create adjunct files to the -custom.py script
            yield (name + '-custom.py', script_text, 't')

easy_install.get_script_args = get_script_args
main = easy_install.main
And in that package's setup.py, I provide a (console_script) entry point for my custom installer:
entry_points={
    'console_scripts': [
        'custom_install = custom_install.__init__:main'
    ]
}
Installing this package with pip correctly creates the installer script /venv/Scripts/custom_install.exe
With my second package, customized, I have both regular and custom entry points to install from setup.py, for two modules custom and console.
entry_points={
    'console_scripts': [
        'console = console.__main__:main'
    ],
    'custom_entry': [
        'custom = custom.__main__:main'
    ]
}
I would like to see both of these entry points installed regardless of the install procedure.
If I build the package customized as an egg distribution and install this with custom_install.exe created by custom-installer, then both entry points of customized are installed.
I would like to be able to install this package as a wheel file using pip, but from reading the source code, pip seems to explicitly skip any entry points other than 'console_scripts' and 'gui_scripts':
def get_entrypoints(filename):
    if not os.path.exists(filename):
        return {}, {}

    # This is done because you can pass a string to entry_points wrappers which
    # means that they may or may not be valid INI files. The attempt here is to
    # strip leading and trailing whitespace in order to make them valid INI
    # files.
    with open(filename) as fp:
        data = StringIO()
        for line in fp:
            data.write(line.strip())
            data.write("\n")
        data.seek(0)

    cp = configparser.RawConfigParser()
    cp.readfp(data)

    console = {}
    gui = {}
    if cp.has_section('console_scripts'):
        console = dict(cp.items('console_scripts'))
    if cp.has_section('gui_scripts'):
        gui = dict(cp.items('gui_scripts'))
    return console, gui
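For comparison, here is a minimal sketch (not pip's actual code) of reading an arbitrary group, such as this question's 'custom_entry', from the same INI-style entry_points.txt data that get_entrypoints parses; the sample data mirrors the customized package above:

```python
import configparser
from io import StringIO

# sample entry_points.txt content, as setuptools writes it into the metadata
ENTRY_POINTS_TXT = """\
[console_scripts]
console = console.__main__:main

[custom_entry]
custom = custom.__main__:main
"""

def get_group(text, group):
    # parse the INI-style entry_points.txt content and return one group as a dict
    cp = configparser.RawConfigParser()
    cp.read_file(StringIO(text))
    return dict(cp.items(group)) if cp.has_section(group) else {}

print(get_group(ENTRY_POINTS_TXT, 'custom_entry'))
```

Any group name parses fine; pip's get_entrypoints simply never asks for anything beyond console_scripts and gui_scripts.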
Subsequently, pip generates entry point scripts using a completely different code path from easy_install. Presumably, I could override pip's implementations of these, as I did with easy_install, to create my custom entry points, but I feel like I'm going the wrong way.
Can anyone suggest a simpler way of implementing my custom entry points that is compatible with pip? If not, I can override get_entrypoints and move_wheel_files.

You will probably need to use the keyword console_scripts in your setup.py file. See the following answer:
entry_points does not create custom scripts with pip or easy_install in Python?
It basically states that you need to do the following in your setup.py script:
entry_points = {
    'console_scripts': ['custom_entry_point = mypackage.mymod.test:foo']
}
See also: http://calvinx.com/2012/09/09/python-packaging-define-an-entry-point-for-console-commands/

Related

How to create a .exe similar to pip.exe, jupyter.exe, etc. from C:\Python37\Scripts?

In order to make pip or jupyter available from the command line on Windows, no matter the current working directory, with just pip ... or jupyter ..., Python on Windows seems to use this method:
put C:\Python37\Scripts in the PATH
create a small ~100KB .exe file C:\Python37\Scripts\pip.exe or jupyter.exe that probably does little more than call a Python script with the interpreter (I guess?)
Then doing pip ... or jupyter ... in command-line works, no matter the current directory.
Question: how can I create a similar ~100KB mycommand.exe that I could put in a directory which is in the PATH, so that I can run mycommand from anywhere on the command line?
Note: I'm not speaking about pyinstaller, cxfreeze, etc. (that I already know and have used before); I don't think these C:\Python37\Scripts\ exe files use this.
Assuming you are using setuptools to package this application, you need to create a console_scripts entry point as documented here:
https://packaging.python.org/guides/distributing-packages-using-setuptools/#console-scripts
For a single top-level module it could look like the following:
setup.py
#!/usr/bin/env python3
import setuptools

setuptools.setup(
    py_modules=['my_top_level_module'],
    entry_points={
        'console_scripts': [
            'mycommand = my_top_level_module:my_main_function',
        ],
    },
    # ...
    name='MyProject',
    version='1.0.0',
)
my_top_level_module.py
#!/usr/bin/env python3

def my_main_function():
    print("Hi!")

if __name__ == '__main__':
    my_main_function()
And then install it with a command such as:
path/to/pythonX.Y -m pip install path/to/MyProject
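The 'module:function' string on the right-hand side of a console_scripts definition is resolved by importing the module and fetching the named attribute. A rough sketch of that resolution, using os.path:join as a stand-in spec (since my_top_level_module is not installed here):

```python
import importlib

def resolve(spec):
    # split an entry-point object reference 'module:attr' and look it up
    modname, _, attr = spec.partition(':')
    module = importlib.import_module(modname)
    return getattr(module, attr)

# stand-in for 'my_top_level_module:my_main_function'
func = resolve('os.path:join')
```

Note this simplified version handles only a plain attribute name after the colon; dotted references like 'pkg.mod:Class.method' would need getattr chaining.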

How do you get the filename of a Python wheel when running setup.py?

I have a build process that creates a Python wheel using the following command:
python setup.py bdist_wheel
The build process can be run on many platforms (Windows, Linux, py2, py3 etc.) and I'd like to keep the default output names (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) to upload to PyPI.
Is there any way to get the generated wheel's filename (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) and save it to a variable, so I can install the wheel later in the script for testing?
Ideally the solution would be cross-platform. My current approach is to clear the folder, list all files, and select the first (and only) file in the list, but this seems a very hacky solution.
setuptools
If you are using a setup.py script to build the wheel distribution, you can use the bdist_wheel command to query the wheel file name. The drawback of this method is that it uses bdist_wheel's private API, so the code may break on wheel package update if the authors decide to change it.
from setuptools.dist import Distribution

def wheel_name(**kwargs):
    # create a fake distribution from arguments
    dist = Distribution(attrs=kwargs)
    # finalize bdist_wheel command
    bdist_wheel_cmd = dist.get_command_obj('bdist_wheel')
    bdist_wheel_cmd.ensure_finalized()
    # assemble wheel file name
    distname = bdist_wheel_cmd.wheel_dist_name
    tag = '-'.join(bdist_wheel_cmd.get_tag())
    return f'{distname}-{tag}.whl'
The wheel_name function accepts the same arguments you pass to the setup() function. Example usage:
>>> wheel_name(name="mydist", version="1.2.3")
'mydist-1.2.3-py3-none-any.whl'
>>> wheel_name(name="mydist", version="1.2.3", ext_modules=[Extension("mylib", ["mysrc.pyx", "native.c"])])
'mydist-1.2.3-cp36-cp36m-linux_x86_64.whl'
Notice that the source files for native libs (mysrc.pyx or native.c in the above example) don't have to exist to assemble the wheel name. This is helpful in case the sources for the native lib don't exist yet (e.g. you are generating them later via SWIG, Cython or whatever).
This makes the wheel_name easily reusable in the setup.py script where you define the distribution metadata:
# setup.py
from setuptools import setup, find_packages, Extension
from setup_helpers import wheel_name

setup_kwargs = dict(
    name='mydist',
    version='1.2.3',
    packages=find_packages(),
    ext_modules=[Extension(...), ...],
    ...
)
file = wheel_name(**setup_kwargs)
...
setup(**setup_kwargs)
If you want to use it outside of the setup script, you have to organize access to the setup() args yourself (e.g. by reading them from a setup.cfg file or whatever).
This part is loosely based on my other answer to setuptools, know in advance the wheel filename of a native library
poetry
Things can be simplified a lot (it's practically a one-liner) if you use poetry because all the relevant metadata is stored in the pyproject.toml. Again, this uses an undocumented API:
from clikit.io import NullIO
from poetry.factory import Factory
from poetry.masonry.builders.wheel import WheelBuilder
from poetry.utils.env import NullEnv

def wheel_name(rootdir='.'):
    builder = WheelBuilder(Factory().create_poetry(rootdir), NullEnv(), NullIO())
    return builder.wheel_filename
The rootdir argument is the directory containing your pyproject.toml file.
flit
AFAIK flit can't build wheels with native extensions, so it can give you only the purelib name. Nevertheless, it may be useful if your project uses flit for distribution building. Notice this also uses an undocumented API:
from io import BytesIO
from pathlib import Path
from flit_core.wheel import WheelBuilder

def wheel_name(rootdir='.'):
    config = str(Path(rootdir, 'pyproject.toml'))
    builder = WheelBuilder.from_ini_path(config, BytesIO())
    return builder.wheel_filename
Implementing your own solution
I'm not sure whether it's worth it. Still, if you want to choose this path, consider using packaging.tags before you find some old deprecated stuff or even decide to query the platform yourself. You will still have to fall back to private stuff to assemble the correct wheel name, though.
My current approach to install the wheel is to point pip to the folder containing the wheel and let it search itself:
python -m pip install --no-index --find-links=build/dist mapscript
twine also can be pointed directly at a folder without needing to know the exact wheel name.
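If you do need the filename itself after the build, a small helper that picks the most recently written wheel from the output directory avoids hard-coding the tag (the 'dist' default here is just the usual bdist_wheel output location):

```python
from pathlib import Path

def newest_wheel(dist_dir='dist'):
    # return the most recently modified .whl in the build output, or None
    wheels = sorted(Path(dist_dir).glob('*.whl'),
                    key=lambda p: p.stat().st_mtime)
    return wheels[-1] if wheels else None
```

This is still the "list the folder" approach, but sorting by mtime makes it safe even when older wheels linger in dist/.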
I used a modified version of hoefling's solution. My goal was to copy the build to a "latest" wheel file. The setup() function will return an object with all the info you need, so you can find out what it actually built, which seems simpler than the solution above. Assuming you have a variable version in use, the following will get the file name I just built and then copies it.
import shutil

setup = setuptools.setup(
    # whatever options you currently have
)
wheel_built = 'dist/{}-{}.whl'.format(
    setup.command_obj['bdist_wheel'].wheel_dist_name,
    '-'.join(setup.command_obj['bdist_wheel'].get_tag()))
wheel_latest = wheel_built.replace(version, 'latest')
shutil.copy(wheel_built, wheel_latest)
print('Copied {} >> {}'.format(wheel_built, wheel_latest))
I guess one possible drawback is you have to actually do the build to get the name, but since that was part of my workflow, I was ok with that. hoefling's solution has the benefit of letting you plan the name without doing the build, but it seems more complex.

Custom logic in setup.py to change environment header

I have a Python package that I'm distributing on PyPI. I create a script called run_program1 that launches a GUI on the command line.
A snippet of my setup.py file:
setup(
    name='my_package',
    ...
    entry_points={
        'gui_scripts': [
            'run_program1 = program1:start_func',
        ]
    }
)
Unfortunately, the run_program1 executable fails when installed with Anaconda Python, with an error like this:
This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display of your Mac.
This issue turns out to be a fundamental issue between Anaconda and setuptools:
https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/9kQreoBIj3A
I'm trying to create an ugly hack to change the interpreter line in the executable that pip creates -- run_program1 -- from #!/Users/***/anaconda2/bin/python to #!/usr/bin/env pythonw. I can do this manually after installing on my machine by opening ~/anaconda2/bin/run_program1 and simply replacing the first line. With that edit, the executable works as expected. However, I need a hack that will do this for all users who use pip to install my_package.
I am using this approach to insert custom logic into my setup.py file: https://blog.niteoweb.com/setuptools-run-custom-code-in-setup-py/
class CustomInstallCommand(install):
    """Customized setuptools install command - prints a friendly greeting."""
    def run(self):
        print "Hello, developer, how are you? :)"
        install.run(self)

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
    }, ...)
What I can't figure out is, what should I put into the custom class to change the header in the run_program1 executable? Any ideas of how to approach this?

Changing console_script entry point interpreter for packaging

I'm packaging some python packages using a well known third party packaging system, and I'm encountering an issue with the way entry points are created.
When I install an entry point on my machine, the entry point contains a shebang pointing at whatever Python interpreter it was installed with, like so:
in /home/me/development/test/setup.py
from setuptools import setup
setup(
    entry_points={
        "console_scripts": [
            'some-entry-point = test:main',
        ]
    }
)
in /home/me/.virtualenvs/test/bin/some-entry-point:
#!/home/me/.virtualenvs/test/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'test==1.0.0','console_scripts','some-entry-point'
__requires__ = 'test==1.0.0'
import sys
from pkg_resources import load_entry_point

sys.exit(
    load_entry_point('test==1.0.0', 'console_scripts', 'some-entry-point')()
)
As you can see, the entry point boilerplate contains a hard-coded path to the python interpreter that's in the virtual environment that I'm using to create my third party package.
Installing this entry point using my third-party packaging system results in the entry point being installed on the machine. However, with this hard-coded reference to a python interpreter which doesn't exist on the target machine, the user must run python /path/to/some-entry-point.
The shebang makes this pretty unportable (which isn't a design goal of virtualenv, for sure; but I just need to make it a little more portable here).
I'd rather not resort to crazed find/xargs/sed commands. (Although that's my fallback.)
Is there some way that I can change the interpreter path after the shebang using setuptools flags or configs?
You can customize the console_scripts' shebang line by setting sys.executable (I learned this from a Debian bug report). That is to say...
sys.executable = '/bin/custom_python'

setup(
    entry_points={
        'console_scripts': [
            ... etc...
        ]
    }
)
Better, though, would be to include the 'executable' option when building...
setup(
    entry_points={
        'console_scripts': [
            ... etc...
        ]
    },
    options={
        'build_scripts': {
            'executable': '/bin/custom_python',
        },
    }
)
For future reference for someone who wants to do this at runtime without modifying the setup.py, it's possible to pass the interpreter path to setup.py build via pip with:
$ ./venv/bin/pip install --global-option=build \
--global-option='--executable=/bin/custom_python' .
...
$ head -1 ./venv/bin/some-entry-point
#!/bin/custom_python
Simply change the shebang of your setup.py to match the python you want your entry points to use:
#!/bin/custom_python
(I tried @damian's answer but it didn't work for me; maybe the setuptools version on Debian Jessie is too old.)

Resolving Python dependencies at installation time in setup.py based on an interpreter version

Suppose I have a bunch of requirement files like:
requirements.txt # common for both 2.x and 3.x
requirements-2.txt # 2.x
requirements-3.txt # 3.x
and I would like to populate install_requires argument in setup.py file based on the current Python interpreter version. Assume of course that pip handles the installation process.
Solution 1: Of course, I can write a simple function that reads and returns the correct requirements. With multiple projects this is obviously not acceptable, since I would have to copy the function everywhere.
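For reference, that per-project function could be as small as this sketch (file names are taken from the question; the base argument is an assumption, added for testability):

```python
import os
import sys

def find_requirements(base='.'):
    # read requirements.txt plus the file matching the running interpreter's
    # major version, skipping blank lines and comments
    names = ['requirements.txt', 'requirements-%d.txt' % sys.version_info[0]]
    reqs = []
    for name in names:
        path = os.path.join(base, name)
        if os.path.exists(path):
            with open(path) as f:
                reqs += [line.strip() for line in f
                         if line.strip() and not line.lstrip().startswith('#')]
    return reqs
```

The result can be passed straight to install_requires; the copy-paste burden is exactly the problem the question is trying to avoid.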
Solution 2: Next idea here is to write a simple package that does it for me, but the problem is that it should be available not only at distribution time (like python setup.py sdist), but more importantly, at installation time on one's machine.
I managed to write a simple module that does the thing; let's call it depmodule. I also had the following setup.py:
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages

try:
    from depmodule import find_requirements
except ImportError:
    # this line is executed when reading setup.py for the first time,
    # since depmodule is not installed yet
    find_requirements = lambda: []

setup(
    name='some-package',
    packages=find_packages(),
    # snip...
    platforms='any',
    # note that depmodule is listed here as a requirement, so it will be
    # installed before some-package, and thus available when it comes
    # to running setup.py of some-package
    install_requires=['depmodule'] + find_requirements(),
)
When it comes to pip install some-package, it actually resolves the dependencies correctly, but they are not picked up by pip, so it only installs depmodule and some-package (in that order) instead of depmodule dep1 dep2 ... some-package.
I tried to use setup_requires argument but with no luck. The dependency was downloaded, but I could not access it, since it was an egg package (not extracted).
Is there any way I can overcome this issue? Are there any alternatives (other approaches) that could help with this?
Thanks!
Since you do not want to replicate code across multiple projects, you might consider generating the setup.py, and/or updating parts of setup.py, from a single source.
This would be similar to generating a Makefile from a Makefile.in template.
I have a py_setup program for this. I call it with py_setup --new to generate a new setup.py in the current directory, taking parts of py_setup as a template. When py_setup is run with a filename as argument, it tries to update segments in that file but leaves the rest untouched.
When run without arguments and options, py_setup updates segments in all */setup.py files.
The segments in py_setup (and in the setup.py files) are separated by comment lines of the form #_## segment_name, or ended with #_#. Anything before or between segments is copied, but will never get updated in the setup.py.
Anything after the line #_### in py_setup is never copied; that is the actual py_setup program code.
Most lines are copied verbatim, except for the segment separator comments (stripped after the segment name) and the line that starts with:
setup = setup
From that line setup = is stripped at the start, so it ends up as a call to setup() in the setup.py but not when running py_setup.
When updating, only existing segments in the target setup.py are replaced with lines from the same-named segments in py_setup. Removing or changing a segment name makes sure those code changes will not be updated.
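The segment-replacement step described above can be sketched as follows (a simplified illustration of the idea, not py_setup itself: the #_### terminator and the 'setup = setup' rewrite are ignored):

```python
def split_segments(lines):
    # split file lines into (name, chunk) pairs; name is None outside segments
    segments, name, chunk = [], None, []
    for line in lines:
        if line.startswith('#_##'):
            segments.append((name, chunk))
            name, chunk = line[4:].strip(), [line]
        elif line.strip() == '#_#':
            chunk.append(line)
            segments.append((name, chunk))
            name, chunk = None, []
        else:
            chunk.append(line)
    segments.append((name, chunk))
    return segments

def update_segments(target_lines, template_lines):
    # replace each named segment in the target with the same-named segment
    # from the template; everything outside segments is left untouched
    template = {n: c for n, c in split_segments(template_lines) if n}
    out = []
    for name, chunk in split_segments(target_lines):
        out.extend(template.get(name, chunk) if name else chunk)
    return out
```

Segments missing from the template keep their original contents, which matches the behaviour of renaming a segment to opt out of updates.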
