What am I doing wrong pypi missing "Links for" - python

I'm trying out PyPI to publish some libraries, so I started with a simple project.
I have the following setup.py:
import os
from distutils.core import setup

setup(
    name='event_amd',
    packages=['event_amd'],
    description='Port for EventEmitter from nodejs',
    version='1.0.7',
    author='Borrey Kim',
    author_email='borrey#gmail.com',
    url='https://bitbucket.org/borreykim/event_amd',
    download_url='https://bitbucket.org/borreykim/event_amd/downloads/event_amd-1.0.6.tar.gz',
    keywords=['events'],
    long_description="""\
This is an initial step to port over EventEmitter of nodejs. This is done with the goal of having libraries that are cross platform so that cross communication is easier, and collected together.
"""
)
I've registered it, but sudo pip install event_amd gives me an error:
DistributionNotFound: No distributions at all found for event-amd
(I'm not sure how event_amd turns into event-amd?)
Also, there are no links under (which other projects seem to have):
https://pypi.python.org/simple/event_amd/
I was wondering if I am doing something wrong in the setup.py or what may be causing this.
Thanks in advance.

You need to upload a source archive after registering the release: python setup.py register sdist upload
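As for event_amd turning into event-amd: pip and the PyPI simple index normalize project names per PEP 503 (lowercase, with runs of -, _ and . collapsed into a single dash), so both spellings refer to the same project. A minimal sketch of the rule:

```python
import re

def normalize(name):
    """PEP 503 project-name normalization, as used by the PyPI simple index."""
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("event_amd"))  # -> event-amd
```

This is why the error message and the /simple/ URL show the dashed form even though setup.py declares the underscored one.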


Python requirements.txt restrict dependency to be installed only on atom processors

I'm using TensorFlow inside an x86_64 environment, but the processor is an Intel Atom. This processor lacks the AVX extension, and since the pre-built wheels for TensorFlow are compiled with AVX enabled, TensorFlow does not work and exits. Hence I had to build my own wheel, which I host on GitHub as a release file.
The problem I have is downloading this pre-built wheel only on Atom-based processors. I was able to achieve this previously using a setup.py file, where this can easily be detected, but I have migrated to pyproject.toml, which is very poor when it comes to customization and scripted installation support.
Is there anything, in addition to platform_machine=='x86_64', that checks for the processor type? Or has the migration to pyproject.toml killed my flexibility here?
The current requirements.txt is:
confluent-kafka @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/confluent_kafka-1.9.2-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow-2.8.4-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow-addons @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow_addons-0.17.1-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow-text @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow_text-2.8.2-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
rasa==3.4.2
SQLAlchemy==1.4.45
phonetics==1.0.5
de-core-news-md @ https://github.com/explosion/spacy-models/releases/download/de_core_news_md-3.4.0/de_core_news_md-3.4.0-py3-none-any.whl
For platform_machine=='aarch64' I need something similar for x86_64, but executed only in Atom processor environments.
The old setup.py was:
import platform
import subprocess
import os
from setuptools import setup

def get_requirements():
    requirements = []
    if platform.machine() == 'x86_64':
        command = "cat /proc/cpuinfo"
        all_info = subprocess.check_output(command, shell=True).strip()
        # AVX extension is the missing important information
        if b'avx' not in all_info or ("NO_AVX" in os.environ and os.environ['NO_AVX']):
            requirements.append('tensorflow @ file://localhost/' + os.getcwd() + '/pip-wheels/amd64/tensorflow-2.3.2-cp38-cp38-linux_x86_64.whl')
    elif platform.machine() == 'aarch64':
        ...
    requirements.append('rasa==3.3.3')
    requirements.append('SQLAlchemy==1.4.45')
    requirements.append('phonetics==1.0.5')
    requirements.append('de-core-news-md @ https://github.com/explosion/spacy-models/releases/download/de_core_news_md-3.4.0/de_core_news_md-3.4.0-py3-none-any.whl')
    return requirements

setup(
    ...
    install_requires=get_requirements(),
    ...
)
The line if b'avx' not in all_info or ("NO_AVX" in os.environ and os.environ['NO_AVX']) does the necessary differentiation.
If a pyproject.toml approach does not fit my needs, what is recommended for Python with more installation power that is not marked as legacy? Maybe there is something for Python, comparable to what Gradle is for building projects in the Java world (introduced to overcome XML's limitations by providing a complete scripting language), that I'm not aware of?
My recommendation would be to migrate to pyproject.toml as intended. I would declare dependencies such as tensorflow according to the standard specification for dependencies, but I would not use any direct references at all.
Then I would create some requirements.txt files in which I would list the dependencies that need special treatment (no need to list all dependencies), for example those that require a direct reference (and/or a pinned version). I would probably create one requirements file per platform, for example I would create a requirements-atom.txt.
As far as I know it should be possible to instruct pip to install from a remote requirements file via its URL. Something like this:
python -m pip install --requirement 'https://server.tld/path/requirements-atom.txt'
If you need to create multiple requirements.txt files with common parts, then probably a tool like pip-tools can help.
Maybe something like the following (untested):
requirements-common.in:
# Application (or main project)
MyApplication @ git+https://github.com/HandsFreeGadgets/MyApplication.git
# Common dependencies
CommonLibrary
AnotherCommonLibrary==1.2.3
requirements-atom.in:
--requirement requirements-common.in
# Atom CPU specific
tensorflow @ https://github.com/HandsFreeGadgets/tensorflow-atom/releases/download/v0.1/tensorflow-2.8.4-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
pip-compile requirements-atom.in > requirements-atom.txt
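If you still need the CPU detection itself, the old setup.py logic can live on as a small helper script that decides which requirements file to install. A rough sketch, where the file names are hypothetical and the /proc/cpuinfo read is Linux-only:

```python
import platform

def has_avx(cpuinfo_text):
    """Return True if the AVX flag appears in /proc/cpuinfo-style text."""
    return 'avx' in cpuinfo_text

def pick_requirements_file():
    """Choose a requirements file for the host CPU (hypothetical names)."""
    if platform.machine() == 'x86_64':
        try:
            with open('/proc/cpuinfo') as f:
                cpuinfo = f.read()
        except OSError:
            # not Linux; fall back to the default file
            return 'requirements.txt'
        if not has_avx(cpuinfo):
            return 'requirements-atom.txt'
    return 'requirements.txt'

if __name__ == '__main__':
    print(pick_requirements_file())
```

A build script could then run something like python -m pip install -r "$(python pick_requirements.py)", keeping the platform logic outside of pyproject.toml entirely.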

How do you get the filename of a Python wheel when running setup.py?

I have a build process that creates a Python wheel using the following command:
python setup.py bdist_wheel
The build process can be run on many platforms (Windows, Linux, py2, py3 etc.) and I'd like to keep the default output names (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) to upload to PyPI.
Is there any way to get the generated wheel's filename (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) and save it to a variable, so I can install the wheel later in the script for testing?
Ideally the solution would be cross-platform. My current approach is to clear the folder, list all files, and select the first (and only) file in the list, but this seems a very hacky solution.
setuptools
If you are using a setup.py script to build the wheel distribution, you can use the bdist_wheel command object to query the wheel file name. The drawback of this method is that it relies on bdist_wheel's private API, so the code may break on a wheel package update if the authors decide to change it.
from setuptools.dist import Distribution

def wheel_name(**kwargs):
    # create a fake distribution from arguments
    dist = Distribution(attrs=kwargs)
    # finalize bdist_wheel command
    bdist_wheel_cmd = dist.get_command_obj('bdist_wheel')
    bdist_wheel_cmd.ensure_finalized()
    # assemble wheel file name
    distname = bdist_wheel_cmd.wheel_dist_name
    tag = '-'.join(bdist_wheel_cmd.get_tag())
    return f'{distname}-{tag}.whl'
The wheel_name function accepts the same arguments you pass to the setup() function. Example usage:
>>> wheel_name(name="mydist", version="1.2.3")
'mydist-1.2.3-py3-none-any.whl'
>>> wheel_name(name="mydist", version="1.2.3", ext_modules=[Extension("mylib", ["mysrc.pyx", "native.c"])])
'mydist-1.2.3-cp36-cp36m-linux_x86_64.whl'
Notice that the source files for native libs (mysrc.pyx or native.c in the above example) don't have to exist to assemble the wheel name. This is helpful in case the sources for the native lib don't exist yet (e.g. you are generating them later via SWIG, Cython or whatever).
This makes the wheel_name easily reusable in the setup.py script where you define the distribution metadata:
# setup.py
from setuptools import setup, find_packages, Extension
from setup_helpers import wheel_name

setup_kwargs = dict(
    name='mydist',
    version='1.2.3',
    packages=find_packages(),
    ext_modules=[Extension(...), ...],
    ...
)
file = wheel_name(**setup_kwargs)
...
setup(**setup_kwargs)
If you want to use it outside of the setup script, you have to organize the access to setup() args yourself (e.g. reading them from a setup.cfg script or whatever).
This part is loosely based on my other answer to setuptools, know in advance the wheel filename of a native library
poetry
Things can be simplified a lot (it's practically a one-liner) if you use poetry because all the relevant metadata is stored in the pyproject.toml. Again, this uses an undocumented API:
from clikit.io import NullIO
from poetry.factory import Factory
from poetry.masonry.builders.wheel import WheelBuilder
from poetry.utils.env import NullEnv

def wheel_name(rootdir='.'):
    builder = WheelBuilder(Factory().create_poetry(rootdir), NullEnv(), NullIO())
    return builder.wheel_filename
The rootdir argument is the directory containing your pyproject.toml file.
flit
AFAIK flit can't build wheels with native extensions, so it can give you only the purelib name. Nevertheless, it may be useful if your project uses flit for distribution building. Notice this also uses an undocumented API:
from io import BytesIO
from pathlib import Path
from flit_core.wheel import WheelBuilder

def wheel_name(rootdir='.'):
    config = str(Path(rootdir, 'pyproject.toml'))
    builder = WheelBuilder.from_ini_path(config, BytesIO())
    return builder.wheel_filename
Implementing your own solution
I'm not sure whether it's worth it. Still, if you want to choose this path, consider using packaging.tags before you find some old deprecated stuff or even decide to query the platform yourself. You will still have to fall back to private stuff to assemble the correct wheel name, though.
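As a starting point for that path, packaging.tags enumerates the tags the running interpreter supports, ordered from most to least specific; a sketch, assuming the third-party packaging distribution is installed:

```python
from packaging.tags import sys_tags

# sys_tags() yields Tag objects, most specific first,
# e.g. cp312-cp312-manylinux_2_35_x86_64 on CPython 3.12 / Linux
best = next(iter(sys_tags()))
print(f"{best.interpreter}-{best.abi}-{best.platform}")
```

You would still need the distribution name and version from your own metadata to assemble the full wheel file name around that tag.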
My current approach to install the wheel is to point pip to the folder containing the wheel and let it search itself:
python -m pip install --no-index --find-links=build/dist mapscript
twine can also be pointed directly at a folder without needing to know the exact wheel name.
I used a modified version of hoefling's solution. My goal was to copy the built wheel to a "latest" wheel file. The setup() function returns an object with all the info you need, so you can find out what it actually built, which seems simpler than the solution above. Assuming you have a variable version in use, the following gets the file name of the wheel just built and then copies it.
import shutil
import setuptools

setup = setuptools.setup(
    # whatever options you currently have
)

wheel_built = 'dist/{}-{}.whl'.format(
    setup.command_obj['bdist_wheel'].wheel_dist_name,
    '-'.join(setup.command_obj['bdist_wheel'].get_tag()))
wheel_latest = wheel_built.replace(version, 'latest')
shutil.copy(wheel_built, wheel_latest)
print('Copied {} >> {}'.format(wheel_built, wheel_latest))
I guess one possible drawback is you have to actually do the build to get the name, but since that was part of my workflow, I was ok with that. hoefling's solution has the benefit of letting you plan the name without doing the build, but it seems more complex.

setuptools and the bdist_wheel command [duplicate]

I am working on a python2 package in which the setup.py contains some custom install commands. These commands actually build some Rust code and output some .dylib files that are moved into the python package.
An important point is that the Rust code is outside the python package.
setuptools is supposed to detect automatically if the python package is pure python or platform specific (if it contains some C extensions for instance).
In my case, when I run python setup.py bdist_wheel, the generated wheel is tagged as a pure python wheel: <package_name>-<version>-py2-none-any.whl.
This is problematic because I need to run this code on different platforms, and thus I need to generate one wheel per platform.
Is there a way, when building a wheel, to force the build to be platform-specific?
Here's the code that I usually look at from uwsgi
The basic approach is:
setup.py
# ...
try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def finalize_options(self):
            _bdist_wheel.finalize_options(self)
            self.root_is_pure = False
except ImportError:
    bdist_wheel = None

setup(
    # ...
    cmdclass={'bdist_wheel': bdist_wheel},
)
The root_is_pure bit tells the wheel machinery to build a non-purelib wheel instead of the pure pyX-none-any one. You can also get fancier by saying there are binary platform-specific components but no CPython ABI specific components.
The modules setuptools, distutils and wheel decide whether a python distribution is pure by checking if it has ext_modules.
If you build an external module on your own, you can still list it in ext_modules so that the building tools know it exists. The trick is to provide an empty list of sources so that setuptools and distutils will not try to build it. For example,
setup(
    ...,
    ext_modules=[
        setuptools.Extension(
            name='your.external.module',
            sources=[]
        )
    ]
)
This solution worked better for me than patching the bdist_wheel command. The reason is that bdist_wheel calls the install command internally and that command checks again for the existence of ext_modules to decide between purelib or platlib install. If you don't list the external module, you end up with the lib installed in a purelib subfolder inside the wheel. That causes problems when using auditwheel repair, which complains about the extensions being installed in a purelib folder.
You can also specify/spoof a specific platform name when building wheels by specifying a --plat-name:
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
Neither the root_is_pure trick nor the empty ext_modules trick worked for me, but after MUCH searching I finally found a working solution in 'pip setup.py bdist_wheel' no longer builds forced non-pure wheels
Basically, you override the has_ext_modules function in the Distribution class and set distclass to point to the overriding class. At that point, setup.py will believe you have a binary distribution, and will create a wheel with the specific version of Python, the ABI, and the current architecture. As suggested by https://stackoverflow.com/users/5316090/py-j:
from setuptools import setup
from setuptools.dist import Distribution

DISTNAME = "packagename"
DESCRIPTION = ""
MAINTAINER = ""
MAINTAINER_EMAIL = ""
URL = ""
LICENSE = ""
DOWNLOAD_URL = ""
VERSION = '1.2'
PYTHON_VERSION = (2, 7)

# Tested with wheel v0.29.0
class BinaryDistribution(Distribution):
    """Distribution which always forces a binary package with platform name"""
    def has_ext_modules(foo):
        return True

setup(name=DISTNAME,
      description=DESCRIPTION,
      maintainer=MAINTAINER,
      maintainer_email=MAINTAINER_EMAIL,
      url=URL,
      license=LICENSE,
      download_url=DOWNLOAD_URL,
      version=VERSION,
      packages=["packagename"],
      # Include pre-compiled extension
      package_data={"packagename": ["_precompiled_extension.pyd"]},
      distclass=BinaryDistribution)
I find Anthony Sottile's answer great, but it didn't work for me. My case is that I have to force the wheel to be created for x86_64 but for any python3, so making the root impure actually caused my wheel to be tagged py36-cp36 :(
A better way, IMO, in general is just to use sys.argv:
import sys
from setuptools import setup

sys.argv.extend(['--plat-name', 'x86_64'])

setup(name='example-wheel')

Custom logic in setup.py to change environment header

I have a Python package that I'm distributing on PyPI. It creates a script called run_program1 that launches a GUI from the command line.
A snippet of my setup.py file:
setup(
    name='my_package',
    ...
    entry_points={
        'gui_scripts': [
            'run_program1 = program1:start_func',
        ]
    }
)
Unfortunately, the run_program1 executable fails when installed with Anaconda Python, with an error like this:
This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display of your Mac.
This issue turns out to be a fundamental issue between Anaconda and setuptools:
https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/9kQreoBIj3A
I'm trying to create an ugly hack to change the shebang line in the executable that pip creates -- run_program1 -- from #!/Users/***/anaconda2/bin/python to #!/usr/bin/env pythonw. I can do this manually after installing on my machine by opening ~/anaconda2/bin/run_program1 and simply replacing the first line. With that edit, the executable works as expected. However, I need a hack that will do this for all users who use pip to install my_package.
I am using this approach to insert custom logic into my setup.py file: https://blog.niteoweb.com/setuptools-run-custom-code-in-setup-py/
from setuptools.command.install import install

class CustomInstallCommand(install):
    """Customized setuptools install command - prints a friendly greeting."""
    def run(self):
        print("Hello, developer, how are you? :)")
        install.run(self)

setup(
    ...
    cmdclass={
        'install': CustomInstallCommand,
    }, ...)
What I can't figure out is: what should I put into the custom class to change the header of the run_program1 executable? Any ideas on how to approach this?

How to perform custom build steps in setup.py?

The distutils module allows including and installing resource files together with Python modules. How do I properly include them if the resource files should be generated during the build process?
For example, the project is a web application which contains CoffeeScript sources that should be compiled into JavaScript and then included in a Python package. Is there a way to integrate this into the normal sdist/bdist process?
I spent a fair while figuring this out; the various suggestions out there are broken in various ways: they break installation of dependencies, they don't work with pip, etc. Here's my solution:
in setup.py:
import sys
from setuptools import setup, find_packages
from setuptools.command.install import install
from distutils.command.install import install as _install

class install_(install):
    # inject your own code into this func as you see fit
    def run(self):
        ret = None
        if self.old_and_unmanageable or self.single_version_externally_managed:
            ret = _install.run(self)
        else:
            caller = sys._getframe(2)
            caller_module = caller.f_globals.get('__name__', '')
            caller_name = caller.f_code.co_name
            if caller_module != 'distutils.dist' or caller_name != 'run_commands':
                _install.run(self)
            else:
                self.do_egg_install()
        # This is just an example, a post-install hook
        # It's a nice way to get at your installed module though
        import site
        site.addsitedir(self.install_lib)
        sys.path.insert(0, self.install_lib)
        from mymodule import install_hooks
        install_hooks.post_install()
        return ret
Then, in your call to the setup function, pass the arg:
cmdclass={'install': install_}
You could use the same idea for build as opposed to install, write yourself a decorator to make it easier, etc. This has been tested via pip, and direct 'python setup.py install' invocation.
The best way would be to write a custom build_coffeescript command and make it a subcommand of build. More details are given in other replies to similar/duplicate questions, for example this one:
https://stackoverflow.com/a/1321345/150999
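A rough sketch of that subcommand wiring, assuming a recent setuptools that ships setuptools.command.build (the coffee invocation and paths are hypothetical):

```python
import subprocess
from setuptools import Command
from setuptools.command.build import build as _build

class build_coffeescript(Command):
    """Hypothetical subcommand: compile CoffeeScript sources to JavaScript."""
    description = 'compile CoffeeScript sources'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # hypothetical compiler invocation; adjust paths to your layout
        subprocess.check_call(['coffee', '-c', '-o', 'mypackage/static/js', 'src/coffee'])

class build(_build):
    # run the CoffeeScript step as part of every build (sdist, bdist, wheel)
    sub_commands = [('build_coffeescript', None)] + _build.sub_commands
```

setup() then needs cmdclass={'build': build, 'build_coffeescript': build_coffeescript} so both command names resolve, and the generated .js files must also be listed as package data so they end up in the distribution.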
