I'm packaging my new Python library for PyPI. The repository contains:
Sphinx documentation sources
Supplemental JavaScript library
Examples
Is it a good idea to include such things in a Python egg?
What's the convention?
You can see the guts of the library at https://github.com/peterhudec/authomatic
You shouldn't put everything into the Python egg; in any case, it's up to setup.py bdist_egg to choose what gets included or not. But in the source package you upload to PyPI, yes, include everything that can't be generated by setup.py. You can also upload the documentation separately, so it gets published as well.
But generally, what needs to be included in the egg is whatever is necessary for the egg to run as-is. Everything else can be included too, but can also be distributed through other channels; that's up to you.
There are packages on PyPI that are entirely (or almost entirely) written in bash (virtualenvwrapper.sh is one).
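For illustration, here is a minimal sketch of how non-code files can be shipped; the package name authomatic is taken from the question, but the data paths and version are hypothetical:

    # Minimal sketch of a setup.py that installs non-Python data files
    # living inside the package directory. Paths are hypothetical.
    from setuptools import setup, find_packages

    setup(
        name="authomatic",
        version="0.1.0",
        packages=find_packages(),
        # Ship the supplemental JavaScript alongside the Python code:
        package_data={"authomatic": ["data/*.js"]},
    )

Files that live outside the package (Sphinx sources, examples) can be listed in a MANIFEST.in so they end up in the source distribution without being installed.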
If there is a supplemental JavaScript library that you can package, that wouldn't be a bad thing. It covers the case where the user doesn't have npm installed, so it makes your library easier to use and your users happier.
Documentation doesn't NEED to be included, but if you want to include it, then by all means do. Some libraries include their documentation and some don't: github3.py currently includes it, while requests does not. It's up to your preference.
I personally always have the examples in the documentation, so they're included in my packages that include the documentation. I can't think of any packages off the top of my head that ship a separate directory of examples, but if you feel it's necessary, then go ahead. I might, however, make it a sub-directory of the library itself; that will make the name-spacing better when it is installed.
But basically, there are no set conventions beyond having the code to perform the task you say the package will perform.
What I can tell from PyQt4:
it includes docs, examples, plugins, ...
I do not know about your JavaScript library, but I think it is no problem to include that as well.
This is just one example, and I do not know of a convention; I would include everything that could be important to a user of your library.
Ok, so you clone a repo, and there's an import:
import yaml
Ok, so you do pip install yaml and you get:
ERROR: No matching distribution found for yaml
Ok, so you look for a package with yaml in it, and there's like a gazillion of them... usually adding py in front does the job, but...
How on earth should I know which one was used?!
And it's not just yaml, oh no... there's:
import cv2 # python-opencv
import PIL # Pillow
and the list goes on and on...
How can I know which import uses which package? Shouldn't there be a PEP for this? Or a naming convention, e.g. import is always the same as the package name?
There's a similar topic here, if you're not frustrated enough :)
[When I clone a repo,] How can I know which import uses which package?
In short: it is the cloned code's responsibility to explain this, and it is an expected courtesy that the cloned code includes an installer that will take care of it.
If this is just some random person's bundle of .py files on GitHub with no installation instructions, look for notes in the associated documentation; failing that, make an issue on the tracker. (Or just give up. Maybe look for a better-engineered project that does the same thing.)
However, most "serious", contemporary Python projects are meant to be installed by using some form of packaging system. These have evolved over the years, and best practices have changed many times; but generally speaking, a properly "packaged" and "distributed" project will have either a setup.py or (newer; better in many ways, but not universally adopted yet) pyproject.toml file at the top level.
A pyproject.toml file is a config file in TOML format that simply describes a bunch of project metadata. This requires a build backend conforming to PEP 517. For a while, this required third-party tools, such as Poetry; but the standard setuptools can handle this since version 40.8.0. (As of this writing, the current release is 65.7.0.)
A setup.py script is executable code that pip will invoke after downloading a package from PyPI (or another package index). Generally, this script will use either setuptools or distutils (the predecessor to setuptools; it has finally been officially deprecated in 3.10, and will be removed in 3.12) to install the project, by calling a function named setup and passing it a big dict with some project metadata.
Security warning: this file is still executable code. It is arbitrary code, and it doesn't have to follow the standard conventions. Also, the package that is actually downloaded from PyPI doesn't necessarily match the project's source shown on GitHub (or another Git hosting website), if such is even available. (This problem also affects package managers in other languages and ecosystems, notably npm for JavaScript.)
With the setup.py based approach, package dependencies are specified using a keyword argument to the setup function. The specification has changed many times; currently, projects still using a setup.py should use the install_requires keyword argument.
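As a minimal sketch (the project name and structure are made up), the problematic imports from the question would be declared like this:

    # Hypothetical setup.py showing install_requires; note that the
    # PyPI names differ from the import names.
    from setuptools import setup, find_packages

    setup(
        name="myproject",
        version="0.1.0",
        packages=find_packages(),
        install_requires=[
            "PyYAML",          # provides `import yaml`
            "opencv-python",   # provides `import cv2`
            "Pillow",          # provides `import PIL`
        ],
    )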
With the pyproject.toml based approach, using setuptools' backend, dependencies will be an array (in the JSON sense; TOML arrays use similar syntax) stored under project.dependencies. This will vary for other backends; for example, Poetry expects this information under tool.poetry.dependencies.
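A minimal pyproject.toml sketch with the setuptools backend (note that the [project] table needs a newer setuptools than the 40.8.0 mentioned above; support for it was added in 61.0):

    [build-system]
    requires = ["setuptools>=61.0"]
    build-backend = "setuptools.build_meta"

    [project]
    name = "myproject"        # hypothetical
    version = "0.1.0"
    dependencies = [
        "PyYAML",             # provides `import yaml`
        "opencv-python",      # provides `import cv2`
    ]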
In any event, pip freeze will output a list of what's installed in the current environment. It's a somewhat common practice for developers to test the code in a virtual environment where the dependencies are installed, dump this output to a requirements.txt file, and include that as documentation.
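For example:

    pip freeze > requirements.txt    # record the tested environment
    pip install -r requirements.txt  # recreate it elsewhere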
[When I want to use a third-party library in my own code,] How can I know which import uses which package?
It's worth considering the question the other way around, too: given that we have installed OpenCV for Python using pip install opencv-python, and want to use it in our own code, how do we know to import cv2 specifically?
The answer: there is no convention, and certainly no requirement for the installed package name to match the PyPI name, nor the GitHub etc. repository name. Read the documentation. Everyone who intends for their code to be used as a library will be more than willing to show how, on at least a basic level.
Look for a requirements.txt. Big projects usually have one. You can install the packages listed in that file. Otherwise, just google.
Keep in mind that it might not be a pip package.
Probably what is happening is that the main script is trying to import a secondary script (yaml.py, in this case) with functions or utils for the main script to use.
Check if the repo contains a file named yaml.py. If that's the case, make sure to run the main script while yaml.py is in the same directory.
Also, check for a requirements.txt file.
You can install all the requirements listed in that file by running this line in your shell:
pip install -r <path to your requirements.txt>
Hope that this helps.
Any package on PyPI or cloned from some online repository is free to set itself up under a base directory name of its choosing. That base directory xyz determines the import xyz line. Additionally, a package name on PyPI doesn't have to match the repository name where its source code revisions are kept (assuming there is one).
This has the disadvantage that there is no one-to-one relation between package name, repo and/or import line. But the advantage is that you can, e.g., install Pillow, which is backwards compatible with PIL, and still use import PIL instead of changing all your sources to use import Pillow as PIL.
If the repo you clone has a requirements.txt, look there; you can also look in the setup.py for install_requires or extras_require. But there is no guarantee that these are available, or that they contain the names of the packages to install (e.g. I use a generic setup.py that reads its info from a data structure in the __init__.py file when creating/installing a package).
yaml seems to be a reserved name on PyPI (at least it was when I tried to upload a package with that name a few years ago). So that might be the reason the package is named PyYAML, although the Py prefix is not very informative, as the Python code will not function in another programming language. PyPI's search is not very helpful, as its relevance ordering is not reliable (at least not for yaml).
PyPI has no entry in the metadata for the import line, but you could extract it from a .whl package file, as the import line is the top-level directory that doesn't match .dist-info. This is normally not possible from a .tar.gz package file. I don't know of any site that does this kind of automatic scraping.
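For instance, here is a rough sketch of that extraction for a locally downloaded wheel (the file name is made up); a .whl is just a zip archive:

    # Guess the importable top-level names from a wheel's contents.
    import zipfile

    with zipfile.ZipFile("PyYAML-6.0-cp310-cp310-manylinux1_x86_64.whl") as whl:
        top_levels = {name.split("/")[0] for name in whl.namelist()}
    # Everything except the metadata directory is an import candidate:
    print({t for t in top_levels if not t.endswith(".dist-info")})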
You can click through the packages on PyPI, after searching the import term, and hope you find something that matches the import in the documentation, but that is no guarantee you get the right one.
You might be best off searching for import yaml here on Stack Overflow, and hoping that the question or the answer mentions the package name.
Thank you very much for your help and ideas. Big thanks to Karl Knechtel for his exhaustive answer.
tl;dr: I think using some sort of "package" / "distribution" as a standard, would make everyone's lives easier.
However, my question was half-theoretical, to point out something I'd call an incoherence in Python. You are of course right: there should be a setup script, or a requirements.txt, or at least some documentation. But if there isn't any, we're prone to errors or additional browsing.
GospelBG pointed out something important. There could be a script yaml.py in the main folder and we need to check and/or guess.
Most importantly, naming imports differently from packages is just plain misleading. There should be a naming convention or a PEP for this. Again, you can of course eventually track down the proper package, but it's not explicit and obvious, and it should be! Because in programming, we like it that way, don't we?
I'm no seasoned dev in Python and I'm learning C++, but e.g. in C++, you import a header file with a particular name, and static or dynamic libraries by their filenames. Now I know this is a very manual, step-by-step method, but at least you use the exact filenames.
At a higher level you have CMake, which would be the equivalent of setuptools: using find_package or find_library, you can import a package / library. To be honest, I'm not sure if all packages have exactly equivalent names, but at least the ones I used did match.
Thanks again for your help and answers! I'm open for discussion and comments :)
I have a big Python 3.7+ project and I am currently in the process of splitting it into multiple packages that can be installed separately. My initial thought was to have a single Git repository with multiple packages, each with its own setup.py. However, while doing some research on Google, I found people suggesting one repository per package: (e.g., Python - setuptools - working on two dependent packages (in a single repo?)). However, nobody provides a good explanation as to why they prefer such structure.
So, my questions are the following:
What are the implications of having multiple packages (each with its own setup.py) on the same GitHub repo?
Am I going to face issues with such a setup?
Are the common Python tools (documentation generators, PyPI packaging, etc.) compatible with such a setup?
Is there a good reason to prefer one setup over the other?
Please keep in mind that this is not an opinion-based question. I want to know if there are any technical issues or problems with any of the two approaches.
Also, I am aware (and please correct me if I am wrong) that pip now allows installing dependencies from GitHub repos, even if the setup.py is not at the root of the repository.
One aspect is covered here
https://pip.readthedocs.io/en/stable/reference/pip_install/#vcs-support
In particular, if setup.py is not in the root directory you have to specify the subdirectory where to find setup.py in the pip install command.
So if your repository layout is:
pkg_dir/
    setup.py        # setup.py for package pkg
    some_module.py
other_dir/
    some_file
some_other_file
You'll need to use pip install -e "vcs+protocol://repo_url/#egg=pkg&subdirectory=pkg_dir" (quoting the URL, since & is special to the shell).
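For example, with a hypothetical GitHub repository laid out as above:

    pip install -e "git+https://github.com/someuser/somerepo.git#egg=pkg&subdirectory=pkg_dir"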
"Best" approach? That's a matter of opinion, which is not the domain of SO. But here are a couple of justifications for creating separate packages:
Package is functionally independent of the other packages in your project.
That is, doesn't import from them and performs a function that could be useful to other developers. Extra points if the function this package performs is similar to packages already in PyPI.
Extra points if the package has a stable API and clear documentation. Penalty points if the package is a thin grab bag of unrelated functions that you factored out of multiple packages for ease of maintenance, but that have no unifying principle.
The package is optional with respect to your main project, so there'd be cases where users could reasonably choose to skip installing it.
Perhaps one package is a "client" and the other is the "server". Or perhaps the package provides OS-specific capabilities.
Note that a package like this is not functionally independent of the main project and so does not qualify under the previous bullet point, but this would still be a good reason to separate it.
I agree with @boriska's point that the "single package" project structure is a maintenance convenience well worth striving for. But not (and this is just my opinion, I'm going to get downvoted for expressing it) at the expense of cluttering up the public package index with a large number of small packages that are never installed separately.
I am researching the same issue myself. PyPa documentation recommends the layout described in 'native' subdirectory of: https://github.com/pypa/sample-namespace-packages
I find the single-package structure described below very useful; see the discussion around testing the 'installed' version.
https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure
I think this can be extended to multiple packages. Will post as I learn more.
The major problem I've faced when splitting two interdependent packages into two repos came from CI and testing, specifically branch protections.
Say you have package A and package B and you make some (breaking) changes in both. The automated tests for package A fail because they use the main branch of B (which is no longer compatible with the new version of A) so you can't merge B. And the same problem the other way around.
tldr:
After breaking changes, automated tests on merge will fail because they use the main branch of the other repo, making it impossible to merge.
So I have a Python package – it’s all set up on PyPI, and on GitHub, no problem. This is something I’m relatively familiar with.
What is unknown to me is: the notion of installing a CMake script as part of the python package install process. The python package in question is a development tool – you use it to preprocess some of your C/C++/Obj-C/Obj-C++ source files and generate some predefined macros in a header – and it works well when it’s wrapped in a CMake macro (for example like so) and executed as part of a proper chain of dependencies.
For one, I am not sure how to approach this, as there seem to be significant differences between the setuptools sandbox stance and distutils’ willing systems-level installer integration – and then even if I did know how to go about setting things up correctly in setup.py, I can’t find a good precedent on where a CMake script pertaining to a Python package might live.
All thoughts and insights on the matter are welcome.
It took me a while to understand your question. If I understand correctly, what you are trying to do is provide IodSymbolize.cmake in the standard installation location of cmake, so that other users/projects who rely on your software (symbolizer) can use it in their build process. I think you are thinking in a good direction, trying to provide services for the end users of your package. Good question!
Here is my understanding of how things work in the cmake world.
Say I am an end user who wants to use the "symbolizer" executable. What I would do is call:
find_package(symbolizer). This would try to figure out the location of the executable, and it would set certain variables which can be used in the build process.
You need to provide a Findsymbolizer.cmake file.
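A minimal sketch of such a file might look like this (the names are illustrative, not taken from the actual project):

    # Findsymbolizer.cmake: locate the symbolizer executable and set
    # the conventional FOUND variable for find_package().
    find_program(SYMBOLIZER_EXECUTABLE NAMES symbolizer)

    include(FindPackageHandleStandardArgs)
    find_package_handle_standard_args(symbolizer
        REQUIRED_VARS SYMBOLIZER_EXECUTABLE)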
Please take a look at : http://www.cmake.org/Wiki/CMake:How_To_Find_Libraries
Also look at the Find*.cmake files provided in the /usr/share/cmake/Modules directory if you are on a Unix/Linux platform.
Once the Findsymbolizer.cmake file is working properly, send it to the cmake mailing list for review. Once accepted it can be packaged in the next release of cmake. Then your module is usable with cmake. Hope I answered your question. Please update if you need more info.
After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils.
I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for.
I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages. I'd also like to let the package maintainer know if the package needs updating.
edit
@Andrey Popp explains that the problem is likely with the setup.py file. I guess the only way I can really get an answer to my question is by actually R-ing TFM. However, since I likely won't have time to do that thoroughly for a while yet, I've posted the setup.py file here.
A quick browse through the setuptools manual reveals that the function find_packages(), which this module's setup.py makes use of, searches for packages by looking for files named __init__.py. The source code file in question is named bencode.py, so perhaps this is the problem: should it be named __init__.py?
edit 2
Having now learned Python packaging, I gather that the problem is that this module is using setuptools.find_packages, and has its source at the root of its directory structure, but hasn't passed anything in package_dir. It would seem to be fairly trivial to fix. However, the author is not reachable by his PyPI contact info. The module's PyPI page lists a "Package Index Owner" as well. I'm not sure what that's supposed to mean, but I did manage to get in touch with that person, who I think is maybe not in a position to maintain the module. In any case, it's still in the same state as when I posted this question back in June.
Given that the module seems to be more or less abandoned, and that there's a suitable replacement for it in hunnyb, I've accepted @andreypopp's answer as about as good a one as I'm going to get.
It seems this package's setup.py is broken: it does not define the right package for distribution. I think you need to check the setup.py in the source release, and if this is true, report a bug to the author of this package.
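For what it's worth, a guess at the trivial fix mentioned above: since the source is a single bencode.py at the root of the distribution, declaring it via py_modules (instead of relying on package discovery) should work:

    # Sketch of a corrected setup.py for a single-module distribution.
    from setuptools import setup

    setup(
        name="BitTorrent-bencode",
        version="5.0.8",
        py_modules=["bencode"],  # installs bencode.py as a top-level module
    )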
For my project I will be using the argparse library. My question is: how do I distribute it with my project? I am asking this because of the technicalities and legalities involved.
Do I just:
1. Put the argparse.py file along with my project, i.e. in the tar file for my project?
2. Create a package for it for my distro?
3. Tell the user to install it himself?
What's your target Python version? It appears that argparse is included from version 2.7.
If you're building a small library with minimal dependencies, I would consider removing the dependency on an external module and only use facilities offered by the standard Python library. You can access command line parameters with sys.argv and parse them yourself, it's usually not that hard to do. Your users will definitely appreciate not having to install yet another third party module just to use your code.
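For instance, a tiny hand-rolled parser might be all you need (the flag names here are made up):

    # Minimal sys.argv handling without any third-party module.
    import sys

    def main(argv=None):
        argv = sys.argv[1:] if argv is None else argv
        verbose = "--verbose" in argv                       # boolean flag
        paths = [a for a in argv if not a.startswith("-")]  # positional args
        print(verbose, paths)

    if __name__ == "__main__":
        main()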
It would be best for the user to install it so that only one copy is present on the system and so that it can be updated if there are any issues, but including it with your project is a viable option if you abide by all requirements specified in the license.
Try to import it from the public location, and if that fails then resort to using the included module.
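A sketch of that fallback pattern (the bundled-copy location is hypothetical):

    try:
        import argparse                      # stdlib copy (2.7+) or system install
    except ImportError:
        from mylib._bundled import argparse  # hypothetical vendored copy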
You could go with Ignacio's suggestion.
But... For what it is worth, there's another library for argument parsing built into Python, which is quite powerful. Have you tried optparse? It belongs to the base Python distribution and has been there for a while...
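For example, a minimal sketch using the optparse API:

    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option("-v", "--verbose", action="store_true", dest="verbose")
    (options, args) = parser.parse_args()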
Good luck!