For a package I've authored, I've done python setup.py sdist bdist_wheel, which generates some package artifacts in the dist/ directory. Now I'd like to run the package's unit tests in those artifacts. What's a good way to do it?
To be clear: an alternative would be to run the tests directly from the local source files, but I want to avoid that to make sure I'm testing the exact pre-built artifact users would be installing (as suggested here).
I'm using Python 3, and I'm on a Linux or Mac OS environment. My context is a build server that builds, tests, and then publishes artifacts (to a private PyPI-like repo) as commits are made to a Git repository.
If there's some other approach I should be using instead, I'm all ears.
What you can do is:
Create a virtual environment
Install your package
Run the tests against your installed library using tools like pytest; you can read more about pytest good practices here: http://pytest.org/dev/goodpractises.html
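A minimal shell sketch of those three steps, assuming the wheel built earlier sits in dist/ and the suite lives in a tests/ directory (the package and file names are illustrative). Note that if your source tree shadows the installed package, pytest may import the local copy instead; the related question linked below covers forcing the installed version:
python3 -m venv /tmp/test-env                        # 1. create a throwaway virtual environment
. /tmp/test-env/bin/activate
pip install dist/mypackage-1.0.0-py3-none-any.whl    # 2. install the built artifact, not the source tree
pip install pytest                                   #    plus the test runner only
pytest tests/                                        # 3. run the suite against the installed copy
deactivate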
As pointed out in the pytest docs, take a look at tox as well for your CI server: http://pytest.org/dev/goodpractises.html#use-tox-and-continuous-integration-servers
This is a related question regarding how to test using the installed package: Force py.test to use installed version of module
Related
I'm writing a Python package to be distributed and installed via PyPI.org. There are plenty of examples out there, but I can't wrap my head around the proper usage of the install_requires, setup_requires, and tests_require arguments in the call to setup().
I know install_requires is the minimum set of dependencies for the library itself. This one is easy.
What is the difference (if there need be any) between setup_requires and tests_require?
What needs to go into each one if I want unit tests to run in a CI environment? And should unit tests run when the library gets installed?
When I set up a local virtualenv for developing and testing the library, which set of requires do I want installed?
setup_requires: Don't use it. It was a failed experiment of setuptools and has been obsoleted by PEP 517 (see the deprecation note here), under which the build system specifies build requirements declaratively in the [build-system] section of pyproject.toml, for example:
[build-system] # in pyproject.toml
requires = ["setuptools >= 40.6.0", "wheel"]
build-backend = "setuptools.build_meta"
tests_require: Don't use it. It was a failed experiment of setuptools. It has been obsoleted by projects such as pytest and tox (see the deprecation note here). Nobody runs their tests by calling python setup.py test anymore, and nobody wants their test dependencies downloaded into the project directory - they want them installed into the virtualenv instead:
[options.extras_require] # in setup.cfg
test =
    pytest
    pytest-cov
So, to address the three points directly:
Both of those are cruft; omit them.
Specify your test requirements elsewhere (either in a setuptools "extras_require" or in a plain old requirements_test.txt file). Yes, tests should run against the installed code.
When you set up a local virtualenv for developing and testing the library, both the local package and the test requirements should be installed, e.g. with pip install -e ".[test]".
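For example, a minimal local workflow under that assumption (the extra is named test, matching the setup.cfg snippet above):
python3 -m venv .venv
. .venv/bin/activate
pip install -e ".[test]"   # editable install of the package plus its test extras
pytest                     # test dependencies now live in the virtualenv's site-packages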
When I use the command python setup.py test, all of the documentation I've seen says setuptools will handle installing the testing dependencies. Where does it install them and are they deleted from the machine after the test suite runs? I've noticed none of the testing modules are actually installed into my virtual environment after this command completes.
I understand it takes all of the modules in the tests_require list and installs them somewhere but I'm not sure where, what it does with them afterward and why it does this. Also, is there any way to pass arguments to the command without using flags, like with a config file or something?
Avoid python setup.py test and tests_require; they are crufty and are now deprecated.
That old feature just downloads the test deps into the project's setup directory, which is seldom what the developer wanted or expected to happen! That doesn't work well in modern CI workflows with virtual environments, where you want your dependencies installed into site-packages.
The recommended way to do it with setuptools these days is with extras_require. See here for an example.
It installs them as eggs into an automatically created subdirectory of the code base named .eggs. That's because eggs are designed to be importable from any location.
This will thus most likely not work in a modern environment, because packages are no longer distributed as eggs (they lost the competition to wheels), so setuptools has to build them from source (with bdist_egg). That is likely to fail for many widely used binary packages with nontrivial build requirements (not to mention the time needed, and the fact that packages are not tested as eggs either and may fail when packaged like this).
Instead, listing build requirements in requirements.txt and invoking pip install -r requirements.txt before the build seems to have become widespread practice. This does not make the package automatically buildable from source by pip, though.
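As a sketch, the build-server side of that practice might look like this (the file and directory names follow common convention rather than any tool requirement):
python3 -m venv .venv && . .venv/bin/activate
pip install -r requirements.txt      # install build/test requirements up front...
python setup.py sdist bdist_wheel    # ...so the build can assume they are present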
I tried to install them myself from setup.py but this proved to be fragile (e.g. if the user doesn't have write access to site-packages).
The best solution, adopted by at least a number of high-profile projects, seems to be to just make setup.py fail if they are not present. This is especially useful if the requirements are not Python packages but C libraries, since setup.py doesn't know how to install those in the specific environment anyway. As you can see, this complements requirements.txt naturally.
I have a python app with its setup.py that's working just fine to install it through setuptools. I am then packaging it up in DEB and PKGNG using the excellent Effing package management. I've also made some quick tests with setuptools-pkg and that seems to work too.
Now I have a need to distribute the packages including init scripts to start/stop/manage the service. I have my init scripts in the source repo and, according to what seems to be best practice, I'm not doing anything with them in setuptools and I'm handling them in the os-specific packaging: for debian-based systems I use the --deb-init, --deb-upstart and --deb-systemd FPM options as needed.
How can I build a FreeBSD package that includes the correct rc.d script, using FPM or through any other means?
All the examples I've seen add the rc.d script when building a package through the ports collection, but this is an internal app and is not going to be published to the Ports Collection or on PyPI. I want to be able to check out the repository on a FreeBSD system, launch a command that gives me a package, distribute it to other FreeBSD systems, install it using pkg and have my init script correctly deployed to /usr/local/etc/rc.d/<myappname>. There's no need to keep using FPM for that; anything works as long as it gives me a well-formed package.
I would highly suggest creating your package as if it were any other port, whether it is going to be published or not.
One of the advantages of doing this is that you can also include all your tests and automate the deployment, giving you out of the box the basis for a continuous integration/delivery setup.
Check out poudriere. You could indeed maintain a set of custom ports with your very own settings and distribute them across your environments without any hassle:
pkg install -r your-poudriere yourpkg
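For that pkg install -r invocation to work, the client hosts need a repository definition pointing at wherever poudriere publishes the packages; a hedged sketch, with the repository name and URL purely illustrative:
cat > /usr/local/etc/pkg/repos/your-poudriere.conf <<'EOF'
your-poudriere: {
  url: "http://pkg.example.internal/packages/latest",
  enabled: yes
}
EOF
pkg update -r your-poudriere   # refresh the catalogue from the new repository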
If this is too much, or doesn't adapt well to your use case, you can always fall back to Ansible, where you could create a custom rc.d script within a template of an Ansible role.
If you just want to build and deploy something, let's say a microservice, then pkg is probably not the best tool; maybe you just need a supervisor that works on all your platforms (sysutils/immortal), so that you could just distribute your code and have a single recipe for starting/stopping the service.
nbari's answer is probably the Right Way™ to do this and I'd probably create my own "port" and use that to build the package on a central host.
At the time of my original question I had taken a different approach that I'm reporting here for the sake of completeness.
I am still building the application's package (i.e. myapp-1.0.0.txz) with fpm -s python -t freebsd, which basically uses Python's setuptools infrastructure to get the necessary information, and I don't include any rc.d file in it.
I also build a second package, which I will call myapp-init-1.0.0.txz, with the directory source type (i.e. fpm -s dir -t freebsd), and I include only the init script in that package.
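Roughly, the two builds look like this (names, versions and paths are illustrative, not taken from the actual project):
fpm -s python -t freebsd ./setup.py                 # application package, metadata read from setup.py
fpm -s dir -t freebsd -n myapp-init -v 1.0.0 \
    rc.d/myapp=/usr/local/etc/rc.d/myapp            # companion package carrying only the rc.d script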
Both packages get distributed to hosts and installed, thus solving my distribution issue.
I am currently about to deploy my Python application. I used to think that the only good way to install a pure Python app is to just copy the source code files along with the requirements file, and install the packages listed in the requirements file (the Python onbuild Docker image also assumes this approach).
But I can see that folks often install their apps using setuptools, like ./setup.py install (it seems that the Warehouse project does this, for example).
Which of the two is considered a better practice?
What are the benefits from installing your app as a package?
I'm newish to the python ecosystem, and have a question about module editing.
I use a bunch of third-party modules, distributed on PyPi. Coming from a C and Java background, I love the ease of easy_install <whatever>. This is a new, wonderful world, but the model breaks down when I want to edit the newly installed module for two reasons:
The egg files may be stored in a folder or archive somewhere crazy on the file system.
Using an egg seems to preclude using the version control system of the originating project, just as using a debian package precludes development from an originating VCS repository.
What is the best practice for installing modules from an arbitrary VCS repository? I want to be able to continue to import foomodule in other scripts. And if I modify the module's source code, will I need to perform any additional commands?
Pip lets you install packages given a URL to a Subversion, Git, Mercurial or Bazaar repository.
pip install -e svn+http://path_to_some_svn/repo#egg=package_name
For example, if I wanted to download the latest version of cmdutils (a random package I decided to pull):
pip install -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
I installed this into a virtualenv (using the -E parameter), and pip installed cmdutils into a src folder at the top level of my virtualenv folder.
pip install -E thisIsATest -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
$ ls thisIsATest/src
cmdutils
Do you want to do development but have the developed version handled as an egg by the system (for instance to get entry points)? If so, you should check out the source and use development mode by doing:
python setup.py develop
If the project happens to not be a setuptools-based project, which is required for the above, a quick workaround is this command:
python -c "import setuptools; execfile('setup.py')" develop
(Note that execfile only exists in Python 2; on Python 3 the equivalent is python -c "import setuptools; exec(open('setup.py').read())" develop.)
Almost everything you ever wanted to know about setuptools (the basis of easy_install) is available from the setuptools docs. Also there are docs for easy_install.
Development mode adds the project to your import path in the same way that easy_install does. Any changes you make will be available to your apps the next time they import the module.
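A typical development-mode round trip, assuming the project ships a setuptools setup.py (the URL and names are made up for illustration):
git clone https://example.com/foomodule.git   # grab the project from its own VCS
cd foomodule
python setup.py develop                       # links the checkout onto the import path
python -c "import foomodule"                  # edits in the checkout are picked up on the next import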
As others mentioned, you can also use version control URLs directly if you just want to get the latest version as it is now, without the ability to edit it; but that only takes a snapshot, and indeed creates a normal egg as part of the process. I know for sure it works with Subversion, and I thought it did others, but I can't find the docs on that.
You can use the PYTHONPATH environment variable or symlink your code to somewhere in site-packages.
Packages installed by easy_install tend to come from snapshots of the developer's version control, generally made when the developer releases an official version. You're therefore going to have to choose between convenient automatic downloads via easy_install and up-to-the-minute code updates via version control. If you pick the latter, you can build and install most packages seen in the python package index directly from a version control checkout by running python setup.py install.
If you don't like the default installation directory, you can install to a custom location instead, and export a PYTHONPATH environment variable whose value is the path of the installed package's parent folder.
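For instance (the prefix and the pythonX.Y segment in the path are placeholders for whatever your system uses):
python setup.py install --prefix="$HOME/pylibs"                            # install the checkout to a custom prefix
export PYTHONPATH="$HOME/pylibs/lib/python3.10/site-packages:$PYTHONPATH"  # parent folder of the installed package
python -c "import foomodule"
# or, symlink a working copy straight into an existing site-packages directory
ln -s "$PWD/foomodule" "$(python -c 'import site; print(site.getsitepackages()[0])')/foomodule"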