How to run Tox with Travis-CI - python

How do you test different Python versions with Tox from within Travis-CI?
I have a tox.ini:
[tox]
envlist = py{27,33,34,35}
recreate = True

[testenv]
basepython =
    py27: python2.7
    py33: python3.3
    py34: python3.4
    py35: python3.5
deps =
    -r{toxinidir}/pip-requirements.txt
    -r{toxinidir}/pip-requirements-test.txt
commands = py.test
which runs my Python unittests in several Python versions and works perfectly.
I want to setup a build in Travis-CI to automatically run this when I push changes to Github, so I have a .travis.yml:
language: python
python:
  - "2.7"
  - "3.3"
  - "3.4"
  - "3.5"
install:
  - pip install tox
script:
  - tox
This technically seems to work, but it redundantly runs all my tests in each version of Python...from each version of Python. So a build that takes 5 minutes now takes 45 minutes.
I tried removing the python list from my yaml file, so Travis will only run a single Python instance, but that causes my Python3.5 tests to fail because the 3.5 interpreter can't be found. Apparently, that's a known limitation as Travis-CI won't install Python3.5 unless you specify that exact version in your config...but it doesn't do that for the other versions.
Is there a way I can workaround this?

For this I would consider using tox-travis. This is a plugin which allows use of Travis CI’s multiple python versions and Tox’s full configurability.
To do this you will configure the .travis.yml file to test with Python:
sudo: false
language: python
python:
  - "2.7"
  - "3.4"
install: pip install tox-travis
script: tox
This will run the appropriate testenvs, which by default are any declared envs with py27 or py34 as factors of the name. If no environments match the given factor, py27 or py34 will be used as a fallback.
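As a sketch of what factor matching means (the env names here are illustrative, not from the question): with a tox.ini like

```ini
[tox]
envlist = py{27,34}-django{19,110}
```

the Travis job running Python 2.7 would execute py27-django19 and py27-django110, because py27 is a factor of both names, while the Python 3.4 job would get the py34-* envs.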

For more control and flexibility you can manually define your matrix so that the Python version and tox environment match up:
language: python
matrix:
  include:
    - python: 2.7
      env: TOXENV=py27
    - python: 3.3
      env: TOXENV=py33
    - python: 3.4
      env: TOXENV=py34
    - python: 3.5
      env: TOXENV=py35
    - python: pypy
      env: TOXENV=pypy
    - env: TOXENV=flake8
install:
  - pip install tox
script:
  - tox
In case it's not obvious, each entry in the matrix starts on a line which begins with a hyphen (-). Any indented lines that follow belong to that same entry.
For example, all entries except the last are two lines. The last entry is only one line and does not contain a python setting; therefore, it simply uses the default Python version (Python 2.7, according to the Travis documentation). Of course, a specific Python version is not as important for that test. If you wanted to run such a test against both Python 2 and 3 (once each), it is recommended to use the versions Travis installs by default (2.7 and 3.4) so that the tests complete more quickly, as they don't need to install a non-standard Python version first. For example:
    - python: 2.7
      env: TOXENV=flake8
    - python: 3.4
      env: TOXENV=flake8
The same works with pypy (the second-to-last entry in the matrix) and pypy3 (not shown), in addition to Python versions 2.5-3.6.
While the various other answers provide shortcuts which give you this result in the end, sometimes it's helpful to define the matrix manually. Then you can define specific things for individual environments within the matrix. For example, you can define dependencies for only a single environment and avoid wasting time installing that dependency in every environment.
    - python: 3.5
      env: TOXENV=py35
    - env: TOXENV=checkspelling
      before_install: install_spellchecker.sh
    - env: TOXENV=flake8
In the above matrix, the install_spellchecker.sh script is only run for the relevant environment, but not the others. The before_install setting was used (rather than install), as using the install setting would have overridden the global install setting. However, if that's what you want (to override/replace a global setting), simply redefine it in the matrix entry. No doubt, various other settings could be defined for individual environments within the matrix as well.
Manually defining the matrix can provide a lot of flexibility. However, if you don't need the added flexibility, one of the various shortcuts in the other answers will keep your config file simpler and easier to read and edit later on.

Travis provides the python version for each test as TRAVIS_PYTHON_VERSION, but in the form '3.4', while tox expects 'py34'.
If you don't want to rely on an external lib (tox-travis) to do the translation, you can do that manually:
language: python
python:
  - "2.7"
  - "3.3"
  - "3.4"
  - "3.5"
install:
  - pip install tox
script:
  - tox -e $(echo py$TRAVIS_PYTHON_VERSION | tr -d .)
Search this pattern in a search engine and you'll find many projects using it.
This works for pypy as well:
tox -e $(echo py$TRAVIS_PYTHON_VERSION | tr -d . | sed -e 's/pypypy/pypy/')
Source: flask-mongoengine's .travis.yml.
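The substitution can be checked locally without Travis by assigning the version strings by hand ("3.5" and "pypy" here are just sample values Travis would normally provide):

```shell
# Mimic the TRAVIS_PYTHON_VERSION -> tox env name translation.
TRAVIS_PYTHON_VERSION="3.5"
echo "py$TRAVIS_PYTHON_VERSION" | tr -d .                             # prints: py35

TRAVIS_PYTHON_VERSION="pypy"
echo "py$TRAVIS_PYTHON_VERSION" | tr -d . | sed -e 's/pypypy/pypy/'   # prints: pypy
```

The sed step is needed because prefixing "pypy" with "py" yields "pypypy".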

The TOXENV environment variable can be used to select a subset of tests for each version of Python via a specified matrix:
language: python
python:
  - "2.7"
  - "3.4"
  - "3.5"
env:
  matrix:
    - TOXENV=py27-django-19
    - TOXENV=py27-django-110
    - TOXENV=py27-django-111
    - TOXENV=py34-django-19
    - TOXENV=py34-django-110
    - TOXENV=py34-django-111
    - TOXENV=py35-django-19
    - TOXENV=py35-django-110
    - TOXENV=py35-django-111
install:
  - pip install tox
script:
  - tox -e $TOXENV
In the tox config, specify that missing Python interpreters should be skipped:
[tox]
skip_missing_interpreters = true

Discover inside Makefile the python interpreter to call

I am delivering a Makefile to some people to run some python steps, example:
build:
	pip install -r requirements.txt

package:
	python3 -m build
The assumption here is that some of the people running the Makefile will have python 3 responding to python, while others will have it responding to python3 (some will even have an alias making both respond). The same goes for the pip/pip3 command.
How can I discover which is the name of the interpreters to invoke programmatically through the same Makefile?
I don't want to tell them to change the Makefile manually to fit their system, I want it simply to work.
You can figure out which version of python (or pip) it is by parsing the output of a shell command asking for the version string:
# Variable for the python command (later overwritten if not working)
PYTHON_COMMAND := python
# Get version string (e.g. Python 3.4.5)
PYTHON_VERSION_STRING := $(shell $(PYTHON_COMMAND) -V)
# If PYTHON_VERSION_STRING is empty there probably isn't any 'python'
# on PATH, so try 'python3' (or python2) instead.
ifeq ($(PYTHON_VERSION_STRING),)
PYTHON_COMMAND := python3
PYTHON_VERSION_STRING := $(shell $(PYTHON_COMMAND) -V)
ifeq ($(PYTHON_VERSION_STRING),)
$(error No Python 3 interpreter found on PATH)
endif
endif
# Split components (changing "." into " ")
PYTHON_VERSION_TOKENS := $(subst ., ,$(PYTHON_VERSION_STRING)) # Python 3 4 5
PYTHON_MAJOR_VERSION := $(word 2,$(PYTHON_VERSION_TOKENS)) # 3
PYTHON_MINOR_VERSION := $(word 3,$(PYTHON_VERSION_TOKENS)) # 4
# Which python version pip targets is harder to figure out from pip's own
# version string, which puts the python version at the end of a path.
# Better to call pip through the python command instead.
PIP_COMMAND := $(PYTHON_COMMAND) -m pip
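As a sanity check, the same token splitting can be reproduced in plain shell with a hard-coded version string ("Python 3.4.5" is just an example value standing in for the output of python -V):

```shell
# Reproduce the Makefile's version parsing outside make:
# take the second word, then split it on dots.
version_string="Python 3.4.5"
major=$(echo "$version_string" | cut -d' ' -f2 | cut -d. -f1)
minor=$(echo "$version_string" | cut -d' ' -f2 | cut -d. -f2)
echo "major=$major minor=$minor"    # prints: major=3 minor=4
```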

Running multiple tox testenv matching a name fragment

Rationale
With a complex dependency matrix, tox testenv names end up being a list like
py37-pytest5-framework1
py37-pytest5-framework2
py37-pytest6-framework1
py37-pytest6-framework2
py38-pytest5-framework1
py38-pytest5-framework2
py38-pytest6-framework1
py38-pytest6-framework2
...
py310-pytest6-framework2
While the inner tox.ini syntax allows configuring a lot of things with name fragments, e.g.
[testenv]
basepython =
    py37: python3.7
    py38: python3.8
    py39: python3.9
    py310: python3.10
deps =
    pytest5: pytest ~= 5.0
    pytest6: pytest ~= 6.0
    framework1: framework ~= 1.0
    framework2: framework ~= 2.0
setenv =
    framework2: FOO=bar
I find there is no way to tell the tox CLI to run all testenvs matching a name fragment, like tox -e py39 or tox -e framework2.
Issues
The main drawback is that CI testing jobs will usually end up being segregated by python version, so you end up writing instructions like
tox -e $PY-pytest5-framework1,$PY-pytest5-framework2,$PY-pytest6-framework1,$PY-pytest6-framework2
but then the CI job definitions are coupled to the tox test matrix, because they must be aware of:
testenvs being added or removed
matrix exclusions, like pytest-5 not being compatible with python-3.10
And this is cumbersome to maintain.
Incomplete workaround
An easy workaround is simply running tox --skip-missing-interpreters, but the drawbacks are:
CI jobs can't be segregated by framework version instead of python version, for example to reuse some special framework cache
CI VMs could feature system python installations beyond the one targeted by each job, so you could end up with e.g. python-3.8 being run in all CI jobs.
Question
Am I missing some out-of-the-box mechanism to filter the testenvs to be run with a fragment that powers me to write CI jobs agnostic to the tox dependency matrix? I mean something like tox -e '*-framework2'.
Am I bound to filter and aggregate the output of tox --listenvs with shell tricks?
You could negate a regex pattern for TOX_SKIP_ENV as follows:
$ env TOX_SKIP_ENV='.*[^-framework2]$' tox
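Note that [^-framework2] is a character class, not a literal suffix test: it matches any final character outside the set -, f, r, a, m, e, w, o, k, 2. That happens to keep only the -framework2 envs for names like the question's, as grep -E (same pattern semantics) can illustrate:

```shell
# Env names matching the pattern are the ones TOX_SKIP_ENV would skip.
# The names below are the question's examples.
printf '%s\n' \
  py38-pytest5-framework1 \
  py38-pytest5-framework2 \
  py310-pytest6-framework2 \
  | grep -E '.*[^-framework2]$'    # prints only: py38-pytest5-framework1
```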
tox 4, which will be introduced within the next couple of months, introduces labels. While this may not be an immediate help for your problem, maybe you'll see a way to simplify your tox.ini.
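Until then, the tox --listenvs aggregation mentioned in the question is a workable shell trick. A sketch (printf stands in for tox --listenvs, and the env names are the question's examples):

```shell
# Build a comma-separated -e argument from all envs matching a fragment.
# In a real pipeline, replace the printf with: tox --listenvs
selected=$(printf '%s\n' \
  py38-pytest5-framework1 \
  py38-pytest5-framework2 \
  py310-pytest6-framework2 \
  | grep -- '-framework2$' | paste -sd, -)
echo "$selected"    # prints: py38-pytest5-framework2,py310-pytest6-framework2
# then run: tox -e "$selected"
```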

shebang changed to /usr/libexec/platform-python when building python rpm packages

I am trying to build a RPM from a python application on RHEL8.2 machine.
The shebangs on the scripts are set correctly to #!/usr/bin/python3.
However, for some reason the shebang gets changed to #!/usr/libexec/platform-python -s when the RPM is built.
I have tried almost everything.
I have undefined the mangling according to doc: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/advanced-topics
%undefine __brp_mangle_shebangs
but the shebangs still get changed.
These are the relevant parts of the spec file:
%undefine __brp_mangle_shebangs
Name: myapp
Version: 2.0.0
Release: 1%{?dist}
summary: rpm for my APP
BuildArch: noarch
### Build Dependencies ###
BuildRequires: python3-setuptools
BuildRequires: python3-devel
%?python_enable_dependency_generator
%build
%py3_build
%install
%py3_install
%files
....
I can include python*-rpm-macros in the spec, and that would set the shebang to something like /usr/bin/python3.6, but it is too restrictive. Our code works on anything > python3.6, so if we deploy the rpm on a system with python3.8 it should work.
How can I set /usr/bin/python3, or leave the shebang unchanged on the python scripts, when the rpm is packaged?
%undefine __brp_mangle_shebangs works for me.
$ rpmbuild --version
RPM version 4.14.3
Perhaps you need to put it later in your preamble? eg:
Name: myapp
Version: 2.0.0
Release: 1%{?dist}
summary: rpm for my APP
BuildArch: noarch
### Build Dependencies ###
BuildRequires: python3-setuptools
BuildRequires: python3-devel
## Fixes
# disable shebang mangling of python scripts
%undefine __brp_mangle_shebangs
...
Note also that there seem to be some additional variables that give finer-grained control; however, I have not tried these:
Excluding based on shebang:
%global __mangle_shebangs_exclude ruby
Excluding based on path:
%global __mangle_shebangs_exclude_from /test/
Reference: https://src.fedoraproject.org/rpms/redhat-rpm-config/pull-request/19
I would also note that the document cited above is incorrect: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/advanced-topics section 4.6.2
To prevent the BRP script from checking and modifying interpreter
directives, use the following RPM directive:
%undefine %brp_mangle_shebangs
as it is %undefine __brp_mangle_shebangs

.gitlab-ci.yml cache/artifact configuration for saving large binaries for subsequent pipeline/stages based on condition

Background
I am setting up CI/CD for a python project that has a c++ library dependency in the folder foo/bar/c_code_src. The build stage runs python setup.py install, which compiles the c++ library binaries and outputs them to foo/bar/bin/.
I then run a python unittest that fails if foo/bar/bin doesn't exist.
The script below is my first attempt at a .gitlab-ci.yml:
stages:
  - build
  - test

build:
  stage: build
  script:
    - python setup.py install
  artifacts:
    paths:
      - foo/bar/c_code_src

test:
  stage: test
  dependencies:
    - build
  script:
    - python -m unittest foo/bar/test_bar.py
This works fine. However, because it takes a relatively long time to compile c_code_src, the resulting bin is fairly large, and the code in c_code_src doesn't change much, I want to cache the bin folder for future pipelines and only run the build stage if the code in c_code_src changes. After reading the documentation, it seems that I want to use cache instead of (or alongside) artifacts.
Here is my attempt at revising the .gitlab-ci.yml:
stages:
  - build
  - test

build:
  stage: build
  script:
    - python setup.py install
  cache:
    - key: bar_cache
      paths:
        - foo/bar/bin

test_bar:
  stage: test
  dependencies:
    - build
  cache:
    key: bar_cache
    paths:
      - foo/bar/bin
    policy: pull
  script:
    - python -m unittest foo/bar/test_bar.py
What I am unsure of is how to set the condition to only run build if c_code_src changes.
In short, I want to:
only run build if bin does not exist or there are changes to c_code_src
cache bin such that the test stage always has the up-to-date bin, even if the build stage did not run
I'm not sure you'd be able to have a condition on whether bin exists, because typically the job condition only looks at what changed in a commit, without scanning the whole repository.
You can however fairly easily make the job check for changes. You can either use only:changes or rules:changes. So for the relevant job, add:
  rules:
    - changes:
        - foo/bar/c_code_src/*
As to caching, what you've written looks fine. Cache isn't always guaranteed to work but will pull a new version if necessary, so having the most recent shouldn't be a problem.
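Putting the pieces together, the build job might look like this (a sketch only; the cache key and paths are the question's own, and the changes glob may need widening to cover subdirectories):

```yaml
build:
  stage: build
  rules:
    - changes:
        - foo/bar/c_code_src/*
  script:
    - python setup.py install
  cache:
    key: bar_cache
    paths:
      - foo/bar/bin
```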

In NixOS, how can I install an environment with the Python packages SpaCy, pandas, and jenks-natural-breaks?

I'm very new to NixOS, so please forgive my ignorance. I'm just trying to set up a Python environment---any kind of environment---for developing with SpaCy, the SpaCy data, pandas, and jenks-natural-breaks. Here's what I've tried so far:
pypi2nix -V "3.6" -E gcc -E libffi -e spacy -e pandas -e numpy --default-overrides, followed by nix-build -r requirements.nix -A packages. I've managed to get the first command to work, but the second fails with Could not find a version that satisfies the requirement python-dateutil>=2.5.0 (from pandas==0.23.4)
Writing a default.nix that looks like this:
with import <nixpkgs> {};
python36.withPackages (ps: with ps; [ spacy pandas scikitlearn ])
This fails with a collision between /nix/store/9szpqlby9kvgif3mfm7fsw4y119an2kb-python3.6-msgpack-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so and /nix/store/d08bgskfbrp6dh70h3agv16s212zdn6w-python3.6-msgpack-python-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so
Making a new virtualenv, and then running pip install on all these packages. Scikit-learn fails to install, with fish: Unknown command 'ar rc build/temp.linux-x86_64-3.6/liblibsvm-skl.a build/temp.linux-x86_64-3.6/sklearn/svm/src/libsvm/libsvm_template.o'
I guess ideally I'd like to install this environment with nix, so that I could enter it with nix-shell, and so other environments could reuse the same python packages. How would I go about doing that? Especially since some of these packages exist in nixpkgs, and others are only on Pypi.
Caveat
I had trouble with jenks-natural-breaks to the tune of
nix-shell ❯ poetry run python -c 'import jenks_natural_breaks'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/matt/2022/12/28-2/.venv/lib/python3.10/site-packages/jenks_natural_breaks/__init__.py", line 5, in <module>
from ._jenks_matrices import ffi as _ffi
ModuleNotFoundError: No module named 'jenks_natural_breaks._jenks_matrices'
So I'm going to use jenkspy which appears to be a bit livelier. If that doesn't scratch your itch, I'd contact the maintainer of jenks-natural-breaks for guidance
Flakes
you said:
so other environments could reuse the same python packages
Which makes me think that a flake.nix is what you need. What's cool about flakes is that you can define an environment that has spacy, pandas, and jenkspy with one flake. And then you (or somebody else) might say:
I want an env like Jonathan's, except I also want sympy
and rather than copying your env and making tweaks, they can declare your env as a build input and write a flake.nix with their modifications, which can be further modified by others.
One could imagine a sort of family-tree of environments, so you just need to pick the one that suits your task. The python community has not yet converged on this vision.
Poetry
Poetry will treat you like you're trying to publish a library when all you asked for is an environment, but a library's dependencies are pretty much an environment so there's nothing wrong with having an empty package and just using poetry as an environment factory.
Bonus: if you decide to publish a library after all, you're ready.
The Setup
nix flakes thinks in terms of git repo's, so we'll start with one:
$ git init
Then create a file called flake.nix. Usually I end up with poetry handling 90% of the python stuff, but both pandas and spacy are in that 10% that has dependencies which link to system libraries. So we ask nix to install them so that when poetry tries to install them in the nix develop shell, it has what it needs.
{
  description = "Jonathan's awesome env";
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs";
  };
  outputs = { self, nixpkgs, flake-utils }: (flake-utils.lib.eachSystem [
    "x86_64-linux"
    "x86_64-darwin"
    "aarch64-linux"
    "aarch64-darwin"
  ] (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in
    rec {
      packages.jonathansenv = pkgs.poetry2nix.mkPoetryApplication {
        projectDir = ./.;
      };
      defaultPackage = packages.jonathansenv;
      devShell = pkgs.mkShell {
        buildInputs = [
          pkgs.poetry
          pkgs.python310Packages.pandas
          pkgs.python310Packages.spacy
        ];
      };
    }));
}
Now we let git know about the flake and enter the environment:
❯ git add flake.nix
❯ nix develop
$
Then we initialize the poetry project. I've found that poetry, installed by nix, is kind of odd about which python it uses by default, so we'll set it explicitly
$ poetry init # follow prompts
$ poetry env use $(which python)
$ poetry run python --version
Python 3.10.9 # declared in the flake.nix
At this point, we should have a pyproject.toml:
[tool.poetry]
name = "jonathansenv"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"
jenkspy = "^0.3.2"
spacy = "^3.4.4"
pandas = "^1.5.2"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Usage
Now we create the venv that poetry will use, and run a command that depends on these.
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
You can also have poetry put you in a shell:
$ poetry shell
(venv)$ python -c 'import jenkspy, spacy, pandas'
It's kind of awkward to do so, though, because we're two subshells deep and any shell customizations that we have in the grandparent shell are not available. So I recommend using direnv to enter the dev shell whenever I navigate to that directory, and then just using poetry run ... to run commands in the environment.
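A minimal .envrc for that, assuming nix-direnv (or a direnv version with flake support) is installed:

```shell
# .envrc -- direnv loads the flake's dev shell whenever you cd here
use flake
```

After creating it, run direnv allow once to authorize it for that directory.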
Publishing the env
In addition to running nix develop with the flake.nix in your current dir, you can also do nix develop /local/path/to/repo or nix develop github:/githubuser/githubproject to achieve the same result.
To demonstrate the github example, I have pushed the files referenced above here. So you ought to be able to run this from any linux shell with nix installed:
❯ nix develop github:/MatrixManAtYrService/nix-flake-pandas-spacy
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
I say "ought" because if I run that command on a mac it complains about linux-headers-5.19.16 being unsupported on x86_64-darwin.
Presumably there's a way to write the flake (or fix a package) so that it doesn't insist on building linux stuff on a mac, but until I figure it out I'm afraid that this is only a partial answer.
