Why is pytest asking to specify --tx - python

After multiple successful tests, pytest is suddenly throwing this error:
$ pytest -vvv -x --internal --oclint -n -32
============================= test session starts ==============================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.7.0, pluggy-0.13.1 -- /usr/local/opt/python/bin/python3.7
cachedir: .pytest_cache
rootdir: /Users/pre-commit-hooks/code/pre-commit-hooks, inifile: pytest.ini
plugins: xdist-1.31.0, forked-1.1.3
ERROR: MISSING test execution (tx) nodes: please specify --tx
This is my pytest.ini:
[pytest]
markers =
    oclint: marks tests as slow (deselect with '-m "not slow"')
    internal: marks tests as checking internal components. Use this if you are developing hooks
-n <num> is from pytest-xdist, which is a plugin for pytest.
This was strange to me because pytest on a Travis Linux build was working just fine until the last iteration.
Question
Why is pytest asking me to add --tx for my tests?

If you fat-finger an extra dash into the -n value, as in -n -32, pytest will complain about a missing --tx option. In other words, the number after -n cannot be negative; what you want instead is -n 32.
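For reference, the command from the question with the stray dash removed would be:
pytest -vvv -x --internal --oclint -n 32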
--tx is normally used like so, to invoke a Python 2.7 subprocess:
pytest -d --tx popen//python=python2.7
More information about using --tx can be found in the pytest-xdist documentation on this flag.

I got this same pytest-xdist error but for a different reason. I had set --forked --dist=loadfile but did not specify --numprocesses. Setting that option fixed the error.

Just FYI: a similar error can happen if you specify e.g. --dist=loadfile without also specifying -n.
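For example, a loadfile-distributed run also needs a worker count; something along these lines should work (the worker count of 4 is just an illustration):
pytest --dist=loadfile -n 4
or, with the long spelling of the option mentioned in the other answer:
pytest --forked --dist=loadfile --numprocesses=4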

Related

Reuse environment on Tox 4

This is my tox.ini file:
# Tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
#
# See also https://tox.readthedocs.io/en/latest/config.html for more
# configuration options.
[tox]
# Choose your Python versions. They have to be available
# on the system the tests are run on.
# skipsdist=True
ignore_basepython_conflict=false
[testenv:{setup,lint,codestyle,docstyle,tests,doc-linux,doc-darwin,doc-win32}]
basepython=python3.9
envdir = {toxworkdir}/py39
setenv =
    PROJECT_NAME = project_name
passenv =
    WINDIR
install_command=
    pip install \
        --find-links=pkg \
        --trusted-host=pypi.python.org \
        --trusted-host=pypi.org \
        --trusted-host=files.pythonhosted.org \
        {opts} {packages}
platform = doc-linux: linux
    doc-darwin: darwin
    doc-win32: win32
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt
commands =
    setup: python -c "print('All SetUp')"
    # Mind the gap, use a backslash :)
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     --extension-pkg-whitelist=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-modules=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-classes=PyQt5,numpy,torch,cv2,boto3 \
    lint:     project_name \
    lint:     {toxinidir}/script
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     demo/demo_file.py
    codestyle: pycodestyle --max-line-length=100 \
    codestyle:     --exclude=project_name/third_party/* \
    codestyle:     project_name demo script
    docstyle: pydocstyle \
    docstyle:     --match-dir='^((?!(third_party|deprecated)).)*' \
    docstyle:     project_name demo script
    doc-linux: make -C {toxinidir}/doc html
    doc-darwin: make -C {toxinidir}/doc html
    doc-win32: {toxinidir}/doc/make.bat html
    tests: python -m pytest -v -s --cov-report xml --durations=10 \
    tests:     --cov=project_name --cov=script \
    tests:     {toxinidir}/test
    tests: coverage report -m --fail-under 100
On tox<4.0 it was very convenient to run tox -e lint to fix linting stuff, or tox -e codestyle to fix codestyle stuff, etc. But now, with tox>4.0, each time I run one of these commands I get this message (for instance):
codestyle: recreate env because env type changed from {'name': 'lint', 'type': 'VirtualEnvRunner'} to {'name': 'codestyle', 'type': 'VirtualEnvRunner'}
codestyle: remove tox env folder .tox/py39
And it takes forever to run these commands since the environments are recreated each time ...
I also use this structure for running tests on Jenkins, so I can map each of these commands to a Jenkins stage.
How can I reuse the environment? I have read that it is possible to do it using plugins, but no idea how this can be done, or how to install/use plugins.
I have tried this:
tox multiple tests, re-using tox environment
But it does not work in my case.
I expect to reuse the environment for each of the environments defined in the tox file.
As an addition to N1ngu's excellent answer...
You could re-structure your tox.ini as follows:
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
<here goes the lint specific configuration>
[testenv:codestyle]
...
And so on. This is a common setup.
While the environments still need to be created at least once, they won't get recreated on each invocation.
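Applied to the tox.ini from the question, a minimal sketch of that layout might look like this (contents trimmed to the lint and codestyle factors; keep whatever options you actually need):
[tox]
envlist = lint, codestyle, docstyle, tests

[testenv]
basepython = python3.9
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt

[testenv:lint]
commands =
    pylint -f parseable -r n --disable duplicate-code project_name {toxinidir}/script

[testenv:codestyle]
commands =
    pycodestyle --max-line-length=100 --exclude=project_name/third_party/* project_name demo script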
This all said, you could also have a look at https://pre-commit.com/ to run your linters, which is very common in the Python community.
Then you would have a tox.ini like the following...
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
deps = pre-commit
commands = pre-commit run --all-files
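The pre-commit side then expects a .pre-commit-config.yaml at the repository root; a minimal sketch (the hook repository and rev below are only illustrative, pin whatever tools and versions you actually use):
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8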
There is now a definitive answer about re-use of environments in the FAQ:
https://tox.wiki/en/latest/upgrading.html#re-use-of-environments
I fear the generative names + factor-specific commands solution you linked relied on tox 3 not auto-recreating the environments by default, which is among the new features in tox 4. Now, environment recreation is something that can be forced (--recreate) but can't be opted out of.
Official answer on this https://github.com/tox-dev/tox/issues/425 boils down to
Officially we don't allow sharing tox environments at the moment [...] As of today, each tox environment has to have it's own virtualenv even if the Python version and dependencies are identical [...] We'll not plan to support this. However, tox 4 allows one to do this via a plugin, so we'd encourage people [...] to try it [...]. Once the project is stable and widely used we can revisit accepting it in core.
So that's it: write a plugin. No idea how to do that either, so my apologies if this turns out to be "not an answer".

'No module named pytest' on Gitlab CI nox session despite installing it and using 'external'

When I push my code to GitLab it goes through CI, but the pytest session fails there, while on my local machine it doesn't.
In my gitlab-ci.yaml I tried installing pytest both inside and outside the requirements.txt file; this is how it looks now:
.test:
  tags: ["CsLib"]
  before_script:
    - python3 -m pip install -r requirements.txt
    - python3 -m pip install pytest

pytest:
  stage: test
  extends: ".test"
  script:
    - nox -s test
After doing the installation, CI runs the nox session defined for pytest, which looks like this:
@session(python=["3.9"])
def test(s: Session) -> None:
    s.posargs.append("--no-install")
    s.run("python", "-m", "pytest", "tests", external=True)
I have enabled the use of existing virtualenvs, and passed external=True for it to use already installed packages. Yet it gives this error:
$ nox -s test
nox > Running session test-3.9
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/test-3-9
nox > python -m pytest tests
/builds/RnD/python-libs/.nox/test-3-9/bin/python: No module named pytest
nox > Command python -m pytest tests failed with exit code 1
nox > Session test-3.9 failed.
The thing is, a lint session that has almost the same structure as the test session, but uses flake8, gives no error.
@session(python=["3.9"])
def lint(s: Session) -> None:
    s.posargs.append("--no-install")
    s.run("flake8", external=True)
This flake8 is installed via the requirements file; I tried doing the same with pytest but it does not work.
If I run nox -s test on my local machine it executes without any problem, so I must be doing something wrong on the CI side that I cannot see.
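For reference, "enabling the use of existing virtualenvs" in nox usually means either passing -r / --reuse-existing-virtualenvs on the command line or setting the option in noxfile.py. A minimal sketch of the latter (an assumption about how this was configured, not taken from the question):
import nox

# Reuse virtualenvs across nox runs instead of recreating them each time.
nox.options.reuse_existing_virtualenvs = True
On a fresh CI runner there is nothing to reuse yet, which would explain why the log above still shows a brand-new virtualenv being created.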

VSCode Python test failing ERROR: file not found: ./test_mything_plugin.py::test_get_conn

Very similar to issue #8222 on the vscode-python GitHub issues list, but that thread seemed dead, so I am opening a new one.
Environment Data
VSCode install
Version: 1.39.2 (user setup)
Commit: 6ab598523be7a800d7f3eb4d92d7ab9a66069390
Date: 2019-10-15T15:35:18.241Z
Electron: 4.2.10
Chrome: 69.0.3497.128
Node.js: 10.11.0
V8: 6.9.427.31-electron.0
OS: Windows_NT x64 6.1.7601
VSCode Remote - SSH
I am using VSCode Remote - SSH to do all dev and testing on a remote linux system, version 0.48.0
VSCode Extensions
Using only the VSCode Python extension ms-python.python version 2019.11.50794
I used to use the python test extension but that capability is now absorbed into the python extension, which is great
VSCode project settings.json
{
    "python.pythonPath": "/local/me/opt/miniconda3/envs/deathstar/bin/python",
    "python.testing.pytestArgs": [
        "test",
        "--disable-warnings",
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pytestEnabled": true,
}
Python
$ python --version
Python 3.6.8 :: Anaconda, Inc.
$ python -c "import pytest;print(pytest.__version__)"
5.3.1
Expected behavior
VSCode is able to execute the tests as shown in the test discovery.
VSCode adornments show in test .py files
Actual behavior
Test discovery works and the tests show in the Test Explorer, great!
Test adornments do not show in the Text Editor window
When running a test, the test file, which was already discovered, cannot be found and the test execution fails out with stack trace in DEBUG CONSOLE
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/me/project/project_name
collected 0 items
-------------- generated xml file: /tmp/tmp-304736fTj9ikMPptk.xml --------------
============================== 1 warning in 0.01s ==============================
ERROR: file not found: ./test_mything_plugin.py::test_get_conn
Output
Developer Tools Console
[Extension Host] Info Python Extension: 2019-12-31 20:10:20: Cached data exists ActivatedEnvironmentVariables, /home/tjones/project/airflow_etl
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: getActivatedEnvironmentVariables, Class name = b, completed in 1ms, Arg 1: <Uri:/home/tjones/project/airflow_etl>, Arg 2: undefined, Arg 3: undefined
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: > /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575s6J3FtN4Ho55.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: cwd: ~/project/airflow_etl
Python Output
When I hit the debug button, I get nothing, but when I hit the Play button, I get this:
> /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575D8SX75zh6k5j.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
cwd: ~/project/airflow_etl
> /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575s6J3FtN4Ho55.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
cwd: ~/project/airflow_etl
Typically the code lenses for tests fail because either another extension interferes (e.g. GitLens) or the code isn't being picked up by IntelliSense (i.e. Jedi or the Microsoft Python Language Server). I would try turning off your other extensions and see if that solves the problem. I would also check that you get code completion in your test files.

Test output not visible in Jenkins build log

I want to see stdout coming from a Python test in the Jenkins build log. I'm running pytest (==5.3.1) from within my Jenkins pipeline inside an sh step:
stage('unit tests') {
    print "starting unit tests"
    sh script: """
        source env-test/bin/activate && \
        python -m pytest -x -s src/test/test*.py
    """, returnStdout: true, returnStatus: true
}
Note that I'm running my tests from within a virtual environment (env-test).
Unfortunately, the Jenkins logs do not display output that I send from within my tests:
def test_it(self):
    print('\nhello world')
    self.assertTrue(True)
Instead, the log only shows the initial call:
+ python -m pytest -x -s src/test/testModel.py
[Pipeline] }
[Pipeline] // stage
Whereas my local PyCharm IDE and Git Bash show all the output:
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-5.3.1, py-1.8.0, pluggy-0.13.1 -- C:\...\Anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\...\src\test
collecting ... collected 1 item
testModel.py::TestModel::test_it
PASSED [100%]
hello world
============================== 1 passed in 0.57s ==============================
The pytest docs talk about capturing of stdout/stderr output, so I tried using the -s parameter to disable capturing, but without success.
The issue was the returnStdout parameter of the Groovy sh step:
returnStdout (optional) If checked, standard output from the task is
returned as the step value as a String, rather than being printed to
the build log. (Standard error, if any, will still be printed to the
log.) You will often want to call .trim() on the result to strip off a
trailing newline. Type: boolean
So I simply removed that option from the sh step.
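With that change, the stage from the question looks roughly like this (everything else unchanged):
stage('unit tests') {
    print "starting unit tests"
    sh script: """
        source env-test/bin/activate && \
        python -m pytest -x -s src/test/test*.py
    """, returnStatus: true
}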

Disabling pytest plugin works locally, doesn't work on CI

I have a pytest setup with the following config file I use for integration tests:
[pytest]
addopts = -p no:python -p no:random-order --tb=short
junit_suite_name = Integration
filterwarnings =
    ignore::DeprecationWarning
The aim is to not load the random-order plugin. When running locally, I get this as the output:
$ pytest -c pytest-integration.ini --junitxml=integration-tests.xml tests/integration/
======================= test session starts ===========================
platform darwin -- Python 3.6.4, pytest-3.7.2, py-1.7.0, pluggy-0.8.0
rootdir: /Users/ringods/Projects/customer/project/tests/integration,
inifile: pytest-integration.ini
plugins: cov-2.6.0, mamba-1.0.0
collected 629 items
As expected, no trace of the random-order plugin. I pushed my changes to our build server (Jenkins), and this is the output from Jenkins:
+ pytest -c pytest-integration.ini --junitxml=integration-tests.xml tests/integration/
===================== test session starts ========================
platform linux -- Python 3.6.3, pytest-3.7.2, py-1.7.0, pluggy-0.8.0
Test order randomisation NOT enabled. Enable with --random-order or -- random-order-bucket=<bucket_type>
rootdir: /home/centos/workspace/test-reporting-L2CS5UFPVK3I5UNI6BJIMJPWQQMDOV465LKDS2BSKJ5UXDZGAI6Q/tests/integration, inifile: pytest-integration.ini
plugins: random-order-1.0.4, cov-2.6.0, mamba-1.0.0
collected 629 items
I cannot seem to find out why the random-order plugin is still loaded. Can anyone help me out?
Make sure that the config file you use locally is the same as the one used on Jenkins.
Also check that you have pushed your changes to the config file.
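If the config files do match, it is also worth comparing what each environment actually has installed and loads; for example (a debugging sketch, adjust to your setup):
pip freeze | grep -i random
pytest -c pytest-integration.ini --trace-config --collect-only -q
Note that the plugins: lines in the two outputs above already show that random-order-1.0.4 is installed on the Jenkins node but not locally, which is why only the Jenkins run has anything to disable in the first place.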
