I want to see stdout coming from a Python test inside the Jenkins build logs. I'm running pytest (==5.3.1) from within my Jenkins pipeline inside an sh step:
stage('unit tests') {
    print "starting unit tests"
    sh script: """
        source env-test/bin/activate && \
        python -m pytest -x -s src/test/test*.py
    """, returnStdout: true, returnStatus: true
}
Note that I'm running my tests from within a virtual environment (env-test).
Unfortunately, the Jenkins logs do not display output that I send from within my tests:
def test_it(self):
    print('\nhello world')
    self.assertTrue(True)
Instead, the build log only shows the initial call:
+ python -m pytest -x -s src/test/testModel.py
[Pipeline] }
[Pipeline] // stage
My local PyCharm IDE and Git Bash, on the other hand, show all of the output:
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-5.3.1, py-1.8.0, pluggy-0.13.1 -- C:\...\Anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\...\src\test
collecting ... collected 1 item
testModel.py::TestModel::test_it
PASSED [100%]
hello world
============================== 1 passed in 0.57s ==============================
The pytest docs talk about capturing of stdout/stderr output, so I tried the -s parameter to disable capturing, but without success.
The issue was the returnStdout parameter of the Groovy sh step:
returnStdout (optional) If checked, standard output from the task is
returned as the step value as a String, rather than being printed to
the build log. (Standard error, if any, will still be printed to the
log.) You will often want to call .trim() on the result to strip off a
trailing newline. Type: boolean
So I simply removed that option from the sh step.
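For reference, a version of the original stage with that option removed might look something like this (keeping returnStatus is an assumption about the original intent, so that a failing test run still doesn't abort the pipeline):
stage('unit tests') {
    print "starting unit tests"
    sh script: """
        source env-test/bin/activate && \
        python -m pytest -x -s src/test/test*.py
    """, returnStatus: true
}
With returnStdout gone, pytest's output (including print() calls, thanks to -s) is streamed to the build log again.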
Related
On Ubuntu 20.04.5, using pytest 7.2.0 with Python 3.8.10, I have a test script which is run with
py.test -s /path/to/script ...
but it does not show any output of print statements. When running it with
py.test -v -s /path/to/script ...
I double-checked that the test case I am working on is actually executed, but there is still no print output.
What could be going on?
After multiple successful tests, pytest is suddenly throwing this error:
$ pytest -vvv -x --internal --oclint -n -32
============================= test session starts ==============================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.7.0, pluggy-0.13.1 -- /usr/local/opt/python/bin/python3.7
cachedir: .pytest_cache
rootdir: /Users/pre-commit-hooks/code/pre-commit-hooks, inifile: pytest.ini
plugins: xdist-1.31.0, forked-1.1.3
ERROR: MISSING test execution (tx) nodes: please specify --tx
This is my pytest.ini:
[pytest]
markers =
    oclint: marks tests as slow (deselect with '-m "not slow"')
    internal: marks tests as checking internal components. Use this if you are developing hooks
The -n <number> option comes from pytest-xdist, which is an add-on to pytest.
This was strange to me because pytest on a Travis Linux build was working just fine until the last iteration.
Question
Why is pytest asking me to add --tx for my tests?
If you fat-finger an extra dash after -n, as in -n -32, pytest will complain about a missing --tx option. In other words, the number after -n cannot be negative; what you want instead is -n 32.
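With the stray dash removed, the original invocation would look something like this (all other flags taken verbatim from the question):
$ pytest -vvv -x --internal --oclint -n 32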
--tx is normally used like so, for example to invoke a Python 2.7 subprocess:
pytest -d --tx popen//python=python2.7
More information about using --tx as part of pytest can be found in the pytest-xdist documentation on this flag.
I got this same pytest-xdist error but for a different reason. I had set --forked --dist=loadfile but did not specify --numprocesses. Setting that option fixed the error.
Just FYI: a similar error can happen if one specifies e.g. --dist=loadfile without specifying -n.
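For example, a working combination under that assumption would be something like (the worker count of 4 here is arbitrary):
$ pytest -n 4 --dist=loadfile
Passing a --dist mode without -n/--numprocesses leaves pytest-xdist with no execution nodes, which is exactly what the --tx error is complaining about.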
This is very similar to issue #8222 on the vscode-python github.com issues list, but that thread seemed dead, so I am opening a new one.
Environment Data
VSCode install
Version: 1.39.2 (user setup)
Commit: 6ab598523be7a800d7f3eb4d92d7ab9a66069390
Date: 2019-10-15T15:35:18.241Z
Electron: 4.2.10
Chrome: 69.0.3497.128
Node.js: 10.11.0
V8: 6.9.427.31-electron.0
OS: Windows_NT x64 6.1.7601
VSCode Remote - SSH
I am using the VSCode Remote - SSH extension (version 0.48.0) to do all dev and testing on a remote Linux system.
VSCode Extensions
Using only the VSCode Python extension ms-python.python version 2019.11.50794
I used to use the Python test extension, but that capability is now absorbed into the Python extension, which is great.
VSCode project settings.json
{
    "python.pythonPath": "/local/me/opt/miniconda3/envs/deathstar/bin/python",
    "python.testing.pytestArgs": [
        "test",
        "--disable-warnings",
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pytestEnabled": true,
}
Python
$ python --version
Python 3.6.8 :: Anaconda, Inc.
$ python -c "import pytest;print(pytest.__version__)"
5.3.1
Expected behavior
VSCode is able to execute the tests as shown in the test discovery.
VSCode adornments show in test .py files
Actual behavior
Test discovery works and the tests show in the Test Explorer, great!
Test adornments do not show in the Text Editor window
When running a test, the test file, which was already discovered, cannot be found, and the test execution fails with a stack trace in the DEBUG CONSOLE
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/me/project/project_name
collected 0 items
-------------- generated xml file: /tmp/tmp-304736fTj9ikMPptk.xml --------------
============================== 1 warning in 0.01s ==============================
ERROR: file not found: ./test_mything_plugin.py::test_get_conn
Output
Developer Tools Console
[Extension Host] Info Python Extension: 2019-12-31 20:10:20: Cached data exists ActivatedEnvironmentVariables, /home/tjones/project/airflow_etl
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: getActivatedEnvironmentVariables, Class name = b, completed in 1ms, Arg 1: <Uri:/home/tjones/project/airflow_etl>, Arg 2: undefined, Arg 3: undefined
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: > /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575s6J3FtN4Ho55.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
console.ts:137 [Extension Host] Info Python Extension: 2019-12-31 20:10:20: cwd: ~/project/airflow_etl
Python Output
When I hit the Debug button, I get nothing, but when I hit the Play button, I get this:
> /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575D8SX75zh6k5j.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
cwd: ~/project/airflow_etl
> /local/tjones/opt/miniconda3/envs/airflow/bin/python -m pytest --rootdir ~/project/airflow_etl --junitxml=/tmp/tmp-1575s6J3FtN4Ho55.xml --disable-warnings ./test_bam_ctds_plugin.py::test_get_conn
cwd: ~/project/airflow_etl
Typically the code lenses for tests fail because either another extension interferes (e.g. gitlens) or the code isn't being picked up by IntelliSense (i.e. Jedi or the Microsoft Python Language Server). I would try turning off your other extensions and see if that solves the problem. I would also check that you get code completion in your tests files.
Jenkins console output from jobs running py.test tests contains unexpected characters "[1m" and "[0m", like:
[1m============== test session starts ==============[0m
Apparently these characters are leftovers from py.test output formatting ("test session starts" shows up as bold and colored in a terminal window). Is there a way to disable the output formatting? py.test's "--color no" option is not enough.
In my case I'm running pytest inside a Docker container using a Jenkins declarative pipeline, so I needed to verify several things:
First, add ansiColor to the options
pipeline {
    ...
    options {
        ansiColor('xterm')
        ...
    }
    ...
}
Second, verify that you added the -t flag to the docker run command:
-t : Allocate a pseudo-tty
For docker-compose, it's tty: true.
Third, you can force colorized output by adding --color=yes to the pytest command:
python -m pytest --color=yes ...
--color=color color terminal output (yes/no/auto).
Install the AnsiColor plugin in Jenkins. In its configuration panel there will be a new item, "Ansi Color", with an xterm color map.
Your pipeline should contain something like:
stage('Pytest'){
    wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'xterm']) {
        sh """
            source ./$PYTHON3_ENV_NAME/bin/activate
            # Execute tests
            python3 -m pytest test_cases/${TEST_FILTER} --color=yes ....
        """
    }
}
During the run you should then see colors in the pytest stage's output.
If you see the raw color escape codes but not the actual colors, refresh the page (F5) in your browser!
Riccardo
I'm writing some integration tests that involve a Python application running under uwsgi.
To test an aspect of this, I am running an uwsgi spooler, which requires that the master process is running.
If pytest has a failed test, it returns a non-zero exit code, which is great.
Without the master process, the entire uwsgi process also returns this exit code, and so our continuous integration server responds appropriately.
However, when the master process is running, it always exits with a zero exit code - regardless of failed tests.
I need it to pass on the first non-zero exit code of a subprocess if there is one.
Note: I'm not really interested in mocking this out - I need to test this working.
I've created a Dockerized Minimal, Complete, and Verifiable Example that illustrates my issue:
Dockerfile:
FROM python:3.6.4-slim-stretch
WORKDIR /srv
RUN apt-get update \
    && apt-get install -y build-essential \
    && pip install uwsgi pytest
COPY test_app.py /srv/
CMD ["/bin/bash"]
test_app.py:
import pytest

def test_this():
    assert 1==0
Given the above 2 files in a directory, the following shows the return code if I run this failing test under uwsgi without the master process:
$ docker build -t=test .
$ docker run test uwsgi --chdir /srv --pyrun /usr/local/bin/pytest
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
$ echo $?
1
Note: you can see that the return code from this process (last line) is non-zero as required
Now, changing nothing other than running uwsgi with the master process, we get the following output:
$ docker run test uwsgi --set master=true --chdir /srv --pyrun /usr/local/bin/pytest
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
worker 1 buried after 0 seconds
goodbye to uWSGI.
$ echo $?
0
Note: this time the return code from this process (last line) is zero - even though the test failed
How can I get uwsgi to forward the exit code from a failing process to the master?
This works, but feels a little hacky. I'll happily accept a better answer if one comes along.
I've made this work with the addition of two additional files (and a small update to the Dockerfile):
Dockerfile:
FROM python:3.6.4-slim-stretch
WORKDIR /srv
RUN apt-get update \
    && apt-get install -y build-essential \
    && pip install uwsgi pytest
COPY test_app.py test run_tests.py /srv/
CMD ["/bin/bash"]
test:
#!/bin/bash
# Run the tests under uwsgi, then exit with the code pytest wrote to the temp file.
uwsgi --set master=true --chdir /srv --pyrun /srv/run_tests.py
exit $(cat /tmp/test_results)
run_tests.py:
#!/usr/bin/python
import re
import sys

from pytest import main


def write_result(retcode):
    # Persist the pytest exit code so the wrapper script can pick it up
    # after uwsgi has swallowed it.
    path = r'/tmp/test_results'
    with open(path, 'w') as f:
        f.write(str(retcode))


def run():
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    retcode = 1
    try:
        retcode = main()
    finally:
        write_result(retcode)
    sys.exit(retcode)


if __name__ == '__main__':
    run()
The way it works is that I've copied and tweaked the pytest console script into run_tests.py, which writes the return code of the tests out to a temporary file. The tests are run via a bash script, test, which runs uwsgi (which in turn runs the tests) and then exits with the return code read back from that file.
Results now look like:
$ docker build -t=test .
$ docker run test /srv/test
...
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /srv, inifile:
collected 1 item
test_app.py F [100%]
=================================== FAILURES ===================================
__________________________________ test_this ___________________________________
def test_this():
> assert 1==0
E assert 1 == 0
test_app.py:4: AssertionError
=========================== 1 failed in 0.05 seconds ===========================
worker 1 buried after 0 seconds
goodbye to uWSGI.
$ echo $?
1