I would like to be able to run some Python unit tests apart from the other tests, by means of CTest and the following command:
make unit_tests
I tried the following combination but it does not work:
ADD_TEST(unit_test_1 ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_1.py --verbose)
ADD_TEST(unit_test_2 ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_2.py --verbose)
ADD_CUSTOM_TARGET(unit_tests COMMAND ${CMAKE_CTEST_COMMAND}
DEPENDS unit_test_1 unit_test_2)
Do you know how to do it?
This works for me (I replaced the test commands with some dummy statements, but adjusting it to invoke python should be doable):
cmake_minimum_required(VERSION 3.11)
enable_testing()
add_test(unit_test_1 echo "Unit test 1")
add_test(unit_test_2 echo "Unit test 2")
add_custom_target(unit_tests COMMAND ${CMAKE_CTEST_COMMAND})
No need to add any dependencies to the unit_tests target. By default, ctest runs all tests.
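For completeness, here is a sketch of the same CMakeLists.txt adapted back to the Python tests from the question (untested; it assumes find_package(PythonInterp) provides PYTHON_EXECUTABLE as in the question, and it adds a test label so the unit_tests target runs only these tests rather than the whole suite):
cmake_minimum_required(VERSION 3.11)
enable_testing()
find_package(PythonInterp REQUIRED)
add_test(NAME unit_test_1 COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_1.py --verbose)
add_test(NAME unit_test_2 COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_2.py --verbose)
# Label the tests so ctest can select them independently of the rest
set_tests_properties(unit_test_1 unit_test_2 PROPERTIES LABELS unit)
# -L runs only the tests whose labels match the given regex
add_custom_target(unit_tests COMMAND ${CMAKE_CTEST_COMMAND} -L unit)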
When I run my test suite with parallel execution, I get different results compared to running with a single test runner.
I tried two approaches, and neither produced correct results: they either produced only one partial result, or fewer results than we had runners.
I've tried with coverage and pytest combined:
COVERAGE_PROCESS_START=./my_app coverage run --parallel-mode --concurrency=multiprocessing -m pytest -m "not e2e" -n 4
Also with pytest and pytest-cov:
pytest -m "not e2e" -n 4 --cov=my_app
The second one also had the issue that some templatetags were not seen as registered even though others in the same directory were registered.
After running these I executed coverage combine and coverage report. When run in parallel the results are always incomplete compared to running it with only one test runner, which works perfectly fine:
coverage run -m pytest -m "not e2e"
This is my coveragerc:
[run]
include = my_app/*
omit = *migrations*, *tests*
plugins =
django_coverage_plugin
[report]
show_missing = true
Does someone know how to get correct coverage results when running pytest in parallel?
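For reference, coverage.py also has ini equivalents of the parallel flags used above, so they apply regardless of how coverage is launched; here is a sketch of the same .coveragerc with parallel collection enabled (this reflects coverage.py's documented parallel and concurrency settings, not a verified fix for the incomplete results):
[run]
include = my_app/*
omit = *migrations*, *tests*
# ini equivalents of --parallel-mode and --concurrency=multiprocessing
parallel = True
concurrency = multiprocessing
plugins =
    django_coverage_plugin

[report]
show_missing = true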
I'm getting a deprecation warning from my pipelines at CircleCI.
Message:
/home/circleci/evobench/env/lib/python3.7/site-packages/_pytest/junitxml.py:436: PytestDeprecationWarning: The 'junit_family' default value will change to 'xunit2' in pytest 6.0.
Command:
- run:
    name: Tests
    command: |
      . env/bin/activate
      mkdir test-reports
      python -m pytest --junitxml=test-reports/junit.xml
How should I modify the command to use xunit2?
Is it possible to use the default, as mentioned in the message? I mean, without specifying xunit or junit explicitly.
Here's the full pipeline.
Run your command in one of these ways.
With xunit2:
python -m pytest -o junit_family=xunit2 --junitxml=test-reports/junit.xml
With xunit1:
python -m pytest -o junit_family=xunit1 --junitxml=test-reports/junit.xml
or
python -m pytest -o junit_family=legacy --junitxml=test-reports/junit.xml
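Applied to the CircleCI step from the question, the modified command would look roughly like this (the rest of the step is unchanged):
- run:
    name: Tests
    command: |
      . env/bin/activate
      mkdir test-reports
      python -m pytest -o junit_family=xunit2 --junitxml=test-reports/junit.xml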
The pytest documentation describes the change in detail:
The default value of junit_family option will change to xunit2 in
pytest 6.0, given that this is the version supported by default in
modern tools that manipulate this type of file.
In order to smooth the transition, pytest will issue a warning in case
the --junitxml option is given in the command line but junit_family is
not explicitly configured in pytest.ini:
PytestDeprecationWarning: The `junit_family` default value will change to 'xunit2' in pytest 6.0. Add `junit_family=legacy` to your
pytest.ini file to silence this warning and make your suite
compatible.
In order to silence this warning, users just need to configure the
junit_family option explicitly:
[pytest]
junit_family=legacy
In your pytest.ini file, add the following line if you want to keep the old behavior of the --junitxml option:
junit_family=legacy
Alternatively, you can accept the new format, xunit2, either by setting it explicitly or by leaving junit_family undefined once it becomes the default.
Essentially, the warning is saying that you are passing the --junitxml option in your run: name: Tests step without specifying the junit_family variable. You need to define it explicitly to remove the warning, or accept the new default.
This thread goes into more details about where to find the .ini file for pytest.
For the official statement/documentation about moving from xunit1 to xunit2, read: docs.pytest.org
Also, if your project contains a pytest.ini file, you can set junit_family directly there:
# pytest.ini
[pytest]
minversion = 6.0
junit_family=xunit2
junit_suite_name = Pytest Tests
addopts = -ra -q -v -s --junitxml=path/to/pytest_results/pytest.xml
The other answers pretty much covered the means of specifying the junit family, either within pytest.ini or at the command line with an ini option override.
It's worth looking at the differences between xunit1 and xunit2 for a concrete XML file. Doing a quick spot check, I found the differences described below for the following test module...
# test_stub.py
import sys

import pytest


def test_pass():
    assert True


def test_fail():
    assert False


if __name__ == "__main__":
    sys.exit(pytest.main([__file__] + sys.argv[1:]))
Pytest was run under three separate configurations:
# Default execution
pytest test_stub.py --junit-xml=out.xml
pytest test_stub.py --junit-xml=out_xunit2.xml -o junit_family=xunit2
# Junit prefix execution
pytest test_stub.py --junit-prefix=FOOOP --junit-xml=out_prefix.xml
pytest test_stub.py --junit-prefix=FOOOP --junit-xml=out_prefix_xunit2.xml -o junit_family=xunit2
# Junit suite execution
pytest -o junit_suite_name=SUITE test_stub.py --junit-xml=out_suite.xml
pytest -o junit_suite_name=SUITE test_stub.py --junit-xml=out_suite_xunit2.xml -o junit_family=xunit2
All of the diffs pretty much highlight the fact that xunit2 omits the file and line attributes that showed up previously in xunit1. The other diffs were merely timestamp differences. Both junit_suite_name and junit-prefix behave as before.
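To make that concrete, a passing testcase element looks roughly like this in each family (attribute values are illustrative, not copied from a real run):
xunit1 (legacy), with file and line attributes:
<testcase classname="test_stub" file="test_stub.py" line="5" name="test_pass" time="0.001"/>
xunit2, with those attributes omitted:
<testcase classname="test_stub" name="test_pass" time="0.001"/>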
Another major difference is that for some reason record_property has been deprecated under the xunit2 schema.
PytestWarning: record_property is incompatible with junit_family 'xunit2' (use 'legacy' or 'xunit1')
https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.junitxml.record_property
I'm running pytest tests from a shell script.
The relevant line in the script looks something like:
pytest pytest_tests --param=$my_param
According to pytest documentation, "Running pytest can result in six different exit codes" (0-5).
My question is how can I get this exit code from the script?
I tried something like
exit_code = pytest pytest_tests --param=$my_param
echo $exit_code
But I got this:
exit_code: command not found
How can I get it? Or is there a better way to get pytest results in the shell script?
After a command runs, its exit code is available via the $? variable. Try something like this:
pytest pytest_tests --param=$my_param
echo Pytest exited $?
This works in Bash, and should work in the regular sh Bourne shell and zsh as well.
If you need to assign this to another variable, use
my_var=$?
Note the lack of spaces.
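If the script needs to branch on the outcome, the documented exit codes can be checked directly; a minimal Bash sketch reusing the command from the question:
pytest pytest_tests --param="$my_param"
exit_code=$?
# Documented pytest exit codes: 0 = all passed, 1 = some tests failed,
# 2 = interrupted by the user, 3 = internal error, 4 = usage error,
# 5 = no tests collected
if [ "$exit_code" -eq 5 ]; then
    echo "No tests were collected"
elif [ "$exit_code" -ne 0 ]; then
    echo "Pytest failed with exit code $exit_code"
fi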
I'd like to only run selenium tests in my test suite, in addition to filtering it down to only run tests in a specific file/folder. It seems like I should be able to accomplish this with the -m option, and the path positional argument. Furthermore, I'm doing this in a bash script.
So for example, I tried something like this:
#!/bin/bash
# ...some logic here for determining `EXTRA` arg
EXTRA="not selenium"
py.test -m $EXTRA -v -s --tb=long --no-flaky-report ~/project/mytests/test_blerg.py
And then my test looks like this (still using xunit-style classes):
@pytest.mark.selenium
class BaseTest(UnitTest):
    pass


class ChildTest(BaseTest):
    def test_first_case(self):
        pass
When I run the py.test command as I described above, I get this:
============================================================================ no tests ran in 0.01 seconds ============================================================================
ERROR: file not found: selenium"
Not completely sure why this doesn't work. I'll try manually overriding pytest_runtest_setup(), but I feel like I should be able to accomplish what I want without doing that. Also, just FYI, this is a Django project, using Django==1.8.7 and pytest-django==2.9.1.
Any help would be greatly appreciated :)
Figured it out. This has nothing to do with py.test itself. I had an error in how I was calling the py.test command in my bash script: I was passing $EXTRA to -m without quotes. The amended command looks like this:
py.test -m "$EXTRA" -v -s --tb=long --no-flaky-report ~/project/mytests/test_blerg.py
Works as expected!
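For anyone hitting the same thing, a minimal demonstration of the difference (using the paths from the question):
EXTRA="not selenium"
# Unquoted: the shell splits the expansion into two words, so pytest
# parses `-m not` and treats the stray `selenium` token as a file path.
py.test -m $EXTRA ~/project/mytests/test_blerg.py
# Quoted: the whole marker expression is passed as a single argument.
py.test -m "$EXTRA" ~/project/mytests/test_blerg.py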
How can I generate a test report using pytest? I searched for it, but whatever I got was about coverage reports.
I tried with this command:
py.test sanity_tests.py --cov=C:\Test\pytest --cov-report=xml
But as the parameters suggest, it generates a coverage report, not a test report.
Ripped from the comments: you can use the --junitxml argument.
$ py.test sample_tests.py --junitxml=C:\path\to\out_report.xml
You can use the pytest plugin pytest-html to generate HTML reports, which can be forwarded to different teams as well.
First install the plugin:
$ pip install pytest-html
Second, just run your tests with this command:
$ pytest --html=report.html
You can also make use of the hooks provided by the plugin in your code, for example in your conftest.py:
# conftest.py
def pytest_html_report_title(report):
    report.title = "My very own title!"
Reference: https://pypi.org/project/pytest-html/
I haven't tried it yet, but you can try referring to https://github.com/pytest-dev/pytest-html, a Python library that can generate HTML output.
py.test --html=Report.html
Here you can specify your test file as well. When no file is specified, it picks up all files with names matching 'test_*' in the directory where the command is run, executes them, and generates a report named Report.html.
You can also modify the name of the report accordingly.
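One related option worth knowing about (assuming a reasonably recent version of pytest-html): by default the HTML report references separate asset files, so if you need a single file you can share, pass --self-contained-html:
py.test --html=Report.html --self-contained-html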