How to add custom sections to terminal report in pytest - python

In pytest, when a test case fails, the report contains the following sections:
Failure details
Captured stdout call
Captured stderr call
Captured log call
I would like to add some additional custom sections (I have a server that runs in parallel and would like to display the information logged by this server in a dedicated section).
How could I do that (if it is possible at all)?
Thanks
NOTE:
I have currently found the following in the source code, but I don't know whether that is the right approach:
nodes.py
class Item(Node):
    ...
    def add_report_section(self, when, key, content):
        """
        Adds a new report section, similar to what's done internally
        to add stdout and stderr captured output::
        ...
        """
reports.py
class BaseReport:
    ...
    @property
    def caplog(self):
        """Return captured log lines, if log capturing is enabled

        .. versionadded:: 3.5
        """
        return "\n".join(
            content for (prefix, content) in self.get_sections("Captured log")
        )

To add custom sections to the terminal output, you need to append to the report.sections list. This can be done directly in a pytest_report_teststatus hookimpl, or indirectly from other hooks (via a hookwrapper); the actual implementation depends heavily on your particular use case. Example:
# conftest.py
import os
import random
import pytest

def pytest_report_teststatus(report, config):
    messages = (
        'Egg and bacon',
        'Egg, sausage and bacon',
        'Egg and Spam',
        'Egg, bacon and Spam'
    )
    if report.when == 'teardown':
        line = f'{report.nodeid} says:\t"{random.choice(messages)}"'
        report.sections.append(('My custom section', line))

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(text for report in reports for secname, text in report.sections)
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section('My custom section', sep='-', blue=True, bold=True)
        terminalreporter.line(content)
Example tests:
def test_spam():
    assert True

def test_eggs():
    assert True

def test_bacon():
    assert False
When running the tests, you should see the My custom section header at the bottom, colored blue and containing a message for every test:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_bacon FAILED
============================================= FAILURES =============================================
____________________________________________ test_bacon ____________________________________________
def test_bacon():
> assert False
E assert False
test_spam.py:9: AssertionError
---------------------------------------- My custom section -----------------------------------------
test_spam.py::test_spam says: "Egg, bacon and Spam"
test_spam.py::test_eggs says: "Egg and Spam"
test_spam.py::test_bacon says: "Egg, sausage and bacon"
================================ 1 failed, 2 passed in 0.07 seconds ================================

The other answer shows how to add a custom section to the terminal report summary, but it is not the best way to add a custom section per test.
For this, you can (and should) use the higher-level API add_report_section of an Item node (docs). A minimalist example is shown below; modify it to suit your needs. You can pass state from the test instance through the item node if necessary (a sketch of this follows the example output below).
In test_something.py, here is one passing test and two failing:
def test_good():
    assert 2 + 2 == 4

def test_bad():
    assert 2 + 2 == 5

def test_ugly():
    errorerror
In conftest.py, set up a hook wrapper:
import pytest

content = iter(["first", "second", "third"])

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    item.add_report_section("call", "custom", next(content))
The report will now display custom sections per-test:
$ pytest
============================== test session starts ===============================
platform linux -- Python 3.9.0, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /tmp/example
collected 3 items
test_something.py .FF [100%]
==================================== FAILURES ====================================
____________________________________ test_bad ____________________________________
def test_bad():
> assert 2 + 2 == 5
E assert (2 + 2) == 5
test_something.py:5: AssertionError
------------------------------ Captured custom call ------------------------------
second
___________________________________ test_ugly ____________________________________
def test_ugly():
> errorerror
E NameError: name 'errorerror' is not defined
test_something.py:8: NameError
------------------------------ Captured custom call ------------------------------
third
============================ short test summary info =============================
FAILED test_something.py::test_bad - assert (2 + 2) == 5
FAILED test_something.py::test_ugly - NameError: name 'errorerror' is not defined
========================== 2 failed, 1 passed in 0.02s ===========================
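As for passing state from the test instance through the item node, here is a minimal sketch under some assumptions of my own: the section_lines fixture name and the _custom_section_lines attribute are made up for illustration. The fixture hands the test a list attached to the item (request.node), and the hook wrapper attaches whatever was collected as a report section.
# conftest.py -- sketch of forwarding per-test state into a report section
import pytest

@pytest.fixture
def section_lines(request):
    # request.node is the Item for the currently running test;
    # _custom_section_lines is an arbitrary, made-up attribute name
    lines = []
    request.node._custom_section_lines = lines
    return lines

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    yield  # let the test body run first
    lines = getattr(item, "_custom_section_lines", None)
    if lines:
        item.add_report_section("call", "custom", "\n".join(lines))
A test would then simply call section_lines.append('server said hello'), and a Captured custom call block containing that text shows up for that test in the report.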

Related

Pytest missing 1 required positional argument with fixture

I'm using VS Code as my IDE.
I have coded a very simple usage of a pytest fixture, but it doesn't work, even though the basic fixture examples found in the pytest documentation work fine:
@pytest.fixture
def declare_hexidict():
    hd = hexidict()
    rvc = ReferenceValueCluster()
    rv = ReferenceValue(init=3)
    hd_var = (hd, rvc, rv)
    return hd_var

def setitem_getitem(declare_hexidict):
    print('start')
    # hd = hexidict()
    # rvc = ReferenceValueCluster()
    # rv = ReferenceValue(init=3)
    hd, rvc, rv = declare_hexidict
    print('datastruct defined')
    hd[rvc("key1").reflink] = rv[0].reflink
    hd[rvc["key1"]] == rv[0]
    assert rvc["key1"] in hd.keys(), "key :{} is not int this hexidict".format(
        rvc("key1")
    )
    assert hd[rvc["key1"]] == rv[0], "key :{} return {} instead of {}".format(
        rvc["key1"], hd[rvc["key1"]], rv[0]
    )
    # set a non-value item (we set a list)
    hd[rvc("key2").reflink] = [rv[1].reflink]
    hd[rvc["key2"]]
    assert type(hd[rvc["key2"]]) == list
    # check that the item in the list is indeed the one coming from rv
    assert hd[rvc["key2"]][0] in rv
I get in the test summary info :
ERROR test/process/hexidict/test_hd_basic_function.py - TypeError: setitem_getitem() missing 1 required positional argument: 'declare_hexidict'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
pytest does not recognize setitem_getitem as a test, so you should rename it to test_setitem_getitem and try again:
def test_setitem_getitem(declare_hexidict):
The problem is that your test is not detected by pytest's test discovery.
Depending on how you execute your tests (whether you provide the full path to your test file, a path with subdirectories and multiple test files, or want to execute all tests matching a specific mark across the entire project), you will want to make sure all test modules, classes and functions are discovered properly. By default, test files need to match test_*.py or *_test.py, classes Test*, and functions test*.
https://docs.pytest.org/en/7.1.x/explanation/goodpractices.html#conventions-for-python-test-discovery
Test discovery can also be configured to match your needs in pytest.ini.
Example pytest.ini:
[pytest]
python_files = *_pytest.py
python_functions = mytest_*
python_classes = *Tests
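With that example configuration, discovery would pick up files, functions and classes named like the following (hypothetical names chosen only to match the patterns above):
# calculator_pytest.py  -- matches python_files = *_pytest.py
def mytest_addition():  # matches python_functions = mytest_*
    assert 1 + 1 == 2

class CalculatorTests:  # matches python_classes = *Tests
    def mytest_subtraction(self):
        assert 2 - 1 == 1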

Different exit code with pytest when tests raise an unhandled exception (other than AssertionError)

I would like to know if a pytest test suite has failed because of:
1. a given test failed on an assertion, or
2. a given test raised an unhandled exception
So given the following tests:
def test_ok():
    assert 0 == 0

def test_failed():
    assert 1 == 0

def test_different_exit_code():
    open('/nonexistent', 'r')
I want to differentiate (with different exit codes) between the test_failed and the test_different_exit_code case.
pytest (here together with the pytest-bdd plugin) has hooks for handling exceptions; here is one example:
import pytest
from pytest_bdd import given

def pytest_bdd_step_error(request, feature, scenario, step, step_func, step_func_args, exception):
    print(f'Step failed: {step}')
This is a hook you can use to handle errors in your pytest step definitions. As for your test_failed(), you could write it like this instead:
def test_ok():
    assert 0 == 0  # Test is ok

def test_failed():
    # assert 1 == 0  # Test failed; REMOVE THIS PART AND USE assert not
    assert not 1 == 0

def test_different_exit_code():
    open('/nonexistent', 'r')
You can use the pytest-finer-verdicts plugin to differentiate between test failures and other failures (e.g., due to exceptions).
Edit1:
For example, in the below fragment,
import pytest

def test_fail():
    assert 75 <= 70

def test_error():
    open("/nonexistent", 'r')
pytest-finer-verdicts will report that the following two tests fail for different kinds of reasons:
collected 2 items
t.py FE [100%]
==================================== ERRORS =====================================
_________________________ ERROR at setup of test_error1 _________________________
def test_error1():
> open("/nonexistent", 'r')
E FileNotFoundError: [Errno 2] No such file or directory: '/nonexistent'
t.py:7: FileNotFoundError
=================================== FAILURES ====================================
___________________________________ test_fail ___________________________________
def test_fail():
> assert 75 <= 70
E assert 75 <= 70
t.py:4: AssertionError
============================ short test summary info ============================
FAILED t.py::test_fail - assert 75 <= 70
ERROR t.py::test_error1 - FileNotFoundError: [Errno 2] No such file or directo...
========================== 1 failed, 1 error in 0.19s ===========================
I finally ended up with
https://pypi.org/project/pytest-unhandled-exception-exit-code/
This makes setting the exit code possible when an unhandled exception happens in any of the tests, like with the example command line:
pytest --unhandled-exc-exit-code 13
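If you would rather not depend on a plugin, below is a rough conftest.py sketch of the same idea, based on my own assumptions: it flags any failure whose exception is not an AssertionError and then overrides session.exitstatus in pytest_sessionfinish (which pytest reads back as the process exit code); the value 13 is arbitrary.
# conftest.py -- sketch: exit with a distinct code when a failure was caused
# by an unhandled exception rather than a failed assert (13 is arbitrary)
import pytest

_unhandled_exception_seen = False

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    global _unhandled_exception_seen
    outcome = yield
    report = outcome.get_result()
    if report.failed and call.excinfo is not None:
        # anything other than a plain assertion failure counts as "unhandled"
        if not call.excinfo.errisinstance(AssertionError):
            _unhandled_exception_seen = True

def pytest_sessionfinish(session, exitstatus):
    if _unhandled_exception_seen:
        session.exitstatus = 13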

Would like to see list of deselected tests and their node ids in pytest output

Is there an option to list the deselected tests in the CLI output, along with the mark that triggered their deselection?
I know that in suites with many tests this would not be good as a default, but it would be a useful option in something like API testing, where the tests are likely to be more limited.
The numeric summary
collected 21 items / 16 deselected / 5 selected
is helpful but not enough when trying to organize marks and see what happened in a CI build.
pytest has a hookspec pytest_deselected for accessing the deselected tests. Example: add this code to conftest.py in your test root dir:
def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    reporter = config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        reporter.line(f"deselected: {item.nodeid}", yellow=True, bold=True)
Running the tests now will give you an output similar to this:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collecting ...
deselected: test_spam.py::test_spam
deselected: test_spam.py::test_bacon
deselected: test_spam.py::test_ham
collected 4 items / 3 deselected / 1 selected
...
If you want a report in another format, simply store the deselected items in the config and use them for the desired output somewhere else, e.g. pytest_terminal_summary:
# conftest.py
import os

def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    config.deselected = items

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    deselected = getattr(config, "deselected", [])
    if deselected:
        terminalreporter.ensure_newline()
        terminalreporter.section('Deselected tests', sep='-', yellow=True, bold=True)
        content = os.linesep.join(item.nodeid for item in deselected)
        terminalreporter.line(content)
gives the output:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collected 4 items / 3 deselected / 1 selected
...
---------------------------------------- Deselected tests -----------------------------------------
test_spam.py::test_spam
test_spam.py::test_bacon
test_spam.py::test_ham
================================= 1 passed, 3 deselected in 0.01s =================================
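The question also asks for the mark that triggered the deselection. pytest does not pass that information to pytest_deselected, but as an approximation you can list the marks carried by each deselected item, e.g. by extending the hook above like this (a sketch, not a polished solution):
# conftest.py -- also show the marks present on each deselected test
def pytest_deselected(items):
    if not items:
        return
    reporter = items[0].session.config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        marks = ", ".join(mark.name for mark in item.iter_markers()) or "<no marks>"
        reporter.line(f"deselected: {item.nodeid} [{marks}]", yellow=True, bold=True)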

In py.test is it possible to report arbitrary values generated in the test runs?

I know there are plugins for performance tests and profiling for py.test but is there a way to generate arbitrary values which are reported or somehow accessible after the test?
Imagine I have a test like this
def test_minimum_learning_rate():
"""Make some fancy stuff and generate a learning performance value"""
learning_rate = fancy_learning_function().rate
pytest.report("rate", learning_rate)
assert learning_rate > 0.5
The pytest.report(..) line is what I'd like to have (but it isn't there, is it?)
And now I'd like to have something like minimum_learning_rate[rate] written along with the actual test results to the report (or maybe at least shown on the screen).
Really nice would be some plugin for Jenkins which creates a nice chart from that data.
Is there a typical wording for this? I've been looking for kpi, arbitrary values, user defined values, but without any luck yet.
If you just want to output some debug values, a print call combined with the -s argument will already suffice:
def test_spam():
    print('debug')
    assert True
Running pytest -s:
collected 1 item
test_spam.py debug
.
If you are looking for a solution that is better integrated into pytest execution flow, write custom hooks. The examples below should give you some ideas.
Printing custom lines after each test execution
# conftest.py
def pytest_report_teststatus(report, config):
    if report.when == 'teardown':  # you may e.g. also check the outcome here to filter passed or failed tests only
        rate = getattr(config, '_rate', None)
        if rate is not None:
            terminalreporter = config.pluginmanager.get_plugin('terminalreporter')
            terminalreporter.ensure_newline()
            terminalreporter.write_line(f'test {report.nodeid}, rate: {rate}', red=True, bold=True)
Tests:
def report(rate, request):
    request.config._rate = rate

def test_spam(request):
    report(123, request)

def test_eggs(request):
    report(456, request)
Output:
collected 2 items
test_spam.py .
test test_spam.py::test_spam, rate: 123
test_spam.py .
test test_spam.py::test_eggs, rate: 456
===================================================== 2 passed in 0.01 seconds =====================================================
Collecting data and printing after test execution
# conftest.py
def pytest_configure(config):
    config._rates = dict()

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.ensure_newline()
    for testid, rate in config._rates.items():
        terminalreporter.write_line(f'test {testid}, rate: {rate}', yellow=True, bold=True)
Tests:
def report(rate, request):
    request.config._rates[request.node.nodeid] = rate

def test_spam(request):
    report(123, request)

def test_eggs(request):
    report(456, request)
Output:
collected 2 items
test_spam.py ..
test test_spam.py::test_spam, rate: 123
test test_spam.py::test_eggs, rate: 456
===================================================== 2 passed in 0.01 seconds =====================================================
Appending data in JUnit XML report
Using the record_property fixture:
def test_spam(record_property):
    record_property('rate', 123)

def test_eggs(record_property):
    record_property('rate', 456)
Resulting report:
$ pytest --junit-xml=report.xml
...
$ xmllint --format report.xml
<testsuite errors="0" failures="0" name="pytest" skipped="0" tests="2" time="0.056">
  <testcase classname="test_spam" file="test_spam.py" line="12" name="test_spam" time="0.001">
    <properties>
      <property name="rate" value="123"/>
    </properties>
  </testcase>
  <testcase classname="test_spam" file="test_spam.py" line="15" name="test_eggs" time="0.001">
    <properties>
      <property name="rate" value="456"/>
    </properties>
  </testcase>
</testsuite>

assert pytest command has been run

I have a Django app route that will run a pytest.main() command if some conditions are met:
def run_single_test(request, single_test_name):
    # get dict of test names, test paths
    test_dict = get_single_test_names()
    # check to see if test is in the dict
    if single_test_name in test_dict:
        for test_name, test_path in test_dict.items():
            # if testname is valid run associated test
            if test_name == single_test_name:
                os.chdir('/lib/tests/')
                run_test = pytest.main(['-v', '--json-report', test_path])
    else:
        return 'The requested test could not be found.'
I would like to include a unit test that validates run_test has been executed.
What is the best approach to doing this? Mock and unittest are new to me.
I tried messing around with stdout:
def test_run_single_test_flow_control(self):
    mock_get = patch('test_automation_app.views.get_single_test_names')
    mock_get = mock_get.start()
    mock_get.return_value = {'test_search': 'folder/test_file.py::TestClass::test'}
    results = run_single_test('this-request', 'test_search')
    output = sys.stdout
    self.assertEqual(output, '-v --json-report folder/test_file.py::TestClass::test')
but this returns:
<_pytest.capture.EncodedFile object at XXXXXXXXXXXXXX>
Here are two example tests that verify pytest.main is invoked when a valid test name is passed and not invoked otherwise. I also added a few different variants of mock_pytest_main.assert_called as an example; they all do pretty much the same thing, with an extra check for the args passed on the call. Hope this helps you write more complex tests!
from unittest.mock import patch

from test_automation_app.views import run_single_test

def test_pytest_invoked_when_test_name_valid():
    with patch('pytest.main') as mock_pytest_main, patch('test_automation_app.views.get_single_test_names') as mock_get:
        mock_get.return_value = {'test_search': 'folder/test_file.py::TestClass::test'}
        results = run_single_test('this-request', 'test_search')
        mock_pytest_main.assert_called()
        mock_pytest_main.assert_called_with(['-v', '--json-report', 'folder/test_file.py::TestClass::test'])
        mock_pytest_main.assert_called_once()
        mock_pytest_main.assert_called_once_with(['-v', '--json-report', 'folder/test_file.py::TestClass::test'])

def test_pytest_not_invoked_when_test_name_invalid():
    with patch('pytest.main') as mock_pytest_main, patch('test_automation_app.views.get_single_test_names') as mock_get:
        mock_get.return_value = {'test_search': 'folder/test_file.py::TestClass::test'}
        results = run_single_test('this-request', 'test_non_existent')
        mock_pytest_main.assert_not_called()
