I'm using pytest to test my Python code. Since I use googletest for my C++ tests, I like its output format.
I'm wondering: is it possible to make pytest's output look like googletest's? The pytest output lines are too long, while googletest's are short:
// pytest example:
(base) zz#home% pytest test_rle_v2.py
================================================================================== test session starts ===================================================================================
platform linux -- Python 3.8.1, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/zz/work/test/learn-hp/.hypothesis/examples')
rootdir: /home/zz/work/test/learn-hp
plugins: env-0.6.2, hypothesis-4.38.0
collected 1 item
test_rle_v2.py . [100%]
=================================================================================== 1 passed in 0.46s ====================================================================================
// googletest example
(base) zz#home% ./test_version
[==========] Running 5 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 5 tests from VERSION
[ RUN      ] VERSION.str
[       OK ] VERSION.str (0 ms)
[ RUN      ] VERSION.parts
[       OK ] VERSION.parts (0 ms)
[ RUN      ] VERSION.metadata
[       OK ] VERSION.metadata (1 ms)
[ RUN      ] VERSION.atLeast
[       OK ] VERSION.atLeast (0 ms)
[ RUN      ] VERSION.hasFeature
[       OK ] VERSION.hasFeature (0 ms)
[----------] 5 tests from VERSION (1 ms total)
[----------] Global test environment tear-down
[==========] 5 tests from 1 test suite ran. (1 ms total)
[  PASSED  ] 5 tests.
After several hours of searching and experimenting, I found that a conftest.py file is what I need: in conftest.py you can override pytest's default behavior by providing hook implementations.
The following is a WIP example:
# conftest.py
import os


def pytest_runtest_call(item):
    item.add_report_section("call", "custom", " [ Run ] " + str(item))


def pytest_report_teststatus(report, config):
    # print(">>> outcome:", report.outcome)
    if report.when == 'call':
        # line = f' [ Run ] {report.nodeid}'
        # report.sections.append(('ChrisZZ', line))
        if report.outcome == 'failed':
            line = f' [ FAILED ] {report.nodeid}'
            report.sections.append(('failed due to', line))
    if report.when == 'teardown':
        if report.outcome == 'passed':
            line = f' [ OK ] {report.nodeid}'
            report.sections.append(('ChrisZZ', line))


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(text for report in reports for secname, text in report.sections)
    if content:
        terminalreporter.ensure_newline()
        # terminalreporter.section('', sep=' ', green=True, bold=True)
        # terminalreporter.section('My custom section2', sep='------]', green=True, bold=True, fullwidth=None)
        terminalreporter.line(content)
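A complementary route toward the googletest look is the return value of pytest_report_teststatus: the hook may return a (category, short letter, verbose word) triple, and the verbose word is what pytest -v prints after each test. Below is a minimal sketch along those lines; the bracketed wording is my own formatting, not a pytest default, and newer pytest versions also have a --no-header flag to trim the session header.
# conftest.py -- sketch: googletest-flavoured per-test status words
def pytest_report_teststatus(report, config):
    if report.when != "call":
        return None  # let pytest handle setup/teardown outcomes normally
    if report.passed:
        ms = int(report.duration * 1000)
        return "passed", ".", f"[       OK ] {report.nodeid} ({ms} ms)"
    if report.failed:
        return "failed", "F", f"[  FAILED  ] {report.nodeid}"
Run with pytest -v to see the verbose words; the sections-based approach above remains useful for building a [==========] style summary block.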
I wrote the following code.
https://gitlab.com/ksaito11/click-test
$ cat commands/cmd.py
import click

from commands.hello import hello


def print_version(ctx, param, value):
    if not value or ctx.resilient_parsing:
        return
    click.echo('Version 1.0')
    ctx.exit()


@click.group()
@click.option('--opt1')
@click.option('--version', is_flag=True, callback=print_version,
              expose_value=False, is_eager=True)
@click.pass_context
def cmd(ctx, **kwargs):
    ctx.obj = kwargs


def main():
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')


if __name__ == '__main__':
    main()
$ cat commands/hello.py
import click


@click.command()
def hello():
    click.echo('Hello World!')
The code works correctly.
$ export PYTHONPATH=.
$ python commands/cmd.py
Usage: cmd.py [OPTIONS] COMMAND [ARGS]...
Options:
--opt1 TEXT
--version
--help Show this message and exit.
Commands:
hello
$ python commands/cmd.py --version
Version 1.0
$ python commands/cmd.py hello
Hello World!
I wrote the following test case.
$ cat tests/test_cmd.py
from click.testing import CliRunner
import click
import pytest

from commands.cmd import cmd, main
from commands.hello import hello


def test_version():
    runner = CliRunner()
    result = runner.invoke(cmd, ["--version"])
    assert result.exit_code == 0


def test_help():
    runner = CliRunner()
    result = runner.invoke(cmd)
    assert result.exit_code == 0


def test_hello():
    runner = CliRunner()
    result = runner.invoke(hello)
    assert result.exit_code == 0
I measured the coverage with the following command.
$ pytest --cov-branch --cov=commands
================================================================ test session starts ================================================================
platform linux -- Python 3.9.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/ksaito/ghq/gitlab.com/ksaito11/click-test
plugins: cov-3.0.0
collected 3 items
tests/test_cmd.py ... [100%]
----------- coverage: platform linux, python 3.9.9-final-0 -----------
Name                   Stmts   Miss Branch BrPart  Cover
--------------------------------------------------------
commands/__init__.py       0      0      0      0   100%
commands/cmd.py           18      5      4      2    68%
commands/hello.py          4      0      0      0   100%
--------------------------------------------------------
TOTAL                     22      5      4      2    73%
================================================================= 3 passed in 0.15s =================================================================
I don't know how to write tests that cover the part below, so I can't get 100% coverage.
def cmd(ctx, **kwargs):
    ctx.obj = kwargs


def main():
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')
The code below may not be needed when using @click.group, but I couldn't tell.
def print_version(ctx, param, value):
    if not value or ctx.resilient_parsing:
        return
Please give me advice.
By adding the following settings, code that does not need to be measured is excluded from the coverage report.
$ cat .coveragerc
[run]
branch = True

[report]
exclude_lines =
    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def main
    ctx.obj = kwargs
I deleted the code below because I thought it was unnecessary.
if not value or ctx.resilient_parsing:
    return
The coverage is now 100%.
$ pytest --cov-branch --cov=commands
================================================================ test session starts ================================================================
platform linux -- Python 3.9.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/ksaito/ghq/gitlab.com/ksaito11/click-test
plugins: cov-3.0.0
collected 3 items
tests/test_cmd.py ... [100%]
----------- coverage: platform linux, python 3.9.9-final-0 -----------
Name                   Stmts   Miss Branch BrPart  Cover
--------------------------------------------------------
commands/__init__.py       0      0      0      0   100%
commands/cmd.py           10      0      0      0   100%
commands/hello.py          4      0      0      0   100%
--------------------------------------------------------
TOTAL                     14      0      0      0   100%
================================================================= 3 passed in 0.22s =================================================================
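For one-off cases, the same effect is available inline: coverage.py excludes any line matching # pragma: no cover by default, and when the matched line opens a block the whole block is excluded, just like the def main entry in exclude_lines above. A small illustration using the main() from cmd.py:
def main():  # pragma: no cover
    # Excluded from the coverage report together with the whole body,
    # because the pragma sits on the line that opens the block.
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')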
Is there an option to list the deselected tests in the CLI output, along with the mark that triggered their deselection?
I know that in suites with many tests this would not be a good default, but it would be a useful option in something like API testing, where the test set is likely to be smaller.
The numeric summary
collected 21 items / 16 deselected / 5 selected
is helpful but not enough when trying to organize marks and see what happened in a CI build.
pytest has a hookspec pytest_deselected for accessing the deselected tests. Example: add this code to conftest.py in your test root dir:
def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    reporter = config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        reporter.line(f"deselected: {item.nodeid}", yellow=True, bold=True)
Running the tests now will give you an output similar to this:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collecting ...
deselected: test_spam.py::test_spam
deselected: test_spam.py::test_bacon
deselected: test_spam.py::test_ham
collected 4 items / 3 deselected / 1 selected
...
If you want a report in another format, simply store the deselected items in the config and use them for the desired output somewhere else, e.g. pytest_terminal_summary:
# conftest.py
import os


def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    config.deselected = items


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    deselected = getattr(config, "deselected", [])
    if deselected:
        terminalreporter.ensure_newline()
        terminalreporter.section('Deselected tests', sep='-', yellow=True, bold=True)
        content = os.linesep.join(item.nodeid for item in deselected)
        terminalreporter.line(content)
gives the output:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collected 4 items / 3 deselected / 1 selected
...
---------------------------------------- Deselected tests -----------------------------------------
test_spam.py::test_spam
test_spam.py::test_bacon
test_spam.py::test_ham
================================= 1 passed, 3 deselected in 0.01s =================================
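pytest does not report which -m expression caused the deselection, but printing each deselected item's own marks next to its id usually makes the trigger obvious. A sketch extending the first example; marker names are read with the standard item.iter_markers() API:
def pytest_deselected(items):
    if not items:
        return
    reporter = items[0].session.config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        # Collect the marker names applied to this test, e.g. "slow, integration".
        marks = ", ".join(sorted({mark.name for mark in item.iter_markers()})) or "<no marks>"
        reporter.line(f"deselected: {item.nodeid} (marks: {marks})", yellow=True, bold=True)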
The docs say that the "stamp" parameter will "enable link stamping", that it will "encode build information into the binary", and that it will "stamp the build information into the binary".
But where does that information actually go, and how can I retrieve it?
It does not seem to be a simple search-and-replace, nor a variable definition, nor an environment variable, nor a format-string variable. Test Python script:
BUILD_HOST="ASDF"
print("{BUILD_HOST}")
print(BUILD_HOST)
Build rule:
py_binary(name="catself", srcs=["catself.py"], stamp=1)
Output:
$ bazel run --stamp catself
INFO: Analyzed target //:catself (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:catself up-to-date:
bazel-bin/catself
INFO: Elapsed time: 0.083s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
{BUILD_HOST}
ASDF
It looks like stamping might not be enabled for py_binary. It should work for genrule though, so something simple like this should work:
py_binary(
    name = "foo",
    srcs = ["foo.py"],
    data = [":stable-status.txt"],
)

genrule(
    name = "copy_stable-status.txt",
    outs = ["stable-status.txt"],
    cmd = "cp bazel-out/stable-status.txt $@",
    stamp = 1,
)
foo.py:
build_info = {}
with open("stable-status.txt") as stable_status:
    for line in stable_status.readlines():
        key, val = line.split(" ", 1)
        build_info[key] = val.strip()

print("Build label is:")
print(build_info['BUILD_EMBED_LABEL'])
then:
$ bazel run foo --embed_label=foobar
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:foo (17 packages loaded, 102 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 2.985s, Critical Path: 0.11s
INFO: 1 process: 1 linux-sandbox.
INFO: Build completed successfully, 6 total actions
INFO: Build completed successfully, 6 total actions
Build label is:
foobar
Or something fancier:
py_binary(
    name = "bar",
    srcs = ["bar.py"],
    deps = [":buildinfo"],
)

py_library(
    name = "buildinfo",
    srcs = [":buildinfo.py"],
)

genrule(
    name = "gen_buildinfo",
    outs = ["buildinfo.py"],
    cmd = r"""sed -E 's/(.*) (.*)/\1 = "\2"/' bazel-out/stable-status.txt > $@""",
    stamp = 1,
)
bar.py:
import buildinfo
print("Build label is:")
print(buildinfo.BUILD_EMBED_LABEL)
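Note that BUILD_EMBED_LABEL is not the only default key: Bazel also writes BUILD_HOST and BUILD_USER into stable-status.txt (BUILD_TIMESTAMP goes to volatile-status.txt), so the BUILD_HOST from the question's script can be read the same way. Assuming the gen_buildinfo genrule above, bar.py could also do:
print("Built on host:", buildinfo.BUILD_HOST)
print("Built by user:", buildinfo.BUILD_USER)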
In pytest, when a test case fails, the report contains the following sections:
Failure details
Captured stdout call
Captured stderr call
Captured log call
I would like to add some additional custom sections (I have a server that runs in parallel, and I would like to display the information logged by that server in a dedicated section).
How could I do that (if ever possible)?
Thanks
NOTE:
I have found the following in the source code, but I don't know whether this is the right approach:
nodes.py
class Item(Node):
    ...
    def add_report_section(self, when, key, content):
        """
        Adds a new report section, similar to what's done internally
        to add stdout and stderr captured output::
        ...
        """
reports.py
class BaseReport:
    ...
    @property
    def caplog(self):
        """Return captured log lines, if log capturing is enabled

        .. versionadded:: 3.5
        """
        return "\n".join(
            content for (prefix, content) in self.get_sections("Captured log")
        )
To add custom sections to the terminal output, you need to append to the report.sections list. This can be done directly in a pytest_report_teststatus hookimpl, or indirectly in other hooks (via a hookwrapper); the actual implementation heavily depends on your particular use case. Example:
# conftest.py
import os
import random

import pytest


def pytest_report_teststatus(report, config):
    messages = (
        'Egg and bacon',
        'Egg, sausage and bacon',
        'Egg and Spam',
        'Egg, bacon and Spam'
    )
    if report.when == 'teardown':
        line = f'{report.nodeid} says:\t"{random.choice(messages)}"'
        report.sections.append(('My custom section', line))


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(text for report in reports for secname, text in report.sections)
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section('My custom section', sep='-', blue=True, bold=True)
        terminalreporter.line(content)
Example tests:
def test_spam():
    assert True


def test_eggs():
    assert True


def test_bacon():
    assert False
When running the tests, you should see the My custom section header at the bottom, colored blue and containing a message for every test:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_bacon FAILED
============================================= FAILURES =============================================
____________________________________________ test_bacon ____________________________________________
def test_bacon():
> assert False
E assert False
test_spam.py:9: AssertionError
---------------------------------------- My custom section -----------------------------------------
test_spam.py::test_spam says: "Egg, bacon and Spam"
test_spam.py::test_eggs says: "Egg and Spam"
test_spam.py::test_bacon says: "Egg, sausage and bacon"
================================ 1 failed, 2 passed in 0.07 seconds ================================
The other answer shows how to add a custom section to the terminal report summary, but it's not the best way to add a custom section per test.
For that goal, you can (and should) use the higher-level API add_report_section of an Item node (docs). A minimal example is shown below; modify it to suit your needs. You can pass state from the test instance through the item node, if necessary (see the sketch after the example output).
In test_something.py, here is one passing test and two failing:
def test_good():
    assert 2 + 2 == 4


def test_bad():
    assert 2 + 2 == 5


def test_ugly():
    errorerror
In conftest.py, set up a hook wrapper:
import pytest

content = iter(["first", "second", "third"])


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    item.add_report_section("call", "custom", next(content))
The report will now display custom sections per-test:
$ pytest
============================== test session starts ===============================
platform linux -- Python 3.9.0, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /tmp/example
collected 3 items
test_something.py .FF [100%]
==================================== FAILURES ====================================
____________________________________ test_bad ____________________________________
def test_bad():
> assert 2 + 2 == 5
E assert (2 + 2) == 5
test_something.py:5: AssertionError
------------------------------ Captured custom call ------------------------------
second
___________________________________ test_ugly ____________________________________
def test_ugly():
> errorerror
E NameError: name 'errorerror' is not defined
test_something.py:8: NameError
------------------------------ Captured custom call ------------------------------
third
============================ short test summary info =============================
FAILED test_something.py::test_bad - assert (2 + 2) == 5
FAILED test_something.py::test_ugly - NameError: name 'errorerror' is not defined
========================== 2 failed, 1 passed in 0.02s ===========================
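As noted above, you can pass state from the test itself through the item node when the section content (such as the server log from the original question) is produced during the test. A sketch under that assumption; the server_log fixture and the _server_log attribute are illustrative names, not pytest API:
# conftest.py
import pytest


@pytest.fixture
def server_log(request):
    lines = []
    request.node._server_log = lines  # stash the list on the item before the test runs
    return lines  # the test appends whatever it wants reported


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    yield  # the test body runs here and may fill the list
    lines = getattr(item, "_server_log", None)
    if lines:
        item.add_report_section("call", "server log", "\n".join(lines))
A test that accepts the server_log fixture and calls server_log.append(...) will then get a "Captured server log call" section in its failure report.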
I have a parametrized test which takes a str and a dict as arguments, so the generated test names look pretty weird if I let pytest generate the ids.
I want to generate custom ids using a function, but it doesn't seem to work as intended.
import pytest


def id_func(param):
    if isinstance(param, str):
        return param


@pytest.mark.parametrize(argnames=('date', 'category_value'),
                         argvalues=[("2017.01", {"bills": "0,10", "shopping": "100,90", "Summe": "101,00"}),
                                    ("2017.02", {"bills": "20,00", "shopping": "10,00", "Summe": "30,00"})],
                         ids=id_func)
def test_demo(date, category_value):
    pass
I was expecting it to produce something like this:
test_file.py::test_demo[2017.01] PASSED
test_file.py::test_demo[2017.02] PASSED
but it's producing this:
test_file.py::test_demo[2017.01-category_value0] PASSED
test_file.py::test_demo[2017.02-category_value1] PASSED
Could someone tell me what's wrong with this, or is there any way to achieve this?
Update:
I realized what the issue is: id_func is called for each parameter, and if I don't return a str for a parameter, the default id function is used for it. I have a fix, but it's also ugly.
def id_func(param):
    if isinstance(param, str):
        return param
    return " "
Now it produces something like this:
test_file.py::test_demo[2017.01- ] PASSED
test_file.py::test_demo[2017.02- ] PASSED
The problem is that even if I return an empty string (i.e. return ""), it uses the default representation. Could someone let me know why?
One way is to move your argvalues to another variable and write your test like this:
import pytest

my_args = [
    ("2017.01", {"bills": "0,10", "shopping": "100,90", "Summe": "101,00"}),
    ("2017.02", {"bills": "20,00", "shopping": "10,00", "Summe": "30,00"})
]


@pytest.mark.parametrize(
    argnames=('date', 'category_value'), argvalues=my_args,
    ids=[i[0] for i in my_args]
)
def test_demo(date, category_value):
    pass
Test execution:
$ pytest -v tests.py
================= test session starts =================
platform linux2 -- Python 2.7.12, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /home/kris/.virtualenvs/2/bin/python2
cachedir: .cache
rootdir: /home/kris/projects/tmp, inifile:
collected 2 items
tests.py::test_demo[2017.01] PASSED
tests.py::test_demo[2017.02] PASSED
============== 2 passed in 0.00 seconds ===============
I think it's not fully possible with an id function (id_func in your case), because whenever the function doesn't generate a label for an object, the default pytest representation is used for that argument.
Check the pytest docs for details.
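That said, an id function can work on its own as long as it returns a non-empty string for every parameter; returning None (or, as observed above, an empty string) makes pytest fall back to its default representation for that argument. A sketch that also labels the dict parameter:
def id_func(param):
    if isinstance(param, str):
        return param
    if isinstance(param, dict):
        # e.g. "bills=0,10|shopping=100,90|Summe=101,00"
        return "|".join(f"{key}={value}" for key, value in param.items())
    return repr(param)
The resulting ids are much longer, though, so the list-based ids shown above remain the cleaner choice when only the date should appear.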
Usually, when I want to be specific about which test case is being executed, I use named tuples as a workaround for the id function being called once per parameter; that way I get a cleaner test description.
import pytest
from collections import namedtuple

TCase = namedtuple("TCase", "x,y,expected,description")

test_cases = [
    TCase(10, 10, 20, "10 + 10 should be 20"),
    TCase(1, 1, 2, "1 + 1 should be 2"),
]


def idfn(tc: TCase):
    return tc.description


@pytest.mark.parametrize("tc", test_cases, ids=idfn)
def test_sum(tc):
    assert tc.x + tc.y == tc.expected
Output:
example.py::test_sum[10 + 10 should be 20] PASSED
example.py::test_sum[1 + 1 should be 2] PASSED
So I would write your example as:
from collections import namedtuple

import pytest

TCase = namedtuple("TCase", "date,data")

my_args = [
    TCase("2017.01", {"bills": "0,10", "shopping": "100,90", "Summe": "101,00"}),
    TCase("2017.02", {"bills": "20,00", "shopping": "10,00", "Summe": "30,00"}),
]


@pytest.mark.parametrize("tc", my_args, ids=lambda tc: tc.date)
def test_demo(tc):
    # Do something in here with tc.date and tc.data
    pass
Output:
migration.py::test_demo[2017.01] PASSED
migration.py::test_demo[2017.02] PASSED
Alternatively you can also use a list comprehension to generate your ids as follows:
import pytest

values = [
    ("2017.01", {"bills": "0,10", "shopping": "100,90", "Summe": "101,00"}),
    ("2017.02", {"bills": "20,00", "shopping": "10,00", "Summe": "30,00"})
]

value_ids = [e[0] for e in values]


@pytest.mark.parametrize('date,category_value', values, ids=value_ids)
def test_demo(date, category_value):
    pass
Assuming these tests are in test_file.py at the root of your directory, if you run pytest test_file.py --co -q, you will get the following output:
test_file.py::test_demo[2017.01]
test_file.py::test_demo[2017.02]