How can I show the path to the test function next to the function name when a test fails? What I would like to see:
======================FAILURES==========================
_____________path/to/module::function_name______________
The headline is controlled by the head_line property of the TestReport class. Beware that it is marked experimental, so it may be renamed or replaced in future versions. Create a file named conftest.py in your project root dir with the following contents:
import pytest
from _pytest.reports import TestReport


class CustomReport(TestReport):
    @TestReport.head_line.getter
    def head_line(self):
        return f'my headline: {self.nodeid}'


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
    return CustomReport.from_item_and_call(item, call)
Example output:
$ pytest -v
======================================= test session starts =======================================
...
test_spam.py::test_spam PASSED [ 20%]
test_spam.py::test_eggs FAILED [ 40%]
test_spam.py::test_bacon[1] FAILED [ 60%]
test_spam.py::test_bacon[2] FAILED [ 80%]
test_spam.py::TestFizz::test_buzz FAILED [100%]
============================================ FAILURES =============================================
______________________________ my headline: test_spam.py::test_eggs _______________________________
def test_eggs():
> assert False
E assert False
test_spam.py:8: AssertionError
____________________________ my headline: test_spam.py::test_bacon[1] _____________________________
n = 1
@pytest.mark.parametrize('n', range(1,3))
def test_bacon(n):
> assert False
E assert False
test_spam.py:13: AssertionError
____________________________ my headline: test_spam.py::test_bacon[2] _____________________________
n = 2
@pytest.mark.parametrize('n', range(1,3))
def test_bacon(n):
> assert False
E assert False
test_spam.py:13: AssertionError
_________________________ my headline: test_spam.py::TestFizz::test_buzz __________________________
self = <test_spam.TestFizz object at 0x7f5e44ba2438>
def test_buzz(self):
> assert False
E assert False
test_spam.py:18: AssertionError
=============================== 4 failed, 1 passed in 0.06 seconds ================================
my test_sample.py is like:
import pytest

a, b = myclass('cmdopts').get_spec()


def gen_args(a, b):
    for i in a:
        for j in b:
            yield (i, j)


@pytest.mark.parametrize('a,b', gen_args(a, b))
def test_c1(a, b):
    assert a == b
My question is: how can I pass cmdopts to the test script itself, not to the test_c1 function?
pytest.mark.parametrize is not intended to be used this way.
With pytest's pytest_addoption hook you can inject your a and b parameters using metafunc.parametrize.
This technique also lets you pick the parameters up from environment variables (see the sketch after the CLI example below).
conftest.py
def pytest_addoption(parser):
    parser.addoption("--a", action="store")
    parser.addoption("--b", action="store")


def pytest_generate_tests(metafunc):
    for par_name, par_value in (
        ('a', metafunc.config.option.a),
        ('b', metafunc.config.option.b),
    ):
        if par_name in metafunc.fixturenames and par_value:
            metafunc.parametrize(par_name, [par_value])
test_sample.py
def test_c1(a, b):
    assert a == b
CLI
$ pytest test_sample.py --a 42 --b 42
collected 1 item
test_sample.py . [100%]
= 1 passed in 0.01s =
$ pytest test_sample.py --a 42 --b 24
collected 1 item
test_sample.py F [100%]
= FAILURES =
_ test_c1[42-24] _
a = '42', b = '24'
def test_c1(a, b):
> assert a == b
E AssertionError: assert '42' == '24'
E - 24
E + 42
test_sample.py:2: AssertionError
================================================================================== short test summary info ==================================================================================
FAILED test_sample.py::test_c1[42-24] - AssertionError: assert '42' == '24'
===================================================================================== 1 failed in 0.07s
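The same hooks can fall back to environment variables when no command-line option is given. A minimal sketch of that idea, assuming made-up variable names PYTEST_A and PYTEST_B, could look like this in conftest.py:

import os


def pytest_addoption(parser):
    parser.addoption("--a", action="store")
    parser.addoption("--b", action="store")


def pytest_generate_tests(metafunc):
    for par_name, cli_value, env_name in (
        ('a', metafunc.config.option.a, 'PYTEST_A'),
        ('b', metafunc.config.option.b, 'PYTEST_B'),
    ):
        # The command-line option wins; otherwise fall back to the environment variable.
        par_value = cli_value or os.environ.get(env_name)
        if par_name in metafunc.fixturenames and par_value:
            metafunc.parametrize(par_name, [par_value])

With that conftest.py, running PYTEST_A=42 PYTEST_B=42 pytest test_sample.py would parametrize the test without passing any options.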
Something similar here: How to pass arguments in pytest by command line
I have the following function in the file myfile.py:
# myfile.py
import psutil


class RunnableObject:
    def run(self):
        parent = psutil.Process()
        print(parent)
        children = parent.children(recursive=True)
        print(children)
Then I have a unit test where runnable_object is an instance of the RunnableObject class which I setup using a pytest fixture.
#patch("myfile.psutil")
def test_run_post_request(self, psutil_, runnable_object):
runnable_object.run()
assert psutil_.Process.call_count == 1
assert psutil_.Process.children.call_count == 1
When I run my test however I get the following error:
assert psutil_.Process.call_count == 1
> assert psutil_.Process.children.call_count == 1
E assert 0 == 1
E +0
E -1
tests/unit/test_experiment.py:1651: AssertionError
My stdout:
<MagicMock name='psutil.Process()' id='3001903696'>
<MagicMock name='psutil.Process().children()' id='3000968624'>
I also tried to use @patch.object(psutil.Process, "children") as well as @patch("myfile.psutil.Process") and @patch("myfile.psutil.Process.children"), but that gave me the same problem.
children is an attribute of the return value of psutil.Process(), not of the Process method itself.
So the correct assertion is psutil_.Process().children.call_count:
test_myfile.py:
from unittest import TestCase
import unittest
from unittest.mock import patch

from myfile import RunnableObject


class TestRunnableObject(TestCase):
    @patch("myfile.psutil")
    def test_run_post_request(self, psutil_):
        runnable_object = RunnableObject()
        runnable_object.run()
        assert psutil_.Process.call_count == 1
        assert psutil_.Process().children.call_count == 1


if __name__ == '__main__':
    unittest.main()
test result:
<MagicMock name='psutil.Process()' id='4394128192'>
<MagicMock name='psutil.Process().children()' id='4394180912'>
.
----------------------------------------------------------------------
Ran 1 test in 0.002s
OK
Name Stmts Miss Cover Missing
-------------------------------------------------------------------------
src/stackoverflow/67362647/myfile.py 7 0 100%
src/stackoverflow/67362647/test_myfile.py 13 0 100%
-------------------------------------------------------------------------
TOTAL 20 0 100%
I am trying to implement a new pytest marker, called @pytest.mark.must_pass, to indicate that if the marked test fails, pytest should skip all subsequent tests and terminate.
I have been able to use the pytest_runtest_call hook to get pytest to terminate if the marked test failed, but I am using pytest.exit, which does not print a traceback, nor does it show the failure indication for the test in question.
I need this failure to appear as any other test failure, except that pytest stops testing after it prints whatever it needs to print to detail the failure.
My code so far:
import sys

import pytest
# _update_current_test_var is a private helper defined in _pytest.runner
from _pytest.runner import _update_current_test_var


# Copied this implementation from _pytest.runner
def pytest_runtest_call(item):
    _update_current_test_var(item, "call")
    try:
        del sys.last_type
        del sys.last_value
        del sys.last_traceback
    except AttributeError:
        pass
    try:
        item.runtest()
    except Exception:
        # Store trace info to allow postmortem debugging
        type, value, tb = sys.exc_info()
        assert tb is not None
        tb = tb.tb_next  # Skip *this* frame
        sys.last_type = type
        sys.last_value = value
        sys.last_traceback = tb
        del type, value, tb  # Get rid of these in this frame
        # If the test is marked as must_pass, stop testing here
        if item.iter_markers(name="must_pass"):
            pytest.exit('Test marked as "must_pass" failed, terminating.')
        raise
Is there already a mechanism for doing this built into pytest?
Any help will be greatly appreciated.
Thanks.
So this can be achieved by using pytest_runtest_makereport and pytest_runtest_setup
In your conftest.py you would place the following:
import pytest
def pytest_runtest_makereport(item, call):
    if item.iter_markers(name='must_pass'):
        if call.excinfo is not None:
            parent = item.parent
            parent._mpfailed = item


def pytest_runtest_setup(item):
    must_pass_failed = getattr(item.parent, '_mpfailed', None)
    if must_pass_failed is not None:
        pytest.skip('must pass test failed (%s)' % must_pass_failed.name)
And now when we test it with the following:
import pytest
def foo(a, b):
    return a + b


def test_foo_1():
    assert foo(1, 1) == 2


@pytest.mark.must_pass
def test_foo_2():
    assert foo(2, 2) == 6


def test_foo_3():
    assert foo(3, 3) == 6


def test_foo_4():
    assert foo(4, 4) == 8
We see the desired output:
$ pytest test.py
=============================================================== test session starts ================================================================
platform darwin -- Python 3.6.5, pytest-4.6.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/foo/Desktop/testing, inifile: pytest.ini
plugins: cov-2.7.1
collected 4 items
test.py .Fss [100%]
===================================================================== FAILURES =====================================================================
____________________________________________________________________ test_foo_2 ____________________________________________________________________
@pytest.mark.must_pass
def test_foo_2():
> assert foo(2, 2) == 6
E assert 4 == 6
E + where 4 = foo(2, 2)
test.py:14: AssertionError
================================================== 1 failed, 1 passed, 2 skipped in 0.08 seconds ===================================================
Pasting the code in the accepted answer yields this:
'must_pass' not found in `markers` configuration option
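That message is pytest's marker-registration check complaining about an unregistered custom marker (typically when running with strict marker checking). A minimal sketch of one way to register it, added to the same conftest.py:

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "must_pass: abort the test session if this test fails"
    )

Alternatively, list the marker under the markers option in pytest.ini.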
For anyone coming here wanting to use – not implement – this, the same can be achieved with pytest-dependency:
pip install pytest-dependency
See this answer for usage.
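A minimal sketch of the pytest-dependency style (test names here are illustrative): mark the prerequisite test with @pytest.mark.dependency() and declare dependent tests with depends=[...]:

import pytest


@pytest.mark.dependency()
def test_must_pass():
    assert True  # the prerequisite test


@pytest.mark.dependency(depends=["test_must_pass"])
def test_follow_up():
    # automatically skipped when test_must_pass fails
    assert True

Note that, unlike the must_pass hook above, this only skips the tests that declare the dependency; it does not stop the whole session.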
For some reason I cannot get mock.patch to work in any scenario when using Pytest. It simply doesn't do the patching. Am I using it incorrectly or is something messed up with my configuration?
base.py
def foo():
    return 'foo'


def get_foo():
    return foo()
test_base.py
import pytest
import mock
from pytest_mock import mocker

from base import get_foo


@mock.patch('base.foo')
def test_get_foo(mock_foo):
    mock_foo.return_value = 'bar'
    assert get_foo() == 'bar'


def test_get_foo2(mocker):
    m = mocker.patch('base.foo', return_value='bar')
    assert get_foo() == 'bar'


def test_get_foo3():
    with mock.patch('base.foo', return_value='bar') as mock_foo:
        assert get_foo() == 'bar'
pytest results
============================================================= test session starts =============================================================
platform linux2 -- Python 2.7.13, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /projects/git/ABC/query, inifile:
plugins: mock-1.6.0
collected 13 items
test_base.py .....FFF.....
================================================================== FAILURES ===================================================================
________________________________________________________________ test_get_foo _________________________________________________________________
mock_foo = <MagicMock name='foo' id='140418877133648'>
@mock.patch('base.foo')
def test_get_foo(mock_foo):
mock_foo.return_value = 'bar'
> assert get_foo() == 'bar'
E AssertionError: assert 'foo' == 'bar'
E - foo
E + bar
test_base.py:67: AssertionError
________________________________________________________________ test_get_foo2 ________________________________________________________________
mocker = <pytest_mock.MockFixture object at 0x7fb5d14bc210>
def test_get_foo2(mocker):
m = mocker.patch('base.foo', return_value='bar')
> assert get_foo() == 'bar'
E AssertionError: assert 'foo' == 'bar'
E - foo
E + bar
test_base.py:71: AssertionError
________________________________________________________________ test_get_foo3 ________________________________________________________________
def test_get_foo3():
with mock.patch('base.foo', return_value='bar') as mock_foo:
> assert get_foo() == 'bar'
E AssertionError: assert 'foo' == 'bar'
E - foo
E + bar
test_base.py:75: AssertionError
The problem was due to the relationship between my import specification and the PATH variable. If I specified the entire path in the patch argument, like @mock.patch('<PROJECT_ROOT>.<SUBPACKAGE>.base.foo'), where PATH had the parent directory of <PROJECT_ROOT> as an entry, then it worked. I don't know why it wasn't throwing an import error if it wasn't finding base.foo. And if it wasn't finding it, I don't understand how the scope was different.
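For illustration, a sketch of the working shape of the test (the names mypkg and query are hypothetical stand-ins for <PROJECT_ROOT> and <SUBPACKAGE>):

import mock
from mypkg.query.base import get_foo  # hypothetical fully qualified import


# Patch the module under the same dotted path it is actually imported as.
@mock.patch('mypkg.query.base.foo')
def test_get_foo(mock_foo):
    mock_foo.return_value = 'bar'
    assert get_foo() == 'bar'

The key point is that the patch target must name the module object the test actually imports, otherwise a different copy of base gets patched.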
I'm trying to follow Chapter 3 of David Sale's Testing Python, but using nose2 instead of nosetests. So far I've written a calculate.py:
class Calculate(object):
    def add(self, x, y):
        if type(x) == int and type(y) == int:
            return x + y
        else:
            raise TypeError("Invalid type: {} and {}".format(type(x), type(y)))


if __name__ == '__main__':  # pragma: no cover
    calc = Calculate()
    result = calc.add(2, 2)
    print(result)
and, in a subdirectory test, a test_calculator.py:
import unittest

from calculate import Calculate


class TestCalculate(unittest.TestCase):
    def setUp(self):
        self.calc = Calculate()

    def test_add_method_returns_correct_result(self):
        self.assertEqual(4, self.calc.add(2, 2))

    def test_add_method_raises_typeerror_if_not_ints(self):
        self.assertRaises(TypeError, self.calc.add, "Hello", "World")


if __name__ == '__main__':
    unittest.main()
If I run nose2 --with-coverage in the main directory, I get
..
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK
----------- coverage: platform linux, python 3.5.2-final-0 -----------
Name Stmts Miss Cover
--------------------------------------------
calculate.py 5 0 100%
test/test_calculate.py 11 1 91%
--------------------------------------------
TOTAL 16 1 94%
I don't understand why coverage is calculated for the test module test/test_calculate.py as well as for the main program, calculate.py. Is there any way to disable this behavior?
You can use the --coverage parameter to measure coverage only for the given path. In your example, you need to run
nose2 --with-coverage --coverage calculate
This will give you the expected output:
..
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK
----------- coverage: platform linux, python 3.5.2-final-0 -----------
Name Stmts Miss Cover
----------------------------------
calculate.py 5 0 100%
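Since nose2's coverage plugin is built on coverage.py, another option may be to exclude the test directory via coverage.py's own configuration; this assumes the plugin picks up the standard .coveragerc, which is worth verifying for your setup:

# .coveragerc
[run]
omit =
    test/*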