I'm interested in using unittest's subTest for looping through some very similar tests. I found that, when I run tests written in this way under pytest (or nosetests), the output does not contain information about the individual failures. Taking the example from the docs:
import unittest

class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)

if __name__ == '__main__':
    unittest.main()
If I run python test_even.py, it clearly shows three failures, as expected:
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_even.py", line 10, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_even.py", line 10, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_even.py", line 10, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (failures=3)
However, if I run pytest -v test_even.py, it only tells me there was a failure in this test. I can't see which elements failed:
test_even.py::NumbersTest::test_even FAILED [100%]
======================================================= FAILURES =======================================================
________________________________________________ NumbersTest.test_even _________________________________________________
self = <test_even.NumbersTest testMethod=test_even>
def test_even(self):
"""
Test that numbers between 0 and 5 are all even.
"""
for i in range(0, 6):
with self.subTest(i=i):
> self.assertEqual(i % 2, 0)
E AssertionError: 1 != 0
test_even.py:10: AssertionError
=============================================== 1 failed in 0.15 seconds ===============================================
Is there a way to show the individual failures? Ideally, I'd also like some sort of output for the ones that passed, just to reassure myself that the test discovery is working properly!
It seems that pytest does not yet support subTest. One solution might be to ditch unittest altogether and write native pytest tests:
import pytest

@pytest.mark.parametrize("test_input", range(0, 6))
def test_even(test_input):
    assert test_input % 2 == 0

if __name__ == '__main__':
    pytest.main([__file__])
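If you want each case labelled in the report, parametrize also accepts an ids argument; a minimal sketch of the same test with explicit ids (the f-string label is just an illustration):

import pytest

# Each value becomes its own test item; ids makes the report show "i=0", "i=1", ...
@pytest.mark.parametrize("test_input", range(0, 6), ids=lambda v: f"i={v}")
def test_even(test_input):
    assert test_input % 2 == 0

Running pytest -v then reports a separate PASSED/FAILED line per value, which also gives the reassurance about discovery mentioned in the question.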
Once pytest-subtests is added to the environment, this just works with the script from the original question:
$ pytest test_even.py
============================= test session starts ==============================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /net/home/h04/hadru/python
plugins: subtests-0.7.0
collected 1 item
test_even.py . [100%]
=================================== FAILURES ===================================
_________________________ NumbersTest.test_even (i=1) __________________________
self = <test_even.NumbersTest testMethod=test_even>
def test_even(self):
"""
Test that numbers between 0 and 5 are all even.
"""
for i in range(0, 6):
with self.subTest(i=i):
> self.assertEqual(i % 2, 0)
E AssertionError: 1 != 0
test_even.py:10: AssertionError
_________________________ NumbersTest.test_even (i=3) __________________________
self = <test_even.NumbersTest testMethod=test_even>
def test_even(self):
"""
Test that numbers between 0 and 5 are all even.
"""
for i in range(0, 6):
with self.subTest(i=i):
> self.assertEqual(i % 2, 0)
E AssertionError: 1 != 0
test_even.py:10: AssertionError
_________________________ NumbersTest.test_even (i=5) __________________________
self = <test_even.NumbersTest testMethod=test_even>
def test_even(self):
"""
Test that numbers between 0 and 5 are all even.
"""
for i in range(0, 6):
with self.subTest(i=i):
> self.assertEqual(i % 2, 0)
E AssertionError: 1 != 0
test_even.py:10: AssertionError
=========================== short test summary info ============================
SUBFAIL test_even.py::NumbersTest::test_even - AssertionError: 1 != 0
SUBFAIL test_even.py::NumbersTest::test_even - AssertionError: 1 != 0
SUBFAIL test_even.py::NumbersTest::test_even - AssertionError: 1 != 0
========================= 3 failed, 1 passed in 0.10s ==========================
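For completeness, pytest-subtests also provides a subtests fixture for plain (non-unittest) test functions, so the same loop can be written without a TestCase at all; a minimal sketch assuming the plugin is installed:

# Requires the pytest-subtests plugin, which injects the `subtests` fixture.
def test_even(subtests):
    for i in range(0, 6):
        with subtests.test(msg="even check", i=i):
            assert i % 2 == 0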
Related
If the second parameter of assert is used, the failure normally prints detailed debug information.
E.g. with the following code:
class TestTest:
    def test_test(self):
        assert 1 == 2, "TEST"
The following debug information is printed:
tests\test_test.py:1 (TestTest.test_test)
1 != 2
Expected :2
Actual :1
<Click to see difference>
self = <tests.test_test.TestTest object at 0x000002147E5B1B08>
def test_test(self):
> assert 1 == 2, "TEST"
E AssertionError: TEST
E assert 1 == 2
E +1
E -2
test_test.py:3: AssertionError
However, if the assert happens in a helper function, this is not printed:
class TestTest:
    def test_test(self):
        self.verify(1)

    def verify(self, parameter):
        assert 1 == 2, "TEST"
results in:
tests\test_test.py:1 (TestTest.test_test)
1 != 2
Expected :2
Actual :1
<Click to see difference>
self = <tests.test_test.TestTest object at 0x000002AFDF1D5AC8>
def test_test(self):
> self.verify(1)
test_test.py:3:
So the stack trace is incomplete and the debug information is not shown.
UPDATE:
This only happens when starting the test through PyCharm (2021.3).
Is there any way to fix this?
I am new to Python and trying to run a unit test. My function works well when it just prints its output, but while trying to develop a test module I keep getting errors.
My function in the Python file (work.py):
def lee(n):
    for i in range(1, n+1):
        print(i)
My unit test module:
import unittest
import work

class TestWork(unittest.TestCase):
    def test_lee(self):
        result = work.lee(3)
        self.assertEqual(result, [1, 2, 3])

if __name__ == '__main__':
    unittest.main()
The error generated:
======================================================================
FAIL: test_lee (__main__.TestWork)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\test_work.py", line
12, in test_lee
self.assertEqual(result, [1,2,3])
AssertionError: None != [1, 2, 3]
Here, lee prints the numbers but returns None (which is why the test sees None != [1, 2, 3]); it needs to build and return a list:
def lee(n):
    result = []
    for i in range(1, n+1):
        result.append(i)
    return result
As for printing, do print(work.lee(3))
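As a side note, since the values are just consecutive integers, the loop can also be replaced with a single range call (an equivalent sketch):

def lee(n):
    # Build the list [1, 2, ..., n] directly from the range object
    return list(range(1, n + 1))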
I have the following function in the file myfile.py:
# myfile.py
import psutil

class RunnableObject:
    def run(self):
        parent = psutil.Process()
        print(parent)
        children = parent.children(recursive=True)
        print(children)
Then I have a unit test where runnable_object is an instance of the RunnableObject class, which I set up using a pytest fixture.
#patch("myfile.psutil")
def test_run_post_request(self, psutil_, runnable_object):
runnable_object.run()
assert psutil_.Process.call_count == 1
assert psutil_.Process.children.call_count == 1
When I run my test however I get the following error:
assert psutil_.Process.call_count == 1
> assert psutil_.Process.children.call_count == 1
E assert 0 == 1
E +0
E -1
tests/unit/test_experiment.py:1651: AssertionError
My stdout:
<MagicMock name='psutil.Process()' id='3001903696'>
<MagicMock name='psutil.Process().children()' id='3000968624'>
I also tried to use @patch.object(psutil.Process, "children") as well as @patch("myfile.psutil.Process") and @patch("myfile.psutil.Process.children"), but that gave me the same problem.
children is an attribute of the return value of psutil.Process(), NOT of the Process method itself.
So the correct assertion is:
test_myfile.py:
from unittest import TestCase
import unittest
from unittest.mock import patch

from myfile import RunnableObject


class TestRunnableObject(TestCase):
    @patch("myfile.psutil")
    def test_run_post_request(self, psutil_):
        runnable_object = RunnableObject()
        runnable_object.run()

        assert psutil_.Process.call_count == 1
        assert psutil_.Process().children.call_count == 1


if __name__ == '__main__':
    unittest.main()
test result:
<MagicMock name='psutil.Process()' id='4394128192'>
<MagicMock name='psutil.Process().children()' id='4394180912'>
.
----------------------------------------------------------------------
Ran 1 test in 0.002s
OK
Name Stmts Miss Cover Missing
-------------------------------------------------------------------------
src/stackoverflow/67362647/myfile.py 7 0 100%
src/stackoverflow/67362647/test_myfile.py 13 0 100%
-------------------------------------------------------------------------
TOTAL 20 0 100%
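Equivalently, since calling psutil_.Process() yields psutil_.Process.return_value, the second assertion can also be written through return_value, which some find makes the object relationship more explicit; a small sketch under the same @patch("myfile.psutil") setup:

from unittest import TestCase
from unittest.mock import patch

from myfile import RunnableObject


class TestRunnableObjectReturnValue(TestCase):
    @patch("myfile.psutil")
    def test_run_post_request(self, psutil_):
        RunnableObject().run()

        # Process() and Process.return_value are the same child MagicMock,
        # so this is equivalent to asserting on psutil_.Process().children.
        psutil_.Process.assert_called_once()
        assert psutil_.Process.return_value.children.call_count == 1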
How can I automatically generate the numbers in test case names in unittest? I mean something like test_01, test_02, test_{generated number}.
import unittest

class TestSum(unittest.TestCase):
    def test_01_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6, "Should be 6")

    def test_02_sum_tuple(self):
        self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")

if __name__ == '__main__':
    unittest.main()
To achieve this at runtime, you can actually rename the test methods for your test class:
def generate_test_numbers(test_class):
    counter = 1
    for method_name in dir(test_class):
        if not method_name.startswith('test_N_'):
            continue
        method = getattr(test_class, method_name)
        if not callable(method):
            continue
        new_method_name = method_name.replace('_N_', '_{:02d}_'.format(counter))
        counter += 1
        setattr(test_class, new_method_name, method)
        delattr(test_class, method_name)
    return test_class
You can either simply call this function from main:
generate_test_numbers(TestSum)
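For example, the renaming has to happen before unittest.main() collects the tests, so the call goes at the top of the main block (a minimal sketch, assuming TestSum is defined in the same module):

if __name__ == '__main__':
    generate_test_numbers(TestSum)  # rename test_N_* methods before discovery
    unittest.main()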
or, as @VPfB suggested, use it as a class decorator:
@generate_test_numbers
class TestSum(unittest.TestCase):
    def test_N_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6, "Should be 6")

    def test_N_sum_tuple(self):
        self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
Either will output (with -v)
test_01_sum (__main__.TestSum) ... ok
test_02_sum_tuple (__main__.TestSum) ... FAIL
======================================================================
FAIL: test_02_sum_tuple (__main__.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
File "q.py", line 8, in test_N_sum_tuple
self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
AssertionError: 5 != 6 : Should be 6
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
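One caveat worth noting: dir() returns names in alphabetical order, so the numbers are assigned in the alphabetical order of the original test_N_ names, not necessarily the order in which they appear in the file. If definition order matters, a hypothetical variant could sort the methods by their first source line before numbering them:

def generate_test_numbers_in_definition_order(test_class):
    # Hypothetical variant: number tests by source-definition order instead of
    # the alphabetical order produced by dir().
    methods = [
        (name, getattr(test_class, name))
        for name in dir(test_class)
        if name.startswith('test_N_') and callable(getattr(test_class, name))
    ]
    methods.sort(key=lambda item: item[1].__code__.co_firstlineno)
    for counter, (name, method) in enumerate(methods, start=1):
        setattr(test_class, name.replace('_N_', '_{:02d}_'.format(counter)), method)
        delattr(test_class, name)
    return test_class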
How can I show the path to the test function next to the function name when a test fails? What I would like to see:
======================FAILURES==========================
_____________path/to/module::function_name______________
The headline is controlled by the head_line property of the TestReport class, although beware that it's marked experimental, so it may be renamed or replaced in future versions. Create a file named conftest.py in your project root dir with the contents:
import pytest
from _pytest.reports import TestReport


class CustomReport(TestReport):
    @TestReport.head_line.getter
    def head_line(self):
        return f'my headline: {self.nodeid}'


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
    return CustomReport.from_item_and_call(item, call)
Example output:
$ pytest -v
======================================= test session starts =======================================
...
test_spam.py::test_spam PASSED [ 20%]
test_spam.py::test_eggs FAILED [ 40%]
test_spam.py::test_bacon[1] FAILED [ 60%]
test_spam.py::test_bacon[2] FAILED [ 80%]
test_spam.py::TestFizz::test_buzz FAILED [100%]
============================================ FAILURES =============================================
______________________________ my headline: test_spam.py::test_eggs _______________________________
def test_eggs():
> assert False
E assert False
test_spam.py:8: AssertionError
____________________________ my headline: test_spam.py::test_bacon[1] _____________________________
n = 1
@pytest.mark.parametrize('n', range(1,3))
def test_bacon(n):
> assert False
E assert False
test_spam.py:13: AssertionError
____________________________ my headline: test_spam.py::test_bacon[2] _____________________________
n = 2
@pytest.mark.parametrize('n', range(1,3))
def test_bacon(n):
> assert False
E assert False
test_spam.py:13: AssertionError
_________________________ my headline: test_spam.py::TestFizz::test_buzz __________________________
self = <test_spam.TestFizz object at 0x7f5e44ba2438>
def test_buzz(self):
> assert False
E assert False
test_spam.py:18: AssertionError
=============================== 4 failed, 1 passed in 0.06 seconds ================================
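If you want exactly the path::function_name headline asked for in the question, with no custom prefix, the only change needed is the getter's return value, since nodeid already has that form; a sketch of the same conftest.py mechanism:

import pytest
from _pytest.reports import TestReport


class PlainNodeidReport(TestReport):
    @TestReport.head_line.getter
    def head_line(self):
        # nodeid already looks like "path/to/test_module.py::function_name"
        return self.nodeid


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
    return PlainNodeidReport.from_item_and_call(item, call)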