IDEA shows branches as not covered that can't be reached (Python)

Given this function under test with coverage:
1 def func_to_test(param):
2
3     if param == 'foo':
4         return 'bar'
5
6     return param
And these two unit tests:
def test_given_param_is_foo_it_returns_bar(self):
    result = func_to_test('foo')
    self.assertEquals(result, 'bar')

def test_given_param_is_not_foo_it_returns_the_param(self):
    result = func_to_test('something else')
    self.assertEquals(result, 'something else')
The coverage view in IDEA shows that all lines of the function under test were hit, but on line 3 (the line with the if) it shows this:
Line was hit
Line 2 didn't jump to line 4,6
After looking at several of these cases, I have the impression that the coverage tool expects the if block to be executed and then execution to continue below the block. However, that is not possible if the if block contains a return statement that has to be hit.
Am I misinterpreting the message, or is there anything else I have to configure to have this detected correctly?
In my coverage.rc I have branch = on. Simply disabling branch coverage would mean that reachable but unexecuted branches are no longer reported as "not hit", so that is not an option.
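For reference, this is roughly what the relevant section of the config file (commonly named .coveragerc) looks like; coverage.py's parser also accepts boolean spellings like "on", so this sketch is equivalent to "branch = on":
[run]
branch = True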

I don't see the same results. When I run it, I get 100% for both statements and branches. Maybe something is different about your code?
Here is my test run:
$ cat tryit.py
def func_to_test(param):
    if param == 'foo':
        return 'bar'
    return param

import unittest

class TestIt(unittest.TestCase):
    def test_given_param_is_foo_it_returns_bar(self):
        result = func_to_test('foo')
        self.assertEquals(result, 'bar')

    def test_given_param_is_not_foo_it_returns_the_param(self):
        result = func_to_test('something else')
        self.assertEquals(result, 'something else')
$ coverage run --branch --source=. -m unittest tryit
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
$ coverage report -m
Name       Stmts   Miss Branch BrPart  Cover   Missing
------------------------------------------------------
tryit.py      12      0      2      0   100%
$

Algorithm for extracting first and last lines from sectionalized output file

I am trying to parse the FAILURES section from the terminal output of a Pytest session, line by line, identifying the testname and test filename for each test, which I then want to join together to form a "fully qualified test name" (FQTN), e.g. tests/test_1.py::test_3_fails. I also want to get and save the traceback info (which is what sits between the testname and the test filename).
The parsing part is straightforward and I already have working regexes that match the test name and the test filename, and I can extract the traceback info based on that. My issue with the FQTNs is algorithmic: I can't seem to figure out the overall logic to identify a testname and then the test's filename, which occurs on a later line. I need to accommodate not only the tests in the middle of the FAILURES section, but also the first test and the last test of the FAILURES section.
Here's an example. This is the output section for all failures during a test run, along with some of the terminal output that comes right before FAILURES, and right after.
.
.
.
============================================== ERRORS ===============================================
__________________________________ ERROR at setup of test_c_error ___________________________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
tests/test_2.py:19: AssertionError
============================================= FAILURES ==============================================
___________________________________________ test_3_fails ____________________________________________
log_testname = None
def test_3_fails(log_testname):
> assert 0
E assert 0
tests/test_1.py:98: AssertionError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
______________________________________ test_8_causes_a_warning ______________________________________
log_testname = None
def test_8_causes_a_warning(log_testname):
> assert api_v1() == 1
E TypeError: api_v1() missing 1 required positional argument: 'log_testname'
tests/test_1.py:127: TypeError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
___________________________ test_16_fail_compare_dicts_for_pytest_icdiff ____________________________
def test_16_fail_compare_dicts_for_pytest_icdiff():
listofStrings = ["Hello", "hi", "there", "at", "this"]
listofInts = [7, 10, 45, 23, 77]
assert len(listofStrings) == len(listofInts)
> assert listofStrings == listofInts
E AssertionError: assert ['Hello', 'hi... 'at', 'this'] == [7, 10, 45, 23, 77]
E At index 0 diff: 'Hello' != 7
E Full diff:
E - [7, 10, 45, 23, 77]
E + ['Hello', 'hi', 'there', 'at', 'this']
tests/test_1.py:210: AssertionError
____________________________________________ test_b_fail ____________________________________________
def test_b_fail():
> assert 0
E assert 0
tests/test_2.py:27: AssertionError
============================================== PASSES ===============================================
___________________________________________ test_4_passes ___________________________________________
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
.
.
.
Is anyone here good with algorithms? Maybe some pseudocode that shows an overall way of getting each testname and its associated test filename?
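For illustration, here is a rough sketch of the flush-on-next-title logic such a parser could use; TITLE_RE, LOCATION_RE and SECTION_RE are placeholder patterns (not the asker's actual regexes) and the code is only a sketch against output shaped like the excerpt above:
import re

# Placeholder patterns - swap in your own working regexes.
TITLE_RE = re.compile(r"^_+ (?P<name>\S+) _+$")            # e.g. "____ test_3_fails ____"
LOCATION_RE = re.compile(r"^(?P<file>\S+\.py):\d+: \w+")   # e.g. "tests/test_1.py:98: AssertionError"
SECTION_RE = re.compile(r"^=+ (?P<section>[A-Z]+) =+$")    # e.g. "===== FAILURES ====="

def extract_failures(lines):
    """Yield (fqtn, traceback_lines) for every test in the FAILURES section."""
    in_failures = False
    name, body = None, []
    for line in lines:
        section = SECTION_RE.match(line)
        if section:
            # a new section header flushes the last pending test and
            # tells us whether we are inside FAILURES
            if in_failures and name:
                yield name, body
            name, body = None, []
            in_failures = section.group("section") == "FAILURES"
            continue
        if not in_failures:
            continue
        title = TITLE_RE.match(line)
        if title:
            # a new test title flushes the previous test (handles the "middle" tests)
            if name:
                yield name, body
            name, body = title.group("name"), []
            continue
        body.append(line)
        location = LOCATION_RE.match(line)
        if location and name and "::" not in name:
            # the first "file.py:NN: Error" line after the title completes the FQTN
            name = "{}::{}".format(location.group("file"), name)
    if in_failures and name:
        # the last test is flushed when the input ends or the next section starts
        yield name, body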
Here is my proposal to get the rendered summary for a test case report. Use this stub as a rough idea - you might want to iterate through the reports and dump the rendered summaries first, then do the curses magic to display the collected data.
Some tests to play with:
import pytest

def test_1():
    assert False

def test_2():
    raise RuntimeError('call error')

@pytest.fixture
def f():
    raise RuntimeError('setup error')

def test_3(f):
    assert True

@pytest.fixture
def g():
    yield
    raise RuntimeError('teardown error')

def test_4(g):
    assert True
Dummy plugin example that renders the summary for test_3 case. Put the snippet in conftest.py:
def pytest_unconfigure(config):
    # example: get rendered output for test case `test_spam.py::test_3`
    # get the reporter
    reporter = config.pluginmanager.getplugin('terminalreporter')
    # create a buffer to dump reporter output to
    import io
    buf = io.StringIO()
    # fake tty or pytest will not colorize the output
    buf.isatty = lambda: True
    # replace writer in reporter to dump the output in buffer instead of stdout
    from _pytest.config import create_terminal_writer
    # I want to use the reporter again later to dump the rendered output,
    # so I store the original writer here (you probably don't need it)
    original_writer = reporter._tw
    writer = create_terminal_writer(config, file=buf)
    # replace the writer
    reporter._tw = writer
    # find the report for `test_spam.py::test_3` (we already know it will be an error report)
    errors = reporter.stats['error']
    test_3_report = next(
        report for report in errors if report.nodeid == 'test_spam.py::test_3'
    )
    # dump the summary along with the stack trace for the report of `test_spam.py::test_3`
    reporter._outrep_summary(test_3_report)
    # print dumped contents
    # you probably don't need this - this is just for demo purposes
    # restore the original writer to write to stdout again
    reporter._tw = original_writer
    reporter.section('My own section', sep='>')
    reporter.write(buf.getvalue())
    reporter.write_sep('<')
A pytest run now yields an additional section
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My own section >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.fixture
def f():
> raise RuntimeError('setup error')
E RuntimeError: setup error
test_spam.py:14: RuntimeError
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
with the stack trace rendered the same way pytest does in the ERRORS summary section. You can play with outcomes for different test cases if you want: replace the reporter.stats key if necessary ('error' or 'failed', or even 'passed', although the summary should be empty for passed tests) and amend the test case nodeid.
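Building on the same (private) reporter internals, here is a sketch that iterates over all error and failure reports and collects each FQTN (report.nodeid) together with its rendered summary, which is essentially what the question asks for; treat it as a rough idea, not a polished implementation:
# sketch: collect {nodeid: rendered summary} for all errors and failures;
# relies on the same private attributes as the snippet above
import io
from _pytest.config import create_terminal_writer

def collect_summaries(config):
    reporter = config.pluginmanager.getplugin('terminalreporter')
    original_writer = reporter._tw
    collected = {}
    for outcome in ('error', 'failed'):
        for report in reporter.stats.get(outcome, []):
            buf = io.StringIO()
            buf.isatty = lambda: True   # keep colorized output, as above
            reporter._tw = create_terminal_writer(config, file=buf)
            reporter._outrep_summary(report)
            # report.nodeid is the FQTN, e.g. tests/test_1.py::test_3_fails
            collected[report.nodeid] = buf.getvalue()
    reporter._tw = original_writer
    return collected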

How can I ensure tests with a marker are only run if explicitly asked in pytest?

I have some tests I marked with an appropriate marker. If I run pytest, they run by default, but I would like to skip them by default. The only option I know of is to explicitly say "not marker" at pytest invocation, but I would like them not to run by default unless the marker is explicitly requested on the command line.
A slight modification of the example in Control skipping of tests according to command line option:
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    keywordexpr = config.option.keyword
    markexpr = config.option.markexpr
    if keywordexpr or markexpr:
        return  # let pytest handle this

    skip_mymarker = pytest.mark.skip(reason='mymarker not selected')
    for item in items:
        if 'mymarker' in item.keywords:
            item.add_marker(skip_mymarker)
Example tests:
import pytest

def test_not_marked():
    pass

@pytest.mark.mymarker
def test_marked():
    pass
Running the tests with the marker:
$ pytest -v -k mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Or:
$ pytest -v -m mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Without the marker:
$ pytest -v
...
collected 2 items
test_spam.py::test_not_marked PASSED
test_spam.py::test_marked SKIPPED
...
Instead of explicitly saying "not marker" at pytest invocation, you can add the following to pytest.ini:
[pytest]
addopts = -m "not marker"
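If you take the addopts route, it may also be worth registering the marker so pytest does not warn about an unknown mark; a -m given on the command line comes after addopts and should therefore override it. A sketch using the mymarker name from the example above:
[pytest]
addopts = -m "not mymarker"
markers =
    mymarker: deselected by default, run explicitly with -m mymarker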

Python SonarQube integration with nosetests and coverage does not show covered code

I have created a sample project (PyCharm + Mac) to integrate SonarQube into Python using nosetests and coverage:
src/Sample.py
import sys

def fact(n):
    """
    Factorial function
    :arg n: Number
    :returns: factorial of n
    """
    if n == 0:
        return 1
    return n * fact(n - 1)

def main(n):
    res = fact(n)
    print(res)

if __name__ == '__main__' and len(sys.argv) > 1:
    main(int(sys.argv[1]))
test/SampleTest.py
import unittest
from src.Sample import fact

class TestFactorial(unittest.TestCase):
    """
    Our basic test class
    """

    def test_fact1(self):
        """
        The actual test.
        Any method which starts with ``test_`` will be considered a test case.
        """
        res = fact(0)
        self.assertEqual(res, 1)

    def test_fac2(self):
        """
        The actual test.
        Any method which starts with ``test_`` will be considered a test case.
        """
        res = fact(5)
        self.assertEqual(res, 120)

if __name__ == '__main__':
    unittest.main()
sonar-project.properties
sonar.projectKey=SonarQubeSample
sonar.projectName=Sonar Qube Sample
sonar.projectVersion=1.0
sonar.sources=src
sonar.tests=test
sonar.language=py
sonar.sourceEncoding=UTF-8
sonar.python.xunit.reportPath=nosetests.xml
sonar.python.coverage.reportPath=coverage.xml
sonar.python.coveragePlugin=cobertura
The command below creates the nosetests.xml file successfully:
nosetests --with-xunit ./test/SampleTest.py
When I run the command below:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
it gives the following result:
Name              Stmts   Miss  Cover
-------------------------------------
src/Sample.py        10      6    40%
src/__init__.py       0      0   100%
-------------------------------------
TOTAL                10      6    40%
----------------------------------------------------------------------
Ran 0 tests in 0.011s
OK
Why is the fact function's code not shown as covered in my SonarQube project after running the sonar-scanner command?
You should always try to make one test fail to be sure that your command tests something. The following command does not execute any tests:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
One solution is to add test/*Test.py at the end.
To generate nosetests.xml and coverage.xml with only one command, you can execute:
nosetests --with-xunit --with-coverage --cover-package=src --cover-inclusive --cover-xml test/*Test.py
Note: You need to create a test/__init__.py file (even empty), so the file path in nosetests.xml can be resolved.
Note: You need at least SonarPython version 1.9 to parse coverage.xml
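Putting the pieces together, the local workflow might look like this (assuming sonar-scanner is on your PATH and the properties file above is in the project root):
$ touch test/__init__.py
$ nosetests --with-xunit --with-coverage --cover-package=src --cover-inclusive --cover-xml test/*Test.py
$ sonar-scanner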

Exclude a list of tests with py.test, but from the command line?

I would like to exclude a list (about 5 items) of tests with py.test.
I would like to give this list to py.test via the command line.
I would like to avoid modifying the source.
How can I do that?
You could use a test selection expression via the -k option. If you have the following tests:
def test_spam():
    pass

def test_ham():
    pass

def test_eggs():
    pass
invoke pytest with:
pytest -v -k 'not spam and not ham' tests.py
you will get:
collected 3 items
pytest_skip_tests.py::test_eggs PASSED [100%]
=================== 2 tests deselected ===================
========= 1 passed, 2 deselected in 0.01 seconds =========
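If the handful of names to exclude lives in a shell variable or file, the -k expression can also be generated instead of typed by hand; a sketch with made-up names:
$ SKIP="spam ham"   # substrings of the test names to exclude
$ EXPR=$(python -c "import sys; print(' and '.join('not ' + n for n in sys.argv[1:]))" $SKIP)
$ pytest -v -k "$EXPR" tests.py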
You could get this to work by creating a conftest.py file:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    # the option takes the test names as values, so it needs nargs
    # rather than a boolean flag
    parser.addoption("--skiplist", nargs="+", default=[],
                     help="skip listed tests")

def pytest_collection_modifyitems(config, items):
    tests_to_skip = config.getoption("--skiplist")
    if not tests_to_skip:
        # --skiplist not given in cli, therefore move on
        return

    skip_listed = pytest.mark.skip(reason="included in --skiplist")
    for item in items:
        if item.name in tests_to_skip:
            item.add_marker(skip_listed)
You would use it with:
$ pytest --skiplist test1 test2
Note that if you always skip the same tests, the list can be defined in conftest.py (see the sketch below).
See also this useful link
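As mentioned, if the skip list is stable you could bake a default into conftest.py so the option is only needed to override it; a sketch with hypothetical test names:
# conftest.py (sketch): same plugin as above, with a built-in default list
DEFAULT_SKIPLIST = ["test_spam", "test_ham"]   # hypothetical test names

def pytest_addoption(parser):
    parser.addoption("--skiplist", nargs="*", default=DEFAULT_SKIPLIST,
                     help="names of tests to skip (defaults to the built-in list)")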

pytest recording results in addition to the pass/fail

I've just started using pytest. Is there any way to record results in addition to the pass/fail status?
For example, suppose I have a test function like this:
import pytest

@pytest.fixture(scope="session")
def server():
    # something goes here to set up the server
    ...

def test_foo(server):
    server.send_request()
    response = server.get_response()
    assert len(response) == 42
The test passes if the length of the response is 42. But I'd also like to record the response value as well ("...this call will be recorded for quality assurance purposes...."), even though I don't strictly require an exact value for the pass/fail criteria.
Use a print statement to show the result, then run py.test -s.
-s tells py.test not to capture stdout and stderr.
Adapting your example:
# test_service.py
# ---------------
def test_request():
    # response = server.get_response()
    response = "{'some':'json'}"
    assert len(response) == 15
    print response,  # comma prevents default newline
Running py.test -s produces
$ py.test -s test_service.py
=========================== test session starts ===========================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 1 items
test_service.py {'some':'json'}.
======================== 1 passed in 0.04 seconds =========================
$
Or use python logging instead
# test_logging.py
# ---------------
import logging
logging.basicConfig(
    filename="logresults.txt",
    format="%(filename)s:%(lineno)d:%(funcName)s %(message)s")

def test_request():
    response = "{'some':'json'}"
    # print response,  # comma prevents default newline
    logging.warn("{'some':'json'}")  # sorry, newline unavoidable
    logging.warn("{'some':'other json'}")
Running py.test produces the machine-readable file logresults.txt:
test_logging.py:11:test_request {'some':'json'}
test_logging.py:12:test_request {'some':'other json'}
Pro tip
Run vim logresults.txt +cbuffer to load the logresults.txt as your quickfix list.
See my example of passing test data to ELK:
http://fruch.github.io/blog/2014/10/30/ELK-is-fun/
Later I made it a bit like this:
def pytest_configure(config):
    # parameter to add analysis from test teardowns, etc.
    config.analysis = []

def pytest_unconfigure(config):
    # send config.analysis to where you want, i.e. file / DB / ELK
    send_to_elk(config.analysis)

def test_example():
    pytest.config.analysis += ["My Data I want to keep"]
This is per-run/session data, not per-test (but I'm working on figuring out how to do it per test).
I'll try updating once I have a working example...
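Note that newer pytest versions removed the pytest.config global, so a per-test variant of the same idea could go through request.config instead; a sketch, with send_to_elk still assumed to exist:
# conftest.py (sketch)
import pytest

def pytest_configure(config):
    config.analysis = []

def pytest_unconfigure(config):
    send_to_elk(config.analysis)   # assumed helper, as in the post above

@pytest.fixture
def analysis(request):
    # request.config is the same Config object, so no global is needed
    return request.config.analysis

# test_example.py
def test_example(analysis):
    analysis.append("My Data I want to keep")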
