I have created a sample project (PyCharm on Mac) to integrate SonarQube into a Python project using nosetests and coverage:
src/Sample.py
import sys

def fact(n):
    """
    Factorial function
    :arg n: Number
    :returns: factorial of n
    """
    if n == 0:
        return 1
    return n * fact(n - 1)

def main(n):
    res = fact(n)
    print(res)

if __name__ == '__main__' and len(sys.argv) > 1:
    main(int(sys.argv[1]))
test/SampleTest.py
import unittest
from src.Sample import fact

class TestFactorial(unittest.TestCase):
    """
    Our basic test class
    """
    def test_fact1(self):
        """
        The actual test.
        Any method whose name starts with ``test_`` will be considered a test case.
        """
        res = fact(0)
        self.assertEqual(res, 1)

    def test_fac2(self):
        """
        The actual test.
        Any method whose name starts with ``test_`` will be considered a test case.
        """
        res = fact(5)
        self.assertEqual(res, 120)

if __name__ == '__main__':
    unittest.main()
sonar-project.properties
sonar.projectKey=SonarQubeSample
sonar.projectName=Sonar Qube Sample
sonar.projectVersion=1.0
sonar.sources=src
sonar.tests=test
sonar.language=py
sonar.sourceEncoding=UTF-8
sonar.python.xunit.reportPath=nosetests.xml
sonar.python.coverage.reportPath=coverage.xml
sonar.python.coveragePlugin=cobertura
The command below creates the nosetests.xml file successfully:
nosetests --with-xunit ./test/SampleTest.py
When I run the command below:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
It gives the following result:
Name Stmts Miss Cover
-------------------------------------
src/Sample.py 10 6 40%
src/__init__.py 0 0 100%
-------------------------------------
TOTAL 10 6 40%
----------------------------------------------------------------------
Ran 0 tests in 0.011s
OK
Why is the fact function's code not shown as covered in my SonarQube project after running the sonar-scanner command?
You should always try to make one test fail to be sure that your command tests something. The following command does not execute any tests:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
One solution is to add test/*Test.py at the end.
To generate nosetests.xml and coverage.xml with only one command, you can execute:
nosetests --with-xunit --with-coverage --cover-package=src --cover-inclusive --cover-xml test/*Test.py
Note: You need to create a test/__init__.py file (even empty), so the file path in nosetests.xml can be resolved.
Note: You need at least SonarPython version 1.9 to parse coverage.xml
I have a file TestProtocol.py that has unittests. I can run that script and get test results for my 30 tests as expected. Now I want to run those tests from another file tester.py that is located in the same directory. Inside tester.py I tried import TestProtocol, but it runs 0 tests.
Then I found the documentation which says I should do something like this:
suite = unittest.TestLoader().discover(".", pattern = "*")
unittest.run(suite)
This should go through all files in the current directory . that match the pattern *, so all tests in all files. Unfortunately it again runs 0 tests.
There is a related QA that suggests to do
import TestProtocol
suite = unittest.findTestCases(TestProtocol)
unittest.run(suite)
but that also does not find any tests.
How do I import and run my tests?
You can try the following:
# preferred module name would be test_protocol, as the CamelCase convention is reserved for class names
import unittest
import TestProtocol

# try to load all test cases from the given module, assuming they extend unittest.TestCase
suite = unittest.TestLoader().loadTestsFromModule(TestProtocol)

# run all tests with verbosity
unittest.TextTestRunner(verbosity=2).run(suite)
Here is a full example
file 1: test_me.py
# file 1: test_me.py
import unittest

class TestMe(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

if __name__ == '__main__':
    unittest.main()
file 2: test_other.py, put this under same directory
# file 2: test_other.py, put this under same directory
import unittest
import test_me
suite = unittest.TestLoader().loadTestsFromModule(test_me)
unittest.TextTestRunner(verbosity=2).run(suite)
Run each file; both will show the same result:
# python test_me.py - Ran 1 test in 0.000s
# python test_other.py - Ran 1 test in 0.000s
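The discover() approach from the question can also be made to work. The sketch below (standard library only; test_discover_demo.py is a hypothetical module written to a temporary directory just for this demonstration) uses an explicit test_*.py pattern and a TextTestRunner object, since there is no unittest.run() function:

```python
import tempfile
import textwrap
import unittest
from pathlib import Path

# Write one throwaway test module into a temp directory.
with tempfile.TemporaryDirectory() as tmpdir:
    Path(tmpdir, "test_discover_demo.py").write_text(textwrap.dedent("""\
        import unittest

        class TestDemo(unittest.TestCase):
            def test_truth(self):
                self.assertTrue(True)
    """))

    # discover() takes a file pattern; the default is 'test*.py'. A bare '*'
    # makes the loader try to import every file in the directory, which is
    # one way to end up with 0 tests (or import errors).
    suite = unittest.TestLoader().discover(tmpdir, pattern="test_*.py")

    # A runner object's run() method is what actually executes the suite.
    result = unittest.TextTestRunner(verbosity=0).run(suite)

print(result.testsRun)         # 1
print(result.wasSuccessful())  # True
```

In your own project, point discover() at the directory containing TestProtocol.py and use a pattern that matches it, e.g. pattern="*Protocol.py".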
Given this function under test with coverage:
1 def func_to_test(param):
2
3     if param == 'foo':
4         return 'bar'
5
6     return param
And these two unit tests:
def test_given_param_is_foo_it_returns_bar(self):
    result = func_to_test('foo')
    self.assertEquals(result, 'bar')

def test_given_param_is_not_foo_it_returns_the_param(self):
    result = func_to_test('something else')
    self.assertEquals(result, 'something else')
The coverage view in IDEA shows that all lines of the function under test were hit, but for line 3 (the line with the if) it shows this:
Line was hit
Line 2 didn't jump to line 4,6
After looking at several of these cases, I have the impression that the coverage tool expects the if block to be executed and then execution to continue below the block. However, this is not possible if the if block contains a return statement that has to be hit.
Do I misinterpret the message or is there anything else that I have to configure to have that detected correctly?
In my coverage.rc I have branch = on. But just disabling it would lead to reachable branches not being detected as "not hit".
I don't see the same results. When I run it, I get 100% for both statements and branches. Maybe something is different about your code?
Here is my test run:
$ cat tryit.py
def func_to_test(param):
    if param == 'foo':
        return 'bar'
    return param

import unittest

class TestIt(unittest.TestCase):
    def test_given_param_is_foo_it_returns_bar(self):
        result = func_to_test('foo')
        self.assertEquals(result, 'bar')

    def test_given_param_is_not_foo_it_returns_the_param(self):
        result = func_to_test('something else')
        self.assertEquals(result, 'something else')
$ coverage run --branch --source=. -m unittest tryit
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
$ coverage report -m
Name Stmts Miss Branch BrPart Cover Missing
------------------------------------------------------
tryit.py 12 0 2 0 100%
$
I am writing integration tests for a project in which I am making HTTP calls and testing whether they were successful or not.
Since I am not importing any module and not calling functions directly, the coverage.py report for this is 0%.
I want to know how I can generate a coverage report for such integration HTTP request tests.
The recipe is pretty much this:
1. Ensure the backend starts in code coverage mode
2. Run the tests
3. Ensure the backend coverage is written to file
4. Read the coverage from file and append it to the test run coverage
Example:
backend
Imagine you have a dummy backend server that responds with a "Hello World" page on GET requests:
# backend.py
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write('<html><body><h1>Hello World</h1></body></html>'.encode())

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), DummyHandler).serve_forever()
test
A simple test that makes an HTTP request and verifies the response contains "Hello World":
# tests/test_server.py
import requests

def test_GET():
    resp = requests.get('http://127.0.0.1:8000')
    resp.raise_for_status()
    assert 'Hello World' in resp.text
Recipe
# tests/conftest.py
import os
import signal
import subprocess
import time

import coverage.data
import pytest

@pytest.fixture(autouse=True)
def run_backend(cov):
    # 1.
    env = os.environ.copy()
    env['COVERAGE_FILE'] = '.coverage.backend'
    serverproc = subprocess.Popen(['coverage', 'run', 'backend.py'], env=env,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  preexec_fn=os.setsid)
    time.sleep(3)
    yield  # 2.
    # 3.
    serverproc.send_signal(signal.SIGINT)
    time.sleep(1)
    # 4.
    backendcov = coverage.data.CoverageData()
    with open('.coverage.backend') as fp:
        backendcov.read_fileobj(fp)
    cov.data.update(backendcov)
cov is the fixture provided by pytest-cov (docs).
Running the test adds the coverage of backend.py to the overall coverage, although only tests is selected with --cov=tests:
$ pytest --cov=tests --cov-report term -vs
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 --
/data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/home/u0_a82/projects/stackoverflow/so-50689940, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 1 item
tests/test_server.py::test_GET PASSED
----------- coverage: platform linux, python 3.6.5-final-0 -----------
Name Stmts Miss Cover
------------------------------------------
backend.py 12 0 100%
tests/conftest.py 18 0 100%
tests/test_server.py 5 0 100%
------------------------------------------
TOTAL 35 0 100%
============================ 1 passed in 5.09 seconds =============================
With Coverage 5.1, based on the "Measuring sub-processes" section of the coverage.py docs, you can set the COVERAGE_PROCESS_START env var, call coverage.process_startup() somewhere in your code, and set parallel=True in your .coveragerc.
Somewhere in your process, call this code:
import coverage
coverage.process_startup()
This can be done in sitecustomize.py globally, but in my case it was easy to add this to my application's __init__.py, where I added:
import os

if 'COVERAGE_PROCESS_START' in os.environ:
    import coverage
    coverage.process_startup()
Just to be safe, I added an additional check to this if statement (checking if MYAPP_COVERAGE_SUBPROCESS is also set)
In your test case, set COVERAGE_PROCESS_START to the path to your .coveragerc file (or an empty string if you don't need this config), for example:
import os
import subprocess
import sys

env = os.environ.copy()
env['COVERAGE_PROCESS_START'] = '.coveragerc'
cmd = [sys.executable, 'run_my_app.py']
p = subprocess.Popen(cmd, env=env)
p.communicate()
assert p.returncode == 0  # ..etc
Finally, you create .coveragerc containing:
[run]
parallel = True
source = myapp # Which module to collect coverage for
This ensures the .coverage files created by each process go to a unique file, which pytest-cov appears to merge automatically (or can be done manually with coverage combine). It also describes which modules to collect data for (the --cov=myapp arg doesn't get passed to child processes)
To run your tests, just invoke pytest --cov=
I would like to exclude a list (about 5 items) of tests with py.test.
I would like to give this list to py.test via the command line.
I would like to avoid to modify the source.
How to do that?
You could use a test selection expression with the -k option. If you have the following tests:
def test_spam():
    pass

def test_ham():
    pass

def test_eggs():
    pass
invoke pytest with:
pytest -v -k 'not spam and not ham' tests.py
you will get:
collected 3 items
pytest_skip_tests.py::test_eggs PASSED [100%]
=================== 2 tests deselected ===================
========= 1 passed, 2 deselected in 0.01 seconds =========
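With the roughly five tests the question mentions, writing the -k expression by hand gets tedious. A small sketch (plain Python; the test names are taken from the example above) that builds the expression from a list:

```python
# Build a pytest -k expression that deselects every test in a list of names.
skip = ["test_spam", "test_ham"]
expr = " and ".join("not " + name for name in skip)
print(expr)  # not test_spam and not test_ham
```

The result can then be passed straight to pytest, e.g. pytest -v -k "not test_spam and not test_ham" tests.py.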
You could get this to work by creating a conftest.py file:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    # nargs="+" lets --skiplist accept one or more test names
    parser.addoption("--skiplist", nargs="+", default=[],
                     help="skip listed tests")

def pytest_collection_modifyitems(config, items):
    tests_to_skip = config.getoption("--skiplist")
    if not tests_to_skip:
        # --skiplist not given on the cli, therefore move on
        return
    skip_listed = pytest.mark.skip(reason="included in --skiplist")
    for item in items:
        if item.name in tests_to_skip:
            item.add_marker(skip_listed)
You would use it with:
$ pytest --skiplist test1 test2
Note that if you always skip the same tests, the list can be defined in conftest.py itself.
I'm using contracts for Python to specify preconditons/postconditions/invariants. I'm also using doctests for doing unit testing.
I'd like to have all of my doctest unit tests run with contracts enabled, and I'd like to run my tests using nose. Unfortunately, if I run the tests with nose, it does not execute the pre/post/invariant assertions. I put a setup function in each .py file to make sure that contract.checkmod gets called:
def setup():
    import contract
    contract.checkmod(__name__)
I can confirm that this function is being executed by nose before it runs the tests, but the contracts still don't get executed.
On the other hand, if I run the doctest by calling doctest.testmod, the pre/post/inv do get called:
def _test():
    import contract
    contract.checkmod(__name__)
    import doctest
    doctest.testmod()

if __name__ == '__main__':
    _test()
Here's an example of a Python script whose test will succeed if called directly, but fail if called with nose:
import os

def setup():
    import contract
    contract.checkmod(__name__)

def delete_file(path):
    """Delete a file. File must be present.
    >>> import minimock
    >>> minimock.mock('os.remove')
    >>> minimock.mock('os.path.exists', returns=True)
    >>> delete_file('/tmp/myfile.txt')
    Called os.path.exists('/tmp/myfile.txt')
    Called os.remove('/tmp/myfile.txt')
    >>> minimock.restore()
    pre: os.path.exists(path)
    """
    os.remove(path)

if __name__ == '__main__':
    setup()
    import doctest
    doctest.testmod()
When I run the above file standalone, the tests pass:
$ python contracttest.py -v
Trying:
    import minimock
Expecting nothing
ok
Trying:
    minimock.mock('os.remove')
Expecting nothing
ok
Trying:
    minimock.mock('os.path.exists', returns=True)
Expecting nothing
ok
Trying:
    delete_file('/tmp/myfile.txt')
Expecting:
    Called os.path.exists('/tmp/myfile.txt')
    Called os.remove('/tmp/myfile.txt')
ok
Trying:
    minimock.restore()
Expecting nothing
ok
2 items had no tests:
    __main__
    __main__.setup
1 items passed all tests:
   5 tests in __main__.delete_file
5 tests in 3 items.
5 passed and 0 failed.
Test passed.
Here it is with nose:
$ nosetests --with-doctest contracttest.py
F
======================================================================
FAIL: Doctest: contracttest.delete_file
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/doctest.py", line 2131, in runTest
    raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for contracttest.delete_file
  File "/Users/lorin/Desktop/contracttest.py", line 10, in delete_file

----------------------------------------------------------------------
File "/Users/lorin/Desktop/contracttest.py", line 17, in contracttest.delete_file
Failed example:
    delete_file('/tmp/myfile.txt')
Expected:
    Called os.path.exists('/tmp/myfile.txt')
    Called os.remove('/tmp/myfile.txt')
Got:
    Called os.remove('/tmp/myfile.txt')
----------------------------------------------------------------------
Ran 1 test in 0.055s