I am using the Python coverage package to determine the line coverage percentage for the following file:
coverage report -m math_test.py
After running the command, however, I ended up with 0 lines covered. Here is math_test.py:
import example
import pytest
import unittest

class SampleTest(unittest.TestCase):
    def testAddition(self):
        expected = 10
        math_addition = example.add(5,5)
        self.assertEqual(math_addition, expected)

def add(x, y):
    return x + y
Running math_test.py won't do anything. It defines a class and a function, but doesn't do anything with either of them. Coverage.py is not a test runner. You need to use something like pytest or unittest to run the tests:
coverage run -m unittest discover
coverage report -m
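If you want running the file directly to execute the tests as well (so that coverage run math_test.py has something to measure), one option is to add the standard unittest entry point at the bottom of math_test.py. A minimal sketch:

import example
import unittest

class SampleTest(unittest.TestCase):
    def testAddition(self):
        expected = 10
        math_addition = example.add(5,5)
        self.assertEqual(math_addition, expected)

if __name__ == "__main__":
    # Running the file now actually executes the tests,
    # so there are executed lines for coverage to report.
    unittest.main()

With that entry point in place, coverage run math_test.py followed by coverage report -m should show non-zero coverage.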
When I run my test suite with parallel execution, I get different results than when I run it with a single test runner.
I tried two approaches, and neither produced proper results: the commands either produced only one partial result, or fewer results than we had runners.
I've tried with coverage and pytest combined:
COVERAGE_PROCESS_START=./my_app coverage --parallel-mode --concurrency=multiprocessing run -m pytest -m "not e2e" -n 4
Also with pytest and pytest-cov:
pytest -m "not e2e" -n 4 --cov=my_app
The second one also had the issue that some templatetags were not seen as registered even though others in the same directory were registered.
After running these I executed coverage combine and coverage report. When run in parallel the results are always incomplete compared to running it with only one test runner, which works perfectly fine:
coverage run -m pytest -m "not e2e"
This is my coveragerc:
[run]
include = my_app/*
omit = *migrations*, *tests*
plugins =
    django_coverage_plugin

[report]
show_missing = true
Does someone know how to get proper coverage results when running pytest in parallel?
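For reference, coverage.py also supports configuring per-process data files in the [run] section itself rather than only on the command line; parallel and concurrency are documented coverage.py options, and the sketch below simply adds them to the coveragerc shown above (whether this resolves the xdist case is not something I can confirm):

[run]
include = my_app/*
omit = *migrations*, *tests*
# Write one data file per process (.coverage.<host>.<pid>.<random>)
# so that "coverage combine" can merge them afterwards.
parallel = true
concurrency = multiprocessing
plugins =
    django_coverage_plugin

[report]
show_missing = true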
I am running SpringSource Tool Suite 3.9.7.RELEASE with the PyDev 7.5.0 plugin. Eclipse PyDev is giving me unexpected behaviour with unittest test cases which I do not get in Spyder.
I have constructed the following test script:
from unittest import TestCase
from unittest import TextTestRunner

class Test1(TestCase):
    def runTest(self):
        print("Running Test1 - This should not appear")
        self.assertTrue(True, "Test1")

class Test2(TestCase):
    def runTest(self):
        print("Running Test2 - This should not appear")
        self.assertTrue(True, "Test2")

if __name__ == '__main__':
    runner = TextTestRunner()
    print("Finished Running - This should appear")
When I run it from STS using Run As / Python unit-test, it gives the following output:
Finding files... done.
Importing test modules ... done.
Running Test1 - This should not appear
Running Test2 - This should not appear
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
When I run exactly the same script in Spyder it gives me:
Finished Running - This should appear
The Spyder output is what I would expect and want. It looks like PyDev is grabbing every unittest TestCase object it can find and running them all when TextTestRunner is instantiated.
I have created this trivial example, but it arose from a real-world project I am working on which uses both Python and Java, and which I can only develop using Eclipse PyDev. I need to be able to specify which test cases I run using TextTestRunner's run() method in the normal way. Can anyone help me here?
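For reference, the "normal way" referred to above is building a TestSuite explicitly and passing it to the runner's run() method; a minimal sketch reusing the Test1 and Test2 classes from the script above:

from unittest import TestSuite, TextTestRunner

if __name__ == '__main__':
    # Build a suite containing only the tests we want,
    # then hand it to the runner explicitly.
    suite = TestSuite()
    suite.addTest(Test1())   # a bare TestCase instance runs its runTest()
    runner = TextTestRunner(verbosity=2)
    runner.run(suite)
    print("Finished Running - This should appear")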
I want to have a little script that will find, run, and report on all the tests in the folder, like this one:
#!/bin/bash
coverage run -m unittest discover
coverage report -m
But when I run it, I get errors that I do not get on Windows (such as complaints about using super() without arguments). As I understand it, this is because the built-in default version of Python on Linux is 2.x, whereas I am using 3.6. How should I change the script so that it uses the Python 3.6 interpreter?
EDIT:
So here's one of the files with tests that I run:
#!/usr/bin/env python3
import unittest
import random
import math
import sort_functions as s
from comparison_functions import less, greater

class BaseTestCases:

    class BaseTest(unittest.TestCase):
        sort_func = None

        def setUp(self):
            self.array_one = [101, -12, 99, 3, 2, 1]
            self.array_two = [random.random() for _ in range(100)]
            self.array_three = [random.random() for _ in range(500)]
            self.result_one = sorted(self.array_one)
            self.result_two = sorted(self.array_two)
            self.result_three = sorted(self.array_three)

        def tearDown(self):
            less.calls = 0
            greater.calls = 0

        def test_sort(self):
            result_one = self.sort_func(self.array_one)
            result_two = self.sort_func(self.array_two)
            result_three = self.sort_func(self.array_three)
            self.assertEqual(self.result_one, result_one)
            self.assertEqual(self.result_two, result_two)
            self.assertEqual(self.result_three, result_three)

        # and some more tests here

class TestBubble(BaseTestCases.BaseTest):
    def setUp(self):
        self.sort_func = s.bubble_sort
        super().setUp()

# and some more classes looking like this
And the error:
ERROR: test_key (test_sort_func.TestBubble)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/lelik/Desktop/Sorters/test_sort_func.py", line 67, in setUp
super().setUp()
TypeError: super() takes at least 1 argument (0 given)
First, install coverage for your Python 3 (assuming you have Python 3 and pip installed):
sudo python3 -m pip install coverage
Then, to run coverage with Python 3, invoke it through the interpreter, for example python3 -m coverage report -m.
So your final script should look like this:
#!/bin/bash
python3 -m coverage run -m unittest discover
python3 -m coverage report -m
You can also replace python3 with the full path to your Python binary, for example /usr/bin/python3, so you can call it this way as well:
#!/bin/bash
/usr/bin/python3 -m coverage run -m unittest discover
/usr/bin/python3 -m coverage report -m
The problem is that the coverage command on your Linux host has been installed for Python 2. That is, somewhere there exists a coverage script that starts with:
#!/usr/bin/python
And on your system, /usr/bin/python is python 2.
The best solution here is probably to set up a Python 3 virtual environment for running your tests (and then install coverage into that virtualenv). You may also want to investigate tox, which will handle this for you automatically.
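A rough sketch of that virtualenv approach (the directory name .venv is arbitrary):

#!/bin/bash
# Create a Python 3 virtual environment, install coverage into it,
# and run the tests with the virtualenv's interpreter.
python3 -m venv .venv
source .venv/bin/activate
pip install coverage
coverage run -m unittest discover
coverage report -m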
I want to run nose tests from a Python script, but I want not only to run them, but also to measure test coverage.
Right now I have the following code:
import os
import sys

import nose

sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))

import tests

if __name__ == "__main__":
    config = nose.config.Config(verbosity=3, stopOnError=False, argv=["--with-coverage"])
    result = nose.run(module=tests, config=config)
What should I add to get my coverage report?
Hell yeah! After some small debugging of Nose Test I've managed to do it!
if __name__ == "__main__":
    file_path = os.path.abspath(__file__)
    tests_path = os.path.join(os.path.abspath(os.path.dirname(file_path)), "tests")
    result = nose.run(argv=[os.path.abspath(__file__),
                            "--with-cov", "--verbosity=3", "--cover-package=phased",
                            tests_path])
EDIT: To run plugins with nose.run(), you need to use the 'plugins' keyword:
http://nose.readthedocs.org/en/latest/usage.html#using-plugins
Your code is all set -- you need to enable coverage via the runner. Simply run nose like this:
nosetests --with-coverage
There are more options here:
http://nose.readthedocs.org/en/latest/plugins/cover.html
FYI, you might need to run the following to get the coverage package:
pip install coverage
How can I generate a test report using pytest? I searched for this, but whatever I found was about coverage reports.
I tried with this command:
py.test sanity_tests.py --cov=C:\Test\pytest --cov-report=xml
But as the parameters suggest, this generates a coverage report, not a test report.
Ripped from the comments: you can use the --junitxml argument.
$ py.test sample_tests.py --junitxml=C:\path\to\out_report.xml
You can use the pytest plugin 'pytest-html' to generate HTML reports, which can also be forwarded to other teams.
First install the plugin:
$ pip install pytest-html
Second, just run your tests with this command:
$ pytest --html=report.html
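If the report needs to be shared as a single file, pytest-html also has an option to inline its CSS and assets:
$ pytest --html=report.html --self-contained-html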
You can also make use of the hooks provided by the plugin, for example in your conftest.py.
import pytest
from py.xml import html

def pytest_html_report_title(report):
    report.title = "My very own title!"
Reference: https://pypi.org/project/pytest-html/
I haven't tried it yet, but you can refer to https://github.com/pytest-dev/pytest-html, a Python library that can generate HTML output.
py.test --html=Report.html
Here you can specify your Python test file as well. When no file is specified, it picks up all files with names like 'test_*' in the directory where the command is run, executes them, and generates a report named Report.html.
You can also modify the name of the report accordingly.