How to measure coverage in a proper way - python

Pytest + coverage are showing very strange coverage statistics.
They count only the modules that have tests; other Python modules are not included in the calculation for some reason.
I have a simple Python Microservice with a structure similar to:
README.rst
Dockerfile
manage.py
api_service/
setup.py
requirements.txt
tests/
Where api_service contains all the logic, and tests contains unit tests.
API is written in Python 3.X
Unit tests - Pytest 3.10.0
I'm running these commands to get code coverage statistics:
python coverage run pytest -v --junit-xml=junit-report.xml tests/
python coverage xml --fail-under 80
python coverage report
It shows really strange and unexpected results for me.
E.g. there are empty __init__.py modules in the final report (with 100% coverage), and they affect the final coverage percentage.
Also, it adds a lot of modules with just abstract classes, etc.
But what is really unexpected: it isn't counting the Python modules that have no tests at all. It's awful!
Are there any commands, flags, etc. to handle this situation in a proper way?
I've also tried to run something like:
python coverage run --source=service_api -v --junit-xml=junit-report.xml tests/
But it also returns unexpected results.

cd into the project directory and run:
pytest --cov=. tests/ --cov-report xml
in order to get the code coverage for your source files in XML format.
Prerequisite:
pip install pytest pytest-cov
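If you only want the application package measured (so that empty __init__.py files and unrelated modules don't skew the percentage), point --cov at the package instead of the whole directory. A minimal sketch, assuming the package directory is named api_service as in the question:
# restrict measurement to the api_service package; emit an XML report
# plus a terminal report that lists missing line numbers
pytest --cov=api_service --cov-report=xml --cov-report=term-missing tests/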


How to get coverage reporting when testing a pytest plugin?

Context
I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins.
The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts.
I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution.
Repository layout
pkg_root/
    .tox/
        py3/
            lib/
                python3.7/
                    site-packages/
                        plugin_module/
                            supporting_module.py
                            plugin.py
                            some_data.dat
    plugin_module/
        supporting_module.py
        plugin.py
        some_data.dat
    tests/
        conftest.py
        test_my_plugin.py
    tox.ini
    setup.py
Some relevant snippets with commentary:
tox.ini
[pytest]
addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
testpaths = tests
This configuration gives me an error that no data was collected; no htmlcov is created in this case.
If I just use --cov, I get (expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc.
conftest.py
pytest_plugins = ['pytester'] # Entire contents of file!
test_my_plugin.py
def test_a_thing(testdir):
    testdir.makepyfile(
        """
        def test_that_fixture(my_fixture):
            assert my_fixture.foo == 'bar'
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?
Instead of using the pytest-cov plugin, use coverage to run pytest:
coverage run -m pytest ....
That way, coverage will be started before pytest.
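For a tox-driven run, that just means making coverage the outermost command in tox.ini. A minimal sketch of a test environment, assuming the package is named plugin_module as in the layout above (environment name, deps, and options are illustrative):
[testenv]
deps =
    pytest
    coverage
commands =
    coverage run --source=plugin_module -m pytest {posargs:tests}
    coverage report -m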
You can achieve what you want without pytest-cov.
❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing
OR SHORTER
❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m
Example: (for your directory structure)
❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 items
tests/test_my_plugin.py::test_my_plugin PASSED [100%]
======================== 1 passed in 0.04s =========================
Name                                  Stmts   Miss  Cover   Missing
--------------------------------------------------------------------
plugin_module/supporting_module.py        4      0   100%
plugin_module/plugin.py                   6      0   100%
--------------------------------------------------------------------
TOTAL                                    21      0   100%
For an even nicer output, you can use:
❯ coverage html && open htmlcov/index.html
Documentation
❯ coverage -h
❯ pytest -h
coverage
run -- Run a Python program and measure code execution.
-m, --module --- The given <pyfile> is an importable Python module, not a script path, to be run as 'python -m' would run it.
--source=SRC1,SRC2, --- A list of packages or directories of code to be measured.
report -- Report coverage stats on modules.
-m, --show-missing --- Show line numbers of statements in each module that weren't executed.
html -- Create an HTML report.
pytest
-v, --verbose -- increase verbosity.

Python adding unit test inside package

I have a Python package for which I am trying to write unit tests.
The package looks like this:
helper/
utils/
app/
requirements.txt
README.md
tests/
I come from a Java background, so I thought of organizing the tests in the same package structure as their source. Therefore my tests directory looks like this:
tests/
    helper/
        helper_a_test.py
    utils/
        util_a_test.py
    app/
        myapp_test.py
When I try invoking the tests as below:
python -m unittest discover
the tests fail with import errors from the source code: modules app, helper, and utils are not found.
I have an __init__.py file in all my packages.
I moved all the tests from the tests sub-directories into the tests root directory, as below:
tests/
    helper_a_test.py
    util_a_test.py
    myapp_test.py
Now all the tests work as expected.
Can someone explain why this is happening? Also, is it good practice to keep all tests inside one directory rather than in their own packages?
You have at least two ways to go back to your original structure and make it work:
The first and most direct one is adding an __init__.py in your test structure (at all levels, including the tests folder itself, which you may have missed).
The second one is to turn your code into a Python package (adding a setup.py, so that your app will be installable with pip), install the package into the local interpreter, and then run the tests.
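A minimal setup.py sketch for that second option (the project name here is assumed, not from the question; the packages are discovered automatically):
# setup.py (sketch) -- makes the top-level packages installable and importable
from setuptools import setup, find_packages

setup(
    name="myapp",  # assumed project name
    version="0.1.0",
    packages=find_packages(exclude=["tests", "tests.*"]),
)
After that, pip install -e . from the project root makes app, helper, and utils importable from anywhere, including from the tests.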
I would also suggest using pytest and calling pytest directly instead of python -m unittest.

How to omit (remove) virtual environment (venv) from python coverage unit testing?

https://coverage.readthedocs.io/en/coverage-4.5.1a/source.html#source
My coverage also includes the "venv" folder, and I would like to exclude it.
No matter what I do, even with --include or --omit, nothing works.
coverage run --omit /venv/* tests.py
This runs the tests but still adds the "venv" folder and the dependencies, with their % coverage.
When I do
coverage run --include tests.py
to run only the tests, it says:
Nothing to do.
It is pretty annoying... can someone please help?
The help text for the --omit option says (documentation)
--omit=PAT1,PAT2,... Omit files whose paths match one of these patterns.
Accepts shell-style wildcards, which must be quoted.
It will not work without quoting the wildcard, as bash will expand the wildcards before handing the argument list to the coverage binary. Use single-quotes to avoid bash wildcard expansion.
To run my tests without getting coverage from any files within venv/*:
$ coverage run --omit 'venv/*' -m unittest tests/*.py && coverage report -m
........
----------------------------------------------------------------------
Ran 8 tests in 0.023s
OK
Name                      Stmts   Miss  Cover   Missing
---------------------------------------------------------
ruterstop.py                 84      8    90%   177, 188, 191-197, 207
tests/test_ruterstop.py     108      0   100%
---------------------------------------------------------
TOTAL                       192      8    96%
If you usually use plain python -m unittest to run your tests you can of course omit the test target argument as well.
$ coverage run --omit 'venv/*' -m unittest
$ coverage report -m
For those who don't want to pass --omit each time they run coverage, you can define the following in .coveragerc or in pyproject.toml. Example for .coveragerc:
# .coveragerc file content
[run]
omit =
    .venv/*
    tests/*
Example for pyproject.toml:
# pyproject.toml file content
[tool.coverage.run]
omit = [
"tests/*",
".venv/*",
]
The command:
coverage run --omit /venv/* tests.py
omits coverage only for the absolute path /venv (i.e. a venv directory at the root of the filesystem), which is probably not where your virtual environment lives.
You should instead use a relative pattern with a wildcard, such as:
coverage run --omit 'venv/*' tests.py
Using */venv/* matches the venv directory wherever it sits and eliminates all files within your virtual environment:
coverage run tests.py && coverage report --omit='*/venv/*'

Making py.test, coverage and tox work together: __init__.py in tests folder?

I'm having a weird problem with tox, py.test, coverage and pytest-cov: when py.test with the --cov option is launched from tox, it seems to require an __init__.py file in the tests folder which is not immediately obvious.
While writing this post, I have kind of solved the initial problem by adding the aforesaid tests/__init__.py, but to this moment I don't fully understand why exactly it works or doesn't work, so I'm still asking for help. Please see below for details.
I've found a related question on SO but it only makes it more confusing because the answer seems to be opposite to what I've figured out so far:
`py.test` and `__init__.py` files
See also the official docs here: py.test - Good Integration Practices (the very bottom of the page).
Simplified project structure:
setup.py
tox.ini
.coveragerc
project/
    __init__.py
    module1.py
    module2.py
    tests/
        __init__.py (optional, an empty file)
        test_module1.py
        test_module2.py
Relevant part of tox.ini:
[testenv:check]
commands = py.test --cov=project --cov-report=term
deps =
    pytest
    coverage
    pytest-cov
[pytest]
python_files = test_*.py
norecursedirs = .tox
Relevant part of .coveragerc:
[run]
branch = True
omit = project/tests/*
Now, the results:
py.test --cov=project --cov-report=term run from project root => correct coverage whether tests/__init__.py file is present or not.
tox -e check without tests/__init__.py => the tests are discovered and run, but I get a warning "Coverage.py warning: No data was collected." and the coverage is 0% for all modules
tox -e check with tests/__init__.py => correct coverage again.
It's not immediately obvious to me why the tests/__init__.py file has to be there (adding this empty file solved the initial problem) for the tox run, but it doesn't matter when you run the tests/coverage manually. Any ideas?
Use --cov {envsitepackagesdir}/<your-package-name> in tox.ini.
See:
Using py.test with coverage doesn't include imports
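A minimal sketch of what that looks like in the tox.ini from the question (the package name project comes from the layout above; deps are illustrative):
[testenv:check]
deps =
    pytest
    pytest-cov
    coverage
commands = py.test --cov {envsitepackagesdir}/project --cov-report=term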
I got rid of pytest-cov and ran coverage directly instead.
I also noticed that with pytest I did need the blank __init__.py in my test directory for it to function correctly. There is probably a reason for it somewhere.
I realize this is a couple of years old, but in case someone else comes across this...

`python -m unittest discover` does not discover tests

Python's unittest discover does not find my tests!
I have been using nose to discover my unit tests and it is working fine. From the top level of my project, if I run nosetests I get:
Ran 31 tests in 0.390s
Now that Python 2.7 unittest has discovery, I have tried using
python -m unittest discover
but I get
Ran 0 tests in 0.000s
My directory structure is:
myproj/
    reporter/
        __init__.py
        report.py
        [other app modules]
    tests/
        __init__.py
        test-report.py
        [other test modules]
Do you have any ideas why unittest's discovery algorithm can't find the tests?
I'm using Python 2.7.1 and nose 1.0.0 on Windows 7.
The behaviour is intentional, but the documentation could make this clearer. If you look at the first paragraph in the test discovery section, it says:
For a project’s tests to be compatible with test discovery they must all be importable from the top level directory of the project (in other words, they must all be in Python packages).
A corollary to that is that the file names must also be valid Python module names. test-report.py fails that test, since test-report is not a legal Python identifier.
A docs bug suggesting that this be mentioned explicitly in the documentation for the -p pattern option would probably be a good way forward.
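A minimal fix, assuming the layout above (Unix-style command shown; on Windows just rename the file), is to give the module a name that is a valid Python identifier:
# rename the test module so discovery can import it; the default pattern test*.py still matches
mv tests/test-report.py tests/test_report.py
python -m unittest discover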
I had this problem because some directories in a project were missing __init__.py. I thought I didn't need them in Python 3.7.
Just add an __init__.py to every directory and python3 -m unittest will find the tests automatically.
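A minimal sketch of that, with illustrative directory names (a source package mypkg next to a tests directory):
# add the missing package markers so discovery can import everything
touch mypkg/__init__.py tests/__init__.py
python3 -m unittest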
As someone relatively new to Python, the naming convention in the docs implied the opposite to me. Ben's comment was very helpful: the default discovery pattern looks for test modules whose names start with "test".
I thought the introspection would just look for class names and not require a specific file naming convention.
Here is what the docs say:
https://docs.python.org/3/library/unittest.html
python -m unittest discover -s project_directory -p "*_test.py"
I couldn't get this to work, but by changing my file names to match "test_*.py" - success!
