Context
I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins.
The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts.
I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution.
Repository layout
pkg_root/
    .tox/
        py3/
            lib/
                python3.7/
                    site-packages/
                        plugin_module/
                            supporting_module.py
                            plugin.py
                            some_data.dat
    plugin_module/
        supporting_module.py
        plugin.py
        some_data.dat
    tests/
        conftest.py
        test_my_plugin.py
    tox.ini
    setup.py
Some relevant snippets with commentary:
tox.ini
[pytest]
addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
testpaths = tests
This configuration gives me an error that no data was collected; no htmlcov is created in this case.
If I just use --cov, I get (expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc.
conftest.py
pytest_plugins = ['pytester'] # Entire contents of file!
test_my_plugin.py
def test_a_thing(testdir):
    testdir.makepyfile(
        """
        def test_that_fixture(my_fixture):
            assert my_fixture.foo == 'bar'
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?
Instead of using the pytest-cov plugin, use coverage to run pytest:
coverage run -m pytest ....
That way, coverage will be started before pytest.
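Since the question uses tox, wiring this in might look like the following sketch (the testenv section and paths here are assumptions, not taken from the question's actual tox.ini):

```ini
# tox.ini (sketch)
[testenv]
deps =
    pytest
    coverage
commands =
    coverage run -m pytest tests
    coverage report -m
```

Because coverage is already recording when pytest imports the plugin, the top-level statements (imports, class definitions) are counted as covered.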
You can achieve what you want without pytest-cov.
❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing
OR SHORTER
❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m
Example: (for your directory structure)
❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 items
tests/test_my_plugin.py::test_my_plugin PASSED [100%]
======================== 1 passed in 0.04s =========================
Name Stmts Miss Cover Missing
-------------------------------------------------------------
plugin_module/supporting_module.py 4 0 100%
plugin_module/plugin.py 6 0 100%
-------------------------------------------------------------
TOTAL 21 0 100%
For an even nicer output, you can use:
❯ coverage html && open htmlcov/index.html
Documentation
❯ coverage -h
❯ pytest -h
coverage
run -- Run a Python program and measure code execution.
-m, --module --- The given argument is an importable Python module, not a script path, to be run as 'python -m' would run it.
--source=SRC1,SRC2, --- A list of packages or directories of code to be measured.
report -- Report coverage stats on modules.
-m, --show-missing --- Show line numbers of statements in each module that weren't executed.
html -- Create an HTML report.
pytest
-v, --verbose -- increase verbosity.
Related
Using the following setup, the calculated coverage is less than if I use a single thread without parallelization. Coverage creates only 1 coverage file in the project root directory, which I expect is where the problem lies.
I cannot identify what I am doing wrong; the reported coverage is less than if I simply run coverage run -m pytest (on a single thread). The tests themselves run in parallel just fine.
Can anyone identify my mistake? I wonder if an environment variable is missing. I run the command from the project root, which contains .coveragerc and sitecustomize.py.
coverage erase && COVERAGE_PROCESS_START=./.coveragerc coverage run --concurrency=multiprocessing --parallel-mode -m pytest -n 8 && coverage combine && coverage report
sitecustomize.py
import coverage
coverage.process_startup()
.coveragerc
[run]
include =
    lettergun/*
omit =
    *migrations*
    *tests*
    *.html
plugins = django_coverage_plugin
parallel = True
concurrency = multiprocessing
branch = True
pytest.ini
[pytest]
addopts = --ds=config.settings.test --reuse-db -n 8
python_files = test_*.py
norecursedirs = node_modules
DJANGO_SETTINGS_MODULE = config.settings.test
https://github.com/nedbat/coveragepy/issues/1341 provides some context (and a possible solution to this), especially this comment: https://github.com/nedbat/coveragepy/issues/1341#issuecomment-1302863172
I tried to add the coverage-enable-subprocess package but didn't get it to work.
I switched to pytest-cov and got a coverage report when using pytest-xdist.
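For reference, a pytest-cov invocation replacing the manual multiprocessing setup could look like this sketch (pytest-cov coordinates the xdist workers itself, so the parallel/concurrency options in .coveragerc and the sitecustomize.py hook should no longer be needed):

```shell
pytest -n 8 --cov=lettergun --cov-report=term-missing
```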
When I run the coverage.py commands, the only results in the report are from the test directory. Obviously, the value of the report is from code coverage in the source directory. How do I view that result?
I'm using:
coverage run -m unittest test.tests1.sometestfile1
and all the imports/functions from the source directory execute and pass, but the report looks like this:
$ coverage report
Name Stmts Miss Cover
-----------------------------------------------
test/__init__.py 1 0 100%
test/../sometestfile1.py 116 69 41%
test/../sometestfile2.py 116 69 41%
test/../sometestfile3.py 116 69 41%
...
-----------------------------------------------
TOTAL 373 137 63%
I've experimented with adding the source dir to coverage's --source and --include options, but it didn't resolve the issue.
How can I view the coverage from the actual source files?
The solution is to use coverage's --source= option (it is a coverage option, not a pytest or unittest one).
you have:
coverage run -m unittest test.tests1.sometestfile1
which shows you coverage for only the test files.
You need to ensure that the files from your test/Lib/site-packages are getting coverage. (I'm assuming you're making a package for a user to import and maybe for PyPI to host.)
These are being created by tox in the .tox/test subdirectory.
This is why coverage run -m --source=. unittest test.tests1.sometestfile1 does not work.
You need to use the lib name.
coverage run -m --source=<module> unittest test.tests1.sometestfile1
E.g. for my project pyweaving which uses this directory structure:
pyweaving/
    tests/
    docs/
    src/
        pyweaving/
            __init__.py
            foo.py
            data/
            generators/
                __init__.py
                bar.py
    setup.py (just the stub code)
    setup.cfg
    tox.ini
my coverage command line looks like this:
coverage run -m --source=pyweaving pytest --basetemp="{envtmpdir}" {posargs}
pytest is issuing a warning due to an unknown custom mark when running my test suite despite (hopefully) registering it correctly as per the pytest documentation (see https://docs.pytest.org/en/stable/mark.html#registering-marks).
My Python project's structure is (simplified for the sake of this query):
my_project/
    src/
    tests/
        integration/
        unit/
        conftest.py
        pytest.ini
My pytest.ini is (again, simplified):
# pytest.ini
[pytest]
markers =
    incremental: marks related sequential tests to xfail after an earlier failure
because my conftest.py contains the @pytest.mark.incremental recipe outlined in the pytest documentation (see https://docs.pytest.org/en/latest/example/simple.html#incremental-testing-test-steps).
When I run pytest from the command line from within the root directory of my project (i.e. /my_project/ $ pytest), pytest issues the following warning:
PytestUnknownMarkWarning: Unknown pytest.mark.incremental - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
@pytest.mark.incremental
-- Docs: https://docs.pytest.org/en/stable/warnings.html
However, if I tell pytest to run the tests directory (i.e. /my_project/ $ pytest tests) I get no such warning. Similarly if I move my pytest.ini file from within the tests directory back to the project root directory then again I don't get the error.
Is there a way to configure pytest so that I can keep my structure as tests/pytest.ini to avoid over-cluttering the project root directory, while still ensuring that my pytest.ini is read correctly by pytest when I just run /my_project/ $ pytest from the command line?
Extra info in case it's useful:
(my-project-env) /Users/me/Documents/my_project/ $ pytest
============================================= test session starts =============================================
platform darwin -- Python 3.8.5, pytest-6.0.2, py-1.9.0, pluggy-0.13.1
rootdir: /Users/me/Documents/my_project
I know this is an old question, but may I suggest that you run pytest from inside the folder where you put your pytest.ini file?
cd tests
pytest
I think that may solve your problem.
Pytest + coverage are showing very strange coverage statistics.
They are counting only those modules where tests were added, but other Python modules are not calculated for some reason.
I have a simple Python Microservice with a structure similar to:
README.rst
Dockerfile
manage.py
api_service/
setup.py
requirements.txt
tests/
Where api_service contains all the logic, and tests contains unit tests.
API is written in Python 3.X
Unit tests - Pytest 3.10.0
I'm running these commands to get a code coverage statistics:
python coverage run pytest -v --junit-xml=junit-report.xml tests/
python coverage xml --fail-under 80
python coverage report
It shows really strange and unexpected results for me.
e.g. there are empty __init__.py modules in the final report (with 100% coverage) and they affect the final coverage percentage.
Also, it adds a lot of modules with just abstract classes, etc.
But what is really not expected at all - it's not counting Python modules without tests. It's awful!
Are there any commands, flags, etc. to handle this situation in a proper way?
I've tried also to run something like:
python coverage run --source=service_api -v --junit-xml=junit-report.xml tests/
But it also returns unexpected results.
cd into the project directory and run:
pytest --cov=. tests/ --cov-report xml
in order to get the code coverage for your source files in xml format.
prereq:
pip install pytest pytest-cov
For Jedi we want to generate our test coverage. There is a related question in stackoverflow, but it didn't help.
We're using py.test as a test runner. However, we are unable to add the imports and other "imported" stuff to the report. For example __init__.py is always reported as being uncovered:
Name Stmts Miss Cover
--------------------------------------------------
jedi/__init__ 5 5 0%
[..]
Clearly this file is being imported and should therefore be reported as tested.
We start tests like this [*]:
py.test --cov jedi
As you can see we're using pytest-coverage.
So how is it possible to properly count coverage of files like __init__.py?
[*] We also tried starting test without --doctest-modules (removed from pytest.ini) and activate the coverage module earlier by py.test -p pytest_cov --cov jedi. Neither of them work.
I've offered a bounty. Please try to fix it within Jedi. It's publicly available.
@hynekcer gave me the right idea. But basically the easiest solution lies somewhere else:
Get rid of pytest-cov!
Use
coverage run --source jedi -m py.test
coverage report
instead!!! This way you're just running a coverage on your current py.test configuration, which works perfectly fine! It's also philosophically the right way to go: Make each program do one thing well - py.test runs tests and coverage checks the code coverage.
Now this might sound like a rant, but really. pytest-cov hasn't been working properly for a while now. Some tests were failing, just because we used it.
As of 2014, pytest-cov seems to have changed hands. py.test --cov jedi test seems to be a useful command again (look at the comments). However, you don't need to use it. But in combination with xdist it can speed up your coverage reports.
I fixed the test coverage to 94% by this patch that simplifies import dependencies and by the command:
py.test --cov jedi test # or
py.test --cov jedi test --cov-report=html # + a listing with red uncovered lines
Uncovered lines are only in conditional commands or in some less used functions but all headers are completely covered.
The problem was that the test configuration test/conftest.py prematurely imported, through its dependencies, almost all files in the project. The conftest file also defines additional command line options and settings that must be in place before the tests run. Therefore I think the pytest_cov plugin works correctly if it ignores everything that was imported together with this file, although it is a pain. I also excluded __init__.py and settings.py from the report: they are simple and completely covered, but they too are imported prematurely as dependencies of conftest.
In my case, all the tests run, but coverage was 0%.
The fix was:
$ export PYTHONPATH="."
After that, the results were correct.
In the past I had a few problems with the py.test command failing to import something, and setting the PYTHONPATH env var was the solution. It worked for me this time too.
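What setting PYTHONPATH="." actually does can be seen in a small self-contained sketch (stdlib only; awslogs itself is not needed and is only mentioned in a comment):

```python
import os
import subprocess
import sys

# Start a child interpreter with PYTHONPATH="." and dump its sys.path.
env = dict(os.environ, PYTHONPATH=".")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True, check=True,
).stdout

# The current working directory is now on the import path, so a package
# living in the working tree (like awslogs/) is importable -- and
# therefore measurable by coverage.
print("'.'" in out or os.getcwd() in out)
```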
My real example with awslogs
First with PYTHONPATH unset:
$ py.test --cov=awslogs tests/
========================================= test session starts =========================================
platform linux2 -- Python 2.7.9, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /home/javl/sandbox/awslogs/github/awslogs, inifile:
plugins: cov-2.2.0
collected 11 items
tests/test_it.py ...........Coverage.py warning: No data was collected.
--------------------------- coverage: platform linux2, python 2.7.9-final-0 ---------------------------
Name Stmts Miss Cover
-------------------------------------------
awslogs/__init__.py 2 2 0%
awslogs/bin.py 85 85 0%
awslogs/core.py 143 143 0%
awslogs/exceptions.py 12 12 0%
-------------------------------------------
TOTAL 242 242 0%
====================================== 11 passed in 0.38 seconds ======================================
Resulting coverage is 0%.
Then I set the PYTHONPATH:
$ export PYTHONPATH="."
and rerun the test:
$ py.test --cov=awslogs tests/
========================================= test session starts =========================================
platform linux2 -- Python 2.7.9, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /home/javl/sandbox/awslogs/github/awslogs, inifile:
plugins: cov-2.2.0
collected 11 items
tests/test_it.py ...........
--------------------------- coverage: platform linux2, python 2.7.9-final-0 ---------------------------
Name Stmts Miss Cover
-------------------------------------------
awslogs/__init__.py 2 0 100%
awslogs/bin.py 85 9 89%
awslogs/core.py 143 12 92%
awslogs/exceptions.py 12 2 83%
-------------------------------------------
TOTAL 242 23 90%
====================================== 11 passed in 0.44 seconds ======================================
Now the coverage is 90%.
WARNING: Manipulating PYTHONPATH can have strange side effects. I currently run into a problem where a pbr-based package creates an egg directory when building a distributable, and if PYTHONPATH is set to ".", the egg-related package is automatically considered installed. For this reason I stopped using pytest-cov and followed the advice to use the coverage tool instead.
I had this problem with py.test, coverage, and the django plugin.
Apparently the model files are imported before coverage is started.
Not even "-p coverage" for early loading of the coverage plugin worked.
I fixed it (ugly?) by removing the models module from sys.modules and re-importing it in the test file that tests the model:
import sys

# Drop the cached module so the re-import below re-executes its
# top-level code while coverage is active.
del sys.modules['project.my_app.models']
from project.my_app import models

def test_my_model():
    ...
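A slightly tidier variant of the same trick relies on importlib.reload, which re-executes a module's top-level code. This sketch uses a stdlib module as a stand-in for the project's models module:

```python
import importlib
import json  # stand-in for project.my_app.models, imported before coverage started

# importlib.reload() re-runs the module body, so its imports and class
# definitions execute again while coverage is recording -- the same
# effect as deleting the entry from sys.modules and re-importing.
json = importlib.reload(json)
print(json.dumps({"ok": True}))  # -> {"ok": true}
```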
If you are using Flask, then this will help you resolve the issue:
pytest --cov=src --cov-report=html