When I run the coverage.py commands, the only results in the report are from the test directory. Obviously, the value of the report is from code coverage in the source directory. How do I view that result?
I'm using:
coverage run -m unittest test.tests1.sometestfile1
and all the imports/functions from the source directory execute and pass, but the report looks like this:
$ coverage report
Name                       Stmts   Miss  Cover
------------------------------------------------
test/__init__.py               1      0   100%
test/../sometestfile1.py     116     69    41%
test/../sometestfile2.py     116     69    41%
test/../sometestfile3.py     116     69    41%
...
------------------------------------------------
TOTAL                        373    137    63%
I've experimented with adding the source dir to coverage's --source and --include options, but it didn't resolve the issue.
How can I view the coverage from the actual source files?
The solution is to use coverage's --source= option.
You have:
coverage run -m unittest test.tests1.sometestfile1
which is showing you only the test files' coverage.
You need to make sure that the files in your .tox/test/Lib/site-packages are the ones being measured (I'm assuming you're making a package for a user to import, and maybe for PyPI to hold). Those copies are created by tox in the .tox/test subdirectory.
This is why coverage run -m --source=. unittest test.tests1.sometestfile1 does not work: the code being imported lives in site-packages, not in the current directory.
You need to use the package name instead:
coverage run -m --source=<module> unittest test.tests1.sometestfile1
E.g. for my project pyweaving which uses this directory structure:
pyweaving/
    tests/
    docs/
    src/
        pyweaving/
            __init__.py
            foo.py
            data/
            generators/
                __init__.py
                bar.py
    setup.py (just the stub code)
    setup.cfg
    tox.ini
my coverage command line looks like this:
coverage run -m --source=pyweaving pytest --basetemp="{envtmpdir}" {posargs}
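That command line is meant to be run by tox, which substitutes {envtmpdir} and {posargs}. As a rough sketch (not the actual pyweaving configuration; the deps list is an assumption), the relevant tox.ini section could look like:
[testenv]
deps =
    pytest
    coverage
commands =
    coverage run -m --source=pyweaving pytest --basetemp="{envtmpdir}" {posargs}
    coverage report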
Context
I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins.
The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts.
I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution.
Repository layout
pkg_root/
    .tox/
        py3/
            lib/
                python3.7/
                    site-packages/
                        plugin_module/
                            supporting_module.py
                            plugin.py
                            some_data.dat
    plugin_module/
        supporting_module.py
        plugin.py
        some_data.dat
    tests/
        conftest.py
        test_my_plugin.py
    tox.ini
    setup.py
Some relevant snippets with commentary:
tox.ini
[pytest]
addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
testpaths = tests
This configuration gives me an error that no data was collected; no htmlcov is created in this case.
If I just use --cov, I get (expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc.
conftest.py
pytest_plugins = ['pytester'] # Entire contents of file!
test_my_plugin.py
def test_a_thing(testdir):
    testdir.makepyfile(
        """
        def test_that_fixture(my_fixture):
            assert my_fixture.foo == 'bar'
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?
Instead of using the pytest-cov plugin, use coverage to run pytest:
coverage run -m pytest ....
That way, coverage will be started before pytest.
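Since the question drives everything through tox, that means dropping the --cov addopts and moving the coverage invocation into the tox commands. A minimal sketch, assuming a standard testenv (the names and deps here are my assumptions, not taken from the question):
[testenv]
deps =
    coverage
    pytest
commands =
    coverage run -m pytest {posargs:tests}
    coverage report -m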
You can achieve what you want without pytest-cov.
❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing
OR SHORTER
❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m
Example: (for your directory structure)
❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 items
tests/test_my_plugin.py::test_my_plugin PASSED [100%]
======================== 1 passed in 0.04s =========================
Name                                  Stmts   Miss  Cover   Missing
--------------------------------------------------------------------
plugin_module/supporting_module.py        4      0   100%
plugin_module/plugin.py                   6      0   100%
--------------------------------------------------------------------
TOTAL                                    21      0   100%
For an even nicer output, you can use:
❯ coverage html && open htmlcov/index.html
Documentation
❯ coverage -h
❯ pytest -h
coverage
run -- Run a Python program and measure code execution.
-m, --module --- The given <pyfile> is an importable Python module, not a script path, to be run as 'python -m' would run it.
--source=SRC1,SRC2, --- A list of packages or directories of code to be measured.
report -- Report coverage stats on modules.
-m, --show-missing --- Show line numbers of statements in each module that weren't executed.
html -- Create an HTML report.
pytest
-v, --verbose -- increase verbosity.
Pytest + coverage are showing very strange coverage statistics.
They count only the modules that have tests; other Python modules are not measured at all for some reason.
I have a simple Python Microservice with a structure similar to:
README.rst
Dockerfile
manage.py
api_service/
setup.py
requirements.txt
tests/
Where api_service contains all the logic, and tests contains unit tests.
API is written in Python 3.X
Unit tests - Pytest 3.10.0
I'm running these commands to get a code coverage statistics:
python coverage run pytest -v --junit-xml=junit-report.xml tests/
python coverage xml --fail-under 80
python coverage report
It shows really strange and unexpected results for me.
e.g. there are empty __init__.py modules in the final report (with 100% coverage) and they affect the final coverage percentage.
Also, it adds a lot of modules with just abstract classes, etc.
But what is really unexpected: it doesn't count Python modules that have no tests at all. It's awful!
Are there any commands, flags, etc. to handle this situation in a proper way?
I've tried also to run something like:
python coverage run --source=service_api -v --junit-xml=junit-report.xml tests/
But it also returns unexpected results.
cd into the project directory and run:
pytest --cov=. tests/ --cov-report xml
in order to get the code coverage for your source files in xml format.
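If you only want the source package measured, rather than everything under the current directory (which is what drags the test modules and empty __init__.py files into the report), point --cov at the package instead. Assuming the package directory from the question, api_service:
pytest --cov=api_service tests/ --cov-report xml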
prereq:
pip install pytest pytest-cov
https://coverage.readthedocs.io/en/coverage-4.5.1a/source.html#source
My coverage is also including the “venv” folder and I would like to exclude it.
No matter what I do, even with --include or --omit, nothing works.
coverage run --omit /venv/* tests.py
This runs the tests, but the report still includes the "venv" folder, its dependencies, and their % coverage.
When I do
coverage run --include tests.py
to run only the tests, it says:
Nothing to do.
It is pretty annoying... can someone please help?
The help text for the --omit option says (documentation)
--omit=PAT1,PAT2,... Omit files whose paths match one of these patterns.
Accepts shell-style wildcards, which must be quoted.
It will not work without quoting the wildcard, as bash will expand the wildcards before handing the argument list to the coverage binary. Use single-quotes to avoid bash wildcard expansion.
To run my tests without getting coverage from any files within venv/*:
$ coverage run --omit 'venv/*' -m unittest tests/*.py && coverage report -m
........
----------------------------------------------------------------------
Ran 8 tests in 0.023s
OK
Name                      Stmts   Miss  Cover   Missing
---------------------------------------------------------
ruterstop.py                 84      8    90%   177, 188, 191-197, 207
tests/test_ruterstop.py     108      0   100%
---------------------------------------------------------
TOTAL                       192      8    96%
If you usually use plain python -m unittest to run your tests you can of course omit the test target argument as well.
$ coverage run --omit 'venv/*' -m unittest
$ coverage report -m
For those who don't want to pass --omit each time when executing coverage, you can define the patterns in .coveragerc or in pyproject.toml. Example for .coveragerc (plain ini syntax, no brackets or quotes):
# .coveragerc file content
[run]
omit =
    .venv/*
    tests/*
Example for pyproject.toml:
# pyproject.toml file content
[tool.coverage.run]
omit = [
"tests/*",
".venv/*",
]
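With either file in place, coverage picks up the omit patterns automatically, so a plain run works, for example:
coverage run -m unittest
coverage report -m
(One caveat: reading configuration from pyproject.toml requires coverage 5.0+, and on Python versions without tomllib it needs the toml extra, i.e. pip install coverage[toml].)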
The command:
coverage run --omit /venv/* tests.py
omits coverage only for an absolute /venv path (i.e. a venv directory at the root of the filesystem).
You should instead use a relative pattern with a wildcard, like:
coverage run --omit 'venv/*' tests.py
The * after venv/ matches every file inside your virtual environment.
Alternatively, you can apply the omit pattern at report time:
coverage run tests.py && coverage report --omit=*/venv/*
I have the following script section in my .travis.yml file:
script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
# don't report on coverage of files in the mymodule/tests dir itself
- coverage run -m --source mymodule --omit mymodule/tests/* py.test mymodule/tests -v
This works fine on my own (Windows) machine, but throws an error on both Linux and OSX on the Travis build. The error is:
Import by filename is not supported.
With the flags in a different order I see a different error (only on the Linux build - the OSX tests pass with this order of the flags):
- coverage run --source eppy --omit eppy/tests/* -m py.test eppy/tests -v
Can't find '__main__' module in 'mymodule/tests/geometry_tests'
What am I doing wrong here?
Solved by changing from using coverage directly to using pytest-cov.
script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
- py.test -v --cov-config .coveragerc --cov=mymodule mymodule/tests
And the .coveragerc file:
# .coveragerc to control coverage.py
[run]
# don't report on coverage of files in the tests dir itself
omit =
    mymodule/tests/*
I don't know why this works where the previous approach didn't, but this at least solves the problem.
I am porting a series of tests from nosetests + python unittest to py.test. I was pleasantly surprised to learn that py.test supports python unittests and running existing tests with py.test is just as easy as calling py.test instead of nosetests on the command line. However I am having problems with specifying the working directory for the tests. They are not in the root project dir but in a subdir. Currently the tests are run like this:
$ nosetests -w test-dir/ tests.py
which changes the current working directory to test-dir and runs all the tests in tests.py. However, when I use py.test
$ py.test test-dir/tests.py
all the tests in tests.py are run but the current working directory is not changed to test-dir. Most tests assume that the working directory is test-dir and try to open and read files from it which obviously fails.
So my question is how to change the current working directory for all tests when using py.test.
There are a lot of tests and I don't want to invest the time to fix them all and make them work regardless of the cwd.
Yes, I can simply do cd test-dir; py.test tests.py but I am used to working from the project root directory and don't want to cd every time I want to run a test.
Here is some code that may give you better idea what I am trying to achieve:
content of tests.py:
import unittest

class MyProjectTestCase(unittest.TestCase):
    def test_something(self):
        with open('testing-info.txt', 'r') as f:
            ...  # test something with f
directory layout:
my-project/
    test-dir/
        tests.py
        testing-info.txt
And then when I try to run the tests:
$ pwd
my-project
$ nosetests -w test-dir tests.py
# all is fine
$ py.test test-dir/tests.py
# tests fail because they cannot open testing-info.txt
So this is the best I could come up with:
# content of conftest.py
import pytest
import os

def pytest_addoption(parser):
    parser.addoption("-W", action="store", default=".",
                     help="Change current working dir before running the collected tests.")

def pytest_sessionstart(session):
    os.chdir(session.config.getoption('W'))
And then when running the tests
$ py.test -W test-dir test-dir/tests.py
It's not clean but it will do the trick until I fix all the tests.
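A lighter-weight variant of the same idea, if changing directory per test is acceptable, is an autouse fixture that chdirs into the directory containing each test file. This is only a sketch (the fixture name is made up), using pytest's built-in monkeypatch:
# conftest.py
import os
import pytest

@pytest.fixture(autouse=True)
def _chdir_to_test_dir(request, monkeypatch):
    # run each test with the cwd set to the test file's directory
    monkeypatch.chdir(os.path.dirname(str(request.fspath)))
monkeypatch.chdir restores the original working directory when each test finishes.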