I have a custom-built integration test suite in Python, which is technically just a run of python my_script.py --config=config.json. I want to compare different configs in terms of what fraction of the lines of code in my project get executed.
The specific content of my_script.py is not relevant - it is a launch point that parses the config, then imports and calls functions defined in multiple files in the ./src folder.
I know there are tools to measure coverage under pytest, e.g. coverage.py; however, is there a way to measure coverage of a non-test Python run?
Coverage.py doesn't care whether you are running tests or not. You can use it to run any Python program. Just replace python with python -m coverage run
Since your usual command line is:
python my_script.py --config=config.json
try this:
python -m coverage run my_script.py --config=config.json
Then report on the data with coverage report -m or coverage html
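To compare two configs, you can point each run at its own data file via the COVERAGE_FILE environment variable and report on them separately. A minimal sketch, where the inline demo_script.py stands in for your my_script.py and the two runs stand in for two configs (assumes coverage.py is installed):

```shell
# demo_script.py is a stand-in for my_script.py; each branch simulates
# code that only one "config" activates
cat > demo_script.py <<'EOF'
import sys

def main(config):
    if config == "a":
        print("path A")   # only executed by config a
    else:
        print("path B")   # only executed by config b

if __name__ == "__main__":
    main(sys.argv[1])
EOF

# one data file per run, via the COVERAGE_FILE environment variable
COVERAGE_FILE=.coverage.a python -m coverage run demo_script.py a
COVERAGE_FILE=.coverage.b python -m coverage run demo_script.py b

# two separate reports whose line fractions you can compare
COVERAGE_FILE=.coverage.a python -m coverage report -m
COVERAGE_FILE=.coverage.b python -m coverage report -m
```

Each report's "Cover" column then gives the fraction of lines activated by that config.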
I searched for a long time and surprisingly found no satisfactory answer.
I have multiple modules/files in my Python project that I wrote unit tests for using unittest. The structure is such that I have production-modules module_A.py and module_B.py in one directory (say myproject/production) and corresponding test-files test_module_A.py and test_module_B.py in a sibling directory (say myproject/tests).
Now I have coverage.py installed and want to run all the tests associated with the project (i.e. all .py-files with the prefix test_ from the tests directory) and receive a coverage report showing the coverage for all the production-modules (module_A.py and module_B.py).
I figured out that I can do this by running the following commands from the myproject/tests directory:
coverage erase
coverage run -a --source myproject.production test_module_A.py
coverage run -a --source myproject.production test_module_B.py
coverage report
This gives me that nice table with all my production modules listed and their coverage results. So far so good.
But can I do this with just one command? Assuming I have not 2 but 20 or 200 tests that I want to include in one report, doing this "by hand" seems ridiculous.
There must be a way to automate this, but I can't seem to find it. Sure, a shell script might do it, but that is rather clumsy. I am thinking of something akin to unittest discover, but for coverage.py this doesn't seem to work.
Or could I accomplish this using the coverage-API somehow? So far I had no luck trying.
SOLUTION: (credit to Mr. Ned Batchelder)
From myproject/tests directory run:
coverage run --source myproject.production -m unittest discover && coverage report
One line, doing exactly what was needed.
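On the API question: the same flow can also be driven from Python. A minimal sketch, with a throwaway test directory standing in for myproject/tests (in the real project you would pass source=["myproject.production"] to Coverage and point discover() at your tests directory):

```python
import os
import tempfile
import textwrap
import unittest

import coverage  # pip install coverage

# throwaway tests directory so discover() has something to find;
# it stands in for myproject/tests
tests_dir = tempfile.mkdtemp()
with open(os.path.join(tests_dir, "test_demo.py"), "w") as f:
    f.write(textwrap.dedent('''\
        import unittest

        class DemoTest(unittest.TestCase):
            def test_truth(self):
                self.assertTrue(True)
    '''))

cov = coverage.Coverage(data_file=os.path.join(tests_dir, ".coverage"))
cov.start()
# equivalent of `python -m unittest discover` run under coverage
suite = unittest.defaultTestLoader.discover(tests_dir)
result = unittest.TextTestRunner().run(suite)
cov.stop()
cov.save()
```

After cov.save() you can call cov.report() (or run coverage report) to get the same table as the one-liner above.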
This should do it:
coverage run -m unittest discover
I have a Python application I'm running within a Docker container. That application is normally started with the command /usr/local/bin/foo_service, which is a Python entry point (so it's just a Python file).
I want to collect code coverage for this application while running functional tests against it. I've found that coverage run /usr/local/bin/foo_service works nicely and, once the application exits, outputs a coverage file whose report appears accurate.
However, this is in the single-process mode. The application has another mode that uses the multiprocessing module to fork two or more child processes. I'm not sure if this is compatible with the way I'm invoking coverage. I did coverage run --parallel-mode /usr/local/bin/foo_service -f 4, and it did output [one] coverage [file] without emitting any errors, but I don't know that this is correct. I half-expected it to output a coverage file per process, but I don't know that it should do that. I couldn't find much coverage (pardon the pun) of this topic in the documentation.
Will this work? Or do I need to forgo the coverage binary and instead use the coverage Python API within my forking code?
$ python --version
Python 3.7.4
$ coverage --version
Coverage.py, version 4.5.3 with C extension
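For what it's worth, coverage.py has documented support for multiprocessing: tell it about the concurrency model in its config file and keep parallel mode on, then combine the per-process data files before reporting. A sketch of the relevant config section:

```ini
# .coveragerc - see the coverage.py docs on the "concurrency" setting
[run]
parallel = True
concurrency = multiprocessing
```

With this, each process writes its own .coverage.* data file, and coverage combine merges them into a single .coverage before coverage report.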
I am using the vimrunner-python library to test my vim plugin, written in Python, with pytest and pytest-cov.
Vimrunner-python starts a vim server and controls a client vim instance via the server's remote interface.
However, pytest-cov (obviously) does not see the lines executed by the vim process. Is there a way to make this work, i.e. to point coverage at the vim server's PID?
You need to run the coverage measurement from the plugin itself, i.e. like this:
# Start measuring coverage if in testing
if vim.vars.get('measure_coverage'):
    import os
    import atexit
    import coverage

    coverage_path = os.path.expanduser(
        '~/coverage-data/.coverage.{0}'.format(os.getpid()))
    cov = coverage.coverage(data_file=coverage_path)
    cov.start()

    def save_coverage():
        cov.stop()
        cov.save()

    atexit.register(save_coverage)
If the plugin was invoked multiple times, you will need to combine the coverage files, using the coverage tool:
$ cd ~/coverage-data
$ coverage combine
This will generate a combined .coverage file, which can then be used to generate the desired report.
Note: Make sure you're executing the measurement part only once per vim instance, otherwise the coverage file might get overwritten. In that case, another source of uniqueness (e.g. a random number) besides the PID should be used to generate the name of the .coverage file.
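One way to add that extra uniqueness is to append a random hex suffix to the data file name; a stdlib-only sketch (the directory matches the answer above):

```python
import os
import uuid

# PID alone can repeat as vim instances come and go, so append a
# random hex suffix to keep every data file name unique
coverage_path = os.path.expanduser(
    '~/coverage-data/.coverage.{0}.{1}'.format(os.getpid(), uuid.uuid4().hex[:8])
)
```

coverage combine still picks these up, since it merges every .coverage.* file in the directory.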
I am using tox to test my Python egg, and I want to know the coverage.
The problem is that the tests are executed with Python 2 (2.6 and 2.7) and Python 3 (3.3), and some lines should only run under Python 2 while others should only run under Python 3; yet the report seems to count only the lines executed in the last tox section (py26-dj12). You can see this here:
https://coveralls.io/files/64922124#L33
The same happens across the different Django versions...
Is there some way to get the global coverage?
Yesterday I received an email answering this question:
coverage.py (the tool coveralls uses to measure coverage in Python programs) has a "coverage combine" command.
I got the global coverage by executing something like this:
coverage erase
tox
coverage combine
coveralls
In tox.ini I added the -p (parallel-mode) flag:
python {envbindir}/coverage run -p testing/run_tests.py
python {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug
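Put together, the relevant tox.ini section might look something like this (the section layout is an assumption based on the snippets above; -p gives each interpreter its own .coverage.* data file, which coverage combine later merges):

```ini
# tox.ini (sketch)
[testenv]
deps = coverage
commands =
    {envbindir}/coverage run -p testing/run_tests.py
    {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug
```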
I fixed the problem with these commits:
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/200d58b2170b9122369df73fbfe12ceeb8efd36c
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/bf0a7dcfc935dedda2f23d5e01964e27f01c7461