I searched for a long time and surprisingly found no satisfactory answer.
I have multiple modules/files in my Python project that I wrote unit tests for using unittest. The structure is such that I have production modules module_A.py and module_B.py in one directory (say myproject/production) and corresponding test files test_module_A.py and test_module_B.py in a sibling directory (say myproject/tests).
Now I have coverage.py installed and want to run all the tests associated with the project (i.e. all .py files with the prefix test_ from the tests directory) and receive a coverage report showing the coverage for all the production modules (module_A.py and module_B.py).
I figured out that I can do this by running the following commands from the myproject/tests directory:
coverage erase
coverage run -a --source myproject.production test_module_A.py
coverage run -a --source myproject.production test_module_B.py
coverage report
This gives me that nice table with all my production modules listed and their coverage results. So far so good.
But can I do this with just one command? Assuming I have not 2 but 20 or 200 tests that I want to include in one report, doing this "by hand" seems ridiculous.
There must be a way to automate this, but I can't seem to find it. Sure, a shell script might do it, but that is rather clumsy. I am thinking of something akin to unittest discover, but for coverage.py that doesn't seem to work.
Or could I accomplish this using the coverage API somehow? So far I've had no luck trying.
SOLUTION: (credit to Mr. Ned Batchelder)
From the myproject/tests directory, run:
coverage run --source myproject.production -m unittest discover && coverage report
One line, doing exactly what was needed.
This should do it:
coverage run -m unittest discover
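As for the coverage API route asked about in the question, here is a minimal sketch (the source value matches this question's layout; the rest is the documented coverage and unittest API):

import coverage
import unittest

cov = coverage.Coverage(source=["myproject.production"])
cov.start()                                   # begin recording executed lines

# discover and run every test_*.py file below the current directory
suite = unittest.defaultTestLoader.discover(".")
unittest.TextTestRunner().run(suite)

cov.stop()                                    # stop recording
cov.save()                                    # write the .coverage data file
cov.report()                                  # print the same table as "coverage report"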
I have a custom-built integration test suite in Python, which is technically just a run of python my_script.py --config=config.json. I want to compare different configs in terms of what fraction of the lines of code in my project each one activates.
The specific content of my_script.py is not relevant - it is a launch point that parses the config, then imports and calls functions defined in multiple files from the ./src folder.
I know tools to measure coverage in pytest, e.g. coverage.py; however, is there a way to measure the coverage of a non-test Python run?
Coverage.py doesn't care whether you are running tests or not. You can use it to run any Python program. Just replace python with python -m coverage run.
Since your usual command line is:
python my_script.py --config=config.json
try this:
python -m coverage run my_script.py --config=config.json
Then report on the data with coverage report -m or coverage html.
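If the goal is to compare two configs, the data from each run can be kept in separate files via the documented COVERAGE_FILE environment variable and reported separately (config_a.json and config_b.json are hypothetical names; src matches the question's source folder):

COVERAGE_FILE=.coverage_a python -m coverage run --source=src my_script.py --config=config_a.json
COVERAGE_FILE=.coverage_b python -m coverage run --source=src my_script.py --config=config_b.json
COVERAGE_FILE=.coverage_a coverage report
COVERAGE_FILE=.coverage_b coverage report

The two reports then give the fraction of lines each config activates.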
I'm setting up a Python Continuous Integration server, using Jenkins, and nosetests keeps running the same tests twice. I'm not importing the tests anywhere. Here's the command I'm running:
nosetests --with-xcoverage --with-xunit --all-modules --traverse-namespace --cover-package=app --cover-inclusive --cover-erase -x
Any ideas? It's a Flask-Restful app.
I had a similar issue. After turning up verbosity (as suggested by Schollii above) and comparing notes on this question, what worked for me was deleting the __init__.py (and __init__.pyc of course) in my main code folder (of which tests were a subdirectory).
I just had this one. Apparently I messed up the command line syntax. It's not:
nosetests module.py module.class_name
It's:
nosetests module.class_name
I've successfully installed and configured django-nose with coverage.
Problem is that if I just run coverage for ./manage.py shell and exit out of that shell - it shows me 37% code coverage. I fully understand that executed code doesn't mean tested code. My only question is -- what now?
What I'm envisioning is being able to import all the Python modules and "settle down" before executing any tests, and then directly telling coverage: "Ok, start counting reached code here."
Ideally this would be done by nose essentially resetting the "touched" lines of code right before executing each test suite.
I don't know where to start looking/developing. I've searched online and haven't found anything fruitful. Any help/guidelines would be greatly appreciated.
P.S.
I tried executing something like this:
DJANGO_SETTINGS_MODULE=app.settings_dev coverage run app/tests/gme_test.py
And it worked (showed 1% coverage), but I can't figure out how to do this for the entire app.
Edit: Here's my coverage config:
[run]
source = .
branch = False
timid = True
[report]
show_missing = False
include = *.py
omit =
tests.py
*_test.py
*_tests.py
*/site-packages/*
*/migrations/*
[html]
title = Code Coverage
directory = local_coverage_report
Since you use django-nose, you have two options for how to run coverage. The first was already pointed out by DaveB:
coverage run ./manage.py test myapp
The above actually runs coverage, which then monitors all code executed by the test command.
But then, there is also a nose coverage plugin included by default in the django-nose package (http://nose.readthedocs.org/en/latest/plugins/cover.html). You can use it like this:
./manage.py test myapp --with-coverage
(There are also some additional options, like which modules should be covered, whether to include an HTML report, etc. These are all documented in the above link - you can also type ./manage.py test --help for some quick info.)
Running the nose coverage plugin will result in coverage starting after the Django bootstrapping code has executed, and therefore the corresponding code will not be reported as covered.
Most of the code you see reported as covered when running coverage the original way is import statements, class definitions, class members etc. As Python evaluates them at import time, coverage will naturally mark them as covered. However, running the nose plugin will not report bootstrapping code as covered, since the test runner starts after the Django environment is loaded. Of course, a side effect of this is that you can never achieve 100% coverage (...or close :)) as your global-scope statements will never get covered.
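To illustrate with a hypothetical module (not from the question): when Python imports this file, every top-level statement executes, so coverage marks those lines as run before any test has touched the module:

# mymodule.py - hypothetical example
GREETING = "hello"              # executes at import time -> marked covered

class Greeter:                  # class statement executes at import -> covered
    def greet(self, name):      # the def line executes at import -> covered
        return "%s, %s" % (GREETING, name)  # body runs only when greet() is called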
After switching back and forth and playing around with coverage options, I now have ended up using coverage like this:
coverage run --source=myapp,anotherapp --omit=*/migrations/* ./manage.py test
so that
a. coverage will report import statements, class member definitions etc. as covered (which is actually the truth - this code was successfully imported and interpreted)
b. it will only cover my code and not Django code or any other third-party app I use; the coverage percentage will reflect how well my project is covered.
Hope this helps!
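For reference, the same options can live in a .coveragerc file, so a plain coverage run ./manage.py test behaves identically (myapp and anotherapp are the placeholder app names from the command above):

[run]
source = myapp,anotherapp
omit = */migrations/*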
The "Ok, start counting reached code here." can be done through the API of the coverage module. You can check this out through the shell. Stole directly from http://nedbatchelder.com/code/coverage/api.html:
import coverage
cov = coverage.coverage()  # spelled coverage.Coverage() in current releases
cov.start()                # from here on, executed lines are recorded
# .. call your code ..
cov.stop()                 # stop recording
cov.save()                 # write the collected data to the .coverage file
cov.html_report()          # or cov.report() for a terminal table
You can make your own test runner to do exactly this, matched to your needs (some would consider coverage generated by any unit test to be OK, while others would only accept the coverage of a unit caused by the unit tests for that unit).
I had the same issue. I saved some time by creating a .coveragerc file that specified options similar to those outlined in the bounty-awarded answer.
Now running 'coverage run manage.py test' and then 'coverage report -m' will show me the coverage report and the lines that aren't covered.
(See here for details on the .coveragerc file: http://nedbatchelder.com/code/coverage/config.html)
I'm a bit confused by what you are trying to achieve here.
Testing in Django is covered very well here: https://docs.djangoproject.com/en/dev/topics/testing/overview/
You write tests in your app as tests.py - I don't see the need for nose, as the standard Django way is pretty simple.
Then run them as coverage run ./manage.py test main - where 'main' is your app
Specify the source files for your code as documented here: http://nedbatchelder.com/code/coverage/cmd.html so that only your code is counted
e.g. coverage run --source=main ./manage.py test main
You'll still get a certain percentage marked as covered with the simple test provided as an example. This is because parts of your code are executed in order to start up the server, e.g. definitions in modules etc.
I know I can list what tests would run with nosetests --collect-only, and run a particular test with nosetests path/to/module:TestClass.test_method. But I don't know how to combine these two steps: the output of "--collect-only" mode prints the test docstrings, which is not usable for the other syntax.
I would like to do something like this somewhere in my bash script:
#!/bin/bash
nosetests --some-mode | while read test_spec; do
    nosetests "$test_spec"
    # i.e. nosetests test/SomeTest:ATestSomeClass.test_something
    # and then do something else with $? and $test_spec
done
So is there "--some-mode" like this? Or another way to obtain list of test_specs?
Background: I have a test suite from an upstream project which is laid out to be run by simply calling nosetests. However, in our situation it would make a lot of sense to perform the tests separately (even at the cost of losing the ability to parallelize).
I could capture the output and parse it, but that's dirty and would not allow early termination.
You can also use nosetests --with-xunit to output an XUnit-formatted XML representation of the test results, all of which will pass when --collect-only is used. You'll have nosetests.xml to work with, so you don't have to rely on stdout.
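A minimal sketch of turning that file into nose test specs, assuming the usual xunit layout where the classname attribute is the dotted module path plus the test class:

import xml.etree.ElementTree as ET

tree = ET.parse("nosetests.xml")
for case in tree.getroot().iter("testcase"):
    # e.g. classname="tests.test_module.TestClass", name="test_something"
    module, cls = case.get("classname").rsplit(".", 1)
    print("%s:%s.%s" % (module, cls, case.get("name")))

Each printed line should then be usable as the $test_spec in the loop above.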
I have put together a Perl script that naively parses the debug output of nosetests -vvv --collect-only and reports it so that it can be used as above (noselist | while read test_spec; ...).
It works for me for now, although it's kind of a hack, so I'd rather have nosetests be able to do this itself, or have a saner utility script, e.g. one using Nose's internal library.