I am using the vimrunner-python library to test my vim plugin, which is written in Python, with py.test and pytest-cov.
vimrunner-python starts a vim server and controls a client vim instance through the server's remote interface.
However, pytest-cov (obviously) does not see the lines executed by the vim process. Is there a way to make this work, i.e. to point coverage at the vim server's PID?
You need to run the coverage measurement from the plugin itself, i.e. like this:
# Start measuring coverage if in testing
if vim.vars.get('measure_coverage'):
    import os
    import atexit
    import coverage

    coverage_path = os.path.expanduser('~/coverage-data/.coverage.{0}'.format(os.getpid()))
    cov = coverage.coverage(data_file=coverage_path)
    cov.start()

    def save_coverage():
        cov.stop()
        cov.save()

    atexit.register(save_coverage)
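To flip that flag only for test runs, one option (just a sketch - how you pass it depends on how your test harness launches vim, and the server name below is purely illustrative) is to set the variable on vim's command line, since --cmd runs before any plugin code:

$ vim --servername VIMRUNNER --cmd 'let g:measure_coverage = 1'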
If the plugin was invoked multiple times, you will need to combine the coverage files, using the coverage tool:
$ cd ~/coverage-data
$ coverage combine
This will generate a combined .coverage file, which can then be used to generate the desired report.
Note: Make sure you're executing the measurement part only once per vim instance, otherwise the coverage file might get overwritten. In that case, another source of uniqueness (e.g. a random number) besides the PID should be used to generate the name of the .coverage file.
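If you do need that extra uniqueness, a small sketch of what it could look like (uuid is just one convenient source of randomness, not something the plugin requires):

import os
import uuid

# Combine the PID with a random suffix so repeated activations inside the
# same vim process never overwrite each other's data file.
coverage_path = os.path.expanduser(
    '~/coverage-data/.coverage.{0}.{1}'.format(os.getpid(), uuid.uuid4().hex))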
I searched for a long time and surprisingly found no satisfactory answer.
I have multiple modules/files in my Python project that I wrote unit tests for using unittest. The structure is such that I have the production modules module_A.py and module_B.py in one directory (say myproject/production) and the corresponding test files test_module_A.py and test_module_B.py in a sibling directory (say myproject/tests).
Now I have coverage.py installed and want to run all the tests associated with the project (i.e. all .py files with the prefix test_ from the tests directory) and receive a coverage report showing the coverage for all the production modules (module_A.py and module_B.py).
I figured out that I can do this by running the following commands from the myproject/tests directory:
coverage erase
coverage run -a --source myproject.production test_module_A.py
coverage run -a --source myproject.production test_module_B.py
coverage report
This gives me that nice table with all my production modules listed and their coverage results. So far so good.
But can I do this with just one command? Assuming I have not 2 but 20 or 200 tests that I want to include in one report, doing this "by hand" seems ridiculous.
There must be a way to automate this, but I can't seem to find it. Sure, a shell script might do it, but that is rather clumsy. I am thinking of something akin to unittest discover, but for coverage.py this doesn't seem to work.
Or could I accomplish this using the coverage API somehow? So far I've had no luck trying.
SOLUTION: (credit to Mr. Ned Batchelder)
From the myproject/tests directory, run:
coverage run --source myproject.production -m unittest discover && coverage report
One line, doing exactly what was needed.
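For the coverage-API part of the question, a rough sketch of the same workflow driven programmatically (untested; the source and directory names are simply the ones from the question):

import unittest
import coverage

cov = coverage.Coverage(source=['myproject.production'])
cov.start()

# Discover and run everything matching test_*.py, as unittest discover would.
suite = unittest.defaultTestLoader.discover('.')  # run from myproject/tests
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()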
This should do it:
coverage run -m unittest discover
I have a custom-built integration test suite in Python, which is technically just a run of python my_script.py --config=config.json. I want to compare different configs in terms of what fraction of the lines of code in my project they exercise.
The specific content of my_script.py is not relevant - it is a launch point that parses the config, then imports and calls functions defined in multiple files from the ./src folder.
I know of tools to measure coverage under pytest, e.g. coverage.py; however, is there a way to measure coverage of a non-test Python run?
Coverage.py doesn't care whether you are running tests or not. You can use it to run any Python program. Just replace python with python -m coverage run.
Since your usual command line is:
python my_script.py --config=config.json
try this:
python -m coverage run my_script.py --config=config.json
Then report on the data with coverage report -m or coverage html.
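Since the goal here is to compare configs, one way to keep the runs separate (a sketch; COVERAGE_FILE just selects the data file, and config_a.json / config_b.json stand in for whichever configs you want to compare):

COVERAGE_FILE=.coverage.config_a python -m coverage run my_script.py --config=config_a.json
COVERAGE_FILE=.coverage.config_b python -m coverage run my_script.py --config=config_b.json
COVERAGE_FILE=.coverage.config_a coverage report -m
COVERAGE_FILE=.coverage.config_b coverage report -m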
I have a Python application I'm running within a Docker container. That application is normally started with the command /usr/local/bin/foo_service, which is a Python entry point (so it's just a Python file).
I want to collect code coverage for this application while running functional tests against it. I've found that coverage run /usr/local/bin/foo_service works nicely and, once the application exits, outputs a coverage file whose report appears accurate.
However, this is in the single-process mode. The application has another mode that uses the multiprocessing module to fork two or more child processes. I'm not sure if this is compatible with the way I'm invoking coverage. I did coverage run --parallel-mode /usr/local/bin/foo_service -f 4, and it did output [one] coverage [file] without emitting any errors, but I don't know that this is correct. I half-expected it to output a coverage file per process, but I don't know that it should do that. I couldn't find much coverage (pardon the pun) of this topic in the documentation.
Will this work? Or do I need to forego using the coverage binary and instead use the coverage Python API within my forking code?
$ python --version
Python 3.7.4
$ coverage --version
Coverage.py, version 4.5.3 with C extension
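For what it's worth, coverage.py does have explicit support for multiprocessing workers via its concurrency setting; a minimal .coveragerc sketch that could be tried here (untested against this particular service):

[run]
parallel = True
concurrency = multiprocessing

With parallel = True each process writes its own .coverage.* data file, and coverage combine merges them before coverage report.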
I'm fairly new to Python, trying to learn the toolsets.
I've figured out how to get py.test -f to watch my tests as I code. One thing I haven't been able to figure out is if there's a way to do a smarter watcher, that works like Ruby's Guard library.
Using guard + minitest the behavior I get is if I save a file like my_class.rb then my_class_test.rb is executed, and if I hit enter in the cli it runs all tests.
With pytest, so far I haven't been able to figure out a way to run only the test file corresponding to the last touched file, so that I don't have to wait for the entire test suite to run before getting the current file to pass.
How would you pythonistas go about that?
Thanks!
One possibility is using pytest-testmon together with pytest-watch.
It uses coverage.py to track which test touches which lines of code, and as soon as you change a line of code, it re-runs all tests which execute that line in some way.
To add to @The Compiler's answer above, you can get pytest-testmon and pytest-watch to play together by using pytest-watch's --runner option:
ptw --runner "pytest --testmon"
Or simply:
ptw -- --testmon
There is also pytest-xdist which has a feature called:
--looponfail: run your tests repeatedly in a subprocess. After each run py.test waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass after which again a full run is performed.
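That is, in fact, the feature behind the py.test -f flag mentioned in the question; spelled out (assuming pytest-xdist is installed):

pytest --looponfail    # same as pytest -f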
The fastest setup I got was when I combined @lmiguelvargasf's, @BenR's and @TheCompiler's answers into this:
ptw --runner "pytest --picked --testmon"
You first need to have them installed:
pip3 install pytest-picked pytest-testmon pytest-watch
If you are using git for version control, you could consider using pytest-picked. This is a plugin that, according to the docs:
Run the tests related to the unstaged files or the current branch
Basic features
Run only tests from modified test files
Run tests from modified test files first, followed by all unmodified tests
Usage
pytest --picked
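The plugin also has a branch mode (per its README; check the version you have installed):

pytest --picked --mode=branch    # tests related to files changed on the current branch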
I've successfully installed and configured django-nose with coverage
The problem is that if I just run coverage for ./manage.py shell and exit out of that shell, it shows me 37% code coverage. I fully understand that executed code doesn't mean tested code. My only question is -- what now?
What I'm envisioning is being able to import all the python modules and "settle down" before executing any tests, and directly communicating with coverage saying "Ok, start counting reached code here."
Ideally this would be done by nose essentially resetting the "touched" lines of code right before executing each test suite.
I don't know where to start looking/developing. I've searched online and haven't found anything fruitful. Any help/guidelines would be greatly appreciated.
P.S.
I tried executing something like this:
DJANGO_SETTINGS_MODULE=app.settings_dev coverage run app/tests/gme_test.py
And it worked (it showed 1% coverage), but I can't figure out how to do this for the entire app.
Edit: Here's my coverage config:
[run]
source = .
branch = False
timid = True
[report]
show_missing = False
include = *.py
omit =
tests.py
*_test.py
*_tests.py
*/site-packages/*
*/migrations/*
[html]
title = Code Coverage
directory = local_coverage_report
Since you use django-nose, you have two options for how to run coverage. The first was already pointed out by DaveB:
coverage run ./manage.py test myapp
The above actually runs coverage, which then monitors all the code executed by the test command.
But then, there is also a nose coverage plugin included by default in the django-nose package (http://nose.readthedocs.org/en/latest/plugins/cover.html). You can use it like this:
./manage.py test myapp --with-coverage
(There are also some additional options, such as which modules should be covered and whether to include an HTML report. These are all documented in the link above - you can also type ./manage.py test --help for some quick info.)
Running the nose coverage plugin will result in coverage starting after the django bootstrapping code has executed, and therefore the corresponding code will not be reported as covered.
Most of the code you see reported as covered when running coverage the original way is import statements, class definitions, class members etc. As python evaluates them at import time, coverage will naturally mark them as covered. However, running the nose plugin will not report bootstrapping code as covered, since the test runner starts after the django environment is loaded. Of course, a side effect of this is that you can never achieve 100% coverage (...or close :)) as your global scope statements will never get covered.
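A tiny illustration of the difference (my own example, not from the original answer): importing a module executes its top-level statements, so they count as covered even if no test ever calls into the module:

# mymodule.py (hypothetical)
DEFAULT_TIMEOUT = 30                      # runs at import time -> marked as covered

def fetch(url, timeout=DEFAULT_TIMEOUT):
    # The body is only marked as covered when fetch() is actually called,
    # e.g. from a test; merely importing mymodule does not execute it.
    return (url, timeout)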
After switching back and forth and playing around with coverage options, I now have ended up using coverage like this:
coverage run --source=myapp,anotherapp --omit=*/migrations/* ./manage.py test
so that
a. coverage will report import statements, class member definitions etc. as covered (which is actually the truth - this code was successfully imported and interpreted)
b. it will only cover my code and not django code or any other third-party app I use; the coverage percentage will reflect how well my project is covered.
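The same options can also live in a .coveragerc instead of on the command line (a sketch; myapp and anotherapp are just the placeholder app names from the command above):

[run]
source = myapp,anotherapp
omit = */migrations/*

With that in place, a plain coverage run ./manage.py test picks the settings up automatically.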
Hope this helps!
The "Ok, start counting reached code here." can be done through the API of the coverage module. You can check this out through the shell. Stole directly from http://nedbatchelder.com/code/coverage/api.html:
import coverage
cov = coverage.coverage()
cov.start()
# .. call your code ..
cov.stop()
cov.save()
cov.html_report()
You can write your own test runner to do exactly this, matched to your needs (some would consider coverage produced by any unit test to be OK, while others would only accept the coverage of a unit caused by the unit test for that unit).
I had the same issue. I saved some time by creating a .coveragerc file that specified options similar to those outlined in the bounty-awarded answer.
Now running 'coverage run manage.py test' and then 'coverage report -m' will show me the coverage report and the lines that aren't covered.
(See here for details on the .coveragerc file: http://nedbatchelder.com/code/coverage/config.html)
I'm a bit confused by what you are trying to achieve here.
Testing in Django is covered very well here: https://docs.djangoproject.com/en/dev/topics/testing/overview/
You write tests in your app as tests.py - I don't see the need for nose, as the standard django way is pretty simple.
Then run them as coverage run ./manage.py test main - where 'main' is your app
Specify the source files for your code as documented here: http://nedbatchelder.com/code/coverage/cmd.html so that only your code is counted
e.g. coverage run --source=main ./manage.py test main
You'll still get a certain percentage marked as covered with the simple test provided as an example. This is because parts of your code are executed in order to start up the server, e.g. definitions in modules etc.