Running Python coverage collection around a process that forks

I have a Python application I'm running within a Docker container. That application is normally started with the command /usr/local/bin/foo_service, which is a Python entry point (so it's just a Python file).
I want to collect code coverage for this application while running functional tests against it. I've found that coverage run /usr/local/bin/foo_service works nicely and, once the application exits, outputs a coverage file whose report appears accurate.
However, this is in the single-process mode. The application has another mode that uses the multiprocessing module to fork two or more child processes. I'm not sure whether this is compatible with the way I'm invoking coverage. I did coverage run --parallel-mode /usr/local/bin/foo_service -f 4, and it did output one coverage file without emitting any errors, but I don't know whether that is correct. I half-expected it to output a coverage file per process, but I don't know that it should do that. I couldn't find much coverage (pardon the pun) of this topic in the documentation.
Will this work? Or do I need to forgo using the coverage binary and instead use the coverage Python API within my forking code?
$ python --version
Python 3.7.4
$ coverage --version
Coverage.py, version 4.5.3 with C extension
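For what it's worth, coverage.py 4.0+ can follow children created by the multiprocessing module, but that support has to be enabled in the configuration file, since the command-line options are not passed along to the child processes. A minimal sketch (not verified against foo_service itself):
# .coveragerc
[run]
parallel = True
concurrency = multiprocessing

$ coverage run /usr/local/bin/foo_service -f 4
$ coverage combine
$ coverage report -m
With that in place, each child writes its own .coverage.* data file, and coverage combine merges them into a single .coverage file for reporting.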

Related

How to measure coverage of a non-test run of python

I have a custom-built integration test suite in Python, which is technically just a run of python my_script.py --config=config.json. I want to compare different configs in terms of what fraction of the lines of code in my project they exercise.
The specific content of my_script.py is not relevant: it is a launch point that parses the config, then imports and calls functions defined in multiple files under the ./src folder.
I know of tools that measure coverage under pytest, e.g. coverage.py; however, is there a way to measure coverage of a non-test Python run?
Coverage.py doesn't care whether you are running tests or not; you can use it to run any Python program. Just replace python with python -m coverage run.
Since your usual command line is:
python my_script.py --config=config.json
try this:
python -m coverage run my_script.py --config=config.json
Then report on the data with coverage report -m or coverage html.
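Putting it together for the invocation above, something like this should give a per-config picture:
$ python -m coverage run my_script.py --config=config.json
$ coverage report -m
$ coverage html
coverage html writes its report to an htmlcov/ directory by default, which makes it easy to compare the fraction of lines exercised by different configs.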

Debugging python tests in TensorFlow

We want to debug Python tests in TensorFlow, such as sparse_split_op_test and string_to_hash_bucket_op_test.
We could debug the other C++ tests using gdb; however, we cannot find a way to debug the Python tests.
Is there a way to debug a specific Python test case run via the Bazel test command (for example, bazel test //tensorflow/python/kernel_tests:sparse_split_op_test)?
I would first build the test:
bazel build //tensorflow/python/kernel_tests:sparse_split_op_test
Then use pdb on the resulting Python binary:
pdb bazel-bin/tensorflow/python/kernel_tests/sparse_split_op_test
That seems to work for me; I was able to step through the first few lines of the test.
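If a standalone pdb command isn't available on your PATH, the same idea works by running the built test under the pdb module (a sketch, using the paths from above):
$ python -m pdb bazel-bin/tensorflow/python/kernel_tests/sparse_split_op_test
From the (Pdb) prompt you can set breakpoints with b file:line and continue with c.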

Run a python script from bamboo

I'm trying to run a Python script from Bamboo. I created a Script task and wrote the inline command python myFile.py. Should I be listing the full path to python?
I changed the working directory to the location of myFile.py, so that is not the problem. Is there anything else I need to do within the plan configuration to run this script properly? It isn't running, but I know it should, because the script works fine from the terminal on my local machine. Thanks
I run a lot of Python tasks from Bamboo, so it is possible. Using the Script task is generally painless...
You should be able to use your Script task to run commands directly and have stdout written to the logs. That means you can run:
which python -- outputs the path of the Python interpreter being used.
pip list -- outputs the list of modules installed with pip.
Verify that the output of those commands in Bamboo matches what you get when you run them on the server yourself. I'm guessing they won't match up, and once that is addressed, everything will work fine.
If not, comment back and we can look at a few other things.
For the future, there are a handful of ways to package things with Python that could help with this problem (e.g. automatically installing missing modules).
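As a concrete starting point, an inline Script task body along the lines of the commands above might look like this (a sketch; adjust interpreter and file paths for your agent):
which python
python --version
pip list
python myFile.py
Comparing that log output against the same commands on your local machine usually makes the mismatch obvious.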
You can also run an inline Python script directly in the Script task; for example:
/usr/bin/python <<EOF
print("Hello, World!")
EOF
Check this page for a more complex example:
https://www.langhornweb.com/display/BAT/Run+Python+script+as+a+Bamboo+task?desktop=true&macroName=seo-metadata

Is there a way to test coverage of a vim plugin?

I am using the vimrunner-python library to test my Vim plugin, which is written in Python, with pytest and pytest-cov.
vimrunner-python starts a Vim server and controls a client Vim instance via the server's remote interface.
However, pytest-cov (obviously) does not see the lines executed by the Vim process. Is there a way to make this work, i.e. to point coverage at the Vim server's PID?
You need to run the coverage measurement from the plugin itself, i.e. like this:
# Start measuring coverage if in testing
if vim.vars.get('measure_coverage'):
    import os
    import atexit
    import coverage

    coverage_path = os.path.expanduser('~/coverage-data/.coverage.{0}'.format(os.getpid()))
    cov = coverage.coverage(data_file=coverage_path)
    cov.start()

    def save_coverage():
        cov.stop()
        cov.save()

    atexit.register(save_coverage)
If the plugin was invoked multiple times, you will need to combine the coverage files, using the coverage tool:
$ cd ~/coverage-data
$ coverage combine
This will generate a combined .coverage file, which can then be used to generate the desired report.
Note: Make sure you're executing the measurement part only once per Vim instance, otherwise the coverage file might get overwritten. In that case, another source of uniqueness (e.g. a random number) beyond the PID should be used to generate the name of the .coverage file.
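A sketch of such a name, combining the PID with a random suffix (uuid here is just one illustrative source of uniqueness; coverage combine will still pick the file up, since its name starts with .coverage.):
import os
import uuid

coverage_path = os.path.expanduser(
    '~/coverage-data/.coverage.{0}.{1}'.format(os.getpid(), uuid.uuid4().hex))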

How to calculate the global coverage?

I am using tox to test my Python egg, and I want to know the coverage.
The problem is that the tests run under Python 2 (2.6 and 2.7) and Python 3 (3.3), and some lines are only executed under Python 2 while others are only executed under Python 3, but it looks as though only the lines executed under Python 2 are counted (the last section in the tox run, py26-dj12). You can see this here:
https://coveralls.io/files/64922124#L33
The same thing happens with the different Django versions...
Is there some way to get the global coverage?
Yesterday I received an email answering this question:
coverage.py (the tool coveralls uses to measure coverage in Python programs) has a "coverage combine" command.
Yesterday, I got the global coverage by executing something like this:
coverage erase
tox
coverage combine
coveralls
In tox.ini I added the -p parameter:
python {envbindir}/coverage run -p testing/run_tests.py
python {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug
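In context, those lines sit in the commands list of each tox test environment, roughly like this (a sketch; the exact configuration is in the commits linked below):
[testenv]
commands =
    python {envbindir}/coverage run -p testing/run_tests.py
    python {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug
The -p flag is the short form of --parallel-mode, so each run writes its own .coverage.* data file and coverage combine can merge the Python 2 and Python 3 results into one report.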
I fixed the problem with these commits:
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/200d58b2170b9122369df73fbfe12ceeb8efd36c
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/bf0a7dcfc935dedda2f23d5e01964e27f01c7461
