We want to debug Python tests in TensorFlow, such as sparse_split_op_test and string_to_hash_bucket_op_test.
We could debug the C++ tests using gdb; however, we cannot find a way to debug the Python tests.
Is there a way to debug a specific Python test case run via the Bazel test command (for example, bazel test //tensorflow/python/kernel_tests:sparse_split_op_test)?
I would first build the test:
bazel build //tensorflow/python/kernel_tests:sparse_split_op_test
Then run the resulting Python binary under pdb:
python -m pdb bazel-bin/tensorflow/python/kernel_tests/sparse_split_op_test
That seems to work for me for stepping through the first few lines of the test.
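For example, a minimal pdb session might look like this (the breakpoint location and the inspected variable are hypothetical):
$ python -m pdb bazel-bin/tensorflow/python/kernel_tests/sparse_split_op_test
(Pdb) b sparse_split_op_test.py:42
(Pdb) c
(Pdb) p sp_input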
I have a custom-built integration test suite in Python, which is technically just a run of python my_script.py --config=config.json. I want to compare different configs in terms of what fraction of the lines of code in my project gets executed.
The specific content of my_script.py is not relevant; it is a launch point that parses the config, then imports and calls functions defined in multiple files under the ./src folder.
I know of tools for measuring coverage under pytest, e.g. coverage.py; however, is there a way to measure the coverage of a non-test Python run?
Coverage.py doesn't care whether you are running tests or not; you can use it to run any Python program. Just replace python with python -m coverage run.
Since your usual command line is:
python my_script.py --config=config.json
try this:
python -m coverage run my_script.py --config=config.json
Then report on the data with coverage report -m or coverage html.
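Since the goal is to compare configs, one option is to keep a separate data file per run via the COVERAGE_FILE environment variable and report on each (config_a.json and config_b.json below are hypothetical names):
$ COVERAGE_FILE=.coverage.a python -m coverage run my_script.py --config=config_a.json
$ COVERAGE_FILE=.coverage.b python -m coverage run my_script.py --config=config_b.json
$ COVERAGE_FILE=.coverage.a python -m coverage report -m
$ COVERAGE_FILE=.coverage.b python -m coverage report -m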
I have a Python application I'm running within a Docker container. That application is normally started with the command /usr/local/bin/foo_service, which is a Python entry point (so it's just a Python file).
I want to collect code coverage for this application while running functional tests against it. I've found that coverage run /usr/local/bin/foo_service works nicely and, once the application exits, outputs a coverage file whose report appears accurate.
However, this is in single-process mode. The application has another mode that uses the multiprocessing module to fork two or more child processes. I'm not sure whether this is compatible with the way I'm invoking coverage. I ran coverage run --parallel-mode /usr/local/bin/foo_service -f 4, and it did output [one] coverage [file] without emitting any errors, but I don't know that this is correct. I half-expected it to output one coverage file per process, but I don't know that it should do that. I couldn't find much coverage (pardon the pun) of this topic in the documentation.
Will this work? Or do I need to forgo the coverage binary and instead use the coverage Python API within my forking code?
$ python --version
Python 3.7.4
$ coverage --version
Coverage.py, version 4.5.3 with C extension
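For reference, coverage.py documents explicit multiprocessing support via its configuration file; a minimal sketch, assuming a .coveragerc next to the service:
# .coveragerc
[run]
parallel = True
concurrency = multiprocessing
With that in place, run the service as before and merge the per-process data files once it exits:
$ coverage run /usr/local/bin/foo_service -f 4
$ coverage combine
$ coverage report -m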
My friend has the following setup for his Nose tests. Basically, he sees a "Script" option for the test, but I do not.
IDE information: PyCharm CE 2017.2
[Screenshot: friend's setup with the test/Script option]
My setup is as follows.
[Screenshot: my setup without the test option]
Question: how do I get "test" instead of "target", and "Script" instead of "python/path/custom"?
I am debugging decode_raw_op_test from TensorFlow. The test file is written in Python; however, it executes code from the underlying C++ files.
Using pdb, I can debug the Python test file, but it doesn't recognize the C++ files. Is there a way to debug the underlying C++ code?
(I tried using gdb on decode_raw_op_test but it gives "File not in executable format: File format not recognized")
Debugging a mixed Python and C++ program is tricky. However, you can use gdb to debug the C++ parts of TensorFlow. There are two main ways to do this:
1. Run python under gdb, rather than the test script itself. Let's say that your test script is in bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test. You would run the following command:
$ gdb --args python bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test
(gdb) run
Note that gdb does not have great support for debugging the Python parts of the code. I'd recommend narrowing down the test case that you run to a single, simple test, and setting a breakpoint on a TensorFlow C API method, such as TF_Run, which is the main entry point from Python into C++ in TensorFlow.
2. Attach gdb to a running process. You can get the process ID of the Python test using ps, and then run (where $PID is the process ID):
$ gdb -p $PID
You will probably need to arrange for your Python code to block so that there's time to attach. Calling the raw_input() function (input() in Python 3) is an easy way to do this.
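A minimal sketch of such a pause, added near the top of the test (it prints the PID so you know which process to attach to; raw_input() is Python 2, use input() on Python 3):
import os
print("PID: %d" % os.getpid())
raw_input("Attach gdb with 'gdb -p <PID>', then press Enter to continue... ")
After attaching, set the breakpoint and resume:
(gdb) break TF_Run
(gdb) continue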
I could debug using the steps below:
gdb python
then on gdb prompt, type
run bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test
Adding to mrry's answer: in today's TF2 environment, the main entry point is TFE_Execute, so this is where you should add the breakpoint.
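For example, combining this with the gdb steps above (assuming a build with debug symbols available):
$ gdb python
(gdb) break TFE_Execute
(gdb) run bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test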
I am using tox to test my Python egg, and I want to know the coverage.
The problem is that the tests run under Python 2 (2.6 and 2.7) and Python 3 (3.3), and some lines are only executed under Python 2 while others are only executed under Python 3; but it looks as if only the lines executed in the last tox section (py26-dj12), i.e. under Python 2, are counted. You can see this here:
https://coveralls.io/files/64922124#L33
This way it passes with the different Django versions...
Is there some way to get the global coverage?
Yesterday I received an email answering this question:
coverage.py (the tool coveralls uses to measure coverage in Python programs) has a "coverage combine" command.
Using that, I got the global coverage by executing something like this:
coverage erase
tox
coverage combine
coveralls
In tox.ini I added the -p flag:
python {envbindir}/coverage run -p testing/run_tests.py
python {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug
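For context, -p is short for --parallel-mode: each run writes its data to a distinct file (suffixed with the machine name, process ID, and a random number), which coverage combine later merges into a single .coverage file. A sketch of how the relevant tox.ini section might look (the [testenv] name is an assumption; only the commands are from my setup):
[testenv]
commands =
    python {envbindir}/coverage run -p testing/run_tests.py
    python {envbindir}/coverage run -p testing/run_tests.py testing.settings_no_debug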
I fixed the problem with these commits:
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/200d58b2170b9122369df73fbfe12ceeb8efd36c
https://github.com/Yaco-Sistemas/django-inplaceedit/commit/bf0a7dcfc935dedda2f23d5e01964e27f01c7461