I know I can list the tests that would run with nosetests --collect-only, and run a particular test with nosetests path/to/module:TestClass.test_method. But I don't know how to combine these two steps: the output of --collect-only shows the test docstrings, which cannot be fed back into the other syntax.
I would like to do something like this somewhere in my bash script:
#!/bin/bash
nosetests --some-mode | while read test_spec
do
    nosetests "$test_spec"
    # e.g. nosetests test/SomeTest:ATestSomeClass.test_something
    # and then do something else with $? and $test_spec
done
So is there a "--some-mode" like this? Or another way to obtain a list of test_specs?
Background is that I have a test suite from an upstream project which is laid out to be run by simply calling nosetests. However, in our situation it would make a lot of sense to run the tests separately (even at the cost of losing the ability to parallelize).
I could capture the output and parse it, but that's dirty and would not allow early termination.
You can also use nosetests --with-xunit to output an XUnit-formatted XML representation of the test results; when combined with --collect-only, every test is reported as passing. You'll then have nosetests.xml to work with, so you do not have to rely on stdout.
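For instance, that nosetests.xml can be turned into runnable test specs with the standard library. This is only a sketch: it assumes each testcase element carries the usual classname="path.to.module.TestClass" and name="test_method" attributes, and the helper names are mine, not part of nose.

```python
# Sketch: turn nosetests.xml (from --with-xunit --collect-only) into
# nose test specs like path.to.module:TestClass.test_method.
# Assumes the usual XUnit attributes; spec_from_testcase is a made-up helper.
import xml.etree.ElementTree as ET

def spec_from_testcase(classname, name):
    """Split 'pkg.module.TestClass' into module and class, join as a spec."""
    module, _, cls = classname.rpartition(".")
    return "%s:%s.%s" % (module, cls, name)

def specs_from_xunit(path):
    """Collect one spec string per <testcase> element in the XML file."""
    tree = ET.parse(path)
    return [
        spec_from_testcase(tc.get("classname"), tc.get("name"))
        for tc in tree.iter("testcase")
    ]
```

Each returned spec can then be fed to nosetests one at a time in the loop above.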
I have put together a Perl script that naively parses the debug output of nosetests -vvv --collect-only and reports it so that it can be used as above (noselist | while read test_spec; ...).
It works for me for now, although it's something of a hack, so I'd rather have nosetests do this itself, or have a saner utility script, e.g. one using the internal Nose library.
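A Python version of the same idea might look like the sketch below. It assumes the verbose collector lines have the shape "test_method (path.to.module.TestClass) ... ok", which is worth checking against your nose version; the regex and function name are mine.

```python
# Sketch: convert `nosetests -v --collect-only` lines such as
#   test_something (test.SomeTest.ATestSomeClass) ... ok
# into specs such as test.SomeTest:ATestSomeClass.test_something.
# The exact line format depends on your nose version -- verify first.
import re
import sys

LINE = re.compile(r"^(\w+) \(([\w.]+)\.(\w+)\)")

def specs(lines):
    """Yield a nose test spec for each line that matches the pattern."""
    for line in lines:
        m = LINE.match(line)
        if m:
            method, module, cls = m.groups()
            yield "%s:%s.%s" % (module, cls, method)

if __name__ == "__main__":
    # pipe `nosetests -v --collect-only 2>&1` into this script
    for spec in specs(sys.stdin):
        print(spec)
```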
I searched for a long time and surprisingly found no satisfactory answer.
I have multiple modules/files in my Python project that I wrote unit tests for using unittest. The structure is such that I have production modules module_A.py and module_B.py in one directory (say myproject/production) and corresponding test files test_module_A.py and test_module_B.py in a sibling directory (say myproject/tests).
Now I have coverage.py installed and want to run all the tests associated with the project (i.e. all .py files with the prefix test_ from the tests directory) and receive a coverage report showing the coverage for all the production modules (module_A.py and module_B.py).
I figured out that I can do this by running the following commands from the myproject/tests directory:
coverage erase
coverage run -a --source myproject.production test_module_A.py
coverage run -a --source myproject.production test_module_B.py
coverage report
This gives me that nice table with all my production modules listed and their coverage results. So far so good.
But can I do this with just one command? Assuming I have not 2 but 20 or 200 tests that I want to include in one report, doing this "by hand" seems ridiculous.
There must be a way to automate this, but I can't seem to find it. Sure, a shell script might do it, but that is rather clumsy. I am thinking of something akin to unittest discover, but for coverage.py this doesn't seem to work.
Or could I accomplish this using the coverage-API somehow? So far I had no luck trying.
SOLUTION: (credit to Mr. Ned Batchelder)
From myproject/tests directory run:
coverage run --source myproject.production -m unittest discover && coverage report
One line, doing exactly what was needed.
This should do it:
coverage run -m unittest discover
Sometimes I want to print some statements to make sure the unit tests are running fine (even when they pass), but I can't find an option that enables it.
If tests fail, the custom prints do show up in the output, but if they pass, the prints and logs are ignored (i.e. I don't see them in the terminal output).
I tried using verbosity, like -vvvv, but it still ignores my prints. With nose there is an option for this, --nologcapture. Is there something similar in tox?
tox as such is just a generic venv creator, deps installer and command executor. It doesn't do output capturing (unless you use --parallel), so it is on a different abstraction level from nose. tox can run your tests or anything else that is runnable via the command line.
As you already mentioned for nose: depending on your test runner, you might need to deactivate output capturing to see prints coming from the tests. If you use pytest, for example, you can run pytest -s to disable all output capturing (see also the docs).
You can also print something after you ran the test by adding something like this in your tox.ini testenv:
[testenv:test]
[...]
commands =
    <some test command>
    python -c 'print("All is fine.")'
I created a Python library and a set of Python scripts around it. An example of one of these small scripts is rna_ec2x.py:
./rna_ec2x.py
usage: rna_ec2x.py [-h] [--sep SEP] [--chain CHAIN] [--ec-pairs]
[--ss-pairs SS_PAIRS] [--pairs-delta]
interaction_fn
rna_ec2x.py: error: too few arguments
I want to test these scripts with pytest. I know how to test my functions with pytest, but I can't find in the documentation what the best practice would be for testing standalone Python scripts. Any suggestions?
I don't know what the best practice would be, but I simply call my programs using subprocess.call(), check the return code, and verify that the program did what it intended to do. See my tests as examples.
I'm fairly new to Python, trying to learn the toolsets.
I've figured out how to get py.test -f to watch my tests as I code. One thing I haven't been able to figure out is whether there's a way to do a smarter watcher that works like Ruby's Guard library.
Using guard + minitest the behavior I get is if I save a file like my_class.rb then my_class_test.rb is executed, and if I hit enter in the cli it runs all tests.
With pytest, so far I haven't been able to figure out a way to run only the test file corresponding to the last touched file, to avoid waiting for the entire test suite to run before the current file passes.
How would you pythonistas go about that?
Thanks!
One possibility is using pytest-testmon together with pytest-watch.
It uses coverage.py to track which test touches which lines of code, and as soon as you change a line of code, it re-runs all tests which execute that line in some way.
To add to @The Compiler's answer above, you can get pytest-testmon and pytest-watch to play together by using pytest-watch's --runner option:
ptw --runner "pytest --testmon"
Or simply:
ptw -- --testmon
There is also pytest-xdist which has a feature called:
--looponfail: run your tests repeatedly in a subprocess. After each run py.test waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass after which again a full run is performed.
The fastest setup I got was when I combined @lmiguelvargasf's, @BenR's, and @TheCompiler's answers into this:
ptw --runner "pytest --picked --testmon"
You first have to install the plugins with
pip3 install pytest-picked pytest-testmon pytest-watch
If you are using git as version control, you could consider using pytest-picked. This is a plugin that, according to the docs:
Run the tests related to the unstaged files or the current branch
Basic features
Run only tests from modified test files
Run tests from modified test files first, followed by all unmodified tests
Usage
pytest --picked
I need to run my tests (written with Python and Behave) without using the console. I would prefer to create a simple Python script and use some unit test runner. I'm thinking about unittest, but pytest and nose solutions are also welcome :) I couldn't find any hint on behave's homepage.
behave is the test runner. Use the "-o" option to store your results somewhere for whatever formatter you want to use.
NOTE:
It is basically the same as with py.test.
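One way to avoid the console entirely is to invoke behave from a plain Python script. This is a sketch under the assumption that behave's Python entry point is behave.__main__.main (verify against your installed version); build_behave_args is my own helper:

```python
# Sketch: run behave from a Python script instead of the console.
# Assumes behave exposes main() in behave.__main__ -- check your version.
import sys

def build_behave_args(features_dir="features", junit_dir="reports"):
    """Assemble CLI-style arguments; --junit writes XML result files."""
    return [features_dir, "--junit", "--junit-directory", junit_dir]

if __name__ == "__main__":
    from behave.__main__ import main as behave_main
    # behave_main returns an exit code (0 on success), like the CLI would
    sys.exit(behave_main(build_behave_args()))
```

The XML files in the --junit-directory can then be consumed by whatever reporting tool you already use for unittest/pytest results.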