Most of the time when I run my tests, I do it via python -m unittest discover (I'm lazy).
Let's say that one of the tests raises an exception. Is there a way to make the unittest framework run a post-mortem on it (preferably ipdb.pm()) without modifying the code of the tests?
I know I could add it directly to the code, but since I also use automatic runners / GitLab CI, I don't want those to hang on a pdb shell.
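One approach that comes to mind (a sketch, not something from the original thread) is a small driver script kept next to the tests: unittest's TestSuite.debug() runs the tests without collecting results, so the first exception propagates and a post-mortem can be started outside the test code. CI keeps calling plain python -m unittest discover and never touches the debugger:

import sys
import unittest

try:
    import ipdb as debugger  # fall back to pdb if ipdb is not installed
except ImportError:
    import pdb as debugger

suite = unittest.defaultTestLoader.discover(".")
try:
    # unlike suite.run(), suite.debug() lets test exceptions propagate,
    # stopping at the first failing test
    suite.debug()
except Exception:
    debugger.post_mortem(sys.exc_info()[2])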
I'm writing tests in Python's unittest and decided to use the parameterized module to deal with test parameterization. Now, all's fine and dandy when I'm running tests directly with unittest's CLI: simply running python -m unittest in the root directory launches all the tests as expected.

However, I decided to give my script its own command flag for running the tests, so that when you run, say, python ./main.py -t [additional arguments for unittest], the script itself runs python -m unittest [additional arguments for unittest]. For that I'm using subprocess.run. And this also works... to some extent.

The problem is the following: when I'm using python -m unittest, no errors (except for the ones being tested) are raised, but using my script to run the tests raises ModuleNotFoundError: No module named 'parameterized', along with errors for a few other dependencies my code is using. I'm clueless as to why this is happening.
To be honest, I'm not that familiar with unittest, so maybe my approach is the problem and I should handle this in a completely different way. Any feedback would be much appreciated.
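One plausible cause (an assumption, not a confirmed diagnosis): a bare python inside subprocess.run can resolve to a different interpreter than the one running main.py, and that interpreter won't see the virtualenv's site-packages where parameterized lives. Pinning the subprocess to sys.executable rules this out:

import subprocess
import sys

def run_tests(extra_args):
    # sys.executable is the interpreter running this script, so the
    # subprocess sees exactly the same installed packages
    cmd = [sys.executable, "-m", "unittest", *extra_args]
    return subprocess.run(cmd).returncode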
I saw this question: Can I debug with python debugger when using py.test somehow? But it doesn't really help, because I need to debug hooks, some of which weren't written by me, and modifying the code of a hook is really cumbersome.
Also, pytest runs through pipenv run, and it's already difficult to make the two work together; so far I couldn't find a combination of pdb, pipenv and pytest that would launch one another correctly.
Another way I could do it is by calling pytest.main() from my own code. However, that means other people who want to run my tests would have to go through this "trampoline". I can live with that, but it still feels like it shouldn't be necessary.
I guess this is what you need: invoke pdb as early as possible.
`pipenv --py` -c 'import pdb, pytest; pdb.set_trace(); pytest.main()'
Here `pipenv --py` prints the path of the project's virtualenv interpreter, so the shell's command substitution makes that interpreter run the one-liner, and pdb is already active before pytest starts loading plugins and hooks.
I created a Python library and a set of Python scripts around it. An example of such a small script is rna_ec2x.py:
./rna_ec2x.py
usage: rna_ec2x.py [-h] [--sep SEP] [--chain CHAIN] [--ec-pairs]
[--ss-pairs SS_PAIRS] [--pairs-delta]
interaction_fn
rna_ec2x.py: error: too few arguments
I want to test these scripts with pytest. I know how to test my functions with pytest, but I can't find in the documentation what the best practice for testing standalone Python scripts would be. Any suggestions?
I don't know what the best practice would be, but I simply call my programs using subprocess.call(), check the return code, and verify that the program did what it was meant to do. See my tests as examples.
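A minimal sketch of that approach as a pytest test for the script from the question (subprocess.run is used here instead of subprocess.call to capture output; the exact error text argparse prints varies between Python versions, so the test only checks stable things):

import subprocess
import sys

def test_rna_ec2x_requires_arguments():
    # invoke the script exactly as a user would, with no arguments
    result = subprocess.run(
        [sys.executable, "rna_ec2x.py"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 2      # argparse exits with 2 on usage errors
    assert "usage:" in result.stderr   # the usage banner goes to stderr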
I'm fairly new to Python, trying to learn the toolsets.
I've figured out how to get py.test -f to watch my tests as I code. One thing I haven't been able to figure out is whether there's a way to get a smarter watcher that works like Ruby's Guard library.
Using guard + minitest, the behavior I get is: if I save a file like my_class.rb, then my_class_test.rb is executed, and if I hit Enter in the CLI, all tests run.
With pytest, so far I haven't found a way to run only the test file corresponding to the last-touched file, which would avoid waiting for the entire test suite to run while I'm getting the current file to pass.
How would you pythonistas go about that?
Thanks!
One possibility is using pytest-testmon together with pytest-watch.
pytest-testmon uses coverage.py to track which test touches which lines of code; as soon as you change a line, it re-runs all tests that execute that line in some way.
To add to @The Compiler's answer above, you can get pytest-testmon and pytest-watch to play together by using pytest-watch's --runner option:
ptw --runner "pytest --testmon"
Or simply:
ptw -- --testmon
There is also pytest-xdist, which has a feature described in its docs as:
--looponfail: run your tests repeatedly in a subprocess. After each run py.test waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass after which again a full run is performed.
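For reference, a quick way to try it (the -f flag from the question is the short alias for --looponfail):

pip install pytest-xdist
pytest --looponfail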
The fastest setup I got was when I combined @lmiguelvargasf's, @BenR's and @TheCompiler's answers into this:
ptw --runner "pytest --picked --testmon"
You first have to install the plugins:
pip3 install pytest-picked pytest-testmon pytest-watch
If you are using git for version control, you could consider using pytest-picked. This is a plugin that, according to the docs, will:
Run the tests related to the unstaged files or the current branch
Basic features
Run only tests from modified test files
Run tests from modified test files first, followed by all unmodified tests
Usage
pytest --picked
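The docs also show invocations for the two features listed above, plus a mode flag controlling what counts as "modified" (worth double-checking against the README of the version you install):

pytest --picked=first            # modified test files first, then all the rest
pytest --picked --mode=branch    # tests affected by changes on the current branch
pytest --picked --mode=unstaged  # the default mode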
I need to run my tests (written with Python and Behave) without using the console. I'd prefer to create a simple Python script and use some unit test runner. I'm thinking about unittest, but pytest and nose solutions are also welcome :) I couldn't find any hint on Behave's homepage.
behave is the "test runner". Use the "-o <outfile>" option to store your results in a file, for whatever formatter you want to use.
NOTE:
It is basically the same as with py.test.
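If "without using the console" means kicking off the runner from a Python script, one commonly shown way (a sketch; behave.__main__.main is an internal entry point rather than a documented public API, so treat this as an assumption) is:

# equivalent to running "behave -o results.txt features/" in a shell;
# -o/--outfile writes the formatter output to a file instead of stdout
from behave.__main__ import main as behave_main

exit_code = behave_main(["-o", "results.txt", "features/"])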