pytest: run only the changed file?

I'm fairly new to Python, trying to learn the toolsets.
I've figured out how to get py.test -f to watch my tests as I code. One thing I haven't been able to figure out is whether there's a way to do a smarter watcher that works like Ruby's Guard library.
Using guard + minitest, the behavior I get is: if I save a file like my_class.rb, then my_class_test.rb is executed, and if I hit enter in the CLI it runs all tests.
With pytest, so far I haven't been able to figure out a way to run only the test file corresponding to the last touched file, so I don't have to wait for the entire test suite to run while I'm getting the current file to pass.
How would you pythonistas go about that?
Thanks!

One possibility is using pytest-testmon together with pytest-watch.
It uses coverage.py to track which test touches which lines of code, and as soon as you change a line of code, it re-runs all tests which execute that line in some way.

To add to @The Compiler's answer above, you can get pytest-testmon and pytest-watch to play together by using pytest-watch's --runner option:
ptw --runner "pytest --testmon"
Or simply:
ptw -- --testmon

There is also pytest-xdist which has a feature called:
--looponfail: run your tests repeatedly in a subprocess. After each run py.test waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass after which again a full run is performed.
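For example, something like this should work, assuming a pytest-xdist version that still ships looponfail (the py.test -f from the question is its short form):
pip install pytest-xdist
pytest -f    # same as pytest --looponfail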

The fastest setup I got was when I combined @lmiguelvargasf's, @BenR's, and @TheCompiler's answers into this:
ptw --runner "pytest --picked --testmon"
You first need to have them installed:
pip3 install pytest-picked pytest-testmon pytest-watch

If you are using git as version control, you could consider using pytest-picked. This is a plugin that according to the docs:
Run the tests related to the unstaged files or the current branch
Basic features
Run only tests from modified test files
Run tests from modified test files first, followed by all unmodified tests
Usage
pytest --picked
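If I remember the plugin's options correctly, you can also tweak what "picked" means; treat the exact flags below as an assumption and confirm them with pytest --help:
pytest --picked                # tests from unstaged files (default mode)
pytest --picked --mode=branch  # tests from files changed on the current branch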

Related

Python tox: show stdout/prints on successful test run?

Sometimes I want to print some statements to make sure unit tests are running fine (even when they pass), but I can't find an option that enables it.
If tests fail, it does show custom prints in the output, but if they pass, it ignores prints and logs (I mean, I don't see them in the terminal output).
I tried using verbosity, like -vvvv, but it still ignores my prints. With nose there is an option like --nologcapture. Is there something similar in tox?
tox as such is just a generic venv creator, deps installer and command executor. It doesn't do output capturing (unless you use --parallel). So it is on a different abstraction level from nose. tox can run your tests or anything else that is runnable via command line.
Like you already mentioned for nose: depending on your test runner, you might need to deactivate output capturing to see prints coming from the tests. If you use pytest, for example, you can use pytest -s to disable all output capturing (also see the docs).
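For instance, a minimal tox.ini along these lines (just a sketch; adjust deps and arguments to your project) passes -s through to pytest so prints show up even when tests pass:
[testenv]
deps = pytest
commands = pytest -s {posargs}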
You can also print something after you ran the test by adding something like this in your tox.ini testenv:
[testenv:test]
[...]
commands =
    <some test command>
    python -c 'print("All is fine.")'

Python: Discover Unit Tests fails in VS Code

I followed this tutorial to set up unit tests in VS Code for Python:
I have a problem in the section "Test discovery".
When I execute the command "Python: Discover Unit Tests" from the command palette in VS Code, absolutely nothing happens.
As shown in the tutorial, I did enable the unit test framework, and I created unit test files.
Using the commands for unit testing from the command palette does not work.
When I execute my unit test files manually from the command line, it works:
python -m unittest test_my_code.py
That means that my code, and the code that tests it, are fine; the problem is somewhere in the connection between the VS Code editor and the unit test framework.
Other issues are:
when I open my project in VS Code, the status bar says "Discovering Tests", but it keeps going forever and nothing happens
when I right-click on a test file and call the command for running the unit tests, also nothing happens
As requested, I also attach settings.json as image.
Thank you
I had the same issue as you. In my case, opening the project folder in VSCode solved the problem; then magically all the tests were discoverable and could be run.
Before I came up with this idea, I only had my single test.py file opened in VSCode, with all settings configured as the tutorial stated.
I had the exact same problem.
I needed to add pytest.ini to the folder. It was successful after that.
I tried adding __init__.py but it did not make any difference.
Adding pytest.ini helped.
See Run Test | Debug Test code lens above the test!
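In case it helps, the pytest.ini mentioned above doesn't need much in it; a minimal one like the following is usually enough to mark the project root and tell pytest where to look (the values are only an example, adjust them to your layout):
[pytest]
testpaths = .
python_files = test_*.py *_test.py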

Run pytest in PDB with pipenv

I saw this question: Can I debug with python debugger when using py.test somehow? but it doesn't really help, because I need to debug hooks, some of them not written by me, where modifying the code of the hook is really cumbersome.
Also, pytest runs through pipenv run. It's already difficult to make them both work together. So far I couldn't find a combination of pdb, pipenv and pytest that would launch one another.
Another way I could do it is by calling pytest.main() from my code, however, this means that other people who want to run my tests will have to use this "trampoline" to run other tests. I can live with this, but it still feels like it shouldn't be necessary.
I guess this is what you need: invoke pdb as early as possible:
`pipenv --py` -c 'import pdb, pytest; pdb.set_trace(); pytest.main()'
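If you prefer a file over a one-liner, the same idea works as a small script - this is just the "trampoline" from the question (the filename is made up):
# debug_tests.py - run it with: pipenv run python debug_tests.py
import pdb
import pytest

pdb.set_trace()  # break before pytest starts, so you can step into hooks
pytest.main()    # optionally pass args, e.g. pytest.main(["-x", "tests/"])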

How to run Python & Behave tests with some unit test framework?

I need to run my tests (written with Python and Behave) without using the console. I'd prefer to create a simple Python script and use some unit test runner. I'm thinking about unittest, but pytest and nose solutions are also welcome :) I couldn't find any hint on behave's homepage.
behave is the "test runner". Use the "-o" option to store your results somewhere, for whatever formatter you want to use.
NOTE:
It is basically the same as py.test.
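That said, if you really want to drive it from a unit test runner, behave can also be invoked programmatically. A rough sketch, assuming behave exposes behave.__main__.main and that it returns the exit code (check your behave version):
import unittest
from behave.__main__ import main as behave_main

class BehaveFeatures(unittest.TestCase):
    def test_features(self):
        # exit code 0 means every scenario passed
        self.assertEqual(behave_main(["features", "-f", "plain", "-o", "behave.out"]), 0)

if __name__ == "__main__":
    unittest.main()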

Django test coverage vs code coverage

I've successfully installed and configured django-nose with coverage.
The problem is that if I just run coverage for ./manage.py shell and exit out of that shell, it shows me 37% code coverage. I fully understand that executed code doesn't mean tested code. My only question is -- what now?
What I'm envisioning is being able to import all the python modules and "settle down" before executing any tests, and directly communicating with coverage saying "Ok, start counting reached code here."
Ideally this would be done by nose essentially resetting the "touched" lines of code right before executing each test suite.
I don't know where to start looking/developing. I've searched online and haven't found anything fruitful. Any help/guidelines would be greatly appreciated.
P.S.
I tried executing something like this:
DJANGO_SETTINGS_MODULE=app.settings_dev coverage run app/tests/gme_test.py
And it worked (showed 1% coverage) but I can't figure out how to do this for the entire app
Edit: Here's my coverage config:
[run]
source = .
branch = False
timid = True
[report]
show_missing = False
include = *.py
omit =
    tests.py
    *_test.py
    *_tests.py
    */site-packages/*
    */migrations/*
[html]
title = Code Coverage
directory = local_coverage_report
Since you use django-nose, you have two options for how to run coverage. The first was already pointed out by DaveB:
coverage run ./manage.py test myapp
The above actually runs coverage which then monitors all code executed by the test command.
But then, there is also a nose coverage plugin included by default in the django-nose package (http://nose.readthedocs.org/en/latest/plugins/cover.html). You can use it like this:
./manage.py test myapp --with-coverage
(There are also some additional options, like which modules should be covered, whether to include an HTML report or not, etc. These are all documented in the above link - you can also type ./manage.py test --help for some quick info.)
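For example, an invocation with a few of those options might look like this (flag names per the nose cover plugin docs linked above - double-check with ./manage.py test --help):
./manage.py test myapp --with-coverage --cover-package=myapp --cover-html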
Running the nose coverage plugin will result in coverage running after the django bootstrapping code is executed and therefore the corresponding code will not be reported as covered.
Most of the code you see reported as covered when running coverage the original way consists of import statements, class definitions, class members, etc. As Python evaluates them during import time, coverage will naturally mark them as covered. However, running the nose plugin will not report bootstrapping code as covered, since the test runner starts after the django environment is loaded. Of course, a side effect of this is that you can never achieve 100% coverage (...or close :)) as your global-scope statements will never get covered.
After switching back and forth and playing around with coverage options, I now have ended up using coverage like this:
coverage run --source=myapp,anotherapp --omit=*/migrations/* ./manage.py test
so that
a. coverage will report import statements, class member definitions etc. as covered (which is actually the truth - this code was successfully imported and interpreted)
b. it will only cover my code and not django code or any other third-party app I use; the coverage percentage will reflect how well my project is covered.
Hope this helps!
The "Ok, start counting reached code here." can be done through the API of the coverage module. You can check this out through the shell. Stole directly from http://nedbatchelder.com/code/coverage/api.html:
import coverage
cov = coverage.coverage()
cov.start()
# .. call your code ..
cov.stop()
cov.save()
cov.html_report()
You can make your own test runner to do exactly this and match your needs (some would consider coverage generated by any unit test to be OK, while others would only accept the coverage of a unit caused by the unit test for that unit).
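A rough sketch of that idea (my own, not from the coverage docs), using Django's stock DiscoverRunner so that coverage only starts counting once the test run itself begins; if you stick with django-nose you would wrap its runner instead:
import coverage
from django.test.runner import DiscoverRunner


class CoverageTestRunner(DiscoverRunner):
    """Start coverage after Django has bootstrapped, right before the tests run."""

    def run_tests(self, test_labels, **kwargs):
        cov = coverage.coverage()  # same old-style API as the snippet above
        cov.start()
        result = super(CoverageTestRunner, self).run_tests(test_labels, **kwargs)
        cov.stop()
        cov.save()
        cov.html_report()
        return result

Then point Django at it with TEST_RUNNER = "myproject.runner.CoverageTestRunner" in settings (the dotted path is hypothetical).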
I had the same issue. I saved some time by creating a .coveragerc file that specified options similar to those outlined in the bounty-awarded answer.
Now running 'coverage run manage.py test' and then 'coverage report -m' will show me the coverage report and the lines that aren't covered.
(See here for details on the .coveragerc file: http://nedbatchelder.com/code/coverage/config.html)
I'm a bit confused by what you are trying to achieve here.
Testing in Django is covered very well here: https://docs.djangoproject.com/en/dev/topics/testing/overview/
You write tests in your app as tests.py - I don't see the need for nose, as the standard django way is pretty simple.
Then run them as coverage run ./manage.py test main - where 'main' is your app
Specify the source files for your code as documented here: http://nedbatchelder.com/code/coverage/cmd.html so that only your code is counted
e.g. coverage run --source=main ./manage.py test main
You'll still get a certain percentage marked as covered with the simple test provided as an example. This is because parts of your code are executed in order to start up the server, e.g. definitions in modules, etc.
