I want to run tests in pytest according to the execution time of the last run. Is there a plugin to do so? Or how would you write the plugin?
I guess I need to cache the execution times of the last run, but I'm not sure how to do that.
Also, I'd like to run using the --ff option, giving priority to tests that failed first, and then ordering by execution time.
Thanks in advance!
I believe there is no such plugin but, as you said, you can write your own.
Since a pytest plugin is just a set of hooks and fixtures, you may be able to get by with a few hooks in your conftest.py file.
First of all, you need to learn how to write hooks (see the pytest documentation on writing plugins and hooks).
Then, using the hook reference page, find the hooks that you need.
I believe the algorithm will be something like this:
1. Run the tests.
2. After test execution, collect the results to some file (you will need each test's full name, duration and status) OR use files generated by some pytest reporting plugin.
3. Run the tests again.
4. During the pytest initialization stage, register a new command line option -ff.
5. During the pytest collection stage, read the saved file and the -ff parameter value, and change the test execution order according to the desired rules.
To implement this, some of these hooks may be useful:
pytest_addoption - Register argparse-style options and ini-style config values, called once at the beginning of a test run. Must be used to implement step 4.
pytest_collection_modifyitems - Called after collection has been performed. May filter or re-order the items in place. Must be used to implement step 5. The test order must be changed in the items object (a list).
pytest_report_teststatus - Must be used to implement step 2. The report object will contain duration, outcome (str) or passed (bool), and nodeid (which is basically the full name).
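Putting those hooks together, a minimal sketch of such a conftest.py might look like the following. Everything here is illustrative: the durations file name, the --slowest-last option (chosen instead of -ff, which pytest already uses as an alias for --failed-first), and the simple fastest-first sort; failed-first priority could be layered on top by also recording each test's outcome and sorting failures to the front.

import json
import os

DURATIONS_FILE = '.test_durations.json'
_durations = {}

def pytest_addoption(parser):
    # Step 4: register a command line flag (named --slowest-last here so it
    # does not clash with pytest's built-in --ff/--failed-first).
    parser.addoption('--slowest-last', action='store_true',
                     help='order tests by the duration recorded in the last run')

def pytest_collection_modifyitems(session, config, items):
    # Step 5: reorder the collected items in place, fastest first.
    if not config.getoption('--slowest-last'):
        return
    if os.path.exists(DURATIONS_FILE):
        with open(DURATIONS_FILE) as f:
            last_durations = json.load(f)
        items.sort(key=lambda item: last_durations.get(item.nodeid, 0))

def pytest_runtest_logreport(report):
    # Step 2: remember how long each test's call phase took.
    if report.when == 'call':
        _durations[report.nodeid] = report.duration

def pytest_sessionfinish(session, exitstatus):
    # Persist the durations for the next run.
    with open(DURATIONS_FILE, 'w') as f:
        json.dump(_durations, f)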
I wrote my own plugin: https://github.com/david26694/pytest-slow-last
You can install it via:
pip install pytest-slow-last
and run
pytest --slow-last
The project has many modules. Functional test cases are written for almost every API, e.g. for GET, POST and PUT requests. To test an individual file we use the syntax pytest tests/file_name.py,
but I want to test a specific method in that file. Is there any way to do that?
Duplicate of Is there a way to specify which pytest tests to run from a file?
In a few words, you can use the -k option of pytest to specify the name of the test you would like to run.
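For example, assuming the file contains TestUsers::test_create (names made up), either of these runs just that test:

pytest tests/file_name.py -k test_create
pytest tests/file_name.py::TestUsers::test_create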
I'm trying to write a workaround for the inability of pytest/xdist to run some tests in serial, rather than all tests in parallel.
In order to do what I'm trying to do, I need to get a list of all the collected parameterized tests (so they look something like path/to/test_module_name.py::TestClassName::test_method_name[parameterization info]). I'm attempting to do so in a session scoped fixture, but can't figure out where this info is stored. Is there a way to do this?
I noticed at one point, when calling pytest with --cache-show, that 'cache/nodeids' was being populated with the exact node id information I need, but I can't seem to figure out when that does/doesn't happen, as it isn't consistent.
While I couldn't find exactly what I was looking for, the problem with serializing tests while using the xdist plugin can be resolved with the following two fixtures:
import contextlib
import os
import pathlib
import filelock
import pytest

@pytest.fixture(scope='session')
def lock():
    lock_file = pathlib.Path('serial.lock')
    yield filelock.FileLock(lock_file=str(lock_file))
    with contextlib.suppress(OSError):
        os.remove(path=lock_file)

@pytest.fixture()  # Add this fixture to each test that needs to be serialized
def serial(lock):
    with lock.acquire(poll_intervall=0.1):
        yield
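A test then opts in to serialization simply by requesting the fixture (the test name here is made up):

def test_touches_shared_resource(serial):
    # The body runs only while the file lock is held, so other xdist
    # workers requesting `serial` wait their turn.
    ...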
I am using pytest. I like the way I call pytest (re-try the failed tests first, verbose, grab and show serial output, stop at first failure):
pytest --failed-first -v -s -x
However there is one more thing I want:
I want pytest to run the new tests (i.e. tests never run before) immediately after the --failed-first ones. This way, when working with tests that take long to perform, I would get the most relevant information as soon as possible.
Any way to do that?
This may not be directly what you are asking about, but my understanding is that the test execution order is important for you when you create new tests during development.
Since you are already working with these "new" tests, the pytest-ordering plugin might be a good option to consider. It allows you to influence the execution order by decorating your tests with @pytest.mark.first, @pytest.mark.second, etc.
pytest-ordering changes the execution order by using a pytest_collection_modifyitems hook. There is also the pytest-random-order plugin, which uses the same hook to control the order.
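For instance, with pytest-ordering installed (test names are made up):

import pytest

@pytest.mark.first
def test_new_feature():  # runs before everything else
    pass

@pytest.mark.second
def test_existing_feature():
    pass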
You can also have your own hook defined and adjusted to your specific needs. For example, here another hook is used to shuffle the tests:
Dynamically control order of tests with pytest
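As a rough illustration of that approach (assuming a purely random order is acceptable), a hook like this in conftest.py would do it:

import random

def pytest_collection_modifyitems(session, config, items):
    # Reorder the collected tests in place; replace the shuffle with
    # whatever rule you need (e.g. new or failed tests first).
    random.shuffle(items)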
For anyone coming to this now: pytest added a --new-first option to run new tests before all other tests. It can be combined with --failed-first to run new and failed tests first. For test-driven development, I've found it helpful to use these options with pytest-watch, which I described in my blog.
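Applied to the command line from the question, that would be something like:

pytest --new-first --failed-first -v -s -x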
I have a lot of tests that take a long time to run. Luckily, the time these tests take is evenly distributed among tests for several subsystems of my project.
I'm using IPython's pytest magic commands. I'd like to be able to just say things like
pytest potato_peeler --donttesti18n --runstresstests
or
pytest garlic_squeezer --donttestsmell --logperfdata
but I can't add the garlic_squeezer and potato_peeler options the same way I do logperfdata et al because parser.addoption gets upset if the option name doesn't start with a --.
I know this seems like a tiny inconvenience, but I have a ton of people running these tests several times a day and I'd like the way they're invoked to make as much sense as possible, by emulating how you issue commands on a command line (command, thing you want to run the command on, then --flags.)
Is there a way to have non-dashed options? (One that doesn't involve writing a full-blown pytest plugin that overrides the option parsing?)
I was hoping to use the pytest_cmdline_parse hook, but you can't use that hook in a conftest.py; you have to write a full-blown plugin.
You can mark your tests and run only those with some mark.
For example:
import pytest

@pytest.mark.runstresstests
def test_stress_something():
    pass

@pytest.mark.logperfdata
def test_something_quick():
    pass

...
If you only want to run stress tests: pytest -m runstresstests
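Note that recent pytest versions warn about unknown marks, so it may be worth registering them, e.g. in conftest.py (mark names taken from the question, descriptions are illustrative):

def pytest_configure(config):
    # Register the custom marks so pytest does not emit warnings for them.
    config.addinivalue_line('markers', 'runstresstests: long-running stress tests')
    config.addinivalue_line('markers', 'logperfdata: tests that record performance data')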
Full documentation at https://docs.pytest.org/en/latest/example/markers.html
I'm working on a system that needs to be able to test Python files with py.test and use the output (which tests passed and failed) within the program. Is there any way to call py.test from within Python, tell it to run the testing code in [name].py on the code in [otherName].py, and have it return the results of the test?
I think you are looking for Calling pytest from Python code on the Usage and Invocations page.
Also, limiting the run to a specific file can be done as described in Specifying tests / selecting tests.
In other words, this should do the trick:
pytest.main(['my_test_file.py'])
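If you also need the per-test outcomes back in your program, one option (a sketch; the class name and result format are illustrative) is to pass a small plugin object to pytest.main() and collect the test reports:

import pytest

class ResultCollector:
    def __init__(self):
        self.results = {}

    def pytest_runtest_logreport(self, report):
        # Record the outcome of each test's call phase.
        if report.when == 'call':
            self.results[report.nodeid] = report.outcome

collector = ResultCollector()
exit_code = pytest.main(['my_test_file.py'], plugins=[collector])
print(collector.results)  # e.g. {'my_test_file.py::test_foo': 'passed'}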
P.S.: The py.test documentation is pretty good; you can find most of the answers there ;).