I'm developing a test suite using pytest for a project of mine. Because of the nature of the project, I need to create a Pytest plugin that controls how the tests are being run; they are not run locally, but sent to a different process to run. (I know about xdist but I think it doesn't solve my problem.)
I've been writing my own Pytest plugin by implementing the various pytest_runtest_* hooks. So far it's been progressing well. Here's where I've hit a wall: I want my implementations of pytest_runtest_setup, pytest_runtest_call and pytest_runtest_teardown to be the ones responsible for doing the setup, call and teardown; they're going to do it in a different process. My problem is that after Pytest calls my pytest_runtest_setup, it also calls all the other pytest_runtest_setup implementations further down the plugin chain, because the hook specification for pytest_runtest_setup has firstresult=False.
I don't want this, because I don't want pytest_runtest_setup to actually run on the current process. I want to be responsible for running it on my own. I want to override how it's being run, not add to it. I want the other implementations of pytest_runtest_setup below my own to not be run.
How can I do this?
Generic “runtest” hooks
All runtest related hooks receive a pytest.Item object.
pytest_runtest_protocol(item, nextitem)
implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
Parameters:
item – test item for which the runtest protocol is performed.
nextitem – the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().
Return boolean:
True if no further hook implementations should be invoked.
pytest_runtest_setup(item)
called before pytest_runtest_call(item).
pytest_runtest_call(item)
called to execute the test item.
pytest_runtest_teardown(item, nextitem)
called after pytest_runtest_call.
Parameters: nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
pytest_runtest_makereport(item, call)
return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.
The _pytest.terminal reporter specifically uses the reporting hook to print information about a test run.
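Based on the documentation quoted above, one way to take over the protocol entirely (rather than adding to it) is to implement pytest_runtest_protocol yourself and return True, which stops any further implementations of that hook from being invoked. Here is a minimal sketch; the reporting-hook calls roughly mirror what pytest's default runner does, and send_item_to_remote_process() is a hypothetical helper you would implement yourself to dispatch the item and return a list of TestReport objects:
def pytest_runtest_protocol(item, nextitem):
    # Hypothetical helper (not part of pytest): send the item to your worker
    # process and get back TestReport objects for setup/call/teardown.
    reports = send_item_to_remote_process(item, nextitem)

    # Emit the usual reporting hooks so terminal output looks normal.
    item.ihook.pytest_runtest_logstart(nodeid=item.nodeid, location=item.location)
    for report in reports:
        item.ihook.pytest_runtest_logreport(report=report)
    item.ihook.pytest_runtest_logfinish(nodeid=item.nodeid, location=item.location)

    # Returning True means "no further hook implementations should be invoked",
    # so pytest's default runner never touches the item in this process.
    return True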
Related
I have a basic Pytest test suite working, and I need to dynamically parametrize a fixture at runtime based on configuration/arguments.
The "heart" of my test suite is a session-scoped fixture that does some very expensive initialization based on some custom CLI arguments, and all of the tests use this fixture. To oversimplify a bit, think of the fixture as providing a remote connection to a server to run the tests on, the initialization involves configuring the OS/dependencies on that server (hence why it's expensive and session-scoped), and the CLI arguments specify aspects of the OS/environment configuration.
This works great for individual test runs, but now I want to take this test suite and run the entire thing over a list of different test configurations. The list of configurations needs to be determined at runtime based on a configuration file and some additional CLI arguments.
As I understand it, the "obvious" way of doing this kind of thing in Pytest is by parametrizing my initialization fixture. But the problem is my need to do it dynamically based on other arguments; it looks like you can call a function to populate the params list in @pytest.fixture, but that function wouldn't have access to the Pytest CLI args, since arg parsing hasn't happened yet.
I've managed to figure out how to do this by implementing pytest_generate_tests to dynamically parametrize each test using indirect parametrization, and taking advantage of fixture caching to ensure that each unique fixture configuration is only actually initialized once, even though the fixture is actually being parametrized for each test. This is working, although it's questionable whether Pytest officially supports this. Is there a better way of achieving what I'm trying to do here?
The "Right" way
def generate_parameters():
    # This is just a plain function, and arg parsing hasn't happened yet,
    # so I can't use Pytest options to generate my parameters
    pass

@pytest.fixture(scope="session", params=generate_parameters())
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass
The way that works
@pytest.fixture(scope="session")
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        parameters = []  # Generate parameters based on config in metafunc.config
        metafunc.parametrize('expensive_init_fixture', parameters, scope="session", indirect=True)
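As a variation showing where the CLI values can come from: by the time pytest_generate_tests runs, argument parsing has already happened, so any options registered via pytest_addoption are available through metafunc.config. The --target-os option below is a made-up example, not something from the question:
# conftest.py (sketch; --target-os is a hypothetical option)
def pytest_addoption(parser):
    parser.addoption("--target-os", action="append", default=[],
                     help="OS/environment configuration(s) to run the suite against")

def pytest_generate_tests(metafunc):
    if "expensive_init_fixture" in metafunc.fixturenames:
        # CLI parsing is done by now, so option values are available here.
        parameters = metafunc.config.getoption("--target-os") or ["default"]
        metafunc.parametrize("expensive_init_fixture", parameters,
                             scope="session", indirect=True)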
I'm not sure if what I'm suggesting is what you're looking for, but using pytest_testconfig sounds like a good solution.
testconfig lets you use global config variables in all tests, and you can set their values however you like, meaning you can get them from the CLI.
This means you can have a default config value when you start a run, and if you want to change it for the next run, you take the param from the CLI.
I'm not sure what the motive behind this is, but you should consider whether you really need a global_config.py file here. Changing test values between runs shouldn't be something configurable from the CLI; tests are meant to be persistent, and if something needs changing then a commit is a better solution, since you can see its history.
I have a number of test files, such as
test_func1.py
test_func2.py
test_func3.py
I know in advance that test_func3.py won't pass if I run Pytest in parallel, e.g. pytest -n8. The reason is that test_func3.py contains a number of parametrized tests that do file I/O; parallel writes to the same file lead to failures. In serial mode, all tests in this module pass.
I'm wondering how I can skip the whole module when Pytest is started with the -n option. My thinking is to apply the skipif marker, which means I need to check in my code whether the -n argument has been passed to pytest (a sketch of this is below the example commands).
...>pytest # run all modules
...>pytest -n8 # skip module test_func3.py automatically
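For reference, one way to implement the skipif idea from the question is a conftest.py hook that marks the whole module as skipped whenever -n is in effect. This sketch assumes pytest-xdist registers -n under the option name "numprocesses" (worth double-checking against your xdist version):
# conftest.py -- sketch of skipping test_func3.py when -n is used
import pytest

def pytest_collection_modifyitems(config, items):
    if not config.getoption("numprocesses", default=None):
        return  # serial run: keep everything
    skip_parallel = pytest.mark.skip(reason="cannot run in parallel (-n)")
    for item in items:
        if "test_func3.py" in item.nodeid:
            item.add_marker(skip_parallel)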
The pytest-xdist package supports four scheduling algorithms:
each
load
loadscope
loadfile
Calling pytest -n is a shortcut for load scheduling, i.e. the scheduler will load balance the tests across all workers.
Using loadfile scheduling, all test cases in a test file will be executed sequentially by the same worker.
pytest -n8 --dist=loadfile will do the trick. The drawback is that the whole test suite may run more slowly than with load scheduling; the advantage is that all tests are executed and none are skipped.
There may be a case of a test that affects some service settings and cannot be run in parallel with any other test.
There is a way to skip such an individual test (worker_id is a fixture provided by pytest-xdist; it equals "master" when tests are not being distributed):
@pytest.mark.unparalleled
def test_to_skip(worker_id, a_fixture):
    if worker_id != "master":
        pytest.skip("Can't run in parallel with anything")
The first drawback here is that the test will be skipped, so you will need to run those tests separately. For that matter you can put them in a separate folder, or mark them with some tag.
The second drawback is that any fixtures used in such a test will still be initialized.
Old question, but xdist has a newer feature that may address the OP's question.
Per the docs:
--dist loadgroup: Tests are grouped by the xdist_group mark. Groups are distributed to available workers as whole units. This guarantees that all tests with same xdist_group name run in the same worker.
@pytest.mark.xdist_group(name="group1")
def test1():
    pass

class TestA:
    @pytest.mark.xdist_group("group1")
    def test2(self):
        pass
This will make sure test1 and TestA::test2 will run in the same worker. Tests without the xdist_group mark are distributed normally as in the --dist=load mode.
Specifically, you could put all the test functions in test_func3.py into the same xdist_group, e.g. with a module-level pytestmark, as sketched below.
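For example, a module-level pytestmark in test_func3.py would put every test in that file into one group (the group name here is an arbitrary choice):
# test_func3.py
import pytest

# With --dist=loadgroup, all tests in this module run on the same worker.
pytestmark = pytest.mark.xdist_group(name="test_func3_io")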
For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order?
The easiest way to control the order in which fixtures are executed is to just request the previous fixture in the later fixture. So to make sure b runs before a:
@pytest.fixture(autouse=True, scope="function")
def b():
    pass

@pytest.fixture(scope="function")
def a(b):
    pass
For details on the general fixture resolution order, see Maxim's excellent answer below or have a look at the documentation.
TL;DR
There are three aspects considered together when building the fixture evaluation order; the aspects themselves are listed in order of priority:
Fixture dependencies - fixtures are evaluated starting from the deepest dependency and working back to the one requested directly by the test function.
Fixture scope - fixtures are evaluated from session-scoped through module-scoped to function-scoped. Within the same scope, autouse fixtures are evaluated before non-autouse ones.
Fixture position in test function arguments - fixtures are evaluated in the order they appear in the test function's arguments, from leftmost to rightmost.
The official explanation, with a code example, is at the link below:
https://docs.pytest.org/en/stable/fixture.html#fixture-instantiation-order
UPDATE
The current documentation mentions only three factors and discourages relying on anything else (such as the order of the test function's arguments):
scope
dependencies
autouse
The link above is outdated; see the updated link.
I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. For broader scopes (e.g. module, session), a fixture is executed when pytest encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb but not the one named sa, then sb will be executed first. When the next test runs, it will kick off sa, assuming it requires sa.
IIRC you can rely on higher-scoped fixtures being executed first. So if you created a session-scoped autouse fixture to monkeypatch smtplib.SMTP.connect, then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is to create your own smtpserver fixture which depends on both the disallow_smtp fixture and the smtpserver fixture from pytest-localserver, and then handles all the setup and teardown required to make the two work together (a rough sketch is below).
This is vaguely how pytest-django handles its database access, by the way; you could try looking at the code there, but it is far from a simple example and has many of its own weird things.
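To make that concrete, here is a rough sketch of the arrangement. It assumes the smtpserver fixture from pytest-localserver exposes its listening address as smtpserver.addr (please verify against your version), and the fixture names are my own:
import smtplib
import pytest

_real_connect = smtplib.SMTP.connect  # saved before any patching

@pytest.fixture(autouse=True)
def disallow_smtp(monkeypatch):
    # Fail any test that unexpectedly tries to open an SMTP connection.
    def guard(self, *args, **kwargs):
        pytest.fail("Unexpected attempt to send email via smtplib")
    monkeypatch.setattr(smtplib.SMTP, "connect", guard)

@pytest.fixture
def mail_server(disallow_smtp, smtpserver, monkeypatch):
    # Because this fixture requests disallow_smtp, it runs after it, so this
    # patch wins: redirect every connection to the local test server instead.
    host, port = smtpserver.addr  # assumption: pytest-localserver exposes addr
    def connect_to_test_server(self, *args, **kwargs):
        return _real_connect(self, host, port)
    monkeypatch.setattr(smtplib.SMTP, "connect", connect_to_test_server)
    return smtpserver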
With the pytest-order plugin, the @pytest.mark.order marker can be used to set the execution order of test functions (note that it orders tests, not fixtures): @pytest.mark.order(1) runs first, @pytest.mark.order(2) runs second, as sketched below.
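A minimal example, assuming the pytest-order plugin is installed:
import pytest

@pytest.mark.order(2)
def test_runs_second():
    pass

@pytest.mark.order(1)
def test_runs_first():
    pass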
I'm currently in the process of writing some unit tests that I want to run constantly, every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas?
We use a continuous integration server, Jenkins, for this kind of task. It has cron-like scheduling and can send an email when a build becomes unstable (i.e. a test fails). There is an extension to Python's unittest module that produces a JUnit-style XML report, which Jenkins supports.
In the end, I wound up running the tests and returning the TestResult object. I then look at the failures attribute of that object and run post-processing on each test in the suite that failed. This works well enough for me and lets me custom-design my post-processing.
For any extra metadata per test that I need, I subclass unittest.TestResult and override the addFailure method to record whatever extra information I need.
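For reference, a rough sketch of that setup; the tests directory path and the alerting step are placeholders:
import traceback
import unittest

class AlertingTestResult(unittest.TestResult):
    """TestResult subclass that records extra per-test metadata on failure."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.failure_details = []

    def addFailure(self, test, err):
        super().addFailure(test, err)
        exc_type, exc_value, tb = err  # err is an exc_info tuple
        self.failure_details.append({
            "test_id": test.id(),
            "message": str(exc_value),
            "traceback": "".join(traceback.format_exception(exc_type, exc_value, tb)),
        })

def run_and_alert():
    suite = unittest.defaultTestLoader.discover("tests")  # placeholder path
    result = AlertingTestResult()
    suite.run(result)
    for detail in result.failure_details:
        # Placeholder: send an alert however your environment requires.
        print("ALERT: %s failed\n%s" % (detail["test_id"], detail["traceback"]))
    return result

if __name__ == "__main__":
    run_and_alert()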