In which order are pytest fixtures executed?

For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture that logs those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture).
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order?

The easiest way to control the order in which fixtures are executed is to simply request the earlier fixture from the later one. So to make sure b runs before a:

@pytest.fixture(autouse=True, scope="function")
def b():
    pass

@pytest.fixture(scope="function")
def a(b):
    pass
For details on the general fixture resolution order, see Maxim's excellent answer below or have a look at the documentation.

TL;DR
There are 3 aspects that are considered together when building the fixture evaluation order; they are listed here in order of priority:
Fixture dependencies - fixtures are evaluated from the most deeply required one back to the one required directly by the test function.
Fixture scope - fixtures are evaluated from session through module down to function scope. Within the same scope, autouse fixtures are evaluated before non-autouse ones.
Fixture position in test function arguments - fixtures are evaluated in their order of appearance in the test function's arguments, from left to right.
Official explanation with a code example at the link below:
https://docs.pytest.org/en/stable/fixture.html#fixture-instantiation-order
UPDATE
The current documentation mentions only 3 factors as being considered, while discouraging reliance on any other factors (such as the order in the test function's arguments):
scope
dependencies
autouse
The link above is outdated; see the updated link in the current pytest documentation.
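
As an illustration, here is a minimal sketch of those rules in action (the fixture names s, a and f are invented for this example):

import pytest

order = []

@pytest.fixture(scope="session")
def s():
    order.append("s")

@pytest.fixture(autouse=True)
def a():
    order.append("a")

@pytest.fixture
def f(s):
    order.append("f")

def test_order(f):
    # session scope beats function scope; within function scope,
    # the autouse fixture runs before the non-autouse one
    assert order == ["s", "a", "f"]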

I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. Considering broader scopes (e.g. module, session), a fixture is executed when pytest first encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb and not the one named sa, then sb will get executed first. When the next test runs, it will kick off sa, assuming it requires sa.
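A minimal sketch of that lazy instantiation (fixture names taken from the comment above, test names invented):

import pytest

@pytest.fixture(scope="session")
def sa():
    print("sa set up")

@pytest.fixture(scope="session")
def sb():
    print("sb set up")

def test_one(sb):
    pass  # sb is instantiated here, because this test runs first

def test_two(sa):
    pass  # sa is only instantiated once this test runs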

IIRC you can rely on higher-scoped fixtures being executed first. So if you created a session-scoped autouse fixture to monkeypatch smtplib.SMTP.connect, then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is to create your own smtpserver fixture which depends on both the disallow_smtp fixture as well as the smtpserver fixture from pytest-localserver, and then handles all the setup and teardown required to make the two work together.
This is vaguely how pytest-django handles its database access, by the way. You could try and look at the code there, but it is far from a simple example and has many of its own weird things.
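A rough sketch of that approach; the fixture names disallow_smtp and smtp_logging are invented, and it assumes pytest-localserver's smtpserver fixture exposes its bound address as smtpserver.addr:

import smtplib
import pytest

@pytest.fixture(scope="session", autouse=True)
def disallow_smtp():
    real_connect = smtplib.SMTP.connect
    def guard(self, *args, **kwargs):
        raise RuntimeError("unexpected attempt to send an email")
    smtplib.SMTP.connect = guard
    yield real_connect  # hand the real method to overriding fixtures
    smtplib.SMTP.connect = real_connect

@pytest.fixture
def smtp_logging(disallow_smtp, smtpserver):
    # temporarily re-route SMTP connections to the local test server
    def to_local_server(self, host=None, port=None):
        return disallow_smtp(self, *smtpserver.addr)
    guard = smtplib.SMTP.connect
    smtplib.SMTP.connect = to_local_server
    yield smtpserver
    smtplib.SMTP.connect = guard  # re-arm the guard for the remaining tests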

The order marker from the pytest-order plugin lets you easily set the execution order of test functions (note that it orders tests, not fixtures), e.g.:
executes first:
@pytest.mark.order(1)
executes second:
@pytest.mark.order(2)
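
For instance, a minimal sketch assuming pytest-order is installed (pip install pytest-order):

import pytest

@pytest.mark.order(2)
def test_runs_second():
    pass

@pytest.mark.order(1)
def test_runs_first():
    pass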

Related

Dynamically parametrizing a Pytest fixture based on options/configuration

I have a basic Pytest test suite working, and I need to dynamically parametrize a fixture at runtime based on configuration/arguments.
The "heart" of my test suite is a session-scoped fixture that does some very expensive initialization based on some custom CLI arguments, and all of the tests use this fixture. To oversimplify a bit, think of the fixture as providing a remote connection to a server to run the tests on, the initialization involves configuring the OS/dependencies on that server (hence why it's expensive and session-scoped), and the CLI arguments specify aspects of the OS/environment configuration.
This works great for individual test runs, but now I want to take this test suite and run the entire thing over a list of different test configurations. The list of configurations needs to be determined at runtime based on a configuration file and some additional CLI arguments.
As I understand it, the "obvious" way of doing this kind of thing in Pytest is by parametrizing my initialization fixture. But the problem is my need to do it dynamically based on other arguments; it looks like you can call a function to populate the params list in @pytest.fixture, but that function wouldn't have access to the Pytest CLI args, since arg parsing hasn't happened yet.
I've managed to figure out how to do this by implementing pytest_generate_tests to dynamically parametrize each test using indirect parametrization, and taking advantage of fixture caching to ensure that each unique fixture configuration is only actually initialized once, even though the fixture is actually being parametrized for each test. This is working, although it's questionable whether Pytest officially supports this. Is there a better way of achieving what I'm trying to do here?
The "Right" way
def generate_parameters():
    # This is just a plain function, and arg parsing hasn't happened yet,
    # so I can't use Pytest options to generate my parameters
    pass

@pytest.fixture(scope="session", params=generate_parameters())
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass
The way that works
@pytest.fixture(scope="session")
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        parameters = []  # Generate parameters based on config in metafunc.config
        metafunc.parametrize('expensive_init_fixture', parameters, scope="session", indirect=True)
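
This pattern does work with CLI options, since arg parsing has already happened by the time pytest_generate_tests runs. A sketch of the wiring in conftest.py (--target-env is a hypothetical option invented for illustration):

# conftest.py
def pytest_addoption(parser):
    # hypothetical option; replace with your real configuration arguments
    parser.addoption("--target-env", action="append", default=[],
                     help="environment configuration(s) to run the suite against")

def pytest_generate_tests(metafunc):
    if "expensive_init_fixture" in metafunc.fixturenames:
        params = metafunc.config.getoption("--target-env") or ["default"]
        metafunc.parametrize("expensive_init_fixture", params,
                             scope="session", indirect=True)

Because the fixture is session-scoped, pytest caches one instance per unique parameter, so each configuration is only initialized once even though every test gets parametrized.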
I'm not sure if what I'm suggesting is what you're looking for, but using pytest-testconfig sounds like a good solution.
testconfig lets you use global config variables in all tests; you can set their values however you like, meaning you can get them from the CLI.
This means you can have a default config value when you start a run, and if you want to change it for the next run, you take the parameter from the CLI.
I'm not sure what the motive behind this is, but you should consider whether you really need a global_config.py file here; changing test values between runs shouldn't be something configurable from the CLI. Tests are meant to be persistent, and if something needs changing then a commit is a better solution, since you can see its history.

pytest: How to execute setup code only if a certain fixture is loaded

I'm developing a database-based Flask application (using Flask-SQLAlchemy). I use fixtures to define individual pieces of test data.
@pytest.fixture(scope='class')
def model_a(db):
    a = ModelA()
    db.session.add(a)
    db.session.commit()
    return a

@pytest.fixture(scope='class')
def model_b(db, model_a):
    b = ModelB(a=model_a)
    db.session.add(b)
    db.session.commit()
    return b

# …
While it works to call db.session.commit() for each and every test object, it would be more efficient to call it only once, right before executing the actual tests.
Is there a way to run db.session.commit() before every test, after all fixtures are loaded, but only if the test directly or indirectly needs db?
Things that I don't think will work:
A pytest_runtest_setup hook doesn't seem to be able to access fixtures or to determine whether the db fixture is loaded for/required by the test.
An autouse fixture would need to depend on db and would thus make all tests use the db fixture. Also, I couldn't find a way to make it execute last.
You can't specify fixture ordering other than indirectly (fixtures depending on other fixtures); see the discussion in issue #1216. You can access both fixture names and fixture values in hooks though, so using a hook is actually a good idea. However, pytest_runtest_setup is too early for all fixtures to be executed; use pytest_pyfunc_call instead. Example:
from _pytest.fixtures import FixtureRequest

def pytest_pyfunc_call(pyfuncitem):
    if 'db' in pyfuncitem.fixturenames:
        db = FixtureRequest(pyfuncitem, _ispytest=True).getfixturevalue('db')
        db.session.commit()
    # ensure this hook returns None, or your underlying test function won't be executed.
Warning: pytest.FixtureRequest() is considered non-public by the pytest maintainers for anything except type-hinting. That’s why its use issues a deprecation warning without the _ispytest=True flag. The API might change without a major release. There is no public API yet that supports this use-case. (Last updated for pytest 6.2.5)
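If you want to avoid the private constructor entirely, an alternative sketch is to read the already-created fixture instance out of pyfuncitem.funcargs; that the resolved fixture values are available there at this point is an implementation detail, so treat this with the same caution:

def pytest_pyfunc_call(pyfuncitem):
    if 'db' in pyfuncitem.fixturenames:
        # funcargs maps fixture names to their already-created values
        db = pyfuncitem.funcargs['db']
        db.session.commit()
    # returning None lets pytest run the test function as usual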

Pytest plugin: Overriding pytest_runtest_call and friends

I'm developing a test suite using pytest for a project of mine. Because of the nature of the project, I need to create a Pytest plugin that controls how the tests are being run; they are not run locally, but sent to a different process to run. (I know about xdist but I think it doesn't solve my problem.)
I've been writing my own Pytest plugin by overriding the various pytest_runtest_* hooks. So far it's been progressing well. Here is where I've hit a wall: I want my implementations of pytest_runtest_setup, pytest_runtest_call and pytest_runtest_teardown to actually be responsible for doing the setup, call and teardown. They're going to do it in a different process. My problem is: after Pytest calls my pytest_runtest_setup, it also calls all the other pytest_runtest_setup implementations down the line of plugins. This is because the hook specification for pytest_runtest_setup has firstresult=False.
I don't want this, because I don't want pytest_runtest_setup to actually run on the current process. I want to be responsible for running it on my own. I want to override how it's being run, not add to it. I want the other implementations of pytest_runtest_setup below my own to not be run.
How can I do this?
Generic “runtest” hooks
All runtest related hooks receive a pytest.Item object.
pytest_runtest_protocol(item, nextitem)[source]
implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
Parameters:
item – test item for which the runtest protocol is performed.
nextitem – the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().
Return boolean:
True if no further hook implementations should be invoked.
pytest_runtest_setup(item)[source]
called before pytest_runtest_call(item).
pytest_runtest_call(item)[source]
called to execute the test item.
pytest_runtest_teardown(item, nextitem)[source]
called after pytest_runtest_call.
Parameters: nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
pytest_runtest_makereport(item, call)[source]
return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.
The _pytest.terminal reporter specifically uses the reporting hook to print information about a test run.
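
To answer the question directly: implement pytest_runtest_protocol yourself and return True, and the default setup/call/teardown implementations will never run for your items. A minimal sketch, with the remote-execution and reporting parts left as placeholders:

def pytest_runtest_protocol(item, nextitem):
    item.ihook.pytest_runtest_logstart(nodeid=item.nodeid,
                                       location=item.location)
    # ... hand the item off to the remote process here ...
    # ... build and log reports for the remote results here ...
    item.ihook.pytest_runtest_logfinish(nodeid=item.nodeid,
                                        location=item.location)
    return True  # stop pytest from running the default runtest protocol

Note that the reporting hooks the default runner would normally call (pytest_runtest_logreport and friends) must then be invoked by your plugin, otherwise the terminal reporter has nothing to print.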

Mocking decorators in python with mock and pytest

How do I patch my custom decorator with monkeypatch or pytest-mock? I managed to mock it by doing (the answer to this question):
package.decorator = mytestdecorator
The problem is that it breaks some other tests where I actually need that decorator to work.
You have to control the complete lifecycle of your mocked decorator and revert the decorator back to its original state.
It can be done in a few different ways:
a context manager which builds the mocked decorator and reverts it back in __exit__.
setup and teardown functions for your test; the teardown must revert the decorator.
a pytest fixture with a finalizer
a pytest fixture with a yield expression.
Personally I like the yield-style fixture (@pytest.fixture with a yield; the older @pytest.yield_fixture alias is deprecated), as it keeps the code short, and as soon as you realize that everything after the yield statement in the fixture code is cleanup code, things are very clear to do.
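
A minimal sketch of that yield-style fixture, reusing the package.decorator name from the question (the no-op replacement is just an illustration):

import pytest
import package  # the module whose decorator the tests need to replace

@pytest.fixture
def mocked_decorator():
    original = package.decorator
    package.decorator = lambda func: func  # no-op stand-in for this test
    yield
    # everything after the yield is cleanup: restore the real decorator
    package.decorator = original

Keep in mind that patching a decorator only affects functions decorated after the patch is applied; anything already decorated at import time keeps its original behavior.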
