I recently started using pytest fixtures and have a setup function which brings up and configures various nodes in the network.
For automated test cases, creating this setup is a one-time activity and is error prone as well. Since it only needs to happen once, the scope is set to "module" or "session".
Following the pytest documentation's recommendation, I implemented the fixture somewhat like this:
class xxx:
    @pytest.fixture(scope="module", autouse=True)
    def setup(self, request):
        # ----- lines of setup code -----

        def teardown():
            pass  # ----- code for teardown -----

        request.addfinalizer(teardown)
How do I handle the situation when an error occurs in the setup fixture? Ideally it should invoke the teardown and stop execution at that point, i.e. the remaining test cases should not run.
Thanks for your reply.
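For reference, one way to get that behaviour (a minimal sketch, not your actual code; bring_up_nodes, configure_nodes and teardown_nodes are hypothetical placeholders): register the finalizer before doing anything that can fail, so pytest still runs it when the fixture raises, and call pytest.exit() if the whole session should stop rather than just the dependent tests erroring out.

import pytest

def bring_up_nodes():      # placeholder for the real setup steps
    ...

def configure_nodes():     # placeholder for the real setup steps
    ...

def teardown_nodes():      # placeholder for the real teardown steps
    ...

@pytest.fixture(scope="module", autouse=True)
def setup(request):
    # Register the finalizer first: pytest runs finalizers that were added
    # before the point of failure, so teardown happens even if setup raises.
    request.addfinalizer(teardown_nodes)
    try:
        bring_up_nodes()
        configure_nodes()
    except Exception as exc:
        # Without this, tests depending on the fixture are reported as errors
        # and skipped; pytest.exit() aborts the whole session instead.
        pytest.exit(f"setup failed, aborting run: {exc}")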
I have a basic Pytest test suite working, and I need to dynamically parametrize a fixture at runtime based on configuration/arguments.
The "heart" of my test suite is a session-scoped fixture that does some very expensive initialization based on some custom CLI arguments, and all of the tests use this fixture. To oversimplify a bit, think of the fixture as providing a remote connection to a server to run the tests on, the initialization involves configuring the OS/dependencies on that server (hence why it's expensive and session-scoped), and the CLI arguments specify aspects of the OS/environment configuration.
This works great for individual test runs, but now I want to take this test suite and run the entire thing over a list of different test configurations. The list of configurations needs to be determined at runtime based on a configuration file and some additional CLI arguments.
As I understand it, the "obvious" way of doing this kind of thing in Pytest is by parametrizing my initialization fixture. But the problem is my need to do it dynamically based on other arguments; it looks like you can call a function to populate the params list in @pytest.fixture, but that function wouldn't have access to the Pytest CLI args, since arg parsing hasn't happened yet.
I've managed to figure out how to do this by implementing pytest_generate_tests to dynamically parametrize each test using indirect parametrization, and taking advantage of fixture caching to ensure that each unique fixture configuration is only actually initialized once, even though the fixture is actually being parametrized for each test. This is working, although it's questionable whether Pytest officially supports this. Is there a better way of achieving what I'm trying to do here?
The "Right" way
def generate_parameters():
    # This is just a plain function, and arg parsing hasn't happened yet,
    # so I can't use Pytest options to generate my parameters
    pass

@pytest.fixture(scope="session", params=generate_parameters())
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass
The way that works
@pytest.fixture(scope="session")
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        parameters = []  # Generate parameters based on config in metafunc.config
        metafunc.parametrize('expensive_init_fixture', parameters, scope="session", indirect=True)
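To fill in the parameter-generation step: by the time pytest_generate_tests runs, argument parsing has already happened, so custom CLI options are available via metafunc.config.getoption(). A sketch under assumptions (the --target-os option and environments.json file are hypothetical, not part of the original suite):

# conftest.py
import json

def pytest_addoption(parser):
    parser.addoption("--target-os", action="store", default=None,
                     help="restrict the run to one OS configuration")

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        # Unlike at @pytest.fixture decoration time, CLI args are parsed here.
        target_os = metafunc.config.getoption("--target-os")
        with open("environments.json") as f:
            environments = json.load(f)
        parameters = [env for env in environments
                      if target_os is None or env["os"] == target_os]
        metafunc.parametrize('expensive_init_fixture', parameters,
                             scope="session", indirect=True)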
I'm not sure if what I'm suggesting is what you're looking for, but using pytest_testconfig sounds like a good solution.
testconfig lets you use global config variables in all tests; you can set their values however you like, which means you can take them from the CLI.
This means you can have default config values when you start a run, and if you want to change them for the next run, you take the parameters from the CLI.
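If you would rather not depend on the plugin, the same idea can be sketched with plain pytest: register a CLI option in conftest.py and expose it through a session-scoped fixture. This is not the pytest_testconfig API, just a hand-rolled equivalent, and --env-config is a made-up option name:

# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--env-config", action="store", default="default",
                     help="name of the environment configuration to test against")

@pytest.fixture(scope="session")
def env_config(pytestconfig):
    # Any test or fixture that needs the value just requests env_config.
    return pytestconfig.getoption("--env-config")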
I'm not sure what the motive behind this is, but you should consider whether you really need a global_config.py file here. Changing test values between runs shouldn't be something configurable from the CLI; tests are meant to be persistent, and if something needs changing then a commit is a better solution, since you can see its history.
I'm developing a database-backed Flask application (using flask-sqlalchemy). I use fixtures to define individual pieces of test data.
@pytest.fixture(scope='class')
def model_a(db):
    a = ModelA()
    db.session.add(a)
    db.session.commit()
    return a

@pytest.fixture(scope='class')
def model_b(db, model_a):
    b = ModelB(a=model_a)
    db.session.add(b)
    db.session.commit()
    return b
# …
While it works to call db.session.commit() for each and every test object, it would be more efficient to call it only once, right before executing the actual tests.
Is there a way to run db.session.commit() before every test, after all fixtures are loaded, but only if the test directly or indirectly needs db?
Things that I don't think will work:
A pytest_runtest_setup hook doesn't seem to be able to access fixtures or to determine whether the db fixture is loaded for/required by the test.
An autouse fixture would need to depend on db and would thus make all tests use the db fixture. Also, I couldn't find a way to make it execute last.
You can't specify fixture ordering other than indirectly (fixtures depending on other fixtures); see the discussion in issue #1216. You can access both fixture names and fixture values in hooks though, so using a hook is actually a good idea. However, pytest_runtest_setup is too early for all fixtures to have been executed; use pytest_pyfunc_call instead. Example:
from _pytest.fixtures import FixtureRequest

def pytest_pyfunc_call(pyfuncitem):
    if 'db' in pyfuncitem.fixturenames:
        db = FixtureRequest(pyfuncitem, _ispytest=True).getfixturevalue('db')
        db.session.commit()
    # Ensure this hook returns None, or the underlying test function won't be executed.
Warning: pytest.FixtureRequest() is considered non-public by the pytest maintainers for anything except type-hinting. That’s why its use issues a deprecation warning without the _ispytest=True flag. The API might change without a major release. There is no public API yet that supports this use-case. (Last updated for pytest 6.2.5)
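An alternative sketch that avoids constructing FixtureRequest yourself is to read the already-resolved fixture value from pyfuncitem.funcargs; note this attribute is also pytest internals rather than documented public API, so treat it as an assumption as well:

def pytest_pyfunc_call(pyfuncitem):
    if 'db' in pyfuncitem.fixturenames:
        # The fixture has already been set up by the time this hook runs.
        db = pyfuncitem.funcargs['db']
        db.session.commit()
    # Return None so pytest still calls the underlying test function.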
I am setting up the test platform for a multi-tenant system.
For each written test, I want to create a test per tenant and wrap the entire test run so that the database connection and some thread-local variables are changed before the test, without breaking the database teardown where it flushes.
During my trial-and-error process I've resorted to climbing higher and higher in the pytest hook chain: I started with pytest_generate_tests to create a test for each tenant with an associated fixture, but the teardown failed, so I've ended up at the following idea:
def pytest_runtestloop(session):
    for tenant in settings.TENANTS.keys():
        with set_current_tenant(tenant):
            with environ({'DJANGO_CONFIGURATION': f'Test{tenant.capitalize()}Config'}):
                session.config.pluginmanager.getplugin("main").pytest_runtestloop(session)
    return True
Although this does not work (since django-configurations loads the settings during the earlier pytest_load_initial_conftests phase), this example should give an idea of what I am trying to achieve.
The big roadblock: the default database connection needs to point to the current tenant's database before any fixtures are loaded and after the flush has run.
I have disabled pytest-django's default session fixture mechanism and plan on using an external database for tests:
@pytest.fixture(scope='session')
def django_db_setup():
    pass
I could have a wrapper python script that calls pytest multiple times with the right config, but I would lose a lot of nice tooling.
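For completeness, such a wrapper could look roughly like the sketch below (the tenant names and the Test<Tenant>Config naming are assumptions carried over from the snippet above); the drawback is exactly the one mentioned: each tenant becomes a separate pytest session, so session-scoped fixtures, collection and reporting no longer span the whole run.

# run_all_tenants.py
import os
import subprocess
import sys

TENANTS = ["alpha", "beta"]  # placeholder for settings.TENANTS.keys()

exit_code = 0
for tenant in TENANTS:
    env = dict(os.environ, DJANGO_CONFIGURATION=f"Test{tenant.capitalize()}Config")
    result = subprocess.run([sys.executable, "-m", "pytest", *sys.argv[1:]], env=env)
    exit_code = exit_code or result.returncode

sys.exit(exit_code)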
I am trying to write tests for an aiohttp application using the pytest-aiohttp plugin. My intention is to initialize and run the application once before the first test executes and tear it down after all tests have finished. The pytest-aiohttp fixtures like 'loop' and 'test_client' are very helpful, but they have scope='function', which means I can not use them from my own fixture with scope='session'. Is there a way to work around this? And if not, what would be a proper approach for achieving my goal without using the built-in fixtures?
My code is as follows (conftest.py)
@pytest.fixture()
def client(test_client, loop):
    app = init(loop)
    return loop.run_until_complete(test_client(app))
My tests then use this
class TestGetAlerts:
    async def test_get_login_url(self, client):
        resp = await client.get('/api/get_login_url')
        assert resp.status == 200
So my fixture 'client' runs for every test, which is what I want to avoid.
The test_client fixture is a simple wrapper around the TestServer and TestClient classes from aiohttp.test_utils.
You can craft your own version of the fixture with 'session' scope.
But this approach has its own problems: tests should be isolated, which in practice means recreating the event loop for every test.
A session-level aiohttp application doesn't support such loop recreation, so the app would have to run in a separate thread, which makes writing test assertions much harder.
In my practice the aiohttp application itself starts instantly, but things like DB schema migration and applying DB fixtures take time. Those activities can easily be implemented as separate session-scoped fixtures, while app starting/stopping should be performed inside every test.
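A sketch of that split, keeping the function-scoped client fixture from the question and moving only the slow parts into session scope (apply_migrations and load_db_fixtures are hypothetical helpers, and init is the app factory from the question):

import pytest

def apply_migrations():      # hypothetical: slow, one-off schema setup
    ...

def load_db_fixtures():      # hypothetical: slow, one-off test data loading
    ...

@pytest.fixture(scope='session')
def prepared_database():
    # Runs once per test session.
    apply_migrations()
    load_db_fixtures()

@pytest.fixture()
def client(test_client, loop, prepared_database):
    # Still function-scoped: the app starts and stops for each test,
    # but the expensive database work above is not repeated.
    app = init(loop)
    return loop.run_until_complete(test_client(app))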
For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order?
The easiest way to control the order in which fixtures are executed is to just request the earlier fixture in the later fixture. So to make sure b runs before a:
@pytest.fixture(autouse=True, scope="function")
def b():
    pass

@pytest.fixture(scope="function")
def a(b):
    pass
For details on the general fixture resolution order, see Maxim's excellent answer below or have a look at the documentation.
TL;DR
There are 3 aspects considered together when building the fixture evaluation order; the aspects themselves are listed in order of priority:
Fixture dependencies - fixtures are evaluated from the roots of the dependency graph down to the ones directly required by the test function.
Fixture scope - fixtures are evaluated from session-scoped through module-scoped to function-scoped. Within the same scope, autouse fixtures are evaluated before non-autouse ones.
Fixture position in test function arguments - fixtures are evaluated in the order in which they appear in the test function's arguments, from left to right.
The official explanation with a code example is at the link below:
https://docs.pytest.org/en/stable/fixture.html#fixture-instantiation-order
UPDATE
The current documentation mentions only 3 factors as being considered, while discouraging reliance on other factors (like the order in test function arguments):
scope
dependencies
autouse
The link above is outdated; see the updated link.
I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. Considering broader scopes (e.g. module, session), a fixture is executed when pytest encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb and not the one named sa, then sb will get executed first. When the next test runs, it will kick off sa, assuming it requires sa.
IIRC you can rely on higher-scoped fixtures being executed first. So if you created a session-scoped autouse fixture to monkeypatch smtplib.SMTP.connect, then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is to create your own smtpserver fixture which depends on both the disallow_smtp fixture as well as the smtpserver fixture from pytest-localserver, and then handles all setup and teardown required to make these two work together.
This is vaguely how pytest-django handles its database access, by the way; you could try looking at the code there, but it is far from a simple example and has many of its own quirks.
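A minimal sketch of the first half of that idea (the names disallow_smtp and allow_smtp are made up here, and the pytest-localserver integration is left out):

import smtplib
import pytest

@pytest.fixture(autouse=True, scope="session")
def disallow_smtp():
    # Guard SMTP.connect for the whole session so unexpected emails fail loudly.
    original_connect = smtplib.SMTP.connect

    def guarded_connect(self, *args, **kwargs):
        raise AssertionError("test tried to open an SMTP connection unexpectedly")

    smtplib.SMTP.connect = guarded_connect
    yield original_connect                    # hand the real method to fixtures that need it
    smtplib.SMTP.connect = original_connect   # session teardown: undo the patch

@pytest.fixture()
def allow_smtp(disallow_smtp):
    # Function-scoped: temporarily restore the real connect for a test that is
    # expected to send email, then reinstate the guard afterwards.
    guard = smtplib.SMTP.connect
    smtplib.SMTP.connect = disallow_smtp
    yield
    smtplib.SMTP.connect = guard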
You can also set the execution order of test functions explicitly with the order marker (provided by an ordering plugin such as pytest-order), e.g.:

# executed first
@pytest.mark.order(1)
def test_first():
    ...

# executed second
@pytest.mark.order(2)
def test_second():
    ...