I have a basic Pytest test suite working, and I need to dynamically parametrize a fixture at runtime based on configuration/arguments.
The "heart" of my test suite is a session-scoped fixture that does some very expensive initialization based on some custom CLI arguments, and all of the tests use this fixture. To oversimplify a bit, think of the fixture as providing a remote connection to a server to run the tests on, the initialization involves configuring the OS/dependencies on that server (hence why it's expensive and session-scoped), and the CLI arguments specify aspects of the OS/environment configuration.
This works great for individual test runs, but now I want to take this test suite and run the entire thing over a list of different test configurations. The list of configurations needs to be determined at runtime based on a configuration file and some additional CLI arguments.
As I understand it, the "obvious" way of doing this kind of thing in Pytest is by parametrizing my initialization fixture. But the problem is that I need to do it dynamically based on other arguments; it looks like you can call a function to populate the params list in @pytest.fixture, but that function wouldn't have access to the Pytest CLI args, since arg parsing hasn't happened yet.
I've managed to figure out how to do this by implementing pytest_generate_tests to dynamically parametrize each test using indirect parametrization, and taking advantage of fixture caching to ensure that each unique fixture configuration is only actually initialized once, even though the fixture is actually being parametrized for each test. This is working, although it's questionable whether Pytest officially supports this. Is there a better way of achieving what I'm trying to do here?
The "Right" way
import pytest

def generate_parameters():
    # This is just a plain function, and arg parsing hasn't happened yet,
    # so I can't use Pytest options to generate my parameters
    pass

@pytest.fixture(scope="session", params=generate_parameters())
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass
The way that works
@pytest.fixture(scope="session")
def expensive_init_fixture(pytestconfig, request):
    # Do initialization based on CLI args and request.param
    pass

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        parameters = []  # Generate parameters based on config in metafunc.config
        metafunc.parametrize('expensive_init_fixture', parameters, scope="session", indirect=True)
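To make that concrete, here is a minimal sketch of how the parameters might come from a configuration file named on the command line. The --env-matrix option, its JSON format, and the file layout are illustrative assumptions, not part of pytest or of my actual suite:

# conftest.py (sketch)
import json

def pytest_addoption(parser):
    parser.addoption("--env-matrix", action="store", default=None,
                     help="Path to a JSON file listing environment configurations")

def pytest_generate_tests(metafunc):
    if 'expensive_init_fixture' in metafunc.fixturenames:
        with open(metafunc.config.getoption("--env-matrix")) as f:
            parameters = json.load(f)
        # Each unique parameter value gets its own cached, session-scoped fixture instance.
        metafunc.parametrize('expensive_init_fixture', parameters, scope="session", indirect=True)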
I'm not sure if what I'm suggesting is what you're looking for, but using pytest_testconfig sounds like a good solution.
testconfig lets you use global config variables in all tests; you can set their values however you like, meaning you can get them from the CLI.
This means you can have the default config values when you start a run, and if you want to change them in the next run, you take the param from the CLI.
I'm not sure what the motive behind this is, but you should check whether you really need a global_config.py file here; changing test values between runs shouldn't be something configurable from the CLI. Tests are meant to be persistent, and if something needs changing then a commit is a better solution, since you can see its history.
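If I remember the plugin's interface correctly, usage looks roughly like the sketch below; the option name and config layout are from memory, so please double-check them against the pytest-testconfig documentation:

# Run with a per-run override from the CLI (assumed option name):
#   pytest --tc=server_os:ubuntu

# In a test or fixture:
from testconfig import config

def test_uses_config():
    server_os = config["server_os"]  # value supplied via --tc on the command line
    assert server_os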
Related
I’m developing a database-backed Flask application (using flask-sqlalchemy). I use fixtures to define individual pieces of test data.
@pytest.fixture(scope='class')
def model_a(db):
    a = ModelA()
    db.session.add(a)
    db.session.commit()
    return a

@pytest.fixture(scope='class')
def model_b(db, model_a):
    b = ModelB(a=model_a)
    db.session.add(b)
    db.session.commit()
    return b

# …
While it works to call db.session.commit() for each and every test object, it would be more efficient to call it only once, right before executing the actual tests.
Is there a way to run db.session.commit() before every test, after all fixtures are loaded, but only if the test directly or indirectly needs db?
Things that I don’t think will work:
A pytest_runtest_setup hook doesn’t seem to be able to access fixtures or to determine whether the db fixture is loaded for/required by the test.
An autouse fixture would need to depend on db and thus make all tests use the db fixture. Also, I couldn’t find a way to make it execute last.
You can't specify fixture ordering other than indirectly (fixtures depending on other fixtures); see the discussion in issue #1216. You can access both fixture names and fixture values in hooks though, so using a hook is actually a good idea. However, pytest_runtest_setup is too early for all fixtures to have been executed; use pytest_pyfunc_call instead. Example:
from _pytest.fixtures import FixtureRequest

def pytest_pyfunc_call(pyfuncitem):
    if 'db' in pyfuncitem.fixturenames:
        # All fixtures have been set up by this point, so the cached 'db'
        # fixture value can be fetched and committed once per test.
        db = FixtureRequest(pyfuncitem, _ispytest=True).getfixturevalue('db')
        db.session.commit()
    # Ensure this hook returns None, or your underlying test function won't be executed.
Warning: pytest.FixtureRequest() is considered non-public by the pytest maintainers for anything except type-hinting. That’s why its use issues a deprecation warning without the _ispytest=True flag. The API might change without a major release. There is no public API yet that supports this use-case. (Last updated for pytest 6.2.5)
I'm new to testing with Python and am not sure if this is even possible.
I have a relatively long method that accepts an input, does some processing then sends the data to an API.
I would like to write a test that passes the input data to the method, runs the processing on the data, but does NOT send it to the API. So basically run a certain portion of the method, but not to the end.
Unfortunately I'm not even sure where to start, so I can't really provide relevant sample code - it would just be a standard unit test that runs a method with input and asserts the output.
You are taking the wrong approach. What you want to do is execute your test isolated from your external API function calls. Just mock your API calls. That means, run your test with the API calls replaced with mock methods. You don't need to change code under test, you can use a patch decorator to replace the API calls with mock objects. Please see the unittest.mock documentation and examples here
unittest.mock is very powerful, and can look a bit daunting or at least a bit puzzling at the beginning. Take your time to understand the kinds of things you can do with mocks in the documentation. A very simple example of one of the possibilities (in some test code):
from unittest.mock import patch

@patch('myproject.db.api.os.path.exists')
def test_init_db(self, mock_exists):
    ...
    # my mock will always return False
    mock_exists.return_value = False
    # now calls to myproject.db.api.os.path.exists
    # in the code under test act just like the db file does not exist
    ...
So you probably can bypass your external API calls (all of them or some of them) with ease. And you don't have to specify API results if you don't want to. Mocks exhibit "plastic" behaviour.
If you create a mock and call an arbitrary mock method you haven't even defined (think of the API methods you want to isolate), it will run fine and simply return another mock object. That is, it will do nothing, but its client code will still run as if it did. So you can run your tests while effectively disabling the parts you want.
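Applied to your case, a sketch might look like this. The names mymodule, process_and_send and send_to_api are placeholders for illustration; patch whatever path your code actually uses to reach the API:

from unittest.mock import patch
import mymodule  # hypothetical module containing the method under test

@patch('mymodule.send_to_api')  # the API call is replaced for the duration of the test
def test_processing_without_api(mock_send):
    result = mymodule.process_and_send({"name": "example"})
    # the processing ran, but the real API was never contacted
    mock_send.assert_called_once()
    assert result is not None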
For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order?
The easiest way to control the order in which fixtures are executed, is to just request the previous fixture in the later fixture. So to make sure b runs before a:
@pytest.fixture(autouse=True, scope="function")
def b():
    pass

@pytest.fixture(scope="function")
def a(b):
    pass
For details on the general fixture resolution order, see Maxim's excellent answer below or have a look at the documentation.
TL;DR
There are 3 aspects considered together when building the fixture evaluation order; the aspects themselves are listed in order of priority:
Fixture dependencies - a fixture's dependencies are evaluated first, from the deepest required fixture back to the one requested directly by the test function.
Fixture scope - fixtures are evaluated from session through module to function scope. Within the same scope, autouse fixtures are evaluated before non-autouse ones.
Fixture position in test function arguments - fixtures are evaluated in their order of appearance in the test function arguments, from left to right.
Official explanation with a code example at the link below:
https://docs.pytest.org/en/stable/fixture.html#fixture-instantiation-order
UPDATE
The current documentation mentions only 3 factors as being considered, while discouraging reliance on other factors (like the order in the test function arguments):
scope
dependencies
autouse
The link above is outdated; see the updated link.
I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. Considering broader scopes (e.g. module, session), a fixture is executed when pytest encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb and not the one named sa, then sb gets executed first. When the next test runs, it will kick off sa, assuming it requires sa.
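A minimal illustration of that first-use ordering (the fixture and test names here are just for demonstration):

import pytest

@pytest.fixture(scope="session")
def sa():
    print("sa set up")

@pytest.fixture(scope="session")
def sb():
    print("sb set up")

def test_one(sb):       # collected first, so sb is created first
    pass

def test_two(sa, sb):   # sb is already cached; only sa is created here
    pass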
IIRC you can rely on higher scoped fixtures to be executed first. So if you created a session scoped autouse fixture to monkeypatch smtplib.SMTP.connect then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is create your own smtpserver fixture which depends on both the disallow_smtp fixture as well as the smtpserver fixture from pytest-localserver and then handles all setup and teardown required to make these two work together.
This is vaguely how pytest-django handles its database access, btw. You could try and look at the code there, but it is far from a simple example and has many of its own weird things.
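A rough sketch of the arrangement described above, shown function-scoped for simplicity so the built-in monkeypatch fixture can be used (the answer describes a session-scoped variant). The fixture names and the smtpserver.addr attribute of pytest-localserver are assumptions here; check that plugin's documentation before relying on them:

import smtplib
import pytest

_original_connect = smtplib.SMTP.connect

@pytest.fixture(autouse=True)
def disallow_smtp(monkeypatch):
    # By default, any attempt to open a real SMTP connection fails the test.
    def guard(self, *args, **kwargs):
        raise RuntimeError("Unexpected attempt to send email during this test")
    monkeypatch.setattr(smtplib.SMTP, "connect", guard)

@pytest.fixture
def allowed_smtp(disallow_smtp, smtpserver, monkeypatch):
    # Depends on disallow_smtp so it always runs afterwards and can undo it,
    # redirecting mail to the local test server instead of failing.
    def to_test_server(self, host=None, port=None, *args, **kwargs):
        return _original_connect(self, smtpserver.addr[0], smtpserver.addr[1])
    monkeypatch.setattr(smtplib.SMTP, "connect", to_test_server)
    return smtpserver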
With the help of the code below we can easily set the execution order of fixtures / functions, e.g.:

# executes first
@pytest.mark.order(1)

# executes second
@pytest.mark.order(2)
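A complete runnable version; note that the order marker comes from an ordering plugin such as pytest-order rather than core pytest, and it orders the tests themselves, not the fixtures inside them:

import pytest

@pytest.mark.order(2)
def test_runs_second():
    assert True

@pytest.mark.order(1)
def test_runs_first():
    assert True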
I'm developing a test suite using pytest for a project of mine. Because of the nature of the project, I need to create a Pytest plugin that controls how the tests are being run; they are not run locally, but sent to a different process to run. (I know about xdist but I think it doesn't solve my problem.)
I've been writing my own Pytest plugin by overriding the various pytest_runtest_* methods. So far it's been progressing well. Here is where I've hit a wall: I want my implementations of pytest_runtest_setup, pytest_runtest_call and pytest_runtest_teardown to actually be responsible for doing the setup, call and teardown. They're going to do it in a different process. My problem is: After Pytest calls my pytest_runtest_setup, it also calls all the other pytest_runtest_setup down the line of plugins. This is because the hook specification for pytest_runtest_setup has firstresult=False.
I don't want this, because I don't want pytest_runtest_setup to actually run on the current process. I want to be responsible for running it on my own. I want to override how it's being run, not add to it. I want the other implementations of pytest_runtest_setup below my own to not be run.
How can I do this?
Generic “runtest” hooks
All runtest related hooks receive a pytest.Item object.
pytest_runtest_protocol(item, nextitem)[source]
implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
Parameters:
item – test item for which the runtest protocol is performed.
nextitem – the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().
Return boolean:
True if no further hook implementations should be invoked.
pytest_runtest_setup(item)[source]
called before pytest_runtest_call(item).
pytest_runtest_call(item)[source]
called to execute the test item.
pytest_runtest_teardown(item, nextitem)[source]
called after pytest_runtest_call.
Parameters: nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
pytest_runtest_makereport(item, call)[source]
return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.
The _pytest.terminal reporter specifically uses the reporting hooks to print information about a test run.
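In other words, to take over setup/call/teardown entirely, implement pytest_runtest_protocol yourself and return True so that no further implementations, including pytest's default local runner, are invoked. A rough sketch, where send_to_remote_runner is a placeholder for however you dispatch the item to the other process:

# conftest.py / plugin module (sketch)
def pytest_runtest_protocol(item, nextitem):
    item.ihook.pytest_runtest_logstart(nodeid=item.nodeid, location=item.location)
    # Placeholder: run setup/call/teardown in the remote process, then turn the
    # results into TestReport objects and feed them to pytest_runtest_logreport.
    send_to_remote_runner(item, nextitem)
    return True  # stop pytest from running the default (local) protocol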