Dynamically parametrizing test fixtures with addoption - python

I've got a test suite that is working well to test code across two separate databases (SQLite and Postgres). I want to extend this to run the exact same test suite across upgraded databases (to test that database schema upgrades are working as expected).
The upgrades to run are determined outside of pytest by a shell script that uses information from Git to work out which schema versions exist, compares them against the available upgrade scripts, and then invokes pytest. I'd like to use something like:
pytest --dbupgrade=v1 --dbupgrade=v2 tests/test-upgrades.py
I have the following in conftest.py:
def pytest_addoption(parser):
    parser.addoption(
        "--dbupgrade",
        action="append",
        default=[],
        help="list of base schema versions to upgrade",
    )
And I've been using parametrized fixtures for the other tests. I already have all the test cases written and working, and I'd like to avoid rewriting them to be parametrized themselves, which is what the pytest_generate_tests solutions I've found while searching seem to require. So where I could easily hardcode:
@pytest.fixture(params=['v1', 'v2'])
def myfixture(request):
    ...
I would like to do:
@pytest.fixture(params=pytest.config.option.get('dbupgrade'))
def myfixture(request):
    ...
However, the results from pytest_addoption are only available via the pytestconfig fixture or the config attribute attached to various objects, and I can't find a way to get at them in the fixture declaration itself, even though I believe they're available by that point.
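For reference, the appended values can be read at run time inside the fixture body via request.config, which at least confirms where they end up (a minimal sketch; it reads the option but doesn't help with parametrizing the fixture itself):

import pytest

@pytest.fixture
def myfixture(request):
    # returns the accumulated --dbupgrade values, e.g. ['v1', 'v2']
    versions = request.config.getoption("dbupgrade")
    ...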
Update (workaround)
I don't love it, but I'm pulling the necessary information from environment variables and that's working fine. Something like:
import os

# for this case I prefer this to fail noisily if the variable is missing
schema_versions = os.environ['SCHEMA_VERSIONS'].split(',')
...

@pytest.fixture(params=schema_versions)
def myfixture(request):
    ...
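The shell script then just needs to set the variable before invoking pytest, along the lines of:

SCHEMA_VERSIONS=v1,v2 pytest tests/test-upgrades.py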

Related

How do you run repeated routines/functions inside a test in pytest?

I know what pytest fixtures do and how they work, but they do things before and after the test. How do you do things inside the test in a composable fashion? I need to run tests composed of various smaller functions:
@pytest.mark.django_db
def my_example_test():
    # Check if user does not exist
    assert not Users.objects.filter(email='foo@bar.com').exists()
    # Do a bunch of things to sign up and commit a user to the db
    sign_up_routine()
    # Check if user exists.
    assert Users.objects.filter(email='foo@bar.com').exists()
    # Checkout a shopping cart
    checkout_shopping_cart(item="toothpaste", qty=10)
    # do some more checks
    ...
Now, in my case, a fixture doesn't work because it runs before the test case even starts. In general, I want to compose hundreds of tests like this:
Run a bunch of assert statements
Run a composable routine <--- how? function? fixture?
Assert more conditions
Run a different routine
What is a good way to structure composable tests like this in pytest? I am thinking of just writing a bunch of functions and giving them database access.
I'm sorry if the obvious solution is to just call functions, but I thought there was a pytest way to do this.
I think the short answer you're looking for is just use plain old functions! There's nothing wrong with that. If you want these reusable chunks of code to have access to other fixtures, just pass them through on invocation.
@pytest.fixture
def db_session():
    ...

def create_some_users(session):
    ...

def test_my_case(db_session):
    expected = ...
    create_some_users(db_session)
    actual = do_thing()
    assert actual == expected
I like to think of tests with the AAA pattern: Arrange, Act, and Assert. First we get the universe in order, then we fire our cannon off at it, and finally we check that everything is how we'd expect it to be at the end. It's ideal if all tests are kept simple like this. Fixtures are pytest's way of managing and sharing sets of resources and the instructions to arrange them in some way. This is why they always run at the start and, because we often want to do some related disposal afterwards, at the end. A nice side effect is that you can state the dependencies of a given test explicitly in its declaration and move the common surrounding (beginning and end) "Arrange" code out of the way, so the test is more easily read as "do X and expect Y".
For what you're looking for, you'd have to have some way to tell pytest when to run your reusable thing, since it could be at any midpoint within the test function, and at that point you might as well just use a normal function. For example, you could write a fixture that returns a callable (a function) and then invoke it inside the test, but there's not a ton of difference. You could also have fixtures return classes and encapsulate reusable logic that way. Any of these would work fine.
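A minimal sketch of the fixture-returning-a-callable idea (the db_session fixture and its add_user helper here are hypothetical):

import pytest

@pytest.fixture
def make_user(db_session):
    # the fixture returns a function; the test decides when to call it
    def _make_user(email):
        return db_session.add_user(email=email)
    return _make_user

def test_signup_flow(make_user):
    user = make_user('foo@bar.com')
    assert user is not None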
I restructured the test with fixtures as follows. Instead of running one test with all the steps in a linear fashion, I read through the fixtures documentation thoroughly and ended up with this:
@pytest.fixture(scope="function")
def sign_up_user(db):
    # Check if user does not exist
    assert not Users.objects.filter(email='foo@bar.com').exists()
    # Do a bunch of things to sign up and commit a user to the db
    # that were part of the sign_up_routine(). Just showing an example below:
    client = Client()
    resp = client.post('url', kwargs={'form': form})

@pytest.fixture(scope="function")
def assert_user_exists(db, sign_up_user):
    # Check if user exists. You can imagine a lot more things to assert here,
    # this is just the example from my original post.
    assert Users.objects.filter(email='foo@bar.com').exists()

@pytest.fixture(scope="function")
def checkout_shopping_cart(db, assert_user_exists):
    # Checkout shopping cart with 10 quantity of toothpaste
    ...

def test_order_in_database(db, checkout_shopping_cart):
    # Check if order exists in the database
    assert Orders.objects.filter(order__user__email='foo@bar.com').exists()
    # This is the final test that calls all previous fixtures.
    # Fixtures can now be used to compose tests in various ways. For example,
    # repeat 'sign_up_user' but checkout toothbrush instead of toothpaste.
I think this is pretty clean, though I'm not sure if this is the intended way to use pytest; I welcome feedback. I can now compose smaller bits of tests that run as fixtures calling other fixtures in a chain.
This is a toy example, but you can imagine testing a lot of database conditions in each of these fixtures. Please note the db fixture is needed by the pytest-django package for database access to work properly inside fixtures. Otherwise you'll get errors that aren't obvious ("use the django_db mark", which doesn't fix the problem); see here: pytest django: unable to access db in fixture teardown
Also, the fixture scope must be "function" so that each fixture runs again for every test instead of being cached.
More reading here: https://docs.pytest.org/en/6.2.x/fixture.html#running-multiple-assert-statements-safely

Maintaining two pyfakefs file systems in pytest with additional files

I'm trying to write unit tests using pytest for a configuration system that should look in a couple of different places for config files. I can use pyfakefs via its fs fixture to create a fixture that provides a set of files, including a config file in one location, but I'd like to unit test that both locations are checked and that the correct one is preferred.
My initial thought was that I could create the first fixture and then add a file:
@pytest.fixture()
def fake_files(fs):
    fs.create_file('/path/to/datafile', contents="Foo")
    yield fs

@pytest.fixture()
def user_config(fake_files):
    fake_files.create_file('/path/to/user/config', contents="Bar")
    yield fake_files
The idea is that any test using fake_files would not find the config, but any test using user_config would. However, unit tests using the user_config fixture do not find the file. Is this possible?
In reality there is quite a lot more set up in the first fixture, so maintaining the two systems as completely separate fixtures would duplicate code, and I'm not sure whether the underlying fs object can even be used in parallel.

pytest reuse fixture between projects

I want to create fixtures as library components.
A standard test database config is useful for several projects in different repos. It is currently copy/pasted into each independent project as they can't share a config.py.
I refactored the code into a pip installable library but can't work out an elegant way to use it in each project. This doesn't work:
import my_db_fixture

@pytest.fixture
def adapted_db_fixture(my_db_fixture):
    # adapt the test setup
    ...
For the real code, the fixture I want to re-use is built from other fixtures. The best workaround I've found so far is to create a local conftest.py with copy/pasted code, limited to importing functions and calling them from local fixture functions. I don't like copy/paste, and it unnecessarily exposes the inner workings of the fixtures.
It is possible to re-use fixtures from an installed library.
Define the fixtures as usual in the installable package. Then import them into a conftest.py local to the project. You need to import not just the fixture you want but also all the fixtures it depends on and, if used, pytest_addoption:
from my.package import (
    the_fixture_i_want,
    all_fixtures_it_uses,
    pytest_addoption,
)
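Alternatively, pytest can load the whole module as a plugin, which registers every fixture and hook it defines without importing them one by one. A sketch, assuming the fixtures live in my.package:

# top-level conftest.py
pytest_plugins = ["my.package"]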
I also discovered that you can't take the undecorated library fixture function (one with teardown code after its yield) and simply call it from a fixture in the local conftest.py:
# This doesn't work

# pip-installed my_fixture.py
def my_fixture(dependencies):
    # setup code
    yield fixture_object
    # teardown code

# local conftest.py
import pytest
import my_fixture as _my_fixture  # NB: the module

@pytest.fixture
def my_fixture(dependencies):
    return _my_fixture.my_fixture(dependencies)
    # teardown code isn't called: pytest sees the local function has no yield,
    # but doesn't realise that it is returning a generator nonetheless
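For completeness, a wrapper along these lines does work: delegating with yield from makes the local function a generator fixture, so pytest runs both the library fixture's setup and the teardown after its yield (a minimal sketch):

# local conftest.py
import pytest
import my_fixture as _my_fixture  # the installed module

@pytest.fixture
def my_fixture(dependencies):
    # yield from turns this wrapper into a generator fixture,
    # so the library's teardown code still runs
    yield from _my_fixture.my_fixture(dependencies)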
This article helped me:
peterhurford/pytest-fixture-modularization.md
I reckoned pytest should recognise something returning a generator as a generator, so I logged it as a bug. The comments responding to it may be useful:
call_fixture_func should test the return value not the function

Is there a way in pytest to get a list of parameterized test node ids from within a fixture?

I'm trying to write a workaround for the inability of pytest/xdist to run some tests in serial, rather than all tests in parallel.
In order to do what I'm trying to do, I need to get a list of all the collected parameterized tests (so they look something like path/to/test_module_name.py::TestClassName::test_method_name[parameterization info]). I'm attempting to do so in a session scoped fixture, but can't figure out where this info is stored. Is there a way to do this?
I noticed at one point, when calling pytest with --cache-show, that 'cache/nodeids' was being populated with the exact node id information I need, but I can't seem to figure out when that does/doesn't happen, as it isn't consistent.
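For what it's worth, one way to capture the collected node ids where a session-scoped fixture can reach them is to stash them from the collection hook (a sketch, not something taken from this thread):

import pytest

def pytest_collection_modifyitems(config, items):
    # runs after collection; items carry the full parametrized node ids
    config._collected_nodeids = [item.nodeid for item in items]

@pytest.fixture(scope='session')
def collected_nodeids(request):
    return getattr(request.config, '_collected_nodeids', [])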
While I couldn't find exactly what I was looking for, the problem with serializing tests while using the xdist plugin can be resolved with the following two fixtures:
import contextlib
import os
import pathlib

import filelock
import pytest

@pytest.fixture(scope='session')
def lock():
    lock_file = pathlib.Path('serial.lock')
    yield filelock.FileLock(lock_file=str(lock_file))
    with contextlib.suppress(OSError):
        os.remove(path=lock_file)

@pytest.fixture()  # Add this fixture to each test that needs to be serialized
def serial(lock):
    with lock.acquire(poll_intervall=0.1):
        yield
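Any test that requests the serial fixture then takes the file lock before running, so only one xdist worker executes it at a time, for example (hypothetical test name):

def test_something_serial(serial):
    # runs while the session-wide file lock is held
    ...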

Django_db mark for django parametrized tests

I have been learning Django for the past few weeks and tried using parametrized fixtures and test functions; from what I understood, I can simply run multiple tests at once. With the parametrized test I am trying to test functions that are found in all models. I read the documentation, but sadly, as soon as I tried it I got the following error message: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it. I did read about the error and possible fixes, and what I found was to create an autouse fixture and put it in conftest.py:
import pytest

@pytest.fixture(autouse=True)
def enable_db_access_for_all_tests(db):
    pass
Sadly, this change made no difference and I received the exact same error after running the test. I also tried using the django_db mark to grant the test access to the database, but that did not seem to work either.
It took me a while to realize it, but the above WAS "working". If you look closely at the error, it changed. Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it. is still there, incorrectly, but in my case pytest was also running migrations, which I didn't want it to do, and it was crashing on some old data migration. Adding --nomigrations to the command resolved the issue for me.
Add the @pytest.mark.django_db decorator above the test function concerned, or use @pytest.mark.django_db(transaction=True); this lets pytest tell Django that the function in question requires database access.
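A minimal sketch of what that looks like (the model import and test name are hypothetical):

import pytest
from myapp.models import Users  # hypothetical app/model

@pytest.mark.django_db
def test_user_can_be_created():
    # the mark grants this test access to the test database
    Users.objects.create(email='foo@bar.com')
    assert Users.objects.filter(email='foo@bar.com').exists()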
