Django_db mark for django parametrized tests - python

I have been learning Django for the past few weeks and tried parametrizing fixtures and test functions, which, as I understand it, lets me run multiple tests at once. With the parametrized test I am trying to test functions that exist on all of my models. I read the documentation, but as soon as I tried it I got the following error: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it. I read about the error and possible fixes, and what I found was to create an autouse fixture and put it in conftest.py:
import pytest

@pytest.fixture(autouse=True)
def enable_db_access_for_all_tests(db):
    pass
Sadly, this change made no difference, and I received the exact same error after running the test. I also tried using the django_db mark to grant the test access to the database, but that did not seem to work either.

It took me a while to realize it, but the above WAS "working". If you look closely at the error, it changed: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it. is still there, misleadingly, but in my case pytest was also running migrations, which I didn't want, and it was crashing on an old data migration. Adding --nomigrations to the command resolved the issue for me.
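If you want the flag applied to every run instead of remembering it on the command line, it can also go into the pytest config file; a minimal sketch, assuming pytest.ini (setup.cfg with [tool:pytest] works the same way):
[pytest]
addopts = --nomigrations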

Add the @pytest.mark.django_db decorator above the test function concerned,
or use @pytest.mark.django_db(transaction=True); this tells pytest-django that the function requires database access.
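A minimal sketch of what that looks like (the model and test names here are placeholders, not from the question):
import pytest
from myapp.models import MyModel  # placeholder import

@pytest.mark.django_db
def test_can_create_model():
    # the mark gives this test access to the test database
    MyModel.objects.create(name="example")
    assert MyModel.objects.count() == 1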


How do you run repeated routines/functions inside a test in pytest?

I know what pytest fixtures do and how they work, but they do things before and after the test. How do you do things inside the test in a composable fashion? I need to run tests composed of various smaller functions:
@pytest.mark.django_db
def test_my_example():
    # Check that the user does not exist yet
    assert not Users.objects.filter(email='foo@bar.com').exists()
    # Do a bunch of things to sign up and commit a user to the db
    sign_up_routine()
    # Check that the user now exists
    assert Users.objects.filter(email='foo@bar.com').exists()
    # Checkout a shopping cart
    checkout_shopping_cart(item="toothpaste", qty=10)
    # do some more checks
    ...
Now, in my case, a fixture doesn't work because it runs before the test case even starts. In general, I want to compose hundreds of tests like this:
Run a bunch of assert statements
Run a composable routine <--- how? function? fixture?
Assert more conditions
Run a different routine
What is a good way to structure composable tests like this in pytest? I am thinking of just writing a bunch of functions and giving them database access.
I'm sorry if plain functions are the obvious solution, but I thought there might be a pytest way to do this.
I think the short answer you're looking for is: just use plain old functions! There's nothing wrong with that. If you want these reusable chunks of code to have access to other fixtures, just pass the fixtures through when you call them.
@pytest.fixture
def db_session():
    ...

def create_some_users(session):
    # plain helper function; it receives the fixture value as an argument
    ...

def test_my_thing(db_session):
    expected = ...
    create_some_users(db_session)
    actual = do_thing()
    assert actual == expected
I like to think of tests with the AAA pattern: Arrange, Act, and Assert. First we get the universe in order, then we fire our cannon off at it, and finally we check that everything is how we'd expect it to be at the end. It's ideal if all tests can be kept this simple. Fixtures are pytest's way of managing and sharing sets of resources, along with the instructions to arrange them in some way. This is why they always run at the start and, because we often want to do some related disposal afterwards, at the end. A nice side effect is that you can state the dependencies of a given test explicitly in its declaration and move the common surrounding "Arrange" code out, so the test reads more easily as "do X and expect Y".
For what you're looking for, you'd need some way to tell pytest when to run your reusable thing, since it could be at any midpoint within the test function, and at that point you might as well just use a normal function. For example, you could write a fixture that returns a callable and then invoke that callable inside the test (see the sketch below), but there's not a ton of difference. You could also have fixtures return classes and encapsulate reusable logic that way. Any of these would work fine.
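A minimal sketch of the fixture-returns-a-callable idea, reusing the Users model and the pytest-django db fixture from the question (the factory function itself is made up for illustration):
import pytest

@pytest.fixture
def make_user(db):
    # the fixture returns a function, so the test decides *when* to call it
    def _make_user(email):
        return Users.objects.create(email=email)
    return _make_user

def test_signup(make_user):
    assert not Users.objects.filter(email='foo@bar.com').exists()
    make_user('foo@bar.com')  # runs mid-test, not before it
    assert Users.objects.filter(email='foo@bar.com').exists()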
I restructured the test with fixtures as follows. Instead of running one test with steps in a linear fashion, I read through the fixtures documentation thoroughly and ended up with this:
@pytest.fixture(scope="function")
def sign_up_user(db):
    # Check that the user does not exist
    assert not Users.objects.filter(email='foo@bar.com').exists()
    # Do a bunch of things to sign up and commit a user to the db
    # that were part of the sign_up_routine() here. Just showing an example below:
    client = Client()
    resp = client.post('url', data={'form': form})

@pytest.fixture(scope="function")
def assert_user_exists(db, sign_up_user):
    # Check that the user exists. You can imagine a lot more things to assert here, just an example from my original post.
    assert Users.objects.filter(email='foo@bar.com').exists()

@pytest.fixture(scope="function")
def checkout_shopping_cart(db, assert_user_exists):
    # Checkout shopping cart with 10 quantity of toothpaste
    ...

def test_order_in_database(db, checkout_shopping_cart):
    # Check that the order exists in the database
    assert Orders.objects.filter(order__user__email='foo@bar.com').exists()
    # This is the final test that calls all previous fixtures.
    # Now, fixtures can be used to compose tests in various ways. For example, repeat 'sign_up_user' but checkout toothbrush instead of toothpaste.
I think this is pretty clean, though I'm not sure whether it is the intended way to use pytest; I welcome feedback. I can now compose smaller bits of tests that run as fixtures calling other fixtures in a long chain.
This is a toy example, but you can imagine testing a lot of database conditions in each of these fixtures. Please note that the db fixture is needed by the pytest-django package for the database to work properly inside fixtures. Otherwise you'll get errors that aren't obvious ("use the django_db mark", which doesn't fix the problem); see here: pytest django: unable to access db in fixture teardown
Also, the fixture scope must be "function" so each fixture runs again for every test instead of being cached.
More reading here: https://docs.pytest.org/en/6.2.x/fixture.html#running-multiple-assert-statements-safely

How to run a scoped function before all pytest fixtures of that scope?

I saw this question, asking the same about doing things before tests.
I need to do things before fixtures.
For example, I am setting up a dockerized environment, which I have to clean before building. To make things more complicated, I am using this plugin which defines fixtures I can't control or change, and I need something that comes before all fixtures (and does docker-compose down and other cleanup).
For example, when pytest starts, run the common per-session pre-step, then the fixtures, then the per-module pre-step, then the fixtures, and so on.
Is this a supported hook in pytest?
I couldn't find a relevant doc.
As @MrBean Bremen stated, pytest_sessionstart does the trick.
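For context, a minimal sketch of what that looks like in conftest.py (the docker-compose cleanup command is my assumption about the kind of pre-step meant here):
# conftest.py
import subprocess

def pytest_sessionstart(session):
    # runs once when the session starts, before any fixtures or tests
    subprocess.run(["docker-compose", "down", "--remove-orphans"], check=False)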
An annoying problem with that is that I can't use fixtures inside it (naturally); trying gives Argument(s) {'fixture_name'} are declared in the hookimpl but can not be found in the hookspec.
I tried to fix that with pytest-dependency, but it doesn't seem to work on fixtures, only on tests.
I am left with the hacky workaround of extracting the data sessionstart needs into plain functions, and calling them both in sessionstart and in similarly named fixtures.
It works, but it is ugly.
I would accept a cleaner solution over this one.

Dynamically parametrizing test fixtures with addoption

I've got a test suite that is working well to test code across two separate databases (SQLite and Postgres). I want to extend this to run the exact same test suite across upgraded databases (to test that database schema upgrades are working as expected).
The upgrades to run are determined outside of pytest, from a shell script, based on information from Git, which determines what schema versions there are, compares against available upgrade scripts, and then should invoke pytest. I'd like to use something like:
pytest --dbupgrade=v1 --dbupgrade=v2 tests/test-upgrades.py
I have the following in conftest.py:
def pytest_addoption(parser):
parser.addoption(
"--dbupgrade",
action="append",
default=[],
help="list of base schema versions to upgrade"
)
I've been using parametrized fixtures for the other tests. I already have all the test cases written and working, and I'd like to avoid rewriting them to be parametrized themselves, which is what I've mostly seen when searching for solutions using pytest_generate_tests. So where I could easily hardcode:
@pytest.fixture(params=['v1', 'v2'])
def myfixture(request):
    ...
I would like to do:
@pytest.fixture(params=pytest.config.option.get('dbupgrade'))
def myfixture(request):
    ...
However, the results from pytest_addoption are only available in the pytestconfig fixture or in the config attribute attached to various objects, and I can't find a way to get at them in the declaration of the fixture, even though I believe they are available by that point.
Update (workaround)
I don't love it, but I'm pulling the necessary information from environment variables and that's working fine. Something like:
import os

# for this case I prefer this to fail noisily if it fails
schema_versions = os.environ['SCHEMA_VERSIONS'].split(',')
...

@pytest.fixture(params=schema_versions)
def myfixture(request):
    ...
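For completeness, the usual way to feed an addoption value into a fixture without hard-coding the params list is the pytest_generate_tests hook with indirect parametrization. A minimal sketch using the names from the question (this is a suggestion, not part of the original workaround):
# conftest.py
import pytest

def pytest_generate_tests(metafunc):
    if "myfixture" in metafunc.fixturenames:
        versions = metafunc.config.getoption("dbupgrade")  # list built by action="append"
        if versions:
            # indirect=True hands each value to the fixture via request.param
            metafunc.parametrize("myfixture", versions, indirect=True)

@pytest.fixture
def myfixture(request):
    schema_version = getattr(request, "param", None)
    ...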

pytest run new tests (nearly) first

I am using pytest. I like the way I call pytest (re-try the failed tests first, verbose, grab and show serial output, stop at first failure):
pytest --failed-first -v -s -x
However, there is one more thing I want:
I want pytest to run the new tests (i.e. tests that have never run before) immediately after the --failed-first ones. This way, when working with long-running tests, I would get the most relevant information as soon as possible.
Any way to do that?
This may not be directly what you are asking about, but my understanding is that the test execution order matters to you when you create new tests during development.
Since you are already working with these "new" tests, the pytest-ordering plugin might be a good option to consider. It allows you to influence the execution order by decorating your tests with @pytest.mark.first, @pytest.mark.second etc.
pytest-ordering changes the execution order by using a pytest_collection_modifyitems hook. There is also the pytest-random-order plugin, which uses the same hook to control the order.
You can also define your own hook and adjust it to your specific needs. For example, here the same hook is used to shuffle the tests:
Dynamically control order of tests with pytest
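A minimal sketch of such a conftest.py hook (the shuffle is just to illustrate that you can reorder the collected items however you like):
# conftest.py
import random

def pytest_collection_modifyitems(session, config, items):
    # pytest runs the tests in the order of this list, so reordering it
    # in place changes the execution order
    random.shuffle(items)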
For anyone coming to this now, pytest added a --new-first option to run new tests before all other tests. It can be combined with --failed-first to run new and failed tests. For test-driven development, I've found it helpful to use these options with pytest-watch, which I described in my blog.
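Combined with the flags from the original invocation, that would look something like:
pytest --failed-first --new-first -v -s -x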

Incorporating Django's system checks into unit test suite?

I recently deployed some broken code to our staging environment. The new code failed Django's system checks (error messages reproduced below, though this question is more general). Our unit test suite ran cleanly. My question is this: what is the right way to ensure that system checks get run before code can be deployed?
Initially I guessed that the tests were able to run without performing system checks because we use pytest instead of Django's test runner. However, adding a simple test and invoking manage.py test showed that the Django test runner also runs without performing system checks.
One idea I had is to run the manage.py check command in our build pipeline, and fail the build on a nonzero return value. A downside of this approach is that it'd introduce another developer step before code could be committed (e.g. remember to run manage.py check in addition to running the unit test suite).
Another idea is to add a unit test that runs the system checks. This seems technically feasible, but is it consistent with the purpose and design of Django's system check framework?
I note that the documentation has a section on writing tests for custom checks, which doesn't quite get at what I'm asking. I don't see other documentation on incorporating system checks into tests in the Django docs.
Error messages:
SystemCheckError: System check identified some issues:
ERRORS:
myapp.MyCoolModel.linked_groups: (fields.E304) Reverse accessor for 'MyCoolModel.linked_groups' clashes with reverse accessor for 'MyCoolModel.primary_group'.
HINT: Add or change a related_name argument to the definition for 'MyCoolModel.linked_groups' or 'MyCoolModel.primary_group'.
myapp.MyCoolModel.primary_group: (fields.E304) Reverse accessor for 'MyCoolModel.primary_group' clashes with reverse accessor for 'MyCoolModel.linked_groups'.
HINT: Add or change a related_name argument to the definition for 'MyCoolModel.primary_group' or 'MyCoolModel.linked_groups'.
According to this ticket, not running checks along with the tests was a regression that was introduced in version 1.8, and has recently been fixed.
As described there, an easy solution appears to be to create your own test runner that inserts a call_command('check') at the start of run_suite(). See the actual fix for an example.
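A minimal sketch of that kind of runner (the module path and class name are placeholders, and this mirrors the approach described above rather than the exact upstream fix):
# myproject/test_runner.py
from django.core.management import call_command
from django.test.runner import DiscoverRunner

class SystemCheckTestRunner(DiscoverRunner):
    def run_suite(self, suite, **kwargs):
        # raises SystemCheckError and aborts the run if any check fails
        call_command("check")
        return super().run_suite(suite, **kwargs)
Then point TEST_RUNNER at it in settings, e.g. TEST_RUNNER = "myproject.test_runner.SystemCheckTestRunner".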
