pytest mark django db: avoid saving fixtures in tests - python

I need to:
Avoid using save() in tests
Use @pytest.mark.django_db on all tests inside this class
Create a number of trx fixtures (10/20) to act as fake data.
import pytest
from ngg.processing import (
    elab_data
)

class TestProcessing:
    @pytest.mark.django_db
    def test_elab_data(self, plan,
                       obp,
                       customer,
                       bac,
                       col,
                       trx_1,
                       trx_2,
                       ...):
        plan.save()
        obp.save()
        customer.save()
        bac.save()
        col.save()
        trx.save()
        elab_data(bac, col)
Where the fixtures are simply models like this:
@pytest.fixture
def plan():
    plan = Plan(
        name='test_plan',
        status='1'
    )
    return plan
I don't find this approach very clean. How would you do it?

TL;DR
test.py
import pytest
from ngg.processing import elab_data
@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, plan, obp, customer, bac, col, trx_1, trx_2):
        elab_data(bac, col)
conftest.py
@pytest.fixture(params=[
    ('test_plan', 1),
    ('test_plan2', 2),
])
def plan(request, db):
    name, status = request.param
    return Plan.objects.create(name=name, status=status)
I'm not quite sure if I understood you correctly.
Avoid using save() in tests
You may create objects using instance = Model.objects.create() or just put instance.save() in fixtures.
As described in the note section here
To access the database in a fixture, it is recommended that the fixture explicitly request one of the db, transactional_db or django_db_reset_sequences fixtures.
and in the fixture section here
This fixture will ensure the Django database is set up. Only required for fixtures that want to use the database themselves. A test function should normally use the pytest.mark.django_db() mark to signal it needs the database.
you may want to use the db fixture in your record fixtures and keep the django_db mark on your test cases.
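For example, a plain (non-parametrized) version of the question's plan fixture could request db and create the record itself. A minimal sketch; the import path for Plan is an assumption:

# conftest.py
import pytest
from ngg.models import Plan  # hypothetical location of the Plan model

@pytest.fixture
def plan(db):
    # Requesting the `db` fixture gives this fixture database access,
    # so the test itself never needs to call save().
    return Plan.objects.create(name='test_plan', status='1')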
Use @pytest.mark.django_db on all tests inside this class
To mark whole classes you may use the decorator on the class or the pytestmark variable, as described here.
You may use pytest.mark decorators with classes to apply markers to all of its test methods
To apply marks at the module level, use the pytestmark global variable
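Either form works; a minimal sketch of both, reusing the test class from the question:

import pytest
from ngg.processing import elab_data

# Module level: pytestmark applies the marker to every test in this file.
pytestmark = pytest.mark.django_db

# Or class level: the decorator applies the marker to all test methods.
@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, bac, col):
        elab_data(bac, col)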
Create a number of trx fixtures (10/20) to act as fake data.
I didn't quite get what you were trying to do, but I assume it is one of the following:
Create multiple objects and pass them as a single fixture
In that case you may want to create a fixture that returns a generator or a list, and use the whole list instead of multiple separate fixtures (see the sketch after these two options).
Run the test with different variants of a fixture, one (or a few) at a time
In that case you may want to parametrize your fixture so that it returns different objects; the test will then run multiple times, once per variant, as in the conftest.py example above.
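For the first option, a single fixture can build the whole batch of transactions. A minimal sketch, where the Trx model, its amount field and the import path are assumptions since the real model isn't shown:

# conftest.py
import pytest
from ngg.models import Trx  # hypothetical model and import path

@pytest.fixture
def trx_list(db):
    # Ten fake transactions from one fixture instead of trx_1 ... trx_10.
    return [Trx.objects.create(amount=i) for i in range(10)]

# test.py
def test_elab_data(bac, col, trx_list):
    elab_data(bac, col)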

Related

Correct use of pytest fixtures of objects with Django

I am relatively new to pytest, so I understand the simple use of fixtures, which looks like this:
@pytest.fixture
def example_data():
    return "abc"
and then using it in a way like this:
def test_data(self, example_data):
    assert example_data == "abc"
I am working on a Django app, and where it gets confusing is when I try to use fixtures to create Django objects that will be used in the tests.
The closest solution that I've found online looks like this:
@pytest.fixture
def test_data(self):
    users = get_user_model()
    client = users.objects.get_or_create(username="test_user", password="password")
and then I am expecting to be able to access this user object in a test function:
@pytest.mark.django_db
@pytest.mark.usefixtures("test_data")
async def test_get_users(self):
    # the user object should be included in this queryset
    all_users = await sync_to_async(User.objects.all)()
    .... (doing assertions) ...
The issue is that when I try to list all the users I can't find the one that was created as part of the test_data fixture and therefore can't use it for testing.
I noticed that if I create the objects inside the test function there is no problem, but this approach won't work for me because I need to parametrize the function and, depending on the input, add different groups to each user.
I also tried some kind of init or setup function for my test class, creating the User test objects from there, but this doesn't seem to be pytest's recommended way of doing things. Either way, that approach didn't work when it came to listing the users later.
Is there any way to create test objects which will be accessible when doing a queryset?
Is the right way to manually create separate functions and objects for each test case or is there a pytest-way of achieving that?

How to explicitly instruct PyTest to drop a database after some tests?

I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires me to keep data saved in the database for a certain time rather than dropping it after every single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while they use the same test database for some time, dropping it afterwards.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that will be used during tests. You can have a look at fixture scope, which you can set so that the fixture is not deleted from the database and recreated for each test that requires it, but is instead created once for a group of tests and deleted afterwards.
You should read the documentation of pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixtures' scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
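If the goal is to share one record across several tests, a broader-scoped fixture is the usual route. Below is a minimal sketch assuming the MyModel from the question; fixtures with a scope wider than function cannot request the function-scoped db fixture, so pytest-django's django_db_setup and django_db_blocker fixtures are used instead:

import pytest
from myapp.models import MyModel  # hypothetical import path

@pytest.fixture(scope="module")
def shared_instance(django_db_setup, django_db_blocker):
    # Created once per module and reused by every test that requests it.
    with django_db_blocker.unblock():
        instance = MyModel.objects.create(description="some description")
    yield instance
    # Cleaned up after all tests in the module have run.
    with django_db_blocker.unblock():
        instance.delete()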

How to organize tests in a class in Pytest?

According to https://docs.pytest.org/en/stable/getting-started.html, in pytest, something to be aware of when grouping tests inside classes is that each test has a unique instance of the class. Having each test share the same class instance would be very detrimental to test isolation and would promote poor test practices. What does that mean? This is outlined below:
Content of test_class_demo.py:
class TestClassDemoInstance:
    def test_one(self):
        assert 0

    def test_two(self):
        assert 0
Imagine you are testing a user account system, where you can create users and change passwords. You need to have a user before you can change its password, and you don't want to repeat yourself, so you could naively structure the test like this:
class TestUserService:
    def test_create_user(self):
        # Store user_id on self to reuse it in subsequent tests.
        self.user_id = UserService.create_user("timmy")
        assert UserService.get_user_id("timmy") == self.user_id

    def test_set_password(self):
        # Set the password of the user we created in the previous test.
        UserService.set_password(self.user_id, "hunter2")
        assert UserService.is_password_valid(self.user_id, "hunter2")
But by using self to pass data from one test case to the next, you have created several problems:
The tests must be run in this order. First test_create_user, then test_set_password.
All tests must be run. You can't re-run just test_set_password independently.
All previous tests must pass. If test_create_user fails, we can no longer meaningfully run test_set_password.
Tests can no longer be run in parallel.
So to prevent this kind of design, pytest wisely decided that each test gets a brand new instance of the test class.
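For contrast, a rough sketch of the same tests written with a fixture against that hypothetical UserService API; each test requests the user it needs, so ordering and shared state stop mattering:

import pytest

@pytest.fixture
def user_id():
    # Every test gets its own freshly created user.
    return UserService.create_user("timmy")

class TestUserService:
    def test_create_user(self, user_id):
        assert UserService.get_user_id("timmy") == user_id

    def test_set_password(self, user_id):
        UserService.set_password(user_id, "hunter2")
        assert UserService.is_password_valid(user_id, "hunter2")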
Just to add to Thomas' answer: please be aware that there is currently an error in the documentation you mentioned. As described in this open issue on the pytest GitHub, in the example shown below the explanation, both tests are demonstrated to share the same instance of the class, at 0xdeadbeef. This seems to demonstrate the opposite of what is stated (that the instances are not the same).

Is there a better way to ensure that a fixture runs after another without listing those fixtures in the arguments?

I have a set of pretest checks that I need to run; however, one of the checks requires a provider to be initialized.
It would appear that the way around this is to add 'foo_provider' to the fixture arguments to make sure it runs after foo_provider is run. However, I don't want to list 20 fixtures in the args of the pretest fixture.
I've tried using pytest.mark.trylast, I've tried order markers. None of these work properly (or at all). I have tried adding various things to pytest_generate_tests and that tends to screw up the number of tests.
I finally managed it by adding kwargs to the fixture definition and then modifying metafunc._arg2fixturedefs through a function called from pytest_generate_tests, which feels really bad.
I tried and failed with this as it ran the check too early as well:
@pytest.fixture(params=[pytest.lazy_fixture('foo_provider')], autouse=True)
def pretest(test_skipper, logger, request):
    logger().info('running pretest checks')
    test_skipper()
Tried and failed at reordering the fixtures like this (called from pytest_generate_tests):
def execute_pretest_last(metafunc):
    fixturedef = metafunc._arg2fixturedefs['pretest']
    fixture = metafunc._arg2fixturedefs['pretest'][0]
    names_order = metafunc.fixturenames
    names_order.insert(len(names_order), names_order.pop(names_order.index(fixture.argname)))
    funcarg_order = metafunc.funcargnames
    funcarg_order.insert(len(funcarg_order),
                         funcarg_order.pop(funcarg_order.index(fixture.argname)))
The below code works as expected but is there a better way?
def pytest_generate_tests(metafunc):
    for fixture in metafunc.fixturenames:
        if fixture in PROVIDER_MAP:
            parametrize_api(metafunc, fixture, indirect=True)
            add_fixture_to_pretest_args(metafunc, fixture)

def add_fixture_to_pretest_args(metafunc, fixture):
    pretest_fixtures = list(metafunc._arg2fixturedefs['pretest'][0].argnames)
    pretest_fixtures.append(fixture)
    metafunc._arg2fixturedefs['pretest'][0].argnames = tuple(pretest_fixtures)

@pytest.fixture(autouse=True)
def pretest(test_skipper, logger, **kwargs):
    logger().info('running pretest checks')
    test_skipper()

Python SQLAlchemy - Mocking a model attribute's "desc" method

In my application, there is a class for each model that holds commonly used queries (I guess it's somewhat of a "Repository" in DDD language). Each of these classes is passed the SQLAlchemy session object to create queries with upon construction. I'm having a little difficulty figuring out the best way to assert that certain queries are being run in my unit tests.
Using the ubiquitous blog example, let's say I have a "Post" model with columns and attributes "date" and "content". I also have a "PostRepository" with the method "find_latest" that is supposed to query for all posts in descending order by "date". It looks something like:
from myapp.models import Post

class PostRepository(object):
    def __init__(self, session):
        self._s = session

    def find_latest(self):
        return self._s.query(Post).order_by(Post.date.desc())
I'm having trouble mocking the Post.date.desc() call. Right now I'm monkey patching a mock in for Post.date.desc in my unit test, but I feel that there is likely a better approach.
Edit: I'm using mox for mock objects, my current unit test looks something like:
import unittest
import mox

class TestPostRepository(unittest.TestCase):
    def setUp(self):
        self._mox = mox.Mox()

    def _create_session_mock(self):
        from sqlalchemy.orm.session import Session
        return self._mox.CreateMock(Session)

    def _create_query_mock(self):
        from sqlalchemy.orm.query import Query
        return self._mox.CreateMock(Query)

    def _create_desc_mock(self):
        from myapp.models import Post
        return self._mox.CreateMock(Post.date.desc)

    def test_find_latest(self):
        from myapp.models.repositories import PostRepository
        from myapp.models import Post
        expected_result = 'test'
        session_mock = self._create_session_mock()
        query_mock = self._create_query_mock()
        desc_mock = self._create_desc_mock()
        # Monkey patch
        tmp = Post.date.desc
        Post.date.desc = desc_mock
        session_mock.query(Post).AndReturn(query_mock)
        query_mock.order_by(Post.date.desc().AndReturn('test')).AndReturn(query_mock)
        query_mock.offset(0).AndReturn(query_mock)
        query_mock.limit(10).AndReturn(expected_result)
        self._mox.ReplayAll()
        r = PostRepository(session_mock)
        result = r.find_latest()
        self._mox.VerifyAll()
        self.assertEquals(expected_result, result)
        Post.date.desc = tmp
This does work, though it feels ugly, and I'm not sure why it fails without the AndReturn('test') piece of Post.date.desc().AndReturn('test').
I don't think you're really gaining much benefit by using mocks for testing your queries. Testing should be testing the logic of the code, not the implementation. A better solution would be to create a fresh database, add some objects to it, run the query on that database, and determine if you're getting the correct results back. For example:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Create the engine. This starts a fresh in-memory database
engine = create_engine('sqlite://')
# Fill the database with the tables needed.
# If you use declarative, then the metadata for your tables can be found using Base.metadata
metadata.create_all(engine)
# Create a session to this database
session = sessionmaker(bind=engine)()
# Create some posts using the session and commit them
...
# Test your repository object...
repo = PostRepository(session)
results = repo.find_latest()
# Run your assertions on results
...
Now, you're actually testing the logic of the code. This means that you can change the implementation of your method, but so long as the query works correctly, the tests should still pass. If you want, you could write this method as a query that gets all the objects, then slices the resulting list. The test would pass, as it should. Later on, you could change the implementation to run the query using the SA expression APIs, and the test would pass.
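For instance, the slice-the-list variant mentioned above might look roughly like this (a sketch only; the limit of 10 simply mirrors the limit(10) call in the question's mock):

def find_latest(self, limit=10):
    # Same observable result, different implementation: fetch all posts,
    # sort them in Python by date descending, and slice off the newest ones.
    posts = self._s.query(Post).all()
    return sorted(posts, key=lambda p: p.date, reverse=True)[:limit]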
One thing to keep in mind is that you might have problems with sqlite behaving differently than another database type. Using sqlite in-memory gives you fast tests, but if you want to be serious about these tests, you'll probably want to run them against the same type of database you'll be using in production as well.
If you still want to create a unit test with mock input, you can create instances of your model with fake data.
In case the result proxy returns data from more than one model (for instance, when you join two tables), you can use the collections data structure called namedtuple.
We are using it to mock the results of join queries.
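A rough sketch of both ideas, using the Post model from the question (the joined field names here are purely illustrative):

from collections import namedtuple
from datetime import date

from myapp.models import Post

# Plain, unsaved model instances acting as fake query results.
fake_posts = [
    Post(content="first post", date=date(2024, 1, 1)),
    Post(content="second post", date=date(2024, 1, 2)),
]

# Rows from a join query can be faked with a namedtuple whose fields mirror
# whatever the join selects - here a Post plus a hypothetical comment count.
JoinedRow = namedtuple("JoinedRow", ["post", "comment_count"])
fake_join_result = [JoinedRow(post=fake_posts[0], comment_count=3)]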
