Refreshing pytest fixtures in first test during custom scenario runner - python

I implemented scenario support for my pytest-based tests, and it works well.
However, one of my fixtures initialises the database with clean tables, and the second scenario runs the test with a dirty database. How can I get the database fixture re-initialised or refreshed in subsequent scenarios?
To be clear, I want to see this:
scenario 1
test_demo1 gets a fresh DB, and the test writes to the DB
test_demo2 does not re-init the DB, but sees the changes made by test_demo1
scenario 2
test_demo1 gets a fresh DB again, and the test writes to the DB
test_demo2 does not re-init the DB, but sees the changes made by test_demo1 only in scenario 2.
def pytest_runtest_setup(item):
    if hasattr(item.cls, "scenarios") and "first" in item.keywords:
        # what to do here?
        pass
@pytest.mark.usefixtures("db")
class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    @pytest.mark.first
    def test_demo1(self, db):
        # db is dirty here in scenario2
        pass

    def test_demo2(self, db):
        pass
I'm currently digging through the pytest sources to find an answer and I will post here once I have something.
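One direction worth exploring in the meantime (an untested sketch, not something I found in the pytest sources): make the db fixture depend on the parametrized scenario, so pytest's fixture cache keys on the scenario value and a fresh database is built once per scenario. Here create_clean_db is a hypothetical stand-in for the real initialisation.

```python
import pytest

# Hypothetical stand-in for the real database initialisation.
def create_clean_db():
    return {"rows": []}

@pytest.fixture(scope="class", params=["scenario1", "scenario2"])
def scenario(request):
    return request.param

@pytest.fixture(scope="class")
def db(scenario):
    # Because db depends on the parametrized scenario fixture, pytest
    # builds a fresh db per scenario, while tests within one scenario
    # share the same instance.
    return create_clean_db()
```

With this, test_demo1 and test_demo2 would share one db within a scenario, and the next scenario would start clean.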

I've found a workaround. Have a regular test which uses the DB fixture, parametrize with the scenarios, and call the tests in the class directly:
@pytest.mark.parametrize(["scenario"], scenarios)
def test_sample_with_scenarios(db, scenario):
    TestSampleWithScenarios().test_demo1(db, scenario)
    TestSampleWithScenarios().test_demo2(db, scenario)

Related

How to explicitly instruct PyTest to drop a database after some tests?

I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires me to keep data saved in the database for a certain time rather than dropping it after every single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while they share the same test database for a while, dropping it afterwards.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that are used during tests. Have a look at fixture scope: with a broader scope, the fixture is not deleted from the database and recreated for each test that requires it; instead it is created once for a group of tests and deleted afterwards.
You should read the documentation of pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixture scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
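As a sketch of the scope mechanism (make_record is a stand-in for MyModel.objects.create(...)), a module-scoped fixture is built once and shared by every test in the module:

```python
import pytest

def make_record():
    # Stand-in for MyModel.objects.create(description="some description")
    return {"description": "some description"}

@pytest.fixture(scope="module")
def shared_record():
    # Created once per test module and reused by every test in it;
    # the code after yield runs once, after the module's last test.
    record = make_record()
    yield record
    record.clear()  # teardown, e.g. delete the row from the database
```

Every test in the module that requests shared_record sees the same object, which is what lets one test observe data written by another.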

pytest mark django db: avoid to save fixture in tests

I need to:
Avoid using save() in tests
Use @pytest.mark.django_db on all tests inside this class
Create a number of trx fixtures (10/20) to act as fake data.
import pytest
from ngg.processing import (
    elab_data
)

class TestProcessing:
    @pytest.mark.django_db
    def test_elab_data(self, plan,
                       obp,
                       customer,
                       bac,
                       col,
                       trx_1,
                       trx_2,
                       ...):
        plan.save()
        obp.save()
        customer.save()
        bac.save()
        col.save()
        trx.save()
        elab_data(bac, col)
Where the fixtures are simply models like that:
@pytest.fixture
def plan():
    plan = Plan(
        name='test_plan',
        status='1'
    )
    return plan
I don't find this way really clean. How would you do that?
TL;DR
test.py
import pytest
from ngg.processing import elab_data

@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, plan, obp, customer, bac, col, trx_1, trx_2):
        elab_data(bac, col)
conftest.py
@pytest.fixture(params=[
    ('test_plan', 1),
    ('test_plan2', 2),
])
def plan(request, db):
    name, status = request.param
    return Plan.objects.create(name=name, status=status)
I'm not quite sure if I got it correctly
Avoid using save() in tests
You may create objects using instance = Model.objects.create() or just put instance.save() in fixtures.
As described at note section here
To access the database in a fixture, it is recommended that the fixture explicitly request one of the db, transactional_db or django_db_reset_sequences fixtures.
and at fixture section here
This fixture will ensure the Django database is set up. Only required for fixtures that want to use the database themselves. A test function should normally use the pytest.mark.django_db() mark to signal it needs the database.
you may want to use the db fixture in your record fixtures and keep the django_db mark on your test cases.
Use @pytest.mark.django_db on all tests inside this class
To mark whole classes you may use decorator on classes or pytestmark variable as described here.
You may use pytest.mark decorators with classes to apply markers to all of its test methods
To apply marks at the module level, use the pytestmark global variable
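A sketch of both options (assuming pytest-django is installed; the class body is abbreviated):

```python
import pytest

# Option 1: mark the class, so every test method gets django_db.
@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self):
        pass

# Option 2: mark the whole module instead via the pytestmark global.
pytestmark = pytest.mark.django_db
```

Either way you no longer need to repeat the decorator on each test method.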
Create a number of trx fixtures (10/20) to act as fake data.
I didn't quite get what you were trying to do, but I assume it is one of the following:
Create multiple objects and pass them as fixtures
In that case you may want to create a single fixture that returns a generator or list, and use the whole list instead of multiple fixtures.
Test cases using different variants of a fixture, one or a few at a time
In that case you may want to parametrize your fixture so it returns different objects; the test case will then run multiple times - once per variant.
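For the first option, a sketch where one fixture returns the whole list of fake transactions (make_trx is a stand-in for creating a real Trx model instance):

```python
import pytest

def make_trx(i):
    # Stand-in for Trx.objects.create(...) on the question's models.
    return {"id": i, "amount": i * 10}

@pytest.fixture
def trxs():
    # One fixture replacing trx_1 ... trx_20: the test receives a list.
    return [make_trx(i) for i in range(10)]

def test_elab_data(trxs):
    assert len(trxs) == 10
```

This keeps the test signature short no matter how many fake records you need.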

Pony ORM - tear down in testing

I am using Pony ORM for managing an sqlite database in a python package I am developing.
I would like to use pytest for testing.
My package provides an "agent" object, which is used to connect to a server API and retrieve "events". On initialisation of the agent, the Pony ORM is set up and bound to an SQLite db, either in memory (for testing) or as a file.
def setup_db(filepath=None):
    if filepath:
        db.bind(provider="sqlite", filename=filepath, create_db=True)
    else:
        db.bind(provider="sqlite", filename=":memory:", create_db=True)
    db.provider.converter_classes.append((Enum, EnumConverter))
    db.generate_mapping(create_tables=True)
The state of the events are stored in an sqlite db using pony orm.
I wish to create a new agent object, with a clean database for each test, so I am using a pytest fixture in the conftest.py file.
@pytest.fixture
def agent():
    agent = Agent(parm1="param1", ...)
    return agent
I am unable to correctly "unbind" from the database and get this error on my second test:
pony.orm.core.BindingError: Database object was already bound to SQLite provider
I would like some advice on the best way to proceed.
Thanks.
I think in your case you should make a factory for entities and create a new db object for each setup.
def define_entities(db):
    class Student(db.Entity):
        ...

    class Group(db.Entity):
        ...
So then you can do something like
def setup_db(filepath=None):
    db = Database()
    if filepath:
        db.bind(provider="sqlite", filename=filepath, create_db=True)
    else:
        db.bind(provider="sqlite", filename=":memory:", create_db=True)
    define_entities(db)
    db.provider.converter_classes.append((Enum, EnumConverter))
    db.generate_mapping(create_tables=True)
Looking at the Pony code, it seems it should be enough to clear the provider attribute of the Database instance to make it fresh so it can be bound again.
If you yield the Agent instead of returning it from your fixture, everything you put after the yield statement will run as fixture teardown code.
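A minimal sketch of that yield mechanics (the log list stands in for the real side effects, such as Agent(...) binding the db on setup and clearing db.provider on teardown):

```python
import pytest

log = []

def agent_lifecycle():
    log.append("setup")      # e.g. Agent(...) binding the in-memory db
    yield "agent-instance"   # handed to the test; the test body runs here
    log.append("teardown")   # e.g. db.provider = None, after the test ends

# Registered as a fixture, pytest resumes the generator after the test,
# running everything past `yield` as teardown.
agent = pytest.fixture(agent_lifecycle)
```

Each test requesting agent therefore gets a fresh setup and a guaranteed teardown.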
The previous answer from zgoda almost works. Besides db.provider, it is also necessary to clear db.schema. My suggestion is that you create another function:
def unbind_db():
    db.provider = db.schema = None
And your fixture should be something like:
@pytest.fixture
def database() -> None:
    setup_db()
    ...
    try:
        yield
    finally:
        unbind_db()

Tests fail with TransactionTestCase and pytest

I have an issue with my unit tests and the way Django manages transactions.
In my code I have a function:
def send():
    autocommit = transaction.set_autocommit(False)
    try:
        # stuff
        ...
    finally:
        transaction.rollback()
        transaction.set_autocommit(autocommit)
In my test I have:
class MyTest(TransactionTestCase):
    def test_send(self):
        send()
The issue I am having is that my test_send passes successfully but not 80% of my other tests.
It seems the transactions of the other tests are failing.
btw I am using py.test to run my tests
EDIT:
To make things clearer: when I run only myapp.test.test_module.py, all 3 tests pass, but when I run my whole test suite most of them fail. I will try to produce a minimal test app.
Also all my tests passes with the default test runner from django
EDIT2:
Here is A minimal example to test this issue:
class ManagementTestCase(TransactionTestCase):
    def test_transfer_ubl(self, MockExact):
        pass

class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
Bear in mind there is a data migration that adds an "admin" user (the TestTestCase succeeds alone, but not when the ManagementTestCase is run before it).
It seems autocommit has nothing to do with it.
The TestCase class wraps the tests inside two atomic blocks. Therefore it is not possible to use transaction.set_autocommit() or transaction.rollback() if you are inheriting from TestCase.
As the docs say, you should use TransactionTestCase if you are testing specific database transaction behaviour.
Having autocommit = transaction.set_autocommit(False) inside the send function feels wrong. Disabling autocommit there is presumably done for testing purposes, but the rule of thumb is to keep your test logic out of your production code.
As @Alasdair has pointed out, the Django docs state "Django's TestCase class also wraps each test in a transaction for performance reasons."
It is not clear from your question whether you're testing specific database transaction logic or not; if you are, then @Alasdair's answer of using TransactionTestCase is the way to go.
Otherwise, removing the transaction context switch from around the stuff inside your send function should help.
Since you mentioned pytest as your test runner, I would also recommend making use of the pytest-django plugin. It comes with nice features such as selectively setting some of your tests to require transactions, using markers:
@pytest.mark.django_db(transaction=True)
If installing a plugin is too much, you could roll your own transaction-managing fixture, like:
@pytest.fixture
def no_transaction(request):
    autocommit = transaction.set_autocommit(False)

    def rollback():
        transaction.rollback()
        transaction.set_autocommit(True)

    request.addfinalizer(rollback)
Your test_send will then require the no_transaction fixture.
def test_send(no_transaction):
    send()
For those who still looking for a solution, serialized_rollback option is a way to go:
class ManagementTestCase(TransactionTestCase):
    serialized_rollback = True

    def test_transfer_ubl(self, MockExact):
        pass

class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
from the docs
Django can reload that data for you on a per-testcase basis by setting the serialized_rollback option to True in the body of the TestCase or TransactionTestCase, but note that this will slow down that test suite by approximately 3x.
Unfortunately, pytest-django is still missing this feature.

Cleaning up after a unit test that asserts an IntegrityError is thrown

I have a Django model with a "title" CharField(unique=True). I have a unit test that asserts that creating a second instance with the same title throws an IntegrityError. (I'm using pytest and pytest-django.)
I have something like:
class Foo(models.Model):
    title = models.CharField(unique=True)

def test_title_is_unique(db):
    Foo.objects.create(title='foo')
    with pytest.raises(IntegrityError):
        Foo.objects.create(title='foo')
This works fine, except the above code doesn't include cleanup code. pytest-django doesn't clean up the database for you, so you need to register cleanup handlers when you create or save a model instance. Something like this:
def test_title_is_unique(request, db):
    foo = Foo.objects.create(title='foo')
    request.addfinalizer(foo.delete)
    with pytest.raises(IntegrityError):
        Foo.objects.create(title='foo')
Okay, that's fine. But what if the second .create() call erroneously succeeds? I still want to clean up that instance, but only if it (erroneously) gets created.
Here is what I settled on:
def test_title_is_unique(request, db):
    foo = Foo.objects.create(title='foo')
    request.addfinalizer(foo.delete)
    try:
        with pytest.raises(IntegrityError):
            new_foo = Foo.objects.create(title='foo')
    finally:
        if 'new_foo' in locals():
            request.addfinalizer(new_foo.delete)
This doesn't feel particularly elegant or Pythonic, not to mention there are a bunch of lines of code that really shouldn't be running.
How do I guarantee that the second model instance is cleaned up if created, but with fewer hoops to jump through, and/or using fewer lines of code?
You should not need to worry about the cleanup. pytest-django's db fixture runs the entire test inside a single transaction and rolls it back at the end, which ensures the database remains clean.
If the test requires real transactions, there is the transactional_db fixture, which enables them (slower) and flushes the entire contents of the db after the test (very slow), again cleaning up for you.
So if the cleanup does not happen then you should probably file a bug at pytest-django. But I would be surprised if this is the case unless I missed something important.
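The principle can be illustrated without Django at all; here is a self-contained sqlite3 stand-in for what the db fixture does (everything in one transaction, rolled back at the end):

```python
import sqlite3
import pytest

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE foo (title TEXT UNIQUE)")
    conn.commit()  # schema committed; the "test transaction" starts next
    return conn

def test_title_is_unique():
    conn = make_db()
    conn.execute("INSERT INTO foo VALUES ('foo')")
    with pytest.raises(sqlite3.IntegrityError):
        conn.execute("INSERT INTO foo VALUES ('foo')")
    conn.rollback()  # what the db fixture does for you after each test
    # both the successful and the failed insert are gone
    assert conn.execute("SELECT COUNT(*) FROM foo").fetchone()[0] == 0
```

No finalizers are needed: the rollback discards the first row and anything the erroneous second create might have written.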
