Tests fail with TransactionTestCase and pytest - python

I have an issue with my unit tests and the way Django manages transactions.
In my code I have a function:
def send():
    autocommit = transaction.set_autocommit(False)
    try:
        pass  # stuff
    finally:
        transaction.rollback()
        transaction.set_autocommit(autocommit)
In my test I have:
class MyTest(TransactionTestCase):
    def test_send(self):
        send()
The issue I am having is that test_send passes successfully, but about 80% of my other tests do not. It seems the transactions of the other tests are failing.
By the way, I am using py.test to run my tests.
EDIT:
To make things clearer: when I run only myapp.test.test_module.py, everything is fine and all 3 tests pass, but when I run my whole suite most of them fail. I will try to produce a test app.
Also, all my tests pass with the default test runner from Django.
EDIT2:
Here is a minimal example that reproduces the issue:
class ManagementTestCase(TransactionTestCase):
    def test_transfer_ubl(self, MockExact):
        pass

class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
Bear in mind there is a data migration that adds an "admin" user (the TestTestCase succeeds alone, but not when the ManagementTestCase is run before it).
It seems autocommit has nothing to do with it.
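For reference, the kind of data migration that creates the "admin" user looks roughly like this (a sketch; the app name and dependency below are hypothetical):

from django.db import migrations

def create_admin(apps, schema_editor):
    # Data migrations should use the historical model via apps.get_model().
    User = apps.get_model("auth", "User")
    User.objects.create(username="admin")

class Migration(migrations.Migration):
    dependencies = [("myapp", "0001_initial")]  # hypothetical dependency
    operations = [migrations.RunPython(create_admin)]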

The TestCase class wraps the tests inside two atomic blocks. Therefore it is not possible to use transaction.set_autocommit() or transaction.rollback() if you are inheriting from TestCase.
As the docs say, you should use TransactionTestCase if you are testing specific database transaction behaviour.
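To illustrate the difference, here is a sketch (not from the question) of the same call under both base classes; inside TestCase the atomic wrapper makes Django refuse the autocommit change:

from django.db import transaction
from django.test import TestCase, TransactionTestCase

class InsideTestCase(TestCase):
    def test_toggle_autocommit(self):
        # Raises TransactionManagementError: TestCase keeps an atomic
        # block open, and autocommit cannot be changed inside atomic.
        transaction.set_autocommit(False)

class InsideTransactionTestCase(TransactionTestCase):
    def test_toggle_autocommit(self):
        transaction.set_autocommit(False)  # fine: no atomic wrapper here
        transaction.set_autocommit(True)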

Having autocommit = transaction.set_autocommit(False) inside the send function feels wrong. Disabling the transaction is presumably done here for testing purposes, but the rule of thumb is to keep your test logic outside your code.
As @Alasdair has pointed out, the Django docs state that "Django's TestCase class also wraps each test in a transaction for performance reasons."
It is not clear from your question whether you're testing specific database transaction logic or not. If that is the case, then @Alasdair's answer of using TransactionTestCase is the way to go.
Otherwise, removing the transaction context switching from around the stuff inside your send function should help.
Since you mentioned pytest as your test runner, I would also recommend making use of it. The pytest-django plugin comes with nice features, such as selectively marking some of your tests as requiring transactions, using markers:
@pytest.mark.django_db(transaction=True)
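A sketch of how that marker is applied to the test from the question (assuming send() is importable; the import path is made up):

import pytest

from myapp.messaging import send  # hypothetical import path

@pytest.mark.django_db(transaction=True)
def test_send():
    # transaction=True gives the test real transaction behaviour, the
    # pytest-django counterpart of TransactionTestCase.
    send()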
If installing a plugin is too much, then you could roll your own transaction-management fixture, like:
import pytest
from django.db import transaction

@pytest.fixture
def no_transaction(request):
    transaction.set_autocommit(False)

    def rollback():
        transaction.rollback()
        transaction.set_autocommit(True)

    request.addfinalizer(rollback)
Your test_send will then require the no_transaction fixture:
def test_send(no_transaction):
    send()

For those who are still looking for a solution, the serialized_rollback option is the way to go:
class ManagementTestCase(TransactionTestCase):
    serialized_rollback = True

    def test_transfer_ubl(self, MockExact):
        pass

class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
From the docs:
Django can reload that data for you on a per-testcase basis by setting the serialized_rollback option to True in the body of the TestCase or TransactionTestCase, but note that this will slow down that test suite by approximately 3x.
Unfortunately, pytest-django is still missing this feature.

Related

How to explicitly instruct PyTest to drop a database after some tests?

I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires me to keep data saved in the database for a certain time rather than dropping it after a single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while having them share the same test data for some time, dropping it thereafter.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that will be used during tests. You can set the fixture's scope so that it is not deleted from the database and recreated for each test that requires it, but is instead created once for a group of tests and deleted afterwards.
You should read the documentation of pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixture scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
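As a concrete sketch of that approach with pytest-django: a module-scoped fixture can create the object once for all tests in a module. The django_db_setup and django_db_blocker fixtures come from the plugin; the import path for MyModel is assumed.

import pytest

from myapp.models import MyModel  # hypothetical import path

@pytest.fixture(scope="module")
def shared_instance(django_db_setup, django_db_blocker):
    # Created once per module; db access outside a test needs the blocker.
    with django_db_blocker.unblock():
        obj = MyModel.objects.create(description="some description")
    yield obj
    with django_db_blocker.unblock():
        obj.delete()  # cleaned up after the last test in the module

Tests then take shared_instance as an argument instead of querying by id.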

How to organize tests in a class in Pytest?

According to "https://docs.pytest.org/en/stable/getting-started.html" in Pytest when grouping tests inside classes is that each test has a unique instance of the class. Having each test share the same class instance would be very detrimental to test isolation and would promote poor test practices.What does that mean ? This is outlined below:
content of test_class_demo.
class TestClassDemoInstance:
    def test_one(self):
        assert 0

    def test_two(self):
        assert 0
Imagine you are testing a user account system, where you can create users and change passwords. You need to have a user before you can change its password, and you don't want to repeat yourself, so you could naively structure the test like this:
class TestUserService:
    def test_create_user(self):
        # Store user_id on self to reuse it in subsequent tests.
        self.user_id = UserService.create_user("timmy")
        assert UserService.get_user_id("timmy") == self.user_id

    def test_set_password(self):
        # Set the password of the user we created in the previous test.
        UserService.set_password(self.user_id, "hunter2")
        assert UserService.is_password_valid(self.user_id, "hunter2")
But by using self to pass data from one test case to the next, you have created several problems:
The tests must be run in this order. First test_create_user, then test_set_password.
All tests must be run. You can't re-run just test_set_password independently.
All previous tests must pass. If test_create_user fails, we can no longer meaningfully run test_set_password.
Tests can no longer be run in parallel.
So to prevent this kind of design, pytest wisely decided that each test gets a brand new instance of the test class.
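For contrast, a sketch of the structure pytest nudges you toward instead, reusing the hypothetical UserService from above: each test requests the user it needs from a fixture, so the tests stay order-independent and can run alone or in parallel.

import pytest

@pytest.fixture
def user_id():
    # Every test gets its own freshly created user.
    return UserService.create_user("timmy")

def test_create_user(user_id):
    assert UserService.get_user_id("timmy") == user_id

def test_set_password(user_id):
    UserService.set_password(user_id, "hunter2")
    assert UserService.is_password_valid(user_id, "hunter2")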
Just to add to Thomas' answer: please be aware that there is currently an error in the documentation you mentioned. As described in this open issue on the pytest GitHub, in the code shown below the explanation, both tests are demonstrated to share the same instance of the class, at 0xdeadbeef. This seems to demonstrate the opposite of what is stated (that the instances are not the same).

Mocking before importing in one test file is affecting another test file

I have a class which has a side effect at import time: it populates a field with a value from a deployed infrastructure. It basically boils down to this:
class MyModel:
    class Meta:
        table_name = get_param("table_name")
I want to use instances of MyModel in offline tests, so I need to make sure that boto3 doesn't actually run the production get_param code. I've done it by mocking out get_param in these tests:
with patch("….get_param"):
    from … import MyModel

def test_…
Clunky, but it sort of works. The problem is that if I run pytest with this test file first and then an acceptance test file (which does need to use the real get_param), the acceptance test ends up with a mocked version of get_param (reversing the file sequence makes both tests pass). The acceptance test runs fine on its own, but I'd rather not pull apart the test suite based on such a weird premise. Is there some way to avoid this namespace pollution?
Looks like I don't understand how Python imports work.
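The short version: Python caches every imported module in sys.modules, so MyModel's class body (and its get_param call) runs only once, on whichever import happens first; every later test file receives that cached, possibly mocked, result. One way to contain the mock is to force the module to (re)load under the patch and evict it afterwards. A sketch, with hypothetical module paths:

import importlib
import sys
from unittest.mock import patch

import pytest

@pytest.fixture
def offline_mymodel():
    with patch("myapp.params.get_param", return_value="test-table"):
        module = importlib.import_module("myapp.models")
        module = importlib.reload(module)  # class body re-runs under the patch
        yield module.MyModel
    # Evict the mocked module so the next import re-runs the real get_param.
    sys.modules.pop("myapp.models", None)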

Refreshing pytest fixtures in first test during custom scenario runner

I implemented scenario support for my pytest-based tests, and it works well.
However, one of my fixtures initialises the database with clean tables, and the second scenario runs the test with a dirty database. How can I get the database fixture re-initialised or refreshed in subsequent scenarios?
To be clear, I want to see this:
scenario 1
test_demo1 gets a fresh DB, and the test writes to the DB
test_demo2 does not re-init the DB, but sees the changes made by test_demo1
scenario 2
test_demo1 gets a fresh DB again, and the test writes to the DB
test_demo2 does not re-init the DB, but sees the changes made by test_demo1 only in scenario 2.
def pytest_runtest_setup(item):
    if hasattr(item.cls, "scenarios") and "first" in item.keywords:
        pass  # what to do here?

@pytest.mark.usefixtures("db")
class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    @pytest.mark.first
    def test_demo1(self, db):
        # db is dirty here in scenario2
        pass

    def test_demo2(self, db):
        pass
I'm currently digging through the pytest sources to find an answer and I will post here once I have something.
I've found a workaround: have a regular test which uses the DB fixture, parametrize it with the scenarios, and call the tests in the class directly:
@pytest.mark.parametrize(["scenario"], scenarios)
def test_sample_with_scenarios(db, scenario):
    TestSampleWithScenarios().test_demo1(db, scenario)
    TestSampleWithScenarios().test_demo2(db, scenario)

Cleaning up after a unit test that asserts an IntegrityError is thrown

I have a Django model with a "title" CharField(unique=True). I have a unit test that asserts that creating a second instance with the same title throws an IntegrityError. (I'm using pytest and pytest-django.)
I have something like:
class Foo(models.Model):
    title = models.CharField(unique=True)

def test_title_is_unique(db):
    Foo.objects.create(title='foo')
    with pytest.raises(IntegrityError):
        Foo.objects.create(title='foo')
This works fine, except the above code doesn't include cleanup code. pytest-django doesn't clean up the database for you, so you need to register cleanup handlers when you create or save a model instance. Something like this:
def test_title_is_unique(request, db):
    foo = Foo.objects.create(title='foo')
    request.addfinalizer(foo.delete)
    with pytest.raises(IntegrityError):
        Foo.objects.create(title='foo')
Okay, that's fine. But what if the second .create() call erroneously succeeds? I still want to clean up that instance, but only if it (erroneously) gets created.
Here is what I settled on:
def test_title_is_unique(request, db):
    foo = Foo.objects.create(title='foo')
    request.addfinalizer(foo.delete)
    try:
        with pytest.raises(IntegrityError):
            new_foo = Foo.objects.create(title='foo')
    finally:
        if 'new_foo' in locals():
            request.addfinalizer(new_foo.delete)
This doesn't feel particularly elegant or Pythonic, not to mention there are a bunch of lines of code that really shouldn't be running.
How do I guarantee that the second model instance is cleaned up if created, but with fewer hoops to jump through, and/or using fewer lines of code?
You should not need to worry about the cleanup. pytest-django's db fixture wraps the entire test in a transaction and rolls that transaction back at the end, which ensures the database remains clean.
If the test requires transactions, there's the transactional_db fixture, which will enable transactions (slower) and flush the entire contents of the db after the test (very slow), again cleaning up for you.
So if the cleanup does not happen, you should probably file a bug against pytest-django. But I would be surprised if that is the case, unless I missed something important.
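In other words, the test from the question can, as a sketch, drop all the finalizers and lean on the db fixture's rollback:

import pytest
from django.db import IntegrityError

def test_title_is_unique(db):
    Foo.objects.create(title='foo')
    with pytest.raises(IntegrityError):
        Foo.objects.create(title='foo')  # rolled back with everything else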
