Django DB records disappear after first test case run - python

I'm working on a project based on Django REST Framework, so I need to write some test cases for my REST API.
I've written a base class (let's call it BaseAPITestCase) that inherits from DRF's standard APITransactionTestCase.
In this class I've defined a setUp method that creates a test user belonging to some groups (using a UserFactory written with FactoryBoy).
When I run my tests, the first one (the first test method of the first child class) successfully creates a user with the specified groups, but the others (the other test methods in the same class) don't.
The user groups just don't exist in the DB at that point. It seems like existing records are deleted from the DB before each new test runs. But then how does it work the first time?
I've read the Django test documentation but can't figure out why this happens... Can anyone explain it?
The main question is: what should I do to make these tests work?
Should I create the user once and store it in an instance variable?
Should I add some parameter to preserve the user group data?
Or should I add the user groups to fixtures? In that case, how can I create such a fixture properly (with all related models, such as permissions and content types)?
Simplified source code for illustration:
from rest_framework.test import APIClient, APITransactionTestCase

class BaseAPITestCase(APITransactionTestCase):
    def setUp(self):
        # UserFactory is the FactoryBoy factory mentioned above
        self.user = UserFactory(
            username='login',
            password='pass',
            group_names=('admin',)
        )
        self.client = APIClient()
        self.client.force_login(self.user)

    def tearDown(self):
        self.client.logout()
class CampaignListTest(BaseAPITestCase):
    def test_authorized_get(self):
        # successfully gets user groups from DB
        ...

    def test_authorized_post(self):
        # couldn't find any groups
        ...

TransactionTestCase is a test case to test transactions. As such, it explicitly does not use transactions to isolate tests, as that would interfere with the behaviour of the transactions that are being tested.
To isolate tests, TransactionTestCase rolls back the database by truncating all tables. This is the easiest and fastest solution that doesn't use transactions, but as you noticed it deletes all data, including the groups that were generated in a post_migrate signal receiver. You can set serialized_rollback = True on the class, in which case it will serialize all changes to the database and reverse them after each test. However, this is significantly slower and often greatly increases the time it takes to run the test suite, so it is not the default.
TestCase does not have this restriction, so it wraps each test class in a transaction and each individual test in a savepoint. Rolling back using transactions and savepoints is fast and allows you to keep the data that was there at the start of the transaction or savepoint. For this reason, it is preferable to use TestCase whenever possible.
This extends to DRF's APITransactionTestCase and APITestCase, which simply inherit from Django's test cases.
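If your tests don't need to exercise transaction behaviour themselves, the simplest fix for the question above is to switch the base class to APITestCase; otherwise serialized_rollback is the fallback. A minimal sketch, assuming the question's UserFactory is importable:

from rest_framework.test import APIClient, APITestCase, APITransactionTestCase

# Option 1: APITestCase wraps each test in a transaction, so data created by
# post_migrate receivers (groups, permissions) is still present in every test.
class BaseAPITestCase(APITestCase):
    def setUp(self):
        self.user = UserFactory(        # the FactoryBoy factory from the question
            username='login',
            password='pass',
            group_names=('admin',),
        )
        self.client = APIClient()
        self.client.force_login(self.user)

# Option 2: keep truncation-based isolation, but restore the serialized
# database state (including groups) after every test. Noticeably slower.
class BaseTransactionalAPITestCase(APITransactionTestCase):
    serialized_rollback = True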

Related

What are the different use cases for APITestCase, APISimpleTestCase, and APITransactionTestCase in Django Rest Framework?

The docs for the different test case classes are here.
I am unsure in what situations I would use each of the test case classes:
APITestCase
APISimpleTestCase
APITransactionTestCase
As explained in the Django Rest Framework Docs, the 3 available test classes simply extend the regular Django test classes but switch the client to use APIClient.
This can also be seen in the Django Rest Framework source code:
# from rest_framework/test.py (simplified)
from django.test import testcases

class APITransactionTestCase(testcases.TransactionTestCase):
    client_class = APIClient

class APITestCase(testcases.TestCase):
    client_class = APIClient

class APISimpleTestCase(testcases.SimpleTestCase):
    client_class = APIClient
The first test case you should know about is APISimpleTestCase, which allows us to test general DRF/Django things such as HTTP redirects and checking that some callable raises an exception. The docs note that we shouldn't use APISimpleTestCase for any testing that touches the database.
The reason we shouldn't use APISimpleTestCase with the database is that it disallows database queries, and without any rollback mechanism test data would leak between tests anyway. For database-backed tests you can use APITransactionTestCase, which resets the database by truncating all tables at the end of each test, allowing straightforward testing of database-related actions. It also adds some extra database-related assertion methods such as assertNumQueries.
Finally, APITestCase wraps the tests in two atomic() blocks: one for the whole test class and one for each test within the class. This essentially stops tests from altering the database for other tests, as the transactions are rolled back at the end of each test. Because of these wrapping atomic() blocks, specific database transaction behaviour can be hard to test, in which case you'd want to drop back to APITransactionTestCase.
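In practice that means most plain endpoint tests can just use APITestCase. A minimal sketch (the URL and expected status are made up for illustration):

from rest_framework import status
from rest_framework.test import APITestCase

class PingEndpointTest(APITestCase):
    def test_ping(self):
        # self.client is already an APIClient because of client_class
        response = self.client.get('/api/ping/')   # hypothetical endpoint
        self.assertEqual(response.status_code, status.HTTP_200_OK)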

Testing Django model mechanics

I have written a base class to be used by Django models. This base class adds some checks to the models, so I can be sure that they cannot be used in certain ways.
I want to test this base class, but in order to cover all edge cases, I need to test an incorrect implementation (to make sure that wrong implementations are caught).
Due to the way Django works, I cannot write model classes directly inside tests like this:
import pytest

class TestMyBaseClass:
    def test_original_must_be_provided(self, db):
        class MyModel(FlattenedProxyModel):
            pass

        # Create an instance
        a = MyModel()
        # Fail when trying to refresh info
        with pytest.raises(MyError):
            a.refresh_fields()
There will be cases where it will fail because there is no database table for this class (although this particular test will work, because the database is never accessed).
So the proper approach would be to use a mock of a Django model. But how would that even work? I want a mock that extends my base class (uses it as a mixin), and it needs to respond appropriately to isinstance(self, django.db.models.Model) checks.
Any ideas?

In a Django test, how should I save a database object and then retrieve it from the database?

I am using Django 1.8. I wrote the following code to test that a pre_save hook works correctly, but this code seems very inelegant. Is this the "proper way" to write this type of unit test?
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = Person(name="Joe")
        person.save()
        person2 = Person.objects.get(pk=person.pk)
        # Confirm that the pre_save hook ran.
        # The hook sets person.is_cool to True.
        self.assertEqual(person2.is_cool, True)
This works fine, but it seems ugly.
The really ugly part is that person and person2 are the same database object. The only difference is that person2 was retrieved from the database.
What you're doing in your test is perfectly fine. You can however simplify / improve it a little in my opinion.
I think you should use factories (you can use FactoryBoy). This way you won't have to update your test when you add/remove mandatory fields on your model. Also, you can remove irrelevant information from your test. In this case, the fact that the person name is Joe is completely irrelevant.
You can replace:
person = Person(name="Joe")
person.save()
with:
person = PersonFactory.create()
As Daniel mentioned, you don't need to reload the Person instance. So you don't have to do this:
person2 = Person.objects.get(pk = person.pk)
Finally, a small tip: you can use assertTrue instead of assertEqual(something, True):
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = PersonFactory.create()
        self.assertTrue(person.is_cool)
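For reference, the factory used above might look roughly like this with factory_boy (the model import and the name field are assumptions based on the question):

import factory

# from myapp.models import Person  # wherever the question's model lives

class PersonFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Person

    # Give every person a default, unique name so tests don't have to care about it.
    name = factory.Sequence(lambda n: 'person-%d' % n)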
Firstly, I'm not sure why you think that's ugly: seems a perfectly reasonable way to test this functionality.
However, you could definitely make it simpler. Although Django instances don't have identity - that is, two instances retrieved from the database separately won't share modifications until they are saved and re-fetched - when the pre-save hook runs, it modifies the existing instance. So in fact person itself will get the modification that sets is_cool, and there is no need to retrieve and check person2.
You could directly check the property in the query, without actually getting an object:
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = Person(name="Joe")
        person.save()
        # Confirm that the pre_save hook ran.
        # The hook sets person.is_cool to True.
        self.assertTrue(
            Person.objects.filter(pk=person.pk, is_cool=True).exists()
        )
I think that's a good way to test simple functionality. However, a "by the book" unit test would be better defined by mocking the database functionality.
This way you can unit test your methods without caring about what the database is doing.
I normally do this with the mock library (included in the standard library as unittest.mock in Python 3). Without going into much detail, as it has been described in other answers, you can use patch to mock the model you're testing (Person) and then make it return something.
Take a look at mock-django as well; it provides lots of functionality related to this: https://pypi.python.org/pypi/mock-django, and the unittest.mock docs are at https://docs.python.org/3/library/unittest.mock.html
I can't test this right now (and I'm being more explicit than I normally would), but for Python 3, inside a unittest class, you can create a test like this:
from unittest.mock import MagicMock, patch

# first patch your class
@patch('my_app_name.models.Person')
def test_my_person(self, person_mock):
    person_mock.objects = MagicMock()
    person_mock.objects.configure_mock(**{'get.return_value': 'guy_number_1'})
    # then you can test your method. For example, if your method upper-cases the guy's name:
    self.assertEqual(my_method('guy_number_1'), 'GUY_NUMBER_1')
The code is not the best, but the idea is that you're mocking the database, so if your database connection breaks, your unit tests don't (as it should be, because you aren't testing Django functionality or your database connection).
This has been useful for me when doing automatic building and testing without having to deploy a test database. Then you can add integration tests to cover your database functionality.
I'll extend the explanation if it isn't clear enough.
Useful things that are sometimes overlooked in mock are the configure_mock method, side_effect for mocking exceptions, and the fact that sometimes you will need to reload your module for the patches to apply.
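For example, correct configure_mock syntax and side_effect look like this (a standalone sketch, not tied to any particular model):

from unittest.mock import MagicMock

person_mock = MagicMock()
# configure_mock accepts dotted names as keyword arguments:
person_mock.objects.configure_mock(**{
    'get.return_value': 'guy_number_1',
    'filter.side_effect': RuntimeError('database is down'),
})

person_mock.objects.get(pk=1)        # returns 'guy_number_1'
# person_mock.objects.filter(pk=1)   # would raise RuntimeError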
A bit late on this question, but model instances have a refresh_from_db() method (available since Django 1.8), so you can run:
person = Person(name="Geof")
person.save()
person.refresh_from_db()
https://docs.djangoproject.com/en/3.2/ref/models/instances/#refreshing-objects-from-database

Django Test Creating Forms with New IDs

I have just moved all of my AJAX validation code over to Django Forms, and I need some help updating my tests. I basically have some test data, declared as constants, that is used across all suites. I then use this data repeatedly throughout my tests.
As part of the setup I create some users and login the user that I need:
def setUp(self):
    self.client = Client()
    create_user(username='staff', email='staff@staff.com',
                password='staff', staff=True)
    create_user(username='agent', email='agent@agent.com',
                password='agent', staff=False)
    ShiftType.objects.create(type_id='SI', description='Sales Inbox')
    self.client.login(username='staff', password='staff')
The tear down deletes this data (or it used to):
def tearDown(self):
    # Clean up the DB
    self.client.logout()
    ShiftType.objects.all().delete()
    User.objects.all().delete()
    Event.objects.all().delete()
    RecurrentEvent.objects.all().delete()
This was working fine, but now the form does not validate because the id values assigned to the users are incremented each time. For example:
ERROR: <ul class="errorlist"><li>employee_id<ul class="errorlist"><li>Select a valid choice. That choice is not one of the available choices.</li></ul></li></ul>
Printing the form allows me to see that the ids are being incremented each time.
Why is this happening even though I am deleting all employees?
I would look into using test fixtures rather than creating and deleting the data every time. This is pretty easy to do in Django. In your tests.py it would look something like this:
class BlogTest(TestCase):
    fixtures = ['core/fixtures/test.json']
When you do this, Django will build you a test database, load the fixture into it, run your tests, and then blow away the database after the tests are done. If you also want to use a different database engine for tests (we do this to have our tests use SQLite because it is fast), you can throw something like the snippet below into your settings.py.
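The original settings snippet isn't shown here; a common sketch of the idea (the exact settings layout is an assumption) is to switch the engine only when the test command is running:

# settings.py
import sys

if 'test' in sys.argv:
    # Use an in-memory SQLite database to speed up the test run.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }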
Using fixtures this way will make the IDs the same every single time, which should fix your problem.

Django get_query_set override is being cached

I'm overriding Django's get_query_set function on one of my models dynamically. I'm doing this to forcibly filter the original queryset returned by Model.objects.all/filter/get by a "scenario" value, using a decorator. Here's the relevant part of the decorator:
# Get the base QuerySet for these models before we modify their
# QuerySet managers. This prevents infinite recursion since the
# get_query_set function doesn't rely on itself to get this base QuerySet.
all_expense_objects = Expense.objects.all()
# Figure out what scenario the user is using.
current_scenario = Scenario.objects.get(user=request.user, selected=True)
# Modify the imported Expense class to filter based on the current scenario.
Expense.objects.get_query_set = lambda: all_expense_objects.filter(scenario=current_scenario)
# Call the method that was initially supposed to
# be executed before we were so rudely interrupted.
return view(request, **arguments)
I'm doing this to DRY up the code, so that all of my queries aren't littered with an additional filter. However, if the scenario changes, no objects are returned. If I kill all of my Python processes on my server, the objects for the newly selected scenario appear. I'm thinking that it's caching the modified class, and then when the scenario changes, it's applying another filter that will never make sense, since objects can only have one scenario at a time.
This hasn't been an issue with user-based filters because the user never changes for my session. Is Passenger doing something stupid to hold onto class objects between requests? Should I bail on this weird design pattern and just implement these filters on a per-view basis? There must be a best practice for DRYing up filters that apply across many views based on something dynamic, like the current user.
What about creating a Manager for the model with a method that takes the user as an argument and does this filtering? My understanding of being DRY with Django querysets is to use a model Manager:
#### view code:
def some_view(request):
    expenses = Expense.objects.filter_by_cur_scenario(request.user)
    # add additional filters here, or add to manager via more params
    expenses = expenses.filter(something_else=True)

#### models code:
class ExpenseManager(models.Manager):
    def filter_by_cur_scenario(self, user):
        current_scenario = Scenario.objects.get(user=user, selected=True)
        return self.filter(scenario=current_scenario)

class Expense(models.Model):
    objects = ExpenseManager()
Also, one quick caveat on the manager (which may also apply to overriding get_query_set): foreign relationships will not take into account any filtering done at this level. For example, if you override the MyObject.objects.filter() method to always filter out deleted rows, a model with a foreign key to MyObject won't use that filter function (at least from what I understand -- someone please correct me if I'm wrong).
I was hoping to make this work without having to change any code in the other views. Essentially, after the class is imported, I want to modify it so that no matter where it's referenced via Expense.objects.get/filter/all, it has already been filtered. As a result, no changes are required in any of the other views; it's completely transparent. And even in cases where it's used as a ForeignKey, objects retrieved using the aforementioned Expense.objects.get/filter/all will be filtered as well.
