I'm working on a web project in Django, and I'm using the Python unittest framework. Every app has some fixtures, which means several apps contain fixtures for the same tables. I would like to share fixtures between apps and test cases, because otherwise, if I change a model, I will have to change every JSON fixture that references that table.
Is it sensible to use global fixtures?
Do not use static fixtures; they are a bad automated-testing pattern. Use dynamic fixtures.
Django Dynamic Fixture has options to create global fixtures. Check its Nose plugin or the Shelve option.
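For illustration, a rough sketch of basic django-dynamic-fixture usage (this shows the general G() helper rather than the Shelve/global feature specifically; the Person model and its import path are made up):

from django.test import TestCase
from ddf import G  # django-dynamic-fixture's "get" helper

from myapp.models import Person  # hypothetical model


class PersonTests(TestCase):
    def test_person_is_created(self):
        # G() creates and saves an instance, filling in any required fields automatically.
        person = G(Person, name='Joe')
        self.assertTrue(Person.objects.filter(pk=person.pk).exists())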
I would highly recommend looking into Django's Testing architecture. Check out TestCase.fixtures especially; this is far more advanced and Django-specific than unittest.
I can't think of anything wrong with using global fixtures as long as you delete them in your tearDown method (or teardown_test_environment method - see below).
I am not sure if you are asking how to do this, but if so, there are two ways that I can think of.
Use a common base class for all your tests. Something like this:
class TestBase(django.test.TestCase):
    fixtures = ['common_fixtures.xml']

class MyTestClass(TestBase):
    fixtures = TestBase.fixtures + ['fixtures_for_this_test.xml']

    def test_foo(self):
        # test stuff
Use a custom test runner. In your test runner, load all the fixtures you need before running the tests and tear them down after executing them. You should preferably do this with your own setup_ and teardown_test_environment methods, or in a test-runner hook along the lines of the sketch below.
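A rough sketch of this second approach, assuming a recent Django where you subclass DiscoverRunner. Here the fixtures are loaded in setup_databases, which runs after the test database exists (unlike setup_test_environment, which runs before it is created); the fixture name and the TEST_RUNNER path are placeholders:

# myproject/testrunner.py (set TEST_RUNNER = "myproject.testrunner.FixtureLoadingRunner")
from django.core.management import call_command
from django.test.runner import DiscoverRunner


class FixtureLoadingRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        config = super().setup_databases(**kwargs)
        # Load the fixtures shared by every app once, right after the test DB is created.
        call_command('loaddata', 'common_fixtures.xml', verbosity=0)
        return config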
Related
I am trying to write test cases for the Django REST API that we have, but I have an issue with fixture loading. Everything works correctly when I have only one TestCase, but when I add a second TestCase in a second Django app I get django.db.utils.IntegrityError. My original intention was to create a general TestCase where I set up the most used objects in the setUpTestData function and have the other tests inherit from that one.
Things I have tried:
Using APITestCase from rest_framework and TestCase from django
Not using inheritance, and having both files using TestCase from django
Using setUp method instead of setUpTestData
Using call_command from django.core.management in the setUpTestData method instead of creating the fixtures in each class in the class variable
Declaring the fixtures only in the TestCase that is executed first (this makes the other TestCase have an empty DB, so it is clear that the data is recreated for each TestCase)
Changing the order of the files in the fixtures variable
When I comment out one of the test files the other works, and vice versa. When I ran call_command with verbosity=2, the fixtures in the first file executed perfectly; it breaks when trying to install the first fixture of the second file, with the error:
django.db.utils.IntegrityError: Problem installing fixtures: insert or update on table "preference_questions" violates foreign key constraint "preference_questi_preference_questi_64f61c66_fk_prefer" DETAIL: Key (preference_question_category_id)=(2) is not present in table "preference_question_category"
Sometimes it gives a ForeignKeyViolation instead, depending on which of the cases mentioned above applies.
It turns out the issue was that the fixtures had hard-coded IDs in them, and those IDs cannot simply be removed because the objects have relations between them. In this case, there are two options I found:
Use natural keys, as explained in this post; this requires adding natural-key methods to your models and re-dumping your DB (see the sketch below)
Create the objects manually in setUpTestData (labor-intensive, but it won't give you any problems)
Depending on your case, one or the other may suit you better; the first is probably the better option, though.
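As a rough illustration of the natural-keys route (the model and field names here are guesses based on the error message, not the real project code):

from django.db import models


class PreferenceQuestionCategoryManager(models.Manager):
    def get_by_natural_key(self, name):
        # Lets loaddata find the row by name instead of by a hard-coded primary key.
        return self.get(name=name)


class PreferenceQuestionCategory(models.Model):
    name = models.CharField(max_length=100, unique=True)

    objects = PreferenceQuestionCategoryManager()

    def natural_key(self):
        return (self.name,)

You would then re-dump the fixtures with ./manage.py dumpdata --natural-foreign --natural-primary my_app > my_app/fixtures/test_data.json so the files no longer carry conflicting hard-coded IDs.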
pytest module-scoped fixtures that use the Django test database are perplexing us. We want a fixture that creates an entry in the database once per test module.
This works, but it's very clunky:
import pytest

@pytest.fixture(scope='module')
def site(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        address = models.Address.create(..)
        site = models.Site.create(address=address, ..)
    yield site
    with django_db_blocker.unblock():
        site.delete()
        address.delete()
We've noticed that django_db_setup is required, because if this fixture is the first one called, the test database won't be set up and the fixture will be created in the non-test database!
For complicated objects that have more than one or two related models, the fixture becomes very ugly.
Is there a better way to do this?
The docs for the different test case classes are here
I am unsure of what situations I would use each of the test case classes:
APITestCase
APISimpleTestCase
APITransactionTestCase
As explained in the Django Rest Framework Docs, the 3 available test classes simply extend the regular Django test classes but switch the client to use APIClient.
This can also be seen in the Django Rest Framework source code
class APITransactionTestCase(testcases.TransactionTestCase):
    client_class = APIClient

class APITestCase(testcases.TestCase):
    client_class = APIClient

class APISimpleTestCase(testcases.SimpleTestCase):
    client_class = APIClient
The first test case class you should know about is APISimpleTestCase, which allows us to test general DRF/Django things such as HTTP redirects and checking that some callable raises an exception. The docs note that we shouldn't use APISimpleTestCase for any testing that touches the database.
The reason we shouldn't use APISimpleTestCase with the database is that it doesn't reset the database between tests, so test data would leak from one test into the next. For database-backed tests you can use APITransactionTestCase, which resets the database after each test by truncating all tables, so every test starts from a clean state; because individual tests are not wrapped in a transaction, you can also test real transaction behaviour such as commits and rollbacks. It also adds some extra database-related assertion methods such as assertNumQueries.
Finally, the APITestCase wraps the tests with 2 atomic() blocks, one for the whole test class and one for each test within the class. This essentially stops tests from altering the database for other tests as the transactions are rolled back at the end of each test. By having this second atomic() block around the whole test class, specific database transaction behaviour can be hard to test and hence you'd want to drop back to using APITransactionTestCase.
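For example, a minimal APITestCase sketch (the endpoint and test names below are made up for illustration):

from rest_framework import status
from rest_framework.test import APITestCase


class WidgetAPITests(APITestCase):
    def test_list_widgets(self):
        # self.client is an APIClient thanks to client_class, so DRF-specific
        # helpers such as format='json' are available.
        response = self.client.get('/api/widgets/', format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)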
I am using Django 1.8. I wrote the following code to test that a pre_save hook works correctly, but this code seems very inelegant. Is this the "proper way" to write this type of unit test?
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = Person(name="Joe")
        person.save()

        person2 = Person.objects.get(pk=person.pk)

        # Confirm that the pre_save hook ran.
        # The hook sets person.is_cool to True.
        self.assertEqual(person2.is_cool, True)
This works fine, but it seems ugly.
The really ugly part is that person and person2 are the same database object. The only difference is that person2 was retrieved from the database.
What you're doing in your test is perfectly fine. You can however simplify / improve it a little in my opinion.
I think you should use factories (you can use FactoryBoy). This way you won't have to update your test when you add/remove mandatory fields on your model. Also, you can remove irrelevant information from your test. In this case, the fact that the person name is Joe is completely irrelevant.
You can replace:
person = Person(name="Joe")
person.save()
with:
person = PersonFactory.create()
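PersonFactory isn't shown in the question, but a minimal factory_boy sketch could look like this (assuming Person lives in myapp.models):

import factory

from myapp.models import Person  # hypothetical import path


class PersonFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Person

    # The name is irrelevant to the test, so let the factory generate one.
    name = factory.Faker('first_name')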
As Daniel mentioned, you don't need to reload the Person instance. So you don't have to do this:
person2 = Person.objects.get(pk = person.pk)
Finally, a small tip: you can use assertTrue instead of assertEqual(something, True):
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = PersonFactory.create()
        self.assertTrue(person.is_cool)
Firstly, I'm not sure why you think that's ugly: seems a perfectly reasonable way to test this functionality.
However, you could definitely make it simpler. Although Django instances don't have identity - that is, two instances retrieved from the database separately won't share modifications until they are saved and retrieved - when the pre-save hook runs, it modifies the existing instance. So in fact person will get the modification to set is_cool, so there is no need to retrieve and check person2.
You could directly check the property in the query, without actually getting an object:
class PreSaveTests(TestCase):
    def test_pre_save_hook(self):
        person = Person(name="Joe")
        person.save()

        # Confirm that the pre_save hook ran.
        # The hook sets person.is_cool to True.
        self.assertTrue(
            Person.objects.filter(pk=person.pk, is_cool=True).exists()
        )
I think that's a good way to test simple functionality. However, a "by the book" unit test would be better defined by mocking the database functionality.
This way you can unit test your methods without caring about what the database is doing.
I normally do this with the mock library (included in the standard library as unittest.mock since Python 3.3). Without going into much detail, as it has been described in other answers, you can use patch to mock the model you're testing (Person) and then make it return something.
Take a look at mock-django as well; it provides lots of functionality related to this: https://pypi.python.org/pypi/mock-django. The unittest.mock docs are here: https://docs.python.org/3/library/unittest.mock.html
I can't test this right now (and I'll be more explicit than I normally would), but for Python 3, inside a unittest class, you can create a test like this:
# first patch your class
@patch('my_app_name.models.Person')
def test_my_person(self, person_mock):
    person_mock.objects = MagicMock()
    person_mock.objects.configure_mock(**{'get.return_value': 'guy_number_1'})
    # then you can test your method. For example, if your method upper-cases the guy's name:
    self.assertEqual(my_method('guy_number_1'), 'GUY_NUMBER_1')
The code is not the best, but the idea is that you're mocking the database, so if your database connection breaks, your unit tests don't (as it should be, because you aren't testing Django functionality nor your database connection).
This has been useful for me when doing automatic building and testing without having to deploy a test database. Then you can add integration tests to cover your database functionality.
I'll extend the explanation if it isn't clear enough.
Useful things sometimes overlooked in mock are configure_mock and side_effect (for mocking exceptions); also, sometimes you will need to reload your module for the patches to apply.
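For instance, a small sketch of side_effect in the same style as the test above (my_method and the raised exception are made up):

@patch('my_app_name.models.Person')
def test_my_person_missing(self, person_mock):
    # side_effect makes the mocked call raise instead of returning a value,
    # which is handy for exercising error paths without a database.
    person_mock.objects.get.side_effect = ValueError("no such person")
    with self.assertRaises(ValueError):
        my_method('missing_guy')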
A bit late on this question, but model instances have a refresh_from_db() method (available since Django 1.8), so you can run:
person = Person(name="Geof")
person.save()
person.refresh_from_db()
https://docs.djangoproject.com/en/3.2/ref/models/instances/#refreshing-objects-from-database
I used to have a standalone script with some unit tests to test data in our database. I did not use the built-in Django testing tools, as they would create an empty test database, which is not what I want.
In that script, I created three different classes extending unittest.TestCase containing some test functions that directly executed SQL statements.
Now I would prefer to be able to access the Django ORM directly. The easiest way to do this is via a custom management command (./manage.py datatests).
In the standalone script, I could run all unit tests via the following snippet:
if __name__ == '__main__':
    unittest.main()
It would discover all tests in the current file and run them.
How can I do an equivalent thing (run some test suites) from within a custom Django management command?
I'm sorry for not having searched for an answer long enough before asking, but I found the solution to this problem myself in another Stackoverflow answer:
How to run django unit-tests on production database?
Essentially, instead of unittest.main() the following code can be used:
suite = unittest.TestLoader().loadTestsFromTestCase(TestCaseClass)
unittest.TextTestRunner(verbosity=2).run(suite)
This will load all tests in the specified TestCaseClass. If you want to load all tests in the current module, creating the suite this way will help:
suite = unittest.TestLoader().loadTestsFromName(__name__)
The Stackoverflow answer linked above contains a full example. Furthermore, the Basic Example section of the unittest module docs describes the same thing. For other options to load tests, see Loading and running tests in the docs.
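Putting that together, a custom management command could look roughly like this (the module path and test class name are placeholders for your own data tests):

# myapp/management/commands/datatests.py
import unittest

from django.core.management.base import BaseCommand

from myapp.datatests import DataIntegrityTests  # hypothetical TestCase subclass


class Command(BaseCommand):
    help = "Run data validation tests against the configured database."

    def handle(self, *args, **options):
        suite = unittest.TestLoader().loadTestsFromTestCase(DataIntegrityTests)
        unittest.TextTestRunner(verbosity=2).run(suite)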
You may want to specify the contents of your start-up database through fixtures. They load the database context for a particular test, and you can take a snapshot of your database with:
$ ./manage.py dumpdata my_app > fixtures/my_pre_test_db.json
Now in your unit test you will have something like this:
class MyTestCase(TestCase):
    fixtures = ['fixtures/my_pre_test_db.json']

    def testThisFeature(self):
        ...