pytest module-scoped fixtures that use the Django test database are perplexing us. We want a fixture that creates an entry in the database once per test module.
This works, but it's very clunky:
import pytest

@pytest.fixture(scope='module')
def site(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        address = models.Address.objects.create(...)
        site = models.Site.objects.create(address=address, ...)
    yield site
    with django_db_blocker.unblock():
        site.delete()
        address.delete()
We've noticed that django_db_setup is required, because if this fixture is the first one called, the test database won't be set up and the fixture will be created in the non-test database!
For complicated objects that have more than one or two related models, the fixture becomes very ugly.
Is there a better way to do this?
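For reference, the pytest-django documentation shows a related pattern for this use case: overriding django_db_setup itself so the data is loaded once per session. A minimal sketch, assuming a fixture named initial_test_data exists where loaddata can find it:

import pytest
from django.core.management import call_command


@pytest.fixture(scope='session')
def django_db_setup(django_db_setup, django_db_blocker):
    # Runs once per test session, right after the test database is
    # created, and loads shared data into it.
    with django_db_blocker.unblock():
        call_command('loaddata', 'initial_test_data')

This avoids per-module teardown code entirely, at the cost of sharing the data across the whole session.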
The docs for the different test case classes are here
I am unsure in what situations I would use each of the test case classes:
APITestCase
APISimpleTestCase
APITransactionTestCase
As explained in the Django Rest Framework Docs, the 3 available test classes simply extend the regular Django test classes but switch the client to use APIClient.
This can also be seen in the Django Rest Framework source code:

class APITransactionTestCase(testcases.TransactionTestCase):
    client_class = APIClient


class APITestCase(testcases.TestCase):
    client_class = APIClient


class APISimpleTestCase(testcases.SimpleTestCase):
    client_class = APIClient
The first test case you should know about is APISimpleTestCase, which allows us to test general DRF/Django things such as HTTP redirects and checking that some callable raises an exception. The docs note that we shouldn't use APISimpleTestCase when doing any testing with the database.
The reason we shouldn't use APISimpleTestCase with the database is that the test data would stay in the database across multiple tests. To get around this we must use APITransactionTestCase, which resets the database by truncating all tables after each test, allowing easy testing of database-related actions, including explicit transaction behaviour. It also adds some extra database-related assertion methods such as assertNumQueries.
Finally, APITestCase wraps the tests in two atomic() blocks, one for the whole test class and one for each test within the class. This essentially stops tests from altering the database for other tests, as the transactions are rolled back at the end of each test. Because of this atomic() block around the whole test class, specific database transaction behaviour can be hard to test, and hence you'd want to drop back to using APITransactionTestCase.
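To make the choice concrete, here is a minimal sketch of a typical test (the /api/things/ endpoint is a hypothetical example):

from rest_framework import status
from rest_framework.test import APITestCase


class ThingListTest(APITestCase):
    # APITestCase: each test runs inside a transaction that is rolled
    # back afterwards, so objects created here never leak into other tests.
    def test_list_returns_ok(self):
        response = self.client.get('/api/things/')  # hypothetical URL
        self.assertEqual(response.status_code, status.HTTP_200_OK)

If the same test needed to verify commit/rollback behaviour explicitly, it would move to APITransactionTestCase instead.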
I'm working on a project based on Django REST Framework, so I need to write some test cases for my REST API.
I've written a basic class (let's call it BaseAPITestCase) inherited from DRF's standard APITransactionTestCase.
In this class I've defined a setUp method where I create a test user that belongs to some groups (I'm using a UserFactory written with FactoryBoy).
When I run my tests, the first one (the first test case method from the first child class) successfully creates a user with the specified groups, but the others don't (the other test case methods in the same class).
The user groups just don't exist in the DB at that point. It seems like existing records are deleted from the DB at each new test run. But how does it work the first time, then?
I've read the Django test documentation but can't figure out why this happens... Can anyone explain it?
The main question is: what should I do to make these tests work?
Should I create the user once and store it in an object variable?
Should I add some params to preserve the user groups data?
Or should I add the user groups to fixtures? In that case, how can I create this fixture properly? (With all related models, such as permissions and content types.)
Simplified source code for illustration:
from rest_framework.test import APIClient, APITransactionTestCase


class BaseAPITestCase(APITransactionTestCase):

    def setUp(self):
        self.user = UserFactory(
            username='login',
            password='pass',
            group_names=('admin', )
        )
        self.client = APIClient()
        self.client.force_login(self.user)

    def tearDown(self):
        self.client.logout()


class CampaignListTest(BaseAPITestCase):

    def test_authorized_get(self):
        # successfully gets user groups from DB
        ...

    def test_authorized_post(self):
        # couldn't find any groups
        ...
TransactionTestCase is a test case to test transactions. As such, it explicitly does not use transactions to isolate tests, as that would interfere with the behaviour of the transactions that are being tested.
To isolate tests, TransactionTestCase rolls back the database by truncating all tables. This is the easiest and fastest solution without using transactions, but as you noticed this will delete all data, including the groups that were generated in a post_migrate signal receiver. You can set serialized_rollback = True on the class, in which case it will serialize all changes to the database and restore them after each test. However, this is significantly slower and often greatly increases the time it takes to run the test suite, so it is not the default.
TestCase does not have this restriction, so it wraps each test class in a transaction and each individual test in a savepoint. Rolling back using transactions and savepoints is fast and allows you to keep the data that was there at the start of the transaction or savepoint. For this reason, it is preferable to use TestCase whenever possible.
This extends to DRF's APITransactionTestCase and APITestCase, which simply inherit from Django's test cases.
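Concretely, a minimal sketch of the two resulting options (the class names are placeholders):

from rest_framework.test import APITestCase, APITransactionTestCase


# Option 1: keep testing transactional behaviour, but serialize the
# initial database content (e.g. groups created in a post_migrate
# receiver) and restore it after each test. Noticeably slower.
class BaseAPITestCase(APITransactionTestCase):
    serialized_rollback = True


# Option 2 (preferable when possible): per-test transactions that are
# rolled back, leaving pre-existing data such as groups untouched.
class FastBaseAPITestCase(APITestCase):
    pass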
I have written a base class to be used by Django models. This base class adds some checks to the model, so I can be sure that the models cannot be used in certain ways.
I want to test this base class, but in order to test all edge cases, I need to test an incorrect implementation (to make sure that wrong implementations are caught).
Due to the way Django works, I cannot write model classes inside tests directly, like this:
import pytest


class TestMyBaseClass:
    def test_original_must_be_provided(self, db):
        class MyModel(FlattenedProxyModel):
            pass

        # Create an instance
        a = MyModel()
        # Fail when trying to refresh info
        with pytest.raises(MyError):
            a.refresh_fields()
There will be cases where it will fail because there is no database table for this class (although this particular test will work, because the database is never accessed).
So the proper approach would be to use a mock of a Django model. But how would that even work? I want a mock that extends my base class (uses it as a mixin), and it needs to respond appropriately to isinstance(self, django.db.models.Model) checks.
Any ideas?
I recently switched from Django 1.6 to 1.7, and I began using migrations (I never used South).
Before 1.7, I used to load initial data with a fixture/initial_data.json file, which was loaded with the python manage.py syncdb command (when creating the database).
Now, I started using migrations, and this behavior is deprecated:
If an application uses migrations, there is no automatic loading of fixtures.
Since migrations will be required for applications in Django 2.0, this behavior is considered deprecated. If you want to load initial data for an app, consider doing it in a data migration.
(https://docs.djangoproject.com/en/1.7/howto/initial-data/#automatically-loading-initial-data-fixtures)
The official documentation does not have a clear example on how to do it, so my question is:
What is the best way to import such initial data using data migrations:
Write Python code with multiple calls to mymodel.create(...), or
Use or write a Django function (like calling loaddata) to load data from a JSON fixture file?
I prefer the second option.
I don't want to use South, as Django seems to be able to do it natively now.
Update: See @GwynBleidD's comment below for the problems this solution can cause, and see @Rockallite's answer below for an approach that's more durable to future model changes.
Assuming you have a fixture file in <yourapp>/fixtures/initial_data.json
Create your empty migration:
In Django 1.7:
python manage.py makemigrations --empty <yourapp>
In Django 1.8+, you can provide a name:
python manage.py makemigrations --empty <yourapp> --name load_initial_data
Edit your migration file <yourapp>/migrations/0002_auto_xxx.py
2.1. Custom implementation, inspired by Django's loaddata (initial answer):
import os

from django.core import serializers
from django.db import migrations

fixture_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../fixtures'))
fixture_filename = 'initial_data.json'


def load_fixture(apps, schema_editor):
    fixture_file = os.path.join(fixture_dir, fixture_filename)
    fixture = open(fixture_file, 'rb')
    objects = serializers.deserialize('json', fixture, ignorenonexistent=True)
    for obj in objects:
        obj.save()
    fixture.close()


def unload_fixture(apps, schema_editor):
    """Brutally deleting all entries for this model..."""
    MyModel = apps.get_model("yourapp", "ModelName")
    MyModel.objects.all().delete()


class Migration(migrations.Migration):

    dependencies = [
        ('yourapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_fixture, reverse_code=unload_fixture),
    ]
2.2. A simpler solution for load_fixture (per @juliocesar's suggestion):
import os

from django.core.management import call_command

fixture_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../fixtures'))
fixture_filename = 'initial_data.json'


def load_fixture(apps, schema_editor):
    fixture_file = os.path.join(fixture_dir, fixture_filename)
    call_command('loaddata', fixture_file)
Useful if you want to use a custom directory.
2.3. Simplest: calling loaddata with app_label will load fixtures from <yourapp>'s fixtures directory automatically:
from django.core.management import call_command

fixture = 'initial_data'


def load_fixture(apps, schema_editor):
    call_command('loaddata', fixture, app_label='yourapp')
If you don't specify app_label, loaddata will try to load the fixture filename from all apps' fixtures directories (which you probably don't want).
Run it
python manage.py migrate <yourapp>
Short version
You should NOT use the loaddata management command directly in a data migration.
# Bad example for a data migration
from django.db import migrations
from django.core.management import call_command


def load_fixture(apps, schema_editor):
    # No, it's wrong. DON'T DO THIS!
    call_command('loaddata', 'your_data.json', app_label='yourapp')


class Migration(migrations.Migration):

    dependencies = [
        # Dependencies to other migrations
    ]

    operations = [
        migrations.RunPython(load_fixture),
    ]
Long version
loaddata utilizes django.core.serializers.python.Deserializer, which uses the most up-to-date models to deserialize historical data in a migration. That's incorrect behavior.
For example, suppose there is a data migration which utilizes the loaddata management command to load data from a fixture, and it's already applied on your development environment.
Later, you decide to add a new required field to the corresponding model, so you do it and make a new migration against your updated model (and possibly provide a one-off value to the new field when ./manage.py makemigrations prompts you).
You run the next migration, and all is well.
Finally, you're done developing your Django application and you deploy it on the production server. Now it's time to run all the migrations from scratch in the production environment.
However, the data migration fails. That's because the deserialized model from the loaddata command, which represents the current code, can't be saved with empty data for the new required field you added. The original fixture lacks the necessary data for it!
But even if you update the fixture with the required data for the new field, the data migration still fails. When the data migration is running, the next migration, which adds the corresponding column to the database, has not been applied yet. You can't save data to a column which does not exist!
Conclusion: in a data migration, the loaddata command introduces potential inconsistency between the model and the database. You should definitely NOT use it directly in a data migration.
The Solution
loaddata command relies on django.core.serializers.python._get_model function to get the corresponding model from a fixture, which will return the most up-to-date version of a model. We need to monkey-patch it so it gets the historical model.
(The following code works for Django 1.8.x)
# Good example for a data migration
from django.db import migrations
from django.core.serializers import base, python
from django.core.management import call_command


def load_fixture(apps, schema_editor):
    # Save the old _get_model() function
    old_get_model = python._get_model

    # Define new _get_model() function here, which utilizes the apps argument to
    # get the historical version of a model. This piece of code is directly stolen
    # from django.core.serializers.python._get_model, unchanged. However, here it
    # has a different context, specifically, the apps variable.
    def _get_model(model_identifier):
        try:
            return apps.get_model(model_identifier)
        except (LookupError, TypeError):
            raise base.DeserializationError("Invalid model identifier: '%s'" % model_identifier)

    # Replace the _get_model() function on the module, so loaddata can utilize it.
    python._get_model = _get_model

    try:
        # Call loaddata command
        call_command('loaddata', 'your_data.json', app_label='yourapp')
    finally:
        # Restore old _get_model() function
        python._get_model = old_get_model


class Migration(migrations.Migration):

    dependencies = [
        # Dependencies to other migrations
    ]

    operations = [
        migrations.RunPython(load_fixture),
    ]
Inspired by some of the comments (namely n__o's) and the fact that I have a lot of initial_data.* files spread out over multiple apps I decided to create a Django app that would facilitate the creation of these data migrations.
Using django-migration-fixture you can simply run the following management command and it will search through all your INSTALLED_APPS for initial_data.* files and turn them into data migrations.
./manage.py create_initial_data_fixtures
Migrations for 'eggs':
  0002_auto_20150107_0817.py:
Migrations for 'sausage':
  Ignoring 'initial_data.yaml' - migration already exists.
Migrations for 'foo':
  Ignoring 'initial_data.yaml' - not migrated.
See django-migration-fixture for install/usage instructions.
In order to give your database some initial data, write a data migration.
In the data migration, use the RunPython operation to load your data.
Don't load it via automatic initial_data fixtures, as that mechanism is deprecated.
Your data migrations will be run only once, because migrations form an ordered sequence: when the 003_xxxx.py migration is run, Django records in the database that this app has been migrated up to that one (003), and will run only the following migrations afterwards.
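For example, a minimal sketch of such a data migration (yourapp and the Country model are placeholders):

from django.db import migrations


def load_initial_data(apps, schema_editor):
    # Always fetch the historical model from `apps`; never import it directly.
    Country = apps.get_model('yourapp', 'Country')
    Country.objects.create(name='country 1')


class Migration(migrations.Migration):

    dependencies = [
        ('yourapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_initial_data),
    ]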
Unfortunately, the solutions presented above didn't work for me. I found that every time I change my models I have to update my fixtures. Ideally I would instead write data migrations to modify created data and fixture-loaded data in the same way.
To facilitate this, I wrote a quick function which looks in the fixtures directory of the current app and loads a fixture. Put this function into a migration at the point in the model history that matches the fields in the fixture.
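The original helper isn't reproduced here, but a minimal sketch of what such a function might look like (borrowing the apps-patching trick from the natural-keys answer further down; the helper name and directory layout are assumptions):

import os
from unittest.mock import patch

from django.core.management import call_command


def load_app_fixture(fixture_name, apps):
    # Hypothetical helper: load a fixture from this app's fixtures/
    # directory, deserializing against the historical models in `apps`
    # so the data matches this point in the model history.
    fixture_path = os.path.join(
        os.path.dirname(__file__), '..', 'fixtures', fixture_name)
    with patch('django.core.serializers.python.apps', apps):
        call_command('loaddata', fixture_path)

A RunPython function in the migration would then call, say, load_app_fixture('0007_mymodel_data.json', apps).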
On Django 2.1, I wanted to load some models (like country names, for example) with initial data.
But I wanted this to happen automatically right after the execution of initial migrations.
So I thought that it would be great to have a sql/ folder inside each application that required initial data to be loaded.
Then within that sql/ folder I would have .sql files with the required DMLs to load the initial data into the corresponding models, for example:
INSERT INTO appName_modelName (fieldName)
VALUES
    ('country 1'),
    ('country 2'),
    ('country 3'),
    ('country 4');
To be more descriptive, this is how an app containing a sql/ folder would look:

yourapp/
    migrations/
    sql/
        0001_load_countries.sql
        0002_load_cities.sql
    models.py
    ...

(The file names above are just illustrative.) I also found some cases where I needed the SQL scripts to be executed in a specific order, so I decided to prefix the file names with a consecutive number, as shown above.
Then I needed a way to load any SQL scripts available inside any application's sql/ folder automatically when running python manage.py migrate.
So I created another application named initial_data_migrations and added it to the list of INSTALLED_APPS in the settings.py file. Then I created a migrations folder inside it and added a file called run_sql_scripts.py (which actually is a custom migration).
I created run_sql_scripts.py so that it takes care of running all sql scripts available within each application. This one is then fired when someone runs python manage.py migrate. This custom migration also adds the involved applications as dependencies, that way it attempts to run the sql statements only after the required applications have executed their 0001_initial.py migrations (We don't want to attempt running a SQL statement against a non-existent table).
Here is the source of that script:
import os
import itertools

from django.db import migrations

from YourDjangoProjectName.settings import BASE_DIR, INSTALLED_APPS

SQL_FOLDER = "/sql/"

APP_SQL_FOLDERS = [
    (os.path.join(BASE_DIR, app + SQL_FOLDER), app) for app in INSTALLED_APPS
    if os.path.isdir(os.path.join(BASE_DIR, app + SQL_FOLDER))
]

SQL_FILES = [
    sorted([path + file for file in os.listdir(path) if file.lower().endswith('.sql')])
    for path, app in APP_SQL_FOLDERS
]


def load_file(path):
    with open(path, 'r') as f:
        return f.read()


class Migration(migrations.Migration):

    dependencies = [
        (app, '__first__') for path, app in APP_SQL_FOLDERS
    ]

    operations = [
        migrations.RunSQL(load_file(f)) for f in list(itertools.chain.from_iterable(SQL_FILES))
    ]
I hope someone finds this helpful; it worked just fine for me! If you have any questions, please let me know.
NOTE: This might not be the best solution, since I'm just getting started with Django; however, I still wanted to share this "how-to" with you all, since I didn't find much information while googling about this.
In my opinion fixtures are a bit bad. If your database changes frequently, keeping them up to date will soon become a nightmare. Actually, it's not only my opinion; the book "Two Scoops of Django" explains it much better.
Instead, I write a Python file to provide the initial setup. If you need something more, I suggest you look at Factory Boy.
If you need to migrate some data you should use data migrations.
There's also "Burn Your Fixtures, Use Model Factories" about using fixtures.
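For reference, a minimal factory_boy sketch (the Site model and its name field are placeholders):

import factory

from yourapp import models  # 'yourapp' is a placeholder


class SiteFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Site

    # Each instance gets a unique, generated name.
    name = factory.Sequence(lambda n: 'site-%d' % n)

Calling SiteFactory() creates and saves a Site, so the initial setup lives in Python code rather than in serialized fixtures.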
Although @Rockallite's answer is excellent, it does not explain how to handle fixtures that rely on natural keys instead of integer pk values.
Simplified version
First, note that @Rockallite's solution can be simplified by using unittest.mock.patch as a context manager, and by patching apps instead of _get_model:
...
from unittest.mock import patch
...


def load_fixture(apps, schema_editor):
    with patch('django.core.serializers.python.apps', apps):
        call_command('loaddata', 'your_data.json', ...)

...
This works well, as long as your fixtures do not rely on natural keys.
If they do, you're likely to see a DeserializationError: ... value must be an integer....
The problem with natural keys
Under the hood, loaddata uses django.core.serializers.deserialize() to load your fixture objects.
The deserialization of fixtures based on natural keys relies on two things:
the presence of a get_by_natural_key() method on the model's default manager
the presence of a natural_key() method on the model itself
The get_by_natural_key() method is necessary for the deserializer to know how to interpret the natural key, instead of an integer pk value.
Both methods are necessary for the deserializer to get existing objects from the database by natural key, as also explained here.
However, the apps registry which is available in your migrations uses historical models, and these do not have access to custom managers or custom methods such as natural_key().
Possible solution: step 1
The problem of the missing get_by_natural_key() method from our custom model manager is relatively easy to solve:
Just set use_in_migrations=True on your custom manager, as described in the documentation.
This ensures that your historical models can access the current get_by_natural_key() during migrations, and fixture loading should now succeed.
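A minimal sketch of such a manager (the Country model and its code field are placeholders):

from django.db import models


class CountryManager(models.Manager):
    # Expose this manager (and get_by_natural_key) on the historical
    # models used inside migrations.
    use_in_migrations = True

    def get_by_natural_key(self, code):
        return self.get(code=code)


class Country(models.Model):
    code = models.CharField(max_length=2, unique=True)

    objects = CountryManager()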
However, your historical models still don't have a natural_key() method. As a result, your fixtures will be treated as new objects, even if they are already present in the database.
This may lead to a variety of errors if the data-migration is ever re-applied, such as:
unique-constraint violations (if your models have unique-constraints)
duplicate fixture objects (if your models do not have unique-constraints)
"get returned multiple objects" errors (due to duplicate fixture objects created previously)
So, effectively, you're still missing out on a kind of get_or_create-like behavior during deserialization.
To experience this, just apply a data-migration as described above (in a test environment), then roll back the same data-migration (without removing the data), then re-apply the data-migration.
Possible solution: step 2
The problem of the missing natural_key() method from the model itself is a bit more difficult to solve.
One solution would be to assign the natural_key() method from the current model to the historical model, for example:
...
from unittest.mock import patch

from django.apps import apps as current_apps
from django.core.management import call_command
...


def load_fixture(apps, schema_editor):
    def _get_model_patch(app_label):
        """Add the natural_key method from the current model to the historical model."""
        historical_model = apps.get_model(app_label=app_label)
        current_model = current_apps.get_model(app_label=app_label)
        historical_model.natural_key = current_model.natural_key
        return historical_model

    with patch('django.core.serializers.python._get_model', _get_model_patch):
        call_command('loaddata', 'your_data.json', ...)

...
Notes:
For clarity, I omitted things like error handling and attribute checking from the example. You should implement those where necessary.
This solution uses the current model's natural_key method, which may still lead to trouble in certain scenarios, but the same goes for Django's use_in_migrations option for model managers.