I have created a series of checks using Django's System check framework.
Some of the checks are used to confirm that fixtures are set up correctly. For example, I have a check that confirms that all users have at least one group.
from django.contrib.auth import get_user_model
from django.core.checks import Error, Tags, register

UserModel = get_user_model()

@register(Tags.database)
def check_users_have_group(app_configs, **kwargs):
    errors = []
    users = UserModel.objects.all()
    for user in users:
        if not user.groups.exists():
            message = f'{user} has no permission groups set.'
            errors.append(
                Error(
                    message,
                    obj='account',
                    id=f'check_user_{user.id}_permission_groups'
                )
            )
    return errors
Django's default is to run checks on migration. If I deploy the app without an existing database, then when I run migrate to set up the database the above check will cause a ProgrammingError because the table is not yet created:
django.db.utils.ProgrammingError: relation "accounts_account" does not exist
How can I exclude this check from running on python manage.py migrate? I want to run it after the migration is complete.
Django's default is to run checks on migration
OP can configure it not to do so by using the --skip-checks flag, like
django-admin migrate --skip-checks
As mentioned in the previous link
This option is only available if the requires_system_checks command attribute is not an empty list or tuple.
So, OP then has to configure the requires_system_checks to match what OP wants, because the default value is 'all'.
Since OP doesn't want to run Tags.database, don't include that one in the list. So OP will have something like
requires_system_checks = [Tags.staticfiles, Tags.models]
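Fleshed out, a minimal sketch of a command carrying that attribute, shadowing the built-in migrate (the file path and exact tag list are illustrative):

# myapp/management/commands/migrate.py
from django.core.checks import Tags
from django.core.management.commands.migrate import Command as MigrateCommand

class Command(MigrateCommand):
    # Tags.database is deliberately left out of the list
    requires_system_checks = [Tags.staticfiles, Tags.models]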
In light of the new specifications (that OP wants to run other database checks), let's get some more context.
Django's system checks are organized using specific built-in tags. Reading more about the database one, we see
database: Checks database-related configuration issues. Database checks are not run by default because they do more than static code analysis as regular checks do. They are only run by the migrate command or if you specify configured database aliases using the --database option when calling the check command.
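In practice that means the database checks can still be run on demand once migrate has finished, for example:

python manage.py check --database default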
This points to what Abdul Aziz Barkat mentions in the comments:
The System check framework is for static checks so I don't know if implementing your checks over there is the best place to do so. You might instead want to come up with a custom management command to do this checking.
In other words, for such a task it's advisable to create a custom command. In this other answer I explain how to do so.
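A minimal sketch of such a command, reusing the user/group check from the question (the file path and command name are hypothetical):

# myapp/management/commands/check_fixtures.py
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = 'Confirm that every user belongs to at least one group.'

    def handle(self, *args, **options):
        UserModel = get_user_model()
        missing = UserModel.objects.filter(groups__isnull=True)
        if missing.exists():
            names = ', '.join(str(user) for user in missing)
            raise CommandError(f'Users with no permission groups: {names}')
        self.stdout.write(self.style.SUCCESS('All users have at least one group.'))

Run it once the migration is complete: python manage.py migrate && python manage.py check_fixtures.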
Problem
During a data migration, I'm trying to create a django.contrib.auth.models.Group and some Users, and then attach said group to one of the users.
The problem I'm finding (other than the permissions still not being created, but I've already found a solution to that) is that for some reason the many-to-many manager doesn't seem to be working as it should.
Basically what I'm trying to do is something like:
group = Group.objects.create(name="manager")
# user creation...
user.groups.add(group)
However, I get the following error:
TypeError: Group instance expected, got <Group: manager>
Whenever I try to replicate this in the Django shell, it works without any problem. It only fails while running the migration. Any ideas?
Things I've tried and other information
I've tried both populating the m2m relation through the User related manager and the Group related manager, that is, user.groups.add(group) and group.user_set.add(user). Both give me a similar error.
Just partially related, but so that I have the permissions needed, I have this first in my migration:
from django.contrib.auth.management import create_permissions

for app_config in apps.get_app_configs():
    app_config.models_module = True
    create_permissions(app_config, verbosity=0)
    app_config.models_module = None
The group is supposedly created properly. Given that I create the groups and the users in different helper functions, I actually grab the group with Group.objects.get(name="manager"), and when printing, it shows the correct information.
Turns out that the problem comes up when you use the django.contrib.auth.models.Group model directly. If instead you use apps.get_model("auth.Group"), everything works fine.
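A minimal sketch of the working version inside the RunPython function (the function name and username are illustrative):

def create_groups(apps, schema_editor):
    # use the historical models, not django.contrib.auth.models directly
    Group = apps.get_model('auth', 'Group')
    User = apps.get_model('auth', 'User')

    group = Group.objects.create(name='manager')
    user = User.objects.create(username='some-manager')
    user.groups.add(group)  # both sides are now historical models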
Running Django 3.2, I use dumpdata -o db.json -a to export multiple databases to .json.
Looking into dumpdata.py, it retrieves all objects from a model by calling
queryset = objects.using(using).order_by(model._meta.pk.name)
https://github.com/django/django/blob/main/django/core/management/commands/dumpdata.py, line 185
My problem is that in my case, using is set to 'default' by default, even though I use the --all parameter. Later, when calling objects.using(using), it tries to retrieve all objects from the default database, even though it's supposed to be 'MIFIR'. What did I do wrong? Have I misconfigured something in my database? I set the app_label in _meta and added my app_label to dbrouter.py, and I can see it resolving the database name correctly.
It seems that you can use --database to specify a database.
Also keep in mind that to add a new database to a Django project you need to create a DB router for that database. Not sure, but that might be the problem...
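For example, assuming the 'MIFIR' database alias from the question:

python manage.py dumpdata --all --database MIFIR -o db.json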
I have a Django application and a migration file 002_auto.py, which uses Django's RunPython to alter the database. I have no idea how many migration files will be created in the future, but I want the file 002_auto.py to be applied as the last part of the migration process.
How can I make that migration execute last when performing Django migrations, without manual steps each time I want to run the migrate command (or altering the dependencies variable each time I add new migrations)?
P.S. I've looked into the Django migrations documentation and other articles without success.
You can subclass the migrate command and run your code after the super() call:
# project/myapp/management/commands/custom_migrate.py
from django.core.management.commands.migrate import Command as MigrateCommand

class Command(MigrateCommand):
    def handle(self, *args, **options):
        super().handle(*args, **options)
        # put your code from 002_auto.py here
This command should be added to an app that is in your INSTALLED_APPS. Then you can call it like this:
python manage.py custom_migrate
Read more about custom commands: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/
Probably you can use the post_migrate signal in one of your apps, and put the code you call in 002_auto.py into the signal handler.
https://docs.djangoproject.com/en/1.10/ref/signals/#post-migrate
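A rough sketch of that approach, wiring the handler up in an AppConfig (app and function names are illustrative):

from django.apps import AppConfig
from django.db.models.signals import post_migrate

def run_final_step(sender, **kwargs):
    # put the code from 002_auto.py here; it runs after every migrate
    pass

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        post_migrate.connect(run_final_step, sender=self)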
If a migration needs to be applied in a certain order, then that isn't ever going to work (regardless of whatever hacky solution you try out).
Chances are at some point in the future you're going to introduce a change that would mean having this last wouldn't make any sense.
Your best option is to make a custom management command and/or bash script and then use that to migrate and alter your database.
Currently I'm using the default admin portal which is working fine. Then inside models.py I try to add a field as follows:
class MyModel(models.Model):
    # new field is 'info'
    info = models.CharField(max_length=100)
MyModel has already been successfully defined and used; in the code above I simply wish to add a single field. I rerun the sync:
python manage.py syncdb
Creating tables ...
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
However, Django then throws an error when I try to use the interface:
column myproject_mymodel.info does not exist
What am I doing wrong?
manage.py syncdb will only create tables that do not exist; it will not add or remove columns, or modify existing columns.
I suggest reading through the following chapter of the Django Book (which is free online):
Chapter 10: Advanced Models
Here are the steps given in that chapter for adding fields:
First, take these steps in the development environment (i.e., not on the production server):
1. Add the field to your model.
2. Run manage.py sqlall [yourapp] to see the new CREATE TABLE statement for the model. Note the column definition for the new field.
3. Start your database's interactive shell (e.g., psql or mysql, or you can use manage.py dbshell). Execute an ALTER TABLE statement that adds your new column.
4. Launch the Python interactive shell with manage.py shell and verify that the new field was added properly by importing the model and selecting from the table (e.g., MyModel.objects.all()[:5]). If you updated the database correctly, the statement should work without errors.
Then on the production server perform these steps:
1. Start your database's interactive shell.
2. Execute the ALTER TABLE statement you used in step 3 of the development environment steps.
3. Add the field to your model. If you're using source-code revision control and you checked in your change in development environment step 1, now is the time to update the code (e.g., svn update, with Subversion) on the production server.
4. Restart the Web server for the code changes to take effect.
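As a sketch of the ALTER TABLE step for the info field above, executed through Django's database connection (this assumes PostgreSQL-style DDL and the default table name myproject_mymodel from the error message):

from django.db import connection

# varchar(100) matches models.CharField(max_length=100); adjust for your database
cursor = connection.cursor()
cursor.execute("ALTER TABLE myproject_mymodel ADD COLUMN info varchar(100)")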
Is it possible to selectively filter which records Django's dumpdata management command outputs? I have a few models, each with millions of rows, and I only want to dump records in one model fitting a specific criteria, as well as all foreign-key linked records referencing any of those records.
Consider this use-case. Say I had a production database where my User model has millions of records. I have several other models (Log, Transaction, Purchase, Bookmarks, etc) all referencing the User model. I want to do development on my Django app, and I want to test using realistic data. However, my production database is so enormous, I can't realistically take a snapshot of the entire thing and load it locally. So ideally, I'd want to use dumpdata to dump 50 random User records, and all related records to JSON, and use that to populate a development database.
Is there an easy way to accomplish this?
I think django-fixture-magic might be worth a look.
You'll find some additional background info in Scrubbing your Django database.
This snippet might be helpful for you (it follows relationships and serializes them):
http://djangosnippets.org/snippets/918/
You could also use that management command and override the default managers to return custom querysets for whichever models you like.
This isn't a simple answer to my question, but I found some interesting docs on Django's built-in natural keys feature, which would allow representing serialized records without the primary key. Unfortunately, it doesn't look like this is fully integrated into dumpdata, and there's an old outstanding ticket to fully rely on natural keys.
It also seems the serializers.serialize() function allows serialization of an arbitrary list of specific model instances.
Presumably, if I implemented a natural_key() method on all my models, and then called serializers.serialize('json', Users.objects.filter(criteria)), it should come close to accomplishing what I want. I might have to write a function to crawl all the FK references, and include those in the list of objects passed to serialize().
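A rough sketch of that idea, using the models mentioned in the use-case above (the filter criteria, the related models crawled, and the sample size are placeholders):

from django.core import serializers

users = list(User.objects.order_by('?')[:50])  # 50 random users
related = list(Log.objects.filter(user__in=users))
related += list(Transaction.objects.filter(user__in=users))
# ...crawl the remaining FK references (Purchase, Bookmarks, etc.)...

data = serializers.serialize(
    'json',
    users + related,
    use_natural_foreign_keys=True,
    use_natural_primary_keys=True,
)
with open('sample.json', 'w') as f:
    f.write(data)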
This is a very old question, but I recently wrote a custom management command to do just that. It looks very similar to the existing dumpdata command except that it takes some extra arguments to define how I want to filter the querysets and it overrides the get_objects function to perform the actual filtering:
from itertools import chain

def get_objects(options):
    # build one filtered queryset per model, then chain them together
    qs_1 = ModelClass1.objects.filter(**options["filter_options_for_model_class_1"])
    qs_2 = ModelClass2.objects.filter(**options["filter_options_for_model_class_2"])
    # ...repeat for as many different model classes you want to dump...
    yield from chain(qs_1, qs_2)
I had the same problem, but I didn't want to add another package, the snippet still didn't let me filter my data, and I just wanted a temporary solution.
So I thought to myself: why not override the default manager, apply my filter there, take the dump, and then revert my code? This is of course hacky and dangerous, but in my case it made sense.
Yes, I had to edit the code in vim on the live server, but you don't need to reload the server: a command run through manage.py uses your current code base, so from the end-user perspective the server basically remained untouched.
from django.db import models
from django.db.models import Manager

class DahlBookManager(Manager):
    def get_queryset(self):
        return super().get_queryset().filter(is_edited=False)

class FriendshipQuestion(models.Model):
    # ...existing fields...
    objects = DahlBookManager()
and then running the dumpdata command did exactly what I needed, which was returning all the unedited questions in my case.
Then I ran git checkout mymodelfile.py to revert it back to the original.
This is by no means a good solution, but it will get somebody either fired or unstuck.
As of Django 3.2, you can use dumpdata to dump a specific app and/or model. For example, for an app named customer:
python manage.py dumpdata customer
or, to dump a model named shoppingcart within the customer app:
python manage.py dumpdata customer.shoppingcart
There are many options with dumpdata, including writing to several output file formats and handling custom managers on models. For example:
python manage.py dumpdata customer --all --indent 4 --output my_fixtures.json
The options:
--all: dumps the records even if you use a custom manager on the model
--indent: the amount to indent when writing to a file
--output: sends output to a file instead of stdout. The default format is JSON.
See the docs at:
https://docs.djangoproject.com/en/3.2/ref/django-admin/#dumpdata