I have a parent model which contains a database of unique records, as follows (truncated - there are many more fields):
models.py - parent
class DBPlatform(models.Model):
description = models.CharField(max_length=300, unique=True)
PDS_date = models.DateField()
PDS_version = models.CharField(max_length=50, blank=True)
I use this model as the base for a child model, to save me from copying all of the fields. The child model stores specific user-generated instances of the parent records. They are stored separately because they may be edited by the user:
models.py - child
class Platform(DBPlatform):
    scenario = models.ForeignKey(Scenario,
                                 on_delete=models.CASCADE,
                                 related_name="platforms")
    database_platform = models.ForeignKey(DBPlatform,
                                          on_delete=models.CASCADE,
                                          related_name="instances")
    edited = models.BooleanField()
I am using Django REST Framework to create an API for the eventual app. When a child model is created, I want to update all of its inherited fields with those of the parent model. The incredibly convoluted steps I have taken so far (that do not work) are in the views.py file of the child model, as follows:
api.views.py - child
class PlatformViewSet(viewsets.ModelViewSet):
    lookup_field = "id"
    serializer_class = PlatformSerializer
    permission_classes = [IsAuthenticated]

    def perform_create(self, serializer):
        db_id = self.request.data["database_platform"]
        database_platform = get_object_or_404(DBPlatform, id=db_id)
        datadict = self.request.data.dict()
        datadict.update(database_platform.__dict__)
        query_dict = QueryDict('', mutable=True)
        query_dict.update(datadict)
        self.request.data = query_dict
        serializer.save()
How can I achieve what I am looking to do? I am surely taking the wrong approach, as this can't be an uncommon requirement.
EDIT:
Ruddra's comment has made me consider that the whole design pattern is faulty. Should I just be using a single model and a boolean flag for the "template" instance?
The serializer data is not changed by the operations before serializer.save().
If you want to do it this way, you'll have to edit the serializer or re-serialize the data.
Unless this is something that needs to happen only through the API and only on this endpoint, I'd suggest overriding the model's save method or using a pre_save signal. To make sure this operation is performed only when creating a new instance, you can check whether self (when overriding save) or instance (in the signal) already has an id.
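For illustration, here is a minimal sketch of the save() override suggested above, using the models from the question; the loop over concrete_fields and the skipping of unique fields are assumptions of this sketch, not code from the answer:
class Platform(DBPlatform):
    # ... fields as in the question ...

    def save(self, *args, **kwargs):
        # self.pk is only None before the first save, so later user edits are not overwritten.
        if self.pk is None and self.database_platform_id is not None:
            for field in DBPlatform._meta.concrete_fields:
                # Skip the primary key; unique fields such as description would also need
                # special treatment under this inheritance design.
                if field.primary_key or field.unique:
                    continue
                setattr(self, field.name, getattr(self.database_platform, field.name))
        super().save(*args, **kwargs)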
TL;DR both my model and my form calculate the value of the field number_as_char. Can I avoid the double work, but still check uniqueness when using the model without the form?
I use Python 3 and Django 1.11
My model looks as follows:
class Account(models.Model):
    parent_account = models.ForeignKey(
        to='self',
        on_delete=models.PROTECT,
        null=True,
        blank=True)
    number_suffix = models.PositiveIntegerField()
    number_as_char = models.CharField(
        max_length=100,
        blank=True,
        default='',
        unique=True)

    @classmethod
    def get_number_as_char(cls, parent_account, number_suffix):
        # iterate over all parents
        suffix_list = [str(number_suffix), ]
        parent = parent_account
        while parent is not None:
            suffix_list.insert(0, str(parent.number_suffix))
            parent = parent.parent_account
        return '-'.join(suffix_list)

    def save(self, *args, **kwargs):
        self.number_as_char = self.get_number_as_char(
            self.parent_account, self.number_suffix)
        super().save(*args, **kwargs)
The field number_as_char is not supposed to be set by the user because it is calculated based on the selected parent_account: it is obtained by chaining the values of the field number_suffix of all the parent accounts and the current instance.
Here is an example with three accounts:
ac1 = Account()
ac1.parent_account = None
ac1.number_suffix = 2
ac1.save()
# ac1.number_as_char is '2'
ac2 = Account()
ac2.parent_account = ac1
ac2.number_suffix = 5
ac2.save()
# ac2.number_as_char is '2-5'
ac3 = Account()
ac3.parent_account = ac2
ac3.number_suffix = 1
ac3.save()
# ac3.number_as_char is '2-5-1'
It is NOT an option to drop the field and use a model property instead, because I need to ensure uniqueness and also use that field for sorting querysets with order_by().
My form looks as follows:
class AccountForm(forms.ModelForm):
    class Meta:
        model = Account
        fields = [
            'parent_account', 'number_suffix', 'number_as_char',
        ]
        widgets = {
            'number_as_char': forms.TextInput(attrs={'readonly': True}),
        }

    def clean(self):
        super().clean()
        self.cleaned_data['number_as_char'] = self.instance.get_number_as_char(
            self.cleaned_data['parent_account'], self.cleaned_data['number_suffix'])
I included number_as_char in the form with the widget attribute readonly, and I use the form's clean() method to calculate number_as_char (it has to be calculated before validating uniqueness).
This all works (the model and the form), but after validating the form, the value of number_as_char will be calculated again by the model's save() method. It's not a big problem, but is there a way to avoid this double calculation?
If I remove the calculation from the form's clean() method, then the uniqueness will not be validated with the new value (it will only check the old value).
I don't want to remove the calculation entirely from the model because I use the model in other parts without the form.
Do you have any suggestions what could be done differently to avoid double calculation of the field?
I can't see any way around doing this in two places (save() and clean()), given that you need it to work for non-form-based saves as well.
However, I can offer two efficiency improvements to your get_number_as_char method:
1. Make it a cached_property so that the second time it is accessed, you simply return a cached value and eliminate the double calculation. Obviously you need to be careful that this isn't accessed before an instance is updated, otherwise the old number_as_char will be cached. This should be fine as long as get_number_as_char is only used during a save/clean.
2. Based on the information you've provided above, you shouldn't have to iterate over all the ancestors, but can simply take the number_as_char of the parent and append to it.
The following incorporates both:
@cached_property
def get_number_as_char(self):
    number_as_char = str(self.number_suffix)
    if self.parent_account is not None:
        number_as_char = '{}-{}'.format(self.parent_account.number_as_char, number_as_char)
    return number_as_char
To be sure that the caching doesn't cause problems you could just clear the cached value after you're done saving:
def save(self, *args, **kwargs):
    self.number_as_char = self.get_number_as_char
    super().save(*args, **kwargs)
    # Clear the cache, in case something edits this object again.
    del self.get_number_as_char
I tinkered with it a bit, and I think I found a better way.
By using the disabled property on the number_as_char field of your model form, you can entirely ignore user input (and make the field disabled in a single step).
Your model already calculates the number_as_char attribute in the save method. However, if the unique constraint fails there, your admin UI will throw a 500 error. To avoid that, you can also perform the field calculation in the model's clean() method, leaving the save() method as it is.
So the full example will look similar to this:
The form:
class AccountForm(forms.ModelForm):
    class Meta:
        model = Account
        fields = [
            'parent_account', 'number_suffix', 'number_as_char',
        ]
        widgets = {
            'number_as_char': forms.TextInput(attrs={'disabled': True}),
        }
The model:
class Account(models.Model):
    # ...
    def clean(self):
        self.number_as_char = self.get_number_as_char(
            self.parent_account, self.number_suffix
        )
        super().clean()
That way, anything that generates a form based on your model will throw a nice validation error (provided that it uses the built-in model validation, which is the case for model forms).
The only downside to this is that if you save a model that triggers the validation error, you will see an empty field instead of the value that failed the validation - but I guess there is some nice way to fix this as well - I'll edit my answer if I also find a solution to this.
After reading all the answers and doing some more digging through the docs, I ended up using the following:
@samu suggested using the model's clean() method and @Laurent S suggested using unique_together for (parent_account, number_suffix). Since unique_together alone doesn't work for me because parent_account can be null, I opted for combining the two ideas: checking for existing (parent_account, number_suffix) combinations in the model's clean() method.
As a consequence, I removed number_as_char from the form, and it is now only calculated in the save() method. By the way: thanks to @solarissmoke for suggesting to calculate it based on the first parent only, instead of iterating all the way to the top of the chain.
Another consequence is that I now need to explicitly call the model's full_clean() method to validate uniqueness when using the model without the form (otherwise I get a database IntegrityError), but I can live with that (see the short sketch after the form below).
So, now my model looks like this:
class Account(models.Model):
    parent_account = models.ForeignKey(
        to='self',
        on_delete=models.PROTECT,
        null=True,
        blank=True)
    number_suffix = models.PositiveIntegerField()
    number_as_char = models.CharField(
        max_length=100,
        default='0',
        unique=True)

    def save(self, *args, **kwargs):
        if self.parent_account is not None:
            self.number_as_char = '{}-{}'.format(
                self.parent_account.number_as_char,
                self.number_suffix)
        else:
            self.number_as_char = str(self.number_suffix)
        super().save(*args, **kwargs)

    def clean(self):
        qs = self._meta.model.objects.exclude(pk=self.pk)
        qs = qs.filter(
            parent_account=self.parent_account,
            number_suffix=self.number_suffix)
        if qs.exists():
            raise ValidationError('... some message ...')
And my form ends up like this:
class AccountForm(forms.ModelForm):
    class Meta:
        model = Account
        fields = [
            'parent_account', 'number_suffix',
        ]
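As noted above, here is a small usage sketch of calling full_clean() explicitly when saving without the form (the error handling shown is illustrative, not from the original post):
from django.core.exceptions import ValidationError

account = Account(parent_account=ac1, number_suffix=5)
try:
    account.full_clean()  # runs the clean() above before the database raises an IntegrityError
except ValidationError:
    pass  # handle the duplicate (parent_account, number_suffix) combination
else:
    account.save()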
EDIT
I'll mark my own answer as accepted, because none of the suggestions fully suited my needs.
However, the bounty goes to @samu's answer for pointing me in the right direction with using the clean() method.
Another way - probably not as good though - would be to use Django signals. You could make a pre_save receiver that sets the correct value for the number_as_char field on the instance that's about to get saved.
That way you don't have to do it in the save() method of your model, OR in the clean() method of your ModelForm.
Using signals should ensure that any operation that uses the ORM to manipulate your data (which, by extension, should mean all ModelForms as well) will trigger your signal.
The disadvantage of this approach is that it is not directly clear from the code how this property is generated. One has to stumble upon the signal definition to discover that it's even there. If you can live with that though, I'd go with signals.
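For illustration, a minimal sketch of such a pre_save receiver, reusing the get_number_as_char classmethod from the question (the receiver name and where it is registered are assumptions):
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=Account)
def set_number_as_char(sender, instance, **kwargs):
    # Recompute the value on every save, whether or not a form was involved.
    instance.number_as_char = Account.get_number_as_char(
        instance.parent_account, instance.number_suffix)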
I am using Python 2.7 and Django 1.6.3
I want to define an extra model field which is not actually in the db table. One way is to define a callable method with the property decorator, like:
class MyClass(models.Model):
    my_field = models.CharField(max_length=50)

    @property
    def my_extra_field(self):
        return self.my_field + 'extra value'
This works fine for showing it on admin change list pages. But the extra field does not exist at the db level; it is generated at the programming level, and Django evaluates it for every model object.
This causes me some trouble. All my admin change list pages can be exported as Excel or some other format, and I use the admin queryset to build that report. I also have a Jasper Reports mechanism that works with SQL SELECT queries, so I want to use the queryset to obtain that SELECT query.
I think being able to define extra fields at the db level is important beyond my own use case, so that is what this question is about.
Is there a way in Django to define extra custom fields at the db level instead of at the programming level?
Thank you!
Edited
Adding it to the admin list_filter is another problem if it is not really a field; Django does not allow it.
Could you create a new database field and then override the save method to populate that field? I do that often to create a marked-up version of a text field. For example:
class Dummmy(models.Model):
    content = models.TextField()
    content_html = models.TextField(editable=False, null=True)

    def save(self, *args, **kwargs):
        self.content_html = markdown(self.content)
        super(Dummmy, self).save(*args, **kwargs)
So for you:
class MyClass(models.Model):
    my_field = models.CharField(max_length=50)
    # max_length added here, since CharField requires it
    my_extra_field = models.CharField(max_length=100, editable=False, null=True)

    def save(self, *args, **kwargs):
        self.my_extra_field = self.my_field + 'extra value'
        super(MyClass, self).save(*args, **kwargs)
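Once my_extra_field is a real column, it can also be used for the admin list_filter mentioned in the question's edit; a minimal sketch (MyClassAdmin is a hypothetical admin class, registered with the pre-1.7 compatible syntax):
from django.contrib import admin

class MyClassAdmin(admin.ModelAdmin):
    list_display = ('my_field', 'my_extra_field')
    list_filter = ('my_extra_field',)  # allowed now, because it is a real db field

admin.site.register(MyClass, MyClassAdmin)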
From the example in the Django Book, I understand that if I create models as follows:
from xxx import B

class A(models.Model):
    b = ManyToManyField(B)
then Django will create a new table (A_B) in addition to table A, which has three columns:
id
a_id
b_id
But now I want to add a new column to the table A_B. This would be very easy with plain SQL, but can anyone help me do it in Django? I can't find any useful information about this in the book.
It's very easy using Django too! You can use through to define your own many-to-many intermediary table.
The documentation provides an example addressing your issue:
Extra fields on many-to-many relationships
class Person(models.Model):
    name = models.CharField(max_length=128)

    def __unicode__(self):
        return self.name

class Group(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(Person, through='Membership')

    def __unicode__(self):
        return self.name

class Membership(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
    date_joined = models.DateField()
    invite_reason = models.CharField(max_length=64)
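As a brief usage sketch of the through model above (the values are illustrative): rows in the intermediary table are created explicitly via Membership rather than with group.members.add():
from datetime import date

ringo = Person.objects.create(name="Ringo Starr")
beatles = Group.objects.create(name="The Beatles")
Membership.objects.create(
    person=ringo,
    group=beatles,
    date_joined=date(1962, 8, 16),
    invite_reason="Needed a new drummer.",
)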
As @dm03514 has answered, it is indeed very easy to add a column to an M2M table by explicitly defining the M2M through model and adding the desired field there.
However, if you would like to add some column to all M2M tables, such an approach wouldn't be sufficient, because it would require defining a through model for every ManyToManyField used across the project.
In my case I wanted to add a "created" timestamp column to all M2M tables that Django generates "under the hood", without having to define a separate model for every ManyToManyField field used in the project. I came up with a neat solution, presented below. Cheers!
Introduction
While Django scans your models at startup, it automatically creates an implicit through model for every ManyToManyField that does not define one explicitly.
class ManyToManyField(RelatedField):
    # (...)
    def contribute_to_class(self, cls, name, **kwargs):
        # (...)
        super().contribute_to_class(cls, name, **kwargs)
        # The intermediate m2m model is not auto created if:
        #  1) There is a manually specified intermediate, or
        #  2) The class owning the m2m field is abstract.
        #  3) The class owning the m2m field has been swapped out.
        if not cls._meta.abstract:
            if self.remote_field.through:
                def resolve_through_model(_, model, field):
                    field.remote_field.through = model
                lazy_related_operation(resolve_through_model, cls, self.remote_field.through, field=self)
            elif not cls._meta.swapped:
                self.remote_field.through = create_many_to_many_intermediary_model(self, cls)
Source: ManyToManyField.contribute_to_class()
To create this implicit model, Django uses the create_many_to_many_intermediary_model() function, which constructs a new class that inherits from models.Model and contains foreign keys to both sides of the M2M relation. Source: django.db.models.fields.related.create_many_to_many_intermediary_model()
In order to add a column to all auto-generated M2M through tables, you will need to monkeypatch this function.
The solution
First you should create the new version of the function that will be used to patch the original Django function. To do so, just copy the code of the function from the Django sources and add the desired fields to the class it returns:
# For example in: <project_root>/lib/monkeypatching/custom_create_m2m_model.py
def create_many_to_many_intermediary_model(field, klass):
    # (...)
    return type(name, (models.Model,), {
        'Meta': meta,
        '__module__': klass.__module__,
        from_: models.ForeignKey(
            klass,
            related_name='%s+' % name,
            db_tablespace=field.db_tablespace,
            db_constraint=field.remote_field.db_constraint,
            on_delete=CASCADE,
        ),
        to: models.ForeignKey(
            to_model,
            related_name='%s+' % name,
            db_tablespace=field.db_tablespace,
            db_constraint=field.remote_field.db_constraint,
            on_delete=CASCADE,
        ),
        # Add your custom-need fields here:
        'created': models.DateTimeField(
            auto_now_add=True,
            verbose_name='Created (UTC)',
        ),
    })
Then you should enclose the patching logic in a separate function:
# For example in: <project_root>/lib/monkeypatching/patches.py
def django_m2m_intermediary_model_monkeypatch():
    """ We monkey patch function responsible for creation of intermediary m2m
    models in order to inject there a "created" timestamp.
    """
    from django.db.models.fields import related
    from lib.monkeypatching.custom_create_m2m_model import create_many_to_many_intermediary_model
    setattr(
        related,
        'create_many_to_many_intermediary_model',
        create_many_to_many_intermediary_model
    )
Finally, you have to perform the patching before Django kicks in. Put this code in the __init__.py file located next to your Django project's settings.py file:
# <project_root>/<project_name>/__init__.py
from lib.monkeypatching.patches import django_m2m_intermediary_model_monkeypatch
django_m2m_intermediary_model_monkeypatch()
A few other things worth mentioning
Remember that this does not affect m2m tables that have been created in the
db in the past, so if you are introducing this solution in a project that
already had ManyToManyField fields migrated to db, you will need to prepare a
custom migration that will add your custom columns to the tables which were
created before the monkeypatch. Sample migration provided below :)
from django.db import migrations, models

def auto_created_m2m_fields(_models):
    """ Retrieves M2M fields from provided models but only those that have auto
    created intermediary models (not user-defined through models).
    """
    for model in _models:
        for field in model._meta.get_fields():
            if (
                isinstance(field, models.ManyToManyField)
                and field.remote_field.through._meta.auto_created
            ):
                yield field

def add_created_to_m2m_tables(apps, schema_editor):
    # Exclude proxy models that don't have separate tables in db
    selected_models = [
        model for model in apps.get_models()
        if not model._meta.proxy
    ]
    # Select only m2m fields that have auto created intermediary models and then
    # retrieve m2m intermediary db tables
    tables = [
        field.remote_field.through._meta.db_table
        for field in auto_created_m2m_fields(selected_models)
    ]
    for table_name in tables:
        schema_editor.execute(
            f'ALTER TABLE {table_name} ADD COLUMN IF NOT EXISTS created '
            'timestamp with time zone NOT NULL DEFAULT now()',
        )

class Migration(migrations.Migration):
    dependencies = []
    operations = [migrations.RunPython(add_created_to_m2m_tables)]
Remember that the solution presented only affects the tables that Django
creates automatically for ManyToManyField fields that do not define the
through model. If you already have some explicit m2m through models you will
need to add your custom-need columns there manually.
The patched create_many_to_many_intermediary_model function will also apply to the models of all 3rd-party apps listed in your INSTALLED_APPS setting.
Last but not least, remember that if you upgrade the Django version, the original source code of the patched function may change (!). It's a good idea to set up a simple unit test that will warn you if such a situation happens in the future. To do so, modify the patching function to save the original Django function:
# For example in: <project_root>/lib/monkeypatching/patches.py
def django_m2m_intermediary_model_monkeypatch():
    """ We monkey patch function responsible for creation of intermediary m2m
    models in order to inject there a "created" timestamp.
    """
    from django.db.models.fields import related
    from lib.monkeypatching.custom_create_m2m_model import create_many_to_many_intermediary_model

    # Save the original Django function for test
    original_function = related.create_many_to_many_intermediary_model
    setattr(
        create_many_to_many_intermediary_model,
        '_original_django_function',
        original_function
    )

    # Patch django function with our version of this function
    setattr(
        related,
        'create_many_to_many_intermediary_model',
        create_many_to_many_intermediary_model
    )
Compute the hash of the source code of the original Django function and prepare
a test that checks whether it is still the same as when you patched it:
def _hash_source_code(_obj):
    from inspect import getsourcelines
    from hashlib import md5
    source_code = ''.join(getsourcelines(_obj)[0])
    return md5(source_code.encode()).hexdigest()

def test_original_create_many_to_many_intermediary_model():
    """ This test checks whether the original Django function that has been
    patched did not change. The hash of the function source code is compared
    with the original hash, and if it does not match, that means that the Django
    version could have been upgraded and the patched function could have changed.
    """
    from django.db.models.fields.related import create_many_to_many_intermediary_model
    original_function_md5_hash = '69d8cea3ce9640f64ce7b1df1c0934b8'  # hash obtained before patching (Django 2.0.3)
    original_function = getattr(
        create_many_to_many_intermediary_model,
        '_original_django_function',
        None
    )
    assert original_function
    assert _hash_source_code(original_function) == original_function_md5_hash
Cheers
I hope someone will find this answer useful :)
Under the hood, Django automatically creates a through model. It is possible to modify this automatic model's foreign key column names.
I could not test the implications in all scenarios, but so far it works properly for me.
Using the _meta API of Django 1.8 and onwards:
class Person(models.Model):
    pass

class Group(models.Model):
    members = models.ManyToManyField(Person)

Group.members.through._meta.get_field('person').column = 'alt_person_id'
Group.members.through._meta.get_field('group').column = 'alt_group_id'

# Prior to Django 1.8 _meta can also be used, but is more hackish than this
Group.members.through.person.field.column = 'alt_person_id'
Group.members.through.group.field.column = 'alt_group_id'
Just like the question, I needed a custom models.ManyToManyField to add some columns to specific M2M relations.
My answer is based on @Krzysiek's answer with a small change: I inherit a class from models.ManyToManyField and partially monkeypatch its contribute_to_class method with unittest.mock.patch, so that it uses a custom create_many_to_many_intermediary_model instead of the original one. This way I can control which M2M relations get custom columns, and 3rd-party apps won't be affected, as @Krzysiek mentioned in his answer.
from django.db import models
from django.db.models.fields.related import (
    lazy_related_operation,
    resolve_relation,
    make_model_tuple,
    CASCADE,
    _,
)
from unittest.mock import patch


def custom_create_many_to_many_intermediary_model(field, klass):
    from django.db import models

    def set_managed(model, related, through):
        through._meta.managed = model._meta.managed or related._meta.managed

    to_model = resolve_relation(klass, field.remote_field.model)
    name = "%s_%s" % (klass._meta.object_name, field.name)
    lazy_related_operation(set_managed, klass, to_model, name)
    to = make_model_tuple(to_model)[1]
    from_ = klass._meta.model_name
    if to == from_:
        to = "to_%s" % to
        from_ = "from_%s" % from_
    meta = type(
        "Meta",
        (),
        {
            "db_table": field._get_m2m_db_table(klass._meta),
            "auto_created": klass,
            "app_label": klass._meta.app_label,
            "db_tablespace": klass._meta.db_tablespace,
            "unique_together": (from_, to),
            "verbose_name": _("%(from)s-%(to)s relationship")
            % {"from": from_, "to": to},
            "verbose_name_plural": _("%(from)s-%(to)s relationships")
            % {"from": from_, "to": to},
            "apps": field.model._meta.apps,
        },
    )
    # Construct and return the new class.
    return type(
        name,
        (models.Model,),
        {
            "Meta": meta,
            "__module__": klass.__module__,
            from_: models.ForeignKey(
                klass,
                related_name="%s+" % name,
                db_tablespace=field.db_tablespace,
                db_constraint=field.remote_field.db_constraint,
                on_delete=CASCADE,
            ),
            to: models.ForeignKey(
                to_model,
                related_name="%s+" % name,
                db_tablespace=field.db_tablespace,
                db_constraint=field.remote_field.db_constraint,
                on_delete=CASCADE,
            ),
            # custom-need fields here:
            "is_custom_m2m": models.BooleanField(default=False),
        },
    )


class CustomManyToManyField(models.ManyToManyField):
    def contribute_to_class(self, cls, name, **kwargs):
        ############################################################
        # Inspired by https://stackoverflow.com/a/60421834/9917276 #
        ############################################################
        with patch(
            "django.db.models.fields.related.create_many_to_many_intermediary_model",
            wraps=custom_create_many_to_many_intermediary_model,
        ):
            super().contribute_to_class(cls, name, **kwargs)
Then I use my CustomManyToManyField instead of models.ManyToManyField when I want my m2m table to have custom fields:
class MyModel(models.Model):
    # SomeOtherModel is a placeholder: ManyToManyField needs a target model
    my_m2m_field = CustomManyToManyField(SomeOtherModel)
Note that the new custom columns may not be added if the m2m field already exists, and you will have to add them manually or with a script in a migration, as @Krzysiek mentioned.
I want to add a few fields to every model in my Django application. This time it's created_at, updated_at and notes. Duplicating the code for each of 20+ models seems dumb, so I decided to use an abstract base class which would add these fields. The problem is that fields inherited from the abstract base class come first in the field list in the admin. Declaring the field order for every ModelAdmin class is not an option; it's even more duplicate code than declaring the fields manually.
In my final solution, I modified the model constructor to reorder the fields in _meta before creating a new instance:
class MyModel(models.Model):
    # Service fields
    notes = my_fields.NotesField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        abstract = True

    last_fields = ("notes", "created_at", "updated_at")

    def __init__(self, *args, **kwargs):
        new_order = [f.name for f in self._meta.fields]
        for field in self.last_fields:
            new_order.remove(field)
            new_order.append(field)
        self._meta._field_name_cache.sort(key=lambda x: new_order.index(x.name))
        super(MyModel, self).__init__(*args, **kwargs)

class ModelA(MyModel):
    field1 = models.CharField()
    field2 = models.CharField()
    # etc ...
It works as intended, but I'm wondering, is there a better way to achieve my goal?
I was having the very same problem, but I found these solutions to be problematic, so here's what I did:
class BaseAdmin(admin.ModelAdmin):
    def get_fieldsets(self, request, obj=None):
        res = super(BaseAdmin, self).get_fieldsets(request, obj)
        # I only need to move one field; change the following
        # line to account for more.
        res[0][1]['fields'].append(res[0][1]['fields'].pop(0))
        return res
Changing the fieldset in the admin makes more sense to me than changing the fields in the model.
If you mainly need the ordering for Django's admin, you could also create your "generic" admin class by subclassing Django's admin class. See http://docs.djangoproject.com/en/dev/intro/tutorial02/#customize-the-admin-form for customizing the display of fields in the admin.
You could override the admin's __init__ to set up fields/fieldsets on creation of the admin instance as you wish. E.g. you could do something like:
class MyAdmin(admin.ModelAdmin):
    def __init__(self, model, admin_site):
        general_fields = ['notes', 'created_at', 'updated_at']
        fields = [f.name for f in model._meta.fields if f.name not in general_fields]
        self.fields = fields + general_fields
        super(MyAdmin, self).__init__(model, admin_site)
Besides that, I think it's not good practice to modify the (private) _field_name_cache!
I ALSO didn't like the other solutions, so I instead just modified the migration files directly.
Whenever you create a new table in models.py, you will have to run "python manage.py makemigrations" (I believe this applies to Django >= 1.7.5). Once you do this, open up the newly created migration file in the your_app_path/migrations/ directory and simply move the rows into the order you want them to be in. Then run "python manage.py migrate". Voila! By going into "python manage.py dbshell" you can see that the order of the columns is exactly how you wanted them!
The downside to this method: you have to do this manually for each table you create, but fortunately the overhead is minimal. Also, this can only be done when you're creating a new table, not when modifying an existing one.
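For illustration, here is what a hypothetical migration could look like after manually reordering the field tuples so that the inherited service fields end up last (the app, model and field definitions are made up, with a plain TextField standing in for NotesField):
# your_app_path/migrations/0001_initial.py
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = []
    operations = [
        migrations.CreateModel(
            name='ModelA',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                # the model's own fields, moved up by hand...
                ('field1', models.CharField(max_length=100)),
                ('field2', models.CharField(max_length=100)),
                # ...the inherited service fields, moved down by hand
                ('notes', models.TextField(blank=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
    ]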