From the Django Book example, I understand that if I create models as follows:
from xxx import B
class A(models.Model):
    b = models.ManyToManyField(B)
Django will create a new table (A_B) in addition to table A, with three columns:
id
a_id
b_id
But now I want to add a new column to the table A_B. This would be very easy with plain SQL, but can anyone help me do it here? I can't find any useful information about this in the book.
It's very easy with Django too! You can use through to define your own many-to-many intermediary table.
The documentation provides an example addressing your issue:
Extra fields on many-to-many relationships
class Person(models.Model):
    name = models.CharField(max_length=128)

    def __str__(self):
        return self.name

class Group(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(Person, through='Membership')

    def __str__(self):
        return self.name

class Membership(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
    date_joined = models.DateField()
    invite_reason = models.CharField(max_length=64)
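With the through model in place, you create relationships via the intermediary model rather than with .add(); here is the usage example from the same documentation page:
from datetime import date

ringo = Person.objects.create(name="Ringo Starr")
beatles = Group.objects.create(name="The Beatles")
# The extra columns are filled in when creating the intermediary row:
m1 = Membership.objects.create(
    person=ringo,
    group=beatles,
    date_joined=date(1962, 8, 16),
    invite_reason="Needed a new drummer.",
)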
As @dm03514 has answered, it is indeed very easy to add a column to an M2M table by explicitly defining the M2M through model and adding the desired field there.
However, if you would like to add some column to all M2M tables, such an approach wouldn't be sufficient, because it would require defining through models for every ManyToManyField used across the project.
In my case I wanted to add a "created" timestamp column to all M2M tables that Django generates under the hood, without having to define a separate model for every ManyToManyField used in the project. I came up with a neat solution, presented below. Cheers!
Introduction
When Django scans your models at startup, it automatically creates an implicit through model for every ManyToManyField that does not define one explicitly.
class ManyToManyField(RelatedField):
    # (...)
    def contribute_to_class(self, cls, name, **kwargs):
        # (...)
        super().contribute_to_class(cls, name, **kwargs)
        # The intermediate m2m model is not auto created if:
        #  1) There is a manually specified intermediate, or
        #  2) The class owning the m2m field is abstract.
        #  3) The class owning the m2m field has been swapped out.
        if not cls._meta.abstract:
            if self.remote_field.through:
                def resolve_through_model(_, model, field):
                    field.remote_field.through = model
                lazy_related_operation(resolve_through_model, cls, self.remote_field.through, field=self)
            elif not cls._meta.swapped:
                self.remote_field.through = create_many_to_many_intermediary_model(self, cls)
Source: ManyToManyField.contribute_to_class()
For the creation of this implicit model, Django uses the create_many_to_many_intermediary_model() function, which constructs a new class that inherits from models.Model and contains foreign keys to both sides of the M2M relation. Source: django.db.models.fields.related.create_many_to_many_intermediary_model()
In order to add a column to all auto-generated M2M through tables, you will need to monkeypatch this function.
The solution
First you should create the new version of the function that will be used to
patch the original Django function. To do so just copy the code of the function
from Django sources and add the desired fields to the class it returns:
# For example in: <project_root>/lib/monkeypatching/custom_create_m2m_model.py
def create_many_to_many_intermediary_model(field, klass):
    # (...)
    return type(name, (models.Model,), {
        'Meta': meta,
        '__module__': klass.__module__,
        from_: models.ForeignKey(
            klass,
            related_name='%s+' % name,
            db_tablespace=field.db_tablespace,
            db_constraint=field.remote_field.db_constraint,
            on_delete=CASCADE,
        ),
        to: models.ForeignKey(
            to_model,
            related_name='%s+' % name,
            db_tablespace=field.db_tablespace,
            db_constraint=field.remote_field.db_constraint,
            on_delete=CASCADE,
        ),
        # Add your custom-need fields here:
        'created': models.DateTimeField(
            auto_now_add=True,
            verbose_name='Created (UTC)',
        ),
    })
Then you should enclose the patching logic in a separate function:
# For example in: <project_root>/lib/monkeypatching/patches.py
def django_m2m_intermediary_model_monkeypatch():
    """ We monkeypatch the function responsible for the creation of
    intermediary m2m models in order to inject a "created" timestamp there.
    """
    from django.db.models.fields import related
    from lib.monkeypatching.custom_create_m2m_model import create_many_to_many_intermediary_model
    setattr(
        related,
        'create_many_to_many_intermediary_model',
        create_many_to_many_intermediary_model,
    )
Finally, you have to perform the patching before Django kicks in. Put this code in the __init__.py file located next to your Django project's settings.py file:
# <project_root>/<project_name>/__init__.py
from lib.monkeypatching.patches import django_m2m_intermediary_model_monkeypatch
django_m2m_intermediary_model_monkeypatch()
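To confirm the patch took effect, a quick sanity check (a minimal sketch, e.g. run in a Django shell) can compare the module attribute with your function:
from django.db.models.fields import related
from lib.monkeypatching.custom_create_m2m_model import create_many_to_many_intermediary_model

# If the patch ran before Django loaded your models, this holds:
assert related.create_many_to_many_intermediary_model is create_many_to_many_intermediary_model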
A few other things worth mentioning
Remember that this does not affect M2M tables that have already been created in the db, so if you are introducing this solution in a project that already has ManyToManyField fields migrated to the db, you will need to prepare a custom migration that adds your custom columns to the tables created before the monkeypatch. A sample migration is provided below :)
from django.db import migrations, models

def auto_created_m2m_fields(_models):
    """ Retrieves M2M fields from the provided models, but only those that
    have auto-created intermediary models (not user-defined through models).
    """
    for model in _models:
        for field in model._meta.get_fields():
            if (
                isinstance(field, models.ManyToManyField)
                and field.remote_field.through._meta.auto_created
            ):
                yield field
def add_created_to_m2m_tables(apps, schema_editor):
    # Exclude proxy models that don't have separate tables in the db
    selected_models = [
        model for model in apps.get_models()
        if not model._meta.proxy
    ]
    # Select only m2m fields that have auto-created intermediary models,
    # then retrieve the m2m intermediary db tables
    tables = [
        field.remote_field.through._meta.db_table
        for field in auto_created_m2m_fields(selected_models)
    ]
    for table_name in tables:
        schema_editor.execute(
            f'ALTER TABLE {table_name} ADD COLUMN IF NOT EXISTS created '
            'timestamp with time zone NOT NULL DEFAULT now()',
        )
class Migration(migrations.Migration):
    dependencies = []
    operations = [migrations.RunPython(add_created_to_m2m_tables)]
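Note that dependencies is left empty in this sample; in a real project you would point it at your app's latest migration (an entry like ('myapp', '0042_previous'), where both names are hypothetical).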
Remember that the presented solution only affects tables that Django creates automatically for ManyToManyField fields that do not define a through model. If you already have some explicit M2M through models, you will need to add your custom columns there manually.
The patched create_many_to_many_intermediary_model function will also apply to the models of all 3rd-party apps listed in your INSTALLED_APPS setting.
Last but not least, remember that if you upgrade your Django version, the original source code of the patched function may change (!). It's a good idea to set up a simple unit test that will warn you if such a situation occurs. To do so, modify the patching function to save the original Django function:
# For example in: <project_root>/lib/monkeypatching/patches.py
def django_m2m_intermediary_model_monkeypatch():
    """ We monkeypatch the function responsible for the creation of
    intermediary m2m models in order to inject a "created" timestamp there.
    """
    from django.db.models.fields import related
    from lib.monkeypatching.custom_create_m2m_model import create_many_to_many_intermediary_model
    # Save the original Django function for the test
    original_function = related.create_many_to_many_intermediary_model
    setattr(
        create_many_to_many_intermediary_model,
        '_original_django_function',
        original_function,
    )
    # Patch the Django function with our version
    setattr(
        related,
        'create_many_to_many_intermediary_model',
        create_many_to_many_intermediary_model,
    )
Compute the hash of the source code of the original Django function and prepare
a test that checks whether it is still the same as when you patched it:
def _hash_source_code(_obj):
    from inspect import getsourcelines
    from hashlib import md5
    source_code = ''.join(getsourcelines(_obj)[0])
    return md5(source_code.encode()).hexdigest()

def test_original_create_many_to_many_intermediary_model():
    """ This test checks whether the original Django function that has been
    patched has changed. The hash of the function's source code is compared
    against the original hash; if they do not match, the Django version may
    have been upgraded and the patched function may have changed.
    """
    from django.db.models.fields.related import create_many_to_many_intermediary_model
    original_function_md5_hash = '69d8cea3ce9640f64ce7b1df1c0934b8'  # hash obtained before patching (Django 2.0.3)
    original_function = getattr(
        create_many_to_many_intermediary_model,
        '_original_django_function',
        None,
    )
    assert original_function
    assert _hash_source_code(original_function) == original_function_md5_hash
Cheers
I hope someone will find this answer useful :)
Under the hood, Django automatically creates a through model. It is possible to modify this automatic model's foreign key column names.
I could not test the implications on all scenarios; so far it works properly for me.
Using Django 1.8 and onwards' _meta API:
class Person(models.Model):
    pass

class Group(models.Model):
    members = models.ManyToManyField(Person)

Group.members.through._meta.get_field('person').column = 'alt_person_id'
Group.members.through._meta.get_field('group').column = 'alt_group_id'

# Prior to Django 1.8 it can also be done, but it is more hackish:
Group.members.through.person.field.column = 'alt_person_id'
Group.members.through.group.field.column = 'alt_group_id'
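A quick check (a sketch, e.g. in a Django shell) that the overrides are in place:
through = Group.members.through
assert through._meta.get_field('person').column == 'alt_person_id'
assert through._meta.get_field('group').column == 'alt_group_id'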
Like the question author, I needed a custom models.ManyToManyField to add some columns to specific M2M relations.
My answer is based on @Krzysiek's answer, with a small change: I inherit a class from models.ManyToManyField and partially monkeypatch its contribute_to_class method with unittest.mock.patch, so that it uses a custom create_many_to_many_intermediary_model instead of the original one. This way I can control which M2M relations get custom columns, and 3rd-party apps won't be affected, as @Krzysiek mentioned in their answer.
from django.db.models.fields.related import (
    lazy_related_operation,
    resolve_relation,
    make_model_tuple,
    CASCADE,
    _,
)
from unittest.mock import patch

def custom_create_many_to_many_intermediary_model(field, klass):
    from django.db import models

    def set_managed(model, related, through):
        through._meta.managed = model._meta.managed or related._meta.managed

    to_model = resolve_relation(klass, field.remote_field.model)
    name = "%s_%s" % (klass._meta.object_name, field.name)
    lazy_related_operation(set_managed, klass, to_model, name)
    to = make_model_tuple(to_model)[1]
    from_ = klass._meta.model_name
    if to == from_:
        to = "to_%s" % to
        from_ = "from_%s" % from_
    meta = type(
        "Meta",
        (),
        {
            "db_table": field._get_m2m_db_table(klass._meta),
            "auto_created": klass,
            "app_label": klass._meta.app_label,
            "db_tablespace": klass._meta.db_tablespace,
            "unique_together": (from_, to),
            "verbose_name": _("%(from)s-%(to)s relationship")
            % {"from": from_, "to": to},
            "verbose_name_plural": _("%(from)s-%(to)s relationships")
            % {"from": from_, "to": to},
            "apps": field.model._meta.apps,
        },
    )
    # Construct and return the new class.
    return type(
        name,
        (models.Model,),
        {
            "Meta": meta,
            "__module__": klass.__module__,
            from_: models.ForeignKey(
                klass,
                related_name="%s+" % name,
                db_tablespace=field.db_tablespace,
                db_constraint=field.remote_field.db_constraint,
                on_delete=CASCADE,
            ),
            to: models.ForeignKey(
                to_model,
                related_name="%s+" % name,
                db_tablespace=field.db_tablespace,
                db_constraint=field.remote_field.db_constraint,
                on_delete=CASCADE,
            ),
            # custom-need fields here:
            "is_custom_m2m": models.BooleanField(default=False),
        },
    )
from django.db import models

class CustomManyToManyField(models.ManyToManyField):
    def contribute_to_class(self, cls, name, **kwargs):
        ############################################################
        # Inspired by https://stackoverflow.com/a/60421834/9917276 #
        ############################################################
        with patch(
            "django.db.models.fields.related.create_many_to_many_intermediary_model",
            wraps=custom_create_many_to_many_intermediary_model,
        ):
            super().contribute_to_class(cls, name, **kwargs)
Then I use my CustomManyToManyField instead of models.ManyToManyField when I want my M2M table to have custom fields:
class MyModel(models.Model):
    # SomeOtherModel is a placeholder for whatever model you relate to
    my_m2m_field = CustomManyToManyField(SomeOtherModel)
Note that the new custom columns will not be added if the M2M field already exists in the database; you have to add them manually or with a migration script, as @Krzysiek mentioned.
Related
I'm trying to generate UUIDs for some models in a migration. The problem is that the models returned from apps.get_app_config(app_name).get_models() are these __fake__ objects; they are what Django calls historical models, so calling issubclass(fake_model, UUIDModelMixin) returns False when I expect True.
Is there any way to determine which parent classes these historical model objects actually inherit from?
Relevant Django docs:
https://docs.djangoproject.com/en/3.1/topics/migrations/#historical-models
And here is the full function being called in a migration:
from os.path import basename, dirname
import uuid
from common.models.uuid_mixin import UUIDModelMixin
def gen_uuid(apps, schema_editor):
    app_name = basename(dirname(dirname(__file__)))
    models = apps.get_app_config(app_name).get_models()
    uuid_models = [m for m in models if issubclass(m, UUIDModelMixin)]
    for model in uuid_models:
        for row in model.objects.all():
            row.uuid = uuid.uuid4()
            row.save(update_fields=['uuid'])
Here is my UUIDModelMixin code:
import uuid

from django.db import models
from django.template.defaultfilters import truncatechars

class UUIDModelMixin(models.Model):
    """
    `uuid` field will be auto set with uuid4 values
    """
    uuid = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)

    @property
    def short_uuid(self):
        return truncatechars(self.uuid, 8)

    class Meta:
        abstract = True
I don't think checking for a model mixin on the historical model is the right approach, because migrations don't record which abstract base class your model inherits from at any given point in time, and there is no way to trace that back. Suppose your model inherits from MixinX in migration 0001 but you then change it to MixinY. First, no new migration will be generated by makemigrations. Second, you can even delete MixinX entirely, so this should not impact your existing migrations.
Instead, you can stick to checking fields (even your own field types), because the specific type of a field is recorded inside migrations. For example, if we change django.db.models.UUIDField to MyUUIDField, which inherits from it, then makemigrations creates a new migration for that, and the code below is able to find the specific type of the field:
from django.db.models import UUIDField

def gen_uuid(apps, schema_editor):
    app_name = basename(dirname(dirname(__file__)))
    models = apps.get_app_config(app_name).get_models()
    for model in models:
        uuid_fields = []
        for field in model._meta.get_fields():
            if not isinstance(field, UUIDField):  # or custom MyUUIDField
                continue
            uuid_fields.append(field)
        if not uuid_fields:
            continue
        for row in model.objects.all():
            for uuid_field in uuid_fields:
                setattr(row, uuid_field.get_attname(), uuid.uuid4())
            row.save(update_fields=[f.get_attname() for f in uuid_fields])
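For illustration, such a custom field can be a bare subclass (MyUUIDField is a hypothetical name, not code from the question); since the subclass is what gets recorded in migrations, the isinstance check above keeps working on historical models:
from django.db import models

class MyUUIDField(models.UUIDField):
    # No behaviour change needed; subclassing alone makes the type
    # identifiable in migration state.
    pass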
There is the following statement in the Django docs:
In addition, the concrete base classes of the model are stored as pointers, so you must always keep base classes around for as long as there is a migration that contains a reference to them.
It effectively means that if your base model is non-abstract, it will be stored in migrations, and there is probably a way to verify the class hierarchy. However, if the mixin is an abstract base model, it won't be stored. That's why I suggest sticking to checking field types in this case.
To determine the parent classes of the historical models, use the get_model method of the global app registry (django.apps.apps); it returns the actual model class, rather than the historical model object.
from os.path import basename, dirname
import uuid

from django.apps import apps as global_apps

from common.models.uuid_mixin import UUIDModelMixin

def gen_uuid(apps, schema_editor):
    app_name = basename(dirname(dirname(__file__)))
    models = apps.get_app_config(app_name).get_models()
    uuid_models = []
    for model in models:
        # Check the inheritance on the real model class from the global
        # app registry; the migration's `apps` argument yields __fake__
        # historical models, for which issubclass() returns False.
        actual_model = global_apps.get_model(app_label=app_name, model_name=model._meta.model_name)
        if issubclass(actual_model, UUIDModelMixin):
            uuid_models.append(model)
    for model in uuid_models:
        for row in model.objects.all():
            row.uuid = uuid.uuid4()
            row.save(update_fields=['uuid'])
I've got these two models (examples), and when I try to run my tests, it errors out saying: no such table: my_app_modelA. If I scroll up, I can see that it is failing when creating modelB (which I assume is due to the default being applied). Is there a way to order these so that modelA always gets created before modelB? Or should I not be referencing that method as a default attribute? I'm just trying to get my tests working, and this is my sticking point.
My models look like this:
class modelA(models.Model):
    attribute = models.IntegerField()
    active = models.BooleanField(default=False)

    @classmethod
    def get_active_attribute(cls):
        return modelA.objects.get(active=True).attribute

class modelB(models.Model):
    attribute = models.IntegerField(default=modelA.get_active_attribute())
My questions are:
Is it acceptable to have a default call another model's method?
Is there a way to handle the creation of these models that guarantees modelA gets created first, so modelB can successfully be created in my tests?
First of all, migrations are applied in the order defined when the migration file is created.
# 0001_initial.py
...
operations = [
    migrations.CreateModel(
        name='modelA',
        ....
    ),
    migrations.CreateModel(
        name='modelB',
        ....
    ),
]
You can check your migration files and make sure modelA comes before modelB.
Secondly, modelA.get_active_attribute() needs a DB entry to be able to return something. While running migrations you are not inserting data, so you should not declare the default via another model's objects.
You should instead override save() to ensure the default value is based on modelA's attribute:
class modelB(models.Model):
    attribute = models.IntegerField()

    def save(self, *args, **kwargs):
        if self.attribute is None:
            self.attribute = modelA.get_active_attribute()
        super(modelB, self).save(*args, **kwargs)
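A quick sketch of the intended behaviour under these example models:
modelA.objects.create(attribute=42, active=True)
b = modelB()   # attribute not supplied
b.save()       # save() pulls the value from the active modelA
assert b.attribute == 42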
Looking at graphene_django, I see it has a bunch of resolvers that pick up Django model fields and map them to graphene types.
I have a subclass of JSONField I'd also like to be picked up:
# models
class Recipe(models.Model):
    name = models.CharField(max_length=100)
    instructions = models.TextField()
    ingredients = models.ManyToManyField(
        Ingredient, related_name='recipes'
    )
    custom_field = JSONFieldSubclass(....)

# schema
class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe
    custom_field = ???
I know I could write a separate field and resolver pair for a Query, but I'd prefer it to be available as part of the schema for that model.
What I realize I could do:
class RecipeQuery:
    custom_field = graphene.JSONString(id=graphene.ID(required=True))

    def resolve_custom_field(self, info, **kwargs):
        id = kwargs.get('id')
        instance = get_item_by_id(id)
        return instance.custom_field.to_json()
But this means a separate round trip, to get the id and then get the custom_field for that item, right?
Is there a way I could have it seen as part of the RecipeType schema?
Ok, I can get it working by using:
# schema
class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe

    custom_field = graphene.JSONString(
        resolver=lambda my_obj, resolve_obj: my_obj.custom_field.to_json()
    )
(the custom_field has a to_json method)
I figured this out without deeply digging into how graphene maps Django model field types to graphene types.
It's based on this:
https://docs.graphene-python.org/en/latest/types/objecttypes/#resolvers
Same function name, but parameterized differently.
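Equivalently, the method-style resolver from that page can live on the type itself (a sketch assuming the same Recipe model and its to_json method):
class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe

    custom_field = graphene.JSONString()

    # Graphene's default resolver convention: resolve_<field_name>
    def resolve_custom_field(root, info):
        return root.custom_field.to_json()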
I think I have a more or less unorthodox and hackish question for you. What I currently have is a Django project with multiple apps.
I want to use a non-abstract model (ModelA) of one app (app1) in another app (app2) by subclassing it. App1's models should not be migrated to the DB; I just want to use the capabilities of app1 and its model classes, extending their functionality and logic.
I achieved that by adding both apps to settings.INSTALLED_APPS and preventing app1's models from being migrated to the DB:
INSTALLED_APPS += (
    'App1',
    'App2',
)

# This is needed to just use App1's models
# without creating its database tables
# See: http://stackoverflow.com/a/35921487/1230358
MIGRATION_MODULES = {
    'App1': None,
}
So far so good. Ugly and hackish, I know... The remaining problem is that most of app1's models are non-abstract (ModelA), and if I try to subclass them, none of ModelA's fields get created in the DB table named app2_modelb. This is clear to me, because I excluded app1 from migrating to the DB, and therefore the table app1_modela is completely missing.
My idea now was to clone ModelA, preserving all its functionality, and changing its Meta information from non-abstract to abstract (ModelB.Meta.abstract = True). I hoped that this way all the original fields of ModelA would be inherited by ModelB and end up in its respective DB table and columns (app2_modelb).
What I have right now is:
# In app1 -> models.py
class ModelA(models.Model):
    title = models.CharField(_('title'), max_length=255)
    subtitle = models.CharField(_('subtitle'), max_length=255)

    class Meta:
        abstract = False  # just explicitly for demonstration

# In app2 -> models.py
from app1.models import ModelA

class ModelB(ModelA):
    pass
    # Just extending ModelA does not create the title and subtitle
    # fields in app2_modelb, because ModelA.Meta.abstract = False
My current way (pseudo-code) of making an existing non-abstract model abstract looks like this:
# In app2 -> models.py
from app1.models import ModelA

def get_abstract_class(cls):
    o = dict(cls.__dict__)
    o['_meta'].abstract = True
    o['_meta'].app_label = 'app2'
    o['__module__'] = 'app2.models'
    #return type('Abstract{}'.format(cls.__name__), cls.__bases__, o)
    return type('Abstract{}'.format(cls.__name__), (cls,), o)

ModelB = get_abstract_class(ModelA)

class ModelC(ModelB):
    # title and subtitle are inherited from ModelA
    description = models.CharField(_('description'), max_length=255)
This does not work. After this lengthy description, my (simple) question is: is it possible to clone a non-abstract model class, preserving all its functionality, and change it to be abstract, and if so, how?
Just to be clear: all the fuss above is because I can't change any code in app1. It may well be that app1 is a Django app installed via pip.
Why not, in app1:
class AbstractBaseModelA(models.Model):
    # other stuff here

    class Meta:
        abstract = True

class ModelA(AbstractBaseModelA):
    pass  # stuff

and in app2:
class ModelB(AbstractBaseModelA):
    pass  # stuff

Sorry if I've misunderstood your aims, but I think the above should achieve the same end result.
I want to add a few fields to every model in my Django application. This time they are created_at, updated_at and notes. Duplicating the code for each of 20+ models seems dumb, so I decided to use an abstract base class that adds these fields. The problem is that the fields inherited from the abstract base class come first in the admin's field list. Declaring the field order for every ModelAdmin class is not an option; that would be even more duplicated code than declaring the fields manually.
In my final solution, I modified the model constructor to reorder the fields in _meta before creating a new instance:
class MyModel(models.Model):
    # Service fields
    notes = my_fields.NotesField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        abstract = True

    last_fields = ("notes", "created_at", "updated_at")

    def __init__(self, *args, **kwargs):
        new_order = [f.name for f in self._meta.fields]
        for field in self.last_fields:
            new_order.remove(field)
            new_order.append(field)
        self._meta._field_name_cache.sort(key=lambda x: new_order.index(x.name))
        super(MyModel, self).__init__(*args, **kwargs)

class ModelA(MyModel):
    field1 = models.CharField()
    field2 = models.CharField()
    # etc ...
It works as intended, but I'm wondering: is there a better way to achieve my goal?
I was having the very same problem, but I found these solutions to be problematic, so here's what I did:
class BaseAdmin(admin.ModelAdmin):
    def get_fieldsets(self, request, obj=None):
        res = super(BaseAdmin, self).get_fieldsets(request, obj)
        # I only need to move one field; change the following
        # line to account for more.
        res[0][1]['fields'].append(res[0][1]['fields'].pop(0))
        return res
Changing the fieldset in the admin makes more sense to me than changing the fields in the model.
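If more than one field needs to move, a generalized variant of the same idea might look like this (a sketch; the trailing field names are assumptions borrowed from the question):
class BaseAdmin(admin.ModelAdmin):
    # Fields assumed from the question; adjust to your models
    last_fields = ("notes", "created_at", "updated_at")

    def get_fieldsets(self, request, obj=None):
        res = super(BaseAdmin, self).get_fieldsets(request, obj)
        fields = res[0][1]['fields']
        # Keep the declared order, but push last_fields to the end
        res[0][1]['fields'] = (
            [f for f in fields if f not in self.last_fields]
            + [f for f in fields if f in self.last_fields]
        )
        return res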
If you mainly need the ordering in Django's admin, you could also create your "generic" admin class by subclassing Django's admin class. See http://docs.djangoproject.com/en/dev/intro/tutorial02/#customize-the-admin-form for customizing the display of fields in the admin.
You could override the admin's __init__ to set up fields/fieldsets on creation of the admin instance as you wish. E.g. you could do something like:
class MyAdmin(admin.ModelAdmin):
    def __init__(self, model, admin_site):
        general_fields = ['notes', 'created_at', 'updated_at']
        # Use the `model` argument here; self.model is only set once
        # ModelAdmin.__init__ has run.
        fields = [f.name for f in model._meta.fields if f.name not in general_fields]
        self.fields = fields + general_fields
        super(MyAdmin, self).__init__(model, admin_site)
Besides that, I think it's not good practice to modify the (private) _field_name_cache!
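Usage is then the ordinary registration call (ModelA here is just an example name):
admin.site.register(ModelA, MyAdmin)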
I ALSO didn't like the other solutions, so instead I just modified the migration files directly.
Whenever you create a new table in models.py, you have to run "python manage.py makemigrations" (I believe this is Django >= v1.7.5). Once you do this, open the newly created migration file in your_app_path/migrations/ and simply move the rows into the order you want. Then run "python manage.py migrate". Voila! By going into "python manage.py dbshell" you can see that the order of the columns is exactly how you wanted them!
Downside to this method: you have to do it manually for each table you create, but fortunately the overhead is minimal. And it can only be done when you're creating a new table, not when modifying an existing one.
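To illustrate, a hypothetical migration file after manual reordering (the model and field names are made up):
# your_app_path/migrations/0001_initial.py
operations = [
    migrations.CreateModel(
        name='ModelA',
        fields=[
            ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False)),
            # These two tuples were swapped by hand; column order follows suit
            ('field1', models.CharField(max_length=100)),
            ('field2', models.CharField(max_length=100)),
        ],
    ),
]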