Django trigger parent model save when editing inline in admin - python

I have a model (Parent) with one-to-many relation to another model (Child). The save method of Parent model is overwritten:
class ParentModel(models.Model):
    # (...)

    def save(self, *args, **kwargs):
        # (...) do something with the model
        super(ParentModel, self).save(*args, **kwargs)

class ChildModel(models.Model):
    parent = models.ForeignKey(ParentModel)
In the admin, multiple ChildModel objects are displayed on the ParentModel page using a StackedInline. If a field of the parent is edited and saved, the save method is called. When only a child's fields are edited, Django does not call the parent's save method (as expected, because nothing changed on the parent).

What is the best way to force saving the parent, even if only a child was edited (so that my overridden method does its stuff)?

You have a few solutions. Here goes, from simpler to more complex:
You could implement a custom save method for ChildModel that calls ParentModel.save.
You could also connect to your ChildModel's post_save or pre_save signal.
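For illustration, a minimal sketch of those first two options, using the model names from the question:

# Option 1: re-save the parent from the child's save().
class ChildModel(models.Model):
    parent = models.ForeignKey(ParentModel)

    def save(self, *args, **kwargs):
        super(ChildModel, self).save(*args, **kwargs)
        self.parent.save()  # runs ParentModel's overridden save()

# Option 2: the same thing from a post_save signal.
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=ChildModel)
def resave_parent(sender, instance, **kwargs):
    instance.parent.save()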
Now, these two solutions will prove annoying if you're going to update a lot of ChildModel instances at once, as you will be calling ParentModel.save several times, maybe without purpose.
You might then want to use the following:
Override your ParentModel's ModelAdmin.change_view to handle your logic; this is pretty tricky however.
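As a sketch of that admin-level route: rather than change_view itself, a related hook is ModelAdmin.save_related, which runs after the inlines are saved (ChildModelInline is a hypothetical StackedInline class):

from django.contrib import admin

class ParentModelAdmin(admin.ModelAdmin):
    inlines = [ChildModelInline]  # hypothetical inline for ChildModel

    def save_related(self, request, form, formsets, change):
        super(ParentModelAdmin, self).save_related(request, form, formsets, change)
        # Re-save the parent once, after all inline children have been saved.
        form.instance.save()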
I'm pretty surprised by the behavior you're encountering, however; from checking the source, the object should be saved anyway, edited or not.

Related

Should I prefer one general signal instead of multiple specific ones?

When the user creates a product, multiple actions have to be done in the save() method before calling super(Product, self).save(*args, **kwargs).
I'm not sure if I should use just one pre_save signal to do all these actions, or whether it is better to create a separate signal for each action.
A simple example (I'm going to replace the save overrides with signals):

class Product(models.Model):
    def save(self, *args, **kwargs):
        if not self.pk:
            if not self.category:
                self.category = Category.get_default()
            if not self.brand:
                self.brand = 'NA'
        super(Product, self).save(*args, **kwargs)
    ...
So:
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=Product)
def set_attrs(sender, instance, **kwargs):
    # pre_save has no 'created' flag; check for an unset pk instead
    if instance.pk is None:
        instance.category = Category.get_default()
        instance.brand = 'NA'
OR
@receiver(pre_save, sender=Product)
def set_category(sender, instance, **kwargs):
    if instance.pk is None:
        instance.category = Category.get_default()

@receiver(pre_save, sender=Product)
def set_brand(sender, instance, **kwargs):
    if instance.pk is None:
        instance.brand = 'NA'
This is just a simple example. In this case, the general set_attrs would probably be enough, but there are more complex situations with different actions, like creating a user profile for a user and then a user plan, etc.
Is there some best-practice advice for this? Your opinions?
To put it simply, it comes down to a single piece of advice:
If an action on one model's instance affects another model, signals are the cleanest way to go about it. This is a case where you can go with a signal, because you might want to avoid a some_model.save() call from within the save() method of another_model, if you know what I mean.
To elaborate with an example: when overriding save() methods, a common task is to create slugs from some fields in the model. If you are required to implement this process on multiple models, then using a pre_save signal would be a benefit, rather than hard-coding it in the save() method of each model.
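As a sketch of that slug case, a single pre_save receiver can be registered for several models at once (Article is a hypothetical second model, and the title/slug field names are assumptions for illustration):

from django.db.models.signals import pre_save
from django.dispatch import receiver
from django.utils.text import slugify

@receiver(pre_save, sender=Product)
@receiver(pre_save, sender=Article)
def set_slug(sender, instance, **kwargs):
    if not instance.slug:
        instance.slug = slugify(instance.title)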
Also, on bulk operations, these signals and methods are not necessarily called.
From the docs,
Overridden model methods are not called on bulk operations
Note that the delete() method for an object is not necessarily called when deleting objects in bulk using a QuerySet or as a result of a cascading delete. To ensure customized delete logic gets executed, you can use pre_delete and/or post_delete signals.
Unfortunately, there isn’t a workaround when creating or updating objects in bulk, since none of save(), pre_save, and post_save are called.
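For illustration, neither of these calls Product.save() or fires the save signals, so the set_attrs() receiver above would never run here (the field values are made up):

Product.objects.filter(brand='NA').update(brand='N/A')                  # bulk update
Product.objects.bulk_create([Product(brand='X'), Product(brand='Y')])   # bulk create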
For more reference,
Django override save() or signals?
Overriding predefined model methods
Django: signal or model method?

Django model form saves m2m after instance

I am having an issue with the way Django class-based forms save a form. I am using a forms.ModelForm for one of my models, which has some many-to-many relationships.
In the model's save method I check the value of some of these relationships to modify other attributes:
class MyModel(models.Model):
    def save(self, *args, **kwargs):
        if self.m2m_relationship.exists():
            self.some_attribute = False
        super(MyModel, self).save(*args, **kwargs)
Even though I populated some data in the m2m relationship in my form, when I checked self.m2m_relationship while saving the model, it was surprisingly an empty QuerySet. I eventually found out the following:
The form.save() method is called to save the form; it belongs to the BaseModelForm class. This method then returns a call to save_instance, a function in forms/models.py. That function defines a local function save_m2m() which saves a form's many-to-many relationships.
Here's the thing; check out the order save_instance uses when saving an instance and its m2m data:
instance.save()
save_m2m()
Obviously the issue is here. The instance's save method is called first, that's why self.m2m_relationship was an empty QuerySet. It just doesn't exist yet.
What can I do about it? I can't just change the order in the save_instance function because it is part of Django and I might break something else.
Daniel's answer gives the reason for this behaviour; you won't be able to fix it.
But there is the m2m_changed signal that is sent whenever something changes about the m2m relationship, and maybe you can use that:
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

@receiver(m2m_changed, sender=MyModel.m2m_relationship.through)
def handle_m2m_changed(sender, instance, action, **kwargs):
    if action == 'post_add':
        pass  # Do your check here
But note the docs say that instance "can be an instance of the sender, or of the class the ManyToManyField is related to".
I don't know how that works exactly, but you can try out which you get and then adapt the code.
But it would be impossible to do it any other way.
A many-to-many relationship is not a field on the instance, it is an entry in a linking table. There is no possible way to save that relationship before the instance itself exists, as it won't have an ID to enter into that linking table.
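Given that ordering constraint, one pragmatic workaround is to simply re-save the instance after the form has saved the m2m rows, so the check in MyModel.save() sees the final state (MyModelForm is a hypothetical ModelForm for MyModel):

def my_view(request):
    form = MyModelForm(request.POST)
    if form.is_valid():
        instance = form.save()  # saves the instance first, then the m2m rows
        instance.save()         # second save: the m2m relationship now exists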

What's the difference between these two ways to override the save() method in a Django ModelForm?

I've come across two methods of doing this. The accepted answer here suggests:
def save(self, *args, **kwargs):
    instance = super(ModelClass, self).save(commit=False)
    instance.my_stuff = manual_value
    instance.save()
    return instance  # a form's save() should return the saved instance
But the following, found here, seems more elegant:
def save(self, *args, **kwargs):
    self.my_stuff = manual_value
    super(ModelClass, self).save(*args, **kwargs)
Is there any reason to choose one over the other, other than the latter being one less line, such as a reason for running the parent save() first?
The two examples are doing different things. The first is overriding the model form's save method; the second is overriding the model's save method.
It only makes sense to override the model's method if you want the value to be set every time the model is saved. If updating the field is related to the form, then overriding the form's save method is the correct thing to do.
In the model form's save method, you have to call save(commit=False) first to get the instance. I wouldn't worry about it being inelegant, it's a very common pattern in Django, and is documented here.
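For reference, the documented commit=False pattern at the view level looks roughly like this (MyModelForm and manual_value stand in for the question's names):

form = MyModelForm(request.POST)
if form.is_valid():
    instance = form.save(commit=False)
    instance.my_stuff = manual_value
    instance.save()
    form.save_m2m()  # needed after commit=False if the form has m2m fields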
The first one will create an instance of the model without saving it; then you can add some value (required or not) and manually trigger save() on that instance.
The second one will set some field on the ModelForm (not on the actual instance of your model) and then create and save the instance of your model.
If, in the second one, you're just setting a value on a form field that corresponds to the model field edited in the first example, they will work in almost the same way.
Why almost? In the second example you have to have that form field inside your form class; in the first example you don't. If that field is required and has been left empty, the second example won't validate.
That being said, the first example can add or change fields in your model that don't appear on the form; the second example can only modify fields that have been specified inside the form class.

Django "emulate" database trigger behavior on bulk insert/update/delete

It's a self-explanatory question, but here we go.
I'm creating a business app in Django, and I didn't want to "spread" all the logic across the app AND the database, but on the other hand, I didn't want to let the database handle this task (it's possible through the use of triggers).
So I wanted to "reproduce" the behavior of database triggers, but inside the model class in Django (I'm currently using Django 1.4).
After some research, I figured out that with single objects I could override the "save" and "delete" methods of the "models.Model" class, inserting "before" and "after" hooks so they are executed before and after the parent's save/delete. Like this:
from django.db.transaction import commit_on_success

class MyModel(models.Model):

    def __before(self):
        pass

    def __after(self):
        pass

    @commit_on_success  # the decorator only ensures that everything occurs inside the same transaction
    def save(self, *args, **kwargs):
        self.__before()
        super(MyModel, self).save(*args, **kwargs)
        self.__after()
The BIG problem is with bulk operations. Django doesn't trigger the models' save/delete when running update()/delete() on a QuerySet. Instead, it uses the QuerySet's own methods. And to make it a little bit worse, it doesn't trigger any signals either.
Edit:
Just to be a little more specific: the model loading inside the view is dynamic, so it's impossible to define a "model-specific" way. In this case, I should create an abstract class and handle it there.
My last attempt was to create a custom manager and, in this custom manager, override the update method, looping over the models inside the queryset and triggering the save() of each model (taking into consideration the implementation above, or the signals system). It works, but results in a database "overload" (imagine a 10k-row queryset being updated).
First, instead of overriding save to add __before and __after methods, you can use the built-in pre_save, post_save, pre_delete, and post_delete signals. https://docs.djangoproject.com/en/1.4/topics/signals/
from django.db.models.signals import post_save

class YourModel(models.Model):
    pass

def after_save_your_model(sender, instance, **kwargs):
    pass

# register the signal
post_save.connect(after_save_your_model, sender=YourModel, dispatch_uid=__file__)
pre_delete and post_delete will get triggered when you call delete() on a queryset.
For bulk updates, however, you'll have to manually call the function you want to trigger yourself. You can throw it all in a transaction as well, as in the sketch below.
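A sketch of that manual approach, reusing the after_save_your_model receiver from above (the helper function name is made up):

from django.db import transaction

@transaction.commit_on_success  # Django 1.4; on 1.6+ use transaction.atomic
def update_and_trigger(queryset, **changes):
    queryset.update(**changes)
    # Manually fire the "after save" logic, once per affected row.
    for instance in queryset:
        after_save_your_model(sender=queryset.model, instance=instance)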
To call the proper trigger function if you're using dynamic models, you can inspect the model's ContentType. For example:
from django.contrib.contenttypes.models import ContentType
from django.db.models import get_model  # available in Django 1.4

def view(request, app, model_name, method):
    # ...
    model = get_model(app, model_name)
    content_type = ContentType.objects.get_for_model(model)
    if content_type == ContentType.objects.get_for_model(YourModel):
        after_save_your_model(model)
    elif content_type == ContentType.objects.get_for_model(AnotherModel):
        another_trigger_function(model)
With a few caveats, you can override the queryset's update method to fire the signals, while still using an SQL UPDATE statement:
from django.db.models.query import QuerySet
from django.db.models.signals import pre_save, post_save
from django.db.transaction import commit_on_success

class CustomQuerySet(QuerySet):

    @commit_on_success
    def update(self, **kwargs):
        for instance in self:
            pre_save.send(sender=instance.__class__, instance=instance, raw=False,
                          using=self.db, update_fields=kwargs.keys())
        # use self instead of self.all() if you want to reload all data
        # from the db for the post_save signal
        result = super(CustomQuerySet, self.all()).update(**kwargs)
        for instance in self:
            post_save.send(sender=instance.__class__, instance=instance, created=False,
                           raw=False, using=self.db, update_fields=kwargs.keys())
        return result

    update.alters_data = True
I clone the current queryset (using self.all()), because the update method will clear the cache of the queryset object.
There are a few issues that may or may not break your code. First of all it will introduce a race condition. You do something in the pre_save signal's receivers based on data that may no longer be accurate when you update the database.
There may also be some serious performance issues with large querysets. Unlike the update method, all models will have to be loaded into memory, and then the signals still need to be executed. Especially if the signals themselves have to interact with the database, performance can be unacceptably slow. And unlike the regular pre_save signal, changing the model instance will not automatically cause the database to be updated, as the model instance is not used to save the new data.
There are probably some more issues that will cause a problem in a few edge cases.
Anyway, if you can handle these issues without having some serious problems, I think this is the best way to do this. It produces as little overhead as possible while still loading the models into memory, which is pretty much required to correctly execute the various signals.

Django - call original and overriden save method

This may be a noobish question, but it bothers me quite a lot (I'm quite new to both Django and Python).
In my Django app, I overrode the save() method of a model to perform some interaction on the file system.
I created a form class like this:
class AddItemForm(ModelForm):
    class Meta:
        model = OriginalModel
So, in my views, when I call form.save(), everything works fine.
But when testing my app, I'd like to be able to call the original save() method, to avoid creating plenty of files that I won't use and would have to take care of.
What I tried is to create a savebis() method, in order to preserve the original save() method, but then how can I pass it to the ModelForm so that I can call form.save() or form.savebis()?
EDIT: savebis() is already written and working in my model. I want to be able to call it from a ModelForm instance, but I don't know how to do this.
From your question, it sounds to me like you've got some optional processing that should occur in the Model's save method. As suggested in the question comments, just add a TESTING = True type constant to your settings.py file used during testing and check this value in the Model save method:
from django.conf import settings

class OriginalModel(models.Model):
    ...
    def save(self, *args, **kwargs):
        if not settings.TESTING:
            # Do some file system interaction, but not during testing.
            pass
        # Now save the instance as per normal.
        return super(OriginalModel, self).save(*args, **kwargs)
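One common way to set such a flag, if you don't want to maintain a separate settings file for tests (this detection trick is an assumption, not something Django provides out of the box):

# settings.py
import sys
TESTING = 'test' in sys.argv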
If you go down the path of overriding the ModelForm's save method, then you have to replicate the existing functionality of the ModelForm save method and change it to call your model's savebis() method instead of save(). I'd advise against going down this path, as it makes the code more complex than it needs to be.
