Every model comes with some commonly used methods, such as save and delete.
delete is often overridden to set a boolean field such as is_active to False, so that data is not lost. But sometimes a model holds information that, once created, should always exist and never even be "inactive". I was wondering what the best practice for handling this model's delete method would be?
Ideas:
Make it simply useless:
def delete(self):
    return False
but that just seems odd. Is there maybe a Meta option to disable deleting? Is there any "nice" way to do this?
Well, it depends: you cannot truly restrict deletion, because somebody can always call delete() on a queryset or just run a plain DELETE SQL statement. If you want to disable the delete button in the Django admin, though, you should look here.
delete() on a queryset can be restricted with this:
class NoDeleteQuerySet(models.QuerySet):
    def delete(self, *args, **kwargs):
        pass

class MyModel(models.Model):
    objects = NoDeleteQuerySet.as_manager()
    ...
Django docs - link
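For the admin side, disabling the delete button usually comes down to overriding has_delete_permission on the ModelAdmin. A minimal sketch for the MyModel above (MyModelAdmin is just an illustrative name):
from django.contrib import admin

class MyModelAdmin(admin.ModelAdmin):
    def has_delete_permission(self, request, obj=None):
        # removes the delete button and the bulk "delete selected" action
        return False

admin.site.register(MyModel, MyModelAdmin)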
Related
I have a legacy project that saves models with save, bulk_create and other methods within the framework.
What is the best way to set a specific value for an attribute so that every time a record is saved the new value is also saved? This value is constructed based on other attributes of the instance that is being saved.
I pose this question because I'm not sure of all the ways a model can be saved in Django besides save() and bulk_create(), and I know that for bulk_create():
The model’s save() method will not be called, and the pre_save and post_save signals will not be sent.
https://docs.djangoproject.com/en/1.8/ref/models/querysets/#bulk-create
As far as I know, there are 3 ways to create/update model instances (which are records in database tables):
Using the model instance method save().
Using the queryset methods create(), update(), get_or_create(), update_or_create() and bulk_create().
Using raw SQL or other low-level ways.
If you intend to calculate the value of a field when saving, you could override all of the methods I listed above.
Signals (like pre_save) are not a complete solution, because they don't get triggered when bulk_create() is used, and so some instances could get saved without the calculated attribute.
There is no Django way (that I know of) to intercept the third option I mentioned (raw SQL).
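A sketch of covering the first two ways, via a save() override plus a custom QuerySet; calculated_field and compute_value() are placeholder names, and note that update() would still bypass both paths and would need a similar override:
from django.db import models

class MyQuerySet(models.QuerySet):
    def bulk_create(self, objs, **kwargs):
        # bulk_create() skips save() and signals, so set the value here
        for obj in objs:
            obj.calculated_field = obj.compute_value()
        return super(MyQuerySet, self).bulk_create(objs, **kwargs)

class MyModel(models.Model):
    calculated_field = models.CharField(max_length=100, blank=True)

    objects = MyQuerySet.as_manager()

    def compute_value(self):
        # placeholder: derive the value from other attributes
        return 'computed'

    def save(self, *args, **kwargs):
        self.calculated_field = self.compute_value()
        super(MyModel, self).save(*args, **kwargs)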
You did not elaborate on your use case, but (depending on your table size and change frequency) maybe you could also try:
run a periodic process (maybe using crontab) that updates the calculated field of all model instances.
add a database trigger that calculates the field.
Legacy databases or systems are usually not fun to work with, so maybe you will have to settle for a sub-optimal solution.
You can also generate the value automatically when the instance is saved. For example, you have a Post model that also has a slug field, and you want the slug to be auto-generated from the name field. Note that a field's default callable is called with no arguments, so it cannot read other fields of the instance; generating the slug in save() works instead:
from django.utils.text import slugify

class Post(models.Model):
    name = models.CharField(max_length=160)
    description = models.TextField()
    attachment = models.FileField()
    slug = models.CharField(max_length=160, blank=True)

    def save(self, *args, **kwargs):
        # fill in the slug from name if it has not been set yet
        if not self.slug:
            self.slug = slugify(self.name)
        super(Post, self).save(*args, **kwargs)
This way, when you create a new post, the slug field is auto-generated from the name field.
Another way to do that is to create a layer between your callers and the models (the database layer), so you can add your logic there. This narrows the possibilities down to just the methods you expose in that layer and gives you control over everything that talks to the database.
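A minimal sketch of such a layer, reusing the placeholder names from the sketch above; the module name services.py and the function names are purely illustrative:
# services.py -- the only module the rest of the code base uses to write MyModel rows
def create_my_model(**fields):
    obj = MyModel(**fields)
    obj.calculated_field = obj.compute_value()
    obj.save()
    return obj

def bulk_create_my_models(objs):
    for obj in objs:
        obj.calculated_field = obj.compute_value()
    return MyModel.objects.bulk_create(objs)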
The best way to deal with this issue is to override the save() method. You can also use raw SQL queries, which can solve the problem as well:
class MyModel(models.Model):
    field1 = models.CharField(max_length=100)
    field2 = models.CharField(max_length=100)
    field3 = models.CharField(max_length=100)

    def myfunc(self):
        pass

    def save(self, *args, **kwargs):
        # derive field3 from the other attributes before every save
        self.field3 = '%s-%s' % (self.field1, self.field2)
        super(MyModel, self).save(*args, **kwargs)
When the user creates a product, multiple actions have to be done in the save() method before calling super(Product, self).save(*args, **kwargs).
I'm not sure if I should use just one pre_save signal to do all these actions or it is better to create a signal for each of these actions separately.
Simple example (I'm going to replace save overrides by signals):
class Product(models.Model):
    def save(self, *args, **kwargs):
        if not self.pk:
            if not self.category:
                self.category = Category.get_default()
            if not self.brand:
                self.brand = 'NA'
        super(Product, self).save(*args, **kwargs)
...
So:
@receiver(pre_save, sender=Product)
def set_attrs(sender, instance, **kwargs):
    if instance.pk is None:  # pre_save receives no 'created' flag, so check the pk
        instance.category = Category.get_default()
        instance.brand = 'NA'
OR
@receiver(pre_save, sender=Product)
def set_category(sender, instance, **kwargs):
    if instance.pk is None:
        instance.category = Category.get_default()

@receiver(pre_save, sender=Product)
def set_brand(sender, instance, **kwargs):
    if instance.pk is None:
        instance.brand = 'NA'
This is just a simple example. In this case the general set_attrs would probably be enough, but there are more complex situations with different actions, like creating a userprofile for a user and then a userplan, etc.
Is there some best-practice advice for this? Your opinions?
To put it simply, it comes down to a single piece of advice:
If an action on one model's instance affects another model, signals are the cleanest way to go about it. This is an example where you should go with a signal, because you might want to avoid a some_model.save() call from within the save() method of another_model, if you know what I mean.
To elaborate on an example: when overriding save() methods, a common task is to create slugs from some fields in the model. If you are required to implement this process on multiple models, using a pre_save signal is a benefit, rather than hard-coding it in each model's save() method.
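A sketch of that reuse; Article and Product here are assumed models that each have name and slug fields:
from django.db.models.signals import pre_save
from django.dispatch import receiver
from django.utils.text import slugify

@receiver(pre_save, sender=Article)
@receiver(pre_save, sender=Product)
def set_slug(sender, instance, **kwargs):
    # one shared handler instead of a save() override on every model
    if not instance.slug:
        instance.slug = slugify(instance.name)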
Also, on bulk operations, these signals and methods are not necessarily called.
From the docs,
Overridden model methods are not called on bulk operations
Note that the delete() method for an object is not necessarily called when deleting objects in bulk using a QuerySet or as a result of a cascading delete. To ensure customized delete logic gets executed, you can use pre_delete and/or post_delete signals.
Unfortunately, there isn’t a workaround when creating or updating objects in bulk, since none of save(), pre_save, and post_save are called.
For more reference,
Django override save() or signals?
Overriding predefined model methods
Django: signal or model method?
I am having an issue with the way Django's class-based forms save a form. I am using a forms.ModelForm for one of my models, which has some many-to-many relationships.
In the model's save method I check the value of some of these relationships to modify other attributes:
class MyModel(models.Model):
    def save(self, *args, **kwargs):
        if self.m2m_relationship.exists():
            self.some_attribute = False
        super(MyModel, self).save(*args, **kwargs)
Even though I populated some data in the m2m relationship in my form, I checked self.m2m_relationship while saving the model and, surprisingly, it was an empty QuerySet. I eventually found out the following:
The form.save() method is called to save the form; it belongs to the BaseModelForm class. This method in turn calls save_instance, a function in forms/models.py. That function defines a local function save_m2m() which saves the many-to-many relationships of a form.
Here's the thing; check out the order save_instance uses when saving an instance and its m2m data:
instance.save()
save_m2m()
Obviously the issue is here. The instance's save method is called first; that's why self.m2m_relationship was an empty QuerySet. It just doesn't exist yet.
What can I do about it? I can't just change the order in the save_instance function because it is part of Django and I might break something else.
Daniel's answer gives the reason for this behaviour, you won't be able to fix it.
But there is the m2m_changed signal that is sent whenever something changes about the m2m relationship, and maybe you can use that:
from django.db.models import signals
from django.dispatch import receiver

@receiver(signals.m2m_changed, sender=MyModel.m2m_relationship.through)
def handle_m2m_changed(sender, instance, action, **kwargs):
    if action == 'post_add':
        # Do your check here
        pass
But note the docs say that instance "can be an instance of the sender, or of the class the ManyToManyField is related to".
I don't know how that works exactly, but you can try out which you get and then adapt the code.
But it would be impossible to do it any other way.
A many-to-many relationship is not a field on the instance, it is an entry in a linking table. There is no possible way to save that relationship before the instance itself exists, as it won't have an ID to enter into that linking table.
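This two-step order is also why the standard commit=False pattern for ModelForms needs an explicit save_m2m() call; a short illustration, assuming a MyModelForm for the model above:
form = MyModelForm(request.POST)
if form.is_valid():
    obj = form.save(commit=False)
    obj.save()       # the row now exists and has a pk
    form.save_m2m()  # only now can rows be written to the linking table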
I am deleting objects in different models with DeleteView in Django.
The problem is that I don't want the objects to be completely deleted, but rather just hidden. First I thought it made sense to keep my views as they are and instead override the delete method in each model as follows:
def delete(self, force=False):
    if force:
        return super(ModelName, self).delete()
    else:
        self.is_deleted = True
        self.save()
but then I noticed that the delete method won't be called in bulk deletion, so this approach would be too risky.
Can someone recommend a good way to do this? I still want to keep the normal behaviour of DeleteView, but it should just 'deactivate' the objects rather than delete them.
The delete method of DeleteView is as follows:
def delete(self, request, *args, **kwargs):
    """
    Calls the delete() method on the fetched object and then
    redirects to the success URL.
    """
    self.object = self.get_object()
    success_url = self.get_success_url()
    self.object.delete()
    return HttpResponseRedirect(success_url)
Will it be sufficient if I replace self.object.delete() with
self.object.is_deleted = True
self.object.save()
When I have marked my objects as deleted, how can I make sure that my querysets won't contain the deleted objects? I could simply replace get_queryset() in my ListView, but they should be left out of every queryset on the page, so I wonder if I would get better results by customizing the model's manager instead.
I've been looking at django-reversion. Could I simply just delete all objects in the normal manner and then use django-reversion if I want to restore them? Are there any disadvantages of this solution?
When I have marked my objects as deleted, how can I make sure that my querysets won't contain the deleted objects?
As the comment states, the Django-only solution is writing a custom Manager that understands your is_deleted field.
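A minimal sketch of such a manager, assuming the is_deleted boolean field from the question:
class NotDeletedManager(models.Manager):
    def get_queryset(self):
        # hide soft-deleted rows from every query made through this manager
        return super(NotDeletedManager, self).get_queryset().filter(is_deleted=False)

class MyModel(models.Model):
    is_deleted = models.BooleanField(default=False)

    objects = NotDeletedManager()   # default manager: hides deleted rows
    all_objects = models.Manager()  # escape hatch: includes deleted rows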
I've been looking at django-reversion. Could I simply just delete all objects in the normal manner and then use django-reversion if I want to restore them?
Yes, as long as you wrap your deletions in revisions. This can be as simple as using the reversion middleware to wrap all deletes and saves:
MIDDLEWARE_CLASSES = (
    'reversion.middleware.RevisionMiddleware',
    # Other middleware goes here...
)
Are there any disadvantages of this solution?
None that I've found; it's well supported, and apart from deletion support it also gives you version tracking.
but then I noticed that the delete method won't be called in bulk deletion, so this approach would be too risky.
You can write your own QuerySet for that and use it with as_manager(). The same QuerySet can take care of hiding your deleted objects from display. Remember to leave some way of retrieving all of your deleted objects.
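A sketch of what that could look like; the names are illustrative, and is_deleted is the flag field from the question:
class SoftDeleteQuerySet(models.QuerySet):
    def delete(self):
        # bulk "deletion" only flags the rows instead of removing them
        return self.update(is_deleted=True)

    def visible(self):
        return self.filter(is_deleted=False)

class MyModel(models.Model):
    is_deleted = models.BooleanField(default=False)

    objects = SoftDeleteQuerySet.as_manager()
    all_objects = models.QuerySet.as_manager()  # still reaches deleted rows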
Preconditions:
I'm new to Python and to Flask-Admin in particular. I created a simple test service which has a MongoDB database storing data with a one-to-one relationship:
employeeName -> salary
The model looks like this:
class Employee(db.Document):
    fullName = db.StringField(max_length=160, unique=True)
    salary = db.IntField()
And I use Flask-Admin to observe the table with the data and to edit it.
When I want to change the 'salary' field, I just press the 'edit' button, change the integer value in Flask-Admin's default edit view, press 'Submit', and the new value is successfully applied in the database.
Question:
But I need to override the Submit action in a way that keeps the existing functionality and adds some custom code. For example, let's assume I want to add a line to the log file after the actual db submit:
logging.warning('The salary of %s was changed to %s', fullName, salary)
Any suggestion on how to achieve that would be much appreciated. Perhaps you could direct me in the way to go, since the Flask-Admin documentation doesn't give me enough help so far.
You can override on_model_change method to add your custom logic. Check http://flask-admin.readthedocs.org/en/latest/api/mod_model/#flask.ext.admin.model.BaseModelView.on_model_change
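A sketch of what that could look like for the Employee model above; the view class name is illustrative, while on_model_change itself is the documented hook:
import logging
from flask_admin.contrib.mongoengine import ModelView

class EmployeeView(ModelView):
    def on_model_change(self, form, model, is_created):
        # called by Flask-Admin after the form validates, around the actual save
        logging.warning('The salary of %s was changed to %s',
                        model.fullName, model.salary)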
I ended up overriding a save method in my Document-derived class.
So now my Employee class contains this kind of code:
def save(self, *args, **kwargs):
    print('whatever I want to do myself is here')
    return super(Employee, self).save(*args, **kwargs)
Today I found that this solution is actually nothing new and is described on StackOverflow.
But for my specific case I think Joes' answer is better. I like it more because, if I override on_model_change, my custom code is invoked only when I edit the database through the admin webpage, while every programmatic operation on the database (like save or update) keeps using the native code, which is exactly what I want. If I override the save method, I will be handling every save operation myself, whether it was initiated by the admin area or programmatically by the server engine.
Solved, thank you!