My example is very contrived, but hopefully it gets the point across.
Say I have two models like this:
from django.db import models

class Group(models.Model):
    name = models.CharField(max_length=50)

class Member(models.Model):
    name = models.CharField(max_length=50)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
I want to add some code so that when a Member is deleted it gets recreated as a new entry (remember, very contrived!). So I do this:
from django.db.models.signals import post_delete
from django.dispatch import receiver

@receiver(post_delete, sender=Member)
def member_delete(sender, instance, **kwargs):
    instance.pk = None  # clearing the pk makes save() insert a new row
    instance.save()
This works perfectly well when a Member is deleted.
The issue, though, is if a Group is deleted this same handler is called. The Member is re-created with a reference to the Group and an IntegrityError is thrown when the final commit occurs.
Is there any way within the signal handler to determine that Group is being deleted?
What I've tried:
The sender seems to always be Member regardless.
I can't seem to find anything on instance.group to indicate a delete. Even Group.objects.filter(id=instance.group_id).exists() returns True. It may be that the actual delete of the parent occurs after the post_delete calls on the children, in which case what I'm trying to do is impossible.
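Note for later readers: on Django 4.1+ the delete signals also receive an origin argument (the model instance or queryset that delete() was originally called on, propagated to cascaded deletions), which allows exactly this check. A minimal sketch, assuming Django >= 4.1:

from django.db.models import QuerySet

@receiver(post_delete, sender=Member)
def member_delete(sender, instance, origin=None, **kwargs):
    # Only recreate the Member when the delete actually started on a Member
    started_on_member = isinstance(origin, Member) or (
        isinstance(origin, QuerySet) and origin.model is Member)
    if not started_on_member:
        return  # e.g. a cascade from a Group delete
    instance.pk = None
    instance.save()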
Try doing the job with a classmethod inside the Member class and forget about signals:
@classmethod
def reinit(cls, instance):
    instance.delete()  # delete() resets instance.pk to None
    instance.save()    # so save() inserts a fresh row
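With that, the recreation is an explicit call instead of a signal, so a cascading Group delete never triggers it. A hypothetical usage:

member = Member.objects.get(name="Alice")  # hypothetical lookup
Member.reinit(member)  # deletes the row, then saves it back as a new one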
In Django I'd like to add a field "verified" of type BooleanField to my models, indicating whether the current model instance has been reviewed by a staff user or not. Whenever any field other than "verified" changes, the verified value shall be reset to False. Whenever only the "verified" field has been changed, its value shall be taken as is (most of the time True, but potentially False as well).
One possibility would be to reset the "verified" field in post_save signal handlers, considering the update_fields passed to save(). However, using signals seems to be considered an anti-pattern in almost all use cases; instead one should override the save() method. But when overriding save() I'd still have to determine update_fields manually somehow, otherwise I have no information about which fields changed.
How can I implement something like this most easily? I'd prefer a solution using a third-party package without custom hacks, or a solution without any dependencies on other packages. However, using django-model-utils' MonitorField or django-dirtyfields for a custom implementation, or something equivalent, would be OK as well.
Using django-dirtyfields seems to be the easiest way to implement a verified field. So far I came up with something like the following:
DJANGO-APP/models.py:
from django.db import models
from dirtyfields import DirtyFieldsMixin

class VerifiedModel(DirtyFieldsMixin, models.Model):
    """
    Abstract base class which extends inheriting models with a
    user-verification field.
    """
    ENABLE_M2M_CHECK = True

    verified = models.BooleanField(
        default=False,
        help_text=(
            "The verification status. True means the meta-data is verified, "
            "False means it is not. The status is reset to False whenever "
            "a field other than 'verified' is changed."
        ),
    )

    def _update_verified_field(self):
        """
        To be called in the inheriting model's save() method.
        """
        if self.is_dirty():
            if 'verified' not in self.get_dirty_fields():
                self.verified = False

    class Meta:
        abstract = True

class ModelToBeVerified(VerifiedModel):
    ...

    def save(self, *args, **kwargs):
        ...
        self._update_verified_field()
        return super().save(*args, **kwargs)
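A quick sketch of the intended behaviour, assuming a hypothetical dirty-tracked field some_field on ModelToBeVerified:

obj = ModelToBeVerified.objects.create(some_field="draft")  # some_field is hypothetical
obj.verified = True
obj.save()               # only 'verified' is dirty, so its value is kept
obj.some_field = "final"
obj.save()               # another field changed, so verified is reset
assert obj.verified is False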
I have the following models:
class A(models.Model):
    # references rows in a database not managed by Django
    foriegn_id1 = models.CharField(max_length=255)
    foriegn_id2 = models.CharField(max_length=255)

class B(models.Model):
    a = models.OneToOneField(A, on_delete=models.CASCADE)
So I want A to be deleted as well when B is deleted:
@receiver(post_delete, sender=B)
def post_delete_b(sender, instance, *args, **kwargs):
    if instance.a:
        instance.a.delete()
And on the deletion of A, I want to delete the objects from the unmanaged databases:
@receiver(post_delete, sender=A)
def post_delete_a(sender, instance, *args, **kwargs):
    if instance.foriegn_id1:
        delete_foriegn_obj_1(instance.foriegn_id1)
    if instance.foriegn_id2:
        delete_foriegn_obj_2(instance.foriegn_id2)
Now, if I delete a B object, it works fine. But if I delete an A object, then the B object is deleted by the cascade, and it emits a post_delete signal, which triggers the deletion of A again. Django knows how to manage that on its end, so it works fine until it reaches delete_foriegn_obj, which is then called twice and fails on the second attempt.
I thought about validating that the object exists in delete_foriegn_obj, but it adds 3 more calls to the DB.
So the question is: is there a way to know during post_delete_b that object a has been deleted?
Both instance.a and A.objects.get(id=instance.a.id) return the object (I guess Django defers the DB update until all of the deletions are done).
The problem is that the cascaded deletions are performed before the requested object is deleted, so when you query the DB (A.objects.get(id=instance.a.id)) the related A instance is still present there; instance.a may even show a cached result, so there's no way it would show otherwise.
So while deleting a B model instance, the related A instance will always still exist (if there actually is one). Hence, from the B model's post_delete signal receiver, you can get the related A instance and check whether the related B still exists in the DB (there's no way to avoid hitting the DB here to get the actual picture underneath):
@receiver(post_delete, sender=B)
def post_delete_b(sender, instance, *args, **kwargs):
    try:
        a = instance.a
    except AttributeError:
        return
    try:
        a._state.fields_cache = {}  # drop cached relations (see below)
    except AttributeError:
        pass
    try:
        a.b  # one extra query
    except AttributeError:
        # b no longer exists in the DB: this is a cascaded delete from A
        return
    a.delete()
We also need to make sure we're not getting any cached result by making a._state.fields_cache empty. The fields_cache (which is actually a descriptor that returns a dict upon first access) is used by the ReverseOneToOneDescriptor (accessor to the related object on the opposite side of a one-to-one) to cache the related field name-value. FWIW, the same is done on the forward side of the relationship by the ForwardOneToOneDescriptor accessor.
Edit based on comment:
If you're using this function for multiple senders' post_delete, you can dynamically get the related attribute via getattr:
getattr(a, sender.a.field.related_query_name())
This does the same as a.b above, but lets us look the attribute up dynamically by name; it results in exactly the same query.
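Put together, a sketch of a receiver shared across several senders; model C is hypothetical, standing in for any other model with a OneToOneField named a to A:

from django.db.models.signals import post_delete
from django.dispatch import receiver

@receiver(post_delete, sender=B)
@receiver(post_delete, sender=C)  # hypothetical: another model with a OneToOneField `a`
def post_delete_related(sender, instance, *args, **kwargs):
    try:
        a = instance.a
    except AttributeError:
        return
    try:
        a._state.fields_cache = {}  # drop cached relations
    except AttributeError:
        pass
    try:
        # same check as `a.b`, resolved dynamically for each sender
        getattr(a, sender.a.field.related_query_name())
    except AttributeError:
        return  # cascaded delete: A itself is being deleted
    a.delete()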
I have simplified my code structure down to two models:
# Created by a third-party app, not a Django one,
# but we share the same DB, so I have access to it
class A(models.Model):
    title = models.TextField()
    # other fields ...

    class Meta:
        managed = False

class B(models.Model):
    model_a = models.OneToOneField(A, related_name='+', on_delete=models.CASCADE)
    # other fields, to extend model A's functionality
Is this a good way to extend the third-party app's model A with my additional fields and methods? My problem now is keeping the two models in sync through the one-to-one field, since I have no way to hook into model A's creation.
In an ideal world I'd have CarA and CarB, and the CarB-to-CarA relation should be created as soon as CarA exists.
I based this idea on the Django 1.5 way of extending the User model. Is this clear enough, or should I do something else?
You could use a property to create the B instance on access if it doesn't exist yet, i.e.:
class A(models.Model):
    title = models.TextField()
    # other fields ...

    class Meta:
        managed = False

    @property
    def b(self):
        # Cache the related B on first access to avoid repeated queries
        if not hasattr(self, "_bcache"):
            self._bcache, created = B.objects.get_or_create(model_a=self)
        return self._bcache
It seems like you're new to both Python and Django so let's explain quickly...
First, the "#property" part: it's a decorator that turns the following function into a computed attribute - IOW you use it as an attribute (myA.b.whatever), and under the hood it turns it into a method call (myA.b().whatever). It's not strictly required here, we would have used an explicit getter (the same method named get_a()) but it's cleaner that way.
Then our method implementation: obviously we don't want to hit the database each time someone looks up A.b, so:
first we check whether an attribute named _bcache ("b cache") is set on the current instance;
if not, we call B.objects.get_or_create(model_a=self), which will either retrieve the existing B instance for this A instance or create one if none exists yet, and we store the result as self._bcache so the next call will retrieve it directly from _bcache instead of hitting the database;
and finally we return self._bcache, which is now guaranteed to exist and point to the related B instance.
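For completeness, usage would look like this (pk 1 is just a placeholder):

a = A.objects.get(pk=1)  # a row created by the third-party app
b = a.b                  # first access: one get_or_create query
b = a.b                  # later accesses reuse the cached instance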
I'm using ndb.polymodel.PolyModel to model different types of social media accounts. After an account has been deleted some cleanup has to be done (depending on the subclass).
I tried implementing a _pre_delete_hook on the subclass, but it never gets executed. When adding the exact same hook to its parent class it does get executed:
from google.appengine.ext import ndb
from google.appengine.ext.ndb import polymodel

class Account(polymodel.PolyModel):
    user = ndb.KeyProperty(kind=User)

    @classmethod
    def _pre_delete_hook(cls, key):
        # Works as expected
        pass

class TwitterAccount(Account):
    twitter_profile = ndb.StringProperty()

    @classmethod
    def _pre_delete_hook(cls, key):
        # Never gets called
        pass

t_account = TwitterAccount.get_by_id(1)
t_account.key.delete()
Seems strange to me as I would expect the subclass hook to override its parent. Or is this expected behavior?
Solution:
I missed the fact that a delete operation happens on a Key (not a model instance). The key itself only knows the name of the topmost class (Account in this case).
I ended up defining a custom _delete_hook instance method on the subclass. When _pre_delete_hook gets called, I fetch the entity and then check if its class has a specific delete_hook to execute:
# In the subclass:
def _delete_hook(self):
    # Do stuff specific to this subclass
    return

# In the parent class:
@classmethod
def _pre_delete_hook(cls, key):
    s = key.get()
    if hasattr(s, '_delete_hook'):
        s._delete_hook()
Unfortunately this is expected, though non-obvious, behaviour.
When you call key.delete() with a PolyModel, you are only calling delete on the parent class.
Have a look at the Key you are calling delete on: you will see its Kind is the parent model. That Kind is then looked up in the kind-to-class map, which gives you the Account class. Have a read up on how PolyModel works: it stores all Account subclasses as Account, with an extra property that describes the inheritance hierarchy. That way a query on Account will return all Account subclasses, but a query on TwitterAccount will only return that subclass.
Calling ndb.delete_multi won't work either.
If you want a specific PolyModel subclass's pre-delete hook to run, you will have to add a delete method to the subclass and call that; it can then call the subclass's _pre_delete_hook (and maybe call super).
But that will cause issues if you ever call key.delete() directly.
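A rough sketch of that suggestion (the delete() method name is my own choice, not part of the ndb API):

class TwitterAccount(Account):
    twitter_profile = ndb.StringProperty()

    def delete(self):
        # Explicitly run the subclass hook that key.delete() skips;
        # self._pre_delete_hook resolves to TwitterAccount's classmethod.
        # Code that calls self.key.delete() directly still bypasses this.
        self._pre_delete_hook(self.key)
        self.key.delete()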
I have to update some database tables after saving a particular model, and I've used the @receiver(post_save, ...) decorator for this. But inside this receiver function, the values are still not saved in the database. I have a one-to-many relation, but when I get the current instance being saved via kwargs['instance'], it doesn't have child objects. Yet when I check from the shell after saving, it does have child objects. Here is the code I'm using:
@receiver(post_save, sender=Test)
def do_something(sender, **kwargs):
    test = kwargs['instance']
    users = User.objects.filter(tags__in=test.tags.values_list('id', flat=True))
    for user in users:
        other_model = OtherModel(user=user, test=test, is_new=True)
        other_model.save()
post_save is sent at the end of Model.save_base(), which is itself called by Model.save(). This means that if you override your model's save() method, post_save is sent when you call super(YourModel, self).save(*args, **kwargs).
If Tag has a ForeignKey on Test and the Test instance was just created, you can't expect to have any Tag instance related to your Test instance at this stage, since the Tag instances obviously need to know the Test instance's pk first so they can be saved too.
The post_save for the parent instance is called when the parent instance is saved. If the children are added after that, then they won't exist at the time the parent post_save is called.
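If the parent and its children are saved inside one transaction, a possible workaround (my suggestion, not from the answers above) is to defer the handler's work with transaction.on_commit, so it only runs once the children exist:

from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Test)
def do_something(sender, instance, **kwargs):
    def create_other_models():
        # Runs after COMMIT, by which time the related tags are saved too
        users = User.objects.filter(
            tags__in=instance.tags.values_list('id', flat=True))
        for user in users:
            OtherModel.objects.create(user=user, test=instance, is_new=True)
    transaction.on_commit(create_other_models)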