I have a few model inheritance levels in Django:
class WorkAttachment(models.Model):
    """ Abstract class that holds all fields that are required in each attachment """
    work = models.ForeignKey(Work)
    added = models.DateTimeField(default=datetime.datetime.now)
    views = models.IntegerField(default=0)

    class Meta:
        abstract = True


class WorkAttachmentFileBased(WorkAttachment):
    """ Another base class, but for file based attachments """
    description = models.CharField(max_length=500, blank=True)
    size = models.IntegerField(verbose_name=_('size in bytes'))

    class Meta:
        abstract = True


class WorkAttachmentPicture(WorkAttachmentFileBased):
    """ Picture attached to work """
    image = models.ImageField(upload_to='works/images', width_field='width', height_field='height')
    width = models.IntegerField()
    height = models.IntegerField()
There are many different models inherited from WorkAttachmentFileBased and WorkAttachment. I want to create a signal which updates an attachment_count field on the parent work whenever an attachment is created. It would seem logical that a signal registered for the parent sender (WorkAttachment) would also fire for all inherited models, but it does not. Here is my code:
@receiver(post_save, sender=WorkAttachment, dispatch_uid="att_post_save")
def update_attachment_count_on_save(sender, instance, **kwargs):
    """ Update file count for work when attachment was saved."""
    instance.work.attachment_count += 1
    instance.work.save()
Is there a way to make this signal work for all models inherited from WorkAttachment?
Python 2.7, Django 1.4 pre-alpha
P.S. I've tried one of the solutions I found on the net, but it did not work for me.
You could register the signal handler without specifying a sender, and filter for the needed models inside it.
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def my_handler(sender, **kwargs):
    # Do nothing if 'sender' is NOT a subclass of AbstractModel
    if not issubclass(sender, AbstractModel):
        return
    ...
Ref: https://groups.google.com/d/msg/django-users/E_u9pHIkiI0/YgzA1p8XaSMJ
The simplest solution is to not restrict on the sender, but to check in the signal handler whether the respective instance is a subclass:
@receiver(post_save)
def update_attachment_count_on_save(sender, instance, **kwargs):
    if isinstance(instance, WorkAttachment):
        ...
However, this may incur a significant performance overhead, as the above function is called every time any model is saved.
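For reference, a minimal self-contained sketch of this approach applied to the models from the question (the attachment_count field on Work is an assumption carried over from there; the created flag avoids counting updates twice):

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def update_attachment_count_on_save(sender, instance, created, **kwargs):
    # Ignore saves of unrelated models and updates of existing attachments.
    if not isinstance(instance, WorkAttachment) or not created:
        return
    instance.work.attachment_count += 1
    instance.work.save()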
I think I've found the most Django-way of doing this: recent versions of Django suggest connecting signal handlers in a file called signals.py. Here's the necessary wiring code:
your_app/__init__.py:
default_app_config = 'your_app.apps.YourAppConfig'
your_app/apps.py:
import django.apps

class YourAppConfig(django.apps.AppConfig):
    name = 'your_app'

    def ready(self):
        import your_app.signals
your_app/signals.py:
from django.db.models.signals import post_save

from your_app.models import WorkAttachment

def get_subclasses(cls):
    result = [cls]
    classes_to_inspect = [cls]
    while classes_to_inspect:
        class_to_inspect = classes_to_inspect.pop()
        for subclass in class_to_inspect.__subclasses__():
            if subclass not in result:
                result.append(subclass)
                classes_to_inspect.append(subclass)
    return result

def update_attachment_count_on_save(sender, instance, **kwargs):
    instance.work.attachment_count += 1
    instance.work.save()

for subclass in get_subclasses(WorkAttachment):
    post_save.connect(update_attachment_count_on_save, subclass)
I think this works for all subclasses, because they will all be loaded by the time YourAppConfig.ready is called (and thus signals is imported).
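As a side note, the handler above increments the counter in Python and then saves, which can race under concurrent saves; a hedged variant using Django's F() expressions (keeping the same attachment_count assumption) would be:

from django.db.models import F

def update_attachment_count_on_save(sender, instance, **kwargs):
    # Let the database perform the increment atomically.
    type(instance.work).objects.filter(pk=instance.work.pk).update(
        attachment_count=F('attachment_count') + 1)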
You could try something like:
model_classes = [WorkAttachment, WorkAttachmentFileBased, WorkAttachmentPicture, ...]

def update_attachment_count_on_save(sender, instance, **kwargs):
    instance.work.attachment_count += 1
    instance.work.save()

for model_class in model_classes:
    post_save.connect(update_attachment_count_on_save,
                      sender=model_class,
                      dispatch_uid="att_post_save_" + model_class.__name__)
(Disclaimer: I have not tested the above)
I just did this using Python's (relatively) new __init_subclass__ method:
from django.db import models

def perform_on_save(*args, **kw):
    print("Doing something important after saving.")

class ParentClass(models.Model):
    class Meta:
        abstract = True

    @classmethod
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        models.signals.post_save.connect(perform_on_save, sender=cls)

class MySubclass(ParentClass):
    pass  # signal automatically gets connected.
This requires Django 2.1 and Python 3.6 or better. Note that the @classmethod line seems to be required when working with the Django model and associated metaclass, even though it's not required according to the official Python docs.
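As a quick, hedged usage check (using the MySubclass model from the snippet above), saving any concrete subclass should trigger the handler:

obj = MySubclass()
obj.save()  # prints "Doing something important after saving."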
post_save.connect(my_handler, ParentClass)

# connect all subclasses of base content item too
# (note: __subclasses__() only returns direct subclasses, not grandchildren)
for subclass in ParentClass.__subclasses__():
    post_save.connect(my_handler, subclass)
have a nice day!
Michael Herrmann's solution is definitely the most Django-way of doing this.
And yes it works for all subclasses as they are loaded at the ready() call.
I would like to contribute the documentation references:
In practice, signal handlers are usually defined in a signals submodule of the application they relate to. Signal receivers are connected in the ready() method of your application configuration class. If you’re using the receiver() decorator, simply import the signals submodule inside ready().
https://docs.djangoproject.com/en/dev/topics/signals/#connecting-receiver-functions
And add a warning:
The ready() method may be executed more than once during testing, so you may want to guard your signals from duplication, especially if you’re planning to send them within tests.
https://docs.djangoproject.com/en/dev/topics/signals/#connecting-receiver-functions
So you might want to prevent duplicate signals with a dispatch_uid parameter on the connect function.
post_save.connect(my_callback, dispatch_uid="my_unique_identifier")
In this context I'd do:
for subclass in get_subclasses(WorkAttachment):
    post_save.connect(update_attachment_count_on_save, subclass, dispatch_uid=subclass.__name__)
https://docs.djangoproject.com/en/dev/topics/signals/#preventing-duplicate-signals
This solution resolves the problem when not all modules are imported into memory.
def inherited_receiver(signal, sender, **kwargs):
    """
    Decorator that connects a receiver to the signal for the given sender
    and all of its subclasses.

    @inherited_receiver(post_save, sender=MyModel)
    def signal_receiver(sender, **kwargs):
        ...
    """
    parent_cls = sender

    def wrapper(func):
        def childs_receiver(sender, **kw):
            """
            This receiver ensures that func executes for child
            (and the parent) classes only.
            """
            child_cls = sender
            if issubclass(child_cls, parent_cls):
                func(sender=child_cls, **kw)

        signal.connect(childs_receiver, **kwargs)
        return childs_receiver
    return wrapper
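A hedged usage sketch with the models from the question (note the decorator connects the receiver without a sender, so it runs for every save and filters inside):

from django.db.models.signals import post_save

@inherited_receiver(post_save, sender=WorkAttachment)
def update_attachment_count_on_save(sender, instance, **kwargs):
    instance.work.attachment_count += 1
    instance.work.save()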
It's also possible to use content types to discover subclasses - assuming you have the base class and subclasses packaged in the same app. Something like this would work:
from django.contrib.contenttypes.models import ContentType

content_types = ContentType.objects.filter(app_label="your_app")

for content_type in content_types:
    model = content_type.model_class()
    post_save.connect(update_attachment_count_on_save, sender=model)
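One caveat: this query returns every model registered in the app, so you may want to filter to the relevant subclasses; a hedged sketch (model_class() can return None for stale content types):

from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save

for content_type in ContentType.objects.filter(app_label="your_app"):
    model = content_type.model_class()
    # Skip stale entries and models unrelated to WorkAttachment.
    if model is not None and issubclass(model, WorkAttachment):
        post_save.connect(update_attachment_count_on_save, sender=model)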
In addition to @clwainwright's answer, I adapted it to work for the m2m_changed signal instead. I had to post it as an answer for the code formatting to make sense:
@classmethod
def __init_subclass__(cls, **kwargs):
    super().__init_subclass__(**kwargs)
    for m2m_field in cls._meta.many_to_many:
        if hasattr(cls, m2m_field.attname) and hasattr(getattr(cls, m2m_field.attname), 'through'):
            models.signals.m2m_changed.connect(
                m2m_changed_receiver, weak=False,
                sender=getattr(cls, m2m_field.attname).through)
It does a couple of checks to ensure it doesn't break if anything changes in future Django versions.
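The m2m_changed_receiver referenced above is not shown in the answer; a hypothetical minimal signature, just so the snippet is self-contained, might be:

def m2m_changed_receiver(sender, instance, action, **kwargs):
    # React only once the relation has actually changed.
    if action in ('post_add', 'post_remove', 'post_clear'):
        pass  # do something with instance here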
Related
I want to create an immutable copy of the model instance, such that the user can access the details of the model, including its attributes, but not the save and delete methods.
The use case is that there are two repos accessing the Django model, where one is supposed to have writable access to the model, while the other should only have readable access to it.
I have been researching ways of doing this. One way I can think of is that the readable repo gets the model instance wrapped in a class that holds the instance as a private variable.
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

class ModelWrapper:
    def __init__(self, instance):
        self.__instance = instance

    def __getattr__(self, name):
        return getattr(self.__instance, name)
The obvious problem with this approach is that the user can access the instance from the wrapper instance:
# model_wrapper is the wrapper created around the instance. Then
# model_wrapper._ModelWrapper__instance refers to the ModelA instance. Thus
instance = model_wrapper._ModelWrapper__instance
instance.field2="changed"
instance.save()
Thus, they would be able to update the value. Is there a way to restrict this behaviour?
Try overriding the model's save and delete in the webapp where you want to restrict that:
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

    def save(self, *args, **kwargs):
        return  # Or raise an exception if needed

    def delete(self, *args, **kwargs):
        return  # Or raise an exception if needed
If you are using update or delete on a queryset you might also need a pre_save and pre_delete signal:
from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=ModelA)
def pre_delete_handler(sender, instance, *args, **kwargs):
    raise Exception('Cannot delete')
Edit: It looks like querysets don't send the pre_save/post_save signal, so that cannot be used there; the delete signals are emitted, though.
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

class ModelWrapper:
    def __init__(self, instance):
        self.__instance = instance
        # Delete unwanted attributes
        delattr(self.__instance, 'save')
        delattr(self.__instance, 'delete')

    def __getattr__(self, name):
        return getattr(self.__instance, name)
I have a Django model that makes use of some libraries which I would like to be able to override. For instance, when testing I'd like to pass a mock instead of having my model tightly coupled. I can do this in python, but for the life of me I can't figure out how to do it with a Django model. Here's a simplified example not using Django:
import requests

class APIClient:
    def __init__(self, **kwargs):
        self.http_lib = kwargs.get("http_lib", requests)

    def get_url(self, url):
        return self.http_lib.get(url)
For regular use of this class I can still use requests, but if I want to use a different library for some reason, or if I want to test certain outcomes, I can invoke the class with client = APIClient(http_lib=MockRequests())
But how do I do that with a Django model? If I try to pass kwargs that aren't backed by a database field Django throws an error. Overriding __init__ is not considered a good practice either. Is there a way in Django to set and get a value that isn't backed by a database column?
Do you have a settings.TEST var? If so, you could make http_lib a function that returns the proper lib:
from django.conf import settings

def get_http_lib(mock=None):
    if not mock:
        return requests
    return MockRequests()

class APIClient(Model):
    def __init__(self, **kwargs):
        # ...whatever...

    @property
    def some_column(self):
        http_lib = get_http_lib(settings.TEST)
        # ...etc...
Not ideal, but passable.
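A hedged sketch of how a test might exercise this, assuming the settings.TEST flag and MockRequests from the answer above, and using Django's override_settings helper:

from django.test import override_settings

with override_settings(TEST=True):
    client = APIClient()
    # Inside some_column, get_http_lib(settings.TEST) now returns MockRequests()
    # instead of the real requests library.
    value = client.some_column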
PRE-EDIT ANSWER (doesn't work):
What if you setattr after instantiating the Model?
# In model...
class APIClient(Model):
    def __init__(self, **kwargs):
        self.http_lib = requests
        # ...etc...

# In tests...
client = APIClient()
setattr(client, 'http_lib', MockRequests())
In our Django project, we have a receiver function for post_save signals sent by the User model:
@receiver(post_save, sender=User)
def update_intercom_attributes(sender, instance, **kwargs):
    # if the user is not yet in the app, do not attempt to setup/update their Intercom profile.
    if instance.using_app:
        intercom.update_intercom_attributes(instance)
This receiver calls an external API, and we'd like to disable it when generating test fixtures with factory_boy. As far as I can tell from https://factoryboy.readthedocs.io/en/latest/orms.html#disabling-signals, however, all one can do is mute all post_save signals, not a specific receiver.
At the moment, the way we are going about this is by defining an IntercomMixin which every test case inherits from (in the first position in the inheritance chain):
from unittest.mock import patch

class IntercomMixin:
    @classmethod
    def setUpClass(cls):
        cls.patcher = patch('lucy_web.lib.intercom.update_intercom_attributes')
        cls.intercomMock = cls.patcher.start()
        super().setUpClass()

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        cls.patcher.stop()
However, it is cumbersome and repetitive to add this mixin to every test case, and ideally, we'd like to build this patching functionality into the test factories themselves.
Is there any way to do this in Factory Boy? (I had a look at the source code (https://github.com/FactoryBoy/factory_boy/blob/2d735767b7f3e1f9adfc3f14c28eeef7acbf6e5a/factory/django.py#L256) and it seems like the __enter__ method is setting signal.receivers = []; perhaps this could be modified so that it accepts a receiver function and pops it out of the signal.receivers list)?
For anyone looking for just this thing and finding themselves on this question, you can find the solution here: https://stackoverflow.com/a/26490827/1108593
Basically... call @factory.django.mute_signals(post_save) on the test method itself, or in my case the setUpTestData method.
Test:
# test_models.py
from django.test import TestCase
from django.db.models.signals import post_save
from .factories import ProfileFactory
import factory

class ProfileTest(TestCase):
    @classmethod
    @factory.django.mute_signals(post_save)
    def setUpTestData(cls):
        ProfileFactory(id=1)  # This won't trigger user creation.
        ...
Profile Factory:
# factories.py
import factory
from factory.django import DjangoModelFactory
from profiles.models import Profile
from authentication.tests.factories import UserFactory

class ProfileFactory(DjangoModelFactory):
    class Meta:
        model = Profile

    user = factory.SubFactory(UserFactory)
This allows your factories to keep working as expected and the tests to manipulate them as needed to test what they need.
In case you want to mute all signals of a type, you can configure that on your factory directly. For example:
from django.db.models.signals import post_save

@factory.django.mute_signals(post_save)
class UserFactory(DjangoModelFactory):
    ...
I'm working on a Django project that I did not start, and I am facing an inheritance problem.
I have a big model (simplified in the example) called MyModel that is supposed to represent different kinds of items.
All the instance objects of MyModel should have the same fields, but the methods' behaviour varies a lot depending on the item type.
Up to this moment this has been designed using a single MyModel field called item_type.
Methods defined in MyModel then check for this field and perform different logic using multiple if statements:
def example_method(self):
    if self.item_type == TYPE_A:
        do_this()
    if self.item_type == TYPE_B1:
        do_that()
Moreover, some of the sub-types have many things in common, so let's say the subtypes B and C represent a first level of inheritance.
These types then have sub-types, for example B1, B2, C1, C2 (better explained in the example code below).
I would say that's not the best way to achieve polymorphism.
Now I want to change these models to use real inheritance.
Since all submodels have the same fields, I think multi-table inheritance is not necessary. I was thinking of using proxy models, because only their behaviour should change depending on their types.
This is a pseudo-solution I came up with:
ITEM_TYPE_CHOICES = (
    (TYPE_A, _('Type A')),
    (TYPE_B1, _('Type B1')),
    (TYPE_B2, _('Type B2')),
    (TYPE_C1, _('Type C1')),
    (TYPE_C2, _('Type C2')),
)

class MyModel(models.Model):
    item_type = models.CharField(max_length=12, choices=ITEM_TYPE_CHOICES)

    def common_thing(self):
        pass

    def do_something(self):
        pass

class ModelA(MyModel):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_A

    def do_something(self):
        return 'Hola'

class ModelB(MyModel):
    class Meta:
        proxy = True

    def common_thing(self):
        pass

class ModelB1(ModelB):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_B1

    def do_something(self):
        pass

class ModelB2(ModelB):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_B2

    def do_something(self):
        pass
This might work if we already know the type of the object we are working on.
Let's say we want to instantiate a MyModel object of type C1; then we could simply instantiate a ModelC1 and the item_type would be set up correctly.
The problem is how to get the correct proxy model from the generic MyModel instances?
The most common case is when we get a queryset result: MyModel.objects.all(), all these objects are instances of MyModel and they don't know anything about the proxies.
I've seen different solutions around, like django-polymorphic, but as I understand it, that relies on multi-table inheritance, doesn't it?
Several SO answers and custom solutions I've seen:
https://stackoverflow.com/a/7526676/1191416
Polymorphism in Django
http://anthony-tresontani.github.io/Python/2012/09/11/django-polymorphism/
https://github.com/craigds/django-typed-models
Creating instances of Django proxy models from their base class
but none of them convinced me 100%.
Considering this might be a common scenario, has anyone come up with a better solution?
When you use django-polymorphic in your base model, you'll get this casting behavior for free:
class MyModel(PolymorphicModel):
    pass
Each model that extends from it (proxy model or concrete model) will be cast back to that model when you do a MyModel.objects.all().
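A hedged sketch of what that looks like with the models from the question (assuming django-polymorphic is installed and the subclasses are concrete models):

from django.db import models
from polymorphic.models import PolymorphicModel

class MyModel(PolymorphicModel):
    description = models.TextField(blank=True, null=True)

class ModelA(MyModel):
    def do_something(self):
        return 'Hola'

# MyModel.objects.all() returns ModelA instances for rows that were created
# as ModelA, so do_something() dispatches to the subclass implementation.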
I have little experience with model proxies, so I can't tell whether this would work properly (without breaking anything, I mean) nor how complicated it might be, but you could use an item_type: ProxyClass mapping and override your model's queryset (or provide a second manager with a custom queryset, etc.) that looks up this mapping and instantiates the correct proxy model.
BTW you may want to look at django.db.models.base.Model.from_db, which (from a very quick glance at the source code) seems to be the method called when a queryset instantiates models. Just overriding this method might possibly be enough to solve the problem - but here again it might also break something...
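A hedged sketch of that idea, overriding from_db to swap in the proxy class based on an assumed item_type-to-proxy mapping (PROXY_CLASS_MAP is hypothetical here):

class MyModel(models.Model):
    item_type = models.CharField(max_length=12)

    # Hypothetical mapping, filled in once the proxy subclasses exist,
    # e.g. MyModel.PROXY_CLASS_MAP[TYPE_A] = ModelA
    PROXY_CLASS_MAP = {}

    @classmethod
    def from_db(cls, db, field_names, values):
        instance = super().from_db(db, field_names, values)
        proxy_class = cls.PROXY_CLASS_MAP.get(instance.item_type)
        if proxy_class is not None:
            instance.__class__ = proxy_class
        return instance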
I came up with a custom solution inspired by this SO answer and this blog post:
from django.db import models
from django.dispatch.dispatcher import receiver

ITEM_TYPE_CHOICES = (
    (TYPE_A, _('type_a')),
    (TYPE_B1, _('type_b1')),
    (TYPE_B2, _('type_b2')),
    (TYPE_C1, _('type_c1')),
    (TYPE_C2, _('type_c2')),
)

class MyModel(models.Model):
    item_type = models.CharField(max_length=12, choices=ITEM_TYPE_CHOICES)
    description = models.TextField(blank=True, null=True)

    def common_thing(self):
        pass

    def do_something(self):
        pass

    # ****************
    # Hacking Django *
    # ****************
    PROXY_CLASS_MAP = {}  # We don't know this yet

    @classmethod
    def register_proxy_class(cls, item_type):
        """Class decorator for registering subclasses."""
        def decorate(subclass):
            cls.PROXY_CLASS_MAP[item_type] = subclass
            return subclass
        return decorate

    def get_proxy_class(self):
        return self.PROXY_CLASS_MAP.get(self.item_type, MyModel)

# REGISTER SUBCLASSES

@MyModel.register_proxy_class(TYPE_A)
class ModelA(MyModel):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_A

    def do_something(self):
        pass

# No need to register this, it's never instantiated directly
class ModelB(MyModel):
    class Meta:
        proxy = True

    def common_thing(self):
        pass

@MyModel.register_proxy_class(TYPE_B1)
class ModelB1(ModelB):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_B1

    def do_something(self):
        pass

@MyModel.register_proxy_class(TYPE_B2)
class ModelB2(ModelB):
    class Meta:
        proxy = True

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.item_type = TYPE_B2

    def do_something(self):
        pass

# USING SIGNAL TO CHANGE `__class__` AT RUNTIME

@receiver(models.signals.post_init, sender=MyModel)
def update_proxy_object(sender, **kwargs):
    instance = kwargs['instance']
    if hasattr(instance, "get_proxy_class") and not instance._meta.proxy:
        proxy_class = instance.get_proxy_class()
        if proxy_class is not None:
            instance.__class__ = proxy_class
I'm using the register_proxy_class decorator to register each subclass after MyModel has been declared, because otherwise I would have needed to explicitly declare a map of {type: subclass} inside MyModel.
This would have been bad:
because at declaration time we can't reference any of the proxy subclasses from MyModel (we could solve this with string names)
the parent would be aware of its subclasses, which breaks OOP principles.
How it works:
Using the @register_proxy_class(type) decorator, each subclass registers itself, creating an entry in the MyModel.PROXY_CLASS_MAP dict when the module is loaded.
Then update_proxy_object is executed whenever MyModel dispatches a post_init signal. It changes the __class__ of MyModel instances at runtime to select the right proxy subclass.
So basically:
# a1: MyModel dispatches a post_init signal -> `update_proxy_object` sets the
# proper instance __class__ = ModelA. It does NOT call ModelA.__init__.
a1 = MyModel(item_type=TYPE_A)
isinstance(a1, MyModel)  # True
isinstance(a1, ModelA)   # True

# a2: calls ModelA.__init__, which calls the parent MyModel.__init__ and then
# sets up the item_type for us
a2 = ModelA()  # <- no need to pass item_type
isinstance(a2, MyModel)  # True
isinstance(a2, ModelA)   # True

# Using a custom manager of MyModel, return all objects having item_type == TYPE_B1
b1 = MyModel.objects.b1()[0]  # get the first one
isinstance(b1, ModelB1)  # True
isinstance(b1, ModelB)   # True
isinstance(b1, MyModel)  # True
isinstance(b1, ModelA)   # False
It seems to work so far, but I will experiment a bit more for possible problems I haven't thought about.
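The custom manager used in the last example (MyModel.objects.b1()) is not shown above; a hypothetical minimal version might look like this:

class MyModelManager(models.Manager):
    def b1(self):
        # Returns a queryset of items whose type is TYPE_B1; post_init then
        # re-classes each returned instance to ModelB1 via update_proxy_object.
        return self.filter(item_type=TYPE_B1)

# and on MyModel:
#     objects = MyModelManager()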
How to rewrite the Django model save method?
class Message(models.Model):
    """
    message
    """
    message_num = models.CharField(default=getMessageNum, max_length=16)
    title = models.CharField(max_length=64)
    content = models.CharField(max_length=1024)

    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        # I want to send an email here
        pass
What I mean is: when an instance is created successfully, I want to call a function, such as sending an email.
I found that the Django model has a save method. I am not sure whether I should write other code, because there are so many parameters.
Should I only have to care about my email-sending logic?
When you override the save method, you still have to make sure that it actually saves the instance. You can do that by simply calling the parent class's save via super:
class Message(models.Model):
    # ...

    def save(self, *args, **kwargs):
        # this will take care of the saving
        super(Message, self).save(*args, **kwargs)
        # do email stuff
        # better handle exceptions well or the saving might be rolled back
You can also connect the mail sending to the post_save (or pre_save, depending on your logic) signal. Whether you want to separate the two in that way depends on how closely the actions are linked, and a bit on your taste.
Overriding save gives you the option to intervene in the saving process, e.g. you can change the value of fields based on whether the mail sending was successful, or not save the instance at all.
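A hedged sketch of the signal-based alternative mentioned above (the send_mail arguments are placeholders):

from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Message, dispatch_uid="message_post_save_mail")
def send_message_email(sender, instance, created, **kwargs):
    if created:  # only for newly created messages
        send_mail(instance.title, instance.content,
                  'from@example.com', ['to@example.com'])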
The solution to what you want to do is to use Django signals. By using signals you can hook code to run when a model is created and saved, without having to rewrite the save method. That keeps the separation of code and logic in a much nicer way; obviously the model does not need to know about the emails, for example.
An example of how to use Signals would be to simply do the following:
from django.db.models.signals import pre_save
from django.dispatch import receiver
from myapp.models import MyModel

@receiver(pre_save, sender=MyModel)
def my_handler(sender, **kwargs):
    # Code to execute whenever MyModel is saved...
    pass
If you still want to override the save() method you can use the Python super() method to do so (docs).
class MyModel(models.Model):
    def save(self, *args, **kwargs):
        # This will call the parent method that you are overriding
        # so it will save your instance with the default behavior.
        super(MyModel, self).save(*args, **kwargs)
        # Then we add whatever extra code we want, e.g. send email...
        Messenger.send_email()
You need to connect a signal that fires once your message is saved. That means that when your message is saved, Django will issue a signal, as follows:
from django.db.models.signals import post_save
from django.dispatch import receiver

class Message(models.Model):
    # fields...

# receiver for sending email
@receiver(post_save, sender=Message, dispatch_uid="send_email")
def send_email(sender, instance, **kwargs):
    # your email send logic here..
    pass
You can put your signals in a signals.py file inside your app folder, and make sure to import that in your application config file as follows:
message/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'message'

    def ready(self):
        import message.signals
And update the init file as follows:
message/__init__.py
default_app_config = 'message.apps.MyAppConfig'