I am having trouble figuring out whether my signal handler is called during fixture loading or not. Most of my signal handlers receive an extra raw keyword argument when Django loads fixtures. However, this extra keyword is only passed through when handling the pre_save/post_save signals; it doesn't get passed through if the signal I am listening to is m2m_changed!
Is there any reliable way to tell whether I am in "fixture loading mode" or not with m2m_changed?
Well, if anyone else finds this just like me, one horrible, horrible way to solve this is the following:
https://code.djangoproject.com/ticket/8399#comment:7
In this old ticket on the Django project, a way to determine whether a signal was triggered from loaddata or not was requested. The raw keyword was the solution eventually adopted, but it does not appear in the m2m_changed signal. Before that, the following workaround was proposed, and it still works:
try:
    from functools import wraps
except ImportError:  # fallback for very old Django versions
    from django.utils.functional import wraps
import inspect

def disable_for_loaddata(signal_handler):
    @wraps(signal_handler)
    def wrapper(*args, **kwargs):
        # Walk up the call stack; if any frame belongs to the loaddata
        # module, a fixture is being loaded, so skip the handler.
        for fr in inspect.stack():
            if inspect.getmodulename(fr[1]) == 'loaddata':
                return
        signal_handler(*args, **kwargs)
    return wrapper
You can then use this decorator to disable any signal handler during loaddata, like this:
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

@receiver(m2m_changed, sender=models.Foo.bar.through)
@disable_for_loaddata
def some_signal(sender, instance: models.Foo, action: str, **kwargs):
    # signal code
    ...
In our Django project, we have a receiver function for post_save signals sent by the User model:
@receiver(post_save, sender=User)
def update_intercom_attributes(sender, instance, **kwargs):
    # If the user is not yet in the app, do not attempt to set up/update their Intercom profile.
    if instance.using_app:
        intercom.update_intercom_attributes(instance)
This receiver calls an external API, and we'd like to disable it when generating test fixtures with factory_boy. As far as I can tell from https://factoryboy.readthedocs.io/en/latest/orms.html#disabling-signals, however, all one can do is mute all post_save signals, not a specific receiver.
At the moment, the way we are going about this is by defining an IntercomMixin which every test case inherits from (in the first position in the inheritance chain):
from unittest.mock import patch

class IntercomMixin:
    @classmethod
    def setUpClass(cls):
        cls.patcher = patch('lucy_web.lib.intercom.update_intercom_attributes')
        cls.intercomMock = cls.patcher.start()
        super().setUpClass()

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        cls.patcher.stop()
However, it is cumbersome and repetitive to add this mixin to every test case, and ideally, we'd like to build this patching functionality into the test factories themselves.
Is there any way to do this in Factory Boy? (I had a look at the source code (https://github.com/FactoryBoy/factory_boy/blob/2d735767b7f3e1f9adfc3f14c28eeef7acbf6e5a/factory/django.py#L256) and it seems like the __enter__ method is setting signal.receivers = []; perhaps this could be modified so that it accepts a receiver function and pops it out of the signal.receivers list?)
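One workaround along those lines (this is not factory_boy API, just a sketch of the idea in the question) is a small context manager that temporarily disconnects a single receiver instead of muting the whole signal. Like mute_signals, it is process-global and not thread-safe:

from contextlib import contextmanager

@contextmanager
def mute_receiver(signal, receiver_func, sender=None):
    # Disconnect only this one receiver; other receivers keep firing.
    signal.disconnect(receiver_func, sender=sender)
    try:
        yield
    finally:
        signal.connect(receiver_func, sender=sender)

# Usage in a test:
# with mute_receiver(post_save, update_intercom_attributes, sender=User):
#     user = UserFactory()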
For anyone looking for just this thing and finding themselves on this question, you can find the solution here: https://stackoverflow.com/a/26490827/1108593
Basically... call @factory.django.mute_signals(post_save) on the test method itself, or in my case the setUpTestData method.
Test:
# test_models.py
from django.test import TestCase
from django.db.models.signals import post_save
from .factories import ProfileFactory
import factory

class ProfileTest(TestCase):
    @classmethod
    @factory.django.mute_signals(post_save)
    def setUpTestData(cls):
        ProfileFactory(id=1)  # This won't trigger user creation.
        ...
Profile Factory:
# factories.py
import factory
from factory.django import DjangoModelFactory
from profiles.models import Profile
from authentication.tests.factories import UserFactory

class ProfileFactory(DjangoModelFactory):
    class Meta:
        model = Profile

    user = factory.SubFactory(UserFactory)
This allows your factories to keep working as expected and the tests to manipulate them as needed to test what they need.
In case you want to mute all signals of a type, you can configure that on your factory directly. For example:
import factory
from django.db.models.signals import post_save

@factory.django.mute_signals(post_save)
class UserFactory(DjangoModelFactory):
    ...
I'd like to be notified whenever a model is saved, then do some processing and save another model. I need the first model to already have an ID (set by the database) during the processing stage.
With Django, one would override the model's .save() method or use signals like:
from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import MyModel, OtherModel

@receiver(post_save, sender=MyModel)
def do_stuff(sender, instance, created, **kwargs):
    assert instance.id is not None
    ...
    OtherModel.objects.create(related=instance, data=...)
How do I do something similar with SQLAlchemy and Flask? I looked up ORM events, and it seemed that the 'expire' InstanceEvent would fit the bill. It appears to fire whenever a model instance is saved, but when I try to do the same kind of thing:
from sqlalchemy import event
from . import db
from .models import MyModel, OtherModel

@event.listens_for(MyModel, "expire")
def do_stuff(target, attrs):
    assert target.id is not None
    ...
    db.session.add(OtherModel(related=target, data=...))
    db.session.commit()
It fails on assert target.id is not None with:
InvalidRequestError: This session is in 'committed' state; no further SQL can be emitted within this transaction.
It might be that I'm just approaching this the wrong way or I'm missing something crucial, but I cannot figure it out. The documentation is split among Flask, Flask-SQLAlchemy and SQLAlchemy, and I have a hard time piecing it together.
How should I make this kind of post save trigger with SQLAlchemy?
The event you want to listen for is 'after_insert', not 'expire':
@event.listens_for(MyModel, 'after_insert')
def do_stuff(mapper, connection, target):
    assert target.id is not None
    ...
Also, after creating OtherModel inside the listener and calling db.session.add, don't call db.session.commit, as it will throw a ResourceClosedError exception.
Have a look at the accepted answer to this question which gives an example of using SQLAlchemy's after_insert mapper event. It should do what you want, but using raw SQL rather than your session object is recommended.
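For completeness, here is a sketch of that approach: after_insert hands the handler the low-level connection, so the second row can be written without touching the session mid-flush (the related_id and data columns are assumptions for illustration):

from sqlalchemy import event

@event.listens_for(MyModel, 'after_insert')
def create_other_model(mapper, connection, target):
    # target.id is already populated here; use the connection rather
    # than the session, because the session is still mid-flush.
    connection.execute(
        OtherModel.__table__.insert().values(related_id=target.id, data='...')
    )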
I read the Django docs about signals and wrote this piece of code for my model Car:
from django.core.signals import request_finished
from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import Car

@receiver(request_finished)
def signal_callback(sender, **kwargs):
    print('Save Signal received')

@receiver(post_save, sender=Car)
def signal_handler(sender, **kwargs):
    pass

request_finished.connect(signal_callback, dispatch_uid="Unique save id")
But the problem is that when I fire up my server and just open the admin, I get a lot of 'Save Signal received' lines in my terminal. What I am wondering about is that I have restricted signal_handler to post_save only, but still, without even saving anything, the message shows up many times. I don't understand this.
Note: I will be honest. I understood parts of the documentation, not all of it.
There is a simpler way to bind post_save signals:
from django.db.models.signals import post_save
from myapp.models import Car

def do_something(sender, **kwargs):
    print('the object is now saved.')
    car = kwargs['instance']  # now I have access to the object

post_save.connect(do_something, sender=Car)
The request_finished signal gets fired every time an HTTP request is made, which is a hog.
You bound the request_finished signal to signal_callback. Remove (or comment out) signal_callback, and change signal_handler as follows:
@receiver(post_save, sender=Car)
def signal_handler(sender, **kwargs):
    print('Save signal received')
There are many Stack Overflow posts about recursion when using the post_save signal, to which the comments and answers are overwhelmingly: "why not just override save()?" or "only act when created == True".
Well, I believe there's a good case for not using save(): for example, I am adding a temporary application that handles order fulfillment data completely separately from our Order model.
The rest of the framework is blissfully unaware of the fulfillment application, and using post_save hooks isolates all fulfillment-related code from our Order model.
If we drop the fulfillment service, nothing about our core code has to change. We delete the fulfillment app, and that's it.
So, are there any decent methods to ensure the post_save signal doesn't fire the same handler twice?
You can use update() instead of save() in the signal handler:
queryset.filter(pk=instance.pk).update(....)
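Spelled out, a handler along those lines could look like the following sketch (the thumbnail_url field and make_thumbnail() helper are made up for illustration):

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Article)
def set_thumbnail(sender, instance, created, **kwargs):
    # update() issues a direct SQL UPDATE and sends no post_save,
    # so this handler is not re-entered.
    Article.objects.filter(pk=instance.pk).update(
        thumbnail_url=make_thumbnail(instance)  # hypothetical helper
    )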
What do you think about this solution?
@receiver(post_save, sender=Article)
def generate_thumbnails(sender, instance=None, created=False, **kwargs):
    if not instance:
        return
    if hasattr(instance, '_dirty'):
        return
    do_something()
    try:
        instance._dirty = True
        instance.save()
    finally:
        del instance._dirty
You can also create a decorator:
from functools import wraps

def prevent_recursion(func):
    @wraps(func)
    def no_recursion(sender, instance=None, **kwargs):
        if not instance:
            return
        if hasattr(instance, '_dirty'):
            return
        func(sender, instance=instance, **kwargs)
        try:
            instance._dirty = True
            instance.save()
        finally:
            del instance._dirty
    return no_recursion
@receiver(post_save, sender=Article)
@prevent_recursion
def generate_thumbnails(sender, instance=None, created=False, **kwargs):
    do_something()
Don't disconnect signals. If a new instance of the same model is created while the signal is disconnected, its handler won't be fired. Signals are global across Django, and several requests can run concurrently, so some would miss their post_save handler while others run it.
I think creating a save_without_signals() method on the model is more explicit:
from django.db import models
from django.db.models.signals import post_save

class MyModel(models.Model):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._disable_signals = False

    def save_without_signals(self):
        """
        This allows updating the model from code running inside a post_save
        handler without going into an infinite loop.
        """
        self._disable_signals = True
        self.save()
        self._disable_signals = False

def my_model_post_save(sender, instance, *args, **kwargs):
    if not instance._disable_signals:
        # Execute the handler code here.
        ...

post_save.connect(my_model_post_save, sender=MyModel)
How about disconnecting and then reconnecting the signal within your post_save handler:
def my_post_save_handler(sender, instance, **kwargs):
    post_save.disconnect(my_post_save_handler, sender=sender)
    instance.do_stuff()
    instance.save()
    post_save.connect(my_post_save_handler, sender=sender)

post_save.connect(my_post_save_handler, sender=Order)
You should use queryset.update() instead of Model.save(), but you need to take care of something else: it will not change the instance you already have in memory, so if you want to use the updated object you should fetch it from the database again. For example:
>>> MyModel.objects.create(pk=1, text='')
>>> el = MyModel.objects.get(pk=1)
>>> MyModel.objects.filter(pk=1).update(text='Updated')
>>> print(el.text)
''
So, if you want to use the updated object, you should fetch it again:
>>> MyModel.objects.create(pk=1, text='')
>>> el = MyModel.objects.get(pk=1)
>>> MyModel.objects.filter(pk=1).update(text='Updated')
>>> el = MyModel.objects.get(pk=1)  # Fetch it again
>>> print(el.text)
'Updated'
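Since Django 1.8 you can also reload the existing instance in place instead of re-fetching it:

>>> el.refresh_from_db()
>>> print(el.text)
'Updated'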
You could also check the raw argument in post_save and then call save_base instead of save.
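Sketched out (with a made-up Article model and slug field), that pattern looks like this: save_base(raw=True) re-sends post_save with raw=True, so the re-entrant call returns early.

from django.db.models.signals import post_save
from django.dispatch import receiver
from django.utils.text import slugify

@receiver(post_save, sender=Article)
def set_slug(sender, instance, created, raw=False, **kwargs):
    if raw:
        # Fixture load, or our own save_base(raw=True) call below.
        return
    instance.slug = slugify(instance.title)  # illustrative field
    instance.save_base(raw=True)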
The model's .objects.update() method bypasses the post_save signal.
Try something like this:
from django.db import models
from django.db.models.signals import post_save

class MyModel(models.Model):
    name = models.CharField(max_length=200)
    num_saves = models.PositiveSmallIntegerField(default=0)

    @classmethod
    def post_save(cls, sender, instance, created, *args, **kwargs):
        # update() bypasses the signals, so this does not recurse.
        MyModel.objects.filter(id=instance.id).update(num_saves=instance.num_saves + 1)

post_save.connect(MyModel.post_save, sender=MyModel)
In this example, an object has a name, and each time .save() is called, the num_saves field is incremented, but without recursion.
Check this out...
Each signal has its own benefits, as you can read in the docs here, but I wanted to share a couple of things to keep in mind with the pre_save and post_save signals.
Both are called every time .save() on a model is called. In other words, if you save the model instance, the signals are sent.
Running save() on the instance within a post_save handler can easily create a never-ending loop and therefore cause a "maximum recursion depth exceeded" error, but only if you don't use .save() correctly.
pre_save is great for changing just instance data, because you never have to call save(), which eliminates the possibility above. The reason you don't have to call save() is that a pre_save signal literally fires right before the instance is saved.
Signals can call other signals and/or run delayed tasks (for Celery), which can be huge for usability.
Source: https://www.codingforentrepreneurs.com/blog/post-save-vs-pre-save-vs-override-save-method/
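For instance, a minimal pre_save sketch along those lines (the Article model and its title field are illustrative):

from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=Article)
def normalize_title(sender, instance, **kwargs):
    # The instance is about to be saved, so mutating it here is enough;
    # no extra save() call, hence no recursion risk.
    instance.title = instance.title.strip()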
Regards!!
In a Django post_save signal handler, a check on 'created' is required to avoid recursion:
from django.dispatch import receiver
from django.db.models.signals import post_save

@receiver(post_save, sender=DemoModel)
def _post_save_receiver(sender, instance, created, **kwargs):
    if created:
        print('hi..')
        instance.save()
I was using the save_without_signals() method by @Rune Kaagaard until I updated my Django to 4.1. On Django 4.1 that method started raising an IntegrityError on the database that gave me four days of headaches, and I couldn't fix it.
So I started to use the queryset.update() method instead, and it worked like a charm. It doesn't trigger pre_save() or post_save(), and you don't need to override your model's save() method. One line of code:
@receiver(pre_save, sender=Your_model)
def any_name(sender, instance, **kwargs):
    Your_model.objects.filter(pk=instance.pk).update(model_attribute=any_value)
I have a Django site, and some of the feeds are published through FeedBurner. I would like to ping FeedBurner whenever I save an instance of a particular model. FeedBurner's website says to use the XML-RPC ping mechanism, but I can't find much documentation on how to implement it.
What's the easiest way to do the XML-RPC ping in Django/Python?
You can use Django's signals feature to get a callback after a model is saved:
import xmlrpc.client  # this was the xmlrpclib module on Python 2

from django.db.models.signals import post_save
from app.models import MyModel

def ping_handler(sender, instance=None, **kwargs):
    if instance is None:
        return
    rpc = xmlrpc.client.ServerProxy('http://ping.feedburner.google.com/')
    rpc.weblogUpdates.ping(instance.title, instance.get_absolute_url())

post_save.connect(ping_handler, sender=MyModel)
Clearly, you should update this with what works for your app and read up on signals in case you want a different event.
Use pluggable apps, Luke!
http://github.com/svetlyak40wt/django-pingback/
Maybe something like this:
import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://feedburnerrpc')
reply = server.weblogUpdates.ping('website title', 'http://urltothenewpost')