How to trigger a Django signal when using flush?

I'm using Redis to cache data in my project and delete the cache through a signal. Whenever there is a change to my database, the signal clears the cache using the key provided, as follows:
from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Book)
def cache_clean_books(sender, instance, created, **kwargs):
    if created:
        cache.delete("all_books")
    else:
        cache.delete(f'book_id_{instance.id}')
Everything works fine so far, but when I use python manage.py flush to clear data from the database, my signal is not triggered.
Is there any way to trigger the signal whenever I flush data in Django?

The post_save signal won't be called here, since running python manage.py flush doesn't act on model instances; it simply executes SQL to flush the database. You can use the post_migrate signal if you want to run code whenever the flush command is run:
from django.db.models.signals import post_migrate
from django.dispatch import receiver

@receiver(post_migrate)
def clear_cache(sender, **kwargs):
    # Clear your cache here
    pass
Note: this will also run whenever you run a migration, but it can safely be assumed that a migration should probably clear the cache anyway.
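Since these receivers are hard to demonstrate outside a running project, here is a minimal stand-in (plain Python, no Django or Redis; all names are illustrative) showing the invalidation pattern the two receivers implement — drop the list key on create, the detail key on update, and everything on flush/migrate:

```python
# A dict plays the role of the Redis cache; the two functions mirror the
# post_save and post_migrate receivers from the answer above.

cache = {}

def on_post_save(instance_id, created):
    # Mirrors the post_save receiver: list key on create, detail key on update.
    if created:
        cache.pop("all_books", None)
    else:
        cache.pop(f"book_id_{instance_id}", None)

def on_post_migrate():
    # Mirrors the post_migrate receiver: flush/migrate invalidates everything.
    cache.clear()

cache["all_books"] = ["b1", "b2"]
cache["book_id_7"] = {"title": "Dune"}

on_post_save(7, created=False)   # update -> only the detail key is dropped
assert "all_books" in cache and "book_id_7" not in cache

on_post_migrate()                # flush -> everything is dropped
assert cache == {}
```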

Related

How do I ensure that my Django singleton model exists on startup?

I'm using a django third party app called django-solo to give me a SingletonModel that I can use for some global project settings, since I don't need multiple objects to represent these settings.
This works great, but on a fresh database, I need to go in and create an instance of this model, or it won't show up in Django Admin.
How can I make Django automatically create this instance on startup, when it connects to the database?
I tried using the following code that I got here in my settings_app/apps.py, but it doesn't seem to fire at any point:
from django.apps import AppConfig
from django.db.backends.signals import connection_created

def init_my_app(sender, connection, **kwargs):
    from .models import MyGlobalSettings
    # This creates an instance of MyGlobalSettings if it doesn't exist
    MyGlobalSettings.get_solo()
    print("Instance of MyGlobalSettings created...")

class SettingsAppConfig(AppConfig):
    ...
    def ready(self):
        connection_created.connect(init_my_app, sender=self)
The instance isn't created, and I don't see my print statement in the logs. Am I doing something wrong?
The sample code has something about using post_migrate as well, but I don't need any special code to run after a migration, so I'm not sure that I need that.
Update:
My INSTALLED_APPS looks like this:
INSTALLED_APPS = [
    ...
    'settings_app.apps.SettingsAppConfig',
    'solo',  # This is for django-solo
]
Also note that the ready() method does run. If I add a print statement to it, I do see it in the logs. It's just my init_my_app() function that doesn't run.
I think it's likely that the database connection is initialised before your app's ready() method runs - so attaching to the signal at that point is too late.
You don't need to attach to the signal there anyway - just do the check directly in the ready() method:
def ready(self):
    from .models import MyGlobalSettings
    MyGlobalSettings.get_solo()
Note that there is potential to run into other issues here. For example, you will get an error if migrations for the MyGlobalSettings model haven't yet been applied to the database (when running manage.py migrate for the first time, say). You will probably need to catch the specific database exceptions and skip creating the object in such cases.
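That guard can be sketched as follows. This is plain Python so it runs outside a project: TableMissing stands in for the database exceptions you'd actually catch (e.g. django.db.OperationalError / ProgrammingError), and the create callable stands in for MyGlobalSettings.get_solo():

```python
# Hedged sketch: guard the startup query so a first `migrate` run on an
# empty database doesn't crash. Names are illustrative stand-ins.

import logging

logger = logging.getLogger(__name__)

class TableMissing(Exception):
    """Stands in for django.db.OperationalError / ProgrammingError."""

def ensure_singleton(create, db_errors=(TableMissing,)):
    try:
        create()        # in a real project: MyGlobalSettings.get_solo()
        return True
    except db_errors:
        # Table not migrated yet -- skip quietly instead of crashing startup.
        logger.warning("Settings table missing; skipping singleton creation")
        return False

assert ensure_singleton(lambda: None) is True

def missing():
    raise TableMissing()

assert ensure_singleton(missing) is False
```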

Django: running code on every startup but after database is migrated

I thought there was an easy answer to this in recent versions of Django but I can't find it.
I have code that touches the database. I want it to run every time Django starts up. I seem to have two options:
Option 1. AppConfig.ready() - this works but also runs before database tables have been created (i.e. during tests or when reinitializing the app without data). If I use this I have to catch multiple types of Exceptions and guess that the cause is an empty db:
def is_db_init_error(e, table_name):
    return ("{}' doesn't exist".format(table_name) in str(e) or
            "no such table: {}".format(table_name) in str(e))

try:
    # doing stuff
    ...
except Exception as e:
    if not is_db_init_error(e, 'foo'):
        raise
    else:
        logger.warning("Skipping updating Foo object as db table doesn't exist")
Option 2. Use post_migrate.connect(foo_init, sender=self) - but this only runs when I do a migration.
Option 3. The old way - call it from urls.py - I wanted to keep stuff like this out of urls.py and I thought AppConfig was the one true path
I've settled for option 2 so far - I don't like the smelly try/except stuff in Option 1 and Option 3 bugs me as urls.py becomes a dumping ground.
However Option 2 often trips me up when I'm developing locally - I need to remember to run migrations whenever I want my init code to run. Things like pulling down a production db or similar often cause problems because migrations aren't triggered.
I would suggest the connection_created signal, which is:
Sent when the database wrapper makes the initial connection to the
database. This is particularly useful if you’d like to send any post
connection commands to the SQL backend.
So it will execute the signal's code when the app connects to the database at the start of the application's cycle.
It will also work within a multiple database configuration and even separate the connections made by the app at initialization:
connection
The database connection that was opened. This can be used in a multiple-database configuration to differentiate connection signals
from different databases.
Note:
You may want to consider using a combination of the post_migrate and connection_created signals, checking inside your AppConfig.ready() whether a migration happened (e.g., by setting a flag in the post_migrate handler):
from django.apps import AppConfig
from django.db.models.signals import post_migrate
from django.db.backends.signals import connection_created

migration_happened = False

def post_migration_callback(sender, **kwargs):
    global migration_happened
    ...
    migration_happened = True

def init_my_app(sender, connection, **kwargs):
    ...

class MyAppConfig(AppConfig):
    ...
    def ready(self):
        post_migrate.connect(post_migration_callback, sender=self)
        if not migration_happened:
            connection_created.connect(init_my_app, sender=self)
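To see the flag logic in isolation, here is a stand-in dispatcher (plain Python, not Django's actual Signal class) that mimics connect()/send() just enough to exercise the snippet above; note that the flag is checked at connect time, in ready():

```python
# MiniSignal is an illustrative stand-in for django.dispatch.Signal.

class MiniSignal:
    def __init__(self):
        self.receivers = []

    def connect(self, fn):
        self.receivers.append(fn)

    def send(self, **kwargs):
        for fn in self.receivers:
            fn(**kwargs)

post_migrate = MiniSignal()
connection_created = MiniSignal()

state = {"migration_happened": False, "init_ran": 0}

def post_migration_callback(**kwargs):
    state["migration_happened"] = True

def init_my_app(**kwargs):
    state["init_ran"] += 1

# Mirrors ready(): the flag is still False here, so init_my_app is connected.
post_migrate.connect(post_migration_callback)
if not state["migration_happened"]:
    connection_created.connect(init_my_app)

post_migrate.send()          # e.g. `manage.py migrate` ran
connection_created.send()    # first DB connection of the process

assert state["migration_happened"] is True
assert state["init_ran"] == 1
```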
In Django >= 3.2 the post_migrate signal is sent even if no migrations are run, so you can use it to run startup code that talks to the database.
https://docs.djangoproject.com/en/3.2/ref/signals/#post-migrate
Sent at the end of the migrate (even if no migrations are run) and flush commands. It’s not emitted for applications that lack a models module.
Handlers of this signal must not perform database schema alterations as doing so may cause the flush command to fail if it runs during the migrate command.

Entry point for a Django app which is simply a redis subscribe loop that needs access to models - no urls/views

I currently have an external non-Django python process which is a simple redis subscribe loop that simply munges the messages it receives and inserts the result in a user mailbox (redis list), which my main app accesses on requests.
My listener now needs access to models, so it makes sense (to me) to make this a Django app. Being a loop, however, I imagine it's probably best to run this as a separate process.
Edit: removed my own proposed solution using AppConfig.ready() and running the separate process via gunicorn.
What I'm doing is pretty simple, but I'm a bit confused as to where the entry point for this app should be. Any ideas?
Any help/suggestions would be appreciated,
-Scott
I went ahead with @DanielRoseman's suggestion and used a management command as the entry point.
I simply added a management command 'runsubscriber' which looks like:
my_app/management/commands/runsubscriber.py:

from django.core.management.base import BaseCommand
from my_app.redis_subscribe_loop import RedisSubscribeLoop

class Command(BaseCommand):
    def handle(self, *args, **options):
        rsl = RedisSubscribeLoop()
        try:
            rsl.start()
        except KeyboardInterrupt:
            rsl.stop()
I can now run this as a separate process via ./manage.py runsubscriber
and kill it with a ^C. my stop() looks like:
my_app/redis_subscribe_loop.py:

def stop(self):
    self.rc.punsubscribe()  # unsubscribe from all channels
    self.rc.close()
so that it shuts down cleanly.
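For reference, here is a hedged sketch of what such a subscribe loop might look like. The pubsub object is injected so the loop can be exercised in-process with a fake; in a real project it would be redis.Redis().pubsub(), and the channel and handler names are illustrative, not from the original code:

```python
# Sketch of a subscribe loop over redis-py's pubsub message dicts
# ({"type": ..., "channel": ..., "data": ...}). Not the asker's actual class.

class RedisSubscribeLoop:
    def __init__(self, pubsub, handler):
        self.pubsub = pubsub    # e.g. redis.Redis().pubsub()
        self.handler = handler  # munges a message into a user mailbox entry

    def start(self, pattern="mailbox.*"):
        self.pubsub.psubscribe(pattern)
        for message in self.pubsub.listen():  # blocks on a real connection
            if message["type"] == "pmessage":
                self.handler(message["channel"], message["data"])

    def stop(self):
        self.pubsub.punsubscribe()  # unsubscribe from all patterns
        self.pubsub.close()

# A tiny fake pubsub lets the loop run end-to-end without a Redis server:
class FakePubSub:
    def __init__(self, messages):
        self._messages = messages

    def psubscribe(self, pattern):
        pass

    def listen(self):
        yield from self._messages

    def punsubscribe(self):
        pass

    def close(self):
        pass

seen = []
loop = RedisSubscribeLoop(
    FakePubSub([{"type": "pmessage", "channel": "mailbox.1", "data": "hi"}]),
    handler=lambda ch, data: seen.append((ch, data)),
)
loop.start()
loop.stop()
assert seen == [("mailbox.1", "hi")]
```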
Thanks for all your help,
-Scott

Unable to use Python's threading.Thread in Django app

I am trying to create a web application as a front end to another Python app. The user enters data into a form; on submit, the data should be saved in a database and passed to a thread object class. The thread is kicked off strictly by a user action. My problem is that I can import threading, but cannot access threading.Thread. When the thread ends, it will update the server, so when the user views the job information, they'll see the results.
View:
@login_required(login_url='/login')
def createNetworkView(request):
    if request.method == "POST":
        # grab my variables from POST
        job = models.MyJob()
        # load my variables into MyJob object
        job.save()
        t = ProcessJobThread(job.id, my, various, POST, inputs, here)
        t.start()
        return HttpResponseRedirect("/viewJob?jobID=" + str(job.id))
    else:
        return HttpResponseRedirect("/")
My thread class:
import threading  # this works

print "About to make thread object"  # This works, I see this in the log

class CreateNetworkThread(threading.Thread):  # failure here
    def __init__(self, jobid, blah1, blah2, blah3):
        threading.Thread.__init__(self)

    def run(self):
        doCoolStuff()
        updateDB()
I get:
Exception Type: ImportError
Exception Value: cannot import name Thread
However, if I run python on the command line, I can import threading and also do from threading import Thread. What's the deal?
I have seen other things, like How to use thread in Django and Celery but that seemed overkill, and I don't see how that example could import threading and use threading.Thread, when I can't.
Thank you.
Edit: I'm using Django 1.4.1, Python 2.7.3, Ubuntu 12.10, SQLite for the DB, and I'm running the web application with ./manage.py runserver.
This was a silly issue on my end. I had created a file called "threading.py", which someone suggested I delete, and I did (or thought I did). Because I was using Eclipse, the PyDev (Python) plugin deleted only the threading.py file I had created and hid the compiled *.pyc file. A stale threading.pyc was still lingering around and shadowing the standard library module, even though I had enabled PyDev's option to delete orphaned .pyc files.
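A quick way to catch this class of bug is to check where Python actually loaded the module from (shown here in Python 3; the same check works in Python 2 with print as a statement). If threading.__file__ points into your project directory rather than the standard library, a local threading.py or stale threading.pyc is shadowing the real module:

```python
import os
import threading

# Where did this module really come from? A shadowed import shows up here
# as a path inside your own project instead of the standard library.
loaded_from = os.path.dirname(os.path.abspath(threading.__file__))
print(loaded_from)

assert loaded_from != os.getcwd()        # not shadowed by a local file
assert hasattr(threading, "Thread")      # the real module exposes Thread
```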

Django post_syncdb signal handler not getting called?

I have a myapp/management/__init__.py that is registering a post_syncdb handler like so:
from django.db.models import signals
from features import models as features
def create_features(app, created_models, verbosity, **kwargs):
print "Creating features!"
# Do stuff...
signals.post_syncdb.connect(create_features, sender=features)
I've verified the following:
Both features and myapp are in settings.INSTALLED_APPS
myapp.management is getting loaded prior to the syncdb running (verified via a print statement at the module level)
The features app is getting processed by syncdb, and it is emitting a post_syncdb signal (verified by examining syncdb's output with --verbosity=2).
I'm using the exact same idiom for another pair of apps, and that handler is called correctly. I've compared the two modules and found no relevant differences between the invocations.
However, myapp.management.create_features is never called. What am I missing?
try putting it in your models.py
Just came across the same issue; the way I solved it was to remove the sender from the connect() call and check for the app inside the callback function:
from django.db.models import signals
from features import models as features

def create_features(app, created_models, verbosity, **kwargs):
    if app != features:  # this works as it compares models module instances
        return
    print "Creating features!"
    # Do stuff...

signals.post_syncdb.connect(create_features)
That way you can keep them in your management module like Django docs suggest. I agree that it should work like you suggested. You could probably dig into the implementation of the Signal class in django.dispatch.
The point is the sender. Your callback is called only if the sender matches. In my case the sender was the models module, and the signal isn't emitted for it when syncdb is run again and the synced models' tables already exist in the database. The documentation says this, but without proper emphasis:
sender
The models module that was just installed. That is, if syncdb just installed an app called "foo.bar.myapp", sender will be the foo.bar.myapp.models module.
So my solution was to drop the database and install my app again.
