Django + background-tasks: how to initialize (Python)

I have a basic Django project that I use as a front-end interface for a (Condor) computing cluster that generates simulations. From the Django app, users can start simulations (in Condor). The simulation-related metadata and the simulation state are kept in a DB.
I need to add a new feature: notification when (some) simulations are done.
Since I want a simple solution (and I am already using django-background-tasks), I was thinking of using a repeating task that, at fixed intervals, queries Condor about the jobs, updates the DB and, if necessary, sends notifications.
So if I want to update the statuses every 10 minutes, I would have something like:
from background_task import background

@background(schedule=1)
def check_simulations():
    # look up simulation statuses
    simulation_list = get_Simulations()
    for sim in simulation_list:
        if sim.status == Simulation.DONE:
            # assuming each simulation knows the user who started it
            sim.user.email_user('Simulation Complete', 'You have been notified')

def initialize():
    check_simulations(repeat=600)  # re-run every 10 minutes
However, this task (or rather the initialize() method) must be started (called once) to create and schedule the check_simulations() task (which effectively serializes the call and saves it in the DB); after that, the background-tasks worker will read it, execute it, and reschedule it (or retry it if there is an error).
My questions:
Where should I put the call to the initialize() method so that it is only run once?
One such place could be, for instance, urls.py, but that is an extremely ugly solution. Is there a better way?
How do I ensure that a server restart will not create and schedule a new task (if one already exists)?
This may happen if a task is already scheduled (i.e. a serialized task is in the background-tasks table) and the web server is restarted, so the initialize() method is called again and a new task is created and scheduled.

I had a similar problem and I solved it this way.
I initialize my task in urls.py (I don't know whether there are other places you can put it), and I also added an if to check whether the task is already in the database:
from background_task.models import Task

if not Task.objects.filter(verbose_name="update_orders").exists():
    tasks.update_orders(repeat=300, verbose_name="update_orders")
I have tested it and it works fine. You can also look the task up by other fields, like its name, hash, ...
You can check the Task model here: https://github.com/arteria/django-background-tasks/blob/master/background_task/models.py
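Applied to the question's check_simulations() task, a minimal sketch of that guard (assuming the task lives in a hypothetical myapp.tasks module and using "check_simulations" as the verbose_name) could look like this:

# urls.py, or wherever you choose to bootstrap the schedule
from background_task.models import Task
from myapp.tasks import check_simulations  # hypothetical module path

if not Task.objects.filter(verbose_name="check_simulations").exists():
    # repeat every 10 minutes; verbose_name lets us find the task again after a restart
    check_simulations(repeat=600, verbose_name="check_simulations")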


Huey not calling tasks in Django

I have a Django REST framework app that calls 2 Huey tasks in succession in a serializer's create method, like so:
...
def create(self, validated_data):
    user = self.context['request'].user
    player_ids = validated_data.get('players', [])
    game = Game.objects.create()
    tasks.make_players_friends_task(player_ids)
    tasks.send_notification_task(user.id, game.id)
    return game
# tasks.py
from huey.contrib.djhuey import db_task

@db_task()
def make_players_friends_task(ids):
    players = User.objects.filter(id__in=ids)
    # process players

@db_task()
def send_notification_task(user_id, game_id):
    user = User.objects.get(id=user_id)
    game = Game.objects.get(id=game_id)
    # send notifications
When running the Huey consumer in the terminal and hitting this endpoint, I can see that only one or the other of the tasks is ever called, but never both. I am running Huey with the default settings (Redis with 1 worker thread).
If I alter the code so that I am passing in the objects themselves as parameters, rather than the ids, and remove the Django queries from the @db_task methods, things seem to work all right.
The reason I initially used the ids as parameters is that I assumed (or read somewhere) that Huey uses JSON serialization by default, but after looking into it, pickle is actually the default serializer.
One theory is that since I am only running one worker, and also have a @db_periodic_task method in the app, the process can only handle listening for tasks or executing them at any given time, but not both. This is the way Celery seems to work, where you need a separate process each for the scheduler and the worker, but this isn't mentioned in Huey's documentation.
If you run the Huey consumer, it will actually spawn a separate scheduler alongside the number of workers you've specified, so that's not going to be your problem.
You're not giving enough information to properly see what's going wrong, so check the following:
If you run the Huey consumer in the terminal, observe whether all your tasks show up as properly registered, so that the consumer is actually capable of consuming them.
Check whether your Redis process is running.
Try performing the tasks with a blocking call to see on which task it fails:
task_result = tasks.make_players_friends_task(player_ids)
task_result.get(blocking=True)
task_result = tasks.send_notification_task(user.id, game.id)
task_result.get(blocking=True)
Do this with a debugger or print statements to see whether it makes it to the end of your function or where it gets stuck.
Make sure to always restart your consumer when you change code. It doesn't automatically pick up new code the way the Django dev server does. The fact that your code works as intended when pickling whole objects instead of passing ids could point to this, as it would be really weird for that change alone to break things. On the other hand, you shouldn't pass Django ORM objects into tasks anyway; your id-based approach makes much more sense.

Most efficient way of tracking player skill upgrades

So far I have investigated two different ways of persistently tracking player attribute skills in a game. These are mainly conceptual except for the threading option I came up with / found an example for.
The case:
I am solo-developing a web game: a geopolitical simulator, but with a little twist in comparison to others out there, which I won't reveal.
I'm using a combination of Flask and SQLAlchemy, for which I have written routes, with templates extending a base template dynamically.
Currently I am running it in dev mode locally, with the intention of putting it behind a WSGI server and a reverse proxy like Nginx on a cloud-based Linux VM.
About the player attribute mechanics: a player will submit a POST request which specifies a few bits of information. First we want to know which skill (intelligence, endurance, etc.). Next we need to know which player, but all of this will be determined automatically: we can use Flask-Login's LoginManager to get the current user with our nifty user_loader decorator and function. We can use the user ID it provides to query the rest, namely what level the player is. We can specify the math used to decide the wait-time increase (in seconds) later.
The options:
Option 1:
As suggested by a colleague of mine: allow the database to manage the timing of the skills. When the user submits the form, we insert a row into a new table that holds skill-upgrade information. We note the time the user submitted the form, multiply the current skill level by a factor of X amount of time, and put both pieces of data into the database. Then we create a new process that constantly checks this table. Using timedelta, we can check whether the time elapsed since the form was submitted is equal to or greater than the time the player must wait until the upgrade is complete.
Option 2:
Import threading and create a class which expects the same information as above supplied on init, then simply use time.sleep for X amount of time, fire the upgrade, and kill the thread when it's finished.
I hope this all makes sense. I haven't written either yet because I am undecided about which is the most efficient way around it.
I'm looking for the most scalable solution (even if it's not an option listed here), but one that is also practical, or an improvement on my concept of the skill-tracking mechanic.
I'm open to adding another lib to the package but I really would rather not.
I'll expand on my comment a little bit:
In terms of scalability:
What if the upgrade processes become very long? Hours or days?
What if you have a lot of users?
What if people disconnect and reconnect to sessions at different times?
Hopefully it is clear that you cannot ensure a robust process with option 2. Threading and waiting will put a continuous and potentially limiting load on the server, and if the server fails, all of those threads are likely to be lost.
In terms of robustness:
On the other hand, if you record all of the information in a database, you have the facility to cross-check the states of any items and perform upgrade/downgrade actions as deemed necessary by some form of task scheduler. This allows you to ensure that character states are always consistent with what you expect. And you only need one process to scan through the DB periodically and perform actions on all of the open rows flagged for an upgrade, as in the sketch below.
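For illustration only, here is a minimal sketch of such a periodic scan, assuming a hypothetical SkillUpgrade model with player, skill, finish_at and completed columns, and a Flask-SQLAlchemy session (none of these names come from the question):

from datetime import datetime

from myapp.models import db, SkillUpgrade  # hypothetical app module and model

def process_due_upgrades():
    # Run this from cron, APScheduler, Celery beat, or a simple loop in its own process.
    now = datetime.utcnow()
    due = SkillUpgrade.query.filter(
        SkillUpgrade.completed.is_(False),
        SkillUpgrade.finish_at <= now,
    ).all()
    for upgrade in due:
        upgrade.player.apply_skill(upgrade.skill)  # hypothetical helper on the player model
        upgrade.completed = True
    db.session.commit()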
You could, if you wanted, also avoid a global task scheduler altogether. When a user performs an activity on the site a little task could run in the background (as a kind of decorator) that checks the upgrade status and if the time is right performs the DB activity, otherwise just passes. But a user would need to be actively in a session to make sure this happens, as opposed to the scheduled task above.
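A comparable sketch of that second idea, reusing the same hypothetical model and hooking the check into normal request handling via Flask's before_request (so it only runs while a player is actively hitting the site):

from datetime import datetime

from flask_login import current_user

from myapp import app                          # hypothetical Flask app object
from myapp.models import db, SkillUpgrade      # hypothetical model, as above

@app.before_request
def complete_due_upgrades_for_current_user():
    if not current_user.is_authenticated:
        return
    due = SkillUpgrade.query.filter(
        SkillUpgrade.player_id == current_user.id,
        SkillUpgrade.completed.is_(False),
        SkillUpgrade.finish_at <= datetime.utcnow(),
    ).all()
    for upgrade in due:
        upgrade.completed = True               # plus whatever stat changes apply
    if due:
        db.session.commit()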

Django with Celery - existing object not found

I am having a problem with executing a Celery task from another Celery task.
Here is the problematic snippet (the data object already exists in the database; its attributes are just updated inside the finalize_data function):
def finalize_data(data):
    data = update_statistics(data)
    data.save()

    from apps.datas.tasks import optimize_data
    optimize_data.delay(data.pk)

# apps/datas/tasks.py
from celery import shared_task

@shared_task
def optimize_data(data_pk):
    data = Data.objects.get(pk=data_pk)
    # do something with data
The .get() call in the optimize_data function fails with "Data matching query does not exist."
If I do the retrieve-by-pk call inside the finalize_data function, it works fine. It also works fine if I delay the Celery task call for some time.
This line:
optimize_data.apply_async((data.pk,), countdown=10)
instead of
optimize_data.delay(data.pk)
works fine. But I don't want to use hacks in my code. Is it possible that the .save() call is asynchronously blocking access to that row/object?
I know this is an old post, but I stumbled on this problem today. Lee's answer pointed me in the right direction, but I think a better solution exists today.
Using the on_commit handler provided by Django, this problem can be solved without the hackish countdown in the code, which may leave readers wondering why it exists.
I'm not sure whether this existed when the question was posted, but I'm posting the answer so that people who come here in the future know about the alternative.
I'm guessing your caller is inside a transaction that hasn't committed before Celery starts to process the task. Hence Celery can't find the record. That is why adding a countdown makes it work.
A 1 second countdown will probably work as well as the 10 second one in your example. I've used 1 second countdowns throughout code to deal with this issue.
Another solution is to stop using transactions.
You could use an on_commit hook to make sure the Celery task isn't triggered until after the transaction commits.
See the Django docs: https://docs.djangoproject.com/en/stable/topics/db/transactions/#performing-actions-after-commit
It's a feature that was added in Django 1.9.
from django.db import transaction

def do_something():
    pass  # send a mail, invalidate a cache, fire off a Celery task, etc.

transaction.on_commit(do_something)
You can also wrap your function in a lambda:
transaction.on_commit(lambda: some_celery_task.delay('arg1'))
The function you pass in will be called immediately after a hypothetical database write made where on_commit() is called would be successfully committed.
If you call on_commit() while there isn’t an active transaction, the callback will be executed immediately.
If that hypothetical database write is instead rolled back (typically when an unhandled exception is raised in an atomic() block), your function will be discarded and never called.
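Applied to the snippet from the question, a minimal sketch (using the same finalize_data/optimize_data names) might look like this:

from django.db import transaction

def finalize_data(data):
    data = update_statistics(data)
    data.save()

    from apps.datas.tasks import optimize_data
    # Enqueue the task only once the surrounding transaction has committed,
    # so the worker is guaranteed to see the saved row.
    transaction.on_commit(lambda: optimize_data.delay(data.pk))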

Distributed Task Queue Based on Sets as a Data Structure instead of Lists

I'm wondering if there's a way to set up RabbitMQ or Redis to work with Celery so that when I send a task to the queue, it doesn't go into a list of tasks, but rather into a Set of tasks keyed based on the payload of my task, in order to avoid duplicates.
Here's my setup for more context:
Python + Celery. I've tried RabbitMQ as a backend; now I'm using Redis as the backend because I don't need 100% reliability, and it is easier to use, has a small memory footprint, etc.
I have roughly 1000 ids that need work done repeatedly. Stage 1 of my data pipeline is triggered by a scheduler and it outputs tasks for stage 2. The tasks contain just the id for which work needs to be done and the actual data is stored in the database. I can run any combination or sequence of stage 1 and stage 2 tasks without harm.
If stage 2 doesn't have enough processing power to deal with the volume of tasks output by stage 1, my task queue grows and grows. This wouldn't have to be the case if the task queue used sets as the underlying data structure instead of lists.
Is there an off-the-shelf solution for switching from lists to sets as distributed task queues? Is Celery capable of this? I recently saw that Redis has just released an alpha version of a queue system, so that's not ready for production use just yet.
Should I architect my pipeline differently?
You can use an external data structure to store and monitor the current state of your Celery queue.
Let's take a Redis key-value store as an example. Whenever you push a task into Celery, you mark a key with your 'id' field as true in Redis.
Before trying to push a new task with any 'id', you check whether the key for that 'id' is set in Redis; if it is, you skip pushing the task.
To clear the keys at the proper time, you can use Celery's after_return handler, which runs when the task has returned. This handler unsets the key for 'id' in Redis, hence clearing the lock for the next task push.
This method ensures you only have ONE instance of a task per id in the Celery queue. You can also enhance it to allow up to N tasks per id by using the INCR and DECR commands on the Redis key when the task is pushed and in the task's after_return.
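A rough sketch of that idea, assuming a local Redis instance, the redis-py client and Celery with a Redis broker (key names and task names are illustrative, not from the question):

import redis
from celery import Celery

app = Celery('pipeline', broker='redis://localhost:6379/0')
r = redis.Redis()

class DedupTask(app.Task):
    # after_return runs once the task has finished (success or failure)
    # and clears the per-id lock so the next push for this id is allowed.
    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        r.delete('stage2:queued:%s' % args[0])

@app.task(base=DedupTask)
def stage2_work(item_id):
    ...  # the actual stage-2 processing for this id

def enqueue_stage2(item_id):
    # SET with nx=True succeeds only if the key does not exist yet,
    # so at most one stage2_work task per id sits in the queue at a time.
    if r.set('stage2:queued:%s' % item_id, 1, nx=True):
        stage2_work.delay(item_id)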
Can your tasks in stage 2 check whether the work has already been done and, if it has, then not do the work again? That way, even though your task list will grow, the amount of work you need to do won't.
I haven't come across an off-the-shelf solution for the sets/lists question, and I'd think there are lots of other ways of getting around this issue.
Use a sorted set within Redis for your job queue. It is indeed a set, so if you put the exact same data inside, it won't add a new value (it absolutely needs to be the exact same data; you can't override the hashing Redis uses for sorted-set members).
You will need a score to use with the sorted set; you can use a timestamp (as a double, Unix time for instance), which will also let you fetch the most recent or oldest items if you want. ZRANGEBYSCORE is probably the command you will be looking for.
http://redis.io/commands/zrangebyscore
Moreover, if you need additional behaviour, you can wrap everything inside a Lua script for atomic behaviour and a custom eviction strategy if needed. For instance, a "get" script that fetches a job and removes it from the queue atomically, or evicts data if there is too much back pressure, etc.
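For illustration, a minimal redis-py sketch of that approach (the key name and payload format are made up; the pop is split into two commands here, so wrap it in a Lua script or MULTI/EXEC as suggested above if you need it to be atomic):

import time

import redis

r = redis.Redis()
QUEUE = 'stage2:jobs'  # illustrative key name

def push_job(item_id):
    # nx=True adds the member only if it is not already in the sorted set,
    # so re-pushing the same id never creates a duplicate entry.
    r.zadd(QUEUE, {str(item_id): time.time()}, nx=True)

def pop_oldest_job():
    # Lowest score (oldest timestamp) first.
    oldest = r.zrangebyscore(QUEUE, '-inf', '+inf', start=0, num=1)
    if not oldest:
        return None
    member = oldest[0]
    r.zrem(QUEUE, member)
    return member.decode()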

Set / get objects with memcached

In a Django Python app, I launch jobs with Celery (a task manager). When each job is launched, it returns an object (let's call it an instance of class X) that lets you check on the job and retrieve the return value or errors thrown.
Several people (someday, I hope) will be able to use this web interface at the same time; therefore, several instances of class X may exist at the same time, each corresponding to a job that is queued or running in parallel. It's difficult to come up with a way to hold onto these X objects because I cannot use a global variable (a dictionary that allows me to look up each X object from a key); this is because Celery uses different processes, not just different threads, so each would modify its own copy of the global table, causing mayhem.
Subsequently, I received the great advice to use memcached to share the memory across the tasks. I got it working and was able to set and get integer and string values between processes.
The trouble is this: after a great deal of debugging today, I learned that memcached's set and get don't seem to work for classes. This is my best guess: perhaps under the hood memcached serializes objects to the shared memory; class X (understandably) cannot be serialized because it points at live data (the status of the job), and so the serialized version may be out of date (i.e. it may point to the wrong place) when it is loaded again.
Attempts to use a SQLite database were similarly fruitless; not only could I not figure out how to serialize objects as database fields (using my Django models.py file), I would be stuck with the same problem: the handles of the launched jobs need to stay in RAM somehow (or use some fancy OS tricks underneath), so that they update as the jobs finish or fail.
My best guess is that (despite the advice that thankfully got me this far) I should be launching each job in some external queue (for instance Sun/Oracle Grid Engine). However, I couldn't come up with a good way of doing that without using a system call, which I thought may be bad style (and potentially insecure).
How do you keep track of jobs that you launch in Django or Django Celery? Do you launch them by simply putting the job arguments into a database and then have another job that polls the database and runs jobs?
Thanks a lot for your help, I'm quite lost.
I think django-celery does this work for you. Did you have a look at the tables created by django-celery? For example, djcelery_taskstate holds all the data for a given task, such as state, worker_id and so on. For periodic tasks there is a table called djcelery_periodictask.
In a Django view you can access the TaskMeta object:
from djcelery.models import TaskMeta

task = TaskMeta.objects.get(task_id=task_id)
print(task.status)
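If you are not using django-celery, a similar lookup works with plain Celery by storing just the task id (a short string, which memcached or a plain model field can hold without any pickling trouble) and rebuilding a result handle from it. A minimal sketch, assuming a configured Celery app and a hypothetical some_task:

from django.core.cache import cache
from celery.result import AsyncResult

# When launching the job, keep only the id around:
async_result = some_task.delay(arg1, arg2)      # hypothetical task and arguments
cache.set('job:%s' % job_key, async_result.id)  # job_key is whatever you key jobs by

# Later, in any process, rebuild a handle from that id:
result = AsyncResult(cache.get('job:%s' % job_key))
print(result.status)       # e.g. PENDING / STARTED / SUCCESS / FAILURE
if result.ready():
    print(result.get())    # the return value (re-raises the task's exception on failure)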
