Why is there no response_finished signal in Django? - python

I am trying to implement some kind of background task queue in Django, because Celery is too huge and complex for my needs. Then it occurred to me that there is already a signal called request_finished:
https://docs.djangoproject.com/en/dev/ref/signals/#django.core.signals.request_finished
But why does Django not have a signal called response_finished?
Django may be synchronous, but I could still do some post-response data processing and saving tasks; it would only take a few more steps.
Is there a way to hack in some post-response work in Django?
TIA

You can write your own middleware (specifically using process_response) if you need to perform tasks after the response has been assembled. There would be no point in having a signal handler after the response is 'finished', as by that stage you have already executed your view and rendered your template.
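For illustration, a minimal sketch of such a middleware using the old-style process_response hook mentioned above; the logging call stands in for whatever post-processing you need, and the class name is illustrative:

import logging

logger = logging.getLogger(__name__)

class PostResponseMiddleware(object):
    def process_response(self, request, response):
        # The response is fully assembled at this point, but it has NOT been
        # sent to the client yet, so anything slow here still delays them.
        logger.info("Built %s response for %s", response.status_code, request.path)
        return response

You would then add the dotted path to this class to MIDDLEWARE_CLASSES (or MIDDLEWARE on newer Django versions). Note that it still runs before the response is actually written to the socket.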

Since no one has answered this, here are my own conclusions:
https://groups.google.com/d/topic/python-web-sig/OEahWtsPwq4/discussion
It's basically WSGI design behaviour: WSGI does not care what happens after the response iterator stops.
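A rough sketch of the workaround discussed in that thread, assuming you are willing to wrap the WSGI application yourself: the WSGI server calls close() on the response iterable after it has finished sending the body, so a wrapper can run a callback at that point (names below are illustrative):

class ClosingIterator(object):
    def __init__(self, iterable, callback):
        self.iterable = iterable
        self.callback = callback

    def __iter__(self):
        return iter(self.iterable)

    def close(self):
        # Called by the WSGI server once the response has been sent.
        try:
            if hasattr(self.iterable, 'close'):
                self.iterable.close()
        finally:
            self.callback()

def with_post_response_callback(application, callback):
    def wrapper(environ, start_response):
        return ClosingIterator(application(environ, start_response), callback)
    return wrapper

In a Django project you would wrap the object returned by get_wsgi_application() in wsgi.py. The callback still runs in the same worker, so it keeps that worker busy until it finishes.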

Related

Queue in django/python

I have developed a Django REST API. I send a request/data to it to perform a task and it does so nicely. I can also send it multiple requests/data to perform the task on each of them. The issue is that the server where the task gets performed has limited memory, so I need to perform these tasks one by one. I am therefore thinking of adding a queue to the Django pipeline which can hold requests until the task at the front of the queue is done.
I am not sure if I am on the right path; is Celery the option to solve my issue?
It seems a simple task and I don't understand whether Celery is what I need. Can you point me to what I should be looking at?
If you want to keep your API as it is, then Celery will not help you. It's a good idea to keep API calls as short as possible; if you have some longer job done during an API call (sending emails, for example), then you are better off using Celery. But then the only thing you can get as an immediate response from your API is that the task was queued.
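For illustration, a minimal sketch, assuming a standard Celery setup alongside the DRF project (task and variable names are illustrative):

from celery import shared_task

@shared_task
def run_heavy_job(payload):
    # Stand-in for the real memory-hungry work.
    return len(str(payload))

# In the DRF view, enqueue and answer immediately:
#   run_heavy_job.delay(request.data)
#   return Response({"status": "queued"}, status=202)

Starting the worker with celery -A proj worker --concurrency=1 (proj being your project module) makes it process one task at a time, which gives you the one-by-one behaviour on the memory-constrained machine.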

Django: How to ignore tasks with Celery?

Without changing the code itself, is there a way to ignore tasks in Celery?
For example, when using Django mails, there is a dummy backend setting. This is perfect since it allows me, from a .env file, to deactivate mail sending in some environments (like testing or staging). The code that handles mail sending is not changed with if statements or decorators.
For Celery tasks, I know I could do it in code using mocks or decorators, but I'd like to do it in a clean way that is 12-factor compliant, like with Django mails. Any idea?
EDIT to explain why I want to do this:
One of the main motivations behind this is the coupling it creates between the Django web server and the Celery tasks.
For example, when running unit tests, if the broker server (Redis for me) is not running, then any call to delay() freezes forever, because there is no timeout when Celery tries to send a task to Redis.
From an architecture point of view, this is very bad. I'd like my unit tests to run properly without requiring a running Celery broker!
Thanks!
As far as the coupling is concerned, your Django application would still be tied to Celery if you used a dummy backend; your tasks just wouldn't execute. Maybe this is acceptable in your case, but in my opinion it can cause some problems. For example, if the piece of code you are trying to test submits a task to Celery and then, in a later part, tries to retrieve the result of that task, it will fail, because the dummy backend never executes the task.
For unit testing, as you mentioned in your question, you can use the task_always_eager setting. If you turn it on, your Django app will no longer depend upon a running worker. It will execute tasks in the same thread in a synchronous fashion and return the result.
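For the 12-factor angle, a minimal sketch, assuming a Django-style Celery setup configured with app.config_from_object('django.conf:settings', namespace='CELERY'); the environment variable name is illustrative:

# settings.py
import os

# Run tasks eagerly, in-process, when the env var says so (tests, staging).
CELERY_TASK_ALWAYS_EAGER = os.environ.get("CELERY_TASK_ALWAYS_EAGER", "false").lower() == "true"
# Re-raise exceptions from eager tasks instead of swallowing them.
CELERY_TASK_EAGER_PROPAGATES = CELERY_TASK_ALWAYS_EAGER

With the flag on, delay() never touches the broker, so the missing-Redis freeze in the unit tests goes away.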

Celery django explanation

I have been learning about Django recently and have stumbled upon Celery. I don't seem to understand what it does. I've been to their site to no avail. Can anyone explain the concept and its real-world applications (in simple terms)?
Celery is an "asynchronous task queue/job queue based on distributed message passing". It is just a task queue: something you put tasks into, to be done as soon as possible. You have a Celery instance that you integrate directly with your Django or Python app; this is what you use to talk to Celery. You then configure Celery to have 'workers' that perform the tasks you give them. The whole point is to be able to handle tasks that don't fit well within the normal request/response cycle that Django handles so well.
What kinds of tasks are these? Well, as said before, they don't fit into the normal request/response cycle. The best example I can think of is email: if you're building a web app and you want to keep your users, you need to keep them engaged and coming back, and a good way to do that is by sending emails. You send them once a week or once a day, and users can maybe configure when. This would fit horribly within the request/response cycle, but it's perfect for something like Celery (see the sketch below).
Other examples are long-running jobs with lots of computation. While you would typically use something like Hadoop for really big computations, you can schedule some queries with Celery. You could also use it to schedule builds if you're doing something like Travis. The uses go on and on, but you probably get the point.
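A minimal sketch of the email example, assuming a standard Celery-with-Django setup (the task name and addresses are illustrative):

from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_weekly_digest(user_email):
    # Runs in a worker process, outside the request/response cycle.
    send_mail(
        subject="Your weekly digest",
        message="Here is what you missed this week.",
        from_email="noreply@example.com",
        recipient_list=[user_email],
    )

A view (or a periodic beat schedule) calls send_weekly_digest.delay(user.email) and returns immediately; the worker does the slow SMTP round-trip.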

How do I collect up logs for an App Engine request using Python logging?

Using Google App Engine, Python 2.7, threadsafe:true, webapp2.
I would like to include all logging.XXX() messages in my API responses, so I need an efficient way to collect all the log messages that occur during the scope of a request. I also want to operate with threadsafe:true, so I need to be careful to collect only the right log messages.
Currently, my strategy is to add a logging.Handler at the start of my webapp2 dispatch method, and then remove it at the end. To collect logs only for my thread, I instantiate the logging.Handler with the name of the current thread; the handler will simply throw out log records that are from a different thread. I am using thread name and not thread ID because I was getting some unexpected results on dev_appserver when using the ID.
Questions:
Is it efficient to constantly be adding/removing logging.Handler objects in this fashion? I.e., every request will add, then remove, a Handler. Is this "cheap"?
Is this the best way to get only the logging messages for my request? My big assumption is that each request gets its own thread, and that thread name will actually select the right items.
Am I fundamentally misunderstanding Python logging? Perhaps I should only have a single additional Handler added once at the "module-level" statically, and my dispatch should do something lighter.
Any advice is appreciated. I don't have a good understanding of what Python (and specifically App Engine Python) does under the hood with respect to logging. Obviously, this is eminently possible because the App Engine Log Viewer does exactly the same thing: it displays all the log messages for that request. In fact, if I could piggyback on that somehow, that would be even better. It absolutely needs to be super-cheap though - i.e., an RPC call is not going to cut it.
I can add some code if that will help.
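For reference, a minimal sketch of the per-thread collector strategy described above (the class and the dispatch wiring are illustrative, not App Engine APIs):

import logging
import threading

class ThreadLogCollector(logging.Handler):
    """Keeps formatted records emitted from one named thread."""
    def __init__(self, thread_name):
        logging.Handler.__init__(self)
        self.thread_name = thread_name
        self.records = []

    def emit(self, record):
        # Drop records coming from other requests' threads.
        if record.threadName == self.thread_name:
            self.records.append(self.format(record))

# Inside the webapp2 handler's dispatch():
#   collector = ThreadLogCollector(threading.current_thread().name)
#   logging.getLogger().addHandler(collector)
#   try:
#       super(MyHandler, self).dispatch()
#   finally:
#       logging.getLogger().removeHandler(collector)
#   # collector.records now holds this request's messages.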
I found lots of goodness here:
from google.appengine.api import logservice

# Parses the log lines buffered for the current request by the SDK's
# in-process log service buffer.
entries = logservice.logs_buffer().parse_logs()

What's an alternative choice for a background worker / async deferred task queue in django/wsgi, besides celery?

Are there any pure WSGI implementations of background tasks?
I want to use local variables in the same context directly, not serialize/deserialize them to another daemon process via a broker.
Is it possible to make this happen under the current WSGI infrastructure? E.g., after the response has been returned/yielded, run some callback functions?
This is a duplicate of a question asked on the Python WEB-SIG. I am referencing the same page that was provided in response to the question on the Python WEB-SIG so others can see it:
http://code.google.com/p/modwsgi/wiki/RegisteringCleanupCode
In doing this, though, it ties up the request thread, and so that thread would not be able to handle other requests until your task has finished.
Creating background threads at the end of a request is not a good idea unless you do it with a pooling mechanism that limits the number of worker threads for your tasks. And because the process can crash or be shut down, you lose the job: it exists only in memory and is not persistent.
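A minimal sketch of that pooling caveat, assuming a plain in-process pool (names are illustrative); the persistence warning above still applies, since jobs die with the process:

from concurrent.futures import ThreadPoolExecutor

# One module-level pool caps how many background jobs run in the web process.
_pool = ThreadPoolExecutor(max_workers=2)

def fire_and_forget(func, *args, **kwargs):
    # The Future is deliberately ignored, so errors must be logged inside
    # func itself; nobody will ever check the result.
    _pool.submit(func, *args, **kwargs)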
Better to use Celery or, if you think that is too heavyweight, have a look at Redis Queue (RQ) instead.
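For comparison, a minimal RQ sketch (the function and payload names are illustrative); note the work is still serialized out to a separate worker process started with the rq worker command:

from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

def do_post_response_work(payload):
    # Runs inside the rq worker process, not in the web server.
    return len(str(payload))

# In the view:
#   queue.enqueue(do_post_response_work, some_payload)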
You could look at Django async. It uses an in-database queue and so handles transactions much better. All arguments need to be JSON-serializable, as does the return type. In some cases this means you may need to schedule a wrapper function, but that oughtn't to cause you any headaches.
http://pypi.python.org/pypi/django-async
You don't want to be doing this sort of thing inside the web server; it's absolutely not the right place for it. Django async provides a manage.py command for flushing the queue, which you can run in a loop, possibly on another machine from the web server.
