How to make concurrent web service calls from a Django view? [duplicate]

I have a Django view which needs to retrieve search results from multiple web services, blend the results together, and render them. I've never done any multithreading in Django before. What is a modern, efficient, safe way of doing this?
I don't know anything about it yet, but gevent seems like a reasonable option. Should I use that? Does it play well with Django? Should I look elsewhere?

Not sure about gevent. The simplest way is to use threads[*]. Here's a simple example of how to use threads in Python:
# std lib modules. "Batteries included" FTW.
import threading
import time

thread_result = -1

def ThreadWork():
    global thread_result
    thread_result = 1 + 1
    time.sleep(5)  # phew, I'm tired after all that addition!

my_thread = threading.Thread(target=ThreadWork)
my_thread.start()  # This will call ThreadWork in the background.

# In the meantime, you can do other stuff.
y = 2 * 5  # Completely independent calculation.

my_thread.join()  # Wait for the thread to finish doing its thing.
# This should take about 5 seconds, due to time.sleep being called.
print "thread_result * y =", thread_result * y
You can start multiple threads, have each make different web service calls, and join on all of those threads. Once all those join calls have returned, the results are in, and you'll be able to blend them.
more advanced tips: You should call join with a timeout; otherwise, your users might be waiting indefinitely for your app to send them a response. Even better would be for you to make those web service calls before the request arrives at your app; otherwise, the responsiveness of your app is at the mercy of the services that you rely on.
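For example, a minimal sketch of that pattern (the service URLs and the 10/15-second timeouts are made-up placeholders, not from the question):

import threading
import urllib2

urls = ['http://service-a.example.com/search?q=foo',
        'http://service-b.example.com/search?q=foo']
results = [None] * len(urls)

def fetch(i, url):
    # Each thread writes to its own slot, so no lock is needed here.
    try:
        results[i] = urllib2.urlopen(url, timeout=10).read()
    except Exception:
        pass  # leave the slot as None and blend whatever we got

threads = [threading.Thread(target=fetch, args=(i, url))
           for i, url in enumerate(urls)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=15)  # don't let one slow service hang the response

# Slots still holding None timed out or failed; blend the rest.
blended = [r for r in results if r is not None]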
caveat about threading in general: Be careful with data that can be accessed by two (or more) different threads. Access to the same data needs to be "synchronized". The most popular synchronization device is a lock, but there is a plethora of others. threading.Lock implements a lock. If you're not careful about synchronization, you're likely to write a "race condition" into your app. Such bugs are notoriously difficult to debug, because they cannot be reliably reproduced.
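For illustration, here is the smallest possible use of a lock to guard shared state (a made-up counter, not from the example above):

import threading

counter = 0
counter_lock = threading.Lock()

def increment():
    global counter
    with counter_lock:  # only one thread at a time may run this block
        counter += 1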
In my simple example, thread_result was shared between my_thread and the main thread. I didn't need any locks, because the main thread did not access thread_result until my_thread terminated. If I hadn't called my_thread.join, the result would sometimes be -10 instead of 20. Go ahead and try it yourself.
[*] Python (or rather, CPython, because of its Global Interpreter Lock) doesn't have true thread parallelism: concurrent threads do not execute simultaneously, even if you have idle cores. However, you still get concurrent execution; when one thread is blocked (e.g., waiting on I/O), other threads can execute.

I just nicely solved this problem using concurrent.futures, in the standard library since Python 3.2 and backported to earlier versions, including 2.x, as the futures package on PyPI.
In my case I was retrieving results from an internal service and collating them:
import os
import urllib2

from concurrent import futures  # stdlib in 3.2+; "pip install futures" on 2.x
from django.core.urlresolvers import reverse

def _getInfo(request, key):
    return urllib2.urlopen(
        'http://{0[SERVER_NAME]}:{0[SERVER_PORT]}'.format(request.META) +
        reverse('my.internal.view', args=(key,)),
        timeout=30)

…

with futures.ThreadPoolExecutor(
        max_workers=os.sysconf('SC_NPROCESSORS_ONLN')) as executor:
    futureCalls = dict((key, executor.submit(_getInfo, request, key))
                       for key in myListOfItems)

for key in myListOfItems:
    curInfo = futureCalls[key]
    if curInfo.exception() is not None:
        # e.g. log "exception calling for info: {0}".format(curInfo.exception())
        pass
    else:
        result = curInfo.result()  # Handle the result…

gevent will not help you process the task faster. It is just more efficient than threads when it comes to resource footprint. When running gevent with Django (usually via gunicorn), your web app will be able to handle more concurrent connections than a normal Django WSGI app.
But: I think this has nothing to do with your problem. What you want to do is handle a huge task in one Django view, which is usually not a good idea. I personally advise against using threads or gevent's greenlets for this in Django. I see the point for standalone Python scripts or daemons or other tools, but not for the web. It mostly results in instability and a larger resource footprint. Instead, I agree with the comments of dokkaebi and Andrew Gorcester. The two comments differ somewhat, though, since the right choice really depends on what your task is about.
If you can split your task into many smaller tasks, you could create multiple views handling these subtasks. These views could return something like JSON and be consumed via AJAX from your frontend. That way you can build up the content of your page as it "comes in", and the user does not need to wait until the whole page has loaded.
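A rough sketch of one such subtask view (the view and the query_service_a helper are invented for illustration):

import json
from django.http import HttpResponse

def search_service_a(request):
    # query_service_a is a hypothetical helper that hits just one backend.
    results = query_service_a(request.GET.get('q', ''))
    return HttpResponse(json.dumps({'results': results}),
                        content_type='application/json')

The frontend then fires one AJAX request per backend and renders each payload as it arrives.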
If your task is one huge chunk, you are better off with a task queue handler. Celery comes to mind. If Celery is overkill, you can use ZeroMQ. This basically works as Andrew described above: you schedule the task for processing and poll the backend from your frontend page until the task is finished (usually via AJAX as well). You could also use something like long polling here.

Related

Is it a bad practice to use sleep() in a web server in production?

I'm working with Django 1.8 and Python 2.7.
In a certain part of the project, I open a socket and send some data through it. Due to the way the other end works, I need to leave some time (let's say 10 milliseconds) between each chunk of data that I send:
while True:
    send(data)
    sleep(0.01)
So my question is: is it considered bad practice to simply use sleep() to create that pause? Is there maybe some other, more efficient approach?
UPDATED:
The reason I need to create that pause is that the other end of the socket is an external service that takes some time to process the chunks of data I send. I should also point out that it doesn't return anything after having received, let alone processed, the data. Leaving that brief pause ensures that each chunk of data I send gets properly processed by the receiver.
EDIT: changed the sleep to 0.01.
Yes, this is bad practice and an anti-pattern. You will tie up the "worker" which is processing this request for an unknown period of time, which will make it unavailable to serve other requests. The classic pattern for web applications is to service a request as-fast-as-possible, as there is generally a fixed or max number of concurrent workers. While this worker is continually sleeping, it's effectively out of the pool. If multiple requests hit this endpoint, multiple workers are tied up, so the rest of your application will experience a bottleneck. Beyond that, you also have potential issues with database locks or race conditions.
The standard approach to handling your situation is to use a task queue like Celery. Your web-application would tell Celery to initiate the task and then quickly finish with the request logic. Celery would then handle communicating with the 3rd party server. Django works with Celery exceptionally well, and there are many tutorials to help you with this.
If you need to provide information to the end-user, then you can generate a unique ID for the task and poll the result backend for an update by having the client refresh the URL every so often. (I think Celery will automatically generate a guid, but I usually specify one.)
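A hedged sketch of that flow (the task and view names are invented; a configured Celery app and result backend are assumed):

# tasks.py
from celery import shared_task

@shared_task
def send_chunks(data):
    # Talk to the external service here, pauses and all,
    # without tying up a web worker.
    pass

# views.py
from celery.result import AsyncResult
from django.http import HttpResponse

def start_send(request):
    result = send_chunks.delay(request.POST['data'])
    return HttpResponse(result.id)  # the client polls using this id

def poll_send(request, task_id):
    result = AsyncResult(task_id)
    return HttpResponse('done' if result.ready() else 'pending')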
Like most things, short answer: it depends.
Slightly longer answer:
If you're running it in an environment where you have many (50+ for example) connections to the webserver, all of which are triggering the sleep code, you're really not going to like the behavior. I would strongly recommend looking at using something like celery/rabbitmq so Django can dump the time delayed part onto something else and then quickly respond with a "task started" message.
If this is production, but you're the only person hitting the webserver, it still isn't great design, but if it works, it's going to be hard to justify the extra complexity of the task queue approach mentioned above.

Running asynchronous python code in a Django web application

Is it OK to run certain pieces of code asynchronously in a Django web app? If so, how?
For example:
I have a search algorithm that returns hundreds or thousands of results. I want to record in the database that these items were the results of the search, so I can see what users are searching for most. I don't want the client to have to wait for an extra hundred or thousand database inserts. Is there a way I can do this asynchronously? Is there any danger in doing so? Is there a better way to achieve this?
As far as Django is concerned, yes.
The bigger concern is your web server and whether it plays nicely with threading. For instance, gunicorn's sync workers are single-threaded, but there are other worker types, such as the greenlet-based ones, and I'm not sure how well those play with threads.
Combining threading and multiprocessing can be an issue if you're forking from threads:
Status of mixing multiprocessing and threading in Python
http://bugs.python.org/issue6721
That being said, I know of popular performance-analytics utilities that use threads to report metrics, so it seems to be an accepted practice.
In sum, it seems safest to use the threading.Thread object from the standard library, as long as whatever you do in it doesn't fork (e.g., via Python's multiprocessing library):
https://docs.python.org/2/library/threading.html
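A minimal sketch of that offload for the search-logging case in the question (the SearchHit model and the run_search/render_results helpers are assumptions, not from the question):

import threading

def log_search_results(query, result_ids):
    # Runs in a background thread; the response never waits for these inserts.
    for rid in result_ids:
        SearchHit.objects.create(query=query, result_id=rid)  # hypothetical model

def search_view(request):
    results = run_search(request.GET['q'])  # hypothetical search helper
    t = threading.Thread(target=log_search_results,
                         args=(request.GET['q'], [r.id for r in results]))
    t.daemon = True  # don't block process shutdown on this thread
    t.start()
    return render_results(request, results)  # respond immediately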
Offloading requests from the main thread is a common practice, as the end goal is to return a result to the client (browser) as quickly as possible.
As I am sure you are aware, HTTP is blocking - so until you return a response, the client cannot do anything (it is blocked, in a waiting state).
The de-facto way of offloading requests is through Celery, which is a task-queuing system.
I highly recommend you read the introduction to celery topic, but in summary here is what happens:
You mark certain pieces of code as "tasks". These are usually functions that you want to run asynchronously.
Celery manages workers - you can think of them as threads - that will run these tasks.
To communicate with the workers, a message broker is required; RabbitMQ is the one most often recommended.
Once you have all the components running (it takes but a few minutes); your workflow goes like this:
In your view, when you want to offload some work, you call the function that does that work with .delay(). This triggers a worker to start executing the method in the background.
Your view then returns a response immediately.
You can then check for the result of the task, and take appropriate actions based on what needs to be done. There are ways to track progress as well.
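In code, that workflow looks roughly like this (the task name is invented; a working Celery setup is assumed):

from celery import shared_task

@shared_task
def record_results(query, result_ids):
    # The expensive inserts happen in a worker, not in the web process.
    pass

# In the view:
async_result = record_results.delay(query, result_ids)

# Later, e.g. in a polling view that looked the task up by async_result.id:
if async_result.ready():
    value = async_result.result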
It is also good practice to include caching - so that you are not executing expensive tasks unnecessarily. For example, you might choose to offload a request to do some analytics on search keywords that will be placed in a report.
Once the report is generated, I would cache the results (if applicable) so that the same report can be displayed if requested later - rather than be generated again.
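A sketch of that caching with Django's cache framework (the key scheme and the build_report helper are assumptions):

from django.core.cache import cache

def report_for(keyword):
    key = 'report:%s' % keyword
    report = cache.get(key)
    if report is None:
        report = build_report(keyword)   # hypothetical expensive call
        cache.set(key, report, 60 * 60)  # keep it around for an hour
    return report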

What are alternative choices for a background worker / async deferred task queue in Django/WSGI besides Celery?

Are there any pure-WSGI implementations of background tasks?
I want to use local variables in the same context directly, not serialize/deserialize them to another daemon process via a broker.
Is it possible to make this happen within the current WSGI infrastructure? E.g., after the response is returned/yielded, run some callback functions?
This is a duplicate of a question asked on the Python WEB-SIG. I reference the same page as was provided in response to the question on the Python WEB-SIG so others can see it:
http://code.google.com/p/modwsgi/wiki/RegisteringCleanupCode
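The technique on that page boils down to wrapping the WSGI response iterable so that a callback runs when the server calls close() after the response has been sent; a generic sketch under that assumption (the class and callback names are mine):

class ExecuteOnCompletion(object):
    # WSGI middleware: run `callback` after the response is fully sent.
    def __init__(self, application, callback):
        self.application = application
        self.callback = callback

    def __call__(self, environ, start_response):
        result = self.application(environ, start_response)
        return CallbackOnClose(result, self.callback)

class CallbackOnClose(object):
    def __init__(self, result, callback):
        self.result = result
        self.callback = callback

    def __iter__(self):
        for chunk in self.result:
            yield chunk

    def close(self):
        try:
            if hasattr(self.result, 'close'):
                self.result.close()
        finally:
            self.callback()  # the server calls close() after sending the response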
In doing this, though, you tie up the request thread, so it will not be able to handle other requests until your task has finished.
Creating background threads at the end of a request is not a good idea unless you do it using a pooling mechanism that limits the number of worker threads for your tasks. And because the process can crash or be shut down, you can lose the job: it exists only in memory and is not persistent.
Better to use Celery, or if you think that is too heavy weight, have a look at Redis Queue (RQ) instead.
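For reference, the RQ flavour of this is only a few lines (the process_data function and payload are illustrative):

from redis import Redis
from rq import Queue

q = Queue(connection=Redis())
job = q.enqueue(process_data, payload)  # process_data: any importable function

A separate rqworker process picks the job up from Redis and runs it.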
You could look at Django async. It uses an in-database queue and so handles transactions much better. All arguments need to be JSONable as does the return type. In some cases this means you may need to schedule a wrapper function, but that oughtn't to cause you any headaches.
http://pypi.python.org/pypi/django-async
You don't want to be doing this sort of thing inside the web server -- it's absolutely not the right place to do it. Django async provides a manage.py command for flushing the queue, which you can run in a loop, possibly on another machine from the web server.

Python - question regarding the concurrent use of `multiprocess`

I want to use Python's multiprocessing to do concurrent processing without using locks (locks to me are the opposite of multiprocessing), because I want to build multiple reports from different resources at the exact same time during a web request (normally this takes about 3 seconds, but with multiprocessing I can do it in 0.5 seconds).
My problem is that, if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at the same time (which would crash the system). Is this just the common sense result of using multiprocessing, or is there a trick to get around this potential nightmare?
Thanks
If you're really worried about having too many instances you could think about protecting the call with a Semaphore object. If I understand what you're doing then you can use the threaded semaphore object:
from threading import Semaphore

sem = Semaphore(10)

with sem:
    make_multiprocessing_call()
I'm assuming that make_multiprocessing_call() will cleanup after itself.
This way only 10 "extra" instances of python will ever be opened, if another request comes along it will just have to wait until the previous have completed. Unfortunately this won't be in "Queue" order ... or any order in particular.
Hope that helps
You are barking up the wrong tree if you are trying to use multiprocess to add concurrency to a network app. You are barking up a completely wrong tree if you're creating processes for each request. multiprocess is not what you want (at least as a concurrency model).
There's a good chance you want an asynchronous networking framework like Twisted.
Locks are only ever necessary if you have multiple agents writing to a resource. If they are just reading, locks are not needed (and, as you said, defeat the purpose of multiprocessing).
Are you sure that would crash the system? On a web server using CGI, each request spawns a new process, so it's not unusual to see thousands of simultaneous processes (granted in python one should use wsgi and avoid this), which do not crash the system.
I suggest you test your theory -- it shouldn't be difficult to manufacture 10 simultaneous accesses -- and see if your server really does crash.

Python: Architecture for url polling and posting

I have a simple problem. I have to fetch a url (about once a minute), check if there is any new content, and if there is, post it to another url.
I have a working system with a cronjob every minute that basically does:
count, post_count = 0, 0
for link in models.Link.objects.filter(enabled=True).select_related():
    # do it in two phases in case there is cross pollination
    # get posts
    twitter_posts, meme_posts = [], []
    if link.direction == "t2m" or link.direction == "both":
        twitter_posts = utils.get_twitter_posts(link)
    if link.direction == "m2t" or link.direction == "both":
        meme_posts = utils.get_meme_posts(link)
    # process them
    if len(twitter_posts) > 0:
        post_count += views.twitter_link(link, twitter_posts)
    if len(meme_posts) > 0:
        post_count += views.meme_link(link, meme_posts)
    count += 1
msg = "%s links crawled and %s posts updated" % (count, post_count)
This works great for the 150 users I have now, but the synchronous nature of it scares me. I have URL timeouts built in, but at some point my cronjob will take more than a minute, and I'll be left with a million of them running and overwriting each other.
So, how should I rewrite it?
Some issues:
I don't want to hit the APIs too hard in case they block me. So I'd like to have at most 5 open connections to any API at any time.
Users keep registering in the system as this runs, so I need some way to add them
I'd like this to scale as well as possible
I'd like to reuse as much existing code as I can
So, some thoughts I've had:
Spawn a thread for each link
Use python-twisted - keep one long-running process, and have the cronjob just make sure it stays up.
Use stackless - Don't really know much about it.
Ask StackOverflow :)
How would you do this?
Simplest: use a long-running process with sched (on its own thread) to handle the scheduling -- by posting requests to a Queue; have a fixed-size pool of threads (you can find a pre-made thread pool here, but it's easy to tweak it or roll your own) taking requests from the Queue (and returning results via a separate Queue). Registration and other system functions can be handled by a few more dedicated threads, if need be.
Threads aren't so bad, as long as (a) you never have to worry about synchronization among them (just have them communicate by intrinsically thread-safe Queue instances, never sharing access to any structure or subsystem that isn't strictly read-only), and (b) you never have too many (use a few dedicated threads for specialized functions, including scheduling, and a small thread-pool for general work -- never spawn a thread per request or anything like that, that will explode).
Twisted can be more scalable (at low hardware costs), but if you hinge your architecture on threading (and Queues) you have a built-in way to grow the system (by purchasing more hardware) to use the very similar multiprocessing module instead... almost a drop-in replacement, and a potential scaling up of orders of magnitude!-)
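A condensed sketch of the threading/Queue architecture described above (the pool size and the poll_link/current_links helpers are illustrative, and the scheduler runs in the main thread for brevity):

import sched
import threading
import time
from Queue import Queue  # "queue" on Python 3

POOL_SIZE = 5  # also caps the number of open connections at once
requests_q = Queue()
results_q = Queue()

def worker():
    while True:
        link = requests_q.get()         # blocks until a job arrives
        results_q.put(poll_link(link))  # poll_link: hypothetical fetch/post
        requests_q.task_done()

for _ in range(POOL_SIZE):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

scheduler = sched.scheduler(time.time, time.sleep)

def enqueue_all():
    for link in current_links():  # hypothetical: re-read links each pass,
        requests_q.put(link)      # so newly registered users get picked up
    scheduler.enter(60, 1, enqueue_all, ())  # run again in a minute

scheduler.enter(0, 1, enqueue_all, ())
scheduler.run()  # long-running; cron only has to make sure it stays alive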
