Submitting multiple POST requests in multiple threads in Python/Django

My development stack is Django/Python (that includes Django REST framework).
I am currently looking for ways to make multiple distinct API calls.
client.py
import requests

def submit_order(list_of_orders):
    # submit each element in list_of_orders, ideally in a thread
    for order in list_of_orders:
        try:
            response = requests.post("https://www.confidential.url.com", data=order)
        except requests.RequestException:
            pass  # retry again
        else:
            if response.status_code != 200:
                pass  # retry again
In the above method I am currently submitting the orders one by one; I want to submit them all at once. Secondly, I want to retry a submission x times if it fails.
I currently do not know how best to achieve this.
I am looking for ways that Python libraries or Django applications already provide, rather than re-inventing the wheel.
Thanks

As @Selcuk said, you can try django-celery, which is the recommended approach in my opinion, but you will need to do some configuration and read some manuals.
On the other hand, you can try using multiprocessing like this:
from multiprocessing import Pool

def process_order(order):
    # handle each order here, doing requests.post and then retrying if necessary
    pass

def submit_order(list_of_orders):
    orders_pool = Pool(len(list_of_orders))
    results = orders_pool.map(process_order, list_of_orders)
    # do something with the results here
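For the retry part of the question, here is a minimal sketch of what process_order could look like, assuming the same confidential URL and a hypothetical MAX_RETRIES limit:

import requests

MAX_RETRIES = 3  # hypothetical "x times" retry limit

def process_order(order):
    # try to submit the order, retrying up to MAX_RETRIES times on failure
    for attempt in range(MAX_RETRIES):
        try:
            response = requests.post("https://www.confidential.url.com", data=order)
        except requests.RequestException:
            continue  # network error, try again
        if response.status_code == 200:
            return response
    return None  # all retries failed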
It will depend on what you need to get done. If the request operations can run in the background and your API user can be notified later, just use django-celery and notify the user accordingly; but if you want a simple approach that reacts immediately, you can use the one I "prototyped" for you.
Bear in mind that the responses will take some time (as you are doing POST requests), so make sure the number of POST requests doesn't grow too large, because it could affect the experience of the API clients calling your services.
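If processes feel heavyweight for what is mostly I/O-bound work, a thread pool from the standard library (concurrent.futures) is another option; a minimal sketch, reusing the process_order above:

from concurrent.futures import ThreadPoolExecutor

def submit_order(list_of_orders):
    # submit all orders concurrently with a bounded number of threads
    with ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(process_order, list_of_orders))
    # do something with the results here
    return results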

Related

Can Flask or Django handle concurrent tasks?

What I'm trying to accomplish:
I have a sensor that is constantly reading in data. I need to print this data to a UI whenever data appears. While the aforementioned task is taking place, the user should be able to write data to the sensor. Ideally, both these tasks would / could happen at the same time. Currently, I have the program written using flask; but if django would be better suited (or a third party) I would be willing to make the switch. Note: this website will never be deployed so no need to worry about that. Only user will be me, running program from my laptop.
I have spent a lot of time researching Flask async functions and coroutines; however, I have not seen any clear indication of whether something like this is possible.
Not looking for a line by line solution. Rather, a way (async, threading etc) to set up the code such that the aforementioned tasks are possible. All help is appreciated, thanks.
I'm a Django guy, so I'll throw out what I think could be possible.
There is a @start_new_thread decorator recipe for Django (it isn't built into Django itself) that can be put on any function so that it runs in a thread (see the sketch at the end of this answer).
You could make a view, POST to it with Javascript/Ajax and start a thread for communication with the sensor using the data POSTed.
You could also make a threading function that will read from the sensor
This could be a management command or a 'start' button that POSTs to a view that then starts the thread.
Note: You need to do Locks or some other logic so the two threads don't conflict when reading/writing
Maybe it's a single thread that reads/writes to the sensor and on each loop checks whether there's anything to write (existence + contents of a file? maybe a DB entry?).
For the UI, let's say a webpage: your best bet would be WebSockets, but because you're the only one who will ever use it, you could just write some Javascript/Ajax that pings a view every x seconds and displays the new data on the webpage.
Note: that polling gives you roughly what WebSockets would, just without the persistent connection.
The common thread here is Javascript/Ajax: it means the page never needs a full refresh, and you can constantly see the data coming in.
You can probably do all of this in Flask if you find a similar threading ability and just add some javascript to the frontend
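To make the threading and polling points concrete, here is a minimal sketch; the start_new_thread decorator is not part of Django, just a plain threading recipe, and read_sensor is a hypothetical stand-in for the real sensor I/O:

import random
import threading
import time

from django.http import JsonResponse

def start_new_thread(function):
    # decorator: run the wrapped function in a daemon thread
    def decorator(*args, **kwargs):
        t = threading.Thread(target=function, args=args, kwargs=kwargs)
        t.daemon = True
        t.start()
    return decorator

latest_readings = []               # shared between the sensor thread and the view
readings_lock = threading.Lock()   # the locking mentioned in the note above

def read_sensor():
    # placeholder for the real hardware read
    time.sleep(1)
    return random.random()

@start_new_thread
def poll_sensor():
    while True:
        value = read_sensor()
        with readings_lock:
            latest_readings.append(value)

def sensor_data(request):
    # the view your Javascript/Ajax pings every x seconds
    with readings_lock:
        data = list(latest_readings)
    return JsonResponse({"readings": data})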
Hopefully you find some of this useful, and idk why stackoverflow hates these types of questions... They're literally fine

Is it a bad practice to use sleep() in a web server in production?

I'm working with Django 1.8 and Python 2.7.
In a certain part of the project, I open a socket and send some data through it. Due to the way the other end works, I need to leave some time (let's say 10 milliseconds) between each piece of data that I send:
while True:
    send(data)
    sleep(0.01)
So my question is: is it considered bad practice to simply use sleep() to create that pause? Is there maybe a more efficient approach?
UPDATED:
The reason why I need to create that pause is that the other end of the socket is an external service that takes some time to process the chunks of data I send. I should also point out that it doesn't return anything after receiving, let alone processing, the data. Leaving that brief pause ensures that each chunk of data that I send gets properly processed by the receiver.
EDIT: changed the sleep to 0.01.
Yes, this is bad practice and an anti-pattern. You will tie up the "worker" which is processing this request for an unknown period of time, which will make it unavailable to serve other requests. The classic pattern for web applications is to service a request as-fast-as-possible, as there is generally a fixed or max number of concurrent workers. While this worker is continually sleeping, it's effectively out of the pool. If multiple requests hit this endpoint, multiple workers are tied up, so the rest of your application will experience a bottleneck. Beyond that, you also have potential issues with database locks or race conditions.
The standard approach to handling your situation is to use a task queue like Celery. Your web-application would tell Celery to initiate the task and then quickly finish with the request logic. Celery would then handle communicating with the 3rd party server. Django works with Celery exceptionally well, and there are many tutorials to help you with this.
If you need to provide information to the end-user, then you can generate a unique ID for the task and poll the result backend for an update by having the client refresh the URL every so often. (I think Celery will automatically generate a guid, but I usually specify one.)
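A minimal sketch of that pattern, assuming Celery is already wired up for the project; send_chunks, start_send and the host/port are hypothetical names:

# tasks.py
import socket
import time

from celery import shared_task

@shared_task(bind=True, max_retries=3)
def send_chunks(self, chunks, host, port):
    # send each chunk to the external service, pausing briefly between sends
    try:
        sock = socket.create_connection((host, port))
        try:
            for chunk in chunks:
                sock.sendall(chunk.encode())
                time.sleep(0.01)  # the pause the receiver needs
        finally:
            sock.close()
    except OSError as exc:
        raise self.retry(exc=exc, countdown=5)

# views.py
from django.http import JsonResponse
from .tasks import send_chunks

def start_send(request):
    # hand the slow work to the Celery worker and return immediately
    result = send_chunks.delay(request.POST.getlist("chunks"), "upstream.host", 9000)
    return JsonResponse({"task_id": result.id})  # the client can poll with this id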
Like most things, short answer: it depends.
Slightly longer answer:
If you're running it in an environment where you have many (50+ for example) connections to the webserver, all of which are triggering the sleep code, you're really not going to like the behavior. I would strongly recommend looking at using something like celery/rabbitmq so Django can dump the time delayed part onto something else and then quickly respond with a "task started" message.
If this is production, but you're the only person hitting the webserver, it still isn't great design, but if it works, it's going to be hard to justify the extra complexity of the task queue approach mentioned above.

Multithreading / Asynchronous I/O

I have a conceptual question.
I currently have a programme that runs in a never-ending loop.
def mycode():
    # perform login to server and retrieve cookies etc.
    while True:
        # perform a URL request (with custom headers, cookies etc.)
        # process the reply
        # perform URL requests dependent upon the values in the replies
        # process the reply
I am happy for this to continue as it is, as the URLs must be called one after the other.
Now, since the server limits a single account to a certain number of functions, it would be useful to be able to perform this with two (or more) different accounts.
My question is: is this possible to do? I have done a reasonable amount of reading on queues and multithreading; if you nice people could suggest a method with a good (easy to understand) example I would be most appreciative.
Gevent is a performant green-threads implementation, and its documentation has examples like the one below.
I'm unsure whether, by doing this for different accounts against the same server, you mean having different worker functions handling the URL processing - in effect running mycode n times, once per account. Perhaps you could expand on the details.
>>> import gevent
>>> from gevent import socket
>>> urls = ['www.google.com', 'www.example.com', 'www.python.org']
>>> jobs = [gevent.spawn(socket.gethostbyname, url) for url in urls]
>>> gevent.joinall(jobs, timeout=2)
>>> [job.value for job in jobs]
['74.125.79.106', '208.77.188.166', '82.94.164.162']
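Applied to your loop, a minimal sketch (assuming mycode is adapted to take one account's credentials) would spawn one greenlet per account:

import gevent
from gevent import monkey
monkey.patch_all()  # make the blocking socket/HTTP calls cooperative

def mycode(account):
    # log in with this account's credentials, then run the request/reply loop
    ...

accounts = [{"user": "account_a", "password": "..."},
            {"user": "account_b", "password": "..."}]
jobs = [gevent.spawn(mycode, account) for account in accounts]
gevent.joinall(jobs)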
In addition, you could break up the problem by using something like beanstalkd, which would allow you to run your main process n times, once for each account, and put the results on a beanstalk queue for processing by another process.
That saves having to deal with threading, which is always a good thing in non-trivial applications.
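A minimal sketch of that split, assuming the beanstalkc client library and a beanstalkd server on localhost:

import json
import beanstalkc  # assumes the beanstalkc client library is installed

queue = beanstalkc.Connection(host="localhost", port=11300)

def publish_result(account_name, reply):
    # each per-account process puts its results on the queue
    queue.put(json.dumps({"account": account_name, "reply": reply}))

def consume_results():
    # a separate process reserves jobs off the queue and handles them
    while True:
        job = queue.reserve()
        result = json.loads(job.body)
        # ... process the result here ...
        job.delete()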

Better ways to handle AppEngine requests that time out?

Sometimes, with requests that do a lot, Google AppEngine returns an error. I have been handling this by some trickery: memcaching intermediate processed data and just requesting the page again. This often works because the memcached data does not have to be recalculated and the request finishes in time.
However... this hack requires seeing an error, going back, and clicking again. Obviously less than ideal.
Any suggestions?
inb4: "optimize your process better", "split your page into sub-processes", and "use taskqueue".
Thanks for any thoughts.
Edit - To clarify:
Long wait for requests is ok because the function is administrative. I'm basically looking to run a data-mining function. I'm searching over my datastore and modifying a bunch of objects. I think the correct answer is that AppEngine may not be the right tool for this. I should be exporting the data to a computer where I can run functions like this on my own. It seems AppEngine is really intended for serving with lighter processing demands. Maybe the quota/pricing model should offer the option to increase processing timeouts and charge extra.
If interactive user requests are hitting the 30 second deadline, you have bigger problems: your user has almost certainly given up and left anyway.
What you can do depends on what your code is doing. There's a lot to be optimized by batching datastore operations, or reducing them by changing how you model your data; you can offload work to the Task Queue; for URLFetches, you can execute them in parallel. Tell us more about what you're doing and we may be able to provide more concrete suggestions.
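For the Task Queue suggestion, a minimal sketch using the (legacy Python) deferred library, where MyModel and the processed flag are hypothetical:

from google.appengine.ext import deferred, ndb

class MyModel(ndb.Model):
    # hypothetical model being mined over
    processed = ndb.BooleanProperty(default=False)

def process_batch(cursor_urlsafe=None, batch_size=100):
    # handle one slice of entities, then re-queue the next slice as a new task
    cursor = ndb.Cursor(urlsafe=cursor_urlsafe) if cursor_urlsafe else None
    results, next_cursor, more = MyModel.query().fetch_page(batch_size, start_cursor=cursor)
    for entity in results:
        entity.processed = True
    ndb.put_multi(results)
    if more:
        deferred.defer(process_batch,
                       cursor_urlsafe=next_cursor.urlsafe(),
                       batch_size=batch_size)

# kicked off once from the admin request handler:
# deferred.defer(process_batch)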
I have been handling something similar by building a custom automatic retry dispatcher on the client. Whenever an ajax call to the server fails, the client will retry it.
This works very well if your page is Ajaxy. If your app serves entire HTML pages, then you can use a two-pass process: first send an empty page containing only an Ajax request. Then, when AppEngine receives that Ajax request, it outputs the same HTML you had before. If the Ajax call succeeds, it fills the DOM with the result; if it fails, it retries once.

Is there any way to make an asynchronous function call from Python [Django]?

I am creating a Django application that does various long computations with uploaded files. I don't want to make the user wait for the file to be handled - I just want to show the user a page reading something like 'file is being parsed'.
How can I make an asynchronous function call from a view?
Something that may look like that:
def view(request):
    ...
    if form.is_valid():
        form.save()
        async_call(handle_file)
    return render_to_response(...)
Rather than trying to manage this via subprocesses or threads, I recommend you separate it out completely. There are two approaches: the first is to set a flag in a database table somewhere, and have a cron job running regularly that checks the flag and performs the required operation.
The second option is to use a message queue. Your file upload process sends a message on the queue, and a separate listener receives the message and does what's needed. I've used RabbitMQ for this sort of thing, but others are available.
Either way, your user doesn't have to wait for the process to finish, and you don't have to worry about managing subprocesses.
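A minimal sketch of the message-queue variant, assuming the pika RabbitMQ client and a hypothetical file_tasks queue; the listener would be a separate long-running process:

import json
import pika

def async_call(file_id):
    # publish a "handle this file" message; a separate listener consumes it
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="file_tasks", durable=True)
    channel.basic_publish(exchange="",
                          routing_key="file_tasks",
                          body=json.dumps({"file_id": file_id}))
    connection.close()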
I have tried to do the same and failed after multiple attempts, due to the nature of Django and other asynchronous calls.
The solution I came up with, which could be a bit over the top for you, is to have another asynchronous server in the background processing message queues from the web requests and emitting chunked Javascript that gets parsed directly by the browser in an asynchronous way (i.e. Ajax).
Everything is made transparent to the end user via a mod_proxy setting.
Unless you specifically need to use a separate process, which seems to be the gist of the other questions S.Lott is indicating as duplicate of yours, the threading module from the Python standard library (documented here) may offer the simplest solution. Just make sure that handle_file is not accessing any globals that might get modified, nor especially modifying any globals itself; ideally it should communicate with the rest of your process only through Queue instances; etc, etc, all the usual recommendations about threading;-).
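A minimal sketch of that threading approach, feeding a worker thread through a Queue; handle_file here is a placeholder for the long computation from the question:

import queue  # use the Queue module on Python 2
import threading

def handle_file(path):
    # placeholder for the long parsing/computation work
    pass

file_queue = queue.Queue()

def worker():
    # consume file paths from the queue and process them one by one
    while True:
        path = file_queue.get()
        handle_file(path)
        file_queue.task_done()

worker_thread = threading.Thread(target=worker)
worker_thread.daemon = True
worker_thread.start()

def async_call(path):
    # called from the view: enqueue the work and return immediately
    file_queue.put(path)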
threading will break runserver if I'm not mistaken. I've had good luck with multiprocessing in request handlers with mod_wsgi and runserver. Maybe someone can enlighten me as to why this is bad:
def _bulk_action(action, objs):
    # mean ponies here
    pass

def bulk_action(request, t):
    ...
    objs = model.objects.filter(pk__in=pks)
    if request.method == 'POST':
        objs.update(is_processing=True)
        from multiprocessing import Process
        p = Process(target=_bulk_action, args=(action, objs))
        p.start()
        return HttpResponseRedirect(next_url)
    context = {'t': t, 'action': action, 'objs': objs, 'model': model}
    return render_to_response(...)
http://docs.python.org/library/multiprocessing.html
New in 2.6
