How to execute a method after a response is complete in Pyramid?

Using Pyramid with Akhet, how do I execute a method after a response has been returned to the client? I believe this was done with the __after__ method in Pylons. I'm trying to execute a DB query and don't want it to block the request response.

You can use a response callback for your case.
EDITED after Michael Merickel's comment: The response callback runs within the request to which it is added, so it does delay that request's completion, but you shouldn't worry about it blocking other requests, since each request runs in a different thread. If you need the callback not to block its own request either, you can spawn a separate thread or process (if you can afford it), or look into message-queuing systems, as mentioned in the comment below.
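The pattern looks roughly like the following. This is a minimal sketch: the `Request` class below is a stub standing in for `pyramid.request.Request` so the flow can be run end to end, and `run_db_query` is an illustrative placeholder for your real DB work; in an actual view you would simply call `request.add_response_callback(...)` on the real Pyramid request object.

```python
# Sketch of Pyramid's response-callback pattern. The stub Request class
# mimics just enough of pyramid.request.Request to show the flow; in a
# real app, Pyramid provides the request and invokes the callbacks itself.
queries_run = []

class Request:
    """Minimal stand-in for pyramid.request.Request."""
    def __init__(self):
        self._response_callbacks = []

    def add_response_callback(self, callback):
        self._response_callbacks.append(callback)

    def _process_response_callbacks(self, response):
        # Pyramid invokes these after the view returns, before the
        # response body is written back to the client.
        for callback in self._response_callbacks:
            callback(self, response)

def my_view(request):
    def run_db_query(request, response):
        # Placeholder for the real (possibly slow) DB work.
        queries_run.append("INSERT INTO audit_log ...")

    request.add_response_callback(run_db_query)
    return "OK"

request = Request()
response = my_view(request)
request._process_response_callbacks(response)
```

Note that the callback still runs in the same thread as the request; it defers the work past the view, not past the client's wait.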

Related

Initiating a Kafka consumer on server (Flask) and returning response

I have a Flask API, which when hit by a client, will subscribe to a Kafka topic and start consuming (polling) data from the topic and write it to a database.
Currently the consumer function runs an infinite loop to poll the data and write it to the database. But the problem is that the client never receives a success response; the client's further processing only happens after the server times out.
The objective is for the client to receive the response as soon as the consumer is initialized and ready to consume.
How to implement it in a proper way?
Edit:
I came across this thread - Flask end response and continue processing
It seems that using Python's threading library won't work here, according to the comments on the top-voted answer.
The accepted answer suggests using WSGI middleware that adds a hook to the close method of the response iterator. But is that advisable in this case, where the consumer will be running indefinitely?
Is there an alternative to these?
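For what it's worth, the simplest alternative is usually to start the poll loop in a daemon thread and return a success response immediately, despite the threading caveats mentioned above (this works when the server isn't forking workers and you manage the consumer's lifecycle yourself). A hedged sketch, with the Kafka consumer and database writes stubbed out and all names (`poll_forever`, `start_consumer`) illustrative:

```python
# Hedged sketch: run the "infinite" consumer loop in a daemon thread so
# the HTTP handler can return right away. The Kafka poll and DB write
# are simulated with a list append; the stop event exists so the loop
# can be shut down cleanly.
import threading
import time

messages = []
stop = threading.Event()

def poll_forever():
    # Stand-in for the Kafka poll loop that writes records to the DB.
    while not stop.is_set():
        messages.append("record")
        time.sleep(0.01)

def start_consumer():
    worker = threading.Thread(target=poll_forever, daemon=True)
    worker.start()
    return worker  # the HTTP handler can now return e.g. 202 Accepted

worker = start_consumer()
time.sleep(0.05)        # give the loop a moment to run
stop.set()              # on shutdown, signal the loop to exit
worker.join(timeout=1)
```

The daemon flag ensures the consumer thread doesn't keep the process alive on exit; whether that's acceptable depends on how much in-flight data you can afford to drop.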

Flask request waiting for asynchronous background job

I have an HTTP API using Flask, and in one particular operation clients use it to retrieve information obtained from a 3rd-party API. The retrieval is done with a Celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with a URI that can be polled for the result once the background job is finished.
However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
Tornado would do the trick.
Flask is not designed for asynchronous work. By default, a Flask instance processes one request at a time in a single thread, so while you hold one connection open it will not proceed to the next request.
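If you do have to serve it in a single request, the shape of the solution is: submit the job, block with a timeout until it finishes, and return the result in the same response. A sketch using `concurrent.futures` as a stand-in for Celery (with Celery itself, the equivalent blocking call is `task.delay(...).get(timeout=...)`); `fetch_from_third_party` is an illustrative name:

```python
# Hedged sketch of the single-request variant: dispatch the background
# job, then hold the connection until it completes (or times out), and
# return the result directly. ThreadPoolExecutor stands in for Celery.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_from_third_party(query):
    time.sleep(0.05)            # simulate the slow 3rd-party API call
    return {"query": query, "answer": 42}

executor = ThreadPoolExecutor(max_workers=4)

def handle_request(query):
    future = executor.submit(fetch_from_third_party, query)
    # Block here until the job finishes; raises TimeoutError otherwise,
    # which you would map to a 504 in the HTTP handler.
    return future.result(timeout=5)

result = handle_request("status")
```

With a threaded or multi-worker WSGI server, each held connection ties up one worker for the duration, which is the real cost of this approach.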

Access HttpRequest object in request_finished callback in Django

I am trying to call a function after a particular view is finished sending the response object to the user - so the user does not have to wait for the function to be executed.
I am trying to use request_finished of the Django Signals Framework but I do not know how to access the HttpRequest object in the kwargs that Django signal sends to my callback.
Looks like the Signal object does not contain any useful information about the request.
ALSO, is this the best way to execute a function outside the request-response cycle? I do not want to use an advanced solution like Celery at this point in time.
That signal doesn't do what you think it does. As you can see from the handler code, the request_finished signal is sent when the request has been processed, but before the response is returned to the user. So anything that you add to that signal will still happen before the user sees any of the response.
Because of the way web servers work, there's no way to run code after the response is returned to the user. Really, the only thing to do is use something like Celery - you could knock up your own version that simulates a task queue using a db table, then have a cron job pick up items from the table, but it'll be a whole lot easier just to use Celery.
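The db-table variant mentioned above can be sketched in a few lines. This is only an illustration, using stdlib `sqlite3` in place of the Django ORM; the table and function names (`task_queue`, `enqueue`, `run_pending`) are made up. The view does a cheap INSERT and returns its response; a cron job later calls `run_pending()`:

```python
# Hedged sketch of a db-table task queue: the web process INSERTs one
# row per job, and a cron-driven worker picks up unprocessed rows.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_queue ("
    " id INTEGER PRIMARY KEY,"
    " payload TEXT NOT NULL,"
    " done INTEGER NOT NULL DEFAULT 0)"
)

def enqueue(payload):
    # Called from the view: a cheap INSERT, then return the response.
    conn.execute("INSERT INTO task_queue (payload) VALUES (?)",
                 (json.dumps(payload),))
    conn.commit()

def run_pending():
    # Called from cron: process and mark each outstanding row.
    processed = []
    rows = conn.execute(
        "SELECT id, payload FROM task_queue WHERE done = 0").fetchall()
    for task_id, payload in rows:
        processed.append(json.loads(payload))   # do the real work here
        conn.execute("UPDATE task_queue SET done = 1 WHERE id = ?",
                     (task_id,))
    conn.commit()
    return processed

enqueue({"email": "user@example.com"})
done = run_pending()
```

As the answer says, Celery gives you all of this (plus retries and concurrency control) for free, so this is mostly worth it when you genuinely cannot add a broker.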
The crosstown_traffic API of hendrix, which uses Twisted to serve Django, is designed specifically to defer logic until immediately after the Response has gone out over the wire to the client.
http://hendrix.readthedocs.org/en/latest/crosstown_traffic/

Python Webserver: How to serve requests asynchronously

I need to create a python middleware that will do the following:
a) Accept http get/post requests from multiple clients.
b) Modify and Dispatch these requests to a backend remote application (via socket communication). I do not have any control over this remote application.
c) Receive processed results from backend application and return these results back to the requesting clients.
Now the clients are expecting a synchronous request/response scenario. But the backend application is not returning the results synchronously. That is, some requests take much longer to process than others. Hence,
Client 1 : send http request C1 --> get response R1
Client 2 : send http request C2 --> get response R2
Client 3 : send http request C3 --> get response R3
Python middleware receives them in some order: C2, C3, C1. Dispatches them in this order to backend (as non-http messages). Backend responds with results in mixed order R1, R3, R2. Python middleware should package these responses back into http response objects and send the response back to the relevant client.
Is there any sample code to program this sort of behavior. There seem to be something like 20 different web frameworks for python and I'm confused as to which one would be best for this scenario (would prefer something as lightweight as possible ... I would consider Django too heavy ... I tried bottle, but I am not sure how to go about programming that for this scenario).
================================================
Update (based on discussions below): Requests have a request id. Responses have a response id (which should match the request id they correspond to). There is only one socket connection between the middleware and the remote backend application. While we can maintain a {request_id : ip_address} dictionary, the issue is how to construct an HTTP response object and return it to the correct client. I assume threading might solve this problem, where each thread maintains its own response object.
Screw frameworks. This is exactly the kind of task for asyncore. This module allows event-based network programming: given a set of sockets, it calls back the given handlers when data is ready on any of them. That way, threads are not necessary just to wait dumbly for data to arrive on one socket and painfully pass it to another thread. You would have to implement the HTTP handling yourself, but examples can be found for that. Alternatively, you could use the async feature of uWSGI, which would let your application be integrated with an existing web server, although it does not integrate with asyncore by default, though it wouldn't be hard to make it work. It depends on your specific needs. (Note that asyncore has since been deprecated and was removed in Python 3.12; asyncio is its modern replacement.)
Quoting your comment:
The middleware uses a single persistent socket connection to the backend. All requests from the middleware are forwarded via this single socket. Clients do send a request id along with their requests. The response id should match the request id. So the question remains: how does the middleware (web server) keep track of which request id belonged to which client? I mean, is there any way for a CGI script in the middleware to keep a db of such tuples, and once a response id matches, send an HTTP response to clientip:clienttcpport?
Is there any special reason for doing all this processing in a middleware? You should be able to do all this in a decorator, or somewhere else, if more appropriate.
Anyway, you need to maintain a global concurrent dictionary (extend dict and protect it using threading.Lock). Upon a new request, store the given request-id as key, and associate it to the respective client (sender). Whenever your backend responds, retrieve the client from this dictionary, and remove the entry so it doesn't accumulate forever.
UPDATE: someone already extended the dictionary for you - check this answer.
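A minimal sketch of that lock-protected registry, assuming one thread per client request and a single backend-reader thread (the class and method names here are illustrative, not from any library):

```python
# Hedged sketch: a thread-safe registry that marries out-of-order
# backend responses to the request threads waiting for them.
import threading

class PendingRequests:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}   # request_id -> {"event": Event, "result": None}

    def register(self, request_id):
        entry = {"event": threading.Event(), "result": None}
        with self._lock:
            self._pending[request_id] = entry
        return entry

    def resolve(self, request_id, result):
        with self._lock:
            # Pop so entries don't accumulate forever.
            entry = self._pending.pop(request_id, None)
        if entry is not None:
            entry["result"] = result
            entry["event"].set()
        return entry is not None

registry = PendingRequests()

# In the request-handling thread: register, forward to backend, wait.
entry = registry.register("C1")
# In the backend-reader thread, when the matching response id arrives:
registry.resolve("C1", "R1")
# Back in the request thread: unblocks once the event is set.
entry["event"].wait(timeout=1)
result = entry["result"]
```

In a real deployment you would also time out stale entries so a backend that never responds cannot leak memory.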
Ultimately, you're going from the synchronous HTTP request/response protocol with your clients to an asynchronous queuing/messaging protocol with your backend. So you have two choices: (1) make requests wait until the backend has no outstanding work, then process one at a time, or (2) write something that marries each backend response with its associated request (using a dictionary of requests, or similar).
One way might be to run your server in one thread while dealing with your backend in another (see... Run Python HTTPServer in Background and Continue Script Execution) or maybe look at aiohttp (https://docs.aiohttp.org/en/v0.12.0/web.html)
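With asyncio (which aiohttp builds on), that marriage of out-of-order backend responses to waiting requests falls out naturally from futures: each request coroutine awaits a future keyed by its request id, and the backend reader resolves whichever future matches the response id it sees. A hedged sketch, with the backend socket replaced by a canned list of out-of-order responses:

```python
# Hedged asyncio sketch: requests C1..C3 each await a Future; the
# backend reader delivers R2, R3, R1 in mixed order, and each response
# wakes exactly the coroutine that owns the matching request id.
import asyncio

pending = {}  # request_id -> asyncio.Future

async def handle_client(request_id, payload):
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    pending[request_id] = fut
    # (real code would also write `payload` to the backend socket here)
    return await fut          # suspends this coroutine only, not the server

async def backend_reader(responses):
    # Stand-in for reading the single backend socket; responses may
    # arrive in any order.
    for request_id, result in responses:
        await asyncio.sleep(0)          # yield to the event loop
        pending.pop(request_id).set_result(result)

async def main():
    results = await asyncio.gather(
        handle_client("C1", "q1"),
        handle_client("C2", "q2"),
        handle_client("C3", "q3"),
        backend_reader([("C2", "R2"), ("C3", "R3"), ("C1", "R1")]),
    )
    return results[:3]

answers = asyncio.run(main())
```

This is the same request-id dictionary as the threaded answer above, but with futures instead of locks and events, and a single thread overall.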

async http request on Google App Engine Python

Does anybody know how to make an HTTP request from Google App Engine without waiting for the response?
It should be like pushing data over HTTP, without the latency of waiting for a response.
I think that this section of the AppEngine docs is what you are looking for.
Use the taskqueue. If you're just pushing data, there's no sense in waiting for the response.
What you could do is in the request handler enqueue a task with whatever data was received (using the deferred library). As soon as the task has been enqueued successfully you can return a '200 OK' response and be ready for the next push.
I've done this before by doing a URLFetch with a very low value for the deadline parameter. I put 0.1 as my value, so 100 ms. You also need to wrap the URLFetch in a try/except, since the request will time out.
