Does anybody know how to make an HTTP request from Google App Engine without waiting for the response?
It should behave like a data push over HTTP, with no latency spent waiting on a response.
I think that this section of the AppEngine docs is what you are looking for.
Use the task queue. If you're just pushing data, there's no sense in waiting for the response.
What you could do is, in the request handler, enqueue a task with whatever data was received (using the deferred library). As soon as the task has been enqueued successfully, you can return a '200 OK' response and be ready for the next push.
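The enqueue-then-acknowledge pattern above can be sketched as follows. Note this is an illustration of the shape only, with a stdlib queue and thread standing in for the App Engine task queue and worker; on GAE the enqueue call would instead be `deferred.defer(process_push, data)`, and `process_push` / `handle_request` are hypothetical names.

```python
import queue
import threading

# Stand-in for the App Engine task queue; on GAE you would call
# deferred.defer(process_push, data) here instead of a local queue.
task_queue = queue.Queue()

def process_push(data):
    # The slow work happens here, outside the request cycle.
    return f"processed {data}"

def worker(results):
    # Stand-in for the task-queue worker that executes deferred tasks.
    while True:
        data = task_queue.get()
        if data is None:  # sentinel used to stop the worker
            break
        results.append(process_push(data))
        task_queue.task_done()

def handle_request(data):
    # Enqueue and acknowledge immediately -- no waiting on the real work.
    task_queue.put(data)
    return "200 OK"
```

The key point is that `handle_request` returns as soon as the enqueue succeeds; the actual processing happens later on the worker.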
I've done this before by doing a URLFetch and setting a very low value for the deadline parameter. I put 0.1 as my value, so 100 ms. You also need to wrap the URLFetch in a try/except, since the request will time out.
My system architecture looks very similar to the figure posted in the question here. The primary difference between my implementation and the posted question is that I'll be using fastapi/flask for the web-server (in python) and rabbitmq for messaging.
My high-level pseudocode (using FastAPI) is as follows:
from fastapi import APIRouter
from starlette.responses import Response

router = APIRouter()

@router.post("/users/{message}")
async def provide_suggestions(message: str, response: Response):
    uuid = generate_uuid(message)
    message_dict = {"uuid": uuid, "data": message}
    result = await post_message_to_rabbit_mq(message_dict)
    response.status_code = SOME_VALID_HTTP_RESPONSE_CODE  # what would this be?
Question 1: What would the HTTP response code be? Basically, the web server needs to notify the client to come back after a certain period of time and check for result (and return suggestions then).
Once the web server posts the message via rabbitmq, the workers would generate relevant suggestions based on the message (by looking up a database). This message, along with the uuid, would be posted back on another rabbitmq queue. Now the web server becomes a consumer.
Question 2: Assuming the webserver is registered as a consumer for the message queue on the egress path, would the webserver get the data on a separate thread for the message queue?
Question 3: Instead of waiting for another HTTP request from the client to send the suggestions, can the client and the server communicate asynchronously via web-sockets?
To answer your questions:
1: According to REST standards, status code 202 is what you want here:
HTTP status 202 indicates that the request has been accepted for processing, but the processing has not been completed. This status code is useful when the actual operation is asynchronous in nature.
2: You would want a different process within the service to consume from the queue and update the local server database. This would generally not be a part of your fastapi webserver, but a separate process. Your fastapi webserver could then query the local database every so often, or you could have a separate endpoint on the webserver that can be called by this process when the database has been updated.
3: If you have client utilities that can deal with the websocket connection, then yes. See fastapi's documentation on it here. Otherwise it might be better to return status code 202 on the first request and have the client query the webserver every few seconds. Another option is to use a callback url, but that depends on the client's situation.
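The 202-then-poll flow from points 1 and 3 can be sketched from the client's side. This is a minimal sketch under assumptions: `fetch` is a hypothetical callable standing in for an HTTP GET against the status URL, returning a `(status_code, body)` pair.

```python
import time
from http import HTTPStatus

def poll_for_result(fetch, interval=0.01, max_tries=50):
    """Client-side polling loop for a 202-style asynchronous API.

    `fetch` is a hypothetical callable returning (status_code, body);
    it stands in for an HTTP GET against the status URL.
    """
    for _ in range(max_tries):
        status, body = fetch()
        if status == HTTPStatus.OK:        # 200: the suggestions are ready
            return body
        if status != HTTPStatus.ACCEPTED:  # anything but 202 is an error
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(interval)               # come back later, as the server asked
    raise TimeoutError("result not ready after max_tries polls")
```

With websockets (point 3) this loop disappears: the server simply pushes the result over the open connection when it is ready.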
I was wondering if anyone knows a way to take payments asynchronously using django and celery. My desired process would be this:
Client submits payment form
Request is sent to Django
The request is passed to celery to do the work
Django closes the request by returning a response
Client is then notified when the payment has completed
I'd like to do this without polling.
How would I get the "payment complete" signal back to the client if the original request was closed? Would some kind of node/tornado integration be possible here?
Thanks.
I have an HTTP API using Flask and in one particular operation clients use it to retrieve information obtained from a 3rd party API. The retrieval is done with a celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with an URI that can be polled for the response as the background job is finished.
However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
Tornado would do the trick.
Flask is not designed for asynchronous operation. A Flask instance processes one request at a time in one thread; therefore, while you hold one connection open, it will not proceed to the next request.
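If the job really must be awaited inside the request, the shape is a blocking wait with a deadline. With celery that is `AsyncResult.get(timeout=...)`; the sketch below uses a stdlib executor standing in for the celery worker, and `fetch_from_third_party` / `handle_single_request` are illustrative names, not part of any real API.

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

executor = ThreadPoolExecutor(max_workers=4)

def fetch_from_third_party(query):
    # Stand-in for the celery task that calls the 3rd-party API.
    return {"query": query, "answer": 42}

def handle_single_request(query, deadline=30.0):
    """Run the job and hold the connection until it finishes or times out.

    With celery this would roughly be:
        result = task.delay(query).get(timeout=deadline)
    """
    future = executor.submit(fetch_from_third_party, query)
    try:
        return 200, future.result(timeout=deadline)
    except FutureTimeout:
        return 504, None  # upstream too slow; give up instead of hanging forever
```

The deadline is essential: without it, one stuck upstream call ties up the request worker indefinitely, which is exactly the problem the answer above describes.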
There is a strange API I need to work with.
I want to make an HTTP call to the API, and the API will return success immediately, but I need to wait for a request back from this API before I respond to the client.
What is the best way to accomplish that?
Is it an option to make your API REST-ful?
An example flow: have the client POST to a URL to create a new resource, then GET/HEAD that resource to check its state. That way you don't need to block your client while you do any blocking work.
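That flow can be sketched framework-free. This is a minimal illustration, assuming an in-memory dict stands in for the database and the handler names (`create_resource`, `get_resource`) are hypothetical:

```python
import uuid

# Hypothetical in-memory store standing in for a database.
RESOURCES = {}

def create_resource(payload):
    """POST /resources -> 202 Accepted plus the Location of the new resource."""
    rid = str(uuid.uuid4())
    RESOURCES[rid] = {"state": "pending", "data": payload}
    return 202, {"Location": f"/resources/{rid}"}

def get_resource(rid):
    """GET /resources/<rid> -> current state; the client polls until done."""
    res = RESOURCES.get(rid)
    return (200, res) if res else (404, None)
```

When the callback from the strange API finally arrives, its handler would flip the stored `state` to `done`, and the client's next GET sees the result.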
Very occasionally when making an HTTP request, I am left waiting for an age for a response that never comes. What is the recommended way to cancel such a request after a reasonable period of time?
Set the HTTP request timeout.
Pass the timeout parameter to urllib2.urlopen, or use httplib. The original urllib has no such convenient feature. You could also use an asynchronous HTTP client such as twisted.web.client, but that's probably not necessary.
If you are making a lot of HTTP requests, you can change the timeout globally by calling socket.setdefaulttimeout.
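Both options look like this in modern Python (urllib2 became urllib.request in Python 3); `fetch` is just an illustrative wrapper name:

```python
import socket
import urllib.request  # Python 3 successor to urllib2

def fetch(url, timeout=5.0):
    """Per-request timeout: urlopen raises TimeoutError (socket.timeout)
    if no response arrives within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except TimeoutError:
        return None  # request cancelled after the deadline

# Global default, applied to every new socket (and hence every request
# that doesn't pass its own timeout):
socket.setdefaulttimeout(5.0)
```

The per-request form is usually preferable, since the global default silently affects every socket the process opens.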