Python Threads (or their equivalent) on Google App Engine Workaround? - python

I want to make a Google App Engine app that does the following:
1) Client makes an asynchronous HTTP request
2) Server starts processing that request
3) Client makes Ajax HTTP requests to get progress
The problem is that the server processing (step #2) may take more than 30 seconds.
I know that you can't have threads on Google App Engine and that all requests must complete within 30 seconds or they get shut down. Is there some way to work around this?
Also, I'm using python-django as a backend.

You'll want to use the Task Queue API, probably via deferred tasks. The deferred API makes working with Task Queues dramatically simpler.
Essentially, you'll want to spawn a task to start the processing. That task should catch DeadlineExceededError and reschedule itself (again via the deferred API) to continue processing. This requires that your tasks be able to keep track of their own progress. They can also record their status in memcache, which you can use to write a view that checks a task's status; that view can then be polled via Ajax.
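A minimal sketch of that pattern might look like the following; the work function, cursor bookkeeping, and memcache key are illustrative placeholders, not part of the deferred API itself:

    # Sketch of a self-rescheduling deferred task (do_some_work and the
    # 'progress' key are hypothetical placeholders).
    from google.appengine.api import memcache
    from google.appengine.ext import deferred
    from google.appengine.runtime import DeadlineExceededError

    def process_items(cursor=0):
        try:
            while cursor is not None:
                cursor = do_some_work(cursor)   # returns None when finished
                memcache.set('progress', repr(cursor))
            memcache.set('progress', 'done')
        except DeadlineExceededError:
            # Out of time: re-enqueue ourselves to resume where we left off.
            deferred.defer(process_items, cursor)

You'd kick it off with deferred.defer(process_items), and the status view for the Ajax poller simply reads back memcache.get('progress').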

Related

How to handle high response time

There are two services: a Django service, which receives the request from the front-end, and a Flask service, whose API the Django service then calls.
The response time of the Flask service is high, and if the user navigates to another page, that request gets cancelled.
Should this be a background task or a pub/sub pattern? If so, how do I run it in the background and then tell the user "here is your latest result"?
You have two main options:
Make an initial request to a "simple" Django view, which loads a skeleton HTML page with a spinner, where some JS triggers an XHR request to a second Django view that performs the call to the other service (Flask). That way you can properly tell the user that loading takes time and handle exits on the browser side (ask for confirmation before leaving, abort the request, and so on).
If possible, cache the result of the Flask service, so you don't need to call it on every page load.
You can combine these two solutions by calling the service in an asynchronous request and caching its result (depending on the context, you may need to vary the cache per connected user, for example).
The first solution could also be implemented with pub/sub, websockets, or the like, but a classic XHR seems fine for your case.
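As a rough illustration of the caching idea, a Django view along these lines could memoize the Flask service's response per user (the service URL, cache key, and timeouts are made up for the example):

    # Sketch: call the slow Flask service once, then serve cached results.
    # Assumes the service returns a JSON object.
    import requests
    from django.core.cache import cache
    from django.http import JsonResponse

    FLASK_URL = 'http://flask-service.internal/api/report'  # hypothetical

    def report(request):
        key = 'flask-report-%s' % request.user.pk   # one entry per user
        data = cache.get(key)
        if data is None:
            data = requests.get(FLASK_URL, timeout=60).json()
            cache.set(key, data, timeout=300)       # keep for five minutes
        return JsonResponse(data)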
On our project, we have a couple of time-expensive endpoints. Our solution was similar to a previous answer:
Once we receive a request, we launch a Celery task that does the expensive work asynchronously; we don't wait for its result and return a quick response to the user right away. The Celery task sends its progress/results to the user via WebSockets, and the frontend handles those WS messages. The benefit of this approach is that we don't spend the CPU of our backend: we spend the CPU of the Celery worker, which runs on another machine.
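A stripped-down version of the task side of that setup might look like this; here progress is recorded via Celery's update_state, which a WebSocket relay (or a polling view) could forward to the browser, since the original answer's WS plumbing is project-specific:

    # Sketch: a Celery task that reports progress as it goes (handle() is
    # a hypothetical expensive step).
    from celery import shared_task

    @shared_task(bind=True)
    def crunch(self, items):
        total = len(items)
        for done, item in enumerate(items, start=1):
            handle(item)
            # Record progress; whatever pushes to the WebSocket can read
            # this state via AsyncResult(task_id) and forward it.
            self.update_state(state='PROGRESS',
                              meta={'done': done, 'total': total})
        return {'done': total, 'total': total}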

Tweets and followers fetching app with Google App Engine

I'm trying to build an app in Python with Google App Engine that fetches the followers of specific accounts and then their tweets. I'm basing it on this template, adapting it to what I need.
The issue at the moment is that when I try to fetch followers, I get a DeadlineExceededError because of the time spent waiting on the Twitter API.
I have found this post on how to fix the same problem and I think that in my case the best solution would be to use backends, but I noticed that they are deprecated.
Does someone know how I can achieve the same result without the deprecated module?
You have a few options for long-running tasks:
Use GAE Task Queues: GAE provides push and pull queues, which let you do work asynchronously, outside of the individual request (see the sketch after this list).
Use Cloud Pub/Sub: A type of pull queue, this would allow your App Engine app to publish a message every time you want to fetch followers or fetch tweets. The subscriber would then take the message from the queue, perform the long-running task, and put the result into some datastore.
Use GAE Services: This would allow you to create a background service and manually scale it to run as long as you need.
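For instance, the Task Queues option can be as small as enqueueing the fetch as a push task; the handler URL and parameter below are illustrative:

    # Sketch: hand the slow Twitter fetch to a push queue. The task gets
    # a 10-minute deadline instead of the normal request deadline.
    from google.appengine.api import taskqueue

    def start_fetch(account):
        taskqueue.add(url='/tasks/fetch_followers',
                      params={'account': account})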
Backends (modules) have been deprecated in favor of Services:
https://cloud.google.com/appengine/docs/flexible/python/an-overview-of-app-engine
For the service that needs to handle requests longer than 60 seconds, set it to manual scaling. A request can then run for up to 24 hours (or until you shut the instance down). See:
https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed#instance_scaling
Of course, your costs may go up with long-running instances and requests.
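For illustration, the manually scaled service could be declared with an app.yaml along these lines (the service name and instance count are placeholders):

    # Hypothetical app.yaml for a manually scaled worker service; its
    # requests are not subject to the 60-second automatic-scaling cap.
    service: worker
    runtime: python27
    manual_scaling:
      instances: 1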

App Engine frontend waiting for backend to finish and return data - what's the right way to do it?

I'm using a frontend built with AngularJS and a backend built with Python and webapp2 on App Engine.
The backend makes calls to a third-party API, fetches data, and returns it to the frontend.
The API request from the backend may take up to 30s or more. The problem is that the frontend can't really progress any further until it gets the data.
I tried running three simultaneous requests to the backend using different tabs, and two of them failed. I'm afraid this suggests that the app only allows one user at a time.
What's the best way to handle this? One thought I have is:
Use task queues to run the API call to the 3rd party in the background
Create a new handler that exposes the status/result of a queued task, and let the frontend poll it at regular intervals
Update the frontend once the data is available
Is that the right way? I'm sure this is a problem solved in a frontend+backend kind of world, but I just don't know what to search for.
Thanks!
Requests from the frontend are capped at 30 seconds; after that, they time out on the server side. That is part of GAE's design. Requests originating from the task queue get 10 minutes, so your idea is viable. However, you'll want some identifier to use for polling, rather than just "the last task sent", to distinguish between concurrent tasks.
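A rough webapp2 sketch of that flow, using a generated job id as the poll key (all names here are invented for illustration):

    # Sketch: start a background fetch, then let the frontend poll by id.
    # The /tasks/fetch handler (not shown) would call the third-party API
    # and memcache.set the result under 'job-<job_id>'.
    import json
    import uuid

    import webapp2
    from google.appengine.api import memcache, taskqueue

    class StartHandler(webapp2.RequestHandler):
        def post(self):
            job_id = uuid.uuid4().hex
            taskqueue.add(url='/tasks/fetch', params={'job_id': job_id})
            self.response.write(json.dumps({'job_id': job_id}))

    class PollHandler(webapp2.RequestHandler):
        def get(self):
            result = memcache.get('job-%s' % self.request.get('job_id'))
            # 'result' stays None until the task finishes.
            self.response.write(json.dumps({'result': result}))

    app = webapp2.WSGIApplication([('/start', StartHandler),
                                   ('/poll', PollHandler)])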

Long-running connection HTTP server (Python)

I am trying to design a web application that processes large quantities of large mixed-media files coming from asynchronous processes. Each process can take several minutes.
The files are either uploaded as a POST body or pulled by the web server according to a source URL provided. The files can be processed by a variety of external tools in a synchronous or asynchronous way.
I need to be able to load balance this application so I can process multiple large files simultaneously for as much as I can afford to scale.
I think Python is my best choice for this project, but besides that, I am open to any solution. The app can either deliver the file back or rely on a messaging channel to notify clients about process completion.
Some approaches I thought I might use:
1) Use a non-blocking web server such as Tornado that keeps the connection open until the file processing is done. The external processing command is launched, and the web server waits until the file is ready, then pipes the resulting IO stream directly back in the response. Since the processes sending requests are asynchronous, they can probably afford to wait (unless memory or some other issue comes up).
2) Use a regular web server like CherryPy (which I am more confident with) and have the webapp use a messaging channel to report processing progress. The web server returns an HTTP response as soon as it receives the file, validates it, and hands it off to a background process, while also sending a message announcing that processing has started. The background process then takes care of delivering the file to an available location and sends another message to the channel announcing the location of the new file. This solution looks more flexible than 1), but it requires writing a separate script to handle the messages outside the web application, as well as separate storage space for the temp files, which have to be cleaned up at some point.
3) Use some internal messaging capability of either of the web servers mentioned above, which I am not familiar with...
Edit: something like CherryPy's pub-sub engine (http://cherrypy.readthedocs.org/en/latest/extend.html?highlight=messaging#publish-subscribe-pattern) could be a good solution.
Any suggestions?
Thank you,
gm
I had a similar situation come up with a really large-scale data processing engine that my team implemented. We wanted to build our API calls in Flask, some of which can take many hours to complete, but still have a way to notify the user in real time about what is going on.
Basically, what I came up with was what you described as option 2. On the same machine where I serve the Flask app through Apache, I created a Tornado app that serves a websocket reporting progress to the end user. Once my main page is served, it establishes the websocket connection to the Tornado server; the Flask app periodically sends updates to the Tornado app, which pushes them down to the end user. Even if the browser is closed during the long-running job, Apache keeps the request alive and processing, and upon logging back in I can still see the current progress.
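A bare-bones version of the Tornado side of such a setup could look like this; the endpoints and the fan-out-to-all-clients simplification are mine, not the author's actual code:

    # Sketch: Tornado app that relays progress updates to browser websockets.
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    clients = set()   # currently connected browsers

    class ProgressSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)

        def on_close(self):
            clients.discard(self)

    class PushHandler(tornado.web.RequestHandler):
        # The Flask app POSTs progress updates here; we fan them out.
        def post(self):
            for client in clients:
                client.write_message(self.request.body)

    application = tornado.web.Application([(r'/ws', ProgressSocket),
                                           (r'/push', PushHandler)])

    if __name__ == '__main__':
        application.listen(8888)
        tornado.ioloop.IOLoop.current().start()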
I wrote about this solution in some more detail here:
http://jonfeatherstone.com/2013/08/01/mongo-and-websockets-for-application-logging/
Good luck!

Flask request waiting for asynchronous background job

I have an HTTP API built with Flask, and in one particular operation clients use it to retrieve information obtained from a 3rd-party API. The retrieval is done in a Celery task. Usually, my approach would be to accept the client's request for that information and return a 303 See Other response with a URI that can be polled for the result once the background job has finished.
However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
Tornado would do the trick.
Flask is not designed for asynchronous work. A Flask instance processes one request at a time in one thread, so while you hold one connection open it will not proceed to the next request (unless you run it under a multi-threaded or multi-process WSGI server).
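Assuming the retrieval already runs as a Celery task, one way to satisfy the single-request clients is to block on the task result inside the view; the task name and timeout below are placeholders:

    # Sketch: hold the request open until the Celery task finishes. Run
    # Flask under a threaded/multi-process WSGI server so other requests
    # aren't starved while this one blocks.
    from celery.exceptions import TimeoutError
    from flask import Flask, jsonify

    from tasks import fetch_info   # hypothetical Celery task

    app = Flask(__name__)

    @app.route('/info/<item_id>')
    def info(item_id):
        async_result = fetch_info.delay(item_id)
        try:
            data = async_result.get(timeout=120)   # block until finished
        except TimeoutError:
            return jsonify(error='backend timed out'), 504
        return jsonify(data)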
