I'm trying to run some code on App Engine using dynamic backends (Python), but I find the App Engine documentation on dynamic backends inadequate. Does anyone have sample code showing how this can be done?
I have already configured my backends.yaml like so:
backends:
- name: fileupload-backend
  options: dynamic
  start: backend_handler.py
And I understand that a dynamic backend starts when App Engine sends a start request to /_ah/start, runs while it receives requests from the application, and stops when no further requests arrive. But what code do I write in my backend_handler.py to prompt App Engine to do this?
You've slightly misunderstood the point of the start option. This is the script that is automatically invoked when App Engine hits /_ah/start: it isn't responsible for calling that URL, it's responsible for responding to the call. Most of the time you won't need it; it's really for when your backend needs specific things set up on startup. In fact, it's perfectly OK not to handle the /_ah/start call at all and let it respond with a 404 - that's enough to trigger the backend to start up.
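If you do want startup logic, a minimal sketch of a backend_handler.py answering the automatic /_ah/start call might look like this (webapp2 on the Python 2 runtime; the setup step is an illustrative assumption):

import webapp2

class StartHandler(webapp2.RequestHandler):
    def get(self):
        # Do any one-time initialization the backend needs here, then
        # return 200 so App Engine considers the backend started.
        self.response.write('backend started')

app = webapp2.WSGIApplication([('/_ah/start', StartHandler)])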
If you're confused about how to actually run code on the backend, your best bet is to configure a task queue to run on that backend with the target parameter, and then get your frontend code to trigger a task on that queue.
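For example, here is a minimal sketch of enqueueing work onto the backend from frontend code; the handler URL and blob_key parameter are illustrative assumptions:

from google.appengine.api import taskqueue

def enqueue_upload_job(blob_key):
    # target routes the task to instances of the named backend.
    taskqueue.add(
        url='/process-upload',          # handler served by the backend
        target='fileupload-backend',    # matches the name in backends.yaml
        params={'blob_key': blob_key},
    )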
For a nice example of how to use Google App Engine backends, take a look at Google App Engine Tutorial - Code Lab Exercise 8: Queues and Backends.
That exercise walks through using Task Queues together with Backends.
Hope this helps!
Related
Problem: Habitica is a habit-tracking app, but its personal data logs are not as detailed as I want. I want to create a local log of when I mark off habits/todo's in the app. Habitica offers certain webhooks that trigger when habits/todo's are checked off, which seems perfect for what I want, but how do I turn these triggers into a local log? I would like to use Python for this.
Ideas: It seems to me that I would need to set up some kind of personal cloud server to receive this data, turn it into a log, and then store it for download. I have previously deployed a Flask app using Heroku, so if this could be done similarly, that would be ideal. However, I don't know much about this, so I would welcome any ideas or advice.
Creating the Habitica webhook handler as a Flask application is a good approach.
Heroku supports Python/Flask very nicely; however, the filesystem is ephemeral, so it gets wiped out at every application restart.
In order to persist data you can look at various options:
save the file to AWS S3
save the data into a DB (Heroku has a free plan for PostgreSQL)
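As a minimal sketch of the second option, assuming Habitica POSTs JSON to the endpoint you register, that Heroku's Postgres add-on exposes DATABASE_URL, and that a habit_log table already exists (the route and table names are illustrative):

import os
from datetime import datetime, timezone

import psycopg2  # pip install psycopg2-binary
from flask import Flask, request

app = Flask(__name__)

@app.route('/habitica-hook', methods=['POST'])
def habitica_hook():
    payload = request.get_data(as_text=True)  # raw JSON body sent by the webhook
    conn = psycopg2.connect(os.environ['DATABASE_URL'])
    try:
        with conn, conn.cursor() as cur:
            # Assumes: CREATE TABLE habit_log (received_at timestamptz, payload text);
            cur.execute(
                'INSERT INTO habit_log (received_at, payload) VALUES (%s, %s)',
                (datetime.now(timezone.utc), payload),
            )
    finally:
        conn.close()
    return '', 204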
Suppose I develop a REST service hosted in Apache with a Python plugin that services GET, PUT, DELETE, and PATCH, and this service is consumed by an Angular client (or some other REST-speaking browser technology). How do I make it scalable with RabbitMQ (AMQP)?
Potential Solution #1
Multiple Apache instances still face off against the browser's HTTP calls.
Each Apache instance uses an AMQP plugin and posts a message to a queue.
Python microservices monitor the queue, pull a message, service it, and return a response.
The response is passed back to the Apache plugin, which in turn generates the HTTP response.
Does this mean the Python microservice no longer has any HTTP server code at all? That would change the component a lot. Perhaps it's best to decide up front whether you want this pattern, since ripping out the HTTP server code later looks like a sizeable task.
Other potential solutions? I'm genuinely puzzled as to how we're supposed to take a classic REST server component and make it scalable with RabbitMQ/AMQP with minimal disruption.
I would recommend switching from WSGI to ASGI (nginx can help here). I'm not sure why you think RabbitMQ is the solution to your problem, as nothing you described seems like it would be solved by that method.
ASGI is not supported by Apache as far as I know, but it allows the server to go off and do work while continuing to service new requests as they come in (a gross oversimplification).
If for whatever reason you really want to use job workers (RabbitMQ, etc.), then I would suggest returning a "token" to the user (really just the job_id); they can then call back with that token, and the service reports either the current job status or the result.
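As a minimal sketch of that token pattern (shown with Flask and Celery over a local RabbitMQ broker for brevity; the routes and the process_order task are illustrative):

from celery import Celery
from celery.result import AsyncResult
from flask import Flask, jsonify, request

# rpc:// is fine for a demo; use a persistent result backend (e.g. Redis) in production.
celery_app = Celery('tasks', broker='amqp://localhost//', backend='rpc://')
app = Flask(__name__)

@celery_app.task
def process_order(payload):
    # Long-running work happens in a separate worker process.
    return {'processed': payload}

@app.route('/orders', methods=['POST'])
def create_order():
    result = process_order.delay(request.get_json())
    # Hand the client a token it can poll with.
    return jsonify({'job_id': result.id}), 202

@app.route('/orders/<job_id>', methods=['GET'])
def order_status(job_id):
    result = AsyncResult(job_id, app=celery_app)
    if result.ready():
        return jsonify({'state': result.state, 'result': result.get()})
    return jsonify({'state': result.state})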
I'm trying to build an app in Python with Google App Engine that fetches followers of specific accounts and then their tweets. I'm basing it on this template and changing it to adapt it to what I need.
The issue at the moment is that when I try to fetch followers, I get a DeadlineExceededError because of the time spent waiting on the Twitter API.
I have found this post on how to fix the same problem and I think that in my case the best solution would be to use backends, but I noticed that they are deprecated.
Does someone know how I can achieve the same result without the deprecated module?
You have a couple options that you can use for long-running tasks:
Use GAE Task Queues: GAE provides push and pull queues which allow you to do work asynchronously outside of the individual request.
Use Cloud Pub/Sub: A type of pull queue, this would allow your App Engine app to publish a message every time you wanted to fetch followers or fetch tweets. The subscriber would then take the message from the queue, perform the long-running task, and put the result into some datastore (see the sketch after this list).
Use GAE Services: This would allow you to create a background service and manually scale it to run as long as you need.
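As a minimal sketch of the Pub/Sub option, assuming the google-cloud-pubsub client library and a topic you have already created (the project and topic names are illustrative):

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('your-project-id', 'fetch-followers')

def request_follower_fetch(screen_name):
    # A subscriber outside the request cycle performs the slow Twitter
    # calls and writes the results to your datastore.
    future = publisher.publish(topic_path, b'', screen_name=screen_name)
    return future.result()  # message id, once the publish has succeeded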
Backends (modules) have been deprecated in favor of Services:
https://cloud.google.com/appengine/docs/flexible/python/an-overview-of-app-engine
For the service that needs to handle requests longer than 60 seconds, set it to manual scaling. A request can then run for up to 24 hours (or until you shut the instance down). See:
https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed#instance_scaling
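A minimal sketch of such a service's .yaml in the Python 2 standard environment (the service name and script path are illustrative assumptions):

service: worker
runtime: python27
api_version: 1
threadsafe: true

manual_scaling:
  instances: 1

handlers:
- url: /.*
  script: main.app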
Of course, your costs may go up with long-running instances and requests.
I am currently working with MS Azure, where I have a worker role and a web role. In the worker role I start an infinite loop to process some data continuously. The web role handles the interaction with the client; there I use an MVC framework, written in C# on the server side and JavaScript on the client side.
Now I'm interested in Google App Engine. I've read a lot about the app engine and I want to build an application in Python, but I don't really understand the architecture. Is there a counterpart in the project structure to Azure's worker and web roles?
The closest thing to what you want is what Google App Engine calls modules. Modules are (roughly) pools of instances that can be set up with different runtimes and performance characteristics:
https://cloud.google.com/appengine/docs/python/modules/
I'm not an expert with Azure, but the big difference I see between GAE's approach and Azure's is that, unlike in Azure, "back-end modules" (not an official term) in GAE are still basically web services at heart. Everything in the module is still basically written as HTTP handlers. So, the main ways you control that module are via HTTP: using push queues to hit HTTP endpoints, using cron to trigger HTTP endpoints that read from pull queues/the datastore/Google Cloud Storage, and/or making HTTP requests from your "front-end module" directly to your "back-end module".
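For instance, a minimal sketch of a front-end module calling a back-end module over HTTP (the module name and handler path are illustrative assumptions):

from google.appengine.api import modules, urlfetch

def call_worker(payload):
    # Resolve the back-end module's hostname, then hit one of its handlers.
    host = modules.get_hostname(module='worker')
    return urlfetch.fetch(
        url='http://%s/process' % host,
        payload=payload,
        method=urlfetch.POST,
    )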
Note that Google App Engine historically provided the concept of "backends" and "backend instances" that you could use for much the same purpose as modules for longer-running background processes. However, the module system is more flexible, and it is now recommended.
Yes, there is. Look at backend and frontend instances. Your question is too broad to go into more detail; in general, the backend type of instance is used for long-running tasks, but you could also do everything in the frontend instance.
I'm working on a web interface which currently runs on PHP and communicates locally with a Python script.
I'm moving the web side to App Engine, which so far is going well when used locally. I'm currently communicating from the App Engine app to the Python app via GET requests that are handled by the Python script.
The problem is that the machine running the Python script will obviously be behind a firewall. I've never needed to do this before and am not sure how best to implement it.
The only idea I have so far is for the Python script to send POST requests to the App Engine app with some data and receive some other data back in the response. The only problem with this is that the web interface should update the client quite quickly.
Any ideas?
Take a look at ProtoRPC Python API: https://developers.google.com/appengine/docs/python/tools/protorpc/overview
Though it is still marked as experimental, it seems to be a decent framework for what you are trying to do - send messages back and forth between the apps.
Since you said your local app runs behind a firewall, I'm assuming you cannot open up an endpoint and protect it with some form of authentication.
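In that case, a common workaround is for the local script to initiate every exchange over outbound HTTPS, which firewalls usually allow. A minimal sketch of that pull loop (the URLs, job format, and shared-secret header are illustrative assumptions):

import time

import requests  # pip install requests

BASE = 'https://your-app-id.appspot.com'
HEADERS = {'X-Auth-Token': 'replace-with-a-real-shared-secret'}

while True:
    # Ask the App Engine app for pending work...
    jobs = requests.get(BASE + '/api/pending-work', headers=HEADERS).json()
    for job in jobs.get('jobs', []):
        result = {'job_id': job['id'], 'output': 'processed'}  # do the real work here
        # ...and post the results back.
        requests.post(BASE + '/api/results', json=result, headers=HEADERS)
    time.sleep(2)  # short-polling interval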
Once you have messages flowing, you can either use Channel API to keep the front-end updated: https://developers.google.com/appengine/docs/python/channel/overview
Or if you want to go more basic, just implement long/short polling through AJAX.
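For the Channel API route, a minimal sketch of the server side (the client_id scheme is an illustrative assumption):

from google.appengine.api import channel

def open_channel(client_id):
    # Returns a token the JavaScript client passes to goog.appengine.Channel.
    return channel.create_channel(client_id)

def notify_client(client_id, message):
    # Pushes a message to the browser that opened the channel.
    channel.send_message(client_id, message)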
Sorry, with the limited amount of info you've provided that's all I can think of right now. Please feel free to post more details and I'll try to help further.