App Engine serving old version intermittently - python

I've deployed a new version which contains just one image replacement. After migrating 100% of traffic to the new version, I can see that only this version now has active instances. However, two days later App Engine is still intermittently serving the old image, so I assume it is serving the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another.
My question is: how do I force App Engine to serve only the new version? I'm not using traffic splitting either.
Any help would be much appreciated
Regards,
Danny

There are multiple layers of caching beyond memcache.
Google's edge cache will definitely cache static content, especially if your app is reached through your own domain rather than appspot.com.
You will probably need to use some cache-busting techniques.
You can test this by requesting the URL that is serving the old content with something like ?x=1 appended to it.
If you then get the current content, the edge cache is your problem, and cache busting is the way around it.
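For example, one common cache-busting approach is to append a version token to static asset URLs so every deployment produces fresh URLs. A minimal sketch in Flask (the static_url helper and the use of CURRENT_VERSION_ID as the token are assumptions, not something your app already has):
import os
from flask import Flask, url_for

app = Flask(__name__)

# Hypothetical version token; CURRENT_VERSION_ID is set by the App Engine runtime.
ASSET_VERSION = os.environ.get('CURRENT_VERSION_ID', 'dev')

@app.context_processor
def versioned_static():
    # Templates call static_url('img/logo.png') instead of url_for('static', ...),
    # so the rendered URL changes (and busts caches) with each deployed version.
    def static_url(filename):
        return url_for('static', filename=filename, v=ASSET_VERSION)
    return dict(static_url=static_url)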

Flask application alongside Node.JS application

I wrote a Flask web application for a system that our company uses. However, we have another web application, which runs on Node.js. The "problem" is that my colleague writes everything in Node, while I write everything in Python.
We want to implement both applications on one webpage - for example:
My application will run on example.com/assistant
His application will run on example.com/app1 and example.com/app2
How can we do this? Can we somehow implement the templates that I use with his templates and vice-versa?
Thank you in advance!
V
Serving different apps from the same domain
You can use HAProxy to direct requests to a specific service based on ACL rules.
You can use the path_beg rule so that any request whose path begins with a given prefix is directed to the corresponding backend. See the example below.
/etc/haproxy/haproxy.cfg
# only the relevant part of the config file
# assumes all apps are on one machine
frontend http-in
    bind *:80
    acl is_py_app1   path_beg /assistant
    acl is_node_app1 path_beg /app1
    acl is_node_app2 path_beg /app2
    # without use_backend rules the ACLs would never be applied
    use_backend py_app1   if is_py_app1
    use_backend node_app1 if is_node_app1
    use_backend node_app2 if is_node_app2
    default_backend main_servers
backend py_app1
    server flask_app 127.0.0.1:5000
backend node_app1
    server nodejs1 127.0.0.1:4001
backend node_app2
    server nodejs2 127.0.0.1:4002
backend main_servers
    server other1 127.0.0.1:3000 # nginx, apache, or whatever
Sharing template code between apps
This would be harder, as you would both need to agree on a template format that is language- and framework-agnostic, and probably logic-less.
Mustache claims to be a "framework-agnostic way to render logic-free views". I used it sparingly a few years ago, so it is the first one that came to mind; you should do more research on this, though, as there may be a better fit.
Python implementation
JS implementation
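For illustration, a minimal sketch of rendering a shared Mustache template from the Python side with pystache (the template path and data are made up; the Node app would render the same file with a JS Mustache implementation):
import pystache

# templates/shared/users.mustache would be a shared, logic-less template, e.g.:
# {{#users}}<li>{{name}}</li>{{/users}}
with open('templates/shared/users.mustache') as f:
    template = f.read()

html = pystache.render(template, {'users': [{'name': 'Alice'}, {'name': 'Bob'}]})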
The problem would be keeping the templates in sync across the apps without breaking the views. If a template changes, you would need to test every app that uses that template file. You would also probably block one another from updating your apps independently: if one of you changes a shared template, you must reach a consensus, update all affected apps, and deploy them at the same time.

Does python with wsgi (uwsgi) under nginx have some small default cache?

In my small website I need to make some data widely available, to avoid hitting the database on every request. For example, this could be the list of current users shown at the bottom of every page, or the time of the last ranking update.
The site runs in Python (Flask) on nginx + uWSGI (this Docker image).
I wonder: do I get some small cache or shared memory for keeping such information "out of the box", or do I need to explicitly set up a dedicated cache? Or is something like this perhaps provided by nginx?
Alternatively, I can still use the database, which I think has its own cache anyway.
Sorry if the question seems naive/silly; I come from the Java world (where things are a bit different, as we serve all requests with one fat instance of a Java application) and have some difficulty grasping what powers WSGI/uWSGI provides. Thanks in advance!
Firstly, nginx has a cache:
https://www.nginx.com/blog/nginx-caching-guide/
But for Flask caching you also have options:
https://pythonhosted.org/Flask-Cache/
http://flask.pocoo.org/docs/1.0/patterns/caching/
Did you have a look at the caching section of the Flask docs?
It literally says:
Flask itself does not provide caching for you, but Werkzeug, one of the libraries it is based on, has some very basic cache support
You create a cache object once and keep it around, similar to how Flask objects are created. If you are using the development server you can create a SimpleCache object, that one is a simple cache that keeps the item stored in the memory of the Python interpreter:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
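For example, a minimal sketch of using such a cache for the user list (the key name and query_users_from_db are made-up placeholders). Note that SimpleCache lives inside a single Python process, so with several uWSGI workers each worker keeps its own copy; a truly shared cache needs something external such as memcached or Redis:
from werkzeug.contrib.cache import SimpleCache

cache = SimpleCache()

def get_current_users():
    users = cache.get('current-users')  # returns None on a cache miss
    if users is None:
        users = query_users_from_db()   # hypothetical DB helper
        cache.set('current-users', users, timeout=5 * 60)  # keep for 5 minutes
    return users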
-- UPDATE --
Or you could solve it on the frontend side by storing data in the web browser's local storage.
If there is nothing in local storage you call the DB; otherwise you use the information from local storage rather than making a DB call.
Hope it helps.

How to migrate Google App Engine Project to Compute Engine completely?

We have been using Google App Engine for the backend services of a project which has been developed entirely as a Google App Engine project.
Lately, the frontend instances have been consuming 60-70% of our project expense, so we decided to do away with App Engine completely and migrate to Google Compute Engine instead.
Wanted to know if anyone has migrated their GAE project to GCE. I understand that GCE VMs could be dynamically spun up from within a GAE app, but we want to completely do away with GAE. (Source)
As a last option, I shall host a Django project and use GAE files as the controller for the web services.
However, wanted to know if there are other potentially easier options for moving GAE projects to GCE while keeping the datastore integration intact.
TIA
Unfortunately, the uniqueness of the standard environment features your application relies on may make the migration quite difficult.
Take, for example, the significant differences just between the standard env and the flexible env (which, if you want, would be like an intermediary step towards total migration to GCE): Migrating Services from the Standard Environment to the Flexible Environment. To me they're practically different beasts.
To make matters worse the very thing you consider the most important in your migration - keeping the datastore integration intact - is also the most likely to stand against your migration.
That's because chances are that your app uses one of the dedicated client libraries, optimized for and only available to the standard environment GAE apps. If so - the migration effectively means re-designing the entire interaction with the datastore to make it use one of the more generic datastore libraries instead. Which means more than just translating API calls - there are conceptual and functional differences that would need to be addressed.
So the answer to the title question may very well be: redesign your app for GCE. Personally I'm unsure if GCE is overall more cost-effective - I still prefer the standard env GAE. Assuming at some point the costs go up enough to maybe re-consider, I'd:
take a closer look at the pricing and the current app costs breakdown, to see which components are the heavier ones: if the majority of the costs come, for example, from the datastore usage - I wouldn't expect a migration to GCE to significantly help
try to tune the app's config and/or code to reduce costs: for example if the instance hours represent the majority in the costs tuning the scalability configurations depending on the actual traffic patterns might lower the bill
estimate the costs for similar usage patterns but with the corresponding components available on GCE (and/or GAE flex)
if the respective components are also available on GAE flex I'd make some experiments using that instead of going full GCE (which would pretty much require the re-write first).
A gradual transition using the flexible environment as a stepping stone could reveal if the estimated costs savings aren't quite there, thus helping drop the whole transition before doing the entire re-write. And also could help with the re-write, in case the transition still remains a "go".
Update: There might be another solution to consider for reducing costs: running the existing GAE app code through AppScale (see also appscale) on a more cost-effective IaaS provider.

How do I access Production Datastore from my local development server?

I have an existing website deployed on Google App Engine for Python. Now I have set up the local development server on my system, but I don't know how to get the up-to-date database from the live server. There is no export option in Google's developer console.
I don't want to read the data from the production Datastore on each request; I want to set it up locally once. The Google manual says the local datastore is stored in an SQLite file.
Any hint would be appreciated.
First, make sure your app.yaml enables the "remote" built-in, with a stanza such as:
builtins:
- remote_api: on
This app.yaml of course must be the one deployed to your appspot.com (or whatever) "production" GAE app.
Then, it's a job for /usr/local/google_appengine/bulkloader.py or wherever you may have installed the bulkloader component. Run it with -h to get a list of the many, many options you can pass.
You may need to generate an application-specific password for this use on your google accounts page. Then, the general use will be something like:
/usr/local/google_appengine/bulkloader.py --dump --url=http://your_app.appspot.com/_ah/remote_api --filename=allkinds.sq3
You may not (yet) be able to use this "all kinds" query -- the server only generates the needed statistics for the all-kinds query "periodically", so you may get an error message including info such as:
[ERROR ] Unable to download kind stats for all-kinds download.
[ERROR ] Kind stats are generated periodically by the appserver
[ERROR ] Kind stats are not available on dev_appserver.
If that's the case, then you can still get things "one kind at a time" by adding the option --kind=EntityKind and running the bulkloader repeatedly (with separate sqlite3 result files) for each kind of entity.
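For example, a kind-by-kind dump could look like this (EntityKind names such as UserProfile are placeholders for your own kinds):
/usr/local/google_appengine/bulkloader.py --dump --kind=UserProfile --url=http://your_app.appspot.com/_ah/remote_api --filename=UserProfile.sq3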
Once you've dumped (kind by kind if you have to, all at once if you can) the production datastore, you can use the bulkloader again, this time with --restore and addressing your localhost dev_appserver instance, to rebuild the latter's datastore.
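A restore against the local dev server could then look something like the following (the port and filename are assumptions based on a default dev_appserver setup, and depending on SDK version you may need extra flags such as the application id):
/usr/local/google_appengine/bulkloader.py --restore --url=http://localhost:8080/_ah/remote_api --filename=allkinds.sq3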
It should be possible to explicitly list kinds in the --kind flag (by separating them with commas and putting them all in parentheses) but unfortunately I think I've found a bug stopping that from working -- I'll try to get it fixed but don't hold your breath. In any case, this feature is not documented (I just found it by studying the open-source release of bulkloader.py) so it may be best not to rely on it!-)
More info about the then-new bulkloader can be found in a blog post by Nick Johnson at http://blog.notdot.net/2010/04/Using-the-new-bulkloader (though it doesn't cover newer functionalities such as the sqlite3 format of results in the "zero configuration" approach I outlined above). There's also a demo, with plenty of links, at http://bulkloadersample.appspot.com/ (also a bit outdated, alas).
Check out the remote API. This will tunnel your database calls over HTTP to the production database.
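As a sketch of what that looks like in code, using the SDK's remote_api_stub from a local Python script (treat this as an assumption-laden outline; the exact auth flow depends on your SDK version):
import getpass

from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Prompts for the credentials of an account that has access to the app.
    return raw_input('Email: '), getpass.getpass('Password: ')

# None lets the stub ask the server for the app id; the path must match the
# remote_api builtin enabled in app.yaml.
remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func,
                                   'your_app.appspot.com')

# From here on, ordinary datastore calls in this process are tunnelled over
# HTTP to the production Datastore.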

App Engine Version, Memcache

I am developing an App Engine App that uses memcache. Since there is only a single memcache shared among all versions of your app I am potentially sending bad data from a new version to the production version memcache. To prevent this, I think I may append the app version to the memcache key string to allow various versions of the app to keep their data separate.
I could do this manually, but I'd like to pull in the version from the app.yaml
How can I access the app version from within the python code?
The os.environ mapping contains a key called CURRENT_VERSION_ID that you can use. Its value is composed of the version from app.yaml concatenated with a period and what I suspect is the api_version. If I set version to 42 it gives me the value 42.1. You should have no problem extracting the version number alone, but it might not be a bad idea to keep the api_version as well.
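A minimal sketch of using this to namespace memcache keys (the prefixing scheme is just an illustration):
import os

from google.appengine.api import memcache

# e.g. '42.1' -> '42'; the part after the dot is the minor version (see the edit below).
APP_VERSION = os.environ['CURRENT_VERSION_ID'].split('.')[0]

def versioned_key(key):
    # Prefix every memcache key with the app version so different versions
    # of the app do not read each other's entries.
    return '%s:%s' % (APP_VERSION, key)

memcache.set(versioned_key('user_count'), 123)
value = memcache.get(versioned_key('user_count'))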
EDIT:
@Nick Johnson has pointed out that the number to the right of the period is the minor version, a number which is incremented each time you deploy your code. On the development server this number is always 1.
