I am developing an App Engine app that uses memcache. Since a single memcache is shared among all versions of an app, a new version could potentially write bad data into the cache used by the production version. To prevent this, I'd like to append the app version to the memcache key string so that the various versions of the app keep their data separate.
I could do this manually, but I'd like to pull in the version from the app.yaml
How can I access the app version from within the Python code?
The os.environ variable contains a key called CURRENT_VERSION_ID that you can use. Its value is the version from app.yaml, concatenated with a period and what I suspect is the api_version. If I set version to 42, it gives me the value 42.1. You should have no problem extracting the version number alone, but it might not be a bad idea to keep the api_version as well.
EDIT:
@Nick Johnson has pointed out that the number to the right of the period is the minor version, a number which is incremented each time you deploy your code. On the development server this number is always 1.
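For example, here's a minimal sketch of the key-prefixing approach from the question (splitting on the period as described above; the 'types' key name is my own illustration):

import os
from google.appengine.api import memcache

# CURRENT_VERSION_ID looks like "major.minor", e.g. "42.1";
# the part before the period is the version from app.yaml.
major_version = os.environ['CURRENT_VERSION_ID'].split('.')[0]

# Prefix the key so each deployed version keeps its own entry.
types = memcache.get('types_' + major_version)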
In my small website I need to make some data widely available, to avoid hitting the database on every request. For example, this could be the list of current users shown at the bottom of every page, or the time of the last ranking update.
The app runs in Python (Flask) on nginx + uWSGI (this Docker image).
I wonder, do I get some small cache or shared memory for keeping such information "out of the box", or do I need to explicitly set up a dedicated cache? Or perhaps something like this is provided by nginx?
Alternatively, I could still use the database for this, since I think it has its own cache anyway.
Sorry if the question seems naive/silly. I come from the Java world (where things are a bit different, as we serve all requests with one fat instance of a Java application) and have some difficulty grasping what powers WSGI/uWSGI provides. Thanks in advance!
Firstly, nginx has a cache:
https://www.nginx.com/blog/nginx-caching-guide/
But for Flask caching you also have options:
https://pythonhosted.org/Flask-Cache/
http://flask.pocoo.org/docs/1.0/patterns/caching/
Did you have a look at the caching section of the Flask docs?
It literally says:
Flask itself does not provide caching for you, but Werkzeug, one of the libraries it is based on, has some very basic cache support
You create a cache object once and keep it around, similar to how Flask objects are created. If you are using the development server you can create a SimpleCache object, that one is a simple cache that keeps the item stored in the memory of the Python interpreter:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
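A quick usage sketch on top of that (the key name, the timeout, and the query_users_from_db helper are my own illustration):

users = cache.get('current-users')
if users is None:
    users = query_users_from_db()  # hypothetical DB query
    cache.set('current-users', users, timeout=5 * 60)  # keep for five minutes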
-- UPDATE --
Or you could solve it on the frontend side by storing data in the web browser's local storage.
If there's nothing in local storage you call the DB; otherwise you use the information from local storage rather than making a DB call.
Hope it helps.
I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However, 2 days later App Engine is still intermittently serving the old image, so I assume the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another.
My question is: how do I force App Engine to only serve the new version? I'm not using traffic splitting either.
Any help would be much appreciated
Regards,
Danny
You have multiple layers of caches beyond memcache.
Google's edge cache will definitely cache static content, especially if your app is referenced by your own domain and not appspot.com.
You will probably need to use some cache busting techniques.
You can test this by requesting the URL that is presenting old content, but with something like ?x=1 appended to it.
If you then get the current content, the edge cache is your problem, hence the need for cache-busting techniques.
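A common way to bust the cache permanently is to append a version token to static asset URLs, so each deploy's assets look like new URLs to the edge cache. A minimal sketch, reusing the CURRENT_VERSION_ID variable mentioned earlier in this thread (the image path is my own placeholder):

import os

# Changes on every deploy, so cached copies of the old URL are never reused.
version = os.environ.get('CURRENT_VERSION_ID', 'dev')
image_url = '/static/logo.png?v=%s' % version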
I have an existing website deployed on Google App Engine for Python. Now I have set up the local development server on my system, but I don't know how to get the updated database from the live server. There is no export option in Google's developer console.
Also, I don't want to read the data from the production Datastore on each request; I want to set it up locally once. The Google manual says that the local datastore is stored in an sqlite file.
Any hint would be appreciated.
First, make sure your app.yaml enables the "remote" built-in, with a stanza such as:
builtins:
- remote_api: on
This app.yaml of course must be the one deployed to your appspot.com (or whatever) "production" GAE app.
Then, it's a job for /usr/local/google_appengine/bulkloader.py or wherever you may have installed the bulkloader component. Run it with -h to get a list of the many, many options you can pass.
You may need to generate an application-specific password for this use on your google accounts page. Then, the general use will be something like:
/usr/local/google_appengine/bulkloader.py --dump --url=http://your_app.appspot.com/_ah/remote_api --filename=allkinds.sq3
You may not (yet) be able to use this "all kinds" query -- the server only generates the needed statistics for the all-kinds query "periodically", so you may get an error message including info such as:
[ERROR ] Unable to download kind stats for all-kinds download.
[ERROR ] Kind stats are generated periodically by the appserver
[ERROR ] Kind stats are not available on dev_appserver.
If that's the case, then you can still get things "one kind at a time" by adding the option --kind=EntityKind and running the bulkloader repeatedly (with separate sqlite3 result files) for each kind of entity.
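For example, a per-kind dump (EntityKind and the output filename are placeholders) would look like:
/usr/local/google_appengine/bulkloader.py --dump --url=http://your_app.appspot.com/_ah/remote_api --kind=EntityKind --filename=EntityKind.sq3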
Once you've dumped (kind by kind if you have to, all at once if you can) the production datastore, you can use the bulkloader again, this time with --restore and addressing your localhost dev_appserver instance, to rebuild the latter's datastore.
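The restore step might then look something like this (the localhost port and the dev~ app id prefix are my assumptions about a default dev_appserver setup, so check bulkloader.py -h for the exact flags):
/usr/local/google_appengine/bulkloader.py --restore --url=http://localhost:8080/_ah/remote_api --filename=allkinds.sq3 --app_id=dev~your_app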
It should be possible to explicitly list kinds in the --kind flag (by separating them with commas and putting them all in parentheses) but unfortunately I think I've found a bug stopping that from working -- I'll try to get it fixed but don't hold your breath. In any case, this feature is not documented (I just found it by studying the open-source release of bulkloader.py) so it may be best not to rely on it!-)
More info about the then-new bulkloader can be found in a blog post by Nick Johnson at http://blog.notdot.net/2010/04/Using-the-new-bulkloader (though it doesn't cover newer functionalities such as the sqlite3 format of results in the "zero configuration" approach I outlined above). There's also a demo, with plenty of links, at http://bulkloadersample.appspot.com/ (also a bit outdated, alas).
Check out the remote API. This will tunnel your database calls over HTTP to the production database.
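If you'd rather query the production datastore interactively from local code, a minimal sketch of configuring the remote API stub looks like this (Python 2-era GAE SDK; the app hostname is a placeholder):

import getpass
from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Prompt for the credentials of an app administrator.
    return (raw_input('Email: '), getpass.getpass('Password: '))

# After this call, ordinary datastore API calls are tunnelled
# over HTTP to the production app at /_ah/remote_api.
remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api',
                                   auth_func, 'your_app.appspot.com')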
I have some simple python code running in Google App Engine such as this:
types = memcache.get('types')
if types is None:
    # do something, creating a 'types' object
    memcache.set('types', types, 36000000)
Whenever I run this on the local development server, memcache.get('types') always returns None. It's not the same live on App Engine, where the memcache calls work correctly.
Is it necessary to install a separate package along with the GAE development server locally?
The time argument to memcache.set can be a maximum of one month to indicate a relative lifetime, otherwise it is interpreted as an absolute unix timestamp (seconds since 1970). 36000000 is much more than a month and so it's setting the entry to expire in February 1971.
If you want something to stay in cache for as long as possible, then leave out the time argument.
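So a corrected version of the snippet above would either pass a relative lifetime of at most a month or omit the time argument entirely (the 10-hour value and the build_types helper are my own illustration):

types = memcache.get('types')
if types is None:
    types = build_types()  # hypothetical function that creates the 'types' object
    memcache.set('types', types, time=36000)  # 10 hours, well under the one-month limit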
When doing rolling restarts, some servers are still running the old code while others are being restarted with the new code. If you have a large number of machines/processes, there might be a significant delay between the first server and the last server.
This can be a problem when there are changes to the database schema, such as columns being renamed or tables being removed. It would mean that the old code (e.g. using previous column names, or old tables) is still in use before the rolling restart is done.
I wonder if Django provides any guarantees or conventions to make this work well. From my own observation, adding new models (tables) and new fields (columns) in Django doesn't seem to cause issues with old code, because the old code doesn't even know they exist and doesn't care.
Are there any best practices or conventions in Django that one should follow to ensure minimum problems when doing a rolling restart?
There are infinite ways to deploy a Django application - each web platform stack may have a distinct set of "best practices". That said, the 12 factors are good design principles for modern web applications.
So far, I have deployed Django using:
linux + apache + mod_wsgi
linux + nginx + uwsgi
I prefer to configure the server to reload the application when I touch some file (usually a blank file named "reload.me").
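With uWSGI, for instance, that behavior comes from the touch-reload option; here is a minimal ini sketch (the module path and file location are my assumptions):

[uwsgi]
module = mysite.wsgi:application
# Gracefully reload all workers whenever this file's mtime changes,
# e.g. via: touch /srv/mysite/reload.me
touch-reload = /srv/mysite/reload.me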
In my experience it is a non-issue, that is why you may not find much information about it.