When setting up Django to use Memcached for caching (in my case, I want to use session caching), we set the following in settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
I will be running the project on App Engine, so my question is: what do I put in the LOCATION entry?
As it happens, I have been porting a Django (1.6.5) application to GAE over the last few days (GAE Development SDK 1.9.6). I don't have a big need for caching right now but it's good to know it's available if I need it.
So I just tried using django.core.cache.backends.memcached.MemcachedCache as my cache backend (set up as you describe in your question, with python-memcached in my libs folder) and
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
to manage my sessions and GAE gave me the error:
RuntimeError: Unable to create a new session key. It is likely that the cache is unavailable.
Anyway...
...even if you could get this to work, it's surely better to use Google's API lib and borrow from the Django Memcached implementation, especially as the Google lib has been designed to be compatible with python-memcached; otherwise your app could break at any time with an SDK update. Create a Python module such as my_project/backends.py:
import pickle

from django.core.cache.backends.memcached import BaseMemcachedCache

class GaeMemcachedCache(BaseMemcachedCache):
    "An implementation of a cache binding using Google's App Engine memcache lib (compatible with python-memcached)"
    def __init__(self, server, params):
        from google.appengine.api import memcache
        super(GaeMemcachedCache, self).__init__(server, params,
                                                library=memcache,
                                                value_not_found_exception=ValueError)

    @property
    def _cache(self):
        if getattr(self, '_client', None) is None:
            self._client = self._lib.Client(self._servers, pickleProtocol=pickle.HIGHEST_PROTOCOL)
        return self._client
Then your cache setting becomes:
CACHES = {
    'default': {
        'BACKEND': 'my_project.backends.GaeMemcachedCache',
    }
}
That's it! This seems to work fine but I should be clear that it is not rigorously tested!
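If you want cache-backed sessions as in the question, the wiring might look like this. This is a sketch only; the `my_project.backends` module path and the `SESSION_CACHE_ALIAS` line are assumptions based on the setup above:

```python
# settings.py -- hypothetical wiring for cache-backed sessions

# Use the custom GAE backend sketched above (module path is an assumption)
CACHES = {
    'default': {
        'BACKEND': 'my_project.backends.GaeMemcachedCache',
    }
}

# Store session data in the cache configured above
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'
```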
Aside
Have a poke around in google.appengine.api.memcache.__init__.py in your GAE SDK folder and you will find:
def __init__(self, servers=None, debug=0,
             pickleProtocol=cPickle.HIGHEST_PROTOCOL,
             pickler=cPickle.Pickler,
             unpickler=cPickle.Unpickler,
             pload=None,
             pid=None,
             make_sync_call=None,
             _app_id=None):
    """Create a new Client object.

    No parameters are required.

    Arguments:
      servers: Ignored; only for compatibility.
    ...
That is, even if you could find a LOCATION for your memcache instance in the cloud, Google's own library would ignore it.
The LOCATION should be set to the IP and port where your Memcached daemon is running.
Check the official Django documentation:
Set LOCATION to ip:port values, where ip is the IP address of the
Memcached daemon and port is the port on which Memcached is running,
or to a unix:path value, where path is the path to a Memcached Unix
socket file.
https://docs.djangoproject.com/en/dev/topics/cache/
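Concretely, the two forms from the docs might look like this in settings.py (the addresses and socket path are placeholders; the second dict is named CACHES_UNIX here only so both variants can be shown side by side):

```python
# TCP form: 'ip:port' of the Memcached daemon (placeholder address)
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

# Unix socket form: note the 'unix:' prefix (placeholder path)
CACHES_UNIX = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}
```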
If you are following this documentation:
http://www.allbuttonspressed.com/projects/djangoappengine
and cloning this (as asked in the above link):
https://github.com/django-nonrel/djangoappengine/blob/master/djangoappengine/settings_base.py
then I don't think you need to define a LOCATION at all. Is it throwing an error when you don't define it?
Related
I have a Google App Engine Standard Environment application written in Python 3, using Flask as the framework and Firestore in native mode as the database. All of the database calls are done in the App Engine code, hidden behind Flask endpoints/views/handlers. Client browsers do not execute any JavaScript that directly calls the Firestore database. Client-side JavaScript is basically 'dumb' code used for cosmetics. The only time client-side JavaScript does "anything" is when a user creates a new account or logs in using the Firebase Auth UI.
Having said that, I noticed that some online resources mention that it is absolutely necessary to secure the Firestore database, since anything that is not disallowed by security rules is basically allowed (i.e. the Firestore database is insecure by default). However, I suspect that this is only the case for apps with thick clients (i.e. where the client-side code is in charge of the heavy lifting of querying and writing to Firestore).
So my question is: are these security rules necessary only for mobile/web clients, and not for Firestore databases accessed only by server-side code? Or is it necessary for all Firestore projects to define security rules? If so, I would appreciate any pointers as to where to find reasonable default security rules to start securing my Firestore database.
I am including a caricature of my Flask main.py file for reference.
# main.py
from google.cloud import firestore
from mylibrary import function_that_fetches_user_data
from mylibrary2 import function_that_writes_user_content

def validate_cookie(protected_function):
    def wrapper(*args, **kwargs):
        # handle cookie validation
        # run protected function
        return protected_function(*args, **kwargs)
    return wrapper

# The dashboard is meant to display user data and user content to the user.
# It is not meant to be seen by other users.
@app.route("/user_dashboard")
@validate_cookie
def dashboard():
    user_id = get_uid_from_cookie()
    firestore_client = firestore.Client()
    user_data = function_that_fetches_user_data(user_id, firestore_client)
    return render_template('dashboard.html', user_data=user_data)

# The write function creates user content that should only be accessible to the author
# and the system/app.
@app.route("/write_user_content")
@validate_cookie
def write_user_content():
    user_id = get_uid_from_cookie()
    firestore_client = firestore.Client()
    result = function_that_writes_user_content(user_id, firestore_client)
    return render_template('success.html', result=result)
Security rules are only necessary to control access coming from web and mobile clients. Backend SDKs accessing Firestore bypass security rules altogether, so writing any rules at all won't change the behavior of your backend code.
If you simply do not access the database directly from web or mobile, then you can set the security rules to reject all access, and that's fine:
match /{document=**} {
  allow read, write: if false;
}
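For completeness, that reject-all match block sits inside a full firestore.rules file like this (standard rules file layout):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```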
I am testing out Pipelines on Heroku.
I have a staging app and a production app in a pipeline, and I had two issues which arose at the same time, and so may or may not be interrelated...
The first was how to run commands from my CLI on both my staging and production app.
This partially answered my question but not entirely. I found the solution was to set my staging app as the default: git config heroku.remote staging
Then, to run commands against my production app, I can pass the app name explicitly: heroku run python manage.py createsuperuser --app your-app-name
The other issue, which remains unresolved (there seems to be a solution for Ruby), is how to control my robots.txt from staging to production. I want my staging app to be hidden from Google indexing etc., but of course I don't want this to carry over to my production app. Perhaps I shouldn't be using robots.txt at all? Any help would be appreciated...
In the absence of any suggestions, I created a solution for this problem, namely how to prevent a staging app from being indexed by Google when using Heroku pipelines.
The issue is that when "promoting" your linked repo from staging to production, there seemed to be no obvious way to prevent the staging app from being indexed by search engines whilst still ensuring your production app is indexed.
I decided on limiting all views via a middleware according to IP address. Now only specific IPs can access the staging app on Heroku. Perhaps this is not the best way, but in the absence of any other answer, this seems to work:
from django.core.exceptions import PermissionDenied
import os

def IPCheckMiddleware(get_response):
    def middleware(request, *args, **kwargs):
        herokuEnv = os.environ['IS_LIVE']
        if herokuEnv == 'FALSE':
            ip1 = os.environ['IP_CHECKER']
            ip2 = os.environ['IP_CHECKER_1']
            x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
            if x_forwarded_for:
                # On Heroku the client IP is the last entry in X-Forwarded-For
                ips = x_forwarded_for.split(',')[-1].strip()
            else:
                ips = request.META.get('REMOTE_ADDR')
            if ips not in [ip1, ip2]:
                raise PermissionDenied
        response = get_response(request)
        return response
    return middleware
Hope that helps anyone with the same/similar issue...!
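As an alternative (or complement) to IP-blocking, you could serve a different robots.txt per environment. A minimal sketch, reusing the hypothetical IS_LIVE variable from the middleware above; you would wire the returned string into a Django view that responds with content_type='text/plain':

```python
import os

def robots_body(is_live=None):
    """Return robots.txt content: allow crawling only on the live app."""
    if is_live is None:
        # IS_LIVE mirrors the env var used by the middleware above (an assumption)
        is_live = os.environ.get('IS_LIVE', 'FALSE')
    if is_live == 'TRUE':
        # Production: an empty Disallow permits all crawling
        return "User-agent: *\nDisallow:\n"
    # Staging: block all crawlers
    return "User-agent: *\nDisallow: /\n"
```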
I'm working on a project that uses Google Cloud Platform's App Engine in the Python 3 Flexible Environment using Django, and I'm trying to permanently redirect all requests over http to https for all routes, but so far have not been successful. I can access the site over https, but only if explicitly written in the address bar.
I've looked at this post: How to permanently redirect `http://` and `www.` URLs to `https://`? but did not find the answer useful.
The app works properly in every sense except for the redirecting. Here is my app.yaml file:
# [START runtime]
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT myproject.wsgi

runtime_config:
  python_version: 3
# [END runtime]
In myproject/settings.py I have these variables defined:
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ('HTTP-X-FORWARDED-PROTO', 'https')
On my local machine, when I set SECURE_SSL_REDIRECT to True, I was redirected to https properly, even though SSL is not supported on localhost. In production, I am still able to access the site using just http.
Is there something I'm missing or doing wrong to cause the redirect not to happen?
Setting secure in app.yaml only works for GAE Standard but not in Flexible. The app.yaml docs for Flexible do not mention this key at all.
You will probably have to do it on application level by inspecting the value of the X-Forwarded-Proto header. It will be set to https if the request to your app came by HTTPS. You can find more info on environment-provided headers in Flexible environment in the docs here.
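A framework-agnostic sketch of that application-level check, written as a plain WSGI wrapper (the names here are mine; Django users would normally just use SECURE_SSL_REDIRECT together with SECURE_PROXY_SSL_HEADER instead):

```python
def https_redirect(app):
    """Wrap a WSGI app so plain-HTTP requests get a 301 to the HTTPS URL."""
    def wrapped(environ, start_response):
        # The Flexible environment's proxy sets X-Forwarded-Proto, which
        # WSGI exposes in the environ as HTTP_X_FORWARDED_PROTO.
        if environ.get('HTTP_X_FORWARDED_PROTO', 'http') != 'https':
            url = ('https://' + environ.get('HTTP_HOST', '')
                   + environ.get('PATH_INFO', ''))
            start_response('301 Moved Permanently', [('Location', url)])
            return [b'']
        return app(environ, start_response)
    return wrapped
```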
Make sure you have SecurityMiddleware and CommonMiddleware enabled, and assign a BASE_URL:
settings.py:
MIDDLEWARE_CLASSES = (
    ...
    'django.middleware.security.SecurityMiddleware',
    'django.middleware.common.CommonMiddleware',
)
BASE_URL = 'https://www.example.com'
Or, you could write your own middleware:
MIDDLEWARE_CLASSES = (
    ...
    'core.my_middleware.ForceHttps',
)
BASE_URL = 'https://www.example.com'
my_middleware.py:
from django.conf import settings
from django.http import HttpResponsePermanentRedirect

class ForceHttps(object):
    def process_request(self, request):
        if not (request.META.get('HTTPS') == 'on' and settings.BASE_URL == 'https://' + request.META.get('HTTP_HOST')):
            return HttpResponsePermanentRedirect(settings.BASE_URL + request.META.get('PATH_INFO'))
        else:
            return None
The issue is the header name. When Django runs behind a WSGI server, the X-Forwarded-Proto header arrives in request.META as HTTP_X_FORWARDED_PROTO (with underscores), so SECURE_PROXY_SSL_HEADER must be set to ('HTTP_X_FORWARDED_PROTO', 'https'); the hyphenated 'HTTP-X-FORWARDED-PROTO' in the question will never match.
See: Why does django ignore HTTP_X_FORWARDED_PROTO from the wire but not in tests?
I had a similar problem and tried a number changes both in the app.yaml and also in settings.py for a custom domain (with the default ssl cert supplied by GAE).
Through trial and error I found that in settings.py updating the allowed hosts to the appropriate domains had the desired result:
ALLOWED_HOSTS = ['https://{your-project-name}.appspot.com','https://www.yourcustomdomain.com']
Update: I am no longer sure the above is the reason as on a subsequent deploy the above was rejected and I was getting a hosts error. However the redirect is still in place... :(
Before this change I was able to switch between http:// and https:// manually in the address bar; now it redirects automatically.
In order to make this work both on App Engine Flexible and your local machine when testing, you should set the following in your settings.py
import os

if os.getenv('GAE_INSTANCE'):
    SECURE_SSL_REDIRECT = True
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
else:
    # Running on your local machine
    SECURE_SSL_REDIRECT = False
    SECURE_PROXY_SSL_HEADER = None
That should be all you need to do to ensure that redirect is happening properly when you are on app engine.
NOTE: If you are using old school app engine cron jobs (via cron.yaml) then you will need to start using the much improved cloud scheduler instead. App engine cron jobs do not properly support redirection to HTTPS but you can easily get it working with cloud scheduler.
I'm evaluating using Google Cloud and Google App Engine for our company's new product. I'm trying to adapt this tutorial to use Postgres instead of MySQL:
https://cloud.google.com/python/django/flexible-environment
While I'm able to successfully connect to the database locally, when I try in production, I get the following 500 error:
OperationalError at /admin/login/
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/<project_name_hidden>:us-central1:<database_id_hidden>/.s.PGSQL.5432"?
To connect to Postgres, I made three changes to the sample project. I have this snippet in app.yaml:
beta_settings:
  cloud_sql_instances: <project_name_hidden>:us-central1:<database_id_hidden>
I have this snippet in settings.py:
# [START dbconfig]
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'polls',
        'USER': '<db_user_name_hidden>',
        'PASSWORD': '<db_password_hidden>',
        'PORT': '5432',
    }
}
# In the flexible environment, you connect to CloudSQL using a unix socket.
# Locally, you can use the CloudSQL proxy to proxy a localhost connection
# to the instance.
if os.getenv('GAE_INSTANCE'):
    DATABASES['default']['HOST'] = '/cloudsql/<project_name_hidden>:us-central1:<database_id_hidden>'
else:
    DATABASES['default']['HOST'] = '127.0.0.1'
# [END dbconfig]
and I have this requirements.txt:
Django==1.10.6
#mysqlclient==1.3.10
psycopg2==2.7.1
wheel==0.29.0
gunicorn==19.7.0
Never mind, seems like Google fixed something on their end and the service is working now. I'm trying to figure out what exactly changed...
I battled with this for hours, and the only way to fix it was to create a new Postgres instance. Brute force, unfortunately, but it seems that instances on GCP can sometimes start off in a corrupt state and will never work.
I've created a service called 'timesTwo' and dropped the file in the correct directory. When I try and call it from my client-side code however, it tells me that the service doesn't exist. Once I've created my server side code, how do I expose the service? What step(s) am I missing?
Server-side code:
from pyamf.remoting.gateway.wsgi import WSGIGateway

def timesTwo(data):
    return data * 2

services = {
    'timesTwo': timesTwo,
    # Add other exposed functions here
}

gateway = WSGIGateway(services)
I'm having a really hard time finding online documentation. Thanks for the help!
Sidenote: Is there some resource (website, book, ANYthing) that would be more thorough than what's on pyamf.org that you would recommend?!