Django app defaults? - python

I'm looking for a way to have application defaults and settings that are easy to use, difficult to get wrong, and have little overhead.
Currently I have it organized as follows:
myapp/defaults.py
# application defaults
import os
import sys

if sys.platform == 'win32':
    MYAPP_HOME_ROOT = os.path.dirname(os.environ['USERPROFILE'])
else:
    MYAPP_HOME_ROOT = '/home'
in my project I have:
mysite/settings.py
from myapp.defaults import * # import all default from myapp
MYAPP_HOME_ROOT = '/export/home' # overriding myapp.defaults
With this setup I could import and use settings in the regular django way (from django.conf import settings and settings.XXX).
update-3 (why we need this)
Default settings ("defaults"):
An application is more convenient to use if it can be configured by overriding a set of sensible default settings.
the application "has domain knowledge", so it makes sense for it to provide sensible defaults whenever possible.
it isn't convenient for a user of the application to need to provide all the settings needed by every app; it should be sufficient to override a small subset and leave the rest at their default values.
it is very useful if defaults can react to the environment. You'll often want to do something different when DEBUG is True, but any other global setting could be useful: e.g. MYAPP_IDENTICON_DIR = os.path.join(settings.MEDIA_ROOT, 'identicons') (https://en.wikipedia.org/wiki/Identicon)
project (site/global) settings must override app-defaults, i.e. a user who defined MYAPP_IDENTICON_DIR = 's3://myappbucket.example.com/identicons' in the (global) settings.py file for their site should get this value, and not the application's default.
any solution that keeps close to the normal way of using settings (import .. settings; settings.FOO) is superior to a solution that needs new syntax (since new syntax will diverge and we would get new and unique ways to use settings from app to app).
the zen of python probably applies here:
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
(The original post posited the two key problems below, leaving the above assumptions unstated.)
Problem #1: When running unit tests for the app there is no site however, so settings wouldn't have any of the myapp.defaults.
Problem #2: There is also a big problem if myapp.defaults needs to use anything from settings (e.g. settings.DEBUG), since you can't import settings from defaults.py (since that would be a circular import).
To solve problem #1 I created a layer of indirection:
myapp/conf.py
from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

conf = Conf()  # !<-- create Conf instance
and usage:
myapp/views.py
from .conf import conf as settings
...
print(settings.MYAPP_HOME_ROOT)  # will print '/export/home' when used from mysite
This allows the conf.py file to work with an "empty" settings file too, and the myapp code can continue using the familiar settings.XXX.
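For illustration, a sketch (file name hypothetical) of how the wrapper behaves in a bare run with no site settings configured at all:

# sketch_test.py - no DJANGO_SETTINGS_MODULE, no site
from django.conf import settings as django_settings

if not django_settings.configured:
    django_settings.configure()  # "empty" settings, as in a bare unit-test run

from myapp.conf import conf as settings
print(settings.MYAPP_HOME_ROOT)  # '/home': falls back to myapp/defaults.py (non-Windows)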
It doesn't solve problem #2, defining application settings based on e.g. settings.DEBUG. My current solution is to add to the Conf class:
from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

    if settings.DEBUG:
        MYAPP_HOME_ROOT = '/Users'
    else:
        MYAPP_HOME_ROOT = '/data/media'

conf = Conf()  # !<-- create Conf instance
but this is not satisfactory, since mysite can no longer override the setting, and myapp's defaults/settings are now spread over two files...
Is there an easier way to do this?
update-4: "just use the django test runner.."
The app you are testing relies on the Django framework - and you cannot get around the fact that you need to bootstrap the framework first before you can test the app. Part of the bootstrapping process is creating a default settings.py, and further using the django-supplied test runners to ensure that your apps are being tested in the environment that they are likely to be run in.
While that sounds like it ought to be true, it doesn't actually make much sense, e.g. there is no such thing as a default settings.py (at least not for a reusable app). When talking about integration testing it makes sense to test an app with the site/settings/apps/database(s)/cache(s)/resource-limits/etc. that it will encounter in production. For unit testing, however, we want to test just the unit of code that we're looking at - with as few external dependencies (and settings) as possible. The Django test runner(s) do, and should, mock out major parts of the framework, so it can't be said to be running in any "real" environment.
While the Django test runner(s) are great, there is a long list of issues they don't handle. The two biggies for us are (i) running tests sequentially is so slow that the test suite goes unused (< 5 mins when running in parallel, almost an hour when running sequentially), and (ii) sometimes you just need to run tests on big databases (we restore last night's backup to a test database that the tests can run against - much too big for fixtures).
The people who made nose, py.test, twill, Selenium, and any of the fuzz testing tools really do know testing well, since that is their only focus. It would be a shame to not be able to draw on their collective experience.
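For reference, a common way to run an app's unit tests standalone, e.g. under nose or py.test, is a small bootstrap that configures minimal settings before any models are imported; a sketch, with names assumed rather than taken from the question:

# conftest.py (or a tiny runtests.py)
import django
from django.conf import settings

if not settings.configured:
    settings.configure(
        INSTALLED_APPS=['myapp'],
        DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3',
                               'NAME': ':memory:'}},
    )
    django.setup()  # Django >= 1.7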
I am not the first to have encountered this problem, and there doesn't look to be an easy or common solution. Here are two projects that have different solutions:
Update, python-oidc-provider method:
The python-oidc-provider package (https://github.com/juanifioren/django-oidc-provider) has another creative way to solve the app-settings/defaults problem. It uses properties to define defaults in a myapp/settings.py file:
from django.conf import settings

class DefaultSettings(object):
    @property
    def MYAPP_HOME_ROOT(self):
        return ...

default_settings = DefaultSettings()

def get(name):
    value = None
    try:
        value = getattr(default_settings, name)
        value = getattr(settings, name)
    except AttributeError:
        if value is None:
            raise Exception("Missing setting: " + name)
    return value
using a setting inside myapp becomes:
from myapp import settings
print(settings.get('MYAPP_HOME_ROOT'))
good: solves problem #2 (using settings when defining defaults), solves problem #1 (using default settings from tests).
bad: different syntax for accessing settings (settings.get('FOO') vs the normal settings.FOO), myapp cannot provide defaults for settings that will be used outside of myapp (the settings you get from from django.conf import settings will not contain any defaults from myapp). External code can do from myapp import settings to get regular settings and myapp defaults, but this breaks down if more than one app wants to do this...
Update2, the django-appconf package:
(Note: not related to Django's AppConfig..)
With django-appconf, app settings are created in myapp/conf.py, which needs to be loaded early - so you should probably import conf.py from your app's models.py:
from django.conf import settings
from appconf import AppConf

class MyAppConf(AppConf):
    HOME_ROOT = '...'
usage:
from myapp.conf import settings
print(settings.MYAPP_HOME_ROOT)
AppConf will automagically add the MYAPP_ prefix, and also automagically detect if MYAPP_HOME_ROOT has been redefined/overridden in the project's settings.
pro: simple to use, solves problem #1 (accessing app-settings from tests), and problem #2 (using settings when defining defaults). As long as the conf.py file is loaded early, external code should be able to use defaults defined in myapp.
con: significantly magical. The name of the setting in conf.py is different from its usage (since appconf automatically adds the MYAPP_ prefix). External/opaque dependency.
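To make the prefix mapping concrete, a short sketch (values reused from the examples above):

# myapp/conf.py defines:
#     class MyAppConf(AppConf):
#         HOME_ROOT = '...'
# which appconf exposes as settings.MYAPP_HOME_ROOT, so a project override
# in mysite/settings.py is just the plain, prefixed name:
MYAPP_HOME_ROOT = '/export/home'  # takes precedence over the AppConf default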

I've just created django-app-defaults based on all of the requirements. It's basically a generalization of the second approach highlighted in the question (class Conf(object):).
Usage:
# my_app/defaults.py
# `django.conf.settings` or any other module can be imported if needed
# required
DEFAULT_SETTINGS_MODULE = True
# define default settings below
MY_DEFAULT_SETTING = "yey"
Then anywhere within your project:
from app_defaults import settings
print(settings.MY_DEFAULT_SETTING)
# yey
# All `django.conf.settings` are also available
print(settings.DEBUG)
# True
To load default settings for a single app instead of all of the apps, just do:
# Note: when apps or modules are explicitly passed,
# the `DEFAULT_SETTINGS_MODULE` var is not required
from app_defaults import Settings
settings = Settings(apps=["my_app"])
# or
from my_app import defaults
settings = Settings(modules=[defaults])

I have written a django package for the management of app settings called django-pluggableappsettings.
It's a package that allows you to define sensible defaults for your settings but also adds advanced features like type checking or callable settings. It makes use of metaclasses to allow for an easy definition of the app's settings. Of course this adds an external dependency to your project.
Edit 1:
Example usage could be as follows:
Install the package using pip:
pip install django-pluggableappsettings
Create your AppSettings class in any of your project's files. E.g. in 'app_settings.py'.
app_settings.py
from django_pluggableappsettings import AppSettings, Setting, IntSetting

class MyAppSettings(AppSettings):
    MY_SETTING = Setting('DEFAULT_VALUE')
    # Checks for "MY_SETTING" in django.conf.settings and
    # returns 'DEFAULT_VALUE' if it is not present

    ANOTHER_SETTING = IntSetting(42, aliases=['OTHER_SETTING_NAME'])
    # Checks for "ANOTHER_SETTING" or "OTHER_SETTING_NAME" in
    # django.conf.settings and returns 42 if it is not present.
    # Also validates that the given value is of type int
Import your MyAppSettings in any file and access its attributes
from mypackage.app_settings import MyAppSettings
MyAppSettings.MY_SETTING
MyAppSettings.ANOTHER_SETTING
Note that the settings are only initialized on first access; if you never access a setting, its presence in django.conf.settings is never checked.

Problem #1: When running unit tests for the app there is no site however, so settings wouldn't have any of the myapp.defaults.
This problem is solved by using the testing framework that comes with django (see the documentation) as it bootstraps the test environment correctly for you.
Keep in mind that django tests always run with DEBUG=False
Problem #2: There is also a big problem if myapp.defaults needs to use anything from settings (e.g. settings.DEBUG), since you can't import settings from defaults.py (since that would be a circular import).
This is not really a problem; once you import from myapp.defaults in myapp.settings, everything in settings will be in scope. So you don't need to import DEBUG; it is already available to you, as it's in the global scope.

When I am structuring apps, I try to define functionality in the form of mixins. Each setting should be picked up by one functionality mixin, if possible.
So in your example from above:
from django.conf import settings

class AppHomeRootMixin:
    home_root = getattr(settings, "MYAPP_HOME_ROOT", "/default/path/here")
Usage:
from django.views.generic import TemplateView

class MyView(AppHomeRootMixin, TemplateView):
    def dispatch(self, *args, **kwargs):
        print(self.home_root)
        return super().dispatch(*args, **kwargs)
This is really easy for another developer to see exactly what is going on, alleviates the need for third-party or first-party "magic", and allows us to think about things in terms of functionalities, rather than in terms of individual settings.
I just think that the settings layer of Django should be kept as simple as possible, and any complexity should be the responsibility of the view layer. I have run into a lot of confusing settings configurations that were created by other developers, and those configurations consumed a lot of my time when there was nobody there to explain what was going on under the hood.

How to manage local vs production settings in Django?

What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method.
Two Scoops of Django: Best Practices for Django 1.5 suggests using version control for your settings files and storing the files in a separate directory:
project/
    app1/
    app2/
    project/
        __init__.py
        settings/
            __init__.py
            base.py
            local.py
            production.py
    manage.py
The base.py file contains common settings (such as MEDIA_ROOT or ADMIN), while local.py and production.py have site-specific settings:
In the base file settings/base.py:
INSTALLED_APPS = (
    # common apps...
)
In the local development settings file settings/local.py:
from project.settings.base import *

DEBUG = True

INSTALLED_APPS += (
    'debug_toolbar',  # and other apps for local development
)
In the production settings file settings/production.py:
from project.settings.base import *

DEBUG = False

INSTALLED_APPS += (
    # other apps for production site
)
Then when you run django, you add the --settings option:
# Running django for local development
$ ./manage.py runserver 0:8000 --settings=project.settings.local
# Running django shell on the production site
$ ./manage.py shell --settings=project.settings.production
The authors of the book have also put up a sample project layout template on Github.
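The same selection can also be made with the standard DJANGO_SETTINGS_MODULE environment variable, which avoids repeating the flag on every command:

# Equivalent, set once per shell/session
$ export DJANGO_SETTINGS_MODULE=project.settings.local
$ ./manage.py runserver 0:8000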
In settings.py:
try:
    from local_settings import *
except ImportError as e:
    pass
You can override what's needed in local_settings.py; it should stay out of your version control then. But since you mention copying, I'm guessing you don't use any ;)
Instead of settings.py, use this layout:
.
└── settings/
   ├── __init__.py <= not versioned
   ├── common.py
   ├── dev.py
   └── prod.py
common.py is where most of your configuration lives.
prod.py imports everything from common, and overrides whatever it needs to override:
from __future__ import absolute_import # optional, but I like it
from .common import *
# Production overrides
DEBUG = False
#...
Similarly, dev.py imports everything from common.py and overrides whatever it needs to override.
Finally, __init__.py is where you decide which settings to load, and it's also where you store secrets (therefore this file should not be versioned):
from __future__ import absolute_import
from .prod import * # or .dev if you want dev
##### DJANGO SECRETS
SECRET_KEY = '(3gd6shenud#&57...'
DATABASES['default']['PASSWORD'] = 'f9kGH...'
##### OTHER SECRETS
AWS_SECRET_ACCESS_KEY = "h50fH..."
What I like about this solution is:
Everything is in your versioning system, except secrets
Most configuration is in one place: common.py.
Prod-specific things go in prod.py, dev-specific things go in dev.py. It's simple.
You can override stuff from common.py in prod.py or dev.py, and you can override anything in __init__.py.
It's straightforward python. No re-import hacks.
I use a slightly modified version of the "if DEBUG" style of settings that Harper Shelby posted. Obviously depending on the environment (win/linux/etc.) the code might need to be tweaked a bit.
I was in the past using "if DEBUG", but I found that occasionally I needed to do testing with DEBUG set to False. What I really wanted was to distinguish whether the environment was production or development, which gave me the freedom to choose the DEBUG level.
import os

PRODUCTION_SERVERS = ['WEBSERVER1', 'WEBSERVER2']

if os.environ['COMPUTERNAME'] in PRODUCTION_SERVERS:
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
TEMPLATE_DEBUG = DEBUG

# ...

if PRODUCTION:
    DATABASE_HOST = '192.168.1.1'
else:
    DATABASE_HOST = 'localhost'
I'd still consider this way of handling settings a work in progress. I haven't seen any one way of handling Django settings that covered all the bases and at the same time wasn't a total hassle to set up (I'm not down with the 5x settings files methods).
I use a settings_local.py and a settings_production.py. After trying several options I've found that it's easy to waste time with complex solutions when simply having two settings files feels easy and fast.
When you use mod_python/mod_wsgi for your Django project you need to point it to your settings file. If you point it to app/settings_local.py on your local server and app/settings_production.py on your production server then life becomes easy. Just edit the appropriate settings file and restart the server (Django development server will restart automatically).
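With mod_wsgi the "pointing" typically happens in the WSGI entry script; a sketch, reusing the module names from this answer:

# wsgi.py on the production server
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings_production')
application = get_wsgi_application()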
TL;DR: The trick is to modify os.environ before you import settings/base.py in any settings/<purpose>.py; this will greatly simplify things.
Just thinking about all these intertwining files gives me a headache.
Combining, importing (sometimes conditionally), overriding, patching of what was already set in case DEBUG setting changed later on.
What a nightmare!
Through the years I went through all different solutions. They all somewhat work, but are so painful to manage.
WTF! Do we really need all that hassle? We started with just one settings.py file.
Now we need documentation just to combine all of these together in the correct order!
I hope I finally hit the (my) sweet spot with the solution below.
Let's recap the goals (some common, some mine)
Keep secrets a secret — don't store them in a repo!
Set/read keys and secrets through environment settings, 12 factor style.
Have sensible fallback defaults. Ideally for local development you don't need anything more besides the defaults.
…but try to keep defaults production-safe. It's better to miss a setting override locally
than to have to remember to adjust the default settings to be safe for production.
Have the ability to switch DEBUG on/off in a way that can have an effect on other settings (e.g. using compressed javascript or not).
Switching between purpose settings, like local/testing/staging/production, should be based only on DJANGO_SETTINGS_MODULE, nothing more.
…but allow further parameterization through environment settings like DATABASE_URL.
…also allow them to use different purpose settings and run them locally side by side, eg. production setup on local developer machine, to access production database or smoke test compressed style sheets.
Fail if an environment variable is not explicitly set (requiring an empty value at minimum), especially in production, eg. EMAIL_HOST_PASSWORD.
Respond to default DJANGO_SETTINGS_MODULE set in manage.py during django-admin startproject
Keep conditionals to a minimum; if the condition is the purposed environment type (e.g. for production, set the log file and its rotation), override settings in the associated purposed settings file.
Do not's
Do not let django read the DJANGO_SETTINGS_MODULE setting from a file.
Ugh! Think of how meta this is. If you need to have a file (like docker
env), read that into the environment before starting up a django process.
Do not override DJANGO_SETTINGS_MODULE in your project/app code, eg. based on hostname or process name.
If you are too lazy to set the environment variable (like for setup.py test), do it in tooling just before you run your project code.
Avoid magic and patching of how django reads its settings; preprocess the settings but do not interfere afterwards.
No complicated logic-based nonsense. Configuration should be fixed and materialized, not computed on the fly.
Providing fallback defaults is just enough logic here.
Do you really want to debug why locally you have the correct set of settings, but in production, on a remote server,
on one of a hundred machines, something computed differently? Oh! Unit tests? For settings? Seriously?
Solution
My strategy consists of the excellent django-environ used with ini-style files,
providing os.environ defaults for local development, and some minimal and short settings/<purpose>.py files that
import settings/base.py AFTER os.environ was set from an INI file. This effectively gives us a kind of settings injection.
The trick here is to modify os.environ before you import settings/base.py.
To see the full example go to the repo: https://github.com/wooyek/django-settings-strategy
.
│   manage.py
├───data
└───website
    ├───settings
    │   │   __init__.py   <-- imports local for compatibility
    │   │   base.py       <-- almost all the settings, reads from the process environment
    │   │   local.py      <-- a few modifications for local development
    │   │   production.py <-- ideally is empty and everything is in base
    │   │   testing.py    <-- mimics production with reasonable exceptions
    │   │   .env          <-- for local use, not kept in repo
    │   __init__.py
    │   urls.py
    │   wsgi.py
settings/.env
Defaults for local development. A secret file, mostly to set required environment variables.
Set them to empty values if they are not required in local development.
We provide defaults here and not in settings/base.py, to fail on any other machine if they're missing from the environment.
settings/local.py
What happens in here is loading the environment from settings/.env, then importing common settings
from settings/base.py. After that we can override a few to ease local development.
import logging
import environ

logging.debug("Settings loading: %s" % __file__)

# This will read missing environment variables from a file
# We want to do this before loading the base settings, as they may depend on the environment
environ.Env.read_env(DEBUG='True')

from .base import *

ALLOWED_HOSTS += [
    '127.0.0.1',
    'localhost',
    '.example.com',
    'vagrant',
]

# https://docs.djangoproject.com/en/1.6/topics/email/#console-backend
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

LOGGING['handlers']['mail_admins']['email_backend'] = 'django.core.mail.backends.dummy.EmailBackend'

# Sync task testing
# http://docs.celeryproject.org/en/2.5/configuration.html?highlight=celery_always_eager#celery-always-eager
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
settings/production.py
For production we should not expect an environment file, but it's easier to have one if we're testing something.
But anyway, let's provide a few defaults inline, so settings/base.py can respond accordingly.
environ.Env.read_env(Path(__file__) / "production.env", DEBUG='False', ASSETS_DEBUG='False')
from .base import *
The main points of interest here are the DEBUG and ASSETS_DEBUG overrides;
they will be applied to os.environ ONLY if they are MISSING from the environment and the file.
These will be our production defaults - no need to put them in the environment or file, but they can be overridden if needed. Neat!
settings/base.py
These are your mostly vanilla django settings, with a few conditionals and lots of reading from the environment.
Almost everything is in here, keeping all the purposed environments consistent and as similar as possible.
The main differences are below (I hope these are self-explanatory):
import os

import environ

# https://github.com/joke2k/django-environ
env = environ.Env()

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Where BASE_DIR is a django source root, ROOT_DIR is a whole project root
# It may differ from BASE_DIR, e.g. when your django project code is in a `src` folder
# This may help to separate python modules and *django apps* from other stuff
# like documentation, fixtures, docker settings
ROOT_DIR = BASE_DIR

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env('DEBUG', default=False)

INTERNAL_IPS = [
    '127.0.0.1',
]

ALLOWED_HOSTS = []
if 'ALLOWED_HOSTS' in os.environ:
    hosts = os.environ['ALLOWED_HOSTS'].split(" ")
    BASE_URL = "https://" + hosts[0]
    for host in hosts:
        host = host.strip()
        if host:
            ALLOWED_HOSTS.append(host)

SECURE_SSL_REDIRECT = env.bool('SECURE_SSL_REDIRECT', default=False)

# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases

if "DATABASE_URL" in os.environ:  # pragma: no cover
    # Enable database config through environment
    DATABASES = {
        # Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
        'default': env.db(),
    }

    # Make sure we have all the settings we need
    # DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
    DATABASES['default']['TEST'] = {'NAME': os.environ.get("DATABASE_TEST_NAME", None)}
    DATABASES['default']['OPTIONS'] = {
        'options': '-c search_path=gis,public,pg_catalog',
        'sslmode': 'require',
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            # 'ENGINE': 'django.contrib.gis.db.backends.spatialite',
            'NAME': os.path.join(ROOT_DIR, 'data', 'db.dev.sqlite3'),
            'TEST': {
                'NAME': os.path.join(ROOT_DIR, 'data', 'db.test.sqlite3'),
            }
        }
    }

STATIC_ROOT = os.path.join(ROOT_DIR, 'static')

# django-assets
# http://django-assets.readthedocs.org/en/latest/settings.html

ASSETS_LOAD_PATH = STATIC_ROOT
ASSETS_ROOT = os.path.join(ROOT_DIR, 'assets', "compressed")
ASSETS_DEBUG = env('ASSETS_DEBUG', default=DEBUG)  # Disable when testing compressed files in DEBUG mode
if ASSETS_DEBUG:
    ASSETS_URL = STATIC_URL
    ASSETS_MANIFEST = "json:{}".format(os.path.join(ASSETS_ROOT, "manifest.json"))
else:
    ASSETS_URL = STATIC_URL + "assets/compressed/"
    ASSETS_MANIFEST = "json:{}".format(os.path.join(STATIC_ROOT, 'assets', "compressed", "manifest.json"))
ASSETS_AUTO_BUILD = ASSETS_DEBUG
ASSETS_MODULES = ('website.assets',)
The last bit shows the power here. ASSETS_DEBUG has a sensible default,
which can be overridden in settings/production.py, and even that can be overridden by an environment setting! Yay!
In effect we have a mixed hierarchy of importance:
settings/<purpose>.py - sets defaults based on purpose, does not store secrets
settings/base.py - is mostly controlled by the environment
process environment settings - 12 factor baby!
settings/.env - local defaults for easy startup
I manage my configurations with the help of django-split-settings.
It is a drop-in replacement for the default settings. It is simple, yet configurable. And refactoring of your existing settings is not required.
Here's a small example (file example/settings/__init__.py):
from split_settings.tools import optional, include
import os

if os.environ['DJANGO_SETTINGS_MODULE'] == 'example.settings':
    include(
        'components/default.py',
        'components/database.py',
        # This file may be missing:
        optional('local_settings.py'),
        scope=globals()
    )
That's it.
Update
I wrote a blog post about managing django's settings with django-split-settings. Have a look!
Remember that settings.py is a live code file. Assuming that you don't have DEBUG set on production (which is a best practice), you can do something like:
if DEBUG:
    STATIC_PATH = '/path/to/dev/files'
else:
    STATIC_PATH = '/path/to/production/files'
Pretty basic, but you could, in theory, go up to any level of complexity based on just the value of DEBUG - or any other variable or code check you wanted to use.
The problem with most of these solutions is that you either have your local settings applied before the common ones, or after them.
So it's impossible to override, at the same time, things like:
the env-specific settings define the addresses for the memcached pool, and in the main settings file this value is used to configure the cache backend
the env-specific settings add or remove apps/middleware relative to the default ones
One solution can be implemented using "ini"-style config files with the ConfigParser class. It supports multiple files, lazy string interpolation, default values and a lot of other goodies.
Once a number of files have been loaded, more files can be loaded and their values will override the previous ones, if any.
You load one or more config files, depending on the machine address, environment variables and even values in previously loaded config files. Then you just use the parsed values to populate the settings.
One strategy I have successfully used has been:
Load a default defaults.ini file
Check the machine name, and load all files which match the reversed FQDN, from the shortest match to the longest match (so I loaded net.ini, then net.domain.ini, then net.domain.webserver01.ini, each one possibly overriding values from the previous). This also accounts for developers' machines, so each one could set up its preferred database driver, etc. for local development
Check if there is a "cluster name" declared, and in that case load cluster.cluster_name.ini, which can define things like database and cache IPs
As an example of something you can achieve with this, you can define a "subdomain" value per-env, which is then used in the default settings (as hostname: %(subdomain)s.whatever.net) to define all the necessary hostnames and cookie things django needs to work.
This is as DRY as I could get: most (existing) files had just 3 or 4 settings. On top of this I had to manage customer configuration, so an additional set of configuration files (with things like database names, users and passwords, assigned subdomain, etc.) existed, one or more per customer.
One can scale this as low or as high as necessary; you just put in the config file the keys you want to configure per-environment, and once there's a need for a new config, put the previous value in the default config and override it where necessary.
This system has proven reliable and works well with version control. It has been used for a long time managing two separate clusters of applications (15 or more separate instances of the django site per machine), with more than 50 customers, where the clusters were changing size and members depending on the mood of the sysadmin...
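A minimal sketch of the cascade described above, using the modern configparser module (file and section names hypothetical; missing files are silently skipped, which keeps the cascade cheap):

import configparser
import socket

parser = configparser.ConfigParser()
parser.read('defaults.ini')  # base defaults always load first

# reversed-FQDN cascade: net.ini, net.domain.ini, net.domain.webserver01.ini, ...
parts = socket.getfqdn().split('.')[::-1]
for i in range(1, len(parts) + 1):
    parser.read('.'.join(parts[:i]) + '.ini')  # later files override earlier values

DATABASE_HOST = parser.get('database', 'host', fallback='localhost')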
I am also working with Laravel and I like the implementation there. I tried to mimic it and combine it with the solution proposed by T. Stone (look above):
import re
import socket

PRODUCTION_SERVERS = ['*.webfaction.com', '*.whatever.com']

def check_env():
    for item in PRODUCTION_SERVERS:
        match = re.match(r"(^." + item + "$)", socket.gethostname())
        if match:
            return True

if check_env():
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
Maybe something like this would help you.
My solution to that problem is also somewhat of a mix of some solutions already stated here:
I keep a file called local_settings.py that has the content USING_LOCAL = True in dev and USING_LOCAL = False in prod
In settings.py I do an import on that file to get the USING_LOCAL setting
I then base all my environment-dependent settings on that one:
DEBUG = USING_LOCAL

if USING_LOCAL:
    ...  # dev database settings
else:
    ...  # prod database settings
I prefer this to having two separate settings.py files that I need to maintain, as I can keep my settings structured in a single file more easily than having them spread across several files. Like this, when I update a setting I don't forget to do it for both environments.
Of course every method has its disadvantages, and this one is no exception. The problem here is that I can't overwrite the local_settings.py file whenever I push my changes into production, meaning I can't just copy all files blindly, but that's something I can live with.
For most of my projects I use following pattern:
Create settings_base.py where I store settings that are common for all environments
Whenever I need to use new environment with specific requirements I create new settings file (eg. settings_local.py) which inherits contents of settings_base.py and overrides/adds proper settings variables (from settings_base import *)
(To run manage.py with custom settings file you simply use --settings command option: manage.py <command> --settings=settings_you_wish_to_use.py)
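A sketch of the inheritance step (module contents assumed, not taken from the answer):

# settings_local.py
from settings_base import *  # inherit everything common

DEBUG = True  # then override or add environment-specific variables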
1 - Create a new folder inside your app and name it settings.
2 - Now create a new __init__.py file in it, and inside it write:
from .base import *

try:
    from .local import *
except:
    pass

try:
    from .production import *
except:
    pass
3 - Create three new files in the settings folder, named local.py, production.py, and base.py.
4 - Inside base.py, copy all the content of your previous settings.py file, and rename the old file to something different, say old_settings.py.
5 - In base.py, change your BASE_DIR path to point to the new location of the settings
Old path->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
New path ->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
This way, the project directory stays structured and manageable across production and local development.
I use a variation of what jpartogi mentioned above, that I find a little shorter:
import platform
from django.core.management import execute_manager

computername = platform.node()

try:
    settings = __import__(computername + '_settings')
except ImportError:
    import sys
    sys.stderr.write("Error: Can't find the file '%r_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % (computername, __file__))
    sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
Basically on each computer (development or production) I have the appropriate hostname_settings.py file that gets dynamically loaded.
There is also Django Classy Settings. I personally am a big fan of it. It's built by one of the most active people on the Django IRC. You would use environment vars to set things.
http://django-classy-settings.readthedocs.io/en/latest/
Making multiple versions of settings.py is an anti-pattern for the 12 Factor App methodology.
Use python-decouple or django-environ instead.
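For instance, a minimal sketch with python-decouple (setting names are examples, not from the question):

# settings.py - values come from the process environment or a .env file
from decouple import config

SECRET_KEY = config('SECRET_KEY')                  # required: raises if unset
DEBUG = config('DEBUG', default=False, cast=bool)  # safe production default
DATABASE_HOST = config('DATABASE_HOST', default='localhost')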
In order to use different settings configurations in different environments, create different settings files. Then, in your deployment script, start the server using the --settings=<my-settings.py> parameter, via which you can use different settings in different environments.
Benefits of using this approach:
Your settings will be modular based on each environment
You may import the master_settings.py containing the base configuration in the environment_configuration.py and override the values that you want to change in that environment.
If you have a huge team, each developer may have their own local_settings.py which they can add to the code repository without any risk of modifying the server configuration. You can add these local settings to .gitignore if you use git, or .hgignore if you use Mercurial for version control (or any other). That way local settings won't even be part of the actual code base, keeping it clean.
I had my settings split as follows
settings/
|
|- base.py
|- dev.py
|- prod.py
We have 3 environments
dev
staging
production
Now obviously staging and production should have environments that are as similar as possible, so we kept prod.py for both.
But there was a case where I had to identify that the running server is a production server. @T. Stone's answer helped me write the check as follows.
from socket import gethostname, gethostbyname

PROD_HOSTS = ["webserver1", "webserver2"]

DEBUG = False
ALLOWED_HOSTS = [gethostname(), gethostbyname(gethostname())]

if any(host in PROD_HOSTS for host in ALLOWED_HOSTS):
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
I differentiate it in manage.py, and created two separate settings files: local_settings.py and prod_settings.py.
In manage.py I check whether the server is a local server or a production server. If it is a local server it loads local_settings.py, and if it is a production server it loads prod_settings.py. Basically this is how it looks:
#!/usr/bin/env python
import sys
import socket
from django.core.management import execute_manager

ipaddress = socket.gethostbyname(socket.gethostname())

if ipaddress == '127.0.0.1':
    try:
        import local_settings  # Assumed to be in the same directory.
        settings = local_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'local_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)
else:
    try:
        import prod_settings  # Assumed to be in the same directory.
        settings = prod_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'prod_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file prod_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
I found it easier to separate the settings into two separate files instead of doing lots of ifs inside the settings file.
As an alternative to maintaining different files, if you will:
If you are using git or any other VCS to push code from local to the server, what you can do is add the settings file to .gitignore.
This will allow you to have different content in both places without any problem. So on the server you can configure an independent version of settings.py, and any changes made locally won't reflect on the server and vice versa.
In addition, it will keep the settings.py file off github too - a big fault I have seen many newbies commit.
I think the best solution is suggested by @T. Stone, but I don't know why they just don't use the DEBUG flag in Django. I wrote the code below for my website:
if DEBUG:
    from .local_settings import *
Simple solutions are always better than complex ones.
I found the responses here very helpful. (Has this been more definitively solved? The last response was a year ago.) After considering all the approaches listed, I came up with a solution that I didn't see listed here.
My criteria were:
Everything should be in source control. I don't like fiddly bits lying around.
Ideally, keep settings in one file. I forget things if I'm not looking right at them :)
No manual edits to deploy. Should be able to test/push/deploy with a single fabric command.
Avoid leaking development settings into production.
Keep as close as possible to the "standard" (*cough*) Django layout.
I thought switching on the host machine made some sense, but then figured the real issue here is different settings for different environments, and had an aha moment. I put this code at the end of my settings.py file:
import os

try:
    os.environ['DJANGO_DEVELOPMENT_SERVER']  # throws KeyError if unset
    DEBUG = True
    TEMPLATE_DEBUG = True
    # This is naive but possible. Could also redeclare full app set to control ordering.
    # Note that it requires a list rather than the generated tuple.
    INSTALLED_APPS.extend([
        'debug_toolbar',
        'django_nose',
    ])
    # Development database settings, alternate static/media paths, etc...
except KeyError:
    print('DJANGO_DEVELOPMENT_SERVER environment var not set; using production settings')
This way, the app defaults to production settings, which means you are explicitly "whitelisting" your development environment. It is much safer to forget to set the environment variable locally than if it were the other way around and you forgot to set something in production and let some dev settings be used.
When developing locally, either from the shell or in a .bash_profile or wherever:
$ export DJANGO_DEVELOPMENT_SERVER=yep
(Or if you're developing on Windows, set it via the Control Panel or whatever it's called these days... Windows always made it so obscure to set environment variables.)
With this approach, the dev settings are all in one (standard) place, and simply override the production ones where needed. Any mucking around with development settings should be completely safe to commit to source control with no impact on production.

Using the correct settings constant in Django view

I am moderately experienced with Django and am setting up a new project using the now-recommended "multiple settings files" pattern. Each settings.py file will import a base settings.py and then override specific settings. There will be one file for each staging environment (dev, qa, prod). When starting the Django process, I am making sure to set the settings flag to the appropriate settings.py file, like so:
manage.py runserver --settings=myproj.settings.dev
or
manage.py runfcgi --settings=myproj.settings.prod method=threaded daem...[more flags]
My question is: how do I get an environment-specific constant into my view functions? I have some specific constants (curl cert/host/port) for my project that are different for each environment. Currently I have only figured out how to include the environment in the import path, but this won't work for me. If someone can please help, that would be awesome.
Here is an example views.py file that should help make my question a little clearer.
# A sample Django view.py file
from django.template.response import TemplateResponse
from myproj import settings

def index(request):
    # these assignments work, but I have to add conditional logic to pick the correct
    # value, I would prefer not to do this.
    dev_curl_host = settings.dev.CONNECT['host']
    qa_curl_host = settings.qa.CONNECT['host']
    prod_curl_host = settings.prod.CONNECT['host']

    # I want to do something like this, where the settings import gets assigned the
    # correct values for the staging environment.
    # It seems like Django is already doing this with settings like DEBUG, how?
    curl_host = settings.CONNECT['host']
Instead of
from myproj import settings
do
from django.conf import settings
That's why DEBUG works:
https://docs.djangoproject.com/en/dev/topics/settings/#using-settings-in-python-code
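Putting it together, the view from the question reduces to something like this sketch (CONNECT is the question's own setting; the template name is assumed):

# views.py
from django.template.response import TemplateResponse
from django.conf import settings

def index(request):
    # resolved against whichever module --settings / DJANGO_SETTINGS_MODULE named
    curl_host = settings.CONNECT['host']
    return TemplateResponse(request, 'index.html', {'curl_host': curl_host})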

Create constants using a "settings" module?

I am relatively new to Python. I am looking to create a "settings" module where various application-specific constants will be stored.
Here is how I am wanting to set up my code:
settings.py
CONSTANT = 'value'
script.py
import settings

def func():
    var = CONSTANT
    # do some more coding
    return var
I am getting a Python error stating:
global name 'CONSTANT' is not defined.
I have noticed that in Django's source code their settings.py file has constants named just like I do. I am confused about how they can be imported to a script and referenced through the application.
EDIT
Thank you for all your answers! I tried the following:
import settings
print settings.CONSTANT
I get the same error
ImportError: cannot import name CONSTANT
The easiest way to do this is to just have settings be a module.
(settings.py)
CONSTANT1 = "value1"
CONSTANT2 = "value2"
(consumer.py)
import settings
print(settings.CONSTANT1)
print(settings.CONSTANT2)
When you import a python module, you have to prefix the variables that you pull from it with the module name. If you know exactly what values you want to use from it in a given file, and you are not worried about them changing during execution, then you can do
from settings import CONSTANT1, CONSTANT2
print(CONSTANT1)
print(CONSTANT2)
but I wouldn't get carried away with that last one. It makes it difficult for people reading your code to tell where values are coming from, and precludes those values being updated if another client module changes them. One final way to do it is
import settings as s
print(s.CONSTANT1)
print(s.CONSTANT2)
This saves you typing, will propagate updates, and only requires readers to remember that anything prefixed with s. is from the settings module.
step 1: create a new file settings.py in the same directory for easier access.
# database configuration settings
database = dict(
    DATABASE = "mysql",
    USER = "Lark",
    PASS = ""
)

# application predefined constants
app = dict(
    VERSION = 1.0,
    GITHUB = "{url}"
)
step 2: import the settings module into your application file.
import settings as s  # s aliases settings; settings is the actual file, you do not have to add .py
print(s.database['DATABASE']) # should output mysql
print(s.app['VERSION']) # should output 1.0
if you do not like to use an alias like s, you can use a different syntax
from settings import database, app
print(database['DATABASE']) # should output mysql
print(app['VERSION']) # should output 1.0
notice that with the second import method you can use the dict names directly
A small tip: you can import all the code in the settings file by using *, in case you have a large file and you will be using most of its settings in your application
from settings import *  # * imports everything from the file; it will work like step 2

print(database['USER'])  # should output Lark
print(app['VERSION'])    # should output 1.0
I hope that helps.
When you import settings, a module object called settings is placed in the global namespace - and this object carries everything that was in settings.py as attributes. I.e. outside of settings.py, you refer to CONSTANT as settings.CONSTANT.
Leave your settings.py exactly as it is, then you can use it just as Django does:
import settings

def func():
    var = settings.CONSTANT
...Or, if you really want all the constants from settings.py to be imported into the global namespace, you can run
from settings import *
...but otherwise using settings.CONSTANT, as everyone else has mentioned here, is quite right.
See the answer I posted to Can I prevent modifying an object in Python? which does what you want (as well as forcing the use of UPPERCASE identifiers). It might actually be a better answer for this question than it was for the other.
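The linked answer isn't reproduced here, but the idea can be sketched as a module that replaces itself with a write-protected object, so reassignment raises at runtime (a known Python trick, shown as an assumption of what that answer does, not a copy of it):

# settings.py - the module replaces itself with a read-only holder
import sys

class _ReadOnlySettings(object):
    CONSTANT = 'value'

    def __setattr__(self, name, value):
        raise AttributeError("settings are read-only")

sys.modules[__name__] = _ReadOnlySettings()
# elsewhere: import settings; settings.CONSTANT works,
# but settings.CONSTANT = 'x' raises AttributeError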
This way is more efficient since it loads/evaluates your settings variables only once. It works well for all my Python projects.
pip install python-settings
Docs here: https://github.com/charlsagente/python-settings
You need a settings.py file with all your defined constants like:
# settings.py
DATABASE_HOST = '10.0.0.1'
Then you need to either set an env variable (export SETTINGS_MODULE=settings) or manually call the configure method:
# something_else.py
from python_settings import settings
from . import settings as my_local_settings
settings.configure(my_local_settings) # configure() receives a python module
The utility also supports lazy initialization for heavy-to-load objects, so when you run your python project it loads faster, since it only evaluates a settings variable when it's needed
# settings.py
from python_settings import LazySetting
from my_awesome_library import HeavyInitializationClass # Heavy to initialize object
LAZY_INITIALIZATION = LazySetting(HeavyInitializationClass, "127.0.0.1:4222")
# LazySetting(Class, *args, **kwargs)
Just configure once and now call your variables wherever needed:
# my_awesome_file.py
from python_settings import settings
print(settings.DATABASE_HOST) # Will print '10.0.0.1'
I'm new to Python, but if we define a constant as a function:
in settings.py
def CONST1():
    return "some value"
main.py
import settings

print(settings.CONST1())  # takes the constant value
Here I see only one advantage: the value can't be changed, but it works like a function.
Try this:
In settings.py:
CONSTANT = 5
In your main file:
from settings import CONSTANT

class A:
    b = CONSTANT

    def printb(self):
        print(self.b)
I think your above error is coming from the settings file being imported too late. Make sure it's at the top of the file.
Also worth checking out is the simple-settings project, which allows you to feed the settings into your script at runtime, allowing for environment-specific settings (think dev, test, prod, ...)
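A sketch of the simple-settings pattern (module name hypothetical; the settings module is chosen at startup via --settings or the SIMPLE_SETTINGS environment variable, per its documentation):

# app.py - run as: python app.py --settings=settings_dev
from simple_settings import settings

print(settings.DATABASE_HOST)  # resolved from the module named at startup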

How to manage local vs production settings in Django?

What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method.
Two Scoops of Django: Best Practices for Django 1.5 suggests using version control for your settings files and storing the files in a separate directory:
project/
app1/
app2/
project/
__init__.py
settings/
__init__.py
base.py
local.py
production.py
manage.py
The base.py file contains common settings (such as MEDIA_ROOT or ADMIN), while local.py and production.py have site-specific settings:
In the base file settings/base.py:
INSTALLED_APPS = (
# common apps...
)
In the local development settings file settings/local.py:
from project.settings.base import *
DEBUG = True
INSTALLED_APPS += (
'debug_toolbar', # and other apps for local development
)
In the file production settings file settings/production.py:
from project.settings.base import *
DEBUG = False
INSTALLED_APPS += (
# other apps for production site
)
Then when you run django, you add the --settings option:
# Running django for local development
$ ./manage.py runserver 0:8000 --settings=project.settings.local
# Running django shell on the production site
$ ./manage.py shell --settings=project.settings.production
The authors of the book have also put up a sample project layout template on Github.
In settings.py:
try:
from local_settings import *
except ImportError as e:
pass
You can override what needed in local_settings.py; it should stay out of your version control then. But since you mention copying I'm guessing you use none ;)
Instead of settings.py, use this layout:
.
└── settings/
   ├── __init__.py <= not versioned
   ├── common.py
   ├── dev.py
   └── prod.py
common.py is where most of your configuration lives.
prod.py imports everything from common, and overrides whatever it needs to override:
from __future__ import absolute_import # optional, but I like it
from .common import *
# Production overrides
DEBUG = False
#...
Similarly, dev.py imports everything from common.py and overrides whatever it needs to override.
Finally, __init__.py is where you decide which settings to load, and it's also where you store secrets (therefore this file should not be versioned):
from __future__ import absolute_import
from .prod import * # or .dev if you want dev
##### DJANGO SECRETS
SECRET_KEY = '(3gd6shenud#&57...'
DATABASES['default']['PASSWORD'] = 'f9kGH...'
##### OTHER SECRETS
AWS_SECRET_ACCESS_KEY = "h50fH..."
What I like about this solution is:
Everything is in your versioning system, except secrets
Most configuration is in one place: common.py.
Prod-specific things go in prod.py, dev-specific things go in dev.py. It's simple.
You can override stuff from common.py in prod.py or dev.py, and you can override anything in __init__.py.
It's straightforward python. No re-import hacks.
I use a slightly modified version of the "if DEBUG" style of settings that Harper Shelby posted. Obviously depending on the environment (win/linux/etc.) the code might need to be tweaked a bit.
I was in the past using the "if DEBUG" but I found that occasionally I needed to do testing with DEUBG set to False. What I really wanted to distinguish if the environment was production or development, which gave me the freedom to choose the DEBUG level.
PRODUCTION_SERVERS = ['WEBSERVER1','WEBSERVER2',]
if os.environ['COMPUTERNAME'] in PRODUCTION_SERVERS:
PRODUCTION = True
else:
PRODUCTION = False
DEBUG = not PRODUCTION
TEMPLATE_DEBUG = DEBUG
# ...
if PRODUCTION:
DATABASE_HOST = '192.168.1.1'
else:
DATABASE_HOST = 'localhost'
I'd still consider this way of settings a work in progress. I haven't seen any one way to handling Django settings that covered all the bases and at the same time wasn't a total hassle to setup (I'm not down with the 5x settings files methods).
I use a settings_local.py and a settings_production.py. After trying several options I've found that it's easy to waste time with complex solutions when simply having two settings files feels easy and fast.
When you use mod_python/mod_wsgi for your Django project you need to point it to your settings file. If you point it to app/settings_local.py on your local server and app/settings_production.py on your production server then life becomes easy. Just edit the appropriate settings file and restart the server (Django development server will restart automatically).
TL;DR: The trick is to modify os.environment before you import settings/base.py in any settings/<purpose>.py, this will greatly simplify things.
Just thinking about all these intertwining files gives me a headache.
Combining, importing (sometimes conditionally), overriding, patching of what was already set in case DEBUG setting changed later on.
What a nightmare!
Through the years I went through all different solutions. They all somewhat work, but are so painful to manage.
WTF! Do we really need all that hassle? We started with just one settings.py file.
Now we need a documentation just to correctly combine all these together in a correct order!
I hope I finally hit the (my) sweet spot with the solution below.
Let's recap the goals (some common, some mine)
Keep secrets a secret — don't store them in a repo!
Set/read keys and secrets through environment settings, 12 factor style.
Have sensible fallback defaults. Ideally for local development you don't need anything more beside defaults.
…but try to keep defaults production safe. It's better to miss a setting override locally,
than having to remember to adjust default settings safe for production.
Have the ability to switch DEBUG on/off in a way that can have an effect on other settings (eg. using javascript compressed or not).
Switching between purpose settings, like local/testing/staging/production, should be based only on DJANGO_SETTINGS_MODULE, nothing more.
…but allow further parameterization through environment settings like DATABASE_URL.
…also allow them to use different purpose settings and run them locally side by side, eg. production setup on local developer machine, to access production database or smoke test compressed style sheets.
Fail if an environment variable is not explicitly set (requiring an empty value at minimum), especially in production, eg. EMAIL_HOST_PASSWORD.
Respond to default DJANGO_SETTINGS_MODULE set in manage.py during django-admin startproject
Keep conditionals to a minimum, if the condition is the purposed environment type (eg. for production set log file and it's rotation), override settings in associated purposed settings file.
Do not's
Do not let django read DJANGO_SETTINGS_MODULE setting form a file.
Ugh! Think of how meta this is. If you need to have a file (like docker
env) read that into the environment before staring up a django process.
Do not override DJANGO_SETTINGS_MODULE in your project/app code, eg. based on hostname or process name.
If you are lazy to set environment variable (like for setup.py test) do it in tooling just before you run your project code.
Avoid magic and patching of how django reads it's settings, preprocess the settings but do not interfere afterwards.
No complicated logic based nonsense. Configuration should be fixed and materialized not computed on the fly.
Providing a fallback defaults is just enough logic here.
Do you really want to debug, why locally you have correct set of settings but in production on a remote server,
on one of hundred machines, something computed differently? Oh! Unit tests? For settings? Seriously?
Solution
My strategy consists of excellent django-environ used with ini style files,
providing os.environment defaults for local development, some minimal and short settings/<purpose>.py files that have an
import settings/base.py AFTER the os.environment was set from an INI file. This effectively give us a kind of settings injection.
The trick here is to modify os.environment before you import settings/base.py.
To see the full example go do the repo: https://github.com/wooyek/django-settings-strategy
.
│ manage.py
├───data
└───website
├───settings
│ │ __init__.py <-- imports local for compatibility
│ │ base.py <-- almost all the settings, reads from proces environment
│ │ local.py <-- a few modifications for local development
│ │ production.py <-- ideally is empty and everything is in base
│ │ testing.py <-- mimics production with a reasonable exeptions
│ │ .env <-- for local use, not kept in repo
│ __init__.py
│ urls.py
│ wsgi.py
settings/.env
Defaults for local development. A secret file, mostly to set required environment variables.
Set them to empty values if they are not required in local development.
We provide defaults here, and not in settings/base.py, so that the settings fail on any other machine if they're missing from the environment.
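For illustration, a minimal settings/.env might hold something like this (the values are hypothetical, not from the repo):
SECRET_KEY=local-dev-only-key
EMAIL_HOST_PASSWORD=
The empty EMAIL_HOST_PASSWORD satisfies the "explicitly set, even if empty" rule without storing a real secret.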
settings/local.py
What happens here: we load the environment from settings/.env, then import the common settings
from settings/base.py. After that we can override a few to ease local development.
import logging
import environ
logging.debug("Settings loading: %s" % __file__)
# This will read missing environment variables from a file
# We want to do this before loading the base settings, as they may depend on the environment
environ.Env.read_env(DEBUG='True')
from .base import *
ALLOWED_HOSTS += [
'127.0.0.1',
'localhost',
'.example.com',
'vagrant',
]
# https://docs.djangoproject.com/en/1.6/topics/email/#console-backend
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
LOGGING['handlers']['mail_admins']['email_backend'] = 'django.core.mail.backends.dummy.EmailBackend'
# Sync task testing
# http://docs.celeryproject.org/en/2.5/configuration.html?highlight=celery_always_eager#celery-always-eager
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
settings/production.py
For production we should not expect an environment file, but it's easier to have one if we're testing something.
Anyway, let's provide a few defaults inline, so settings/base.py can respond accordingly.
import environ
from pathlib import Path

# Read production.env next to this file (if present); the keyword arguments
# act as fallback defaults when a variable is missing from both sources
environ.Env.read_env(str(Path(__file__).parent / "production.env"), DEBUG='False', ASSETS_DEBUG='False')
from .base import *
The main points of interest here are the DEBUG and ASSETS_DEBUG overrides;
they will be applied to os.environ ONLY if they are MISSING from both the environment and the file.
These become our production defaults: there's no need to put them in the environment or the file, but they can be overridden if needed. Neat!
settings/base.py
These are your mostly vanilla django settings, with a few conditionals and lots of reading from the environment.
Almost everything is in here, keeping all the purposed environments consistent and as similar as possible.
The main differences are below (I hope these are self-explanatory):
import os
import environ
# https://github.com/joke2k/django-environ
env = environ.Env()
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Where BASE_DIR is the django source root, ROOT_DIR is the whole project root
# It may differ from BASE_DIR, e.g. when your django project code is in a `src` folder
# This may help to separate python modules and *django apps* from other stuff
# like documentation, fixtures, and docker settings
ROOT_DIR = BASE_DIR
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool('DEBUG', default=False)  # env() returns strings, so cast explicitly
INTERNAL_IPS = [
'127.0.0.1',
]
ALLOWED_HOSTS = []
if 'ALLOWED_HOSTS' in os.environ:
    hosts = os.environ['ALLOWED_HOSTS'].split(" ")
    BASE_URL = "https://" + hosts[0]
    for host in hosts:
        host = host.strip()
        if host:
            ALLOWED_HOSTS.append(host)
SECURE_SSL_REDIRECT = env.bool('SECURE_SSL_REDIRECT', default=False)
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
if "DATABASE_URL" in os.environ: # pragma: no cover
# Enable database config through environment
DATABASES = {
# Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
'default': env.db(),
}
# Make sure we use have all settings we need
# DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
DATABASES['default']['TEST'] = {'NAME': os.environ.get("DATABASE_TEST_NAME", None)}
DATABASES['default']['OPTIONS'] = {
'options': '-c search_path=gis,public,pg_catalog',
'sslmode': 'require',
}
else:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
# 'ENGINE': 'django.contrib.gis.db.backends.spatialite',
'NAME': os.path.join(ROOT_DIR, 'data', 'db.dev.sqlite3'),
'TEST': {
'NAME': os.path.join(ROOT_DIR, 'data', 'db.test.sqlite3'),
}
}
}
STATIC_ROOT = os.path.join(ROOT_DIR, 'static')
# django-assets
# http://django-assets.readthedocs.org/en/latest/settings.html
ASSETS_LOAD_PATH = STATIC_ROOT
ASSETS_ROOT = os.path.join(ROOT_DIR, 'assets', "compressed")
ASSETS_DEBUG = env.bool('ASSETS_DEBUG', default=DEBUG)  # Disable when testing compressed files in DEBUG mode
if ASSETS_DEBUG:
    ASSETS_URL = STATIC_URL
    ASSETS_MANIFEST = "json:{}".format(os.path.join(ASSETS_ROOT, "manifest.json"))
else:
    ASSETS_URL = STATIC_URL + "assets/compressed/"
    ASSETS_MANIFEST = "json:{}".format(os.path.join(STATIC_ROOT, 'assets', "compressed", "manifest.json"))
ASSETS_AUTO_BUILD = ASSETS_DEBUG
ASSETS_MODULES = ('website.assets',)
The last bit shows the power here. ASSETS_DEBUG has a sensible default,
which can be overridden in settings/production.py, and even that can be overridden by an environment setting! Yay!
In effect we have a mixed hierarchy of importance:
settings/<purpose>.py - sets defaults based on purpose, does not store secrets
settings/base.py - is mostly controlled by the environment
process environment settings - 12 factor, baby!
settings/.env - local defaults for easy startup
I manage my configurations with the help of django-split-settings.
It is a drop-in replacement for the default settings mechanism. It is simple, yet configurable, and refactoring of your existing settings is not required.
Here's a small example (file example/settings/__init__.py):
from split_settings.tools import optional, include
import os
if os.environ['DJANGO_SETTINGS_MODULE'] == 'example.settings':
    include(
        'components/default.py',
        'components/database.py',
        # This file may be missing:
        optional('local_settings.py'),
        scope=globals()
    )
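Each component is just an ordinary settings fragment. For illustration, a hypothetical components/database.py might look like this (the file contents are my assumption, not from the library):

# components/database.py -- a hypothetical component, for illustration only
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.environ.get('DATABASE_NAME', 'db.sqlite3'),
    }
}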
That's it.
Update
I wrote a blog post about managing django's settings with django-split-settings. Have a look!
Remember that settings.py is a live code file. Assuming that you don't have DEBUG set on production (which is a best practice), you can do something like:
if DEBUG:
    STATIC_PATH = '/path/to/dev/files'
else:
    STATIC_PATH = '/path/to/production/files'
Pretty basic, but you could, in theory, go up to any level of complexity based on just the value of DEBUG - or any other variable or code check you wanted to use.
The problem with most of these solutions is that you either have your local settings applied before the common ones, or after them.
So it's impossible to have both of the following at the same time:
the env-specific settings define the addresses for the memcached pool, and the main settings file uses this value to configure the cache backend
the env-specific settings add or remove apps/middleware relative to the default set
One solution can be implemented using "ini"-style config files with the ConfigParser class. It supports multiple files, lazy string interpolation, default values and a lot of other goodies.
Once a number of files have been loaded, more files can be loaded and their values will override the previous ones, if any.
You load one or more config files, depending on the machine address, environment variables and even values in previously loaded config files. Then you just use the parsed values to populate the settings.
One strategy I have successfully used has been:
Load a default defaults.ini file
Check the machine name, and load all files which match the reversed FQDN, from the shortest match to the longest match (so, I loaded net.ini, then net.domain.ini, then net.domain.webserver01.ini, each one possibly overriding values of the previous). This also accounts for developers' machines, so each one could set up their preferred database driver, etc. for local development
Check if there is a "cluster name" declared, and in that case load cluster.cluster_name.ini, which can define things like database and cache IPs
As an example of something you can achieve with this, you can define a "subdomain" value per environment, which is then used in the default settings (as hostname: %(subdomain)s.whatever.net) to define all the necessary hostnames and cookie things django needs to work.
This is as DRY as I could get; most (existing) files had just 3 or 4 settings. On top of this I had to manage customer configuration, so an additional set of configuration files (with things like database names, users and passwords, assigned subdomain etc.) existed, one or more per customer.
You can scale this as low or as high as necessary; you just put in the config file the keys you want to configure per environment, and once there's a need for a new config, put the previous value in the default config and override it where necessary.
This system has proven reliable and works well with version control. It has been used for a long time managing two separate clusters of applications (15 or more separate instances of the django site per machine), with more than 50 customers, where the clusters were changing size and members depending on the mood of the sysadmin...
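A rough sketch of the layered loading described above, using Python 3's configparser (the file, section, and key names are hypothetical):

# layered ini loading; later files override values from earlier ones
import configparser
import socket

parser = configparser.ConfigParser()
parser.read('defaults.ini')  # base defaults first

# Load increasingly specific files based on the reversed FQDN,
# e.g. net.ini, then net.domain.ini, then net.domain.webserver01.ini
parts = list(reversed(socket.getfqdn().split('.')))
for i in range(1, len(parts) + 1):
    parser.read('.'.join(parts[:i]) + '.ini')  # read() skips missing files

DEBUG = parser.getboolean('django', 'debug', fallback=False)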
I also work with Laravel and I like the implementation there. I tried to mimic it, combining it with the solution proposed by T. Stone (look above):
import re
import socket

PRODUCTION_SERVERS = ['*.webfaction.com', '*.whatever.com']

def check_env():
    for item in PRODUCTION_SERVERS:
        match = re.match(r"(^." + item + "$)", socket.gethostname())
        if match:
            return True
    return False

PRODUCTION = check_env()
DEBUG = not PRODUCTION
Maybe something like this would help you.
My solution to that problem is also somewhat of a mix of some solutions already stated here:
I keep a file called local_settings.py that has the content USING_LOCAL = True in dev and USING_LOCAL = False in prod
In settings.py I do an import on that file to get the USING_LOCAL setting
I then base all my environment-dependent settings on that one:
DEBUG = USING_LOCAL
if USING_LOCAL:
    pass  # dev database settings go here
else:
    pass  # prod database settings go here
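A minimal sketch of the import in step 2 (the ImportError fallback is my addition, not part of the original recipe):

# settings.py -- a sketch; local_settings.py contains only USING_LOCAL = True/False
try:
    from local_settings import USING_LOCAL
except ImportError:
    USING_LOCAL = False  # assumed fallback: treat a missing file as production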
I prefer this to having two separate settings.py files that I need to maintain, as I can keep my settings structured in a single file more easily than spread across several files. Like this, when I update a setting I don't forget to do it for both environments.
Of course, every method has its disadvantages, and this one is no exception. The problem here is that I can't overwrite the local_settings.py file whenever I push my changes into production, meaning I can't just copy all files blindly, but that's something I can live with.
For most of my projects I use following pattern:
Create settings_base.py where I store settings that are common for all environments
Whenever I need a new environment with specific requirements, I create a new settings file (e.g. settings_local.py) which inherits the contents of settings_base.py and overrides/adds the proper settings variables (from settings_base import *)
(To run manage.py with a custom settings file, you simply use the --settings option: manage.py <command> --settings=settings_you_wish_to_use; note that it takes a module path, not a file name.)
1 - Create a new folder inside your app and name it settings.
2 - Now create a new __init__.py file in it, and inside it write

from .base import *

try:
    from .local import *
except ImportError:
    pass

try:
    from .production import *
except ImportError:
    pass

3 - Create three new files in the settings folder, named local.py, production.py and base.py.
4 - Inside base.py, copy all the content of your previous settings.py file, and rename the old file to something different, let's say old_settings.py.
5 - In base.py, change your BASE_DIR path to point to the new location of the settings:
Old path ->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
New path ->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
This way, the project dir can be structured and managed consistently between production and local development.
I use a variation of what jpartogi mentioned above, which I find a little shorter:
import sys
import platform
from django.core.management import execute_manager

computername = platform.node()

try:
    settings = __import__(computername + '_settings')
except ImportError:
    sys.stderr.write("Error: Can't find the file '%s_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file %s_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % (computername, __file__, computername))
    sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
Basically on each computer (development or production) I have the appropriate hostname_settings.py file that gets dynamically loaded.
There is also Django Classy Settings. I personally am a big fan of it. It's built by one of the most active people on the Django IRC. You would use environment vars to set things.
http://django-classy-settings.readthedocs.io/en/latest/
Making multiple versions of settings.py is an anti-pattern for the 12 Factor App methodology.
Use python-decouple or django-environ instead.
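For instance, a minimal settings.py fragment with python-decouple looks like this (the variable names are illustrative):

# values come from a .env file or the process environment, never from the repo
from decouple import config, Csv

SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='localhost', cast=Csv())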
In order to use a different settings configuration on each environment, create different settings files. Then, in your deployment script, start the server using the --settings=<my_settings> parameter (a module path), via which you can use different settings on different environments.
Benefits of using this approach:
Your settings will be modular based on each environment
You may import the master_settings.py containing the base configuration in environment_configuration.py and override the values that you want to change in that environment.
If you have a huge team, each developer may have their own local_settings.py which they can add to the code repository without any risk of modifying the server configuration. You can add these local settings to .gitignore if you use git, or .hgignore if you use Mercurial for version control (or any other). That way local settings won't even be part of the actual code base, keeping it clean.
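For illustration, such an environment_configuration.py can be as small as this (the overridden values are hypothetical):

# environment_configuration.py -- a sketch; override only what differs in this environment
from master_settings import *

DEBUG = False
ALLOWED_HOSTS = ['www.example.com']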
I had my settings split as follows
settings/
|
|- base.py
|- dev.py
|- prod.py
We have 3 environments
dev
staging
production
Now obviously staging and production should be as similar as possible, so we kept prod.py for both.
But there was a case where I had to identify whether the running server is a production server. T. Stone's answer helped me write the check as follows.
from socket import gethostname, gethostbyname
PROD_HOSTS = ["webserver1", "webserver2"]
DEBUG = False
ALLOWED_HOSTS = [gethostname(), gethostbyname(gethostname()),]
if any(host in PROD_HOSTS for host in ALLOWED_HOSTS):
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
I differentiate it in manage.py, having created two separate settings files: local_settings.py and prod_settings.py.
In manage.py I check whether the server is a local server or a production server. If it is a local server it loads local_settings.py, and if it is a production server it loads prod_settings.py. Basically this is how it looks:
#!/usr/bin/env python
import sys
import socket
from django.core.management import execute_manager

ipaddress = socket.gethostbyname(socket.gethostname())

if ipaddress == '127.0.0.1':
    try:
        import local_settings  # Assumed to be in the same directory.
        settings = local_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'local_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)
else:
    try:
        import prod_settings  # Assumed to be in the same directory.
        settings = prod_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'prod_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file prod_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
I found it easier to separate the settings into two files instead of doing lots of ifs inside one settings file.
As an alternative to maintaining different files, if you will:
If you are using git or any other VCS to push code from local to the server, what you can do is add the settings file to .gitignore.
This will allow you to have different content in both places without any problem. So on the server you can configure an independent version of settings.py, and any changes made locally won't reflect on the server and vice versa.
In addition, it keeps the settings.py file off GitHub too, which guards against the big mistake I have seen many newbies make.
I think the best solution is the one suggested by T. Stone, but I don't know why not just use the DEBUG flag in Django. I wrote the code below for my website:

if DEBUG:
    from .local_settings import *

Simple solutions are always better than complex ones.
I found the responses here very helpful. (Has this been more definitively solved? The last response was a year ago.) After considering all the approaches listed, I came up with a solution that I didn't see listed here.
My criteria were:
Everything should be in source control. I don't like fiddly bits lying around.
Ideally, keep settings in one file. I forget things if I'm not looking right at them :)
No manual edits to deploy. Should be able to test/push/deploy with a single fabric command.
Avoid leaking development settings into production.
Keep as close as possible to the "standard" (*cough*) Django layout.
I thought switching on the host machine made some sense, but then figured the real issue here is different settings for different environments, and had an aha moment. I put this code at the end of my settings.py file:
import os  # usually already imported in settings.py

try:
    os.environ['DJANGO_DEVELOPMENT_SERVER']  # raises KeyError if unset
    DEBUG = True
    TEMPLATE_DEBUG = True
    # This is naive but possible. Could also redeclare the full app set to control ordering.
    # Note that it requires a list rather than the generated tuple.
    INSTALLED_APPS.extend([
        'debug_toolbar',
        'django_nose',
    ])
    # Development database settings, alternate static/media paths, etc...
except KeyError:
    print('DJANGO_DEVELOPMENT_SERVER environment var not set; using production settings')
This way, the app defaults to production settings, which means you are explicitly "whitelisting" your development environment. It is much safer to forget to set the environment variable locally than it would be the other way around, forgetting to set something in production and letting dev settings leak through.
When developing locally, either from the shell or in a .bash_profile or wherever:
$ export DJANGO_DEVELOPMENT_SERVER=yep
(Or, if you're developing on Windows, set it via the Control Panel or whatever it's called these days... Windows has always made it obscure to set environment variables.)
With this approach, the dev settings are all in one (standard) place, and simply override the production ones where needed. Any mucking around with development settings should be completely safe to commit to source control with no impact on production.

How do you configure Django for simple development and deployment?

I tend to use SQLite when doing Django
development, but on a live server something more robust is
often needed (MySQL/PostgreSQL, for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process?
Update: django-configurations has been released which is probably a better option for most people than doing it manually.
If you would prefer to do things manually, my earlier answer still applies:
I have multiple settings files.
settings_local.py - host-specific configuration, such as database name, file paths, etc.
settings_development.py - configuration used for development, e.g. DEBUG = True.
settings_production.py - configuration used for production, e.g. SERVER_EMAIL.
I tie these all together with a settings.py file that first imports settings_local.py, and then one of the other two. It decides which to load by two settings defined inside settings_local.py - DEVELOPMENT_HOSTS and PRODUCTION_HOSTS: settings.py calls platform.node() to find the hostname of the machine it is running on, then looks for that hostname in the lists, and loads the second settings file depending on which list the hostname appears in.
That way, the only thing you really need to worry about is keeping the settings_local.py file up to date with the host-specific configuration, and everything else is handled automatically.
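A minimal sketch of that tie-together, reconstructed from the description above (not the author's actual file):

# settings.py -- a sketch; DEVELOPMENT_HOSTS and PRODUCTION_HOSTS are defined in settings_local.py
import platform

from settings_local import *

hostname = platform.node()
if hostname in DEVELOPMENT_HOSTS:
    from settings_development import *
elif hostname in PRODUCTION_HOSTS:
    from settings_production import *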
Check out an example here.
Personally, I use a single settings.py for the project; I just have it look up the hostname it's on (my development machines have hostnames that start with "gabriel"), so I just have this:
import socket

if socket.gethostname().startswith('gabriel'):
    LIVEHOST = False
else:
    LIVEHOST = True
then in other parts I have things like:
if LIVEHOST:
    DEBUG = False
    PREPEND_WWW = True
    MEDIA_URL = 'http://static1.grsites.com/'
else:
    DEBUG = True
    PREPEND_WWW = False
    MEDIA_URL = 'http://localhost:8000/static/'
and so on. A little bit less readable, but it works fine and saves having to juggle multiple settings files.
At the end of settings.py I have the following:
try:
    from settings_local import *
except ImportError:
    pass
This way, if I want to override default settings I just need to put settings_local.py right next to settings.py.
I have two files. settings_base.py which contains common/default settings, and which is checked into source control. Each deployment has a separate settings.py, which executes from settings_base import * at the beginning and then overrides as needed.
The most simplistic way I found was:
1) use the default settings.py for local development, and
2) create a production-settings.py starting with:
import os
from settings import *
And then just override the settings that differ in production:
DEBUG = False
TEMPLATE_DEBUG = DEBUG
DATABASES = {
    'default': {
        ....
    }
}
Somewhat related, for the issue of deploying Django itself with multiple databases, you may want to take a look at Djangostack. You can download a completely free installer that allows you to install Apache, Python, Django, etc. As part of the installation process we allow you to select which database you want to use (MySQL, SQLite, PostgreSQL). We use the installers extensively when automating deployments internally (they can be run in unattended mode).
I have my settings.py file in an external directory. That way, it doesn't get checked into source control or overwritten by a deploy. I put this in the settings.py file under my Django project, along with any default settings:
import sys
import os.path
def _load_settings(path):
    print "Loading configuration from %s" % path
    if os.path.exists(path):
        settings = {}
        # execfile can't modify globals directly, so we will load them manually
        execfile(path, globals(), settings)
        for setting in settings:
            globals()[setting] = settings[setting]

_load_settings("/usr/local/conf/local_settings.py")
Note: This is very dangerous if you can't trust local_settings.py.
In addition to the multiple settings files mentioned by Jim, I also tend to place two settings at the top of my settings.py file, BASE_DIR and BASE_URL, set to the path of the code and the URL to the base of the site; all other settings are modified to append themselves to these.

BASE_DIR = "/home/sean/myapp/"
e.g. MEDIA_ROOT = "%smedia/" % BASE_DIR

So when moving the project I only have to edit these settings and not search the whole file.
I would also recommend looking at fabric and Capistrano (a Ruby tool, but it can be used to deploy Django applications), which facilitate automation of remote deployment.
Well, I use this configuration:
At the end of settings.py:
#settings.py
try:
    from locale_settings import *
except ImportError:
    pass
And in locale_settings.py:
#locale_settings.py
class Settings(object):
    def __init__(self):
        import settings
        self.settings = settings
    def __getattr__(self, name):
        return getattr(self.settings, name)

settings = Settings()

INSTALLED_APPS = settings.INSTALLED_APPS + (
    'gunicorn',
)

# Deleting the helper names may not be needed, but I prefer to do it.
del settings
del Settings
So many complicated answers!
Every settings.py file comes with:
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
I use that directory to set the DEBUG variable like this (replace the path with the directory where your dev code is):

DEBUG = False
if BASE_DIR == "/path/to/my/dev/dir":
    DEBUG = True
Then, every time the settings.py file is moved, DEBUG will be False and it's your production environment.
Every time you need different settings than the ones in your dev environment just use:
if DEBUG:
    pass  # debug settings go here
else:
    pass  # release settings go here
Why make things so complicated? I came to Django from a PHP/Laravel background. I use .env and you can easily configure it.
Install this package:
django-environ
Now, in the folder where you have settings.py, create a file named .env (make sure to put this file in .gitignore).
In the .env file, put env variables like the debug state, secret key, mail credentials, etc.
A snapshot of an example .env:
SECRET_KEY="django-insecure-zy%)s5$=aql=#ox54lzfjyyx!&uv1-q0kp^54p(^251&_df75i"
DB_NAME=bugfree
DB_USER=postgres
DB_PASSWORD=koushik
DB_PORT=5433
DB_HOST=localhost
APP_DEBUG=True # everything is a string here
In the settings, make sure to instantiate it using this
import environ
env = environ.Env()
environ.Env.read_env()
Now you can import values from the .env file and put them wherever you want. Some examples in settings.py
SECRET_KEY = env('SECRET_KEY')
DEBUG = env.bool('APP_DEBUG', default=False)  # cast the string to a real boolean
You can also pass a default value, like this:
env('DB_NAME', default='default value here')
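Building on the example .env above, the database block can be filled the same way (a sketch; the Postgres engine is my assumption, suggested by the DB_USER and DB_PORT values):

# a sketch; the key names match the example .env above
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': env('DB_NAME'),
        'USER': env('DB_USER'),
        'PASSWORD': env('DB_PASSWORD'),
        'HOST': env('DB_HOST'),
        'PORT': env('DB_PORT'),
    }
}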
TIP
You can create another file, .env.example, in the same folder as your .env file, holding a template of .env, and commit the .example file. It helps future devs easily see what env variables exist.
.env.example would be something like this
SECRET_KEY=VALUE_HERE
DB_NAME=VALUE_HERE
DB_USER=VALUE_HERE
DB_PASSWORD=VALUE_HERE
DB_PORT=VALUE_HERE
DB_HOST=VALUE_HERE
EMAIL_HOST=VALUE_HERE
EMAIL_PORT=VALUE_HERE
EMAIL_HOST_USER=VALUE_HERE
EMAIL_HOST_PASSWORD=VALUE_HERE
DEFAULT_FROM_EMAIL=VALUE_HERE
I think it depends on the size of the site as to whether you need to step up from SQLite. I've successfully used SQLite on several smaller live sites and it runs great.
I use the environment:

import os

if os.environ.get('WEB_MODE', None) == 'production':
    from settings_production import *
else:
    from settings_dev import *
I believe this is a much better approach, because eventually you will need special settings for your test environment, and you can easily add it to this condition.
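For instance, extending the same condition with a test mode might look like this (the mode names are hypothetical):

# a sketch extending the WEB_MODE switch with a test environment
import os

mode = os.environ.get('WEB_MODE', 'dev')
if mode == 'production':
    from settings_production import *
elif mode == 'test':
    from settings_test import *
else:
    from settings_dev import *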
This is an older post, but I think if I add this useful library it will simplify things.
Use django-configurations.
Quickstart
pip install django-configurations
Then subclass the included configurations.Configuration class in your project's settings.py or any other module you're using to store the settings constants, e.g.:
# mysite/settings.py
from configurations import Configuration
class Dev(Configuration):
    DEBUG = True
Set the DJANGO_CONFIGURATION environment variable to the name of the class you just created, e.g. in ~/.bashrc:
export DJANGO_CONFIGURATION=Dev
and the DJANGO_SETTINGS_MODULE environment variable to the module import path as usual, e.g. in bash:
export DJANGO_SETTINGS_MODULE=mysite.settings
Alternatively supply the --configuration option when using Django management commands along the lines of Django's default --settings command line option, e.g.:
python manage.py runserver --settings=mysite.settings --configuration=Dev
To enable Django to use your configuration you now have to modify your manage.py or wsgi.py script to use django-configurations' versions of the appropriate starter functions, e.g. a typical manage.py using django-configurations would look like this:
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
    from configurations.management import execute_from_command_line
    execute_from_command_line(sys.argv)
Notice that we don't use the common tool django.core.management.execute_from_command_line, but instead configurations.management.execute_from_command_line.
The same applies to your wsgi.py file, e.g.:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
from configurations.wsgi import get_wsgi_application
application = get_wsgi_application()
Here we don't use the default django.core.wsgi.get_wsgi_application function but instead configurations.wsgi.get_wsgi_application.
That's it! You can now use your project with manage.py and your favorite WSGI enabled server.
In fact, you should probably consider having the same (or almost the same) configs for your development and production environments. Otherwise, situations like "Hey, it works on my machine" will happen from time to time.
So in order to automate your deployment and eliminate those WOMM issues, just use Docker.
