I thought the easiest way was to reload the config module with the environment set to 'testing'.
My code has a config.py that acts as the configuration-handling code: it defines all the configuration settings for each environment I'd like to run.
config.py
import os

class Config(object):
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True

config_lookup = dict(
    testing=TestingConfig(),
    development=DevelopmentConfig(),
    production=ProductionConfig(),
)

config = config_lookup[os.getenv('ENVIRONMENT', 'development')]
One approach could be using reload:
In my conftest.py I would like to ensure that all my tests run with TestingConfig.
Therefore I would do something like this:
conftest.py
import importlib
import os
import sys

import pytest

@pytest.fixture(autouse=True)
def cfg():
    # Reload the global 'config' instance.
    os.environ["ENVIRONMENT"] = "testing"
    importlib.reload(sys.modules["config"])
Unfortunately, this approach doesn't work as expected in all situations, even though importlib.reload() itself performs as documented:
importlib.reload(): "Reload a previously imported module. The argument must be a module object, so it must have been successfully imported before."
Read more: the importlib docs.
This solution may work at first, but it becomes tricky with different import styles (importing an object vs. importing the module).
An object is imported:
from app.config import config
print('Environment is:', type(config).__name__)
# Environment is: DevelopmentConfig <- wrong
A module is imported:
import app.config as _
print('Environment is:', type(_.config).__name__)
# Environment is: TestingConfig <- correct
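The difference between the two import styles can be reproduced without Django or pytest at all. In this sketch a throwaway in-memory module is rebuilt with exec(), which stands in for importlib.reload() (reload itself needs a loader-backed module); the module name `fake_config` is invented for the demo:

```python
import os
import sys
import types

# Body of the fake config module: its 'config' value depends on ENVIRONMENT.
source = (
    "import os\n"
    "config = os.getenv('ENVIRONMENT', 'development')\n"
)

os.environ['ENVIRONMENT'] = 'development'
mod = types.ModuleType('fake_config')
sys.modules['fake_config'] = mod
exec(source, mod.__dict__)

from fake_config import config   # copies the *current* binding
import fake_config               # binds the module object itself

os.environ['ENVIRONMENT'] = 'testing'
exec(source, mod.__dict__)       # stand-in for importlib.reload(mod)

print(config)              # 'development' <- stale, copied at import time
print(fake_config.config)  # 'testing'     <- follows the reloaded module
```

The `from x import y` form snapshots the object at import time, so a later reload of the module cannot update that name.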
Another approach is to ensure you always set your environment variable when executing pytests:
ENVIRONMENT=testing python -m pytest tests
This is not desirable, as you might forget to set the environment variable.
Related issues:
difference-between-from-x-import-y-and-import-x-y
Python Bug: Change environment variables before importlib.reload
Reloading All Loaded Modules
pep-0221
Another approach that seems to work quite well is to set the environment variable before conftest.py runs, using the pytest-env plugin:
[pytest]
env =
    ENVIRONMENT=testing
It seems to work fine. The downside is that it is an old, unmaintained repository, but it is also a small piece of code.
I'm looking for a way to have application defaults and settings that are easy to use, difficult to get wrong, and have little overhead.
Currently I have it organized as follows:
myapp/defaults.py
# application defaults
import os
import sys

if sys.platform == 'win32':
    MYAPP_HOME_ROOT = os.path.dirname(os.environ['USERPROFILE'])
else:
    MYAPP_HOME_ROOT = '/home'
In my project I have:
mysite/settings.py
from myapp.defaults import * # import all default from myapp
MYAPP_HOME_ROOT = '/export/home' # overriding myapp.defaults
With this setup I could import and use settings in the regular django way (from django.conf import settings and settings.XXX).
update-3 (why we need this)
Default settings ("defaults"):
An application is more convenient to use if it can be configured by overriding a set of sensible default settings.
the application "has domain knowledge", so it makes sense for it to provide sensible defaults whenever possible.
it isn't convenient for a user of the application to have to provide all the settings needed by every app; it should be sufficient to override a small subset and leave the rest at their default values.
it is very useful if defaults can react to the environment. You'll often want to do something different when DEBUG is True, but any other global setting could be useful: e.g. MYAPP_IDENTICON_DIR = os.path.join(settings.MEDIA_ROOT, 'identicons') (https://en.wikipedia.org/wiki/Identicon)
project (site/global) settings must override app-defaults, i.e. a user who defined MYAPP_IDENTICON_DIR = 's3://myappbucket.example.com/identicons' in the (global) settings.py file for their site should get this value, and not the application's default.
any solution that keeps close to the normal way of using settings (import .. settings; settings.FOO) is superior to a solution that needs new syntax (since new syntax will diverge and we would get new and unique ways to use settings from app to app).
the zen of python probably applies here:
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
(The original post posited the two key problems below, leaving the above assumptions unstated.)
Problem #1: When running unit tests for the app there is no site however, so settings wouldn't have any of the myapp.defaults.
Problem #2: There is also a big problem if myapp.defaults needs to use anything from settings (e.g. settings.DEBUG), since you can't import settings from defaults.py (since that would be a circular import).
To solve problem #1 I created a layer of indirection:
myapp/conf.py
from . import defaults
from django.conf import settings
class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

conf = Conf()  # !<-- create Conf instance
and usage:
myapp/views.py
from .conf import conf as settings
...
print(settings.MYAPP_HOME_ROOT)  # will print '/export/home' when used from mysite
This allows the conf.py file to work with an "empty" settings file too, and the myapp code can continue using the familiar settings.XXX.
It doesn't solve problem #2, defining application settings based on e.g. settings.DEBUG. My current solution is to add to the Conf class:
from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

    if settings.DEBUG:
        MYAPP_HOME_ROOT = '/Users'
    else:
        MYAPP_HOME_ROOT = '/data/media'

conf = Conf()  # !<-- create Conf instance
but this is not satisfying, since mysite can no longer override the setting, and myapp's defaults/settings are now spread over two files...
Is there an easier way to do this?
update-4: "just use the django test runner.."
The app you are testing relies on the Django framework, and you cannot get around the fact that you need to bootstrap the framework before you can test the app. Part of the bootstrapping process is creating a default settings.py, and further using the Django-supplied test runners to ensure that your apps are being tested in the environment they are likely to run in.
While that sounds like it ought to be true, it doesn't actually make much sense, e.g. there is no such thing as a default settings.py (at least not for a reusable app). When talking about integration testing it makes sense to test an app with the site/settings/apps/database(s)/cache(s)/resource-limits/etc. that it will encounter in production. For unit testing, however, we want to test just the unit of code that we're looking at - with as few external dependencies (and settings) as possible. The Django test runner(s) do, and should, mock out major parts of the framework, so it can't be said to be running in any "real" environment.
While the Django test runner(s) are great, there is a long list of issues they don't handle. The two biggies for us are (i) running tests sequentially is so slow that the test suite goes unused (<5 min when running in parallel, almost an hour sequentially), and (ii) sometimes you just need to run tests against big databases (we restore last night's backup to a test database that the tests run against; much too big for fixtures).
The people who made nose, py.test, twill, Selenium, and any of the fuzz testing tools really do know testing well, since that is their only focus. It would be a shame to not be able to draw on their collective experience.
I am not the first to encounter this problem, and there doesn't appear to be an easy or common solution. Here are two projects that take different approaches:
Update, python-oidc-provider method:
The python-oidc-provider package (https://github.com/juanifioren/django-oidc-provider) has another creative way to solve the app-settings/defaults problem. It uses properties to define defaults in a myapp/settings.py file:
from django.conf import settings

class DefaultSettings(object):
    @property
    def MYAPP_HOME_ROOT(self):
        return ...

default_settings = DefaultSettings()

def get(name):
    value = None
    try:
        value = getattr(default_settings, name)
        value = getattr(settings, name)
    except AttributeError:
        if value is None:
            raise Exception("Missing setting: " + name)
    return value
using a setting inside myapp becomes:
from myapp import settings
print(settings.get('MYAPP_HOME_ROOT'))
good: solves problem #2 (using settings when defining defaults), solves problem #1 (using default settings from tests).
bad: different syntax for accessing settings (settings.get('FOO') vs the normal settings.FOO), myapp cannot provide defaults for settings that will be used outside of myapp (the settings you get from from django.conf import settings will not contain any defaults from myapp). External code can do from myapp import settings to get regular settings and myapp defaults, but this breaks down if more than one app wants to do this...
Update2, the django-appconf package:
(Note: not related to Django's AppConfig..)
With django-appconf, app settings are created in myapp/conf.py (which needs to be loaded early, so you should probably import conf.py from models.py, since models are loaded early):
from django.conf import settings
from appconf import AppConf

class MyAppConf(AppConf):
    HOME_ROOT = '...'
usage:
from myapp.conf import settings
print(settings.MYAPP_HOME_ROOT)
AppConf will automagically add the MYAPP_ prefix, and also automagically detect if MYAPP_HOME_ROOT has been redefined/overridden in the project's settings.
pro: simple to use, solves problem #1 (accessing app-settings from tests), and problem #2 (using settings when defining defaults). As long as the conf.py file is loaded early, external code should be able to use defaults defined in myapp.
con: significantly magical. The name of the setting in conf.py differs from its usage (appconf automatically adds the MYAPP_ prefix). External/opaque dependency.
I've just created django-app-defaults based on all of the requirements. It's basically a generalization of the second approach highlighted in the question (class Conf(object):).
Usage:
# my_app/defaults.py
# `django.conf.settings` or any other module can be imported if needed
# required
DEFAULT_SETTINGS_MODULE = True
# define default settings below
MY_DEFAULT_SETTING = "yey"
Then anywhere within your project:
from app_defaults import settings
print(settings.MY_DEFAULT_SETTING)
# yey
# All `django.conf.settings` are also available
print(settings.DEBUG)
# True
To load default settings for a single app instead of all apps, just do:
# Note: when apps or modules are explicitly passed,
# the `DEFAULT_SETTINGS_MODULE` var is not required
from app_defaults import Settings
settings = Settings(apps=["my_app"])
# or
from my_app import defaults
settings = Settings(modules=[defaults])
I have written a django package for the management of app settings called django-pluggableappsettings.
It's a package that allows you to define sensible defaults for your settings but also adds advanced features like type checking or callable settings. It makes use of metaclasses to allow for an easy definition of the apps settings. Of course this adds an external dependency to your project.
Edit 1:
Example usage could be as follows:
Install the package using pip:
pip install django-pluggableappsettings
Create your AppSettings class in any of your project's files. E.g. in 'app_settings.py'.
app_settings.py
from django_pluggableappsettings import AppSettings, Setting, IntSetting

class MyAppSettings(AppSettings):
    MY_SETTING = Setting('DEFAULT_VALUE')
    # Checks for "MY_SETTING" in django.conf.settings and
    # returns 'DEFAULT_VALUE' if it is not present.

    ANOTHER_SETTING = IntSetting(42, aliases=['OTHER_SETTING_NAME'])
    # Checks for "ANOTHER_SETTING" or "OTHER_SETTING_NAME" in
    # django.conf.settings and returns 42 if it is not present.
    # It also validates that the given value is of type int.
Import your MyAppSettings in any file and access its attributes
from mypackage.app_settings import MyAppSettings
MyAppSettings.MY_SETTING
MyAppSettings.ANOTHER_SETTING
Note that the settings are only initialized on first access; if you never access a setting, its presence in django.conf.settings is never checked.
Problem #1: When running unit tests for the app there is no site
however, so settings wouldn't have any of the myapp.defaults.
This problem is solved by using the testing framework that comes with django (see the documentation) as it bootstraps the test environment correctly for you.
Keep in mind that Django tests always run with DEBUG = False.
Problem #2: There is also a big problem if myapp.defaults needs to use
anything from settings (e.g. settings.DEBUG), since you can't import
settings from defaults.py (since that would be a circular import).
This is not really a problem: once you import from myapp.defaults in myapp.settings, everything in settings is in scope. So you don't need to import DEBUG; it is already available in the global scope.
When I am structuring apps, I try to define functionality in the form of mixins. Each setting should be picked up by one functionality mixin, if possible.
So in your example from above:
from django.conf import settings
class AppHomeRootMixin:
    home_root = getattr(settings, "MYAPP_HOME_ROOT", "/default/path/here")
Usage:
class MyView(AppHomeRootMixin, TemplateView):
    def dispatch(self, *args, **kwargs):
        print(self.home_root)
        return super().dispatch(*args, **kwargs)
This is really easy for another developer to see exactly what is going on, alleviates the need for third-party or first-party "magic", and allows us to think about things in terms of functionalities, rather than in terms of individual settings.
I just think that the settings layer of Django should be kept as simple as possible, and any complexity should be the responsibility of the view layer. I have run into a lot of confusing settings configurations that were created by other developers, and those configurations consumed a lot of my time when there was nobody there to explain what was going on under the hood.
As of now we have a file conf.py which stores most of our configuration variables for the service. We have code similar to this:
environment = 'dev'  # could be dev, local, staging, production

configa = 'something'
configb = 'something else'

if environment == 'dev':
    configa = 'something dev'
elif environment == 'local':
    configa = 'something local'
Is this the right way to manage configuration in a Python project? Are these values computed at compile time (when the .pyc files are created), or are the if conditions evaluated every time the conf module is imported, or every time a configuration variable is accessed?
All code runs at import time. But since you are unlikely to import your application again and again while it's running, you can ignore the (minimal) overhead.
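This is easy to verify: a module's body runs once per process, and later imports are sys.modules cache hits. The throwaway module below (written to a temp directory, with invented names) counts how many times its body executes:

```python
import os
import sys
import tempfile

# Write a tiny config-style module whose body bumps a counter each time it runs.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "myconf_demo.py"), "w") as fh:
    fh.write(
        "import os\n"
        "os.environ['MYCONF_RUNS'] = str(int(os.environ.get('MYCONF_RUNS', '0')) + 1)\n"
        "environment = 'dev'\n"
        "configa = 'something dev' if environment == 'dev' else 'something'\n"
    )
sys.path.insert(0, tmpdir)

import myconf_demo
import myconf_demo  # cache hit: the module body (and its if-logic) does NOT run again

print(os.environ['MYCONF_RUNS'])  # '1'
print(myconf_demo.configa)        # 'something dev'
```

So the if/elif chain in conf.py is evaluated exactly once per process, on first import, never per attribute access.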
This is subjective but there is a good discussion in this post:
What's the best practice using a settings file in Python?
With your method, it will be treated the same way as any other Python script, i.e. evaluated on import. If you want it updated on access, or without restarting the service, it is best to use an external, non-Python config file (e.g. JSON or .ini) and add functionality to refresh it.
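The refresh idea can be sketched in a few lines: keep settings in JSON and reload them only when the file's modification time changes. The file name and keys below are invented for the demo:

```python
import json
import os
import tempfile

class JsonConfig:
    """Reads settings from a JSON file, re-loading it only when it changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = {}

    def get(self, key, default=None):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:          # refresh on change, not on every call
            with open(self.path) as fh:
                self._data = json.load(fh)
            self._mtime = mtime
        return self._data.get(key, default)

# Demo with a throwaway file standing in for a real conf.json.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as fh:
    json.dump({"environment": "dev", "configa": "something dev"}, fh)

cfg = JsonConfig(path)
print(cfg.get("configa"))  # 'something dev'
```

Editing the JSON file while the service runs would be picked up on the next `get()` call, with no restart.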
You must create a file, for example settings.py, and add its directory to the module search path:
import os
import sys

sys.path.append(os.path.dirname(__file__))
After that, you can import the file anywhere and read any setting from it:
import settings
env = settings.environment
Many frameworks work similarly.
I am moderately experienced with Django and am setting up a new project using the now-recommended "multiple settings files" pattern. Each settings.py file imports a base settings file and then overrides specific settings. There is one file for each staging environment (dev, qa, prod). When starting the Django process, I make sure to set the settings flag to the appropriate settings.py file, like so:
manage.py runserver --settings=myproj.settings.dev
or
manage.py runfcgi --settings=myproj.settings.prod method=threaded daem...[more flags]
My question is: how do I get an environment-specific constant into my view functions? I have some constants (curl cert/host/port) that differ per environment. Currently I have only figured out how to include the environment in the import path, but that won't work for me. If someone can help, that would be awesome.
Here is an example views.py file that should help make my question a little clearer.
# A sample Django views.py file
from django.template.response import TemplateResponse
from myproj import settings

def index(request):
    # These assignments work, but I have to add conditional logic to pick
    # the correct value; I would prefer not to do this.
    dev_curl_host = settings.dev.CONNECT['host']
    qa_curl_host = settings.qa.CONNECT['host']
    prod_curl_host = settings.prod.CONNECT['host']

    # I want to do something like this, where the settings import gets
    # assigned the correct values for the staging environment.
    # It seems like Django is already doing this with settings like DEBUG. How?
    curl_host = settings.CONNECT['host']
Instead of
from myproj import settings
do
from django.conf import settings
That's why DEBUG works:
https://docs.djangoproject.com/en/dev/topics/settings/#using-settings-in-python-code
I tend to use SQLite when doing Django development, but on a live server something more robust is often needed (MySQL/PostgreSQL, for example). Invariably, there are other changes to make to the Django settings as well: different logging locations/intensities, media paths, etc.
How do you manage all these changes to make deployment a simple, automated process?
Update: django-configurations has been released which is probably a better option for most people than doing it manually.
If you would prefer to do things manually, my earlier answer still applies:
I have multiple settings files.
settings_local.py - host-specific configuration, such as database name, file paths, etc.
settings_development.py - configuration used for development, e.g. DEBUG = True.
settings_production.py - configuration used for production, e.g. SERVER_EMAIL.
I tie these all together with a settings.py file that first imports settings_local.py, and then one of the other two. It decides which to load by two settings defined inside settings_local.py: DEVELOPMENT_HOSTS and PRODUCTION_HOSTS. settings.py calls platform.node() to find the hostname of the machine it is running on, looks for that hostname in the lists, and loads the second settings file depending on which list contains the hostname.
That way, the only thing you really need to worry about is keeping the settings_local.py file up to date with the host-specific configuration, and everything else is handled automatically.
Check out an example here.
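The dispatch logic described above can be sketched as plain Python, with the Django imports replaced by simple data so it runs anywhere. Host names and the inclusion of the current machine in the development list are invented for the demo:

```python
import platform

# These lists normally live in settings_local.py; the current machine is
# added to the development list so the demo resolves deterministically.
DEVELOPMENT_HOSTS = ['gabriel-laptop', platform.node()]
PRODUCTION_HOSTS = ['web1.example.com', 'web2.example.com']

def pick_settings_module(hostname):
    """Return the name of the settings module to load for this host."""
    if hostname in DEVELOPMENT_HOSTS:
        return 'settings_development'
    if hostname in PRODUCTION_HOSTS:
        return 'settings_production'
    raise RuntimeError('Unknown host: %s' % hostname)

print(pick_settings_module(platform.node()))  # 'settings_development'
```

A real settings.py would follow this with `from settings_development import *` (or the production equivalent), e.g. via importlib.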
Personally, I use a single settings.py for the project; I just have it look up the hostname it's on (my development machines have hostnames that start with "gabriel"), so I just have this:
import socket

if socket.gethostname().startswith('gabriel'):
    LIVEHOST = False
else:
    LIVEHOST = True
then in other parts I have things like:
if LIVEHOST:
    DEBUG = False
    PREPEND_WWW = True
    MEDIA_URL = 'http://static1.grsites.com/'
else:
    DEBUG = True
    PREPEND_WWW = False
    MEDIA_URL = 'http://localhost:8000/static/'
and so on. A little bit less readable, but it works fine and saves having to juggle multiple settings files.
At the end of settings.py I have the following:
try:
    from settings_local import *
except ImportError:
    pass
This way, if I want to override the default settings, I just put a settings_local.py right next to settings.py.
I have two files: settings_base.py, which contains common/default settings and is checked into source control, and a separate settings.py per deployment, which executes from settings_base import * at the beginning and then overrides as needed.
The most simplistic way I found was to: 1) use the default settings.py for local development, and 2) create a production-settings.py starting with:
import os
from settings import *
And then just override the settings that differ in production:
DEBUG = False
TEMPLATE_DEBUG = DEBUG
DATABASES = {
    'default': {
        ....
    }
}
Somewhat related, for the issue of deploying Django itself with multiple databases, you may want to take a look at Djangostack. You can download a completely free installer that allows you to install Apache, Python, Django, etc. As part of the installation process we allow you to select which database you want to use (MySQL, SQLite, PostgreSQL). We use the installers extensively when automating deployments internally (they can be run in unattended mode).
I have my settings.py file in an external directory. That way, it doesn't get checked into source control, or over-written by a deploy. I put this in the settings.py file under my Django project, along with any default settings:
# Note: this is Python 2 code; execfile() was removed in Python 3.
import sys
import os.path

def _load_settings(path):
    print "Loading configuration from %s" % (path)
    if os.path.exists(path):
        settings = {}
        # execfile can't modify globals directly, so we will load them manually
        execfile(path, globals(), settings)
        for setting in settings:
            globals()[setting] = settings[setting]

_load_settings("/usr/local/conf/local_settings.py")
Note: This is very dangerous if you can't trust local_settings.py.
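For Python 3, where execfile() no longer exists, the same trick can be sketched with compile()/exec(). The temp file below stands in for a real external settings file; the same trust warning applies, since the file is executed as code:

```python
import os
import tempfile

def _load_settings(path, target):
    """Execute an external settings file and copy its top-level names into target."""
    if os.path.exists(path):
        namespace = {}
        with open(path) as fh:
            exec(compile(fh.read(), path, "exec"), namespace)
        for name, value in namespace.items():
            if not name.startswith("__"):   # skip __builtins__ etc.
                target[name] = value

# Demo with a throwaway file standing in for an external local_settings.py.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as fh:
    fh.write("SECRET_SAUCE = 42\n")

_load_settings(path, globals())
print(SECRET_SAUCE)  # 42
```

Passing a single namespace dict to exec() keeps functions and classes defined in the settings file able to see each other, which the (globals, locals) two-dict form would not guarantee.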
In addition to the multiple settings files mentioned by Jim, I also tend to place two settings at the top of my settings.py file, BASE_DIR and BASE_URL, set to the path of the code and the URL to the base of the site; all other settings are modified to append themselves to these.
BASE_DIR = "/home/sean/myapp/"
e.g. MEDIA_ROOT = "%smedia/" % BASE_DIR
So when moving the project I only have to edit these settings and not search the whole file.
I would also recommend looking at fabric and Capistrano (Ruby tool, but it can be used to deploy Django applications) which facilitate automation of remote deployment.
Well, I use this configuration:
At the end of settings.py:
#settings.py
try:
    from locale_settings import *
except ImportError:
    pass
And in locale_settings.py:
#locale_settings.py
class Settings(object):
    def __init__(self):
        import settings
        self.settings = settings

    def __getattr__(self, name):
        return getattr(self.settings, name)

settings = Settings()

INSTALLED_APPS = settings.INSTALLED_APPS + (
    'gunicorn',
)

# Delete duplicate settings; maybe not needed, but I prefer to do it.
del settings
del Settings
So many complicated answers!
Every settings.py file comes with:
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
I use that directory to set the DEBUG variable like this (replace the path with the directory where your dev code is):
DEBUG = False
if BASE_DIR == "/path/to/my/dev/dir":
    DEBUG = True
Then, whenever the settings.py file is moved elsewhere, DEBUG will be False and you are in your production environment.
Every time you need settings that differ from your dev environment, just use:
if DEBUG:
    ...  # debug settings
else:
    ...  # release settings
Why make things so complicated? I came to Django from a PHP/Laravel background. I use .env, and you can configure it easily.
Install this package
django-environ
Now, in the folder where your settings.py lives, create a file named .env (make sure to add this file to .gitignore).
In the .env file, put the environment variables, like the debug state, secret key, mail credentials, etc.
A snapshot of example .env
SECRET_KEY="django-insecure-zy%)s5$=aql=#ox54lzfjyyx!&uv1-q0kp^54p(^251&_df75i"
DB_NAME=bugfree
DB_USER=postgres
DB_PASSWORD=koushik
DB_PORT=5433
DB_HOST=localhost
APP_DEBUG=True # everything is a string here
In the settings, make sure to instantiate it using this
import environ
env = environ.Env()
environ.Env.read_env()
Now you can import values from the .env file and put them wherever you want. Some examples in settings.py
SECRET_KEY = env('SECRET_KEY')
DEBUG = env.bool('APP_DEBUG', default=False)  # casts the string to a real bool
You can also provide a default value, like this:
env('DB_NAME', default='default value here')
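A cast such as env.bool matters because everything read from the environment is a string, and bool('False') is True. The helper below shows the idea in plain Python (django-environ ships its own version as env.bool); the variable name is taken from the example above:

```python
import os

os.environ['APP_DEBUG'] = 'False'   # what a .env loader would set

def env_bool(name, default=False):
    """Interpret an environment variable as a boolean."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ('1', 'true', 'yes', 'on')

print(bool(os.environ['APP_DEBUG']))  # True  <- the trap: non-empty string
print(env_bool('APP_DEBUG'))          # False <- what was meant
```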
TIP
You can create another file, .env.example, in the same folder as your .env file, containing a template of .env, and commit the .example file. It helps future developers easily see which env variables exist.
.env.example would be something like this
SECRET_KEY=VALUE_HERE
DB_NAME=VALUE_HERE
DB_USER=VALUE_HERE
DB_PASSWORD=VALUE_HERE
DB_PORT=VALUE_HERE
DB_HOST=VALUE_HERE
EMAIL_HOST=VALUE_HERE
EMAIL_PORT=VALUE_HERE
EMAIL_HOST_USER=VALUE_HERE
EMAIL_HOST_PASSWORD=VALUE_HERE
DEFAULT_FROM_EMAIL=VALUE_HERE
I think it depends on the size of the site whether you need to step up from SQLite; I've successfully used SQLite on several smaller live sites and it runs great.
I use environment:
import os

if os.environ.get('WEB_MODE') == 'production':
    from settings_production import *
else:
    from settings_dev import *
I believe this is a much better approach, because eventually you need special settings for your test environment, and you can easily add it to this condition.
This is an older post, but I think this useful library will simplify things.
Use django-configurations.
Quickstart
pip install django-configurations
Then subclass the included configurations.Configuration class in your project's settings.py or any other module you're using to store the settings constants, e.g.:
# mysite/settings.py
from configurations import Configuration

class Dev(Configuration):
    DEBUG = True
Set the DJANGO_CONFIGURATION environment variable to the name of the class you just created, e.g. in ~/.bashrc:
export DJANGO_CONFIGURATION=Dev
and the DJANGO_SETTINGS_MODULE environment variable to the module import path as usual, e.g. in bash:
export DJANGO_SETTINGS_MODULE=mysite.settings
Alternatively supply the --configuration option when using Django management commands along the lines of Django's default --settings command line option, e.g.:
python manage.py runserver --settings=mysite.settings --configuration=Dev
To enable Django to use your configuration you now have to modify your manage.py or wsgi.py script to use django-configurations' versions of the appropriate starter functions, e.g. a typical manage.py using django-configurations would look like this:
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
    from configurations.management import execute_from_command_line
    execute_from_command_line(sys.argv)
Notice that we don't use the common django.core.management.execute_from_command_line but instead configurations.management.execute_from_command_line.
The same applies to your wsgi.py file, e.g.:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
os.environ.setdefault('DJANGO_CONFIGURATION', 'Dev')
from configurations.wsgi import get_wsgi_application
application = get_wsgi_application()
Here we don't use the default django.core.wsgi.get_wsgi_application function but instead configurations.wsgi.get_wsgi_application.
That's it! You can now use your project with manage.py and your favorite WSGI enabled server.
In fact, you should probably consider having the same (or almost the same) configuration for your development and production environments. Otherwise, "hey, it works on my machine" situations will happen from time to time.
So, to automate your deployment and eliminate those works-on-my-machine issues, just use Docker.