Django separate settings files in Docker [duplicate] - python
I have been developing a basic app. Now at the deployment stage it has become clear that I need both local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
The DJANGO_SETTINGS_MODULE environment variable controls which settings file Django will load.
You therefore create separate configuration files for your respective environments (note that they can of course both import * from a separate, "shared settings" file), and use DJANGO_SETTINGS_MODULE to control which one to use.
Here's how:
As noted in the Django documentation:
The value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g. mysite.settings. Note that the settings module should be on the Python import search path.
So, let's assume you created myapp/production_settings.py and myapp/test_settings.py in your source repository.
In that case, you'd respectively set DJANGO_SETTINGS_MODULE=myapp.production_settings to use the former and DJANGO_SETTINGS_MODULE=myapp.test_settings to use the latter.
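For illustration, each environment file can pull in a shared module and override only what differs. The file name shared_settings.py and the values below are assumptions for the example, not part of the original answer:

# myapp/shared_settings.py -- settings common to every environment (assumed name)
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
]

# myapp/production_settings.py
from myapp.shared_settings import *  # noqa
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']

# myapp/test_settings.py
from myapp.shared_settings import *  # noqa
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']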
From here on out, the problem boils down to setting the DJANGO_SETTINGS_MODULE environment variable.
Setting DJANGO_SETTINGS_MODULE using a script or a shell
You can then use a bootstrap script or a process manager to load the correct settings (by setting the environment), or just run it from your shell before starting Django: export DJANGO_SETTINGS_MODULE=myapp.production_settings.
Note that you can run this export at any time from a shell — it does not need to live in your .bashrc or anything.
Setting DJANGO_SETTINGS_MODULE using a Process Manager
If you're not fond of writing a bootstrap script that sets the environment (and there are very good reasons to feel that way!), I would recommend using a process manager:
Supervisor lets you pass environment variables to managed processes using a program's environment configuration key.
Honcho (a pure-Python equivalent of Ruby's Foreman) lets you define environment variables in an "environment" (.env) file.
Finally, note that you can take advantage of the PYTHONPATH variable to store the settings in a completely different location (e.g. on a production server, storing them in /etc/). This allows for separating configuration from application files. You may or may not want that, it depends on how your app is structured.
By default use production settings, but create a file called settings_dev.py in the same folder as your settings.py file. Add overrides there, such as DEBUG=True.
On the computer that will be used for development, add this to your ~/.bashrc file:
export DJANGO_DEVELOPMENT=true
Or turn it on one time by prefixing your command:
DJANGO_DEVELOPMENT=true python manage.py runserver
At the bottom of your settings.py file, add the following.
# Override production variables if DJANGO_DEVELOPMENT env variable is true
if os.getenv('DJANGO_DEVELOPMENT') == 'true':
    from settings_dev import *  # or specific overrides
(Note that importing * should generally be avoided in Python)
By default the production servers will not override anything. Done!
Compared to the other answers, this one is simpler because it doesn't require updating PYTHONPATH or setting DJANGO_SETTINGS_MODULE, which only lets you work on one Django project at a time.
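As a concrete illustration of such overrides (these values are assumptions, not part of the original answer), settings_dev.py might contain:

# settings_dev.py -- illustrative development-only overrides
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
INTERNAL_IPS = ['127.0.0.1']

# django-debug-toolbar is easiest to enable from the if-block in settings.py
# itself, since settings_dev.py cannot see names defined in settings.py:
#     INSTALLED_APPS += ['debug_toolbar']
#     MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']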
This is how I did it in 6 easy steps:
Create a folder inside your project directory and name it settings.
Project structure:
myproject/
    myapp1/
    myapp2/
    myproject/
        settings/
Create four python files inside of the settings directory namely __init__.py, base.py, dev.py and prod.py
Settings files:
settings/
    __init__.py
    base.py
    prod.py
    dev.py
Open __init__.py and fill it with the following content:
__init__.py:
import os

from .base import *

# you need to set "myproject = 'prod'" as an environment variable
# in your OS (on which your website is hosted)
if os.environ['myproject'] == 'prod':
    from .prod import *
else:
    from .dev import *
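If the environment variable might be missing (for example on a fresh development machine), a slightly more defensive variant, not part of the original answer, falls back to the dev settings instead of raising a KeyError:

# settings/__init__.py -- defensive variant (assumption: default to dev)
import os

from .base import *

if os.environ.get('myproject', 'dev') == 'prod':
    from .prod import *
else:
    from .dev import *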
Open base.py and fill it with all the common settings (those used in both production and development), for example:
base.py:
import os
...
INSTALLED_APPS = [...]
MIDDLEWARE = [...]
TEMPLATES = [{...}]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_ROOT = os.path.join(BASE_DIR, '/path/')
MEDIA_URL = '/path/'
Open dev.py and include the settings that are development specific, for example:
dev.py:
DEBUG = True
ALLOWED_HOSTS = ['localhost']
...
Open prod.py and include the settings that are production specific, for example:
prod.py:
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
LOGGING = [...]
...
Update
As ANDRESMA suggested in the comments, update BASE_DIR in your base.py file to reflect the new path by adding another .parent to the end. For example:
BASE_DIR = Path(__file__).resolve().parent.parent.parent
I usually have one settings file per environment, and a shared settings file:
/myproject/
    settings.production.py
    settings.development.py
    shared_settings.py
Each of my environment files has:
try:
    from shared_settings import *
except ImportError:
    pass
This allows me to override shared settings if necessary (by adding the modifications below that stanza).
I then select which settings files to use by linking it in to settings.py:
ln -s settings.development.py settings.py
I use the awesome django-configurations, and all the settings are stored in my settings.py:
import os

from configurations import Configuration


class Base(Configuration):
    # all the base settings here...
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    ...


class Develop(Base):
    # development settings here...
    DEBUG = True
    ...


class Production(Base):
    # production settings here...
    DEBUG = False
To configure the Django project I just followed the docs.
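For reference, the wiring looks roughly like the sketch below (from memory of the django-configurations docs; the project name and the chosen configuration class are assumptions, so check the docs for your installed version):

# manage.py (relevant lines only)
import os
import sys

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    os.environ.setdefault('DJANGO_CONFIGURATION', 'Develop')

    # django-configurations ships its own command-line entry point
    from configurations.management import execute_from_command_line

    execute_from_command_line(sys.argv)

# wsgi.py (relevant lines only)
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
os.environ.setdefault('DJANGO_CONFIGURATION', 'Production')

from configurations.wsgi import get_wsgi_application

application = get_wsgi_application()

The class to use is then selected per environment with the DJANGO_CONFIGURATION environment variable.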
Create multiple settings*.py files, splitting out the variables that need to change per environment. Then at the end of your master settings.py file:
try:
    from settings_dev import *
except ImportError:
    pass
You keep the separate settings_* files for each stage.
At the top of your settings_dev.py file, add this:
import sys
globals().update(vars(sys.modules['settings']))
This imports the existing variables so that you can modify them.
This wiki entry has more ideas on how to split your settings.
Here is the approach we use:
a settings module to split settings into multiple files for readability;
a .env.json file to store credentials and parameters that we want excluded from our git repository, or that are environment specific;
an env.py file to read the .env.json file.
Consider the following structure:
...
.env.json       # the file containing all specific credentials and parameters
.gitignore      # the .gitignore file to exclude `.env.json`
project_name/   # project dir (the one which django-admin.py creates)
    accounts/   # project's apps
        __init__.py
        ...
    ...
    env.py      # the file to load credentials
    settings/
        __init__.py   # main settings file
        database.py   # database conf
        storage.py    # storage conf
        ...
venv            # virtualenv
...
With .env.json like:
{
    "debug": false,
    "allowed_hosts": ["mydomain.com"],
    "django_secret_key": "my_very_long_secret_key",
    "db_password": "my_db_password",
    "db_name": "my_db_name",
    "db_user": "my_db_user",
    "db_host": "my_db_host"
}
And project_name/env.py:
import json
import os


def get_credentials():
    env_file_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    with open(os.path.join(env_file_dir, '.env.json'), 'r') as f:
        creds = json.loads(f.read())
    return creds


credentials = get_credentials()
We can have the following settings:
# project_name/settings/__init__.py
from project_name.env import credentials
from project_name.settings.database import *
from project_name.settings.storage import *
...
SECRET_KEY = credentials.get('django_secret_key')
DEBUG = credentials.get('debug')
ALLOWED_HOSTS = credentials.get('allowed_hosts', [])
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    ...
]

if DEBUG:
    INSTALLED_APPS += ['debug_toolbar']
...
# project_name/settings/database.py
from project_name.env import credentials
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': credentials.get('db_name', ''),
        'USER': credentials.get('db_user', ''),
        'HOST': credentials.get('db_host', ''),
        'PASSWORD': credentials.get('db_password', ''),
        'PORT': '5432',
    }
}
The benefits of this solution are:
user-specific credentials and configuration for local development, without modifying the git repository;
environment-specific configuration: you can have, for example, three different environments (dev, staging, production), each with its own .env.json;
credentials are not stored in the repository.
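For instance, a developer's local .env.json might look like the following (the values are purely illustrative), while the server keeps the production version shown earlier:

{
    "debug": true,
    "allowed_hosts": ["localhost", "127.0.0.1"],
    "django_secret_key": "local_dev_secret_key",
    "db_password": "local_db_password",
    "db_name": "local_db_name",
    "db_user": "local_db_user",
    "db_host": "localhost"
}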
I hope this helps, just let me know if you see any caveats with this solution.
I use the following file structure:
project/
...
settings/
settings/common.py
settings/local.py
settings/prod.py
settings/__init__.py -> local.py
So __init__.py is a link (ln on Unix, or mklink on Windows) to local.py, or it can point to prod.py, so the configuration stays inside the project.settings module, clean and organized. If you need to run a command against a particular configuration (for example in production), you can set the environment variable DJANGO_SETTINGS_MODULE to project.settings.prod.
In the files prod.py and local.py:

from .common import *

DATABASES = {
    ...
}

and common.py keeps the global settings that are not environment specific.
Use settings.py for production. In the same directory create settings_dev.py for overrides.
# settings_dev.py
from .settings import *
DEBUG = True
On a dev machine run your Django app with:
DJANGO_SETTINGS_MODULE=<your_app_name>.settings_dev python3 manage.py runserver
On a prod machine run as if you just had settings.py and nothing else.
ADVANTAGES
settings.py (used for production) is completely agnostic to the fact that any other environments even exist.
To see the difference between prod and dev you just look into a single location - settings_dev.py. No need to gather configurations scattered across settings_prod.py, settings_dev.py and settings_shared.py.
If someone adds a setting to your prod config after troubleshooting a production issue you can rest assured that it will appear in your dev config as well (unless explicitly overridden). Thus the divergence between different config files will be minimized.
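To make the relationship concrete, a minimal pair could look like this (the values are illustrative only):

# settings.py -- deployed to production as-is
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']

# settings_dev.py -- selected on dev machines via DJANGO_SETTINGS_MODULE
from .settings import *  # noqa
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']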
building off cs01's answer:
if you're having problems with the environment variable, set its value to a string (e.g. I did DJANGO_DEVELOPMENT="true").
I also changed cs01's file workflow as follows:
#settings.py
import os

if os.environ.get('DJANGO_DEVELOPMENT') is not None:
    from settings_dev import *
else:
    from settings_production import *
#settings_dev.py
development settings go here
#settings_production.py
production settings go here
This way, Django doesn't have to read through the entirety of a settings file before running the appropriate settings file. This solution comes in handy if your production file needs stuff that's only on your production server.
Note: in Python 3, if the settings files live inside a package, the imports must be relative, i.e. prefixed with a dot (e.g. from .settings_dev import *)
If you want to keep 1 settings file, and your development operating system is different than your production operating system, you can put this at the bottom of your settings.py:
from sys import platform

if platform == "linux" or platform == "linux2":
    # linux
    # some special setting here for when I'm on my prod server
    pass
elif platform == "darwin":
    # OS X
    # some special setting here for when I'm developing on my mac
    pass
elif platform == "win32":
    # Windows
    # some special setting here for when I'm developing on my pc
    pass
Read more: How do I check the operating system in Python?
You may want to switch settings, secrets, environment variables and so on based on the git branch you are on. Relying on different settings files is fine, but in an enterprise situation you also want to keep sensitive information out of the repo: it is not a security best practice to expose the environment variables and secrets of every environment (develop, staging, production, qa, etc.) to all developers. The following achieves two things:
isolation of settings per deployment environment
keeping sensitive information out of the git repo
My run.sh
#!/bin/bash
# default environment
export DJANGO_ENVIRONMENT="develop"
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if [ $BRANCH == "main" ]; then
export DJANGO_ENVIRONMENT="production"
elif [ $BRANCH == "release/"* ]; then
export DJANGO_ENVIRONMENT="staging"
else
# for all other branches (feature, support, hotfix etc.,)
echo ''
fi
echo "
BRANCH: $BRANCH
ENVIRONMENT: $DJANGO_ENVIRONMENT
"
python3 myapp/manage.py makemigrations
python3 myapp/manage.py migrate --noinput
python3 myapp/manage.py runserver 0:8000
My vars.py (or secrets.py or whatever name) in the same folder as settings.py of django
# use real booleans for DEBUG so that a string like "False" is not truthy
vars = {
    'develop': {
        'environment': 'develop',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': True,
    },
    'production': {
        'environment': 'production',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': False,
    },
    'staging': {
        'environment': 'staging',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': True,
    },
}
Then in settings.py just do the following:

import os

from . import vars  # contains environment specific vars

DJANGO_ENVIRONMENT = os.getenv("DJANGO_ENVIRONMENT")  # declared in run.sh
envs = vars.vars[DJANGO_ENVIRONMENT]

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = envs["SECRET_KEY"]

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = envs["DEBUG"]
Let developers keep their own vars.py on their local machines; during deployment, your CI/CD pipeline (or a deploy script) inserts the real vars.py with the actual values. If you are using GitLab CI/CD, you can store the entire vars.py as a CI/CD variable.
This seems to have been answered already, but a method I use in combination with version control is the following:
Set up an env.py file in the same directory as settings on my local development environment, and add it to .gitignore:
env.py:
#!/usr/bin/python
DJANGO_ENV = True
ALLOWED_HOSTS = ['127.0.0.1', 'dev.mywebsite.com']
.gitignore:
mywebsite/env.py
settings.py:

import os

if os.path.exists(os.getcwd() + '/env.py'):
    # env.py is excluded using the .gitignore file, so when moving to production
    # debug mode is automatically switched off:
    from env import *
else:
    DJANGO_ENV = False
DEBUG = DJANGO_ENV
I just find this works and is far more elegant: with env.py it is easy to see our local environment variables, and we can handle all of this without multiple settings.py files or the like. This method allows all sorts of local environment variables that we wouldn't want set on our production server. Using .gitignore via version control, we also keep everything seamlessly integrated.
For the problem of settings files, I choose to copy:
Project
|---__init__.py [ write code to copy setting file from subdir to current dir]
|---settings.py (do not commit this file to git)
|---setting1_dir
| |-- settings.py
|---setting2_dir
| |-- settings.py
When you run Django, __init__.py will be run. At that point, settings.py from setting1_dir replaces settings.py in Project.
How to choose different env?
modify __init__.py directly.
make a bash file to modify __init__.py.
modify env in linux, and then let __init__.py read this variable.
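The copying code itself is not shown in the answer; a minimal sketch of what Project/__init__.py could do, reading a hypothetical SETTINGS_DIR variable as in the last option above:

# Project/__init__.py -- illustrative sketch only, not the author's actual code
import os
import shutil

_here = os.path.dirname(os.path.abspath(__file__))
# SETTINGS_DIR is a hypothetical variable; default to setting1_dir
_chosen = os.environ.get('SETTINGS_DIR', 'setting1_dir')
shutil.copyfile(
    os.path.join(_here, _chosen, 'settings.py'),
    os.path.join(_here, 'settings.py'),
)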
Why use this approach?
Because I don't like having so many files in the same directory; too many files confuse other team members and are not great for the IDE (the IDE cannot tell which file is actually in use).
If you do not want to deal with these details, you can divide the project into two parts:
a small tool like Spring Initializr, used only to set up your project (it does things like copying the settings file)
your project code
I'm using different app.yaml files to change configuration between environments in Google Cloud App Engine.
You can use this terminal command to create a proxy connection:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
https://cloud.google.com/sql/docs/sqlserver/connect-admin-proxy#macos-64-bit
File: app.yaml
# [START django_app]
service: development
runtime: python37

env_variables:
  DJANGO_DB_HOST: '/cloudsql/myproject:myregion:myinstance'
  DJANGO_DEBUG: 'True'

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
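On the Django side, settings.py would then read those variables. A minimal sketch follows; the DJANGO_DB_HOST and DJANGO_DEBUG names come from the app.yaml above, everything else (engine, extra variable names, defaults) is assumed for illustration:

import os

# values set under env_variables in app.yaml arrive as strings in os.environ
DEBUG = os.environ.get('DJANGO_DEBUG', '').lower() in ('1', 'true', 'yes')

DATABASES = {
    'default': {
        # ENGINE and the remaining keys depend on your backend and are assumed here
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ.get('DJANGO_DB_HOST', '127.0.0.1'),
        'NAME': os.environ.get('DJANGO_DB_NAME', 'mydb'),
        'USER': os.environ.get('DJANGO_DB_USER', 'django'),
        'PASSWORD': os.environ.get('DJANGO_DB_PASSWORD', ''),
    }
}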
I create a file named "production" in the working directory in production.
#settings.py
from pathlib import Path

production = Path("production")

DEBUG = False

#if it's dev mode
if not production.is_file():
    INSTALLED_APPS += [
        #apps_in_development_mode,
        #...
    ]
    DEBUG = True
    #other settings to override the default production settings
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
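For example, a production wsgi.py might look like the sketch below; it follows the stock Django template, and the module names are assumptions:

# myproject/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

# point the WSGI process at the production settings module (name assumed)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings_production')

application = get_wsgi_application()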
What we do here is have a .env file for each environment. This file contains a lot of variables like ENV=development.
The settings.py file is basically a bunch of os.environ.get(), like ENV = os.environ.get('ENV')
So when you need to access that you can do ENV = settings.ENV.
You would have to have a .env file for your production, testing, development.
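How the .env file gets loaded is not shown in the answer; a minimal sketch assuming python-dotenv (an assumption, any loader works) could be:

# settings.py
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads the .env file in the project root into os.environ

ENV = os.environ.get('ENV', 'development')
DEBUG = ENV != 'production'

Elsewhere in the code you can then read settings.ENV, as the answer describes.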
This is my solution, with different environments for dev, test and prod:
import socket
# [...]

DEV_PC = 'PC059'
host_name = socket.gethostname()

if host_name == DEV_PC:
    # do something
    pass
elif ...:
    # [...] further hostname checks
    pass
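An extended sketch of the same idea; the dict-based mapping and every hostname other than PC059 are my own assumptions:

import socket

# map known machine hostnames to a profile; anything unknown is treated as prod
HOST_PROFILES = {
    'PC059': 'dev',
    'ci-runner-01': 'test',
}
profile = HOST_PROFILES.get(socket.gethostname(), 'prod')

if profile == 'dev':
    DEBUG = True
    ALLOWED_HOSTS = ['localhost', '127.0.0.1']
elif profile == 'test':
    DEBUG = False
    ALLOWED_HOSTS = ['test.example.com']
else:
    DEBUG = False
    ALLOWED_HOSTS = ['www.example.com']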
Related
Django code changes not reflected without restart
I used python manage.py runserver to start the Django server locally. I noticed that changes to HTML code are not reflected unless I restart the server. Is that normal? Is it possible to see the change without restarting the server? Update: I saw that I am in the production env, so Debug is False. I am wondering how I can change to development mode?
It is always recommended to create local settings so you can work in a development environment. Keep a settings.py with all the configuration for your production server, always with DEBUG=False (never set DEBUG=True in production). Then create a local_settings.py where you change only the variables you need for development, like DEBUG:

# local_settings.py
DEBUG = True

And in your settings.py add this at the end:

# settings.py
try:
    from local_settings import *
except ImportError:
    pass

This will override the variables you set in local_settings.py when you run the development server. Make sure you don't push this file to your server (if you're using git, add it to your .gitignore file).
Why does gunicorn not see the correct environment variables?
On my production server, I've set environment variables both inside and outside my virtualenv (only because I don't understand the issue going on), including a variable HELLO_WORLD_PROD which I've set to '1'. In the Python interpreter, both inside and outside my venv, os.environ.get('HELLO_WORLD_PROD') == '1' returns True. In my settings folder, I have:

import os

if os.environ.get('HELLO_WORLD_PROD') == '1':
    from hello_world.settings.prod import *  # noqa
else:
    from hello_world.settings.dev import *  # noqa

Both prod.py and dev.py inherit from base.py; in base.py DEBUG = False, and only in dev.py is DEBUG = True. However, when I trigger an error through the browser, I'm seeing the debug page. I'm using nginx and gunicorn. Why is my application importing the wrong settings file? You can see my gunicorn conf here. Thanks in advance for your patience!
I was using sudo service gunicorn start to run gunicorn. The problem is service strips all environment variables but TERM, PATH and LANG. To fix it, in my exec line in my gunicorn.conf I added the environment variables there using the --env flag, like exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret etc.
Selecting the correct settings file to use in Django
I'm following the approach in Two Scoops of Django: Best Practices for Django 1.6 regarding multiple settings files. I'm using Django 1.7 and virtualenvwrapper. My setup is as follows:

project/
    app1/
    app2/
    project/
        __init__.py
        settings/
            __init__.py
            base.py
            local.py
            production.py
    manage.py

I'm a bit confused as to how Django knows which settings file to use. I do not want to specify the settings file every time I run manage.py. I would rather set the DJANGO_SETTINGS_MODULE environment variable as explained in omouse's answer here. What confuses me is that in the wsgi.py file there is a line:

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "{{ project_name }}.settings.production")

Is this file only used on the production server? What happens if I already have a DJANGO_SETTINGS_MODULE environment variable defined on the server? When running locally, I understand I need to set the DJANGO_SETTINGS_MODULE env variable every time I open the console. I've read that I can define a postactivate hook in virtualenvwrapper, which will then create the environment variables I require every time I activate the environment. Is this the recommended way of ensuring the correct DJANGO_SETTINGS_MODULE is loaded on my local machine? Would I also need to set up a similar file on my hosting server? I'm planning on using PythonAnywhere for hosting. Lastly, if I run a staging server, how would I tell Django to load the staging settings file? The staging server is practically the same as the production server, so I guess I need a different wsgi.py file for the staging server, but that seems like an anti-pattern.
os.environ.setdefault only sets the value if it is not already set. When you run in production, export the environment variable DJANGO_SETTINGS_MODULE and set it to your production/staging settings file, and you don't have to set anything when running in development (if you default it to your development settings). This is the DRY-est method. The method with a local_settings.py (which is most of the time kept out of the repo!) is not best practice and should be avoided.
Django 1.7 migrations not being picked up
Using Django 1.7 and its new migrations, I am running into a strange issue. I split my settings files up into 3 files, which I have always done pre-1.7, e.g.:

/settings
    __init__.py
    base.py
    development.py
    production.py

__init__.py:

from .base import *

if sys.argv[1] == 'runserver':
    from .development import *
else:
    from .production import *

Both development.py and production.py have their own database settings for their environment. However, with the new migrations system, running migrations does not detect anything UNLESS I put the database settings in the base.py file. Should I modify this line to the following:

if sys.argv[1] == 'runserver' or sys.argv[1] == 'migrate':

Or is there a better way?
You should avoid adding logic to your settings file. Consider using the --settings option when testing with runserver, like this:

./manage.py --settings=project.settings.development runserver

You can also use the environment variable DJANGO_SETTINGS_MODULE to switch the settings module used by Django. In your development environment you could set:

export DJANGO_SETTINGS_MODULE=project.settings.development

While in production you could set DJANGO_SETTINGS_MODULE=project.settings.production. The details depend on the type of deployment and server you are using. Personally, in my development setup I use virtualenvwrapper, and I set up the postactivate hook with something like this:

#!/bin/bash
# This hook is run after this virtualenv is activated.
export DJANGO_SETTINGS_MODULE=project.settings.local
cd /home/user/develop/git/project

In this way I can type:

workon project

And I have the environment variable correctly set and my shell placed in the right folder. You can have a base.py settings file with all your common settings, then in development.py (and production.py) you can do something like this:

from .base import *

DATABASES = ...  # customize DB settings used for development/production