Deploying a local Django app using OpenShift - Python

I've built a web app using Django. In order to host it I'm trying to use OpenShift, but I'm having difficulty getting anything working. There seems to be a lack of step-by-step guides for this. So far git is working fine, the app works in my local dev environment, and I've successfully created an app on OpenShift.
Following the URL on OpenShift once the app is created, I just get the standard "Welcome to your OpenShift App" page.
I've followed this https://developers.openshift.com/en/python-getting-started.html#step1 to try changing the wsgi.py file. I changed it to hello world and pushed it, yet I still get the OpenShift default page.
Is there a good, comprehensive resource anywhere for getting local Django apps up and running on OpenShift? Most of what I can find on Google is just example apps, which aren't that useful since I already have mine built.

Edit: Remember this is a platform-dependent answer, and since the OpenShift platform serving Django may change, this answer could become invalid. As of April 1, 2016, this answer remains valid in its entirety.
This happened to me many times and, since I had to deploy at least five applications, I had to create my own lifecycle:
Don't use the Django cartridge, but the plain Python 2.7 cartridge. Using the Django cartridge and trying to update the Django version brings many headaches that you avoid if you set things up from scratch.
Clone your repository via git. You will get yourproject and...
# git clone yourrepo@rhcloud.com:app.git yourproject <- replace it with your actual OpenShift repo address
yourproject/
+---wsgi.py
+---setup.py
+---.openshift/ (with its contents - I omit them now)
Make a virtualenv for the brand-new repository you cloned to your local machine. Activate it and install Django via pip, plus all the dependencies you will need (e.g. Pillow, a MySQL database package, ...). Create a Django project there, say yourdjproject. Also create, alongside it, a wsgi/static directory with an empty dummy file (e.g. .gitkeep - the name is just a convention: you can use any name you want).
#assuming you have virtualenv-wrapper installed and set-up
mkvirtualenv myenvironment
workon myenvironment
pip install Django[==x.y[.z]] #select your version; optional.
#creating the project inside the git repository
cd path/to/yourproject/
django-admin.py startproject yourdjproject .
#creating dummy wsgi/static directory for collectstatic
mkdir -p wsgi/static
touch wsgi/static/.gitkeep
Create a django app there. Say, yourapp. Include it in your project.
You will have something like this (django 1.7):
yourproject/
+---wsgi/
| +---static/
| +---.gitkeep
+---wsgi.py
+---setup.py
+---.openshift/ (with its contents - I omit them now)
+---yourdjproject/
| +----__init__.py
| +----urls.py
| +----settings.py
| +----wsgi.py
+---yourapp/
|   +----__init__.py
|   +----models.py
|   +----views.py
|   +----tests.py
|   +----migrations/
|        +---__init__.py
Set up your Django application as you always would (I will not detail that here). Remember to list all the dependencies you installed in the setup.py file accordingly (this answer is not the place to describe WHY, but setup.py is the package installer: OpenShift uses it to reinstall your app on each deploy, so keep it up to date with your dependencies).
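For illustration only, a minimal setup.py along those lines might look like the sketch below (the package names and versions are placeholders for whatever your app actually needs, not a prescription):
# setup.py - hypothetical example; list YOUR real dependencies here
from setuptools import setup

setup(
    name='yourproject',
    version='1.0',
    description='OpenShift app',
    author='you',
    author_email='you@example.com',
    url='https://example.com',
    # OpenShift reinstalls these on every deploy, so keep the list current
    install_requires=['Django==1.7', 'Pillow', 'MySQL-python'],
)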
Create your migrations for your models.
Edit the openshift-given WSGI script as follows. You will be including the django WSGI application AFTER including the virtualenv (openshift creates one for python cartridges), so the pythonpath will be properly set up.
#!/usr/bin/python
import os
virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
    execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
    pass
from yourdjproject.wsgi import application
Edit the hooks in .openshift/action_hooks to automatically perform database synchronization and media management:
build hook
#!/bin/bash
#this is .openshift/action_hooks/build
#remember to make it +x so openshift can run it.
if [ ! -d ${OPENSHIFT_DATA_DIR}media ]; then
    mkdir -p ${OPENSHIFT_DATA_DIR}media
fi
ln -snf ${OPENSHIFT_DATA_DIR}media $OPENSHIFT_REPO_DIR/wsgi/static/media
######################### end of file
deploy hook
#!/bin/bash
#this one is the deploy hook .openshift/action_hooks/deploy
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
cd $OPENSHIFT_REPO_DIR
echo "Executing 'python manage.py migrate'"
python manage.py migrate
echo "Executing 'python manage.py collectstatic --noinput'"
python manage.py collectstatic --noinput
########################### end of file
Now you have the WSGI script ready, importing the Django WSGI application, and you have your hook scripts in place. It is time to consider the locations for the static and media files used in those scripts. Edit your Django settings to tell Django where you want those files:
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'templates'),)
Create a sample view, a sample model, a sample migration, and PUSH everything.
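If it helps, the sample model and view for this smoke test can be as small as the sketch below (names are arbitrary; wire the view into yourdjproject/urls.py and run makemigrations as usual):
# yourapp/models.py - a throwaway model, just so there is a migration to run
from django.db import models

class Note(models.Model):
    title = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)

# yourapp/views.py - a trivial view to verify the deployment responds
from django.http import HttpResponse

def index(request):
    return HttpResponse("It works on OpenShift!")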
Remember to put the right settings in place to cover both environments, so you can test and run locally AND on OpenShift (usually this would involve having a local_settings.py, optionally imported if the file exists, but I will omit that part and put everything in the same file). Please read this file carefully, since things like yourlocaldbname are values you MUST set accordingly:
"""
Django settings for yourdjproject project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
ON_OPENSHIFT = False
if 'OPENSHIFT_REPO_DIR' in os.environ:
    ON_OPENSHIFT = True
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '60e32dn-za#y=x!551tditnset(o9b#2bkh1)b$hn&0$ec5-j7'
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'yourapp',
    #more apps here
)
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'yourdjproject.urls'
WSGI_APPLICATION = 'yourdjproject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
if ON_OPENSHIFT:
    DEBUG = True
    TEMPLATE_DEBUG = False
    ALLOWED_HOSTS = ['*']
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'youropenshiftgenerateddatabasename',
            'USER': os.getenv('OPENSHIFT_MYSQL_DB_USERNAME'),
            'PASSWORD': os.getenv('OPENSHIFT_MYSQL_DB_PASSWORD'),
            'HOST': os.getenv('OPENSHIFT_MYSQL_DB_HOST'),
            'PORT': os.getenv('OPENSHIFT_MYSQL_DB_PORT'),
        }
    }
else:
    DEBUG = True
    TEMPLATE_DEBUG = True
    ALLOWED_HOSTS = []
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',  #If you want to use MySQL
            'NAME': 'yourlocaldbname',
            'USER': 'yourlocalusername',
            'PASSWORD': 'yourlocaluserpassword',
            'HOST': 'yourlocaldbhost',
            'PORT': '3306',  #this will be the case for MySQL
        }
    }
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'yr-LC'
TIME_ZONE = 'Your/Timezone/Here'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'templates'),)
Git add, commit, push, enjoy.
cd path/to/yourproject/
git add .
git commit -m "Your Message"
git push origin master # THIS COMMAND WILL TAKE LONG
# git enjoy
Your sample Django app is almost ready to go! But if your application has external dependencies it will blow up for no apparent reason. That is why I told you to develop a simple application first. Now it is time to make your dependencies work.
[untested!] You can edit the deploy hook and add a command after the cd $OPENSHIFT_REPO_DIR line, like this: pip install -r requirements.txt, assuming the requirements.txt file exists in your project. pip should exist in your virtualenv, but if it does not, you can use the next solution.
Alternatively, setup.py is the approach OpenShift already provides. What I did many times - assuming the requirements.txt file exists - is the following (sketched in code after this list):
Open that file, read all its lines.
For each line, if it has a #, remove the # and everything after.
Strip leading and trailing whitespace.
Discard empty lines, and keep the result (i.e. the remaining lines) as a list.
That result must be assigned to the install_requires= keyword argument in the setup call in the setup.py file.
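A rough sketch of that transformation (a helper you would write yourself, not something OpenShift provides) could look like this:
# setup.py - read requirements.txt into install_requires, per the steps above
from setuptools import setup

def read_requirements(path='requirements.txt'):
    requirements = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()  # drop comments, then whitespace
            if line:                              # discard empty lines
                requirements.append(line)
    return requirements

setup(
    name='yourproject',
    version='1.0',
    install_requires=read_requirements(),
)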
I'm sorry I did not include this in the tutorial before! But you do need to actually install Django on the server. Perhaps an obvious suggestion that every Python developer already knows, but seizing this opportunity I remark: include the appropriate Django dependency in requirements.txt (or in setup.py, depending on whether or not you use a requirements.txt file), just as you include any other dependency.
This should help you deploy a Django application; it took me a lot of time to standardize the process. Enjoy it, and don't hesitate to contact me via comment if something goes wrong.
Edit (for those with the same problem who don't expect to find the answer in this post's comments): Remember that if you edit the build or deploy hook files under Windows and push them, they will arrive on the server with 0644 permissions, since Windows does not support the Unix permission scheme and has no way to mark these extensionless files as executable. You will notice this because your scripts will not be executed when deploying. So try to push those files only from Unix-based systems.
Edit 2: You can use git hooks (e.g. pre_commit) to set permissions for certain files, like pipeline scripts (build, deploy, ...). See the comments by @StijndeWitt and @OliverBurdekin in this answer, and also this question for more details.

1) Install RubyGems
Ubuntu - https://rubygems.org/pages/download
Windows - https://forwardhq.com/support/installing-ruby-windows
$ gem
or
C:\Windows\System32>gem
RubyGems is a sophisticated package manager for Ruby. This is a
basic help message containing pointers to more information...
2) Install the rhc client:
$ gem install rhc
Or
C:\Windows\System32> gem install rhc
3) $ rhc
Or
C:\Windows\System32> rhc
Usage: rhc [--help] [--version] [--debug] <command> [<args>]
Command line interface for OpenShift.
4) $ rhc app create -a mysite -t python-2.7
Or
C:\Windows\System32> rhc app create -a mysite -t python-2.7
# Here mysite would be the sitename of your choice
#It will ask you to enter your openshift account id and password
Login to openshift.redhat.com: Enter your openshift id here
Password : **********
Application Options
---------------------
Domain: mytutorials
Cartridges: python-2.7
Gear Size: Default
Scaling: no
......
......
Your application 'mysite' is now available.
URL : http://mysite.....................
SSH to : 39394949......................
Git remote: ssh://......................
Run 'rhc show-app mysite' for more details about your app.
5) Clone your site
$ rhc git-clone mysite
Or
D:\> rhc git-clone mysite
.......................
Your application Git repository has been cloned to "D:\mysite"
6) # "D:\mysite>" is the location we cloned to.
D:\mysite> git remote add upstream -m master git://github.com/rancavil/django-openshift-quickstart.git
D:\mysite> git pull -s recursive -X theirs upstream master
7) D:\mysite> git push
remote : ................
remote: Django application credentials
user: admin
xertefkefkt
remote: Git Post-Receive Result: success
.............
8) D:\mysite>virtualenv venv --no-site-packages
D:\mysite>venv\Scripts\activate.bat
<venv> D:\mysite> python setup.py install
creating .....
Searching for Django<=1.6
.............
Finished processing dependencies for mysite==1.0
9) Change admin password
<venv> D:\mysite\wsgi\openshift> python manage.py changepassword admin
password:
...
Password changed successfully for user 'admin'
<venv> D:\mysite\wsgi\openshift> python manage.py runserver
Validating models...
10) Git add
<venv> D:\mysite> git add .
<venv> D:\mysite> git commit -am "activating the app on Django / Openshift"
.......
<venv> D:\mysite> git push
#----------------------------------------------------------------------------------
#-----------Edit your setup.py in mysite with packages you want to install----------
from setuptools import setup
import os
# Put here required packages
packages = ['Django<=1.6', 'lxml', 'beautifulsoup4', 'openpyxl']
if 'REDISCLOUD_URL' in os.environ and 'REDISCLOUD_PORT' in os.environ and 'REDISCLOUD_PASSWORD' in os.environ:
    packages.append('django-redis-cache')
    packages.append('hiredis')

setup(name='mysite',
      version='1.0',
      description='OpenShift App',
      author='Tanveer Alam',
      author_email='xyz@gmail.com',
      url='https://pypi.python.org/pypi',
      install_requires=packages,
      )

These are the steps that worked for me:
I've done some steps manually, but you can automate them later so they run with each push.
1. Create a new Django app with Python 3.3 from the website wizard
2. Add a database cartridge to the app (my choice is MySQL)
3. git clone the created app to your local machine
4. Add requirements.txt to the root folder
5. Add myapp to the wsgi folder
6. Modify the application file so it refers to myapp (see the sketch after this list)
7. Execute git add, commit, push
8. Browse the app and debug errors with "rhc tail myapp"
9. Connect to the SSH console:
rhc ssh myapp
10. Execute this:
source $OPENSHIFT_HOMEDIR/python/virtenv/venv/bin/activate
11. Install missing packages, if any
12. Go to the app directory:
cd ~/app-root/runtime/repo/wsgi/app_name
13. Do the migration with:
python manage.py migrate
14. Create a superuser:
python manage.py createsuperuser
15. Restart the app
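For the "refer to myapp" step, the idea is the same as in the long answer above: the cartridge's WSGI entry point should end up importing your Django project's WSGI application. A hedged sketch (the exact entry-point file and directory names depend on your cartridge layout; here myapp is assumed to be the project directory under wsgi/ that startproject created, i.e. the app_name directory from step 12):
# wsgi entry point - sketch only, adjust paths and names to your own layout
import os
import sys

# make sure the folder that contains the Django project is importable
sys.path.insert(0, os.path.join(os.environ.get('OPENSHIFT_REPO_DIR', ''), 'wsgi'))

from myapp.wsgi import application  # Django's own wsgi.py inside your project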

This was helpful for me; take a look:
http://what-i-learnt-today-blog.blogspot.in/2014/05/host-django-application-in-openshift-in.html


Django Whitenoise with compressed staticfiles

I'm not able to get my Django project to run with WhiteNoise and compressed static files (including libsass). In the links below, I read that it's only possible with offline compression of the needed static files. But when I started up the Docker container, running the compress command
docker-compose -f production.yml run --rm django python manage.py compress
gives me error:
ValueError: Missing staticfiles manifest entry for 'sass/app.scss'
Meanwhile, trying to request the site gives me the following error (as expected?):
compressor.exceptions.OfflineGenerationError: You have offline compression enabled but key "..." is missing from offline manifest. You may need to run "python manage.py compress"
Settings are as follows (built with cookiecutter-django; see the link to the complete code base below):
STATIC_ROOT = str(ROOT_DIR("staticfiles"))
STATIC_URL = "/static/"
STATICFILES_DIRS = [str(APPS_DIR.path("static"))]
STATICFILES_FINDERS = [
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
]
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
STATICFILES_FINDERS += ["compressor.finders.CompressorFinder"]
COMPRESS_PRECOMPILERS = [("text/x-scss", "django_libsass.SassCompiler")]
COMPRESS_CACHEABLE_PRECOMPILERS = (("text/x-scss", "django_libsass.SassCompiler"),)
COMPRESS_ENABLED = env.bool("COMPRESS_ENABLED", default=True)
COMPRESS_STORAGE = "compressor.storage.GzipCompressorFileStorage"
COMPRESS_URL = STATIC_URL
After searching the internet for a day, I'm stuck... Thanks for any help or suggestions!
Code base: https://github.com/rl-institut/E_Metrobus/tree/compress
which is built with cookiecutter-django-foundation,
including the following changes to config/settings/production.py:
COMPRESS_STORAGE = "compressor.storage.GzipCompressorFileStorage" # Instead of pre-set "storages.backends.s3boto3.S3Boto3Storage"
COMPRESS_ROOT = STATIC_ROOT # Just in case
COMPRESS_OFFLINE = True # Needed to run compress offline
Possible related links:
Whitenoise and django-compressor cause 404 for compressed files
Possible to use WhiteNoise with Django-Compressor?
Django staticfiles not found on Heroku (with whitenoise)
https://github.com/django-compressor/django-compressor/issues/486
EDIT
Solved it using Justin's answer (see below, with additional changes).
My mistake was trying to compress files with an already running container, which gave me the error above. After changing the Dockerfile with the following lines (notice the duplicate collectstatic command!):
python /app/manage.py collectstatic --noinput
python /app/manage.py compress --force
python /app/manage.py collectstatic --noinput
/usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
and rebuilding the image, everything worked like a charm :)
Additionally, diverging from the settings above, I had to set COMPRESS_ENABLED=True in my settings/env file.
I just had the same problem.
Add this to project/compose/production/django/start
python /app/manage.py compress --force
i.e.
python /app/manage.py collectstatic --noinput
python /app/manage.py compress --force
/usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
This is weird, but it works very well.
Collect and compress static files with WhiteNoise:
python manage.py collectstatic --clear
Set COMPRESS_STORAGE = 'compressor.storage.BrotliCompressorFileStorage'
to make .br files in the CACHE directory:
python manage.py compress --force
Set COMPRESS_STORAGE = 'compressor.storage.GzipCompressorFileStorage'
to make .gz files in the CACHE directory:
python manage.py compress --force
To add the new compressed files (manifest.json, manifest.json.gz, manifest.json.br) to WhiteNoise - the --no-post-process option tells WhiteNoise not to compress the static files again:
python manage.py collectstatic --no-post-process
Make sure to run the commands in order.
To test whether WhiteNoise is working:
python manage.py runserver --nostatic

Django separate settings files in Docker [duplicate]

I have been developing a basic app. Now, at the deployment stage, it has become clear that I need both local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
The DJANGO_SETTINGS_MODULE environment variable controls which settings file Django will load.
You therefore create separate configuration files for your respective environments (note that they can of course both import * from a separate, "shared settings" file), and use DJANGO_SETTINGS_MODULE to control which one to use.
Here's how:
As noted in the Django documentation:
The value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g. mysite.settings. Note that the settings module should be on the Python import search path.
So, let's assume you created myapp/production_settings.py and myapp/test_settings.py in your source repository.
In that case, you'd respectively set DJANGO_SETTINGS_MODULE=myapp.production_settings to use the former and DJANGO_SETTINGS_MODULE=myapp.test_settings to use the latter.
From here on out, the problem boils down to setting the DJANGO_SETTINGS_MODULE environment variable.
Setting DJANGO_SETTINGS_MODULE using a script or a shell
You can then use a bootstrap script or a process manager to load the correct settings (by setting the environment), or just run it from your shell before starting Django: export DJANGO_SETTINGS_MODULE=myapp.production_settings.
Note that you can run this export at any time from a shell — it does not need to live in your .bashrc or anything.
Setting DJANGO_SETTINGS_MODULE using a Process Manager
If you're not fond of writing a bootstrap script that sets the environment (and there are very good reasons to feel that way!), I would recommend using a process manager:
Supervisor lets you pass environment variables to managed processes using a program's environment configuration key.
Honcho (a pure-Python equivalent of Ruby's Foreman) lets you define environment variables in an "environment" (.env) file.
Finally, note that you can take advantage of the PYTHONPATH variable to store the settings in a completely different location (e.g. on a production server, storing them in /etc/). This allows for separating configuration from application files. You may or may not want that, it depends on how your app is structured.
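For illustration, under the file names assumed above (myapp/production_settings.py, plus a shared settings module whose name here is my own placeholder), an environment-specific file can stay very small:
# myapp/production_settings.py - only the values that differ per environment
from myapp.shared_settings import *  # common settings live in one place

DEBUG = False
ALLOWED_HOSTS = ['www.example.com']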
By default use production settings, but create a file called settings_dev.py in the same folder as your settings.py file. Add overrides there, such as DEBUG=True.
On the computer that will be used for development, add this to your ~/.bashrc file:
export DJANGO_DEVELOPMENT=true
Or turn it on one time by prefixing your command:
DJANGO_DEVELOPMENT=true python manage.py runserver
At the bottom of your settings.py file, add the following.
# Override production variables if DJANGO_DEVELOPMENT env variable is true
if os.getenv('DJANGO_DEVELOPMENT') == 'true':
    from settings_dev import *  # or specific overrides
(Note that importing * should generally be avoided in Python)
By default the production servers will not override anything. Done!
Compared to the other answers, this one is simpler because it doesn't require updating PYTHONPATH or setting DJANGO_SETTINGS_MODULE, which only lets you work on one Django project at a time.
This is how I did it in 6 easy steps:
Create a folder inside your project directory and name it settings.
Project structure:
myproject/
    myapp1/
    myapp2/
    myproject/
        settings/
Create four python files inside of the settings directory namely __init__.py, base.py, dev.py and prod.py
Settings files:
settings/
    __init__.py
    base.py
    prod.py
    dev.py
Open __init__.py and fill it with the following content:
__init__.py:
import os

from .base import *

# you need to set "myproject = 'prod'" as an environment variable
# in your OS (on which your website is hosted)
if os.environ['myproject'] == 'prod':
    from .prod import *
else:
    from .dev import *
Open base.py and fill it with all the common settings (that will be used in both production as well as development.) for example:
base.py:
import os
...
INSTALLED_APPS = [...]
MIDDLEWARE = [...]
TEMPLATES = [{...}]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_ROOT = os.path.join(BASE_DIR, '/path/')
MEDIA_URL = '/path/'
Open dev.py and include that stuff which is development specific for example:
dev.py:
DEBUG = True
ALLOWED_HOSTS = ['localhost']
...
Open prod.py and include that stuff which is production specific for example:
prod.py:
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
LOGGING = [...]
...
Update
As ANDRESMA suggested in comments. Update BASE_DIR in your base.py file to reflect your updated path by adding another .parent to the end. For example:
BASE_DIR = Path(__file__).resolve().parent.parent.parent
I usually have one settings file per environment, and a shared settings file:
/myproject/
    settings.production.py
    settings.development.py
    shared_settings.py
Each of my environment files has:
try:
    from shared_settings import *
except ImportError:
    pass
This allows me to override shared settings if necessary (by adding the modifications below that stanza).
I then select which settings files to use by linking it in to settings.py:
ln -s settings.development.py settings.py
I use the awesome django-configurations, and all the settings are stored in my settings.py:
import os

from configurations import Configuration

class Base(Configuration):
    # all the base settings here...
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    ...

class Develop(Base):
    # development settings here...
    DEBUG = True
    ...

class Production(Base):
    # production settings here...
    DEBUG = False
To configure the Django project I just followed the docs.
Create multiple settings*.py files, pulling out the variables that need to change per environment. Then at the end of your master settings.py file:
try:
    from settings_dev import *
except ImportError:
    pass
You keep the separate settings_* files for each stage.
At the top of your settings_dev.py file, add this:
import sys
globals().update(vars(sys.modules['settings']))
To import variables that you need to modify.
This wiki entry has more ideas on how to split your settings.
Here is the approach we use:
a settings module to split settings into multiple files for readability;
a .env.json file to store credentials and parameters that we want excluded from our git repository, or that are environment specific;
an env.py file to read the .env.json file
Considering the following structure:
...
.env.json          # the file containing all specific credentials and parameters
.gitignore         # the .gitignore file to exclude `.env.json`
project_name/      # project dir (the one which django-admin.py creates)
    accounts/      # project's apps
        __init__.py
        ...
    ...
    env.py         # the file to load credentials
    settings/
        __init__.py    # main settings file
        database.py    # database conf
        storage.py     # storage conf
    ...
venv               # virtualenv
...
With .env.json like:
{
    "debug": false,
    "allowed_hosts": ["mydomain.com"],
    "django_secret_key": "my_very_long_secret_key",
    "db_password": "my_db_password",
    "db_name": "my_db_name",
    "db_user": "my_db_user",
    "db_host": "my_db_host"
}
And project_name/env.py :
import json
import os

def get_credentials():
    env_file_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    with open(os.path.join(env_file_dir, '.env.json'), 'r') as f:
        creds = json.loads(f.read())
    return creds

credentials = get_credentials()
We can have the following settings:
# project_name/settings/__init__.py
from project_name.env import credentials
from project_name.settings.database import *
from project_name.settings.storage import *

...

SECRET_KEY = credentials.get('django_secret_key')
DEBUG = credentials.get('debug')
ALLOWED_HOSTS = credentials.get('allowed_hosts', [])

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    ...
]

if DEBUG:
    INSTALLED_APPS += ['debug_toolbar']

...
# project_name/settings/database.py
from project_name.env import credentials

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': credentials.get('db_name', ''),
        'USER': credentials.get('db_user', ''),
        'HOST': credentials.get('db_host', ''),
        'PASSWORD': credentials.get('db_password', ''),
        'PORT': '5432',
    }
}
The benefits of this solution are:
user-specific credentials and configurations for local development without modifying the git repository;
environment-specific configuration: you can have, for example, three different environments with three different .env.json files for dev, staging and production;
credentials are not in the repository
I hope this helps, just let me know if you see any caveats with this solution.
I use the following file structure:
project/
    ...
    settings/
        common.py
        local.py
        prod.py
        __init__.py -> local.py
So __init__.py is a link (ln -s on Unix or mklink on Windows) to local.py, or it can point to prod.py. The configuration stays inside the project.settings module, clean and organized, and if you want to use a particular config you can set the environment variable DJANGO_SETTINGS_MODULE to project.settings.prod when you need to run a command for the production environment.
In the files prod.py and local.py:
from .shared import *

DATABASES = {
    ...
}
and the shared settings file keeps the global settings without environment-specific configs.
Use settings.py for production. In the same directory create settings_dev.py for overrides.
# settings_dev.py
from .settings import *
DEBUG = False
On a dev machine run your Django app with:
DJANGO_SETTINGS_MODULE=<your_app_name>.settings_dev python3 manage.py runserver
On a prod machine run as if you just had settings.py and nothing else.
ADVANTAGES
settings.py (used for production) is completely agnostic to the fact that any other environments even exist.
To see the difference between prod and dev you just look into a single location - settings_dev.py. No need to gather configurations scattered across settings_prod.py, settings_dev.py and settings_shared.py.
If someone adds a setting to your prod config after troubleshooting a production issue you can rest assured that it will appear in your dev config as well (unless explicitly overridden). Thus the divergence between different config files will be minimized.
Building off cs01's answer:
if you're having problems with the environment variable, set its value to a string (e.g. I did DJANGO_DEVELOPMENT="true").
I also changed cs01's file workflow as follows:
#settings.py
import os

if os.environ.get('DJANGO_DEVELOPMENT') is not None:
    from settings_dev import *
else:
    from settings_production import *
#settings_dev.py
# development settings go here

#settings_production.py
# production settings go here
This way, Django doesn't have to read through the entirety of a settings file before running the appropriate settings file. This solution comes in handy if your production file needs stuff that's only on your production server.
Note: in Python 3, imported modules need a leading . (e.g. from .settings_dev import *)
If you want to keep 1 settings file, and your development operating system is different than your production operating system, you can put this at the bottom of your settings.py:
from sys import platform

if platform == "linux" or platform == "linux2":
    # linux
    # some special setting here for when I'm on my prod server
    pass
elif platform == "darwin":
    # OS X
    # some special setting here for when I'm developing on my mac
    pass
elif platform == "win32":
    # Windows...
    # some special setting here for when I'm developing on my pc
    pass
Read more: How do I check the operating system in Python?
You may want to switch settings, secrets, environment variables and so on based on the git branch you are on. Relying on different settings files is okay, but in an enterprise situation you would like to hide all your sensitive information from the repo. It is not a security best practice to expose the environment variables and secrets of every environment (develop, staging, production, qa, etc.) to all the developers. The following should achieve these two things:
isolation of settings as per their environment of deployment
hide sensitive information from git repo
My run.sh
#!/bin/bash
# default environment
export DJANGO_ENVIRONMENT="develop"

BRANCH=$(git rev-parse --abbrev-ref HEAD)

if [[ $BRANCH == "main" ]]; then
    export DJANGO_ENVIRONMENT="production"
elif [[ $BRANCH == release/* ]]; then
    export DJANGO_ENVIRONMENT="staging"
else
    # for all other branches (feature, support, hotfix etc.) keep the default
    echo ''
fi

echo "
BRANCH: $BRANCH
ENVIRONMENT: $DJANGO_ENVIRONMENT
"

python3 myapp/manage.py makemigrations
python3 myapp/manage.py migrate --noinput
python3 myapp/manage.py runserver 0:8000
My vars.py (or secrets.py or whatever name) in the same folder as settings.py of django
vars = {
    'develop': {
        'environment': 'develop',
        'SECRET_KEY': 'mysecretkey',
        "DEBUG": "True"
    },
    'production': {
        'environment': 'production',
        'SECRET_KEY': 'mysecretkey',
        "DEBUG": "False"
    },
    'staging': {
        'environment': 'staging',
        'SECRET_KEY': 'mysecretkey',
        "DEBUG": "True"
    }
}
Then in settings.py just do the following:
from . import vars  # contains environment-specific vars
import os

DJANGO_ENVIRONMENT = os.getenv("DJANGO_ENVIRONMENT")  # declared in run.sh
envs = vars.vars[DJANGO_ENVIRONMENT]

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = envs["SECRET_KEY"]

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = envs["DEBUG"]
Let developers have their own vars.py on their local machines; during deployment your CI/CD pipeline can insert the actual vars.py with the real values, or some script should insert it. If you are using GitLab CI/CD you can store the entire vars.py as an environment variable.
This seems to have been answered; however, a method I use in combination with version control is the following:
Set up an env.py file in the same directory as settings.py on my local development environment, and add it to .gitignore:
env.py:
#!/usr/bin/python
DJANGO_ENV = True
ALLOWED_HOSTS = ['127.0.0.1', 'dev.mywebsite.com']
.gitignore:
mywebsite/env.py
settings.py:
if os.path.exists(os.getcwd() + '/env.py'):
    # env.py is excluded using the .gitignore file - when moving to production
    # we can automatically set debug mode to off:
    from env import *
else:
    DJANGO_ENV = False
DEBUG = DJANGO_ENV
I just find this works and is far more elegant - with env.py it is easy to see our local environment variables, and we can handle all of this without multiple settings.py files or the like. This method allows all sorts of local environment variables that we wouldn't want set on our production server. By utilising .gitignore via version control, we also keep everything seamlessly integrated.
For the problem of settings files, I choose to copy:
Project
|---__init__.py [ write code to copy setting file from subdir to current dir]
|---settings.py (do not commit this file to git)
|---setting1_dir
| |-- settings.py
|---setting2_dir
| |-- settings.py
When you run Django, __init__.py will be run. At that point, settings.py from setting1_dir will replace settings.py in Project.
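A minimal sketch of what that copying __init__.py could look like, assuming the layout above (this is my reading of the approach, not the author's exact code):
# Project/__init__.py - copy the chosen environment's settings.py next to this file
import os
import shutil

_here = os.path.dirname(os.path.abspath(__file__))
# switch 'setting1_dir' to 'setting2_dir' (or read an env variable) to change env
_chosen = os.path.join(_here, 'setting1_dir', 'settings.py')
shutil.copyfile(_chosen, os.path.join(_here, 'settings.py'))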
How to choose a different env?
Modify __init__.py directly.
Make a bash script that modifies __init__.py.
Set an env variable in Linux, and then let __init__.py read that variable.
Why use this approach?
Because I don't like having so many files in the same directory; too many files confuse other teammates and don't play well with the IDE (the IDE cannot tell which file we actually use).
If you do not want to deal with these details, you can divide the project into two parts:
make a small tool, like Spring Initializr, just for setting up your project (doing things like copying the file);
your project code
I'm using different app.yaml files to change the configuration between environments in Google Cloud App Engine.
You can use this terminal command to create a proxy connection:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
https://cloud.google.com/sql/docs/sqlserver/connect-admin-proxy#macos-64-bit
File: app.yaml
# [START django_app]
service: development
runtime: python37

env_variables:
  DJANGO_DB_HOST: '/cloudsql/myproject:myregion:myinstance'
  DJANGO_DEBUG: True

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
I create a file named "production" in the working directory in production.
#settings.py
from pathlib import Path

production = Path("production")

DEBUG = False
#if it's dev mode
if not production.is_file():
    INSTALLED_APPS += [
        #apps_in_development_mode,
        #...
    ]
    DEBUG = True
    #other settings to override the default production settings
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
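In other words, the auto-generated wsgi.py can simply be pointed at a production settings module, roughly like this (the myproject.settings_production name is just an assumed placeholder):
# wsgi.py - point the WSGI entry point at the production settings module
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings_production')
application = get_wsgi_application()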
What we do here is to have an .ENV file for each environment. This file contains a lot of variables like ENV=development
The settings.py file is basically a bunch of os.environ.get(), like ENV = os.environ.get('ENV')
So when you need to access that you can do ENV = settings.ENV.
You would have to have a .env file for your production, testing, development.
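A stripped-down sketch of that pattern (the variable names are just examples; how the .env file gets loaded into the environment depends on your tooling):
# settings.py - read everything from the environment, populated from a .env file
import os

ENV = os.environ.get('ENV', 'development')          # e.g. development / testing / production
DEBUG = os.environ.get('DEBUG', 'False') == 'True'
SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-only-insecure-key')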
This is my solution, with different environments for dev, test and prod:
import socket

[...]

DEV_PC = 'PC059'
host_name = socket.gethostname()

if host_name == DEV_PC:
    # do something
    pass
elif [...]

Collectstatic error while deploying Django app to Heroku

I'm trying to deploy a Django app to Heroku. It starts to build, downloads and installs everything, but this is what I get when it comes to collecting static files:
$ python manage.py collectstatic --noinput
remote: Traceback (most recent call last):
remote: File "manage.py", line 10, in <module>
remote: execute_from_command_line(sys.argv)
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
remote: utility.execute()
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
remote: self.fetch_command(subcommand).run_from_argv(self.argv)
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv
remote: self.execute(*args, **cmd_options)
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute
remote: output = self.handle(*args, **options)
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle
remote: collected = self.collect()
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 98, in collect
remote: for path, storage in finder.list(self.ignore_patterns):
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 112, in list
remote: for path in utils.get_files(storage, ignore_patterns):
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py", line 28, in get_files
remote: directories, files = storage.listdir(location)
remote: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/files/storage.py", line 300, in listdir
remote: for entry in os.listdir(path):
remote: OSError: [Errno 2] No such file or directory: '/app/blogproject/static'
remote:
remote: ! Error while running '$ python manage.py collectstatic --noinput'.
remote: See traceback above for details.
remote:
remote: You may need to update application code to resolve this error.
remote: Or, you can disable collectstatic for this application:
remote:
remote: $ heroku config:set DISABLE_COLLECTSTATIC=1
remote:
remote: https://devcenter.heroku.com/articles/django-assets
remote:
remote: ! Push rejected, failed to compile Python app
remote:
remote: Verifying deploy...
remote:
remote: ! Push rejected to pin-a-voyage.
This is the whole settings.py file
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*********************'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog',
    'custom_user',
    'django_markdown',
    'parsley',
)
#### AUTH ###
AUTH_USER_MODEL = 'custom_user.CustomUser'
AUTHENTICATION_BACKENDS = (
    'custom_user.backends.CustomUserAuth',
    'django.contrib.auth.backends.ModelBackend',
    # 'django.contrib.auth.backends.RemoteUserBackend',
)
#############
#### EMAIL ###
EMAIL_USE_TLS = True
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_PASSWORD = '***' #my gmail password
EMAIL_HOST_USER = 'voyage.pin@gmail.com' #my gmail username
DEFAULT_FROM_EMAIL = 'voyage.pin@gmail.com'
SERVER_EMAIL = 'voyage.pin@gmail.com'
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
##############
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'blogproject.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'blogproject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'blogproject',
        'USER': '***',
        'PASSWORD': '***',
        'HOST': 'localhost',
        'PORT': '',
    }
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Update database configuration with $DATABASE_URL.
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
# Honor the 'X-Forwarded-Proto' header for request.is_secure()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Allow all host headers
ALLOWED_HOSTS = ['*']
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'staticfiles')
STATIC_URL = '/static/'
# Extra places for collectstatic to find static files.
STATICFILES_DIRS = (
    os.path.join(PROJECT_ROOT, 'static'),
)
# Simplified static file serving.
# https://warehouse.python.org/project/whitenoise/
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
This is the structure of the project
blog-project -- blog           -- migrations
                               -- static
                               -- templates
             -- blogproject
             -- blogprojectenv
             -- custom_user
             -- media
             -- .git
Any thoughts?
I just updated to Django 1.10 today and had the exact same problem.
Your static settings are identical to mine as well.
This worked for me, run the following commands:
disable the collectstatic during a deploy
heroku config:set DISABLE_COLLECTSTATIC=1
deploy
git push heroku master
run migrations (django 1.10 added at least one)
heroku run python manage.py migrate
run collectstatic using bower
heroku run 'bower install --config.interactive=false;grunt prep;python manage.py collectstatic --noinput'
enable collectstatic for future deploys
heroku config:unset DISABLE_COLLECTSTATIC
try it on your own (optional)
heroku run python manage.py collectstatic
future deploys should work as normal from now on
You have STATICFILES_DIRS configured to expect a static directory in the same directory as your settings.py file, so make sure it's there, not somewhere else.
Also, do you have any files in that static directory? If you don't then git won't track it and so although it exists locally it won't exist in git. The usual solution to this is to create an empty file called .keep in the directory which will ensure that git tracks it. But once you have some static files in this directory then it won't be a problem anymore.
DO NOT disable collectstatic on heroku with heroku config:set DISABLE_COLLECTSTATIC=1. This will just hide the error and not make your app healthy.
Instead, it's better to understand why the collectstatic command fails because it means something is not right with your settings.
Step 1
Run both commands locally:
python manage.py collectstatic
python manage.py test
You should see one or more error messages. Most of the time, it's a missing variable (for ex: STATIC_ROOT) you must add to your project settings.py file.
It's necessary to add the test command because some collectstatic related issues will only surface with test, such as this one
Step 2
Once you've fixed all the error messages locally, push again to heroku.
Troubleshooting
Remember you can also run commands directly in your heroku VM.
If you cannot reproduce it locally, run the collectstatic command on Heroku and check what's going on directly in your production environment:
python manage.py collectstatic --dry-run --noinput
(Same goes for heroku console obviously)
Run python manage.py collectstatic locally and fix any errors. In my case there were reference errors that prevented that command from running successfully.
If you use the django-heroku library,
maybe you forgot to put this setting at the very bottom of settings.py so it can read all the config parameters:
import django_heroku
django_heroku.settings(locals())
As in the documentation:
Usage of Django-Heroku
In settings.py, at the very bottom:
...
# Configure Django App for Heroku.
import django_heroku
django_heroku.settings(locals())
This will automatically configure DATABASE_URL, ALLOWED_HOSTS, WhiteNoise (for static assets), Logging, and Heroku CI for your application.
p.s: sorry for my bad english
This error occurred because you do not have a staticfiles directory set up in your project's root directory.
Don't worry. The solution is simple:
you only need TWO STEPS.
Step 1: Open your settings.py file and write
import os
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
STATIC_ROOT = BASE_DIR / 'staticfiles'
Step 2: Run below given command in terminal in your Project's root directory.
(If you are using any Virtual environment for Django Project then go inside your Virtual environment and then go into your Project's root directory and then run below given command.)
python manage.py collectstatic
Congrats, your problem is solved.
Now you can COMMIT and PUSH your changes so they are reflected in your repository, and you are good to go.
This worked for me:
step 1 - heroku config:set DISABLE_COLLECTSTATIC=1
step 2 - git push heroku master
I faced the same problem.
Follow these steps:
heroku config:set DISABLE_COLLECTSTATIC=1
git push heroku master
python manage.py collectstatic
python manage.py test
If any error occurs after running the tests, check that your
STATIC_ROOT is correct, like this: STATIC_ROOT = os.path.join(BASE_DIR, 'static').
After running the collectstatic command, check that all static files are
stored in the static directory at your root directory level (the manage.py
directory level)...
heroku run python manage.py collectstatic.
heroku run python manage.py migrate
heroku config:unset DISABLE_COLLECTSTATIC (for future use).
Heroku has a document with suggestions on how to handle this: https://devcenter.heroku.com/articles/django-assets
add to settings.py
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
make a directory in the root of your project called staticfiles, put a favicon or something in there, just make sure git tracks it. Then the collectstatic command should finish on heroku.
I ran into that issue after trying to deploy an app again. The problem was cured after I set these config vars:
$ heroku config:set SECRET_KEY="*secret_key*"
$ heroku config:set DEBUG_VALUE="True"
$ heroku config:set EMAIL_USER="*user-email*"
$ heroku config:set EMAIL_PASS="*pass*"
These variables in settings.py were populated from local environment variables,
which Heroku didn't have in its environment, hence the error.
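For context, settings along these lines (reading the same variable names set with heroku config:set above) are what make those config vars necessary; this is a sketch, not the answerer's actual file:
# settings.py - values come from the environment, both locally and on Heroku
import os

SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = os.environ.get('DEBUG_VALUE') == 'True'
EMAIL_HOST_USER = os.environ.get('EMAIL_USER')
EMAIL_HOST_PASSWORD = os.environ.get('EMAIL_PASS')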
It seems to me that it's having problems creating that blogproject/static folder. I see you have a static folder inside your blog app, but it should be up one level in your blogproject folder.
Try creating a static folder inside your blogproject folder and that error should go away.
Today, not all of the requirements came in properly with $ pipenv install django from the heroku-django-template and $ pip install -r requirements.txt.
The latest version of the template includes a /static folder with a humans.txt, so the previous solution is likely not the problem.
Try running $ pipenv install whitenoise and then $ pip freeze > requirements.txt.
If that works, I would recommend $ pip install psycopg2 --ignore-installed and $ pip freeze > requirements.txt as well, otherwise you will similarly have problems migrating.
I faced the same issue while deploying my app. I realized I had updated my pip version and installed a few packages, but forgot to create a fresh requirements.txt file.
Run pip freeze > requirements.txt in your terminal
Run python manage.py collectstatic
Now push the code to github and deploy to heroku server
Hope this helps if that is the case
Insert this line of code into your settings.py file:
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
In my case it was an error almost like the one described above. After a push that resulted in errors, I set the SECRET_KEY with "heroku config:set SECRET_KEY='*************************'", then ran:
git push heroku main (again)
heroku run python manage.py migrate
heroku run python manage.py createsuperuser .. and everything
heroku open
and it worked :)
removing STATICFILES_DIRS worked in my case
Just run the command:
heroku config:set DISABLE_COLLECTSTATIC=1 --app #yourappname
This problem occurs because Heroku tries to run manage.py itself.
When we execute manage.py ourselves,
we write something like
python manage.py 'some_command'
But Heroku tries it as
python manage.py --noinput
So in this case we can make changes to our manage.py file.
Initially it looks like this:
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys

def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE',
                          'your_project.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)  # just put this in try block

if __name__ == '__main__':
    main()
So we change our manage.py to:
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys

def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE',
                          'your_project.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    try:
        execute_from_command_line(sys.argv)  # just put this in a try block
    except:
        pass

if __name__ == '__main__':
    main()
If you used .env files and python-decouple, you'll have to define the environment variables in Heroku app settings > Config Vars. Otherwise collectstatic won't work.
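For reference, a settings file using python-decouple typically reads values like this, which is why the corresponding Config Vars must exist on Heroku (a sketch, not the asker's code):
# settings.py - python-decouple reads .env locally and real env vars on Heroku
from decouple import config, Csv

SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='', cast=Csv())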
After testing everything that was posted on this thread, here's what worked for me:
Keep the Heroku environment variable DISABLE_COLLECTSTATIC set to 0, as it won't really solve the issue, but just mask it and mess with your site's assets
When using django_heroku lib on the settings file, include the argument "staticfiles=False", like this: django_heroku.settings(locals(), staticfiles=False)
The STATIC_ROOT variable on settings.py should be set as STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles'), being BASE_DIR set as BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))). This is described on Heroku's django-assets doc
After doing all that, run python manage.py migrate on the Heroku CLI and you should see a message about assets being downloaded.
Before I got it to actually work, the collectstatic command (python manage.py collectstatic on the Heroku CLI) was giving correct output, so be aware that you may get no errors with this command but still have something wrong going on.

How to deploy / migrate an existing django app / project to a production server on Heroku?

I have a basic Django app (Newsdiffs) that runs just fine at localhost:8000 with python website/manage.py runserver, but I'd like to migrate it to Heroku and I can't figure out what my next step is.
I thought getting it running locally would translate to running it on Heroku, but I'm realizing that python website/manage.py runserver is launching the dev settings and I'm not sure how to tell it to use the main settings.
All that is in my Procfile is this:
web: python website/manage.py runserver
Locally, that works fine, though it launches at http://127.0.0.1:8000/, which is probably not what I want on Heroku. So how do I figure out where to set the hostname and port? I don't see either anywhere in the app.
I drew up this list for myself just two days ago.
It was put together after having followed the steps described in Heroku's help pages for python.
It's by no means definitive nor perfect, and it will change, but it's a valid trace, since I was able to put the site online.
Some issues remain, to be checked thoroughly, e.g. the location of the media/ directory where files are uploaded should/could live outside your project for security reasons (now it works, but I have noticed if the dyno sleeps then the files are not reached/displayed by the template later).
The same goes for the staticfiles/ directory (although this one seems to work fine).
Also, you might want to set django's debug mode to false.
So here it is:
My first steps to deploy an EXISTING django application to Heroku
ASSUMPTIONS:
a) your django project is in a virtual environment already
b) you have already collected all your project's required packages with
pip freeze > requirements.txt
and committed it to git
git add requirements.txt
git commit -m 'my prj requirements'
0) Activate your project's virtual environment
workon xyz #using virtualenvwrapper
then go to your django project's directory (DPD for short) if not already taken there
cd ~/prj/xyz (or cdproject with virtualenvwrapper if setup properly)
and create a new git branch for heroku twiddling to prevent messing things up
git checkout -b he
1) Create the app on heroku
heroku create xyz
that also adds heroku as a remote of your repo
2) Add the needed packages to requirements.txt
vi requirements.txt
add
dj-database-url==0.3.0
django-postgrespool==0.3.0
gunicorn==19.3.0
psycopg2==2.6
django-toolbelt==0.0.1
static3==0.5.1
whitenoise==2.0.3
3) Install all dependencies in the local venv
pip install -r requirements.txt --allow-all-external
4) Setup the heroku django settings
cd xyz
create a copy
cp settings.py settings_heroku.py
and edit it
vi settings_heroku.py
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'), )
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
replace django's std db cfg with
DATABASES['default'] = dj_database_url.config()
DATABASES['default']['ENGINE'] = 'django_postgrespool'
and
WSGI_APPLICATION = 'xyz.wsgi_heroku.application'
5) Configure the necessary environment variables (heroku configs)
edit the .env file
vi .env
e.g.
DJANGO_SECRET_KEY=whatever
EMAIL_HOST_USER=youruser@gmail.com
EMAIL_HOST_PASSWORD=whateveritis
and/or set them manually if needed (in my case .env had no effect, wasn't loaded apparently, and had to set the vars manually for now)
heroku config:set DJANGO_SECRET_KEY=whatever
heroku config:set EMAIL_HOST_USER=youruser@gmail.com
heroku config:set EMAIL_HOST_PASSWORD=whateveritis
6) Create a separate wsgi file for heroku
cd xyx
cp wsgi.py wsgi_heroku.py
and edit it to make it point to the right settings
vi wsgi_heroku.py
import os

from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "xyz.settings_heroku")
application = get_wsgi_application()
application = DjangoWhiteNoise(application)
7) Make sure all the templates use
{% load staticfiles %}
8) Define the Procfile file so that it points to the right wsgi
e.g.
cd ~/prj/xyz (DPD)
vi Procfile
add
web: gunicorn xyz.wsgi_heroku --log-file -
9) Collect all static content into DPD/staticfiles/
locally, make sure Django uses the Heroku settings (so collectstatic picks up the right STATIC_ROOT)
export DJANGO_SETTINGS_MODULE=xyz.settings_heroku
python manage.py collectstatic
10) add the changes to the local git repo (he branch)
git add --all .
git commit -m 'first 4 heroku'
11) check the whole thing works locally
heroku local # in heroku's help they also add `web`, not needed?!
12) push your code to heroku
git push heroku he:master
13) make sure an instance of the app is running
heroku ps:scale web=1
14) create the tables on the heroku DB
heroku run python manage.py migrate
Note: if you see a message that says, “You just installed Django’s auth system, which means you don’t have any superusers defined. Would you like to create one now?”, type no.
15) add the superuser to the heroku DB
heroku run bash
python manage.py createsuperuser
and fill in the details, as usual
16) Populate the DB with the necessary fixtures
heroku run python manage.py loaddata yourfile.json
17) Visit the website page on heroku's webserver
heroku open
or go to
https://xyz.herokuapp.com/
and the admin
https://xyz.herokuapp.com/admin/
and the DB
https://xyz.herokuapp.com/db
Useful commands:
View the app's logs
heroku logs [--tail]
List add-ons deployed
heroku addons
and use one:
heroku addons:open <add-on-name>
Run a command on heroku (the remote env, where you are deploying)
heroku run python manage.py shell
heroku run bash
Set a config var on Heroku
heroku config:set VARNAME=whatever
View the config vars that are set (including the DB's)
heroku config
View postgres DB details
heroku pg
If you know some python and have a lot of experience building web apps in other languages but don't totally understand where Heroku fits, I highly recommend Discover Flask, which patched a lot of the holes in my understanding of how these pieces all fit together.
Some of the things that I worked out:
you really do need an isolated virtual environment if you're going to deploy to Heroku, because Heroku installs Python modules from the requirements.txt file.
Gunicorn is a web server, and you definitely need to run your app under Gunicorn or it won't run on Heroku.
The "Procfile" doesn't just give the command you use to run the app locally. And Heroku requires it. So if you've got an app that was built to run on Heroku and it doesn't include a Procfile, they left something out.
You don't tell Heroku what your hostname is. When you run heroku create it should tell you what your domain name is going to be. And every time you run git push heroku master (or whatever branch you're pushing, maybe it isn't master), Heroku will (try to) restart your app.
Heroku doesn't support SQLite. You have to run your production DB in Postgres.
This doesn't directly answer my question, but it does fill in some of the missing pieces that were making it hard for me to even ask the right question. RTFM notwithstanding. :)

Django: How to manage development and production settings?

I have been developing a basic app. Now, at the deployment stage, it has become clear that I need both local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
The DJANGO_SETTINGS_MODULE environment variable controls which settings file Django will load.
You therefore create separate configuration files for your respective environments (note that they can of course both import * from a separate, "shared settings" file), and use DJANGO_SETTINGS_MODULE to control which one to use.
Here's how:
As noted in the Django documentation:
The value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g. mysite.settings. Note that the settings module should be on the Python import search path.
So, let's assume you created myapp/production_settings.py and myapp/test_settings.py in your source repository.
In that case, you'd respectively set DJANGO_SETTINGS_MODULE=myapp.production_settings to use the former and DJANGO_SETTINGS_MODULE=myapp.test_settings to use the latter.
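As a minimal sketch of that layout (the file names and values below are illustrative, not prescribed by Django; shared_settings.py is my own name for the shared file mentioned above):
# myapp/shared_settings.py -- everything common to all environments
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    # ...
]

# myapp/production_settings.py
from myapp.shared_settings import *

DEBUG = False
ALLOWED_HOSTS = ['www.example.com']

# myapp/test_settings.py
from myapp.shared_settings import *

DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']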
From here on out, the problem boils down to setting the DJANGO_SETTINGS_MODULE environment variable.
Setting DJANGO_SETTINGS_MODULE using a script or a shell
You can then use a bootstrap script or a process manager to load the correct settings (by setting the environment), or just run it from your shell before starting Django: export DJANGO_SETTINGS_MODULE=myapp.production_settings.
Note that you can run this export at any time from a shell — it does not need to live in your .bashrc or anything.
Setting DJANGO_SETTINGS_MODULE using a Process Manager
If you're not fond of writing a bootstrap script that sets the environment (and there are very good reasons to feel that way!), I would recommend using a process manager:
Supervisor lets you pass environment variables to managed processes using a program's environment configuration key.
Honcho (a pure-Python equivalent of Ruby's Foreman) lets you define environment variables in an "environment" (.env) file.
Finally, note that you can take advantage of the PYTHONPATH variable to store the settings in a completely different location (e.g. on a production server, storing them in /etc/). This allows for separating configuration from application files. You may or may not want that, it depends on how your app is structured.
By default use production settings, but create a file called settings_dev.py in the same folder as your settings.py file. Add overrides there, such as DEBUG=True.
On the computer that will be used for development, add this to your ~/.bashrc file:
export DJANGO_DEVELOPMENT=true
Or turn it on one time by prefixing your command:
DJANGO_DEVELOPMENT=true python manage.py runserver
At the bottom of your settings.py file, add the following.
# Override production variables if DJANGO_DEVELOPMENT env variable is true
import os  # if not already imported at the top of settings.py
if os.getenv('DJANGO_DEVELOPMENT') == 'true':
    from settings_dev import *  # or specific overrides
(Note that importing * should generally be avoided in Python)
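For reference, a minimal settings_dev.py under this scheme might contain nothing but the overrides (the values below are just examples):
# settings_dev.py -- development-only overrides
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']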
By default the production servers will not override anything. Done!
Compared to the other answers, this one is simpler because it doesn't require updating PYTHONPATH, or setting DJANGO_SETTINGS_MODULE which only allows you to work on one django project at a time.
This is how I did it in 6 easy steps:
Create a folder inside your project directory and name it settings.
Project structure:
myproject/
myapp1/
myapp2/
myproject/
settings/
Create four Python files inside the settings directory, namely __init__.py, base.py, dev.py and prod.py
Settings files:
settings/
__init__.py
base.py
prod.py
dev.py
Open __init__.py and fill it with the following content:
__init__.py:
import os

from .base import *

# you need to set "myproject = 'prod'" as an environment variable
# in your OS (on which your website is hosted)
if os.environ['myproject'] == 'prod':
    from .prod import *
else:
    from .dev import *
Open base.py and fill it with all the common settings (those used in both production and development), for example:
base.py:
import os
...
INSTALLED_APPS = [...]
MIDDLEWARE = [...]
TEMPLATES = [{...}]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')  # note: a leading slash in the second argument would discard BASE_DIR
MEDIA_URL = '/media/'
Open dev.py and include the development-specific settings, for example:
dev.py:
DEBUG = True
ALLOWED_HOSTS = ['localhost']
...
Open prod.py and include the production-specific settings, for example:
prod.py:
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
LOGGING = [...]
...
Update
As ANDRESMA suggested in the comments, update BASE_DIR in your base.py file to reflect the new path by adding another .parent to the end. For example:
BASE_DIR = Path(__file__).resolve().parent.parent.parent
I usually have one settings file per environment, and a shared settings file:
/myproject/
settings.production.py
settings.development.py
shared_settings.py
Each of my environment files has:
try:
    from shared_settings import *
except ImportError:
    pass
This allows me to override shared settings if necessary (by adding the modifications below that stanza).
I then select which settings files to use by linking it in to settings.py:
ln -s settings.development.py settings.py
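For example, settings.development.py in this layout might be little more than the stanza above plus a few overrides (the values are illustrative, and the database override assumes shared_settings.py defines DATABASES):
# settings.development.py (sketch)
try:
    from shared_settings import *
except ImportError:
    pass

# development-only modifications below the stanza
DEBUG = True
DATABASES['default']['NAME'] = 'myproject_dev'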
I use the awesome django-configurations, and all the settings are stored in my settings.py:
import os

from configurations import Configuration

class Base(Configuration):
    # all the base settings here...
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    ...

class Develop(Base):
    # development settings here...
    DEBUG = True
    ...

class Production(Base):
    # production settings here...
    DEBUG = False
To configure the Django project I just followed the docs.
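For context, the wiring roughly follows the django-configurations quick start: you choose the configuration class via the DJANGO_CONFIGURATION environment variable and use the package's own entry points instead of Django's (a sketch; double-check the exact imports against the version you install):
# manage.py (sketch for django-configurations)
import os
import sys

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    os.environ.setdefault('DJANGO_CONFIGURATION', 'Develop')  # or 'Production'

    # django-configurations ships its own execute_from_command_line
    from configurations.management import execute_from_command_line
    execute_from_command_line(sys.argv)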
Create multiple settings*.py files, extracting into them the variables that need to change per environment. Then at the end of your master settings.py file:
try:
    from settings_dev import *
except ImportError:
    pass
You keep the separate settings_* files for each stage.
At the top of your settings_dev.py file, add this:
import sys
globals().update(vars(sys.modules['settings']))
To import variables that you need to modify.
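Put together, a sketch of how the two files interact (this assumes the master settings module is importable as plain settings, as in the snippet above; the INSTALLED_APPS tweak is only an example):
# settings.py (master)
DEBUG = False
INSTALLED_APPS = ['django.contrib.admin', 'django.contrib.auth']
try:
    from settings_dev import *
except ImportError:
    pass

# settings_dev.py
import sys
globals().update(vars(sys.modules['settings']))  # pull in everything settings.py has defined so far
DEBUG = True
INSTALLED_APPS = INSTALLED_APPS + ['debug_toolbar']  # the imported value can now be extended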
This wiki entry has more ideas on how to split your settings.
Here is the approach we use:
a settings module to split settings into multiple files for readability;
a .env.json file to store credentials and parameters that we want excluded from our git repository, or that are environment specific;
an env.py file to read the .env.json file
Considering the following structure:
...
.env.json # the file containing all specific credentials and parameters
.gitignore # the .gitignore file to exclude `.env.json`
project_name/ # project dir (the one which django-admin.py creates)
accounts/ # project's apps
__init__.py
...
...
env.py # the file to load credentials
settings/
__init__.py # main settings file
database.py # database conf
storage.py # storage conf
...
venv # virtualenv
...
With .env.json like:
{
    "debug": false,
    "allowed_hosts": ["mydomain.com"],
    "django_secret_key": "my_very_long_secret_key",
    "db_password": "my_db_password",
    "db_name": "my_db_name",
    "db_user": "my_db_user",
    "db_host": "my_db_host"
}
And project_name/env.py :
import json
import os

def get_credentials():
    env_file_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    with open(os.path.join(env_file_dir, '.env.json'), 'r') as f:
        creds = json.loads(f.read())
    return creds
credentials = get_credentials()
We can have the following settings:
# project_name/settings/__init__.py
from project_name.env import credentials
from project_name.settings.database import *
from project_name.settings.storage import *
...
SECRET_KEY = credentials.get('django_secret_key')
DEBUG = credentials.get('debug')
ALLOWED_HOSTS = credentials.get('allowed_hosts', [])
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    ...
]

if DEBUG:
    INSTALLED_APPS += ['debug_toolbar']
...
# project_name/settings/database.py
from project_name.env import credentials
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': credentials.get('db_name', ''),
        'USER': credentials.get('db_user', ''),
        'HOST': credentials.get('db_host', ''),
        'PASSWORD': credentials.get('db_password', ''),
        'PORT': '5432',
    }
}
The benefits of this solution are:
user-specific credentials and configuration for local development without modifying the git repository;
environment-specific configuration: you can have, for example, three different environments with three different .env.json files for dev, staging and production;
credentials are not in the repository
I hope this helps, just let me know if you see any caveats with this solution.
I use the following file structure:
project/
...
settings/
settings/shared.py
settings/local.py
settings/prod.py
settings/__init__.py -> local.py
So __init__.py is a link (ln on Unix or mklink on Windows) to local.py, or can point to prod.py, so the configuration stays inside the project.settings module, clean and organized; and if you want to use a particular config, you can set the environment variable DJANGO_SETTINGS_MODULE to project.settings.prod when you need to run a command against the production environment.
In the files prod.py and local.py:
from .shared import *

DATABASES = {
    ...
}
and the shared.py file holds the global settings, without environment-specific configs.
Use settings.py for production. In the same directory create settings_dev.py for overrides.
# settings_dev.py
from .settings import *
DEBUG = True
On a dev machine run your Django app with:
DJANGO_SETTINGS_MODULE=<your_app_name>.settings_dev python3 manage.py runserver
On a prod machine run as if you just had settings.py and nothing else.
ADVANTAGES
settings.py (used for production) is completely agnostic to the fact that any other environments even exist.
To see the difference between prod and dev you just look into a single location - settings_dev.py. No need to gather configurations scattered across settings_prod.py, settings_dev.py and settings_shared.py.
If someone adds a setting to your prod config after troubleshooting a production issue you can rest assured that it will appear in your dev config as well (unless explicitly overridden). Thus the divergence between different config files will be minimized.
building off cs01's answer:
if you're having problems with the environment variable, set its value to a string (e.g. I did DJANGO_DEVELOPMENT="true").
I also changed cs01's file workflow as follows:
#settings.py
import os

if os.environ.get('DJANGO_DEVELOPMENT') is not None:
    from settings_dev import *
else:
    from settings_production import *

#settings_dev.py
# development settings go here

#settings_production.py
# production settings go here
This way, Django doesn't have to read through the entirety of a settings file before running the appropriate settings file. This solution comes in handy if your production file needs stuff that's only on your production server.
Note: in Python 3, the import needs a leading . (e.g. from .settings_dev import *)
If you want to keep 1 settings file, and your development operating system is different than your production operating system, you can put this at the bottom of your settings.py:
from sys import platform

if platform == "linux" or platform == "linux2":
    # linux
    # some special setting here for when I'm on my prod server
    pass
elif platform == "darwin":
    # OS X
    # some special setting here for when I'm developing on my mac
    pass
elif platform == "win32":
    # Windows...
    # some special setting here for when I'm developing on my pc
    pass
Read more: How do I check the operating system in Python?
You may want to switch settings, secrets, environment variables and so on based on the git branch you are in. Relying on different settings files works, but in an enterprise setting you also want to hide all sensitive information from the repo: it is not a security best practice to expose the environment variables and secrets of every environment (develop, staging, production, qa, etc.) to all developers. The following achieves two things:
isolation of settings per deployment environment
hiding sensitive information from the git repo
My run.sh
#!/bin/bash
# default environment
export DJANGO_ENVIRONMENT="develop"
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if [[ "$BRANCH" == "main" ]]; then
    export DJANGO_ENVIRONMENT="production"
elif [[ "$BRANCH" == release/* ]]; then
    export DJANGO_ENVIRONMENT="staging"
else
    # for all other branches (feature, support, hotfix etc.), keep the default
    echo ''
fi
echo "
BRANCH: $BRANCH
ENVIRONMENT: $DJANGO_ENVIRONMENT
"
python3 myapp/manage.py makemigrations
python3 myapp/manage.py migrate --noinput
python3 myapp/manage.py runserver 0:8000
My vars.py (or secrets.py or whatever name) in the same folder as settings.py of django
vars = {
    'develop': {
        'environment': 'develop',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'True'
    },
    'production': {
        'environment': 'production',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'False'
    },
    'staging': {
        'environment': 'staging',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'True'
    }
}
then in settings.py just do the following
import os

from . import vars  # contains the environment-specific vars

DJANGO_ENVIRONMENT = os.getenv("DJANGO_ENVIRONMENT")  # declared in run.sh
envs = vars.vars[DJANGO_ENVIRONMENT]

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = envs["SECRET_KEY"]

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = envs["DEBUG"] == "True"  # the values in vars.py are stored as strings
Let developers keep their own vars.py on their local machines, but during deployment have your CI/CD pipeline (or a script) insert the real vars.py with the actual values. If you are using GitLab CI/CD, you can store the entire vars.py as an environment variable.
This seems to have been answered already; however, a method I use in combination with version control is the following:
Set up an env.py file in the same directory as settings.py on my local development environment, and add it to .gitignore:
env.py:
#!/usr/bin/python
DJANGO_ENV = True
ALLOWED_HOSTS = ['127.0.0.1', 'dev.mywebsite.com']
.gitignore:
mywebsite/env.py
settings.py:
if os.path.exists(os.getcwd() + '/env.py'):
    # env.py is excluded using the .gitignore file - when moving to production we can automatically set debug mode to off:
    from env import *
else:
    DJANGO_ENV = False

DEBUG = DJANGO_ENV
I just find this works and is far more elegant: with env.py it is easy to see our local environment variables, and we can handle all of this without multiple settings.py files or the like. This method allows for all sorts of local environment variables that we wouldn't want set on our production server. Utilising the .gitignore via version control, we also keep everything seamlessly integrated.
For the problem of settings files, I chose to copy them:
Project
|---__init__.py [ write code to copy setting file from subdir to current dir]
|---settings.py (do not commit this file to git)
|---setting1_dir
| |-- settings.py
|---setting2_dir
| |-- settings.py
When you run Django, __init__.py will be run. At that point, settings.py in setting1_dir will replace settings.py in Project (see the sketch after the list below).
How to choose a different env?
modify __init__.py directly.
make a bash script that modifies __init__.py.
set an environment variable in Linux and let __init__.py read it.
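A rough sketch of what the copy step in __init__.py could look like (the SETTINGS_DIR environment variable name is my own illustrative choice, not part of the original recipe):
# Project/__init__.py (sketch)
import os
import shutil

_here = os.path.dirname(os.path.abspath(__file__))
# choose setting1_dir or setting2_dir, here via an environment variable
_chosen = os.environ.get('SETTINGS_DIR', 'setting1_dir')
shutil.copyfile(
    os.path.join(_here, _chosen, 'settings.py'),  # the environment-specific settings
    os.path.join(_here, 'settings.py'),           # overwrite the active settings.py
)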
Why use this approach?
Because I don't like having so many files in the same directory: too many files confuse other team members, and it doesn't play well with the IDE (the IDE cannot tell which file is actually in use).
If you do not want to see all these details, you can divide the project into two parts:
a small tool of your own, like Spring Initializr, just for setting up your project (doing things like copying the settings file);
your project code
I'm using different app.yaml files to change the configuration between environments in Google Cloud App Engine.
You can use this command in your terminal to create a proxy connection:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
https://cloud.google.com/sql/docs/sqlserver/connect-admin-proxy#macos-64-bit
File: app.yaml
# [START django_app]
service: development
runtime: python37

env_variables:
  DJANGO_DB_HOST: '/cloudsql/myproject:myregion:myinstance'
  DJANGO_DEBUG: 'True'

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto

# [END django_app]
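On the Django side, settings.py then reads those variables back out of the environment (a sketch; only the two variables from the app.yaml above are shown, the database engine and credentials are omitted):
# settings.py (sketch)
import os

DEBUG = os.environ.get('DJANGO_DEBUG', 'False') == 'True'

DATABASES = {
    'default': {
        # ENGINE, NAME, USER, PASSWORD depend on your database backend
        'HOST': os.environ.get('DJANGO_DB_HOST', ''),
    }
}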
I create a file named "production" in the working directory on the production server.
#settings.py
from pathlib import Path

production = Path("production")

DEBUG = False
#if it's dev mode
if not production.is_file():
    INSTALLED_APPS += [
        #apps_in_development_mode,
        #...
    ]
    DEBUG = True
    #other settings to override the default production settings
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
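A minimal sketch of what that might look like (myproject.settings_production is just an illustrative module path):
# wsgi.py (sketch)
import os

from django.core.wsgi import get_wsgi_application

# point the WSGI entry point at the production settings module
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings_production")
application = get_wsgi_application()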
What we do here is have a .env file for each environment. This file contains a lot of variables like ENV=development.
The settings.py file is basically a bunch of os.environ.get(), like ENV = os.environ.get('ENV')
So when you need to access that you can do ENV = settings.ENV.
You would have a separate .env file for production, testing and development.
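A tiny sketch of that pattern (names are illustrative; getting the .env file into the process environment is assumed to be handled by your process manager or a helper such as python-dotenv):
# settings.py (sketch)
import os

ENV = os.environ.get('ENV', 'development')  # 'development', 'testing' or 'production'
DEBUG = ENV != 'production'
SECRET_KEY = os.environ.get('SECRET_KEY', 'unsafe-dev-key')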
This is my solution, with different environments for dev, test and prod
import socket
[...]
DEV_PC = 'PC059'
host_name = socket.gethostname()
if host_name == DEV_PC:
    #do something
    pass
elif [...]
