Problems debugging Django `collectstatic` in a cloudControl deployment - python

I have a Django application deployed to cloudControl. Configuration is standard and the push/deploy happens without (apparent) errors.
But the collectstatic step is not being executed: it fails silently (I see no -----> Collecting static files message). After the deploy, the static folder for the application is empty, so the application keeps returning 500 Server Errors.
I can work around it by changing the Procfile, but that is not consistent either:
web: python manage.py collectstatic --noinput; gunicorn app.wsgi:application --config gunicorn_cnf.py --bind 0.0.0.0:${PORT:-5000}
collectstatic works as it should locally, and if I run cctrlapp app/deployment run "python manage.py collectstatic --noinput" no errors are shown either:
669 static files copied to '/srv/www/staticfiles/static', 669 post-processed.
But /srv/www/staticfiles/static is empty.
How can I know why collectstatic is not being executed in the push phase?

I've been able to debug the problem using a custom python buildpack, so here is the answer for future reference.
The problem was in the settings.py file. The first thing I do in this file is to check whether we are in a cloudControl environment or in a local environment. I do it by looking for the CRED_FILE environment variable (not so different from what is suggested): if the variable is not found, I load a local JSON file that mimics that credentials variable for development:
try:
    cred_file = os.environ['CRED_FILE']
    DEBUG = False
except KeyError:
    cred_file = os.path.join(BASE_DIR, 'creds.json')
    DEBUG = True
Once I know the environment, I can have different INSTALLED_APPS (requirements.txt files are slightly different in production and development, too) or change some settings.
Now the bad news: in the push phase there is no CRED_FILE available.
So I was trying to load apps that were not installed (because they were only in the development requirements file, like coverage or django-debug-toolbar) or use credentials that were not set (creds.json is, of course, not uploaded to the repository: only a TXT with dummy values is uploaded as a reference). That's why collectstatic was failing silently in the push phase.
Here is my solution (it will work as long as you have a dummy credentials file in your repo):
try:
    cred_file = os.environ['CRED_FILE']
    DEBUG = False
except KeyError:
    if os.path.exists(os.path.join(BASE_DIR, 'creds.json')):
        cred_file = os.path.join(BASE_DIR, 'creds.json')
        DEBUG = True
    else:
        cred_file = os.path.join(BASE_DIR, 'creds.json.txt')
        DEBUG = False
Credentials are not used by collectstatic, so you can have anything in the creds.json.txt file. Not very clean, but it works as expected now.
EDIT
As pointed out by @pst in a comment, there is an environment variable that tells you whether the buildpack is running, so we could use that one too to load the desired credentials and set DEBUG.
if 'CRED_FILE' in os.environ:
    cred_file = os.environ['CRED_FILE']
    DEBUG = False
elif 'BUILDPACK_RUNNING' in os.environ:
    cred_file = os.path.join(BASE_DIR, 'creds.json.txt')
    DEBUG = False
else:
    cred_file = os.path.join(BASE_DIR, 'creds.json')
    DEBUG = True
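For reference, here is a minimal sketch of how the selected cred_file might then be consumed further down in settings.py. The key names are assumptions for illustration; adjust them to whatever your creds.json actually contains:
import json

with open(cred_file) as f:
    creds = json.load(f)

# Example only: pull database settings out of the credentials dict.
# The key names below are placeholders, not cloudControl's real structure.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': creds.get('DATABASE_NAME', ''),
        'USER': creds.get('DATABASE_USER', ''),
        'PASSWORD': creds.get('DATABASE_PASSWORD', ''),
        'HOST': creds.get('DATABASE_HOST', ''),
        'PORT': creds.get('DATABASE_PORT', ''),
    }
}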

Related

Confusion regarding Django and SECRET_KEY

I recently finished my app and I am ready to deploy it, but I don't understand how to set the application's SECRET_KEY. While trying to change my database from sqlite to postgresql, I get the following error:
raise KeyError(key) from None
KeyError: 'SECRET_KEY'
development.py
from nurs_course.settings.common import *
ALLOWED_HOSTS = ['0.0.0.0', 'localhost']
SECRET_KEY = '9t*re^fdqd%-o_&zsu25(!#kcbk*k=6vebh(d*9r)+j8w%7ci1'
DEBUG = True
production.py
from nurs_course.settings.common import *
DEBUG = False
SECRET_KEY = os.environ['SECRET_KEY']
# SECURITY WARNING: update this when you have the production host
ALLOWED_HOSTS = ['0.0.0.0', 'localhost']
common.py has all the other settings required. I use Windows OS w/ Powershell. I've been stuck on this for a bit and I am just unsure how to set the SECRET_KEY properly. Any help would be appreciated!
As given here.
If you're using a virtual environment, you might want to activate it and run this code:
export SECRET_KEY='9t*re^fdqd%-o_&zsu25(!#kcbk*k=6vebh(d*9r)+j8w%7ci1'
After that run python manage.py shell --settings=entri.settings.prod
You have to export SECRET_KEY in the following way,
export SECRET_KEY="somesecretvalue"
If you are using Python 2.x, try:
os.getenv('SECRET_KEY')
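(On Windows with PowerShell, as in the question, export is not available; the equivalent is $env:SECRET_KEY = "somesecretvalue".) For completeness, here is a minimal sketch, not taken from the answers above, of reading the key in production.py with a clearer failure mode than a bare KeyError:
import os
from django.core.exceptions import ImproperlyConfigured

# read the key from the environment and fail with an explicit message if it is missing
SECRET_KEY = os.environ.get('SECRET_KEY')
if not SECRET_KEY:
    raise ImproperlyConfigured(
        'Set the SECRET_KEY environment variable before loading the production settings.'
    )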

Django code changes not reflected without restart

I used python manage.py runserver to start the Django server locally. I noticed that changes to the HTML code are not reflected unless I restart the server. Is that normal? Is it possible to see the changes without restarting the server?
Update:
I saw that I am in the production env, so DEBUG is False. I am wondering how I can change to development mode?
It is always recommended to create local settings so you can work in a "development environment": keep a settings.py where you set all the configuration for your production server, always with DEBUG=False (never set DEBUG=True in production).
Additionally, you can create a local_settings.py where you override only those variables that need to change for your development environment, like the DEBUG value. So your local_settings.py can contain only this:
# local_settings.py
DEBUG=True
And in your settings.py add this at the end:
# settings.py
try:
    from local_settings import *
except ImportError:
    pass
This will override those variables with the values from local_settings when you run the development server.
Make sure you don't push this file to your server (if you're using git, add it to your .gitignore file).

How to deploy / migrate an existing django app / project to a production server on Heroku?

I have a basic django app (Newsdiffs) that runs just fine at localhost:8000 with python website/manage.py runserver but I'd like to migrate it to Heroku and I can't figure out what my next step is.
I thought getting it running locally would translate to running it on Heroku, but I'm realizing that python website/manage.py runserver is launching the dev settings and I'm not sure how to tell it to use the main settings.
All that is in my Procfile is this:
web: python website/manage.py runserver
Locally, that works fine, though it launches it at http://127.0.0.1:8000/ which is probably not what I want on Heroku. So how do I figure out where to set the hostname and port? I don't see either in the app anyplace.
I have just drawn this list for myself two days ago.
It was put together after having followed the steps described in Heroku's help pages for python.
It's by no means definitive nor perfect, and it will change, but it's a valid trace, since I was able to put the site online.
Some issues remain to be checked thoroughly, e.g. the media/ directory where files are uploaded should/could live outside your project for security reasons (it works now, but I have noticed that if the dyno sleeps the uploaded files are no longer reachable/displayed by the templates).
The same goes for the staticfiles/ directory (although this one seems to work fine).
Also, you might want to set django's debug mode to false.
So here it is:
My first steps to deploy an EXISTING django application to Heroku
ASSUMPTIONS:
a) your django project is in a virtual environment already
b) you have already collected all your project's required packages with
pip freeze > requirements.txt
and committed it to git
git add requirements.txt
git commit -m 'my prj requirements'
0) Activate your project's virtual environment
workon xyz #using virtualenvwrapper
then go to your django project's directory (DPD for short) if not already taken there
cd ~/prj/xyz (or cdproject with virtualenvwrapper if setup properly)
and create a new git branch for heroku twiddling to prevent messing things up
git checkout -b he
1) Create the app on heroku
heroku create xyz
that also adds heroku as a remote of your repo
2) Add the needed packages to requirements.txt
vi requirements.txt
add
dj-database-url==0.3.0
django-postgrespool==0.3.0
gunicorn==19.3.0
psycopg2==2.6
django-toolbelt==0.0.1
static3==0.5.1
whitenoise==2.0.3
3) Install all dependencies in the local venv
pip install -r requirements.txt --allow-all-external
4) Setup the heroku django settings
cd xyz
create a copy
cp settings.py settings_heroku.py
and edit it
vi settings_heroku.py
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'), )
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
replace django's std db cfg with
DATABASES['default'] = dj_database_url.config()
DATABASES['default']['ENGINE'] = 'django_postgrespool'
and
WSGI_APPLICATION = 'xyz.wsgi_heroku.application'
5) Configure the necessary environment variables (heroku configs)
edit the .env file
vi .env
e.g.
DJANGO_SECRET_KEY=whatever
EMAIL_HOST_USER=youruser@gmail.com
EMAIL_HOST_PASSWORD=whateveritis
and/or set them manually if needed (in my case .env had no effect, wasn't loaded apparently, and had to set the vars manually for now)
heroku config:set DJANGO_SECRET_KEY=whatever
heroku config:set EMAIL_HOST_USER=youruser@gmail.com
heroku config:set EMAIL_HOST_PASSWORD=whateveritis
6) Create a separate wsgi file for heroku
cd xyz
cp wsgi.py wsgi_heroku.py
and edit it to make it point to the right settings
vi wsgi_heroku.py
import os
from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "xyz.settings_heroku")
application = get_wsgi_application()
application = DjangoWhiteNoise(application)
7) Make sure all the templates use
{% load staticfiles %}
8) Define the Procfile file so that it points to the right wsgi
e.g.
cd ~/prj/xyz (DPD)
vi Procfile
add
web: gunicorn xyz.wsgi_heroku --log-file -
9) Collect all static content into DPD/staticfiles/
locally, make sure django points to the right wsgi settings
export WSGI_APPLICATION=xyz.wsgi_heroku.application
python manage.py collectstatic
10) add the changes to the local git repo (he branch)
git add --all .
git commit -m 'first 4 heroku'
11) check the whole thing works locally
heroku local # in heroku's help they also add `web`, not needed?!
12) push your code to heroku
git push heroku he:master
13) make sure an instance of the app is running
heroku ps:scale web=1
14) create the tables on the heroku DB
heroku run python manage.py migrate
Note: if you see a message that says, “You just installed Django’s auth system, which means you don’t have any superusers defined. Would you like to create one now?”, type no.
15) add the superuser to the heroku DB
heroku run bash
python manage.py createsuperuser
and fill in the details, as usual
16) Populate the DB with the necessary fixtures
heroku run python manage.py loaddata yourfile.json
17) Visit the website page on heroku's webserver
heroku open
or go to
https://xyz.herokuapp.com/
and the admin
https://xyz.herokuapp.com/admin/
and the DB
https://xyz.herokuapp.com/db
Useful commands:
View the app's logs
heroku logs [--tail]
List add-ons deployed
heroku addons
and use one:
heroku addons:open <add-on-name>
Run a command on heroku (the remote env, where you are deploying)
heroku run python manage.py shell
heroku run bash
Set a config var on Heroku
heroku config:set VARNAME=whatever
View the config vars that are set (including the DB's)
heroku config
View postgres DB details
heroku pg
If you know some python and have a lot of experience building web apps in other languages but don't totally understand where Heroku fits, I highly recommend Discover Flask, which patched a lot of the holes in my understanding of how these pieces all fit together.
Some of the things that I worked out:
you really do need an isolated virtual environment if you're going to deploy to Heroku, because Heroku installs Python modules from the requirements.txt file.
Gunicorn is a web server, and you definitely need to run your app under Gunicorn or it won't run on Heroku.
The "Procfile" doesn't just give the command you use to run the app locally. And Heroku requires it. So if you've got an app that was built to run on Heroku and it doesn't include a Procfile, they left something out.
You don't tell Heroku what your hostname is. When you run heroku create it should tell you what your domain name is going to be. And every time you run git push heroku master (or whatever branch you're pushing, maybe it isn't master), Heroku will (try to) restart your app.
Heroku doesn't support sqlite. You have to run your Production DB in Postgres.
This doesn't directly answer my question, but it does fill in some of the missing pieces that were making it hard for me to even ask the right question. RTFM notwithstanding. :)

Deploying a local django app using openshift

I've built a webapp using django. In order to host it I'm trying to use openshift but am having difficulty in getting anything working. There seems to be a lack of step by steps for this. So far I have git working fine, the app works on the local dev environment and I've successfully created an app on openshift.
Following the URL on openshift once created I just get the standard page of "Welcome to your Openshift App".
I've followed this https://developers.openshift.com/en/python-getting-started.html#step1 to try changing the wsgi.py file. Changed it to hello world, pushed it and yet I still get the openshift default page.
Is there a good comprehensive resource anywhere for getting local Django apps up and running on Openshift? Most of what I can find on google are just example apps which aren't that useful as I already have mine built.
Edit: Remember this is a platform-dependent answer and, since the OpenShift platform serving Django may change, this answer could become invalid. As of Apr 1 2016, this answer remains valid in its entirety.
This happened to me many times and, since I had to deploy at least 5 applications, I had to create my own lifecycle:
Don't use the Django cartridge, but the python 2.7 cartridge. Using the Django cartridge and trying to update the Django version brings many headaches, which you avoid if you do it from scratch.
Clone your repository via git. You will get yourproject and...
# git clone yourrepo@rhcloud.com:app.git yourproject  <- replace it with your actual openshift repo address
yourproject/
+---wsgi.py
+---setup.py
*---.openshift/ (with its contents - I omit them now)
Make a virtualenv for your brand-new repository cloned into your local machine. Activate it and install Django via pip, plus all the dependencies you will need (e.g. a new Pillow package, MySQL database package, ...). Create a django project there. Say, yourdjproject. Create, alongside, a wsgi/static directory with an empty dummy file (e.g. .gitkeep - the name is just a convention: you can use any name you want).
#assuming you have virtualenv-wrapper installed and set-up
mkvirtualenv myenvironment
workon myenvironment
pip install Django[==x.y[.z]] #select your version; optional.
#creating the project inside the git repository
cd path/to/yourproject/
django-admin.py startproject yourdjproject .
#creating dummy wsgi/static directory for collectstatic
mkdir -p wsgi/static
touch wsgi/static/.gitkeep
Create a django app there. Say, yourapp. Include it in your project.
You will have something like this (django 1.7):
yourproject/
+---wsgi/
| +---static/
| +---.gitkeep
+---wsgi.py
+---setup.py
+---.openshift/ (with its contents - I omit them now)
+---yourdjproject/
| +----__init__.py
| +----urls.py
| +----settings.py
| +----wsgi.py
+---yourapp/
| +----__init__.py
| +----models.py
| +----views.py
| +----tests.py
| +----migrations/
|   +----__init__.py
Set up your django application as you always would (I will not detail that here). Remember to include all the dependencies you installed in the setup.py file accordingly (this answer is not the place to describe why, but setup.py is the package installer and openshift uses it to reinstall your app on each deploy, so keep it up to date with your dependencies).
Create your migrations for your models.
Edit the openshift-given WSGI script as follows. You will be including the django WSGI application AFTER including the virtualenv (openshift creates one for python cartridges), so the pythonpath will be properly set up.
#!/usr/bin/python
import os

virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
    execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
    pass

from yourdjproject.wsgi import application
Edit the hooks in .openshift/action_hooks to automatically perform db synchronization and media management:
build hook
#!/bin/bash
# this is .openshift/action_hooks/build
# remember to make it +x so openshift can run it.
if [ ! -d ${OPENSHIFT_DATA_DIR}media ]; then
    mkdir -p ${OPENSHIFT_DATA_DIR}media
fi
ln -snf ${OPENSHIFT_DATA_DIR}media $OPENSHIFT_REPO_DIR/wsgi/static/media
######################### end of file
deploy hook
#!/bin/bash
#this one is the deploy hook .openshift/action_hooks/deploy
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
cd $OPENSHIFT_REPO_DIR
echo "Executing 'python manage.py migrate'"
python manage.py migrate
echo "Executing 'python manage.py collectstatic --noinput'"
python manage.py collectstatic --noinput
########################### end of file
Now you have the wsgi ready, pointing to the django wsgi by import, and you have your scripts running. It is time to consider the locations for the static and media files we used in those scripts. Edit your Django settings to tell Django where you want such files:
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'templates'),)
Create a sample view, a sample model, a sample migration, and PUSH everything.
Edit: Remember to put the right settings to consider both environments, so you can test and run in a local environment AND in openshift (usually this would involve having a local_settings.py, optionally imported if the file exists, but I will omit that part and put everything in the same file). Please read this file carefully, since things like yourlocaldbname are values you MUST set accordingly:
"""
Django settings for yourdjproject project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
ON_OPENSHIFT = False
if 'OPENSHIFT_REPO_DIR' in os.environ:
    ON_OPENSHIFT = True
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '60e32dn-za#y=x!551tditnset(o9b#2bkh1)b$hn&0$ec5-j7'
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'yourapp',
    # more apps here
)
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'yourdjproject.urls'
WSGI_APPLICATION = 'yourdjproject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
if ON_OPENSHIFT:
    DEBUG = True
    TEMPLATE_DEBUG = False
    ALLOWED_HOSTS = ['*']
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'youropenshiftgenerateddatabasename',
            'USER': os.getenv('OPENSHIFT_MYSQL_DB_USERNAME'),
            'PASSWORD': os.getenv('OPENSHIFT_MYSQL_DB_PASSWORD'),
            'HOST': os.getenv('OPENSHIFT_MYSQL_DB_HOST'),
            'PORT': os.getenv('OPENSHIFT_MYSQL_DB_PORT'),
        }
    }
else:
    DEBUG = True
    TEMPLATE_DEBUG = True
    ALLOWED_HOSTS = []
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',  # if you want to use MySQL
            'NAME': 'yourlocaldbname',
            'USER': 'yourlocalusername',
            'PASSWORD': 'yourlocaluserpassword',
            'HOST': 'yourlocaldbhost',
            'PORT': '3306',  # this will be the case for MySQL
        }
    }
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'yr-LC'
TIME_ZONE = 'Your/Timezone/Here'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'wsgi', 'static', 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'static'),)
TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'yourdjproject', 'templates'),)
Git add, commit, push, enjoy.
cd path/to/yourproject/
git add .
git commit -m "Your Message"
git push origin master # THIS COMMAND WILL TAKE LONG
# git enjoy
Your sample Django app is almost ready to go! But if your application has external dependencies it will blow up for no apparent reason. This is why I told you to develop a simple application first. Now it is time to make your dependencies work.
[untested!] You can edit the deploy hook and add a command after the command cd $OPENSHIFT_REPO_DIR, like this: pip install -r requirements.txt, assuming the requirements.txt file exists in your project. pip should exist in your virtualenv, but if it does not, you can see the next solution.
Alternatively, setup.py is an approach already provided by OpenShift. What I did many times -assuming the requirements.txt file exists- is the following (a sketch of it appears after this list):
Open that file and read all its lines.
For each line, if it contains a #, remove the # and everything after it.
Strip leading and trailing whitespace.
Discard empty lines, and keep the result (i.e. the remaining lines) as a list.
Assign that result to the install_requires= keyword argument in the setup call in the setup.py file.
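Here is a minimal sketch of those steps, assuming requirements.txt sits next to setup.py (the names other than install_requires are only illustrative):
from setuptools import setup

def read_requirements(path='requirements.txt'):
    requirements = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()  # drop comments and surrounding whitespace
            if line:                              # discard empty lines
                requirements.append(line)
    return requirements

setup(
    name='yourproject',
    version='1.0',
    description='OpenShift App',
    install_requires=read_requirements(),
)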
I'm sorry I did not include this in the tutorial before! But you need to actually install Django on the server. Perhaps an obvious suggestion, and every Python developer might know that beforehand. But seizing this opportunity I remark: include the appropriate Django dependency in requirements.txt (or setup.py, depending on whether or not you use a requirements.txt file), as you would include any other dependency.
This should help you deploy a Django application; it took me a lot of time to standardize the process. Enjoy it, and don't hesitate to contact me via a comment if something goes wrong.
Edit (for those with the same problem who don't expect to find the answer in this post's comments): Remember that if you edit the build or deploy hook files under Windows and push them, they will arrive on the server with 0644 permissions, since Windows does not support the permission scheme Unix has, and has no way to assign permissions since these files do not have any extension. You will notice this because your scripts will not be executed when deploying. So try to deploy those files only from Unix-based systems.
Edit 2: You can use git hooks (e.g. pre_commit) to set permissions for certain files, like pipeline scripts (build, deploy, ...). See the comments by @StijndeWitt and @OliverBurdekin in this answer, and also this question for more details.
1) Step 1: install Rubygems
Ubuntu - https://rubygems.org/pages/download
Windows - https://forwardhq.com/support/installing-ruby-windows
$ gem
or
C:\Windows\System32>gem
RubyGems is a sophisticated package manager for Ruby. This is a
basic help message containing pointers to more information……..
2) Step 2:
$ gem install rhc
Or
C:\Windows\System32> gem install rhc
3) $ rhc
Or
C:\Windows\System32> rhc
Usage: rhc [--help] [--version] [--debug] <command> [<args>]
Command line interface for OpenShift.
4) $ rhc app create -a mysite -t python-2.7
Or
C:\Windows\System32> rhc app create -a mysite -t python-2.7
# Here mysite would be the sitename of your choice
#It will ask you to enter your openshift account id and password
Login to openshift.redhat.com: Enter your openshift id here
Password : **********
Application Options
---------------------
Domain: mytutorials
Cartridges: python-2.7
Gear Size: Default
Scaling: no
......
......
Your application 'mysite' is now available.
URL : http://mysite.....................
SSH to : 39394949......................
Git remote: ssh://......................
Run 'rhc show-app mysite' for more details about your app.
5) Clone your site
$ rhc git-clone mysite
Or
D:\> rhc git-clone mysite
.......................
Your application Git repository has been cloned to "D:\mysite"
6) #”D:\mysite>” is the location we cloned.
D:\mysite> git remote add upstream -m master git://github.com/rancavil/django-openshift-quickstart.git
D:\mysite> git pull -s recursive -X theirs upstream master
7) D:\mysite> git push
remote : ................
remote: Django application credentials
user: admin
xertefkefkt
remote: Git Post-Receive Result: success
.............
8) D:\mysite>virtualenv venv --no-site-packages
D:\mysite>venv\Scripts\activate.bat
<venv> D:\mysite> python setup.py install
creating .....
Searching for Django<=1.6
.............
Finished processing dependencies for mysite==1.0
9) Change admin password
<venv> D:\mysite\wsgi\openshift> python manage.py changepassword admin
password:
...
Password changed successfully for user 'admin'
<venv> D:\mysite\wsgi\openshift> python manage.py runserver
Validating models….
10) Git add
<venv> D:\mysite> git add .
<venv> D:\mysite> git commit -am "activating the app on Django / Openshift"
.......
<venv> D:\mysite> git push
#----------------------------------------------------------------------------------
#-----------Edit your setup.py in mysite with packages you want to install----------
from setuptools import setup
import os

# Put here required packages
packages = ['Django<=1.6', 'lxml', 'beautifulsoup4', 'openpyxl']

if 'REDISCLOUD_URL' in os.environ and 'REDISCLOUD_PORT' in os.environ and 'REDISCLOUD_PASSWORD' in os.environ:
    packages.append('django-redis-cache')
    packages.append('hiredis')

setup(name='mysite',
      version='1.0',
      description='OpenShift App',
      author='Tanveer Alam',
      author_email='xyz@gmail.com',
      url='https://pypi.python.org/pypi',
      install_requires=packages,
      )
These are the steps that worked for me:
I've done some steps manually, but you can automate them later to be done with each push command.
1. Create a new django app with python-3.3 from the website wizard
2. Add a mysql cartridge to the app (my option is mysql)
3. git clone the created app to local
4. Add requirements.txt to the root folder
5. Add myapp to the wsgi folder
6. Modify application to refer to myapp
7. Execute git add, commit, push
8. Browse the app and debug errors with "rhc tail myapp"
9. Connect to the ssh console:
rhc ssh myapp
10. Execute this:
source $OPENSHIFT_HOMEDIR/python/virtenv/venv/bin/activate
11. Install missing packages, if any
12. Go to the app directory:
cd ~/app-root/runtime/repo/wsgi/app_name
13. Do the migration with:
python manage.py migrate
14. Create a super user:
python manage.py createsuperuser
15. Restart the app
This was helpful for me, take a look:
http://what-i-learnt-today-blog.blogspot.in/2014/05/host-django-application-in-openshift-in.html

Django: How to manage development and production settings?

I have been developing a basic app. Now at the deployment stage it has become clear that I need both local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
The DJANGO_SETTINGS_MODULE environment variable controls which settings file Django will load.
You therefore create separate configuration files for your respective environments (note that they can of course both import * from a separate, "shared settings" file), and use DJANGO_SETTINGS_MODULE to control which one to use.
Here's how:
As noted in the Django documentation:
The value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g. mysite.settings. Note that the settings module should be on the Python import search path.
So, let's assume you created myapp/production_settings.py and myapp/test_settings.py in your source repository.
In that case, you'd respectively set DJANGO_SETTINGS_MODULE=myapp.production_settings to use the former and DJANGO_SETTINGS_MODULE=myapp.test_settings to use the latter.
From here on out, the problem boils down to setting the DJANGO_SETTINGS_MODULE environment variable.
Setting DJANGO_SETTINGS_MODULE using a script or a shell
You can then use a bootstrap script or a process manager to load the correct settings (by setting the environment), or just run it from your shell before starting Django: export DJANGO_SETTINGS_MODULE=myapp.production_settings.
Note that you can run this export at any time from a shell — it does not need to live in your .bashrc or anything.
Setting DJANGO_SETTINGS_MODULE using a Process Manager
If you're not fond of writing a bootstrap script that sets the environment (and there are very good reasons to feel that way!), I would recommend using a process manager:
Supervisor lets you pass environment variables to managed processes using a program's environment configuration key.
Honcho (a pure-Python equivalent of Ruby's Foreman) lets you define environment variables in an "environment" (.env) file.
Finally, note that you can take advantage of the PYTHONPATH variable to store the settings in a completely different location (e.g. on a production server, storing them in /etc/). This allows for separating configuration from application files. You may or may not want that, it depends on how your app is structured.
By default use production settings, but create a file called settings_dev.py in the same folder as your settings.py file. Add overrides there, such as DEBUG=True.
On the computer that will be used for development, add this to your ~/.bashrc file:
export DJANGO_DEVELOPMENT=true
Or turn it on one time by prefixing your command:
DJANGO_DEVELOPMENT=true python manage.py runserver
At the bottom of your settings.py file, add the following.
# Override production variables if DJANGO_DEVELOPMENT env variable is true
if os.getenv('DJANGO_DEVELOPMENT') == 'true':
    from settings_dev import *  # or specific overrides
(Note that importing * should generally be avoided in Python)
By default the production servers will not override anything. Done!
Compared to the other answers, this one is simpler because it doesn't require updating PYTHONPATH, or setting DJANGO_SETTINGS_MODULE which only allows you to work on one django project at a time.
This is how I did it in 6 easy steps:
Create a folder inside your project directory and name it settings.
Project structure:
myproject/
    myapp1/
    myapp2/
    myproject/
        settings/
Create four python files inside the settings directory, namely __init__.py, base.py, dev.py and prod.py
Settings files:
settings/
    __init__.py
    base.py
    prod.py
    dev.py
Open __init__.py and fill it with the following content:
__init__.py:
import os
from .base import *

# you need to set "myproject = 'prod'" as an environment variable
# in your OS (on which your website is hosted)
if os.environ['myproject'] == 'prod':
    from .prod import *
else:
    from .dev import *
Open base.py and fill it with all the common settings (that will be used in both production and development), for example:
base.py:
import os
...
INSTALLED_APPS = [...]
MIDDLEWARE = [...]
TEMPLATES = [{...}]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_ROOT = os.path.join(BASE_DIR, '/path/')
MEDIA_URL = '/path/'
Open dev.py and include the settings that are development specific, for example:
dev.py:
DEBUG = True
ALLOWED_HOSTS = ['localhost']
...
Open prod.py and include the settings that are production specific, for example:
prod.py:
DEBUG = False
ALLOWED_HOSTS = ['www.example.com']
LOGGING = [...]
...
Update
As ANDRESMA suggested in the comments, update BASE_DIR in your base.py file to reflect the updated path by adding another .parent to the end. For example:
BASE_DIR = Path(__file__).resolve().parent.parent.parent
I usually have one settings file per environment, and a shared settings file:
/myproject/
    settings.production.py
    settings.development.py
    shared_settings.py
Each of my environment files has:
try:
from shared_settings import *
except ImportError:
pass
This allows me to override shared settings if necessary (by adding the modifications below that stanza).
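For instance, settings.development.py might then look like this (a sketch; the override values below are only illustrative):
# settings.development.py
try:
    from shared_settings import *
except ImportError:
    pass

# development-only overrides go below the stanza
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']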
I then select which settings files to use by linking it in to settings.py:
ln -s settings.development.py settings.py
I use the awesome django-configurations, and all the settings are stored in my settings.py:
import os
from configurations import Configuration

class Base(Configuration):
    # all the base settings here...
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    ...

class Develop(Base):
    # development settings here...
    DEBUG = True
    ...

class Production(Base):
    # production settings here...
    DEBUG = False
To configure the Django project I just followed the docs.
Create multiple settings*.py files, extrapolating the variables that need to change per environment. Then at the end of your master settings.py file:
try:
    from settings_dev import *
except ImportError:
    pass
You keep the separate settings_* files for each stage.
At the top of your settings_dev.py file, add this:
import sys
globals().update(vars(sys.modules['settings']))
To import variables that you need to modify.
This wiki entry has more ideas on how to split your settings.
Here is the approach we use:
a settings module to split settings into multiple files for readability ;
a .env.json file to store credentials and parameters that we want excluded from our git repository, or that are environment specific ;
an env.py file to read the .env.json file
Considering the following structure :
...
.env.json             # the file containing all specific credentials and parameters
.gitignore            # the .gitignore file to exclude `.env.json`
project_name/         # project dir (the one which django-admin.py creates)
    accounts/         # project's apps
        __init__.py
        ...
    ...
    env.py            # the file to load credentials
    settings/
        __init__.py   # main settings file
        database.py   # database conf
        storage.py    # storage conf
    ...
venv                  # virtualenv
...
With .env.json like:
{
    "debug": false,
    "allowed_hosts": ["mydomain.com"],
    "django_secret_key": "my_very_long_secret_key",
    "db_password": "my_db_password",
    "db_name": "my_db_name",
    "db_user": "my_db_user",
    "db_host": "my_db_host"
}
And project_name/env.py :
import json
import os

def get_credentials():
    env_file_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    with open(os.path.join(env_file_dir, '.env.json'), 'r') as f:
        creds = json.loads(f.read())
    return creds

credentials = get_credentials()
We can have the following settings:
# project_name/settings/__init__.py
from project_name.env import credentials
from project_name.settings.database import *
from project_name.settings.storage import *

...

SECRET_KEY = credentials.get('django_secret_key')
DEBUG = credentials.get('debug')
ALLOWED_HOSTS = credentials.get('allowed_hosts', [])

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    ...
]

if DEBUG:
    INSTALLED_APPS += ['debug_toolbar']

...
# project_name/settings/database.py
from project_name.env import credentials

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': credentials.get('db_name', ''),
        'USER': credentials.get('db_user', ''),
        'HOST': credentials.get('db_host', ''),
        'PASSWORD': credentials.get('db_password', ''),
        'PORT': '5432',
    }
}
The benefits of this solution are:
user specific credentials and configurations for local development without modifying the git repository;
environment specific configuration: you can have, for example, three different environments with three different .env.json files, like dev, staging and production;
credentials are not in the repository
I hope this helps, just let me know if you see any caveats with this solution.
I use the following file structure:
project/
    ...
    settings/
        shared.py
        local.py
        prod.py
        __init__.py -> local.py
So __init__.py is a link (ln in unix or mklink in windows) to local.py, or it can point to prod.py, so the configuration stays inside the project.settings module, clean and organized. If you want to use a particular config, you can set the environment variable DJANGO_SETTINGS_MODULE to project.settings.prod when you need to run a command for the production environment.
In the files prod.py and local.py:
from .shared import *

DATABASES = {
    ...
}
and the shared.py file keeps as global without specific configs.
Use settings.py for production. In the same directory create settings_dev.py for overrides.
# settings_dev.py
from .settings import *

DEBUG = True
On a dev machine run your Django app with:
DJANGO_SETTINGS_MODULE=<your_app_name>.settings_dev python3 manage.py runserver
On a prod machine run as if you just had settings.py and nothing else.
ADVANTAGES
settings.py (used for production) is completely agnostic to the fact that any other environments even exist.
To see the difference between prod and dev you just look into a single location - settings_dev.py. No need to gather configurations scattered across settings_prod.py, settings_dev.py and settings_shared.py.
If someone adds a setting to your prod config after troubleshooting a production issue you can rest assured that it will appear in your dev config as well (unless explicitly overridden). Thus the divergence between different config files will be minimized.
Building off cs01's answer:
if you're having problems with the environment variable, set its value to a string (e.g. I did DJANGO_DEVELOPMENT="true").
I also changed cs01's file workflow as follows:
# settings.py
import os

if os.environ.get('DJANGO_DEVELOPMENT') is not None:
    from settings_dev import *
else:
    from settings_production import *
#settings_dev.py
development settings go here
#settings_production.py
production settings go here
This way, Django doesn't have to read through the entirety of a settings file before running the appropriate settings file. This solution comes in handy if your production file needs stuff that's only on your production server.
Note: in Python 3, imported files need to have a . appended (e.g. from .settings_dev import *)
If you want to keep 1 settings file, and your development operating system is different than your production operating system, you can put this at the bottom of your settings.py:
from sys import platform

if platform == "linux" or platform == "linux2":
    # linux
    # some special setting here for when I'm on my prod server
    pass
elif platform == "darwin":
    # OS X
    # some special setting here for when I'm developing on my mac
    pass
elif platform == "win32":
    # Windows...
    # some special setting here for when I'm developing on my pc
    pass
Read more: How do I check the operating system in Python?
You want to be able to switch settings, secrets, environment variables and so on based on the git branch you are in. Relying on different settings files is okay, but in an enterprise situation you would like to hide all your sensitive information from the repo. It is not a security best practice to expose the environment variables and secrets of all environments (develop, staging, production, qa etc.) to all the developers. The following should achieve these two goals:
isolation of settings as per their environment of deployment
hide sensitive information from git repo
My run.sh
#!/bin/bash

# default environment
export DJANGO_ENVIRONMENT="develop"

BRANCH=$(git rev-parse --abbrev-ref HEAD)

if [ "$BRANCH" == "main" ]; then
    export DJANGO_ENVIRONMENT="production"
elif [[ "$BRANCH" == "release/"* ]]; then
    export DJANGO_ENVIRONMENT="staging"
else
    # for all other branches (feature, support, hotfix etc.)
    echo ''
fi
echo "
BRANCH: $BRANCH
ENVIRONMENT: $DJANGO_ENVIRONMENT
"
python3 myapp/manage.py makemigrations
python3 myapp/manage.py migrate --noinput
python3 myapp/manage.py runserver 0:8000
My vars.py (or secrets.py, or whatever name) in the same folder as Django's settings.py:
vars = {
    'develop': {
        'environment': 'develop',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'True'
    },
    'production': {
        'environment': 'production',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'False'
    },
    'staging': {
        'environment': 'staging',
        'SECRET_KEY': 'mysecretkey',
        'DEBUG': 'True'
    }
}
then in settings.py just do the following
from . import vars  # contains the environment specific vars
import os

DJANGO_ENVIRONMENT = os.getenv("DJANGO_ENVIRONMENT")  # declared in run.sh
envs = vars.vars[DJANGO_ENVIRONMENT]

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = envs["SECRET_KEY"]

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = envs["DEBUG"] == "True"  # the values in vars.py are strings
Let developers have their own vars.py on their local machine, but during deployment your CI/CD pipeline can insert the actual vars.py with the actual values, or some script should insert it. If you are using GitLab CI/CD then you can store the entire vars.py as an environment variable.
This seems to have been answered, however a method which I use, combined with version control, is the following:
Set up an env.py file in the same directory as settings.py in my local development environment, and add it to .gitignore:
env.py:
#!/usr/bin/python
DJANGO_ENV = True
ALLOWED_HOSTS = ['127.0.0.1', 'dev.mywebsite.com']
.gitignore:
mywebsite/env.py
settings.py:
if os.path.exists(os.getcwd() + '/env.py'):
    # env.py is excluded using the .gitignore file - when moving to production
    # we can automatically set debug mode to off:
    from env import *
else:
    DJANGO_ENV = False
DEBUG = DJANGO_ENV
I just find this works and is far more elegant - with env.py it is easy to see our local environment variables and we can handle all of this without multiple settings.py files or the like. This method allows for all sorts of local environment variables that we wouldn't want set on our production server. Utilising the .gitignore via version control, we also keep everything seamlessly integrated.
For the problem of settings files, I choose to copy:
Project
|---__init__.py [ write code to copy setting file from subdir to current dir]
|---settings.py (do not commit this file to git)
|---setting1_dir
| |-- settings.py
|---setting2_dir
| |-- settings.py
When you run django, __init__.py will be run. At that time, settings.py in setting1_dir will replace settings.py in Project.
How to choose a different env?
modify __init__.py directly.
make a bash script that modifies __init__.py.
set an env variable in linux, and then let __init__.py read this variable (see the sketch below).
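A minimal sketch of what such an __init__.py could look like, combining the copy step with the env-variable option (the DJANGO_SETTINGS_DIR variable name and its default are assumptions):
# Project/__init__.py (sketch)
import os
import shutil

_here = os.path.dirname(os.path.abspath(__file__))
# pick the settings directory from an env variable, defaulting to setting1_dir
_settings_dir = os.environ.get('DJANGO_SETTINGS_DIR', 'setting1_dir')

# copy the chosen settings file next to this package's settings.py
shutil.copyfile(
    os.path.join(_here, _settings_dir, 'settings.py'),
    os.path.join(_here, 'settings.py'),
)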
Why do it this way?
Because I don't like having so many files in the same directory: too many files will confuse other collaborators and are not great for the IDE (the IDE cannot tell which file we actually use).
If you do not want to see all these details, you can divide the project into two parts:
a small tool of your own, like Spring Initializr, just for setting up your project (doing things like copying files)
your project code
I'm using a different app.yaml file to change the configuration between environments in Google Cloud App Engine.
You can use this to create a proxy connection in your terminal command:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
https://cloud.google.com/sql/docs/sqlserver/connect-admin-proxy#macos-64-bit
File: app.yaml
# [START django_app]
service: development
runtime: python37

env_variables:
  DJANGO_DB_HOST: '/cloudsql/myproject:myregion:myinstance'
  DJANGO_DEBUG: True

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
I create a file named "production" in the working directory on the production machine.
# settings.py
from pathlib import Path

production = Path("production")
DEBUG = False

# if it's dev mode
if not production.is_file():
    INSTALLED_APPS += [
        # apps_in_development_mode,
        # ...
    ]
    DEBUG = True
    # other settings to override the default production settings
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
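A minimal sketch of such a wsgi.py; the module path myproject.settings_production is an assumption, use your own project's settings module:
import os
from django.core.wsgi import get_wsgi_application

# point the production entry point at the production settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings_production')
application = get_wsgi_application()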
What we do here is have a .env file for each environment. This file contains a lot of variables, like ENV=development.
The settings.py file is basically a bunch of os.environ.get(), like ENV = os.environ.get('ENV')
So when you need to access that you can do ENV = settings.ENV.
You would have to have a .env file for your production, testing, development.
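As a rough sketch of this setup (assuming a plain KEY=value file and no python-dotenv), settings.py could load the .env file itself and then read the values:
import os

# load a simple KEY=value .env file into the environment (the path is an assumption)
env_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), '.env')
if os.path.exists(env_path):
    with open(env_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#') and '=' in line:
                key, value = line.split('=', 1)
                os.environ.setdefault(key.strip(), value.strip())

ENV = os.environ.get('ENV')        # e.g. 'development', 'testing' or 'production'
DEBUG = ENV == 'development'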
This is my solution, with different environments for dev, test and prod:
import socket
[...]

DEV_PC = 'PC059'
host_name = socket.gethostname()

if host_name == DEV_PC:
    # do something
    pass
elif [...]
