I know this issue has been discussed before, but I am struggling to find a straightforward explanation of how to approach configuration between local development and a production server.
What I have done so far: I had one my_app_config.py file with machine/scenario (test vs. production) sections I could comment out. I would develop with my local machine path hardcoded, a test database connection string, my test spreadsheet location, etc. When it came time to deploy the code to the server, I would comment out the "test" section and uncomment the "production" section. As you may guess, this is fraught with errors.
I recently adopted the Python ConfigParser library to use .ini files. Now, I have the following lines in my code:
import os
import ConfigParser

config = ConfigParser.RawConfigParser()
config.read(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'settings',
                                         'my_app_config.ini')))
database_connect_string_admin = config.get('Database', 'admin_str')
The problems with this are many...
I need to have the import at the top of every file
The filename my_app_config.ini can't change. So, I rely on comments within the content of the .ini file to know which one I'm dealing with. They are stored in a folder tree so I know which is which.
Notice the path to the config file is defined here, so where the Python file lives in the tree structure dictates whether I get a copy/paste error.
I tried to set environment variables at the beginning of the program, but all the imports for all modules are performed right away at code launch. I was getting "not found" errors left and right.
What I want: to keep all the configuration stored in one place, so it is not easy to lose track of what I am doing. I want an easy way to keep these configuration files (ideally one file or script) under version control (security is a whole other issue, I digress). I want to be able to seamlessly switch contexts (local-test, local-production, serverA-test, serverA-production, serverB-test, serverB-production). My app uses:
my_app_config.ini read by my parser
uwsgi.ini read by the uwsgi application server emperor
web_config.py used by the flask application
nginx.conf symlinked to the web server's configuration
celery configuration
not to mention different paths for everything (ideally handled within the magic config handling genie). I imagine once I figure this out I will be embarrassed it took so long to grasp.
Are environment variables the right tool for what I am trying to do here?
You should try `simple-settings`. It will resolve all your issues. You just set an environment variable:
in development
$ export SIMPLE_SETTINGS=settings.general,settings.development
$ python app.py
in production
$ export SIMPLE_SETTINGS=settings.general,settings.production
$ python app.py
You can keep `development.py` and `production.py` out of the repository for security reasons.
Example
settings/general.py
SIMPLE_CONF = 'simple'
app.py
from simple_settings import settings
print(settings.SIMPLE_CONF)
The documentation describes many more features and benefits.
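For example, a development override might look like this (a hypothetical sketch, not from the docs; simple-settings merges the modules listed in SIMPLE_SETTINGS, with later modules taking precedence, so check the documentation for the exact merge behavior):

settings/development.py

# hypothetical development overrides; listed after settings.general
# in SIMPLE_SETTINGS, so these values win over general.py
SIMPLE_CONF = 'development'
DEBUG = True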
Related
I use python to develop code on my work laptop and then deploy to our server for automation purposes.
I just recently started using git and github through PyCharm in hopes of making the deployment process smoother.
My issue is that I have a config file (YAML) that uses different parameters respective to my build environment (laptop) and production (server). For example, the file path changes.
Is there a git practice that I could implement so that, when either pushing from my laptop or pulling from the server, it will exclude changes to specific parts of a file?
I use .gitignore for files such as pyvenv.cfg, but is there a way to do this within a file?
Another approach I thought of would be to utilize different branches for local and remote specific parameters...
For Example:
The local branch would contain local parameters and the production branch would contain production parameters. In this case I would push first from my local machine to the local branch. Next I would make the necessary changes to the parameters for production (in my situation it is much easier to work on my laptop than through the server), then push to the production branch. However, I have a feeling this is against good practice or simply changes the purpose of branches.
Thank you.
Config files are also a common place to store credentials (e.g. a login/password for the database, an API key for a web service...), and it is generally a good idea not to store those in the repository.
A common practice is to store template files in the repo (e.g. config.yml.sample), to not store the actual config file along with the code (and even add it to .gitignore, if it is in a versioned directory), and to add steps at deployment time to either set up the initial config file or update the existing one; those steps can be manual or scripted. You can back up and version the config separately, if needed.
Another possibility is to take the elements that should be adapted from somewhere else (the environment, for instance) and have entries like user: $APP_DB_USER in your config file. You then provision these values on each machine; e.g. have an env.txt file on your local machine and a different one on your prod server.
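As a sketch of that second approach (assuming PyYAML and a config.yml containing entries like user: $APP_DB_USER; the loader function is mine, not a standard API):

# hypothetical loader that expands $VAR placeholders from the environment
import os
from string import Template

import yaml  # PyYAML

def load_config(path='config.yml'):
    with open(path) as f:
        raw = f.read()
    # safe_substitute fills $APP_DB_USER-style placeholders from os.environ
    # and leaves unknown placeholders untouched
    return yaml.safe_load(Template(raw).safe_substitute(os.environ))

config = load_config()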
I've been exposed to both of these tools, but they seem to serve the same purpose. My question is: are they different, and if so, how?
In my research it seems to me that autoenv is global in scope while dotenv is a bit more application specific. While this seems an advantage in many cases, I wonder if it could also create unforeseen problems.
Second, what would be the pros/cons of using one over the other (or should I use each in different situations)?
I've read through documentation for each, but have been unable to find an article comparing the two. It is relatively recent that I've developed a stronger grasp on environment variables in general so apologies if I missed something obvious in the documentation.
I'm primarily developing web apps with Flask and deploying on Heroku if that would influence my choice.
Thanks in advance.
autoenv is meant for the CLI, to enable environments when you cd into a directory containing a .env file.
E.g. if you need some of the environment variables in your local development environment whenever you cd into the directory, you would use autoenv or the more mature alternative, direnv.
dotenv is used in Python to find a .env file in the running directory or parent directories and load its variables. This is good for services, as they usually don't have a shell running.
So for your heroku deployments you should use dotenv.
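A minimal dotenv sketch (load_dotenv and os.getenv are the library's and stdlib's real entry points; the variable name is just an example):

# loads variables from a .env file found in the current directory (or parents)
import os
from dotenv import load_dotenv

load_dotenv()
database_url = os.getenv('DATABASE_URL')  # example variable name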
If, however, you are putting config vars straight into the Heroku settings, you don't need either; you would simply use os.getenv:
from os import getenv
print(getenv('MY_ENVIRONMENT_VARIABLE'))
I am following a step-by-step tutorial blog on the flask micro framework for python.
I bumped into an issue, when they require me to 'setup' a configuration file in the root of the application folder, so it can easily be accessible if needed.
They called it config.py.
My question is, if the local path to my application is /home/test/application1/, should I create the file inside the ./application1/ directory? What gets me confused in this somewhat obvious question is that I did a quick search for other config.py files in the local directory inside /home/test/application1/, where I found 4 other files. They were in the following directories:
/home/test/application1/flask/lib/python2.7/site-packages/flask/testsuite/config.py
/home/test/application1/flask/lib/python2.7/site-packages/flask/config.py
/home/test/application1/flask/local/lib/python2.7/site-packages/flask/testsuite/config.py
/home/test/application1/flask/local/lib/python2.7/site-packages/flask/config.py
So, should I create a new config.py file in the directory I first mentioned, or should I add some lines to one of the previously created config.py files?
Here is the source of the step-by-step tutorial:
It is at the beginning, right after Configuration.
Unlike other frameworks, Flask does not have a lot of rules; in general, you can implement things in the way that makes sense to you.
But note that the other config.py files that you found are all in the virtual environment, and are all scripts that come with Flask. They have nothing to do with the application configuration.
I wrote the tutorial you are following. In the tutorial I put config.py outside of the application package. This is what I like; I consider the configuration separate from the application. My thinking is that you should be able to run the same application with different configuration files, so that, for example, you can have a production configuration, a testing configuration and a development configuration, all different.
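For illustration, that layout might look like this (a sketch; the option names are common Flask settings, and from_object is Flask's standard way to load such a module):

# config.py, at the project root, outside the application package
CSRF_ENABLED = True
SECRET_KEY = 'you-will-never-guess'  # placeholder; use your own value

# app/__init__.py
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')  # imports the root-level config.py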
I hope this helps.
I have a python/django project that I've set up for development and production using git revision control. I have three settings.py files:
-settings.py (which has dummy variables for potential open source project),
-settings_production.py (for production variables), and
-settings_local.py (to override settings just for my local environment). This file is not tracked by git.
I use this method, which works great:
try:
    from settings_production import *
except ImportError as e:
    print 'Unable to load settings_production.py:', e

try:
    from settings_local import *
except ImportError as e:
    print 'Unable to load settings_local.py:', e
HOWEVER, I want this to be an open source project. I've set up two git remotes, one called 'heroku-production' and one called 'github-opensource'. How can I set it up so that 'heroku-production' includes settings_production.py while 'github-opensource' doesn't, so that I can keep those settings private?
Help! I've looked at most of the resources on the internet, but they don't seem to address this use case. Is this the right way? Is there a better approach?
The dream would be to be able to push my local environment to either heroku-production or github-opensource without having to mess with the settings files.
Note: I've looked at the setup where you use environment variables or don't track the production settings, but that feels overly complicated. I like to see everything in front of me in my local setup. See this method.
I've also looked through all these methods, and they don't quite seem to fit the bill.
There's a very similar question here. One of the answers suggests git submodules, which I would say are the easiest way to go about this. This is a problem for your VCS, not your Python code.
I think using environment variables, as described on the Two Scoops of Django book, is the best way to do this.
I'm following this approach, and I have an application running out of a private GitHub repository in production (with an average of half a million page views per month), staging, and two development environments, and I use a directory structure like this:
MyProject/
    settings/
        __init__.py
        base.py
        production.py
        staging.py
        development_1.py
        development_2.py
I keep everything that's common to all the environments in base.py and then make the appropriate changes in production.py, staging.py, development_1.py or development_2.py.
My deployment process for production includes virtualenv, Fabric, upstart, a bash script (used by upstart), gunicorn and Nginx. I have a slightly modified version of the bash script I use with upstart to run the test server; it is something like this:
#!/bin/bash -e
# starts the development server using environment variables and django-admin.py
PROJECTDIR=/home/user/project
PROJECTENV=/home/user/.virtualenvs/project_virtualenv
source $PROJECTENV/bin/activate
cd $PROJECTDIR
export LC_ALL="en_US.UTF-8"
export HOME="/home/user"
export DATABASES_DEFAULT_NAME_DEVELOPMENT="xxxx"
export DATABASES_DEFAULT_USER_DEVELOPMENT="xxxxx"
export DATABASES_DEFAULT_PASSWORD_DEVELOPMENT="xxx"
export DATABASES_DEFAULT_HOST_DEVELOPMENT="127.0.0.1"
export DATABASES_DEFAULT_PORT_DEVELOPMENT="5432"
export REDIS_HOST_DEVELOPMENT="127.0.0.1:6379"
django-admin.py runserver --pythonpath=`pwd` --settings=MyProject.settings.development_1 0.0.0.0:8006
Notice this is not the complete story; I'm simplifying to make my point. I have some extra Python code in base.py that reads the values from these environment variables too.
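For instance, base.py might read those variables with a small helper like this (a sketch in the style Two Scoops of Django suggests; get_env_variable is not part of Django itself, and the variable names match the export lines in the script above):

# base.py (excerpt) -- pull settings from the environment set by the script
import os

from django.core.exceptions import ImproperlyConfigured

def get_env_variable(name):
    # fail loudly if a required variable was not exported
    try:
        return os.environ[name]
    except KeyError:
        raise ImproperlyConfigured('Set the %s environment variable' % name)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': get_env_variable('DATABASES_DEFAULT_NAME_DEVELOPMENT'),
        'USER': get_env_variable('DATABASES_DEFAULT_USER_DEVELOPMENT'),
        'PASSWORD': get_env_variable('DATABASES_DEFAULT_PASSWORD_DEVELOPMENT'),
        'HOST': get_env_variable('DATABASES_DEFAULT_HOST_DEVELOPMENT'),
        'PORT': get_env_variable('DATABASES_DEFAULT_PORT_DEVELOPMENT'),
    }
}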
Play with this and make sure to check the relevant chapter in Two Scoops of Django. I was also using the import approach you mentioned, but having settings out of the repository wasn't easy to manage; I made the switch a few months ago and it's helped me a lot.
I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file in my git repo as I'm planning on open-sourcing the project when I'm done (anyone using the project would need their own keys). I realize that this may not be the best way of doing this, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the copy to slave plugin, but it seems like any file that I want would first (manually) need to be copied or created in workspace/userContent. Am I missing something?
I would suggest using an environment variable, like MYPROJECT_SETTINGS. Then, when Jenkins runs the task, you can override the default path and point it at wherever you keep the settings file for Jenkins.
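For example (a sketch; MYPROJECT_SETTINGS and the default path are hypothetical names, not anything Jenkins defines):

# in the code under test: let an environment variable override the default location
import os

SETTINGS_FILE = os.environ.get('MYPROJECT_SETTINGS', 'settings.py')

In the Jenkins job configuration you would then export MYPROJECT_SETTINGS to point at a copy of the real settings kept on the build machine.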
The other option, in case you don't want to copy the settings file to each build machine by hand, would be making a settings.py with some default fake keys, which you can add to your repo, and a local settings file with real keys, which overrides some options, e.g.:
# settings.py file
SECRET_KEY = 'fake stuff'

try:
    from settings_local import *
except ImportError:
    pass
I am using the Copy Data To Workspace Plugin for this, Copy to Slave plugin should also work, but I found Copy Data To Workspace Plugin to be easier to work with for this use-case.
Why not just use "echo my-secret-keys > settings.txt" in Jenkins and adjust your script to read this file, so you can add it to the report?