I'm fairly new to Python and Django, and I'm working on a webapp that will be run on multiple servers. Each server has its own little configuration details (commands, file paths, etc.) that I would like to store in a settings file, with a different copy of the file on each system.
I know that in Django, there's a settings file. However, is that only for Django-related things? Or am I supposed to put this type of stuff in there too?
This page discusses your situation and several others involving configuration on Django servers: http://code.djangoproject.com/wiki/SplitSettings
It's for any type of settings, but it's better to put local settings in a separate file so that version upgrades don't clobber them. Have the global settings file detect the presence of the local settings file and then either import everything from it or just execfile() it (exec() in Python 3).
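A minimal sketch of that detect-and-execute approach, placed at the bottom of settings.py (the local_settings.py filename here is just an assumption):

import os

_local = os.path.join(os.path.dirname(__file__), 'local_settings.py')
if os.path.exists(_local):
    with open(_local) as f:
        exec(f.read())  # Python 3 stand-in for execfile(); overrides the globals defined above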
There isn't a standard place to put config files. You can easily create your own config file as needed, though. ConfigParser might suit your needs (and is in the Python standard library).
Regardless of the format I use for my config file, I usually have the scripts that depend on settings get the config file path from an environment variable (using bash):
export MY_APP_CONFIG_PATH=/path/to/config/file/on/this/system
and then my scripts pick up the settings like so:
import os
path_to_config = os.environ['MY_APP_CONFIG_PATH']
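Putting those two pieces together, here is a rough sketch that reads the path from the environment and parses an INI file with configparser (the [paths] section and data_dir option are hypothetical names for illustration):

import configparser
import os

# path comes from the per-machine environment variable set above
path_to_config = os.environ['MY_APP_CONFIG_PATH']

config = configparser.ConfigParser()
config.read(path_to_config)
data_dir = config.get('paths', 'data_dir')  # hypothetical section/option names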
Related: in this answer, I was told that the logger needed to be declared only once, globally.
Along similar lines, how are program configuration settings loaded and used? Whether I use Dynaconf or simply yaml.safe_load, what is the best practice for storing and loading configuration data?
Example folder structure:
main.py
|
|--diskOperations
|     __init__.py
|     fileAndFolderOps.py
|
|--timer
      __init__.py
      timers.py
      stopwatch.py
Functionality needed:
- Some program settings need to be loaded in main.py.
- The classes in fileAndFolderOps.py of the diskOperations package need to load some settings.
- The classes in timers.py and stopwatch.py of the timer package need to load some settings.
- The settings need to be segregated into settings that are loaded during development, testing, production, etc.
Questions:
1. What are the best practices for creating and storing the config files? Are separate config files to be created and stored in each package's folder, and does each .py file load the configuration settings so that they become accessible to all the classes in that file?
2. If there is any setting to be used globally across the entire program, is it best to load it into main.py, put it into some ConfigurationHandler class, and pass the class instance to all instances of classes in every package file? (See the sketch after these questions.)
3. How best to switch between development, test and production modes? At least in Dynaconf there's an environment parameter; what about other config libraries? I'm asking about the best practice when settings need to be used across multiple packages.
4. To ensure that the variables in the classes match the configurations in the config files, is it recommended to have a "generator" function in each class that auto-generates configuration files with default values (if the config file does not exist)?
Perhaps it would also help to point us to any open source project that uses config files and settings in an ideal way (it helps to know about the 12-factor app guide).
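To make question 2 concrete, here is a rough sketch of the pattern being asked about, using yaml.safe_load as mentioned above (ConfigurationHandler and settings.yaml are hypothetical names):

import yaml  # PyYAML

class ConfigurationHandler:
    """Loads settings once; the single instance is passed to classes that need them."""
    def __init__(self, path):
        with open(path) as f:
            self._settings = yaml.safe_load(f) or {}

    def get(self, key, default=None):
        return self._settings.get(key, default)

# in main.py: load once and hand the instance down
# config = ConfigurationHandler('settings.yaml')
# stopwatch = Stopwatch(config)  # hypothetical class from the timer package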
Say I have some API key in my project that I don't want to share in a git repository; then I have to use environment variables. Now, why shouldn't I just set the environment variable on my local machine (like PATH) instead of making a .env file and adding the python-dotenv library to my project for the sake of doing essentially the same thing?
A .env file is just a way to store these variables in a file that packages such as python-dotenv can read into os.environ. In short, it is a manner of storing configuration.
Your .gitignore will usually list .env, so users can store the API key in a .env file on their local machine; this is convenient and ensures you don't accidentally leave API keys in committed git files.
If you just do a bare os.environ['API-KEY'] = 'stuff' within your code, the key gets stored in a committed git file (and environment variables set that way exist only within a run of the Python process, not permanently between sessions, which is why storing them in a file is preferred).
Beyond this, there are many ways to store configuration; dynaconf is a great package, and it demonstrates many of the other types of configuration files.
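As a minimal sketch of the .env workflow (assuming a .env file next to the script containing a line like API_KEY=stuff):

import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env and merges its variables into os.environ
api_key = os.environ['API_KEY']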
I'm new to Django. My localhost site is running fine. Since I am using PyCharm, it is easy to run any file. I decided to run each file in my Django project individually, and came across several errors, such as this one in my views.py:
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Even though the site seems to be running properly, I'm seeing this message. What is causing this, and is it typical?
You cannot run each file in your Django project individually, even though they are files with a .py extension; they depend on the Django framework to get the project running.
The reason you are seeing that error is that the file uses attributes from settings.py, which in turn requires Django to set up the application: starting the WSGI server and getting all the dependencies and installed apps ready before you actually use anything.
Understand that Django is a framework and relies on many underlying components to work. Even though you can technically run any file in any manner, you cannot start the application itself that way.
If you want to test the application, a better way is the Django shell, via python manage.py shell, which lets you check and test things individually rather than running each file standalone.
You can run an individual Python file in a Django project, but keep in mind that the Django settings must be supplied. This is not good practice, but you may use it for debugging purposes (for example, to test a parser that you wrote with database access).
You have to configure the Django settings before you can do anything with Django in a single file:
from django.conf import settings

settings.configure()  # must be called before using any Django settings or models

def do_something_with_a_model():
    # does something with a model
    print("I am done!!")
Note that relative imports may break when running single files (for example, imports like from .another_module import AnotherClass may break when working with a single file).
I know this issue has been discussed before, but I am struggling to find a straightforward explanation of how to approach configuration between a local development and a production server.
What I have done so far: I had one my_app_config.py file with machine/scenario (test vs. production) sections I could comment out. I would develop with my local machine path hardcoded, a test database connection string, my test spreadsheet location, etc. When it came time to deploy the code to the server, I would comment out the "test" section and uncomment the "production" section. As you may guess, this is fraught with errors.
I recently adopted the Python ConfigParser library to use .ini files. Now I have the following lines in my code:
import os
import ConfigParser  # Python 2; this module is configparser in Python 3

config = ConfigParser.RawConfigParser()
config.read(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'settings',
                                         'my_app_config.ini')))
database_connect_string_admin = config.get('Database', 'admin_str')
The problems with this are many...
- I need to have the import at the top of every file.
- The filename my_app_config.ini can't change, so I rely on comments within the .ini file itself to know which one I'm dealing with. They are stored in a folder tree so I know which is which.
- Notice the path to the config file is hardcoded relative to the current file, so where a Python file lives in the tree structure dictates whether I get a copy/paste error.
- I tried to set environment variables at the beginning of the program, but all the imports for all modules are performed right away at launch, and I was getting "not found" errors left and right.
What I want: to keep all the configuration stored in one place so it is not easy to lose track of what I am doing. I want an easy way to keep these configuration files (ideally one file or script) under version control (security is a whole other issue; I digress). I want to be able to seamlessly switch contexts (local-test, local-production, serverA-test, serverA-production, serverB-test, serverB-production).
My app uses:
- my_app_config.ini, read by my parser
- uwsgi.ini, read by the uwsgi application server emperor
- web_config.py, used by the Flask application
- nginx.conf, symlinked to the web server's configuration
- the celery configuration
...not to mention different paths for everything (ideally handled by the magic config-handling genie). I imagine once I figure this out I will be embarrassed it took so long to grasp.
Are environment variables the way to do what I am trying to do here?
You should try simple-settings. It will resolve all your issues. You set one environment variable.
In development:
$ export SIMPLE_SETTINGS=settings.general,settings.development
$ python app.py
In production:
$ export SIMPLE_SETTINGS=settings.general,settings.production
$ python app.py
You can keep development.py and production.py out of the repository for security reasons.
Example:

# settings/general.py
SIMPLE_CONF = 'simple'

# app.py
from simple_settings import settings
print(settings.SIMPLE_CONF)
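For completeness, a hypothetical override layer might look like this; listing it after settings.general in SIMPLE_SETTINGS makes its values win:

# settings/development.py (hypothetical, kept out of the repo)
SIMPLE_CONF = 'development value'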
The documentation describes many more features and benefits.
I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file in my git repo as I'm planning on open-sourcing the project when I'm done (anyone using the project would need their own keys). I realize that this may not be the best way of doing this, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the Copy to Slave plugin, but it seems like any file that I want would first need to be (manually) copied or created in workspace/userContent. Am I missing something?
I would suggest using an environment variable, like MYPROJECT_SETTINGS. When Jenkins runs the task, you can override the default path to point to wherever you put the settings file for Jenkins.
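A small sketch of that idea (MYPROJECT_SETTINGS and the default path are assumptions):

import os

# use the Jenkins-provided path when set, otherwise fall back to the local default
settings_path = os.environ.get('MYPROJECT_SETTINGS', 'settings.py')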
The other option, in case you don't want to copy the settings file to each build machine by hand, is to commit a settings.py with default fake keys to your repo, plus a local settings file with the real keys that overwrites some options, e.g.:
# settings.py file
SECRET_KEY = 'fake stuff'

try:
    from settings_local import *  # real keys live here, outside the repo
except ImportError:
    pass
I am using the Copy Data To Workspace Plugin for this. The Copy to Slave plugin should also work, but I found Copy Data To Workspace easier to work with for this use case.
Why not just run echo my-secret-keys > settings.txt in Jenkins and adjust your script to read this file so it can pick the keys up?
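If you go that route, the script side is tiny; a sketch (assuming the Jenkins job wrote the key to settings.txt as above):

# read the secret the Jenkins job wrote to settings.txt
with open('settings.txt') as f:
    secret_key = f.read().strip()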