Conceptually, this is a simple idea. Specifically, I'm working with a fairly generic Kotti installation, where I'm customizing some pages and templates.
Much of my configuration is shared between a production and development server. Currently, these settings are included in two separate ini files. It would be nice to DRY this configuration, with common settings in one place.
I'm quite open to this happening in Python or in an ini file / section (or perhaps a third, yet-to-be-considered place). I think it's equivalent to use a [DEFAULT] section, or to pass dictionaries to loadapp via global_conf, but that seems to be processed in a squirrelly way. For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy itself fails on url = opts.pop('url'). Moreover, since Kotti defines some default settings, Paste doesn't end up searching for them in the global_conf (e.g., kotti_configurators).
I don't like the idea of passing in a large dict for %(var)s substitution, as it requires effectively renaming all of the shared variables.
In my initial experiments, Paste Deploy demands a "main" section in each ini file, so I don't think I can just use a use = config:shared.ini line. But that's conceptually close to what I'd like to do.
Is there a way to explicitly (and properly) include settings from DEFAULT or global_conf? Or perhaps to do this programmatically in Python on the results of loadapp?
For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy itself fails on url = opts.pop('url').
If you think something is odd and you're asking on SO, it'd be wise to show a stack trace and an example of what you tried to do.
Kotti gets its settings the same way as any Pyramid app. Your entry point (usually def main(global_conf, **settings)) is passed the global_conf and settings dictionaries. You're then responsible for fixing them up and shuffling them off into the Configurator.
For various reasons, PasteDeploy keeps the global_conf separate from the settings, so you're responsible for merging them if you wish. For example:
[DEFAULT]
foo = bar
[app:main]
use = egg:myapp#main
baz = xyz
from pyramid.config import Configurator

def main(global_conf, **app_settings):
    # Merge the global_conf ([DEFAULT] section) into the app's settings.
    settings = global_conf
    settings.update(app_settings)
    config = Configurator(settings=settings)
    config.include('kotti')
    return config.make_wsgi_app()
Are there any best-practices for config-file documentation, especially for python?
Particularly in scientific computing, it is common to use a config file as the input that controls a batch-processing job (such as a simulation), and to expect the user to customise a substantial portion of the config for their scenario. (The config also likely selects among different processing modules, each possessing a different suite of config fields.) Thus, the user ought to know: what each setting means or affects; which settings are unused (and in which circumstances); what the default values are (and the permissible values or ranges); etc.
I've found incomplete config-file docs to be common. The fundamental problem seems to be that if the docs are maintained separately from the code, they grow out of sync. (This seems less of a problem with API docs, due to standard practices involving colocated docstrings and autogeneration from function signatures/argspecs.) For example, if the standard Python configparser is used once to parse the config file, the code for accessing individual attributes (and implicitly determining the config schema) may still be spread out across the entire code base (and perhaps only be discoverable at runtime rather than when building docs).
Further thoughts:
Is it bad practice to replace a config file (yaml or similar) with a user-customised python script (so as to only need API docs)?
Distribution of a well-commented example config file (one that is also used in automated tests): how do you maintain it if different scenarios duplicate large sections but need some completely different fields?
Can a single schema be maintained, both for use in code (to help parse, validate, and set defaults) and to generate docs somehow? (See the sketch after this list.)
Is there a human readable/writeable way of (des)serialising the state of some (sub)class instance that represents a new batch process (so that config is covered by existing docs)?
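On that single-schema point, here is one hedged sketch using only the standard library; all field names and docs are hypothetical. The idea is that a dataclass declares each setting exactly once (name, default, documentation), so parsing/defaults in code and the rendered docs can't drift apart:

from dataclasses import dataclass, field, fields

@dataclass
class SimConfig:
    # Each setting is declared exactly once: name, default, and doc together.
    timestep: float = field(default=0.01, metadata={'doc': 'integration step size, in seconds'})
    n_particles: int = field(default=1000, metadata={'doc': 'number of simulated particles'})
    output_dir: str = field(default='./out', metadata={'doc': 'directory for result files'})

def render_docs(cls):
    # Generate human-readable docs from the same schema the code uses.
    return '\n'.join(f"{f.name} (default={f.default!r}): {f.metadata['doc']}"
                     for f in fields(cls))

print(render_docs(SimConfig))

The same fields() loop could drive validation or config-file parsing, which is the point: one definition, several consumers.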
Personally, I like to use the argparse module for configuration, and read the default value for each setting from an environment variable. That centralizes the settings and documentation in one place, and allows the user to either tweak settings on the command line or set and forget them in environment variables. Be careful about putting passwords on the command line, though, because other users can probably see your command line arguments in the process list.
Here's an example that uses argparse and environment variables:
import os
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, SUPPRESS

def parse_args(argv=None):
    parser = ArgumentParser(description='Watch the raw data folder for new runs.',
                            formatter_class=ArgumentDefaultsHelpFormatter)
    parser.add_argument(
        '--kive_server',
        default=os.environ.get('MICALL_KIVE_SERVER', 'http://localhost:8000'),
        help='server to send runs to')
    parser.add_argument(
        '--kive_user',
        default=os.environ.get('MICALL_KIVE_USER', 'kive'),
        help='user name for Kive server')
    parser.add_argument(
        '--kive_password',
        default=SUPPRESS,  # keep the password out of --help output
        help='password for Kive server (default not shown)')
    args = parser.parse_args(argv)
    if not hasattr(args, 'kive_password'):
        args.kive_password = os.environ.get('MICALL_KIVE_PASSWORD', 'kive')
    return args
Setting those environment variables can be a bit confusing, particularly for system services. If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets. Environment values can be viewed by any user with systemctl show.
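For illustration, a hypothetical service unit showing the difference (the unit name, paths, and secrets file are made up; Environment= and EnvironmentFile= are real systemd directives):

# /etc/systemd/system/run-watcher.service (hypothetical unit)
[Service]
ExecStart=/usr/local/bin/watch_runs
# Values set with Environment= are visible to any user via `systemctl show`:
Environment=MICALL_KIVE_SERVER=http://localhost:8000
# Secrets belong in a root-readable file loaded with EnvironmentFile=:
EnvironmentFile=/etc/micall/secrets.env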
I usually make the default values useful for a developer running on their workstation, so they can start development without changing any configuration.
Another option is to put the configuration settings in a settings.py file, and just be careful not to commit that file to source control. I have often committed a settings_template.py file that users can copy.
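A minimal, hypothetical settings_template.py along those lines (every name and value is an illustrative placeholder):

# settings_template.py -- copy to settings.py and fill in real values.
# settings.py itself should be listed in .gitignore.
DATABASE_URL = 'postgresql://localhost/myapp'  # hypothetical
API_KEY = 'replace-me'  # never commit the real key
DEBUG = True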
If your settings are so complicated/flexible that environment variables or a settings file get messy, then I would convert the project to a library with an API. Instead of settings, users then write a script that calls your API. You don't have to go through the effort of hosting your library on PyPI, either. pip can install from a GitHub repository, for example.
From the Django docs:
You shouldn’t alter settings in your applications at runtime. For
example, don’t do this in a view:
from django.conf import settings
settings.DEBUG = True # Don't do this!
The only place you should assign to settings is in a settings file.
I've noticed that Django testing code does alter settings. Why is it ok to do it there?
Is it ok to change settings?
Short answer:
No, unless you do it during the startup.
Long answer:
The Django docs are correct: you should not modify settings at runtime. That means no settings modifications after the app has started, e.g. changing configuration in views.py, serializers.py, models.py, or other modules you add during development. But it is OK to modify settings that depend on local variables, provided you do it at startup and are fully aware of what happens.
Can you modify settings while testing?
Yes, if you think you need it. Feel free to rely on override_settings to change a setting's value for testing purposes in unit tests (a sketch follows below).
Everything this decorator does is override settings with the provided values and restore the original values after the test has passed (i.e. after the decorated function has executed).
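A small sketch of that usage (the setting name here is a made-up example; the decorator itself is Django's public testing API):

from django.test import TestCase, override_settings

class GatewayTests(TestCase):
    @override_settings(PAYMENT_GATEWAY_URL='http://localhost:9999')  # hypothetical setting
    def test_override_is_scoped_to_the_test(self):
        from django.conf import settings
        # The override is active only inside this test; Django restores
        # the original value automatically afterwards.
        self.assertEqual(settings.PAYMENT_GATEWAY_URL, 'http://localhost:9999')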
Why does Django itself modify settings while testing its code?
From what I see, they change settings only for testing purposes, and the only thing they do is add a local host to ALLOWED_HOSTS so the code can be tested against a local domain. Examples like that seem pretty reasonable to me, as the change happens only once, during unit-test setup. Imagine having an override_settings call every time; that would be monstrous.
General recommendation.
Try not to. There is usually no need to modify settings, and if there seems to be, think about it: maybe settings is not the right place for a mutable value?
If you do want to modify settings at runtime, be aware that settings might be cached somewhere, copied, and accessed all over the place; that leaves plenty of room for new bugs. There is nothing inherently bad in it, except that the system may behave unexpectedly due to a stale or new value of the modified setting.
Hope this makes sense.
The answer is in the wording:
You shouldn’t alter settings in your applications at runtime.
Unit test code is not part of your application, so that statement doesn't apply to unit tests.
Why is it ok to do it there?
As per above, it is perfectly OK to override settings during tests, provided you do it in a localised way (because tests are sometimes run multi-threaded).
Here is how they recommend doing it:
from django.test import TestCase

class LoginTestCase(TestCase):
    def test_login(self):
        # First check for the default behavior
        response = self.client.get('/sekrit/')
        self.assertRedirects(response, '/accounts/login/?next=/sekrit/')

        # Then override the LOGIN_URL setting
        with self.settings(LOGIN_URL='/other/login/'):
            response = self.client.get('/sekrit/')
            self.assertRedirects(response, '/other/login/?next=/sekrit/')
See docs:
https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.SimpleTestCase.settings
Changing the settings during tests is perfectly normal, expected, supported behavior. This is because you want to verify that your code works with lots of different settings. It's so normal, in fact, that the built-in method for doing so has such a simple name that it's a bit confusing to find in the docs:
e.g.
from django.conf import settings
from django.test import TestCase

class QueueTests(TestCase):
    def test_both_modes(self):
        # run_a_test() and QUEUE_MODE are the author's placeholders.
        with self.settings(QUEUE_MODE='fast'):
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'fast'
        with self.settings(QUEUE_MODE='slow'):
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'slow'
I have been told that doing this is not a very good practice (though it appears in the main answer to "Python pattern for sharing configuration throughout application"):
configfile.py
SOUNDENABLED = 1
FILEPATH = 'D:\\TEMP\\hello.txt'
main.py
import configfile

if configfile.SOUNDENABLED == 1:
    ....

f = open(configfile.FILEPATH, 'a')
...
This is seemingly confirmed by the fact that many people use INI files for local configuration, with the ConfigParser module, iniparse, or other similar modules.
Question: Why would using an INI file for local configuration + an INI parser Python module be better than just importing a configfile.py file containing the right config values as constants?
The only concern here is that a .py file can contain arbitrary Python code, so it has the potential to break your program in arbitrary ways.
If you can trust your users to use it responsibly, there's nothing wrong with this setup. In fact, at one of my previous jobs we did just that, without any problems that I'm aware of. On the contrary: it allowed users to eliminate duplication by autogenerating repetitive parts and importing other config files.
Another concern is that if you have many files, the configuration files are better kept separate from the regular code files, so users know which files they are supposed to edit (the link above addresses this, too).
Importing a module executes any code that it contains. Nothing restricts your configfile.py to containing only definitions. Down the line, this is a recipe for security concerns and obscure errors. Also, you are bound to Python's module search path for finding the configuration file. What if you want to place the configuration file somewhere else, or if there is a name conflict?
This is a perfectly acceptable practice. Some examples of well-known projects using this method are Django and gunicorn.
An INI file could be better for a few reasons:
The only extension a Python config file can have is .py.
You cannot distribute your program with configs in a separate directory unless you put an __init__.py into that directory.
Malicious users of your program can put arbitrary Python in the config and do bad things.
For example, the YouCompleteMe autocompletion engine stores its config in a Python module, .ycm_extra_conf.py. By default, each time the config is imported, it asks you whether you are sure the file is safe to execute.
How would you change the configuration without restarting your app?
Generally, allowing execution of code that came from outside is a vulnerability that could lead to very serious consequences.
However, if these concerns don't apply (for example, you are developing a web application that executes only on your own server), putting configuration in a Python module is an acceptable practice. Django does so.
A "settings file" would be a file where things like "background color", "speed of execution", "number of x's" are defined. Currently, I implemented it as a single setting.py file, which I import in the beginning. Someone told me I should make it a settings.ini file instead, but I don't see why! Care to clarify, what is the optimal option?
There is no optimal solution; it is a matter of preference.*
Normally, settings do not need to be expressed in a Turing-complete language: they're often just a bunch of flags and options, sometimes strings and numbers, etc. An argument for having a settings.py file (though very unorthodox) would be if the end user were expected to write code to generate very esoteric configurations (e.g. maps for a game). This would then be fairly similar to shell-script .bashrc-style files.
But again, in 99.9% of programs, the settings are often just a bunch of flags and options, sometimes strings and numbers, etc. It's fine to store them as JSON or XML. It also makes it easy to perform reflection on your settings: for example, automatically listing them in a tree manner, or automatically creating a GUI out of the descriptions.
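For example, a minimal sketch of such reflection on JSON settings (the file name and keys are hypothetical):

import json

def print_settings(node, indent=0):
    # Walk the parsed settings and list them as a tree.
    for key, value in sorted(node.items()):
        if isinstance(value, dict):
            print(' ' * indent + key + ':')
            print_settings(value, indent + 2)
        else:
            print(' ' * indent + f'{key} = {value!r}')

with open('settings.json') as f:
    print_settings(json.load(f))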
(Also, it may be an (unlikely?) security issue if you allow people to inject code by modifying the settings file.)
*edit: no pun intended...
There are a few reasons why separating config files from the main codebase is a good idea. Of course, it depends on your use case, and you should evaluate each point against it.
Configuration can be managed by end users who do not understand programming languages. It makes sense to factor out configuration into a simple INI file that uses plain key-value pairs for config parameters.
Configuration varies with the installation environment. Your code runs in multiple environments that all use different configuration. It is much easier to maintain such cases with separate config files and the same source code installed in each environment.
Package managers know what is a config file and what is a source file, and are intelligent enough not to override a changed config on version upgrade, so you do not have to worry about resetting config parameters after upgrading a package. For example: you ship your product with a default config file, the user fine-tunes a few parameters, and you then ship another version of the package; the user should not expect a config reset after the upgrade.
One problem with having a settings file be a Python module is that it can contain code that will be executed when you import it. This may allow malicious code to be inserted into your program.
For Python, use stock libraries:
YAML style configuration files:
http://www.yaml.org/start.html
http://pypi.python.org/pypi/PyYAML/
(used e.g. by Google App Engine)
INI: http://docs.python.org/library/configparser.html
Don't use XML for hand-edited config files.
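For instance, a minimal PyYAML sketch (the file name and key are hypothetical); prefer safe_load so the file cannot instantiate arbitrary objects:

import yaml  # pip install PyYAML

with open('settings.yaml') as f:
    settings = yaml.safe_load(f)  # plain dicts, lists, and scalars

speed = settings['speed_of_execution']  # hypothetical key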
I am developing a project that requires a single configuration file whose data is used by multiple modules.
My question is: what is the common approach to this? Should I read the configuration file from each of my modules (files), or is there another way to do it?
I was thinking of having a module named config.py that reads the configuration files; whenever I need a setting, I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example).
Response: I opted for the config.py approach since it is modular, flexible, and simple.
I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file, or multiple sources, I just change config.py and make sure the data is accessed the same way.
Accepted answer: I chose Alex Martelli's response because it was the most complete. I voted up the other answers because they were good and useful too.
I like the approach of a single config.py module whose body (when first imported) parses one or more configuration-data files and sets its own "global variables" appropriately, though I'd favor config.teamdata over the roundabout config.data['teamdata'] approach.
This assumes configuration settings are read-only once loaded (except maybe in unit-testing scenarios, where the test-code will be doing its own artificial setting of config variables to properly exercise the code-under-test) -- it basically exploits the nature of a module as the simplest Pythonic form of "singleton" (when you don't need subclassing or other features supported only by classes and not by modules, of course).
"One or more" configuration files (e.g. first one somewhere in /etc for general default settings, then one under /usr/local for site-specific overrides thereof, then again possibly one in the user's home directory for user specific settings) is a common and useful pattern.
The approach you describe is OK. If you want to add support for user config files, you can use execfile(os.path.expanduser("~/.yourprogram/config.py")) (on Python 3, where execfile is gone, the equivalent is exec(open(path).read())).
One nice approach is to parse the config file(s) into a Python object when the application starts and pass this object around to all classes and modules requiring access to the configuration.
This can save a lot of time otherwise spent re-parsing the config.
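A small sketch of that approach (the field names and file name are hypothetical):

import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    teamsdir: str
    sound_enabled: bool

def load_config(path):
    # Parse once at startup...
    with open(path) as f:
        raw = json.load(f)
    return Config(teamsdir=raw['teamsdir'], sound_enabled=raw['sound_enabled'])

def process_teams(config):
    # ...then every consumer receives the same object explicitly.
    print('reading teams from', config.teamsdir)

if __name__ == '__main__':
    process_teams(load_config('config.json'))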
If you want to share your config across different machines, you could perhaps put it on a web server and load it like this:
import urllib2
confstr = urllib2.urlopen("http://yourhost/config.py").read()
exec(confstr)
And if you want to share it across different languages, perhaps you can use JSON to encode and parse the configuration:
import urllib2
import simplejson

confstr = urllib2.urlopen("http://yourhost/config.json").read()
config = simplejson.loads(confstr)