What's a djangonautic way of handling default settings in an app if one isn't defined in settings.py?
I've currently placed a default_settings file in the app, and I've considered a few options. I'm leaning towards the first, but there may be pitfalls in using globals() that I'm not aware of.
I've mostly seen apps do FOO = getattr(settings, 'FOO', False) at the top of the file that uses the setting, but I think this approach has readability/repetition problems when the values and names are long.
1: Place settings in a function and iterate over locals / set globals
from django.conf import settings

def setup_defaults():
    FOO = 'bar'
    for key, value in locals().items():
        globals()[key] = getattr(settings, key, value)

setup_defaults()
Pros:
- Only have to write the var name once to pull the default of the same name from Django settings.
Cons:
- Not used to using globals(), and don't know of any implications.
2: Write getattr(settings, 'MY_SETTING', default_settings.MY_SETTING) every call
Pros:
- Very clear.
Cons:
- Repetitive
3: Always define settings as FOO = getattr(settings, 'FOO', '...setting here...')
Pros:
- Defaults are always overridden
Cons:
- Repetitive (must define the var name twice: once in string form, once as the variable)
- The setting is not as readable, since it's now the third argument
4: Create a utility function get_or_default(setting) (a sketch follows the pros/cons below)
Pros:
- Simple
- Don't have to repeat the string representation of the setting
Cons:
- Have to call it
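A minimal sketch of such a helper, assuming the app ships a default_settings module (the module and helper names here are illustrative, not from the question):

# myapp/conf.py
from django.conf import settings

from myapp import default_settings


def get_or_default(name):
    # Prefer the project-level setting; fall back to the app's default.
    return getattr(settings, name, getattr(default_settings, name))

# usage: FOO = get_or_default('FOO')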
5: Create a settings class
class Settings(object):
    FOO = 'bar'

    def __init__(self):
        # filter out the startswith('__') of
        # self.__dict__.items() / compare to django.conf.settings?
        pass

my_settings = Settings()
Cons:
- Can't do from foo.bar.my_settings import FOO (actually, that's a terrible deal breaker!)
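To make option 5 concrete, here is one hedged way to fill in that __init__ (illustrative only, not the asker's code):

from django.conf import settings as django_settings


class Settings(object):
    FOO = 'bar'

    def __init__(self):
        # Copy every non-dunder class attribute onto the instance,
        # preferring the value from django.conf.settings when defined there.
        for key, value in type(self).__dict__.items():
            if not key.startswith('__'):
                setattr(self, key, getattr(django_settings, key, value))


my_settings = Settings()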
I'd love to hear feedback.
I think it's quite common to create a settings.py in your app's package, where you define your settings like this:
from django.conf import settings
FOO = getattr(settings, 'FOO', "default_value")
In your app you can import them from your app's settings module:
from myapp.settings import *
def print_foo():
    print(FOO)
But I think everybody agrees that Django is lacking a better generic architecture for this! If you're looking for a more sophisticated way to handle it, there are third-party apps like django-appconf, but it's your decision whether you want to introduce one more dependency for your app or not!
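For reference, django-appconf usage looks roughly like this (a sketch based on its documented pattern; check the project's docs before relying on the details):

# myapp/conf.py
from django.conf import settings  # appconf resolves overrides against this
from appconf import AppConf


class MyAppConf(AppConf):
    FOO = "default_value"  # exposed as settings.MYAPP_FOO unless overridden

    class Meta:
        prefix = "myapp"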
Updated for 2020
In your app's settings.py, assign the default back onto the settings object itself (note the settings. prefix before the name):
from django.conf import settings
settings.FOO = getattr(settings, 'FOO', "default_value")
It seems that every solution I see tends to create an internal copy of application settings, or proxy or wrap them somehow. This is confusing, and it creates problems when settings are modified at run time, as they are in tests.
To me, all settings belong in django.conf.settings and only there. You should not read them from somewhere else, nor copy them for later use (as they may change). You should set them once and not bother about defaults later on.
I understand the impulse to drop the app prefix when an app setting is used internally, but this too is IMHO a bad idea. When in trouble, searching for SOME_APP_FOO will yield no results, because it's used simply as FOO internally. Confusing, right? And for what, a few letters? Remember that explicit is better than implicit?
IMHO the best way is to just set those defaults in Django's own settings, using the plumbing that is already there: no module import hooks, and no hijacking of the always-imported models.py to initialize some extra, complicated metaclass machinery.
Why not use AppConfig.ready for setting defaults?
from django.apps import AppConfig


class FooBarConfig(AppConfig):
    name = 'foo_bar'

    def ready(self):
        from django.conf import settings
        settings = settings._wrapped.__dict__
        settings.setdefault('FOO_BAR_SETTING', 'whatever')
Or better yet define them in clean simple way in a separate module and import them as (or close to how) Settings class does it:
from django.apps import AppConfig


class FooBarConfig(AppConfig):
    name = 'foo_bar'

    def ready(self):
        from . import app_settings as defaults
        from django.conf import settings

        for name in dir(defaults):
            if name.isupper() and not hasattr(settings, name):
                setattr(settings, name, getattr(defaults, name))
I'm not sure the use of __dict__ is the best solution, but you get the idea; you can always use the hasattr/setattr combo to get the effect.
This way your app settings are:
- exposed to others, if they should rely on them in some rare cases (provided, of course, the apps are configured in an order that lets them rely on each other)
- read normally, like any other setting
- nicely declared in their own module
- lazy enough
- set in django.conf.settings in a controlled way; you can implement some transposition of names if you want to
PS. There is a warning about not modifying settings at run time, but it does not explain why. So I think this one time, during initialization, may be a reasonable exception ;)
PS2. Don't name the separate module just settings, as this may get confusing when you import settings from django.conf.
How about this?
In myapp/settings.py:
from django.conf import settings
FOO = 'bar'
BAR = 'baz'
_g = globals()
# iterate over a copy: the loop variables key/value would otherwise be added
# to the module globals mid-iteration and change the dict's size
for key, value in list(_g.items()):
    _g[key] = getattr(settings, key, value)
In myapp/other.py:
import myapp.settings

print(myapp.settings.FOO)
Given this answer by ncoghlan, I feel ok using globals() this way.
You can use django-zero-settings, which lets you define your defaults plus a settings key under which users override them; it also offers auto-import of strings, removed-settings management, caching, pre-checks, etc.
To create app settings like in your example:
from zero_settings import ZeroSettings

app_settings = ZeroSettings(
    key="APP",
    defaults={
        "FOO": "bar",
    },
)
Then you can use it like:
from app.settings import app_settings

print(app_settings.FOO)  # prints "bar"
User settings will auto-override the defaults, like:
# this is settings.py, the Django settings file
SECRET_KEY = "some_key"
# other settings ...

# the key `APP` is the same key arg passed to ZeroSettings
APP = {
    "FOO": "not_bar",
}
And then:
from app.settings import app_settings

print(app_settings.FOO)  # this time you get "not_bar"
In response to Phil Gyford's comment exposing the problem of settings not being overridden in tests (since they were already imported in the modules), what I did was define an AppSettings class in __init__.py (sketched below) with:
- an __init__ method to initialize each setting to None
- a load method to load every setting from the getters
- static getters for each setting
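A minimal sketch of that class, assuming each getter simply wraps getattr on django.conf.settings (setting names are illustrative):

# myapp/__init__.py
from django.conf import settings


class AppSettings:
    def __init__(self):
        # initialize each setting to None
        self.some_setting = None

    def load(self):
        # load every setting from its getter
        self.some_setting = AppSettings.get_some_setting()

    @staticmethod
    def get_some_setting():
        # read lazily, at call time, so test overrides are honored
        return getattr(settings, "SOME_SETTING", "default")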
Then in the code:
from . import AppSettings


def in_some_function():
    some_setting = AppSettings.get_some_setting()
Or if you want to load them all in once (but overriding settings in tests won't work for the impacted module):
from . import AppSettings

app_settings = AppSettings()
app_settings.load()


def in_some_function():
    print(app_settings.some_setting)
You can then use the override_settings decorator in your tests and still have a DRY, clear way of using app settings, at the cost of a few more instructions executed each time you want to get a setting (just for tests...).
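For instance, a test against the getter-based variant might look like this (a hedged sketch; override_settings is standard Django test machinery):

from django.test import TestCase, override_settings

from myapp import AppSettings


class AppSettingsTests(TestCase):
    @override_settings(SOME_SETTING="overridden")
    def test_getter_sees_override(self):
        # the static getter reads settings at call time, so it sees the override
        self.assertEqual(AppSettings.get_some_setting(), "overridden")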
Number 3 is best because it is the simplest one, and it looks very consistent.
Number 1: it is easy to overlook. If I open your code and don't scroll to the bottom, I'll miss it and think that settings can't be overridden in my own module.
Number 2: it is not only repetitive, it is harder to read because it is too long; also, the default values get defined multiple times and scattered all over your code.
Number 4: non-consistent look, repetitive calls.
Number 5: non-consistent; we expect settings to be defined in a module, not in a class. At least I expect them to be defined at module level, because I've seen many apps using method 3 and I use it myself, so I might be biased.
Well, I haven't been getting answers or commentary, partly because the code in the original content below is so restricted to its own small context. So instead, I want to share the whole codebase with you (don't worry, I will permalink the selected lines), because I intend to open the source anyway, so you can review as much as you'd like.
The whole codebase is here, on the perma/1 branch of the repository.
Original Content
I have a custom template tag as below:
# other imports
from django.conf import settings

DPS_TEMPLATE_TRUE_DEFAULT = getattr(settings, "DPS_TEMPLATE_TRUE_DEFAULT", "True")


@register.simple_tag(name="var")
def get_var(name, rit=DPS_TEMPLATE_TRUE_DEFAULT, rif="False", rin=""):
    """
    A template tag to render value of a variable.
    """
    _LOGGER.debug("Rendering value for `%s`...", name)
    variable = models.Variable.objects.get(name=name)
    value = variable.value
    if value is None:
        return rin
    if isinstance(value, bool):
        if value:
            return rit
        else:
            return rif
    return variable.value
As you can see, I would like to set rit by DPS_TEMPLATE_TRUE_DEFAULT. I test this behavior as below:
# `template_factory` and `context_factory` create Template and Context instances accordingly.
# I use them in other tests; they work.
@pytest.mark.it("Render if True by settings")
def test_render_if_true_settings(
    self, template_factory, context_factory, variable_factory, settings
):
    settings.DPS_TEMPLATE_TRUE_DEFAULT = "this is true by settings"
    variable_factory(True)
    template = template_factory("FOO", tag_name=self.tag_name).render(
        context_factory()
    )
    assert "<p>this is true by settings</p>" in template
I use pytest-django and, as the docs put it, I can kinda mock the settings. However, when I run the test, it does not see DPS_TEMPLATE_TRUE_DEFAULT and uses "True". I verified this behavior by removing the "True" default from getattr.
Why does it not see DPS_TEMPLATE_TRUE_DEFAULT even if I set it in tests?
Addition / New Content
In the custom template tag, you can see that I'd like to grab DPS_TEMPLATE_TRUE_DEFAULT from django.conf.settings and use it as rit kwarg in my var tag.
This is where I test this behavior by mutating the related setting with settings fixture of pytest-django and it fails.
As the troubleshooting section states, I have also tried the other, possibly official, ways to do this; they produce the same behavior. As to why it does that, I have no clue.
Troubleshooting
Using Standard Solutions
The odd thing is I have also tried good-old django.test.utils.override_settings and modify_settings, which show the same behavior.
Eager Initialization
I thought that maybe the problem was using getattr outside the scope of the get_var function, which would evaluate it at import time, that is, before the tests run, and somehow not let me set it again. So I moved getattr inside the get_var function, but the behavior was the same. It behaves as if DPS_TEMPLATE_TRUE_DEFAULT does not exist in settings.
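The moved-inside variant was essentially this (a reconstructed sketch, not a verbatim quote of the branch):

@register.simple_tag(name="var")
def get_var(name, rit=None, rif="False", rin=""):
    # look the default up lazily, at call time instead of import time
    if rit is None:
        rit = getattr(settings, "DPS_TEMPLATE_TRUE_DEFAULT", "True")
    ...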
Hardcoding into Settings File
So I have hardcoded the "failing to see" setting in the settings.py file as below:
DPS_TEMPLATE_TRUE_DEFAULT = "this is true by settings"
Still it behaves like DPS_TEMPLATE_TRUE_DEFAULT does not exist.
This is also proven by removing the default value "True" from getattr in this line.
Environment
Django 2.2.8
Pytest Django 3.7.0
In my endless quest to over-complicate simple stuff, I am researching the most 'Pythonic' way to provide global configuration variables inside the typical 'config.py' found in Python egg packages.
The traditional way (aah, good ol' #define!) is as follows:
MYSQL_PORT = 3306
MYSQL_DATABASE = 'mydb'
MYSQL_DATABASE_TABLES = ['tb_users', 'tb_groups']
Therefore global variables are imported in one of the following ways:
from config import *

dbname = MYSQL_DATABASE
for table in MYSQL_DATABASE_TABLES:
    print(table)
or:
import config
dbname = config.MYSQL_DATABASE
assert(isinstance(config.MYSQL_PORT, int))
It makes sense, but it can sometimes be a little messy, especially when you're trying to remember the names of certain variables. Besides, providing a 'configuration' object with variables as attributes might be more flexible. So, taking a lead from the bpython config.py file, I came up with:
class Struct(object):

    def __init__(self, *args):
        self.__header__ = str(args[0]) if args else None

    def __repr__(self):
        if self.__header__ is None:
            return super(Struct, self).__repr__()
        return self.__header__

    def next(self):
        """ Fake iteration functionality.
        """
        raise StopIteration

    def __iter__(self):
        """ Fake iteration functionality.
        We skip magic attributes and Structs, and return the rest.
        """
        ks = self.__dict__.keys()
        for k in ks:
            # check the attribute value, not the key string, for Struct-ness
            if not k.startswith('__') and not isinstance(getattr(self, k), Struct):
                yield getattr(self, k)

    def __len__(self):
        """ Don't count magic attributes or Structs.
        """
        ks = self.__dict__.keys()
        return len([k for k in ks
                    if not k.startswith('__')
                    and not isinstance(getattr(self, k), Struct)])
and a 'config.py' that imports the class and reads as follows:
from _config import Struct as Section

mysql = Section("MySQL specific configuration")
mysql.user = 'root'
mysql.passwd = 'secret'  # 'pass' is a Python keyword, so it can't be an attribute name
mysql.host = 'localhost'
mysql.port = 3306
mysql.database = 'mydb'

mysql.tables = Section("Tables for 'mydb'")
mysql.tables.users = 'tb_users'
mysql.tables.groups = 'tb_groups'
and is used in this way:
from sqlalchemy import MetaData, Table

import config as CONFIG

assert isinstance(CONFIG.mysql.port, int)

mdata = MetaData(
    "mysql://%s:%s@%s:%d/%s" % (
        CONFIG.mysql.user,
        CONFIG.mysql.passwd,
        CONFIG.mysql.host,
        CONFIG.mysql.port,
        CONFIG.mysql.database,
    )
)

tables = []
for name in CONFIG.mysql.tables:
    tables.append(Table(name, mdata, autoload=True))
Which seems a more readable, expressive and flexible way of storing and fetching global variables inside a package.
Lamest idea ever? What is the best practice for coping with these situations? What is your way of storing and fetching global names and variables inside your package?
How about just using the built-in types like this:
config = {
    "mysql": {
        "user": "root",
        "pass": "secret",
        "tables": {
            "users": "tb_users"
        }
        # etc
    }
}
You'd access the values as follows:
config["mysql"]["tables"]["users"]
If you are willing to sacrifice the potential to compute expressions inside your config tree, you could use YAML and end up with a more readable config file like this (note: no leading dashes, so each level parses as a mapping rather than a list):
mysql:
  user: root
  pass: secret
  tables:
    users: tb_users
and use a library like PyYAML to conveniently parse and access the config file.
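A minimal loading sketch, assuming the file above is saved as config.yaml:

import yaml  # pip install pyyaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config["mysql"]["tables"]["users"])  # tb_users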
I like this solution for small applications:
class App:
    __conf = {
        "username": "",
        "password": "",
        "MYSQL_PORT": 3306,
        "MYSQL_DATABASE": 'mydb',
        "MYSQL_DATABASE_TABLES": ['tb_users', 'tb_groups'],
    }
    __setters = ["username", "password"]

    @staticmethod
    def config(name):
        return App.__conf[name]

    @staticmethod
    def set(name, value):
        if name in App.__setters:
            App.__conf[name] = value
        else:
            raise NameError("Name not accepted in set() method")
And then usage is:
if __name__ == "__main__":
    # from config import App
    App.config("MYSQL_PORT")      # returns 3306
    App.set("username", "hi")     # sets a new username value
    App.config("username")        # returns "hi"
    App.set("MYSQL_PORT", "abc")  # this raises NameError
.. you should like it because:
- it uses class variables (no object to pass around, no singleton required),
- it uses encapsulated built-in types and looks like (is) a method call on App,
- it has control over individual config immutability; mutable globals are the worst kind of globals,
- it promotes conventional and well-named access / readability in your source code,
- it is a simple class but enforces structured access; an alternative is to use @property, but that requires more variable-handling code per item and is object-based,
- it requires minimal changes to add new config items and set their mutability.
--Edit--:
For large applications, storing values in a YAML (i.e. properties) file and reading that in as immutable data is a better approach (i.e. blubb/ohaal's answer).
For small applications, this solution above is simpler.
How about using classes?
# config.py
class MYSQL:
    PORT = 3306
    DATABASE = 'mydb'
    DATABASE_TABLES = ['tb_users', 'tb_groups']

# main.py
from config import MYSQL

print(MYSQL.PORT)  # 3306
Let's be honest, we should probably consider using a Python Software Foundation maintained library:
https://docs.python.org/3/library/configparser.html
Config example (INI format; JSON is available via the standard library as well):
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes
[bitbucket.org]
User = hg
[topsecret.server.com]
Port = 50022
ForwardX11 = no
Code example:
>>> import configparser
>>> config = configparser.ConfigParser()
>>> config.read('example.ini')
>>> config['DEFAULT']['Compression']
'yes'
>>> config['DEFAULT'].getboolean('MyCompression', fallback=True) # get_or_else
Making it globally-accessible:
import configparser


class App:
    __conf = None

    @staticmethod
    def config():
        if App.__conf is None:  # Read only once, lazily.
            App.__conf = configparser.ConfigParser()
            App.__conf.read('example.ini')
        return App.__conf


if __name__ == '__main__':
    App.config()['DEFAULT']['MYSQL_PORT']
    # or, better:
    App.config().get(section='DEFAULT', option='MYSQL_PORT', fallback=3306)
    # ...
Downsides:
Uncontrolled global mutable state.
A small variation on Husky's idea that I use. Make a file called 'globals' (or whatever you like) and then define multiple classes in it, as such:
# globals.py
class dbinfo:  # for database globals
    username = 'abcd'
    password = 'xyz'


class runtime:
    debug = False
    output = 'stdio'
Then, if you have two code files c1.py and c2.py, both can have at the top
import globals as gl
Now all code can access and set values, as such:
gl.runtime.debug = False
print(gl.dbinfo.username)
People forget classes exist, even if no object belonging to that class is ever instantiated. And variables in a class that aren't preceded by 'self.' are shared across all instances of the class, even if there are none. Once 'debug' is changed by any code, all other code sees the change.
By importing it as gl, you can have multiple such files and variables that lets you access and set values across code files, functions, etc., but with no danger of namespace collision.
This lacks some of the clever error checking of other approaches, but is simple and easy to follow.
Similar to blubb's answer, but I suggest building them with lambda functions to reduce code. Like this:
User = lambda passwd, hair, name: {'password': passwd, 'hair': hair, 'name': name}

#Col   Username   Password      Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          # ...
          }
# ...
config['st3v3']['password']  #> password
config['blubb']['hair']      #> black
This does smell like you may want to make a class, though.
Or, as MarkM noted, you could use namedtuple:
from collections import namedtuple

# ...
User = namedtuple('User', ['password', 'hair', 'name'])

#Col   Username   Password      Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          # ...
          }
# ...
config['st3v3'].password  #> password
config['blubb'].hair      #> black
I did that once. Ultimately I found my simplified basicconfig.py adequate for my needs. You can pass in a namespace with other objects for it to reference if you need to. You can also pass in additional defaults from your code. It also maps attribute and mapping style syntax to the same configuration object.
Please check out the IPython configuration system, implemented via traitlets, for the type enforcement you are doing manually.
Cut and pasted here to comply with SO guidelines for not just dropping links as the content of links changes over time.
traitlets documentation
Here are the main requirements we wanted our configuration system to have:
Support for hierarchical configuration information.
Full integration with command line option parsers. Often, you want to read a configuration file, but then override some of the values with command line options. Our configuration system automates this process and allows each command line option to be linked to a particular attribute in the configuration hierarchy that it will override.
Configuration files that are themselves valid Python code. This accomplishes many things. First, it becomes possible to put logic in your configuration files that sets attributes based on your operating system, network setup, Python version, etc. Second, Python has a super simple syntax for accessing hierarchical data structures, namely regular attribute access (Foo.Bar.Bam.name). Third, using Python makes it easy for users to import configuration attributes from one configuration file to another.
Fourth, even though Python is dynamically typed, it does have types that can be checked at runtime. Thus, a 1 in a config file is the integer ‘1’, while a '1' is a string.
A fully automated method for getting the configuration information to the classes that need it at runtime. Writing code that walks a configuration hierarchy to extract a particular attribute is painful. When you have complex configuration information with hundreds of attributes, this makes you want to cry.
Type checking and validation that doesn’t require the entire configuration hierarchy to be specified statically before runtime. Python is a very dynamic language and you don’t always know everything that needs to be configured when a program starts.
To achieve this, they basically define 3 object classes and their relations to each other:
1) Configuration - basically a ChainMap / basic dict with some enhancements for merging.
2) Configurable - base class to subclass all things you'd wish to configure.
3) Application - object that is instantiated to perform a specific application function, or your main application for single purpose software.
In their words:
Application: Application
An application is a process that does a specific job. The most obvious application is the ipython command line program. Each application reads one or more configuration files and a single set of command line options and then produces a master configuration object for the application. This configuration object is then passed to the configurable objects that the application creates. These configurable objects implement the actual logic of the application and know how to configure themselves given the configuration object.
Applications always have a log attribute that is a configured Logger. This allows centralized logging configuration per-application.
Configurable: Configurable
A configurable is a regular Python class that serves as a base class for all main classes in an application. The Configurable base class is lightweight and only does one thing.
This Configurable is a subclass of HasTraits that knows how to configure itself. Class level traits with the metadata config=True become values that can be configured from the command line and configuration files.
Developers create Configurable subclasses that implement all of the logic in the application. Each of these subclasses has its own configuration information that controls how instances are created.
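To make the pattern concrete, a small hedged sketch of traitlets usage (the Database class and its traits are invented for illustration):

from traitlets import Int, Unicode
from traitlets.config import Application, Configurable


class Database(Configurable):
    # .tag(config=True) marks a trait as settable from config files / CLI
    host = Unicode("localhost").tag(config=True)
    port = Int(3306).tag(config=True)


class MyApp(Application):
    classes = [Database]  # configurables this application knows about

    def start(self):
        db = Database(config=self.config)
        self.log.info("db at %s:%s", db.host, db.port)


if __name__ == "__main__":
    # e.g. python myapp.py --Database.port=3307
    MyApp.launch_instance()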
I want to have a dict/list to which I can add values, just like models can be added to the admin register in Django!
My attempt : (package -> __init__.py)
# Singleton object
# __init__.py (Package: pack)
class remember:
a = []
def add(data):
a.append[data]
def get():
return a
obj = remember()
# models1.py
import pack

pack.obj.add("data")

# models2.py
import pack

pack.obj.add("data2")
print(pack.obj.get())
# We should get: ["data", "data2"]
# We get:        ["data2"]
How do I achieve the desired functionality?
Some say that methods can do this if you don't need sub-classing; how can this be done with methods?
Update:
To be more clear: just like the django admin register, anyone can import it and register itself with admin, so that the register is persisted between imports.
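For reference, the admin-register style relies on plain module-level state; a hedged sketch (a module is created once per process and cached in sys.modules, so its globals persist across imports):

# pack/__init__.py
_registry = []  # module-level, shared by every importer in this process


def add(data):
    _registry.append(data)


def get():
    return list(_registry)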
If it's a singleton you're after, have a look at this old blog post. It contains a link to a well documented implementation (here).
Don't. If you think you need a global, you don't, and you should reevaluate how you are approaching the problem, because 99% of the time you're doing it wrong.
If you have a really good reason to do it, perhaps thread_locals() will really solve the problem you're trying to solve. This allows you to set up thread-level global data. Note: this is only slightly better than a true global, should in general be avoided, and can cause you a lot of headaches.
If you're looking for a cross request "global" then you most likely want to look into storing values in memcached.
I'm used to developing web applications on Django and gunicorn.
In the case of Django, any application module in a Django project can get the deployment settings through django.conf.settings. "settings.py" is written in Python, so any arbitrary settings and pre-processing can be defined dynamically.
In the case of gunicorn, there are three configuration places, in order of precedence, and one settings-registry class instance combines them. (But usually these settings are used only by gunicorn, not the application.)
1. Command line parameters.
2. Configuration file (like Django, written in Python, which can have any arbitrary settings set dynamically).
3. Paster application settings.
In the case of Pyramid, according to the Pyramid documentation, deployment settings are usually put into pyramid.registry.Registry().settings. But they seem to be accessible only when a pyramid.router.Router() instance exists.
That is, pyramid.threadlocal.get_current_registry().settings returns None during the startup process in an application's "main.py".
For example, I usually define some business logic in SQLAlchemy model modules, which requires deployment settings as follows.
myapp/models.py

from sqlalchemy import Table, Column, Types
from sqlalchemy.orm import mapper
from pyramid.threadlocal import get_current_registry
from myapp.db import session, metadata

settings = get_current_registry().settings

mytable = Table('mytable', metadata,
    Column('id', Types.INTEGER, primary_key=True),
    # (other columns) ...
)


class MyModel(object):
    query = session.query_property()
    external_api_endpoint = settings['external_api_uri']
    timezone = settings['timezone']

    def get_api_result(self):
        # (interact with external api ...)
        pass


mapper(MyModel, mytable)
But, "settings['external_api_endpoint']" raises a TypeError exception because the "settings" is None.
I thought of two solutions.
1. Define a callable which accepts a "config" argument in "models.py", and have "main.py" call it with a Configurator() instance.
myapp/models.py

from sqlalchemy import Table, Column, Types
from sqlalchemy.orm import mapper
from myapp.db import session, metadata

_g = globals()


def initialize(config):
    settings = config.get_settings()

    mytable = Table('mytable', metadata,
        Column('id', Types.INTEGER, primary_key=True),
        # (other columns ...)
    )

    class MyModel(object):
        query = session.query_property()
        external_api_endpoint = settings['external_api_endpoint']

        def get_api_result(self):
            # (interact with external api) ...
            pass

    mapper(MyModel, mytable)

    _g['MyModel'] = MyModel
    _g['mytable'] = mytable
2. Or, put an empty module "myapp/settings.py", and put the settings into it later.
myapp/__init__.py

from pyramid.config import Configurator
from .resources import RootResource


def main(global_config, **settings):
    config = Configurator(
        settings=settings,
        root_factory=RootResource,
    )

    import myapp.settings
    myapp.settings.settings = config.get_settings()

    # (other configurations ...)

    return config.make_wsgi_app()
Both of these solutions (and others) meet the requirements, but I find them troublesome. What I want is the following.
development.ini
defines rough settings, because development.ini can have only string-type constants.
[app:myapp]
use = egg:myapp
env = dev0
api_signature = xxxxxx
myapp/settings.py
defines detailed settings based on development.ini, because arbitrary variables (of any type) can be set.
import datetime, urllib
from pytz import timezone
from pyramid.threadlocal import get_current_registry

pyramid_settings = get_current_registry().settings

if pyramid_settings['env'] == 'production':
    api_endpoint_uri = 'http://api.external.com/?{0}'
    timezone = timezone('US/Eastern')
elif pyramid_settings['env'] == 'dev0':
    api_endpoint_uri = 'http://sandbox0.external.com/?{0}'
    timezone = timezone('Australia/Sydney')
elif pyramid_settings['env'] == 'dev1':
    api_endpoint_uri = 'http://sandbox1.external.com/?{0}'
    timezone = timezone('JP/Tokyo')

api_endpoint_uri = api_endpoint_uri.format(urllib.urlencode({'signature': pyramid_settings['api_signature']}))
Then, other modules can get arbitrary deployment settings through "import myapp.settings".
Or, if Registry().settings is preferable to "settings.py", the **settings kwargs and "settings.py" may be combined and registered into Registry().settings during the "main.py" startup process.
Anyway, how do I get the settings dictionary during startup time? Or does Pyramid gently force us to put every piece of code which requires deployment settings into "views" callables, which can get the settings dictionary anytime through request.registry.settings?
EDIT
Thanks, Michael and Chris.
I at last understand why Pyramid uses threadlocal variables (registry and request), in particular the registry object for more than one Pyramid application.
In my opinion, however, deployment settings usually affect business logic that may define application-specific behavior. That logic is usually put in one or more Python modules other than "myapp/__init__.py" or "myapp/views.py" (which can easily get access to Config() or Registry()). Those Python modules are normally "global" at the Python process level.
That is, even when more than one Pyramid application coexists, despite their own threadlocal variables, they have to share those "global" Python modules, which may contain application-specific things, at the Python process level.
Of course, every one of those modules can have an "initialize()" callable which is called with a Configurator() by the application's "main" callable, or a Registry() or Request() object can be passed through a long series of function calls to meet the usual requirements. But I guess Pyramid beginners (like me), or developers with large applications or many settings, may find that troublesome, although that is Pyramid's design.
So I think Registry().settings should hold only real "thread-local" variables, and not normal business-logic settings. Responsibility for segregating application-specific modules, classes, callables, variables, etc. should be taken by the developer.
As of now, from my viewpoint, I will take Chris's answer. Or, in the "main" callable, do "execfile('settings.py', settings, settings)" and put it in some "global" space.
Another option, if you enjoy global configuration via Python, create a settings.py file. If it needs values from the ini file, parse the ini file and grab them out (at module scope, so it runs at import time):
from paste.deploy.loadwsgi import appconfig

config = appconfig('config:development.ini', 'myapp', relative_to='.')

if config['env'] == 'production':
    api_endpoint_uri = 'http://api.external.com/?{0}'
    timezone = timezone('US/Eastern')
# .. and so on ...
'config:development.ini' is the name of the ini file (prefixed with 'config:'). 'myapp' is the section name in the config file representing your app (e.g. [app:myapp]). "relative_to" is the directory name in which the config file can be found.
The pattern that I use is to pass the Configurator to modules that need to be initialized. Pyramid doesn't use any global variables because a design goal is to be able to run multiple instances of Pyramid in the same process. The threadlocals are global, but they are local to the current request, so different Pyramid apps can push to them at the same time from different threads.
With this in mind, if you do want a global settings dictionary you'll have to take care of that yourself. You could even push the registry onto the threadlocal manager yourself by calling config.begin().
I think the major thing to take away here is that you shouldn't be calling get_current_registry() at module level, because at import time you aren't really guaranteed that the threadlocals are initialized. However, if you call get_current_registry() in your init_model() function, you'd be fine, provided you previously called config.begin().
Sorry this is a little convoluted, but it's a common question and the best answer is: pass the configurator to your submodules that need it and allow them to add stuff to the registry/settings objects for use later.
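A hedged sketch of that suggestion (Configurator.begin()/end() are Pyramid's threadlocal push/pop APIs; the module layout is illustrative):

# main.py
from pyramid.config import Configurator


def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.begin()   # push this registry onto the threadlocal stack
    try:
        # modules imported here may call get_current_registry() safely
        import myapp.models  # noqa
    finally:
        config.end() # pop the threadlocal registry
    return config.make_wsgi_app()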
Pyramid uses static configuration via PasteDeploy, unlike Django.
Your [EDIT] part is a nice solution; I think the Pyramid community should consider such usage.
I'm creating a website based on Django (I know it's pure Python, so maybe this can also be answered by people who know Python well) and I need to call some methods dynamically.
For example, I have a few applications (modules) in my website with the method "do_search()" in their views.py. Then I have one module called, for example, "search", and there I want an action which will be able to call all the existing "do_search()" methods in the other applications. Of course I don't like adding each application to the imports and calling them directly. I need some better way to do it dynamically.
Can I read the INSTALLED_APPS variable from settings and somehow run through all of the installed apps and look for the specific method? A piece of code would help a lot here :)
Thanks in advance!
Ignas
I'm not sure if I truly understand the question, but please clarify in a comment to my answer if I'm off.
# search.py
searchables = []


def search(search_string):
    # each registered searchable is a callable that performs a search
    return [s(search_string) for s in searchables]


def register_search_engine(searchable):
    if callable(searchable):
        searchables.append(searchable)
    else:
        # raise some error perhaps
        raise TypeError("%r is not callable" % (searchable,))


# views.py
def do_search(search_string):
    # search somehow, and return the result
    pass


# models.py
# You need to ensure this code runs before any attempt at searching can
# begin (e.g. in models.py if this app is in INSTALLED_APPS), the reason
# being that this module may not have been imported before the call to search.
import search
from views import do_search

search.register_search_engine(do_search)
As for where to register your search engine, there is some helpful documentation in the signals docs for Django which relates to this:
You can put signal handling and registration code anywhere you like. However, you'll need to make sure that the module it's in gets imported early on so that the signal handling gets registered before any signals need to be sent. This makes your app's models.py a good place to put registration of signal handlers.
So your models.py file should be a good place to register your search engine.
Alternative answer that I just thought of:
In your settings.py, you can have a setting that declares all your search functions. Like so:
# settings.py
SEARCH_ENGINES = ('app1.views.do_search', 'app2.views.do_search')

# search.py
from django.conf import settings
from django.utils import importlib


def search(search_string):
    search_results = []
    for engine in settings.SEARCH_ENGINES:
        i = engine.rfind('.')
        module, attr = engine[:i], engine[i + 1:]
        mod = importlib.import_module(module)
        do_search = getattr(mod, attr)
        search_results.append(do_search(search_string))
    return search_results
This works somewhat similarly to registering MIDDLEWARE_CLASSES and TEMPLATE_CONTEXT_PROCESSORS. The above is all untested code, but if you look around the Django source, you should be able to flesh it out and remove any errors.
If you can import the other applications through
import other_app
then it should be possible to perform
method = getattr(other_app, 'do_' + method_name)
result = method()
However your approach is questionable.