Looking in django.conf I noticed that settings are implemented like this:
class LazySettings(LazyObject):
...
What is the rationale behind making settings objects lazy?
Check out this section of the Django coding style. The reason is explained in there (quoted below).
In addition to performance, third-party modules can modify settings when they are imported. Accessing settings should be delayed to ensure this configuration happens first.
Modules should not in general use settings stored in
django.conf.settings at the top level (i.e. evaluated when the module
is imported). The explanation for this is as follows:
Manual configuration of settings (i.e. not relying on the
DJANGO_SETTINGS_MODULE environment variable) is allowed and possible
as follows:
from django.conf import settings
settings.configure({}, SOME_SETTING='foo')
However, if any setting is
accessed before the settings.configure line, this will not work.
(Internally, settings is a LazyObject which configures itself
automatically when the settings are accessed if it has not already
been configured).
So, if there is a module containing some code as follows:
from django.conf import settings
from django.core.urlresolvers import get_callable
default_foo_view = get_callable(settings.FOO_EXAMPLE_VIEW)
...then importing
this module will cause the settings object to be configured. That
means that the ability for third parties to import the module at the
top level is incompatible with the ability to configure the settings
object manually, or makes it very difficult in some circumstances.
Instead of the above code, a level of laziness or indirection must be
used, such as django.utils.functional.LazyObject,
django.utils.functional.lazy() or lambda.
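To make that concrete, here is a minimal sketch of such indirection (assuming the same hypothetical FOO_EXAMPLE_VIEW setting as in the snippet above), using SimpleLazyObject, a convenience subclass of LazyObject, so the setting is only read when the value is first used:

from django.conf import settings
from django.utils.functional import SimpleLazyObject
from django.urls import get_callable  # lived in django.core.urlresolvers in older Django versions

# The lookup runs on first use of default_foo_view rather than at import time,
# so importing this module no longer forces the settings to configure themselves.
default_foo_view = SimpleLazyObject(lambda: get_callable(settings.FOO_EXAMPLE_VIEW))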
It's a proxy object that abstracts the actual settings files and keeps things lightweight until you actually access the settings you want. Once you start accessing attributes, it loads them on demand. The idea is to avoid the overhead of loading settings until you need them.
Additional information to complement the accepted answer:
Because we have both django/conf/global_settings.py (the default global settings) and the site-specific settings configured via the DJANGO_SETTINGS_MODULE environment variable, we need an object that handles the precedence of individual settings and abstracts that process away; we shouldn't do it ourselves. It also decouples the code that uses settings from the location of your settings files.
So an object is needed!
That said, why is this object lazy? Because it lets third parties configure the settings manually.
But how? Third parties do that through the settings.configure() method. They can only do that if the settings haven't already been loaded:
def configure(self, default_settings=global_settings, **options):
    """
    Called to manually configure the settings. The 'default_settings'
    parameter sets where to retrieve any unspecified values from (its
    argument must support attribute access (__getattr__)).
    """
    if self._wrapped is not empty:
        raise RuntimeError("Settings already configured.")
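For example, a standalone script can configure settings manually like this (a sketch; MY_EXAMPLE_SETTING is just an illustrative name), as long as nothing accesses a setting first:

from django.conf import settings

# This must run before any setting is accessed anywhere in the process;
# otherwise __getattr__ will already have configured settings from DJANGO_SETTINGS_MODULE.
settings.configure(DEBUG=True, MY_EXAMPLE_SETTING="foo")

print(settings.MY_EXAMPLE_SETTING)  # safe now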
As a consequence, Django's documentation says:
Modules should not in general use settings stored in
django.conf.settings at the top level (i.e. evaluated when the module
is imported)
Why? Because if they do, the __getattr__ method of this object contains a check that loads the settings if they haven't already been loaded:
def __getattr__(self, name):
    """Return the value of a setting and cache it in self.__dict__."""
    if (_wrapped := self._wrapped) is empty:
        self._setup(name)  # <-- settings are loaded here on first access
So it has to be lazy. Lazy here means that when an instance of the LazySettings class is instantiated, no settings are configured yet; configuration only happens on first access (or through an explicit configure() call).
I think that the purpose is to simplify settings from a developer's point of view, so each project can have its own settings.py file without needing to define all the other Django settings as well. The LazySettings wrapper essentially combines everything from Django's global_settings.py with your local settings. It lets the developer decide which settings to overwrite, which to keep at their defaults, and which to add.
LazySettings is maybe the wrong name for this, because I think it is not really lazy: once you do something like from django.conf import settings, the whole settings object is in your scope.
From Django docs:
You shouldn’t alter settings in your applications at runtime. For
example, don’t do this in a view:
from django.conf import settings
settings.DEBUG = True # Don't do this!
The only place you should assign to settings is in a settings file.
I've noticed that Django testing code does alter settings. Why is it ok to do it there?
Is it ok to change settings?
Short answer:
No, unless you do it during startup.
Long answer:
The Django docs are correct: you should not modify settings at runtime. That means no settings modifications after the app has started, such as changing configuration in views.py, serializers.py, models.py, or other modules you add during development. But it is OK to compute settings from local variables if you do it at startup and are fully aware of what happens.
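For example, computing a value from the environment in settings.py at startup is fine (a sketch; the environment variable names are just for illustration):

# settings.py
import os

DEBUG = os.environ.get("DJANGO_DEBUG", "") == "1"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(",")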
Can you modify settings while testing?
Yes, if you think you need it. Feel free to rely on override_settings to change a setting's value for testing purposes in unit tests (see the sketch below).
Everything this decorator does is override the settings with the provided values and restore the original values after the test has passed (i.e. after the decorated function has executed).
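A minimal sketch of the decorator form (the setting value and test names are just for illustration):

from django.conf import settings
from django.test import TestCase, override_settings

class ExampleSettingTests(TestCase):
    @override_settings(ALLOWED_HOSTS=["testserver", "example.com"])
    def test_with_extra_host(self):
        # Inside the test the overridden value is visible...
        self.assertIn("example.com", settings.ALLOWED_HOSTS)
    # ...and the original value is restored automatically once the test finishes.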
Why does Django modify them while testing its own code?
From what I see, they change settings only for testing purposes, and the only thing they do is add a local host to the allowed hosts so they can test the code using a local domain. Examples like that seem pretty reasonable to me, as the change is done only once, during unit test setup. Imagine having an override_settings call every time; that would be monstrous.
General recommendation.
Try not to. There is usually no need to modify settings, and if there is, think about it: maybe settings is not the right place for a mutable value?
If you do want to modify settings at runtime, be aware that settings might be cached somewhere, copied, and accessed all over the place; that leaves plenty of room for new bugs. There is nothing inherently wrong with it, except the unexpected system behavior that can result from the old/new value of the modified setting.
Hope this makes sense.
The answer is in the wording:
You shouldn’t alter settings in your applications at runtime.
Unit test code is not part of your application, so that statement doesn't apply to unit tests.
Why is it ok to do it there?
As per above, it is perfectly OK to override settings during tests, provided you do it in a localised manner (because tests are sometimes run in a multi-threaded manner).
Here is how they recommend doing it:
from django.test import TestCase

class LoginTestCase(TestCase):
    def test_login(self):
        # First check for the default behavior
        response = self.client.get('/sekrit/')
        self.assertRedirects(response, '/accounts/login/?next=/sekrit/')

        # Then override the LOGIN_URL setting
        with self.settings(LOGIN_URL='/other/login/'):
            response = self.client.get('/sekrit/')
            self.assertRedirects(response, '/other/login/?next=/sekrit/')
See docs:
https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.SimpleTestCase.settings
Changing the settings during tests is perfectly normal, expected, supported behavior. This is because you want to verify that your code works with lots of different settings. It's so normal in fact that the built-in method to do so has such a simple name it's a bit confusing to find in the docs:
e.g.
from django.conf import settings
from django.test import TestCase

class QueueTests(TestCase):
    def test_both_modes(self):
        # run_a_test() is a placeholder for whatever code is being exercised
        with self.settings(QUEUE_MODE='fast'):
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'fast'
        with self.settings(QUEUE_MODE='slow'):
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'slow'
One of my views depends on a foo function. The behaviour of this function is defined by a setting, BAR. I have different views that make use of foo, and for one of them I would like foo to work as if the setting were slightly different. Unfortunately, foo is in a third-party dependency, so I cannot modify it.
I am considering doing as follows:
from django.test.utils import override_settings

def my_view(request):
    with override_settings(BAR="newvalue"):
        foo()
        ...
I know it's dirty and uncouth. But is it safe? Can I assume settings.BAR will have the right value in views other than my_view?
I am using Django 1.4 with Python 2.7, if that matters. My foo function is actually an upload function and I need it to upload files to different directories for different views.
That is not safe. Check the module you are trying to import: django.test.utils, note the test part. It is designed to be used in tests, not in live code.
If you override your settings, you will not only change them for your other views but for your other users too. Settings are global and should be treated as immutable.
I don't know which module you are working with, but you could try to find the class you need to change in their code, subclass it, and extend it as needed, or see whether signals are of use to you. That way you are not invading the app's functionality but can still extend it as needed (see the sketch below).
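A rough sketch of that idea, with entirely hypothetical names for the third-party pieces (the real library would have to expose a class, or at least a hook, for this to work; Python 2.7 style super() since that's what the question uses):

from thirdparty.uploads import Uploader  # hypothetical third-party class that reads settings.BAR

class PerViewUploader(Uploader):
    """Takes the upload directory explicitly instead of relying on a global setting."""

    def __init__(self, upload_dir, **kwargs):
        super(PerViewUploader, self).__init__(**kwargs)
        self.upload_dir = upload_dir

    def get_upload_dir(self):  # hypothetical hook the parent class calls
        return self.upload_dir

def my_view(request):
    uploader = PerViewUploader(upload_dir="/var/uploads/other")
    ...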
I want to extend Python (2.7)'s logging module (specifically the Logger class).
Currently I have:
import logging

class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
Is this the right way to initialize MyLogger?
Will I be able to use Logger.manager (logging.Logger.manager)?
Is it possible to "get" a logger (I only know logging.getLogger(name) - which is not available since I'm extending the Logger itself, and I know static methods aren't popular in python as they are in Java, for example)?
Where can I learn more about extending classes? The documentation on python.org is very poor and did not help me.
My goal is to be able to start a logger with standard configuration and handlers based on the calling module's name, and to set all of the system's loggers to the same level with a short, readable call.
Seems like my approach was wrong altogether.
I prefer the way stated on python.org:
Using configuration files for logging cleans up the code and makes it easy to propagate changes.
A configuration file is loaded like so:
import logging.config

# see the example on the python.org site (logging for multiple modules)
logging.config.fileConfig('logging.conf')
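A minimal sketch of what such a logging.conf might contain (one root logger writing to the console; the names, levels and format are just illustrative):

[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
level=INFO
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s %(message)s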
As for batch abilities, since logging.Logger.manager.loggerDict and logging.getLogger remain available, batch operations can use simple loops to apply changes (like setting every logger to a single level) throughout the system.
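For example, a small sketch of such a loop (the function name is just for illustration):

import logging

def set_all_loggers(level=logging.WARNING):
    # loggerDict holds every logger name registered so far; getLogger
    # resolves placeholders into real Logger instances.
    for name in logging.Logger.manager.loggerDict:
        logging.getLogger(name).setLevel(level)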
This is a pretty simple idea conceptually. In terms of specifics, I'm working with a pretty generic Kotti installation, where I'm customizing some pages / templates.
Much of my configuration is shared between a production and development server. Currently, these settings are included in two separate ini files. It would be nice to DRY this configuration, with common settings in one place.
I'm quite open to this happening in Python or in an ini file / section (or perhaps a third, yet-to-be-considered place). I think it's equivalent to use a [DEFAULT] section or to pass dictionaries to loadapp via global_conf, but that seems to be processed in a squirrelly way. For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy itself fails on url = opts.pop('url'). Moreover, since Kotti defines some default settings, Paste doesn't end up searching for them in the global_conf (e.g., kotti_configurators).
I don't like the idea of passing in a large dict for %(var)s substitution, as it requires effectively renaming all of the shared variables.
In my initial experiments with Paste Deploy, it demands a "main" section in each ini file. So, I don't think I can just use a use = config:shared.ini line. But that's conceptually close to what I'd like to do.
Is there a way to explicitly (and properly) include settings from DEFAULT or global_conf? Or perhaps do this programmatically with python on the results of loadapp?
For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy itself fails on url = opts.pop('url').
If you think something is odd and you're asking on SO it'd be wise to show a stacktrace and an example of what you tried to do.
Kotti gets its settings the same way as any Pyramid app. Your entry point (usually def main(global_conf, **settings)) is passed the global_conf and the settings dictionary. You're then responsible for fixing those up and shuffling them off into the Configurator.
For various reasons PasteDeploy keeps the global_conf separate from the settings, thus you're responsible for merging them if you wish.
[DEFAULT]
foo = bar
[app:main]
use = egg:myapp#main
baz = xyz
from pyramid.config import Configurator

def main(global_conf, **app_settings):
    # merge the [DEFAULT] section (global_conf) with the app section's settings
    settings = global_conf
    settings.update(app_settings)

    config = Configurator(settings=settings)
    config.include('kotti')
    return config.make_wsgi_app()
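To tie it together, the application can then be loaded from the ini file; PasteDeploy hands the [DEFAULT] section to main() as global_conf and the [app:main] keys as keyword settings (the filename here is just an example):

from paste.deploy import loadapp

# 'config:' URI pointing at the ini file shown above
app = loadapp('config:production.ini', relative_to='.')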
I'm writing an application in Python using CherryPy and Jinja as the template system. It may be worth mentioning that I'm a beginner with these tools.
The problem I'm facing now is that I cannot figure out where to initialize Jinja's Environment class.
Currently I have
application.py (entry point; sets up the Environment and starts the server)
root.py (root page class for CherryPy; must be imported from application.py, and must import application.py to retrieve the instantiated Environment)
pages.py (other page classes for CherryPy; must import application.py, and must be imported from root.py to build the tree)
Trying to run that ends up in what seems to be a circular import and fails (application > root > pages > application).
Should I stick to only one Environment instance, or is it okay to have one instance in root.py and another in pages.py?
What is the correct pattern for this?
You shouldn't really repeat yourself. If I were you, I would create a new Python module, templates.py, and put all the Jinja environment configuration / creation there. Afterwards you can simply import that environment wherever you need it (e.g. from templates import jinjaenv). That way you keep things simple and extensible for future use.
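A minimal sketch of that module, assuming the templates live in a templates/ directory next to the code:

# templates.py
import os
from jinja2 import Environment, FileSystemLoader

jinjaenv = Environment(
    loader=FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates'))
)

# root.py or pages.py can then simply do:
#     from templates import jinjaenv
#     html = jinjaenv.get_template('index.html').render(title='Home')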