Modifying Django settings in tests - python

From Django docs:
You shouldn’t alter settings in your applications at runtime. For
example, don’t do this in a view:
from django.conf import settings
settings.DEBUG = True # Don't do this!
The only place you should assign to settings is in a settings file.
I've noticed that Django testing code does alter settings. Why is it ok to do it there?

Is it ok to change settings?
Short answer:
No, unless you do it during startup.
Long answer:
Django docs are correct: you should not modify settings at runtime. That means no settings modifications after the app has started, such as changing configuration in views.py, serializers.py, models.py or other modules you write during development. It is, however, acceptable to compute settings from local variables at startup, as long as you are fully aware of what happens.
Can you modify settings while testing?
Yes, if you think you need it. Feel free to rely on override_settings to change a setting's value for testing purposes in unit tests; see the Django testing docs for a usage example.
All this decorator does is override settings with the provided values and restore the original values after the test has passed (i.e. once the decorated function has executed).
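As an illustration, a minimal sketch of override_settings used both as a decorator and as a context manager (the FEATURE_ENABLED setting is made up for the example):

from django.conf import settings
from django.test import TestCase, override_settings

class FeatureTests(TestCase):
    @override_settings(FEATURE_ENABLED=True)  # restored automatically after the test
    def test_feature_on(self):
        self.assertTrue(settings.FEATURE_ENABLED)

    def test_feature_off(self):
        # the context-manager form limits the override to the with block
        with override_settings(FEATURE_ENABLED=False):
            self.assertFalse(settings.FEATURE_ENABLED)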
Why does Django modify settings while testing its own code?
From what I see, they change settings only for testing purposes, and the only thing they do is add a local host to ALLOWED_HOSTS so they can test the code using a local domain. Examples like that seem pretty reasonable to me, as the change is done only once, during unit test setup. Imagine having an override_settings call every time; that would be monstrous.
General recommendation.
Try not to. There is usually no need to modify settings, and if there seems to be, think about it: maybe settings are not the right place for a mutable value?
If you do want to modify settings at runtime, be aware that settings might be cached somewhere, copied and accessed all over the place; that leaves plenty of room for new bugs. There is nothing inherently bad in it, except the risk of unexpected system behavior caused by code seeing an old or new value of the modified setting.
Hope this makes sense.

The answer is in the wording:
You shouldn’t alter settings in your applications at runtime.
Unit test code is not part of your application, so that statement doesn't apply to unit tests.
Why is it ok to do it there?
As per above, it is perfectly OK to override settings during tests, provided you do it in a localised manner (because tests are sometimes run in multiple threads).
Here is how they recommend doing it:
from django.test import TestCase

class LoginTestCase(TestCase):
    def test_login(self):
        # First check for the default behavior
        response = self.client.get('/sekrit/')
        self.assertRedirects(response, '/accounts/login/?next=/sekrit/')

        # Then override the LOGIN_URL setting
        with self.settings(LOGIN_URL='/other/login/'):
            response = self.client.get('/sekrit/')
            self.assertRedirects(response, '/other/login/?next=/sekrit/')
See docs:
https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.SimpleTestCase.settings

Changing the settings during tests is perfectly normal, expected, supported behavior, because you want to verify that your code works with lots of different settings. It's so normal, in fact, that the built-in method to do so has such a simple name that it's a bit hard to find in the docs:
e.g.
from django.conf import settings
from django.test import TestCase

class QueueTests(TestCase):
    def test_both_modes(self):
        with self.settings(QUEUE_MODE='fast'):
            # run_a_test() is the author's placeholder for the code under test
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'fast'
        with self.settings(QUEUE_MODE='slow'):
            self.assertTrue(run_a_test())
            assert settings.QUEUE_MODE == 'slow'

Related

Is there a way to share common configuration using Paste Deploy?

This is a pretty simple idea conceptually. In terms of specifics, I'm working with a pretty generic Kotti installation, where I'm customizing some pages / templates.
Much of my configuration is shared between a production and development server. Currently, these settings are included in two separate ini files. It would be nice to DRY this configuration, with common settings in one place.
I'm quite open to this happening in Python or in an ini file / section (or perhaps a third, yet-to-be-considered place). I think it's equivalent to use a [DEFAULT] section or to pass dictionaries to loadapp via global_conf, but that seems to be processed in a squirrelly way. For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy itself fails on url = opts.pop('url'). Moreover, since Kotti defines some default settings, Paste doesn't end up searching for them in the global_conf (e.g., kotti_configurators).
I don't like the idea of passing in a large dict for %(var)s substitution, as it requires effectively renaming all of the shared variables.
In my initial experiments with Paste Deploy, it demands a "main" section in each ini file. So, I don't think I can just use a use = config:shared.ini line. But that's conceptually close to what I'd like to do.
Is there a way to explicitly (and properly) include settings from DEFAULT or global_conf? Or perhaps do this programmatically with python on the results of loadapp?
For example, Kotti thinks I've properly set sqlalchemy.url, but sqlalchemy iteself fails on url = opts.pop('url').
If you think something is odd and you're asking on SO it'd be wise to show a stacktrace and an example of what you tried to do.
Kotti gets its settings the same as any pyramid app. Your entry point (def main(global_conf, **settings) usually) is passed the global_conf and settings dictionary. You're then responsible for fixing that up and shuffling it off into the configurator.
For various reasons PasteDeploy keeps the global_conf separate from the settings, thus you're responsible for merging them if you wish.
[DEFAULT]
foo = bar

[app:main]
use = egg:myapp#main
baz = xyz

from pyramid.config import Configurator

def main(global_conf, **app_settings):
    settings = global_conf
    settings.update(app_settings)
    config = Configurator(settings=settings)
    config.include('kotti')
    return config.make_wsgi_app()

Why is django's settings object a LazyObject?

Looking in django.conf I noticed that settings are implemented like this:
class LazySettings(LazyObject):
    ...
What is the rationale behind making settings objects lazy?
Check out this section of the Django coding style; the reason is explained there (quoted below). In addition to performance, third-party modules can modify settings when they are imported, so access to settings should be delayed to ensure this configuration happens first.
Modules should not in general use settings stored in django.conf.settings at the top level (i.e. evaluated when the module is imported). The explanation for this is as follows:
Manual configuration of settings (i.e. not relying on the DJANGO_SETTINGS_MODULE environment variable) is allowed and possible as follows:

from django.conf import settings

settings.configure({}, SOME_SETTING='foo')

However, if any setting is accessed before the settings.configure line, this will not work. (Internally, settings is a LazyObject which configures itself automatically when the settings are accessed if it has not already been configured.)
So, if there is a module containing some code as follows:

from django.conf import settings
from django.core.urlresolvers import get_callable

default_foo_view = get_callable(settings.FOO_EXAMPLE_VIEW)

...then importing this module will cause the settings object to be configured. That means that the ability for third parties to import the module at the top level is incompatible with the ability to configure the settings object manually, or makes it very difficult in some circumstances.
Instead of the above code, a level of laziness or indirection must be used, such as django.utils.functional.LazyObject, django.utils.functional.lazy() or lambda.
It's a proxy object that abstracts away the actual settings module and keeps things lightweight until you actually access the settings you want. Once you start accessing attributes, it loads them on demand. The idea is to avoid the overhead of loading settings until you need them.
Additional information to the accepted answer:
Because we have both django/conf/global_settings.py (the default global settings) and the site-specific settings configured via the DJANGO_SETTINGS_MODULE environment variable, we need an object that handles the priority between individual settings and abstracts that process away. We shouldn't do that ourselves. It also decouples the code that uses settings from the location of your settings.
So an object is needed!
With that being said, why is this object lazy? Because it lets third parties configure the settings manually.
But how? Third parties do that via the settings.configure() method. They can only do that if the settings haven't already been loaded:
def configure(self, default_settings=global_settings, **options):
    """
    Called to manually configure the settings. The 'default_settings'
    parameter sets where to retrieve any unspecified values from (its
    argument must support attribute access (__getattr__)).
    """
    if self._wrapped is not empty:
        raise RuntimeError("Settings already configured.")
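For context, a minimal sketch of what such manual configuration looks like in practice (the setting values here are illustrative):

from django.conf import settings

# Must happen before any setting is accessed; calling it a second time,
# or after implicit setup has already run, raises RuntimeError.
settings.configure(DEBUG=True, ALLOWED_HOSTS=['localhost'])

print(settings.DEBUG)  # True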
And as a consequence, Django's documentation says:
Modules should not in general use settings stored in django.conf.settings at the top level (i.e. evaluated when the module is imported).
Why? If they did, then the __getattr__ method of this object contains a check that loads the settings if they haven't already been loaded:

def __getattr__(self, name):
    """Return the value of a setting and cache it in self.__dict__."""
    if (_wrapped := self._wrapped) is empty:
        self._setup(name)  # the settings are loaded here, on first access
So it has to be lazy. Lazy means that when an instance of the LazySettings class is instantiated, no settings are configured yet.
I think the purpose is to simplify settings from a developer's point of view, so each project can have its own settings.py file without having to define every other Django setting as well. The LazySettings wrapper essentially combines everything from Django's global_settings.py with your local settings. It lets the developer decide which settings to overwrite, which to keep at their defaults, and which to add.
LazySettings is maybe the wrong name for this class, because I think it is not really lazy: once you do from django.conf import settings, the whole settings object is in your scope.

Python Django Global Variables

I'm looking for a simple but recommended way in Django to store a variable in memory only. When Apache restarts or the Django development server restarts, the variable is reset back to 0. More specifically, I want to count how many times a particular action takes place on each model instance (database record), but for performance reasons, I don't want to store these counts in the database. I don't care if the counts disappear after a server restart. But as long as the server is up, I want these counts to be consistent between the Django shell and the web interface, and I want to be able to return how many times the action has taken place on each model instance.
I don't want the variables to be associated with a user or session because I might want to return these counts without being logged in (and I want the counts to be consistent no matter what user is logged in). Am I describing a global variable? If so, how do I use one in Django? I've noticed that files like urls.py, settings.py and models.py seem to be parsed only once per server startup (in contrast to views.py, which seems to be parsed each time a request is made). Does this mean I should declare my variables in one of those files? Or should I store it in a model attribute somehow (as long as it sticks around for as long as the server is running)? This is probably an easy question, but I'm just not sure how it's done in Django.
Any comments or advice is much appreciated.
Thanks,
Joe
Why mustn't one declare global variables? O_o It just looks like propaganda. If the author knows what he wants and what the side effects will be, why not? Maybe it's just a quick experiment.
You could declare your counter as a model class member. Then, to deal with the race condition, you have to add a method that waits if some other client, from another thread, is working with the counter. Something like this:

import threading

from django.db import models

class MyModel(models.Model):
    _counter = 0
    _counter_lock = threading.Lock()

    @classmethod
    def increment_counter(cls):
        with cls._counter_lock:
            cls._counter += 1

    def some_action(self):
        # core code
        self.increment_counter()

# somewhere else
print(MyModel._counter)
Remember, however: you have to run your application in a single process. So if you've deployed the application under Apache, make sure it is configured to spawn many threads, but not many processes. If you're experimenting with ./manage.py runserver, no action is required.
You mustn't declare global variables. Settings (constants) are OK if done right. But mutable variables violate the shared-nothing architecture and might cause a lot of trouble (at best, they'll be inconsistent).
I would simply store those statistics in the cache. (Well, actually I would store them in the database, but you made it clear you believe that would hurt performance, so...)
The cache's incr() and decr() methods are especially suitable for counting. See the cache docs for more info.
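A minimal sketch of a cache-backed counter, assuming a cache backend whose incr() is atomic (e.g. Memcached or Redis); the key name is made up for the example:

from django.core.cache import cache

def count_action(instance_pk):
    key = 'action_count_%s' % instance_pk
    cache.add(key, 0)       # sets the key only if it doesn't exist yet
    return cache.incr(key)  # atomic increment on supporting backends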
I would create a config.py file in the root directory and put all global variables inside:

x = 10
my_string = ''

In views.py:

from your_project import config

def my_view(request):
    y = config.x + 20
    my_title = config.my_string
    ...

The benefit of creating this file is that the variables can be shared across multiple .py files and are easy to manage.
If you want to avoid the hassle of the Django database, e.g. migrations or performance issues, you might consider the in-memory data store Redis. Redis keeps counts consistent even when there are multiple Django processes.
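A minimal sketch with the redis-py client (the connection parameters and key name are illustrative):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def count_action(instance_pk):
    # INCR is atomic, so concurrent processes can't lose updates
    return r.incr('action_count:%s' % instance_pk)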
You can use variables from settings.py; see the example below. It's an app that counts requests.
settings.py:

REQ_COUNTER = 0

views.py:

from django.http import HttpResponse
from {your project folder name}.settings import REQ_COUNTER

def CountReq(request):
    global REQ_COUNTER  # rebinds this module's copy, not the value in settings.py
    REQ_COUNTER = REQ_COUNTER + 1
    return HttpResponse(REQ_COUNTER)
Thanks :)

Correct place to put extra startup code in django?

I would like to run some environment checks when my Django process starts and die noisily in case of an error. I'm thinking of things like the database having an incorrect encoding or the machine running a Python version we don't support.
I'd rather our team be faced with a fatal error that they have to fix than be able to ignore it.
I'm OK with writing these checks, but I'm curious where the best place to put them is. How do I get them to execute as part of Django's startup process? I thought there might be a signal I could listen to, but I can't find a relevant one in the docs.
If you don't want to use the settings module, then try the project's __init__.py.
If you want to check that the system is correctly installed, I think you should write your own admin command and run it as a post-installation check.
I don't think it's worth checking the Python version too often, especially if you are installing the Django app on a shared host. My app is hosted at alwaysdata, and they restart the FastCGI process every hour. These checks can have an impact on the application's response time.
We use the top-level urls.py for this.
I would put them in settings.py. In the past, I have put system checks like this:

import os

try:
    from local_settings import *
except ImportError:
    print("Missing %s" % os.path.join(PROJECT_ROOT, "local_settings.py"))

if DEBUG:
    for p in [PROJECT_ROOT, MEDIA_ROOT, THEME_DIR, ADMIN_MEDIA_ROOT] + list(TEMPLATE_DIRS):
        p = os.path.normpath(p)
        if not os.path.exists(p):
            print("Missing path: %s" % p)
I have tested all three methods (__init__.py, settings.py and urls.py) and this is what I have found:
When the code is run from either __init__.py or settings.py, the startup functions are run twice upon web server startup; when they are run from urls.py, the code runs once, but only upon the first request made to the web server, leading to a potentially long wait for the first user to visit your site.
It is standard practice to call a 'warming' page when bringing large web applications back online, so I don't see why calling a startup function from a clearly identified location in urls.py should be a problem.
Most of the answers here are extremely old, so I assume that's why Django's checks framework isn't mentioned.
It's been part of Django since at least v2 (django==3.1 at the time of writing).
That's probably the right place for most of the checks you're describing.
I'd still consider using settings, as mentioned in other answers, for checks where the settings contents are required but which also need to run before the apps are ready.
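For reference, a minimal sketch of a custom system check (the check body and the myapp.E001 id are made up for the example):

from django.core.checks import Error, register

@register()
def settings_sanity_check(app_configs, **kwargs):
    from django.conf import settings
    errors = []
    # hypothetical requirement: a custom setting must be present
    if not getattr(settings, 'REQUIRED_CUSTOM_SETTING', None):
        errors.append(Error(
            'REQUIRED_CUSTOM_SETTING is not set.',
            id='myapp.E001',
        ))
    return errors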
You can put it in settings.py, as mentioned by others, but having code in the settings is not ideal. There is also the option of adding a handler for django.db.models.signals.class_prepared that does the desired startup checks after a specific model class is prepared.

Propagating application settings

Probably a very common question, but I couldn't find a suitable answer yet.
I have a (Python w/ C++ modules) application that makes heavy use of an SQLite database and its path gets supplied by user on application start-up.
Every time some part of the application needs access to the database, I plan to acquire a new session and discard it when done. For that to happen, I obviously need access to the path supplied at startup. A couple of ways I see it happening:
1. Explicit arguments
The database path is passed everywhere it needs to be through an explicit parameter, and the database session is instantiated with that explicit path. This is perhaps the most modular approach, but it seems incredibly awkward.
2. Database path singleton
The database session object would look like:
import foo.options

class DatabaseSession(object):
    def __init__(self, path=foo.options.db_path):
        ...
I consider this to be the lesser-evil singleton, since we're storing only constant strings, which don't change during application runtime. This leaves it possible to override the default and unit test the DatabaseSession class if necessary.
3. Database path singleton + static factory method
Perhaps slight improvement over the above:
def make_session(path=None):
    import foo.options
    if path is None:
        path = foo.options.db_path
    return DatabaseSession(path)

class DatabaseSession(object):
    def __init__(self, path):
        ...
This way the module doesn't depend on foo.options at all, unless you use the factory method. Additionally, the method can perform things like session caching, as sketched below.
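For instance, a minimal sketch of the factory with session caching added (the cache dict and its keying by path are illustrative, not part of the original design):

_session_cache = {}

def make_session(path=None):
    import foo.options
    if path is None:
        path = foo.options.db_path
    # reuse an existing session for the same database path
    if path not in _session_cache:
        _session_cache[path] = DatabaseSession(path)
    return _session_cache[path]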
And then there are other patterns that I don't know of. I vaguely saw something similar in web frameworks, but I don't have any experience with those. My example is quite specific, but I imagine it also extends to other application settings, hence the title of the post.
I would like to hear your thoughts about what would be the best way to arrange this.
Yes, there are others, but your option 3 is very Pythonic:
Use a standard Python module to encapsulate options (this is the way web frameworks like Django do it).
Use a factory to emit properly configured sessions.
Since SQLite already has a "connection", why not use that? What does your DatabaseSession class add that the built-in connection lacks?
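If you go that route, a minimal sketch of leaning directly on sqlite3's built-in connection (db_path being whatever the user supplied at startup):

import sqlite3
from contextlib import closing

def run_query(db_path, sql, params=()):
    # sqlite3.connect() already gives a lightweight, per-use "session"
    with closing(sqlite3.connect(db_path)) as conn:
        with conn:  # commits on success, rolls back on error
            return conn.execute(sql, params).fetchall()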
