In a Python Django project, I have a module with a class (let's say SomeDataStore) that abstracts file storage behaviour and requires a config setting with the correct path (different for development, prod, ...)
Currently, I've done it like this:
# settings.py
import os
RELEVANT_PATH = os.environ.get("DJANGO_RELEVANT_PATH", "/some/default")
...
# datastore.py
from pathlib import Path

from django.conf import settings

class SomeDataStore:
    def list_files(self):
        p = Path(settings.RELEVANT_PATH)
        files = p.glob("*.csv")
        return files
    ...
# views.py
from datastore import SomeDataStore

def show_files(request):
    files = SomeDataStore().list_files()
    ...
Goal: I'd like to decouple the datastore.py module completely from Django, so that it can easily be used standalone. That would mean getting rid of the django.conf.settings usage in datastore.py.
One obvious way to do it would be to provide the path as an init parameter to the DataStore class.
# datastore.py
class SomeDataStore:
    def __init__(self, path) -> None:
        self.path = path
    ...
However, I use this DataStore in many Django views, and that would require me to always specify the path when instantiating the class, e.g. like so
# views.py
from django.conf import settings

from datastore import SomeDataStore

def show_files(request):
    files = SomeDataStore(settings.RELEVANT_PATH).list_files()
    ...

def show_something_else(request):
    somethings = SomeDataStore(settings.RELEVANT_PATH).get_something_else()
    ...
I'd like to avoid the need in each instantiation to always specify the config setting for the path.
Question: Is there some clean way, pattern or approach to deal with this in Django? Or am I overlooking something obvious here?
Something I thought of is instantiating the DataStore in settings.py, but creating objects in settings.py seems to bloat it. Or doesn't it?
You could have a my_settings.py holding the PATH:
# my_settings.py
import os
RELEVANT_PATH = os.environ.get("whatever", "/some/default")
and use it like this:
# datastore.py
from pathlib import Path

from my_settings import RELEVANT_PATH

class SomeDataStore:
    def list_files(self):
        p = Path(RELEVANT_PATH)
        files = p.glob("*.csv")
        return files
    ...
In case you need this path elsewhere within Django, you could make it part of settings.py as well:
# settings.py
from my_settings import RELEVANT_PATH as my_relevant_path
RELEVANT_PATH = my_relevant_path
# usage in other django app files
from django.conf import settings
# use settings.RELEVANT_PATH as usual
This provides some decoupling: you change the path in a single place (my_settings.py), you can import it outside Django, and you can still use it inside Django with the usual django.conf.settings.XX syntax.
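For instance, a standalone script (the file name is illustrative) can then use the datastore without Django installed at all:
# check_files.py -- runs outside Django; only my_settings.py must be importable
from datastore import SomeDataStore

store = SomeDataStore()
for csv_file in store.list_files():
    print(csv_file)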
I'm not sure if I can express my problem clearly. Let's say I need to have a config file for model.py and also train.py, where train.py imports model.py.
In model.py, there is a function:
import config

def my_model():
    return config.a + config.b
However, I have another script, train.py:
import argparse

import config
import model

parser = argparse.ArgumentParser()
parser.add_argument('--a', default=None)
args = parser.parse_args()

if args.a is not None:
    config.a = args.a  # I need to change its value.

mymodel = model.my_model()
One option is to create a config.py file and import it in both train.py and model.py. But the problem is that the argument parser can become confusing: the actual configuration is whatever the argument parser sets, not what config.py shows. If you change the config file but not the argument parser, nothing changes.
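A minimal sketch of that first option (the values are illustrative defaults):
# config.py -- plain module-level defaults, imported by both model.py and train.py
a = 1
b = 2
Because both scripts import the same module object, an assignment like config.a = args.a in train.py is also visible to model.my_model().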
Another option is to turn everything config-related into function parameters in model.py, namely:
# model.py
def my_model(a, b):
    return a + b

# train.py
...
model.my_model(config.a, config.b)
But I want to have a global configuration in model.py, so I can make sure all settings are consistent.
Are there any better options? Which is the standard practice?
We are using Django for Speedy Net and Speedy Match (currently Django 1.11.17, we can't upgrade to a newer version of Django because of one of our requirements, django-modeltranslation). I want to define some of our settings as classes. For example:
class UserSettings(object):
    MIN_USERNAME_LENGTH = 6
    MAX_USERNAME_LENGTH = 40
    MIN_SLUG_LENGTH = 6
    MAX_SLUG_LENGTH = 200

    # Users can register from age 0 to 180, but can't be kept on the site after age 250.
    MIN_AGE_ALLOWED_IN_MODEL = 0  # In years.
    MAX_AGE_ALLOWED_IN_MODEL = 250  # In years.
    MIN_AGE_ALLOWED_IN_FORMS = 0  # In years.
    MAX_AGE_ALLOWED_IN_FORMS = 180  # In years.

    MIN_PASSWORD_LENGTH = 8
    MAX_PASSWORD_LENGTH = 120

    PASSWORD_VALIDATORS = [
        {
            'NAME': 'speedy.core.accounts.validators.PasswordMinLengthValidator',
        },
        {
            'NAME': 'speedy.core.accounts.validators.PasswordMaxLengthValidator',
        },
    ]
(which is defined in https://github.com/speedy-net/speedy-net/blob/staging/speedy/net/settings/global_settings.py). And then in the models, I tried to use:
from django.conf import settings as django_settings

class User(ValidateUserPasswordMixin, PermissionsMixin, Entity, AbstractBaseUser):
    settings = django_settings.UserSettings
(and then use attributes of settings, such as settings.MIN_USERNAME_LENGTH, in the class).
But it throws an exception
AttributeError: 'Settings' object has no attribute 'UserSettings'
(but it doesn't throw an exception if I use a constant there which is not a class).
This is the first problem. In the meantime, I defined instead:
from speedy.net.settings import global_settings as speedy_net_global_settings
class User(ValidateUserPasswordMixin, PermissionsMixin, Entity, AbstractBaseUser):
settings = speedy_net_global_settings.UserSettings
The second problem is: how do I override such settings in tests? For example, I use the following code:
from speedy.core.settings import tests as tests_settings

@override_settings(MAX_NUMBER_OF_FRIENDS_ALLOWED=tests_settings.OVERRIDE_MAX_NUMBER_OF_FRIENDS_ALLOWED)
in https://github.com/speedy-net/speedy-net/blob/staging/speedy/core/friends/tests/test_views.py. But if MAX_NUMBER_OF_FRIENDS_ALLOWED were defined in the class UserSettings, how would I override it?
Django doesn't expect you to deviate much from its low-level design choices and it's usually a struggle to work around things that Django doesn't explicitly allow you to customize.
Django's settings object explicitly skips over any objects in your settings module with non-uppercase names. If you rename your class to USER_SETTINGS, it will work. If you really want to keep your object's original name, a horrible solution would be to trick Django:
class UserSettings:
    ...

class AlwaysUppercaseStr(str):
    def isupper(self):
        return True

globals()[AlwaysUppercaseStr('UserSettings')] = globals().pop('UserSettings')
I have no idea if this is portable across Python implementations but it works with CPython's dir().
override_settings has no support for what you're trying to do so you will probably need to rewrite that class to allow the global settings object to be configurable.
Thanks to @Blender for the tip:
Django's settings object explicitly skips over any objects in your settings module with non-uppercase names. If you rename your class to USER_SETTINGS, it will work.
I was not aware that all the settings have to be uppercase. So I renamed class UserSettings to class USER_SETTINGS (although PyCharm doesn't like it), but I checked and it's also possible to add this code at the end of the file:
USER_SETTINGS = UserSettings
Without renaming the class.
As for my second question - how do I override such settings in tests? I added a file called utils.py:
def get_django_settings_class_with_override_settings(django_settings_class, **override_settings):
    class django_settings_class_with_override_settings(django_settings_class):
        pass
    for setting, value in override_settings.items():
        setattr(django_settings_class_with_override_settings, setting, value)
    return django_settings_class_with_override_settings
(You can see it on https://github.com/speedy-net/speedy-net/blob/staging/speedy/core/base/test/utils.py)
And then in the tests:
from django.conf import settings as django_settings
from django.test import override_settings
from speedy.core.settings import tests as tests_settings
from speedy.core.base.test.utils import get_django_settings_class_with_override_settings

@override_settings(USER_SETTINGS=get_django_settings_class_with_override_settings(django_settings_class=django_settings.USER_SETTINGS, MAX_NUMBER_OF_FRIENDS_ALLOWED=tests_settings.OVERRIDE_USER_SETTINGS.MAX_NUMBER_OF_FRIENDS_ALLOWED))
def test_user_can_send_friend_request_if_not_maximum(self):
    self.assertEqual(first=django_settings.USER_SETTINGS.MAX_NUMBER_OF_FRIENDS_ALLOWED, second=4)
I checked, and I have to define another class (in this case, class django_settings_class_with_override_settings), because if I change the class django_settings_class directly it also affects other tests which didn't use @override_settings.
This question has been asked earlier: What is the purpose of apps.py in Django 1.9?
Application configuration objects store metadata for an application. Some attributes can be configured in AppConfig subclasses. Others are set by Django and read-only.
However, what does "metadata for an application" mean? Is it limited to the predefined AppConfig metadata: name, verbose_name, path, label, module, models_module?
Or does it make sense to extend beyond the predefined metadata, especially for application-specific metadata? For example, in a blog app we might have a date format configuration, usually defined as follows:
# File: settings.py
BLOG = {
'DATE_FORMAT': 'ddMMYYYY',
}
It is then used as follows:
# File: blog/<...>.py
from django.conf import settings
date_format = settings.BLOG['DATE_FORMAT']
Alternatively, could we move this configuration into blog/apps.py, as part of BlogConfig?
class BlogConfig(AppConfig):
    name = 'blog'
    verbose_name = 'Awesome Blog'
    date_format = 'ddMMYYYY'
So that throughout the application's code, date_format is accessed via:
# File: blog/<...>.py
from django.apps import apps
date_format = apps.get_app_config('blog').date_format
It sounds to me that settings.py holds project settings, not application settings, so it seems more sensible to put all application settings inside apps.py rather than settings.py. Is this a valid assumption/argument/convention for putting application configuration inside apps.py instead of settings.py?
A project is unique per django installation, while an app is supposed to be reusable.
If you put custom app settings in your project's settings.py, they are meant to be modifiable per project, especially if you (or others) reuse this app for another project.
Now, if you put these custom settings in your app's apps.py, they won't be modifiable on a per-project basis, in which case there is no reason to put them in apps.py rather than, for instance, in a constants submodule. Unless you want to provide a limited set of possible configs:
from django.apps import AppConfig
from django.conf import settings
from django.utils.translation import ugettext_lazy as _

class BlogConfig(AppConfig):
    name = 'blog'
    verbose_name = "Blog"
    date_format = 'ddMMYYYY'

class CustomizableDateFormatBlogConfig(BlogConfig):
    date_format = getattr(settings, 'BLOG_DATE_FORMAT', BlogConfig.date_format)

class I18nBlogConfig(BlogConfig):
    verbose_name = _("Blog")
The default_app_config would be BlogConfig but the project using the app would be able to choose CustomizableDateFormatBlogConfig or I18nBlogConfig as well.
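For instance, the project would pick one of these variants by naming the config class in its settings (a sketch; Django resolves the dotted path to the AppConfig subclass):
# project/settings.py
INSTALLED_APPS = [
    # ...
    'blog.apps.I18nBlogConfig',  # instead of the plain 'blog' entry
]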
However, this makes for very poorly customizable apps. In the example above, if you want to let app users combine both CustomizableDateFormatBlogConfig and I18nBlogConfig, you would need to do something like this:
class BlogConfig(AppConfig):
    name = 'blog'
    verbose_name = "Blog"
    date_format = 'ddMMYYYY'

class CustomizableDateFormatMixin:
    date_format = getattr(settings, 'BLOG_DATE_FORMAT', BlogConfig.date_format)

class I18nMixin:
    verbose_name = _("Blog")

class CustomizableDateFormatBlogConfig(CustomizableDateFormatMixin, BlogConfig):
    pass

class I18nBlogConfig(I18nMixin, BlogConfig):
    pass

class I18nCustomizableDateFormatBlogConfig(I18nMixin, CustomizableDateFormatMixin, BlogConfig):
    pass
So, apart from specific cases where you need to provide a small set of different app configs, you'd better put your custom app settings in the project's settings.py.
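The getattr pattern used above generalizes into the usual convention for reusable apps: the app reads from the project's settings and falls back to its own default (a sketch; blog/conf.py and the setting name are illustrative):
# blog/conf.py -- app-level defaults, overridable from the project's settings.py
from django.conf import settings

# A project may define BLOG_DATE_FORMAT in its settings.py; otherwise the app default applies.
DATE_FORMAT = getattr(settings, 'BLOG_DATE_FORMAT', 'ddMMYYYY')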
I'm using Jinja2 templates with Bottle.py and Google App Engine's dev_appserver for development. I want the templates to automatically reload on every request (or ideally only when they change), so that I don't have to keep restarting the server.
According to bottle's docs, you're supposed to be able to disable template caching by calling bottle.debug(True).
Jinja still seems to be caching its templates, though. I believe this to be because of the way the bottle jinja2 adapter is written (it just uses a default Jinja2 loader and doesn't expose many configuration options).
Following the Jinja2 Docs, I wrote this custom Loader that I would expect to trigger a template reload every time, but it doesn't seem to work either:
import functools

import settings
from bottle import jinja2_template
from bottle import Jinja2Template, template as base_template

class AutoreloadJinja2Template(Jinja2Template):
    def loader(self, name):
        def uptodate():
            # Always reload the template if we're in DEVMODE (a boolean flag)
            return not settings.DEVMODE
        fname = self.search(name, self.lookup)
        if fname:
            with open(fname, "rb") as f:
                source = f.read().decode(self.encoding)
            return (source, fname, uptodate)

template = functools.partial(base_template,
                             template_adapter=AutoreloadJinja2Template,
                             template_lookup=settings.TEMPLATE_PATHS,
                             template_settings={
                                 'auto_reload': settings.DEVMODE
                             })
Templates are still getting cached until I restart dev_appserver. This must be a fairly common problem. Does anyone have a solution that works?
UPDATE:
I ended up doing something like:
class CustomJinja2Template(Jinja2Template):
    if settings.DEVMODE:
        def prepare(self, *args, **kwargs):
            kwargs.update({'cache_size': 0})
            return Jinja2Template.prepare(self, *args, **kwargs)

template = functools.partial(original_template, template_adapter=CustomJinja2Template)
This causes the templates to always reload, but only if a Python module has been touched, i.e. if you just edit a template file, the changes won't take effect until you edit one of the Python files that imports it. It seems as if the templates are still being cached somewhere.
I resolved this issue by ditching bottle's template solutions completely and using pure Jinja2. It seems that Jinja's FileSystemLoader is the only one that can watch for file changes.
I defined a new template function as follows (it looks for files in views/, just like bottle used to):
from jinja2 import Environment, FileSystemLoader

if local_settings.DEBUG:
    jinja2_env = Environment(loader=FileSystemLoader('views/'), cache_size=0)
else:
    jinja2_env = Environment(loader=FileSystemLoader('views/'))

def template(name, ctx):
    t = jinja2_env.get_template(name)
    return t.render(**ctx)
Then I use it like this:
@route('/hello')
def hello():
    return template('index.tpl', {'text': "hello"})
The difference from bottle's API is that you have to include .tpl in the file name and you have to pass the context variables as a dictionary.
Bottle caches templates internally (independent from Jinja2 caching). You can disable the cache via bottle.debug(True) or bottle.run(..., debug=True) or clear the cache with bottle.TEMPLATES.clear().
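For example, with bottle's own API:
import bottle

bottle.debug(True)        # disables bottle's internal template cache
bottle.TEMPLATES.clear()  # drops any templates that were already cached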
The Environment object in jinja2 has a configuration value for the cache size and, according to the documentation,
If the cache size is set to 0 templates are recompiled all the time
Have you tried something like this?
from jinja2 import Environment
env = Environment(cache_size=0)
Using bottle's view decorator, you can just do @view('your_view', cache_size=0).
Bottle has a reloader=True parameter in server adapter, but I guess it works only with SimpleTemplate. I will try to extend this behaviour to other template engines.
If you want to do it in all your views, maybe you can do something like this:
import functools

from bottle import view

view = functools.partial(view, cache_size=0)
This way, you can do it only when you are in debug mode, by wrapping this code in an if statement on bottle.DEBUG.
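A minimal sketch of that conditional:
import functools

import bottle
from bottle import view

# Disable bottle's template caching only while running in debug mode.
if bottle.DEBUG:
    view = functools.partial(view, cache_size=0)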
I am working on a cherrypy+cheetah app and would like to improve the development experience.
I have everything working when I manually compile templates beforehand. (Update: This is how things work for production: precompile, don't ship *.tmpl and load templates as regular python modules.) However, during development I'd rather just load the templates every time they are referenced so that I don't need to kill and restart my application. I have a couple of issues I am facing:
1. If I have templates inheriting from base templates, I get import errors (can't find base templates). I think I actually had this working during my experiments, but unfortunately didn't save it and now I can't make it work.
2. Supposing I get 1. working, how do I make it so that edits even in base templates get picked up without a restart?
Below is my sample application that should demonstrate the problems. The directory structure is as follows:
t.py
templates/
base.tmpl
index.tmpl
t.py:
import sys

import cherrypy
from Cheetah.Template import Template

class T:
    def __init__(self, foo):
        self.foo = foo

    @cherrypy.expose
    def index(self):
        return Template(file='templates/index.tmpl',
                        searchList=[{'foo': self.foo}]).respond()

cherrypy.quickstart(T(sys.argv[1]))
base.tmpl:
#def body
This is the body from the base
#end def
This is the base doc
index.tmpl:
#from templates.base import base
#extends base
#def body
$base.body(self)
This is the extended body
#end def
This is from index
Run it like this:
python t.py Something
Try this:
Replace index.tmpl with:
#from Cheetah.Template import Template
#def body
#set $base = Template(file="templates/base.tmpl")
$base.body()
<br/>
This is the extended body
#end def
$body()
<br/>
This is from index
Looks like this question was kind of answered in another SO question. By using the cheetah_import function, I can write my methods like this:
@cherrypy.expose
def index(self):
    templates = cheetah_import('templates.index')
    t = getattr(getattr(templates, 'index'), 'index')(searchList=[{'foo': self.foo}])
    return t.respond()
I will also need to add an __init__.py into the templates directory. This approach also requires that CherryPy's engine.autoreload_on setting is set to True.
To make production more efficient, I can make cheetah_import = __import__ and not replace the default __builtin__.__import__.