Create constants using a "settings" module? - python

I am relatively new to Python. I am looking to create a "settings" module where various application-specific constants will be stored.
Here is how I want to set up my code:
settings.py
CONSTANT = 'value'
script.py
import settings
def func():
    var = CONSTANT
    # do some more coding
    return var
I am getting a Python error stating:
global name 'CONSTANT' is not defined.
I have noticed that in Django's source code the settings.py file has constants named just like mine. I am confused about how they can be imported into a script and referenced throughout the application.
EDIT
Thank you for all your answers! I tried the following:
import settings
print settings.CONSTANT
I get the same error
ImportError: cannot import name CONSTANT

The easiest way to do this is to just have settings be a module.
(settings.py)
CONSTANT1 = "value1"
CONSTANT2 = "value2"
(consumer.py)
import settings
print settings.CONSTANT1
print settings.CONSTANT2
When you import a Python module, you have to prefix the variables that you pull from it with the module name. If you know exactly what values you want to use from it in a given file and you are not worried about them changing during execution, then you can do
from settings import CONSTANT1, CONSTANT2
print CONSTANT1
print CONSTANT2
but I wouldn't get carried away with that last one. It makes it difficult for people reading your code to tell where values are coming from, and it precludes those values being updated if another client module changes them. One final way to do it is
import settings as s
print s.CONSTANT1
print s.CONSTANT2
This saves you typing, will propagate updates, and only requires readers to remember that anything prefixed with s comes from the settings module.
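The "won't propagate updates" point can be demonstrated. This sketch fabricates the settings module in-process with types.ModuleType purely to stay self-contained; with a real settings.py on disk the behavior is identical:

```python
import sys
import types

# Fabricate a tiny settings module in-process; it stands in for a
# real settings.py file on disk.
settings = types.ModuleType("settings")
settings.CONSTANT1 = "value1"
sys.modules["settings"] = settings

from settings import CONSTANT1   # snapshot: binds the current value locally
import settings as s             # reference to the module object itself

settings.CONSTANT1 = "changed"   # some other code updates the setting

print(CONSTANT1)    # still "value1" -- the from-import binding is stale
print(s.CONSTANT1)  # "changed" -- attribute access sees the update
```

The from-import copies the name binding at import time, so later reassignments of the module attribute never reach it; attribute access through the module object always reads the current value.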

step 1: create a new file settings.py in the same directory for easier access.
# database configuration settings
database = dict(
    DATABASE = "mysql",
    USER = "Lark",
    PASS = ""
)

# application predefined constants
app = dict(
    VERSION = 1.0,
    GITHUB = "{url}"
)
step 2: import the settings module into your application file.
import settings as s # s aliases settings; settings is the actual file, you do not have to add .py
print(s.database['DATABASE']) # should output mysql
print(s.app['VERSION']) # should output 1.0
if you would rather not use an alias like s, you can use a different syntax
from settings import database, app
print(database['DATABASE']) # should output mysql
print(app['VERSION']) # should output 1.0
notice that with the second import method you can use the dict names directly.
A small tip: you can import all the code in the settings file by using * in case you have a large file and will be using most of its settings in your application.
from settings import * # * imports every top-level name in the file; it will work like step 2
print(database['USER']) # should output Lark
print(app['VERSION']) # should output 1.0
I hope that helps.
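A side note on the dict-grouping approach above: because the settings are plain dicts, a whole group can be handed to a function at once with ** unpacking, assuming the function's keyword parameters match the dict keys. connect below is a hypothetical consumer, not part of any real database driver:

```python
# Grouped settings, as in the settings.py above
database = dict(
    DATABASE = "mysql",
    USER = "Lark",
    PASS = ""
)

# Hypothetical consumer whose keyword parameters happen to match the
# dict keys; a real driver will have its own parameter names.
def connect(DATABASE, USER, PASS):
    return "%s@%s" % (USER, DATABASE)

print(connect(**database))  # Lark@mysql
```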

When you import settings, a module object called settings is placed in the global namespace - and this object carries everything that was defined in settings.py as attributes. I.e. outside of settings.py, you refer to CONSTANT as settings.CONSTANT.

Leave your settings.py exactly as it is, then you can use it just as Django does:
import settings
def func():
    var = settings.CONSTANT

...Or, if you really want all the constants from settings.py to be imported into the global namespace, you can run
from settings import *
...but otherwise using settings.CONSTANT, as everyone else has mentioned here, is quite right.
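One hedge if you do go the from settings import * route: settings.py can limit what the star import exports by defining __all__. A self-contained sketch, synthesizing the module in-process rather than as a separate file:

```python
import sys
import types

# Synthesize a settings module; in real code this is settings.py on disk.
settings = types.ModuleType("settings")
settings.CONSTANT = 5
settings.SECRET_KEY = "do not export"
settings.__all__ = ["CONSTANT"]   # only these names leak out via *
sys.modules["settings"] = settings

from settings import *

print(CONSTANT)                       # 5
print("SECRET_KEY" in globals())      # False -- filtered out by __all__
```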

See the answer I posted to Can I prevent modifying an object in Python? which does what you want (as well as force the use of UPPERCASE identifiers). It might actually be a better answer for this question than it was for the other.

This way is more efficient since it loads/evaluates your settings variables only once. It works well for all my Python projects.
pip install python-settings
Docs here: https://github.com/charlsagente/python-settings
You need a settings.py file with all your defined constants like:
# settings.py
DATABASE_HOST = '10.0.0.1'
Then you need to either set an env variable (export SETTINGS_MODULE=settings) or manually call the configure method:
# something_else.py
from python_settings import settings
from . import settings as my_local_settings
settings.configure(my_local_settings) # configure() receives a python module
The utility also supports lazy initialization for objects that are expensive to load, so your Python project starts faster, since it only evaluates a settings variable when it's needed.
# settings.py
from python_settings import LazySetting
from my_awesome_library import HeavyInitializationClass # Heavy to initialize object
LAZY_INITIALIZATION = LazySetting(HeavyInitializationClass, "127.0.0.1:4222")
# LazySetting(Class, *args, **kwargs)
Just configure once and then call your variables wherever needed:
# my_awesome_file.py
from python_settings import settings
print(settings.DATABASE_HOST) # Will print '10.0.0.1'

I'm new to Python, but you can also define a constant as a function:
settings.py
def CONST1():
    return "some value"
main.py
import settings
print settings.CONST1() # read the constant's value
Here the value can't be reassigned from outside, but it has to be called like a function.

Try this:
In settings.py:
CONSTANT = 5
In your main file:
from settings import CONSTANT
class A:
    b = CONSTANT

    def printb(self):
        print self.b
I think your above error is coming from the settings file being imported too late. Make sure it's at the top of the file.

Also worth checking out is the simple-settings project, which allows you to feed the settings into your script at runtime, which allows for environment-specific settings (think dev, test, prod,...)

Related

python system to dynamically load variables and modules

I am trying to build an infrastructure which uses a text file to dynamically set up an environment in which code should run. The text file will contain paths of Python modules to load AND variables which other parts of my code will need. Thus, I am trying to use one mechanism for loading modules and variables (versus virtualenv just for modules and another scheme for variables).
my code would look like:
my_script_which_uses_env.py:
import envcontrol_module
import some_other_module_whose_path_is_unknown_at_run_time
def main():
    my_envcontrol = envcontrol_module.EnvControlClass()
    my_variable = my_envcontrol.get_value("SOME_VARIABLE")
Those using my infrastructure would be required to "import envcontrol_module" as the first line in their code. After that import is done, sys.path has already been modified to include the paths of the modules to load, i.e. when import envcontrol_module returns, sys.path has the path for some_other_module_whose_path_is_unknown_at_run_time.
At a high level, I think the following is needed:
1. "import envcontrol_module" needs to cause a reading of my config text file. This would create a dict with the variable names and values. Here I could modify sys.path.
2. Somehow keep this dict around such that when an instance of EnvControlClass is created, the class is able to access the dict.
envcontrol_module.py
def main():
    config_dict = read_config_file()
    modify_sys_path(config_dict)

class EnvControl():
    self.config_dict = config_dict
How can I do the above in a Python-friendly way:
1. the config dict is not global to envcontrol_module
2. maintain the need for this to be transparent to programmers; they simply do "import envcontrol_module" first and then do subsequent imports normally
I know I can do the following, but then the programmers would do the imports in their main() rather than at the top of the file, which would be different/not as nice/not python-esque:
my_script_which_uses_env.py:
import envcontrol_module
def main():
    my_envcontrol = envcontrol_module.EnvControlClass()
    import some_other_module_whose_path_is_unknown_at_run_time
    my_variable = my_envcontrol.get_value("SOME_VARIABLE")
envcontrol_module.py
class EnvControl():
    def __init__(self):
        self.config_dict = read_config_file()
        modify_sys_path(self.config_dict)
Not sure if I explained this well. If I haven't, let me know and I can clarify more.
Thanks!
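A sketch of the usual Python answer to requirement 1, for reference: module-level statements run exactly once, on first import, so the config read and sys.path changes can happen at the top level of envcontrol_module while the dict stays module-private by underscore convention. read_config_file and modify_sys_path here are stubbed placeholders, not real parsers:

```python
import sys

def read_config_file():
    # Placeholder: a real version would parse the text file of module
    # paths and variables described in the question.
    return {"SOME_VARIABLE": "42", "EXTRA_PATHS": ["/opt/extra_modules"]}

def modify_sys_path(config):
    for path in config.get("EXTRA_PATHS", []):
        if path not in sys.path:
            sys.path.append(path)

# Module-level code runs once, on first import -- so by the time
# "import envcontrol_module" returns, sys.path is already extended.
_config_dict = read_config_file()
modify_sys_path(_config_dict)

class EnvControlClass:
    # Instances just hold a reference to the module-private dict.
    def __init__(self):
        self._config = _config_dict

    def get_value(self, name):
        return self._config[name]
```

Because Python caches modules in sys.modules, a second "import envcontrol_module" elsewhere does not re-read the file; every consumer shares the same dict.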

Django app defaults?

I'm looking for a way to have application defaults and settings that are easy to use, difficult to get wrong, and have little overhead.
Currently I have it organized as follows:
myapp/defaults.py
# application defaults
import os
import sys

if sys.platform == 'win32':
    MYAPP_HOME_ROOT = os.path.dirname(os.environ['USERPROFILE'])
else:
    MYAPP_HOME_ROOT = '/home'
in my project I have:
mysite/settings.py
from myapp.defaults import * # import all default from myapp
MYAPP_HOME_ROOT = '/export/home' # overriding myapp.defaults
With this setup I could import and use settings in the regular django way (from django.conf import settings and settings.XXX).
update-3 (why we need this)
Default settings ("defaults"):
An application is more convenient to use if it can be configured by overriding a set of sensible default settings.
the application "has domain knowledge", so it makes sense for it to provide sensible defaults whenever possible.
it isn't convenient for a user of the application to need to provide all the settings needed by every app, it should be sufficient to override a small subset and leave the rest with default values.
it is very useful if defaults can react to the environment. You'll often want to do something different when DEBUG is True, but any other global setting could be useful: e.g. MYAPP_IDENTICON_DIR = os.path.join(settings.MEDIA_ROOT, 'identicons') (https://en.wikipedia.org/wiki/Identicon)
project (site/global) settings must override app-defaults, i.e. a user who defined MYAPP_IDENTICON_DIR = 's3://myappbucket.example.com/identicons' in the (global) settings.py file for their site should get this value, and not the application's default.
any solution that keeps close to the normal way of using settings (import .. settings; settings.FOO) is superior to a solution that needs new syntax (since new syntax will diverge and we would get new and unique ways to use settings from app to app).
the zen of python probably applies here:
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
(The original post posited the two key problems below, leaving the above assumptions unstated.)
Problem #1: When running unit tests for the app there is no site however, so settings wouldn't have any of the myapp.defaults.
Problem #2: There is also a big problem if myapp.defaults needs to use anything from settings (e.g. settings.DEBUG), since you can't import settings from defaults.py (since that would be a circular import).
To solve problem #1 I created a layer of indirection:
myapp/conf.py
from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

conf = Conf() # !<-- create Conf instance
and usage:
myapp/views.py
from .conf import conf as settings
...
print settings.MYAPP_HOME_ROOT # will print '/export/home' when used from mysite
This allows the conf.py file to work with an "empty" settings file too, and the myapp code can continue using the familiar settings.XXX.
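For readers who want to see the fallback in isolation, here is the same __getattr__ pattern with plain classes standing in for the settings and defaults modules (all names here are illustrative):

```python
class _Defaults:
    MYAPP_HOME_ROOT = "/home"          # app-provided default
    MYAPP_CACHE_SECONDS = 60

class _SiteSettings:
    MYAPP_HOME_ROOT = "/export/home"   # site override

class Conf(object):
    def __init__(self, settings, defaults):
        self._settings = settings
        self._defaults = defaults

    # __getattr__ only fires when normal lookup fails, so _settings and
    # _defaults (set in __init__) never recurse through it.
    def __getattr__(self, attr):
        try:
            return getattr(self._settings, attr)    # site settings win
        except AttributeError:
            return getattr(self._defaults, attr)    # fall back to defaults

conf = Conf(_SiteSettings(), _Defaults())
print(conf.MYAPP_HOME_ROOT)      # /export/home  (overridden)
print(conf.MYAPP_CACHE_SECONDS)  # 60            (default)
```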
It doesn't solve problem #2, defining application settings based on e.g. settings.DEBUG. My current solution is to add to the Conf class:
from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

    if settings.DEBUG:
        MYAPP_HOME_ROOT = '/Users'
    else:
        MYAPP_HOME_ROOT = '/data/media'

conf = Conf() # !<-- create Conf instance
but this is not satisfying since mysite can no longer override the setting, and myapp's defaults/settings are now spread over two files...
Is there an easier way to do this?
update-4: "just use the django test runner.."
The app you are testing relies on the Django framework - and you cannot get around the fact that you need to bootstrap the framework first before you can test the app. Part of the bootstrapping process is creating a default settings.py and further using the django-supplied test runners to ensure that your apps are being tested in the environment that they are likely to be run in.
While that sounds like it ought to be true, it doesn't actually make much sense, e.g. there is no such thing as a default settings.py (at least not for a reusable app). When talking about integration testing it makes sense to test an app with the site/settings/apps/database(s)/cache(s)/resource-limits/etc. that it will encounter in production. For unit testing, however, we want to test just the unit of code that we're looking at - with as few external dependencies (and settings) as possible. The Django test runner(s) do, and should, mock out major parts of the framework, so it can't be said to be running in any "real" environment.
While the Django test runner(s) are great, there is a long list of issues they don't handle. The two biggies for us are (i) running tests sequentially is so slow that the test suite goes unused (<5 mins when running in parallel, almost an hour when running sequentially), (ii) sometimes you just need to run tests on big databases (we restore last night's backup to a test database that the tests can run against - much too big for fixtures).
The people who made nose, py.test, twill, Selenium, and any of the fuzz testing tools really do know testing well, since that is their only focus. It would be a shame to not be able to draw on their collective experience.
I am not the first to have encountered this problem, and there doesn't look to be an easy or common solution. Here are two projects that have different solutions:
Update, python-oidc-provider method:
The python-oidc-provider package (https://github.com/juanifioren/django-oidc-provider) has another creative way to solve the app-settings/defaults problem. It uses properties to define defaults in a myapp/settings.py file:
from django.conf import settings

class DefaultSettings(object):
    @property
    def MYAPP_HOME_ROOT(self):
        return ...

default_settings = DefaultSettings()

def get(name):
    value = None
    try:
        value = getattr(default_settings, name)
        value = getattr(settings, name)
    except AttributeError:
        if value is None:
            raise Exception("Missing setting: " + name)
    return value
using a setting inside myapp becomes:
from myapp import settings
print settings.get('MYAPP_HOME_ROOT')
good: solves problem #2 (using settings when defining defaults), solves problem #1 (using default settings from tests).
bad: different syntax for accessing settings (settings.get('FOO') vs the normal settings.FOO), myapp cannot provide defaults for settings that will be used outside of myapp (the settings you get from from django.conf import settings will not contain any defaults from myapp). External code can do from myapp import settings to get regular settings and myapp defaults, but this breaks down if more than one app wants to do this...
Update2, the django-appconf package:
(Note: not related to Django's AppConfig..)
With django-appconf, app settings are created in myapp/conf.py, which needs to be loaded early - so you should probably import the conf.py file from models.py, since models.py is loaded early:
from django.conf import settings
from appconf import AppConf

class MyAppConf(AppConf):
    HOME_ROOT = '...'
usage:
from myapp.conf import settings
print settings.MYAPP_HOME_ROOT
AppConf will automagically add the MYAPP_ prefix, and also automagically detect if MYAPP_HOME_ROOT has been redefined/overridden in the project's settings.
pro: simple to use, solves problem #1 (accessing app-settings from tests), and problem #2 (using settings when defining defaults). As long as the conf.py file is loaded early, external code should be able to use defaults defined in myapp.
con: significantly magical. The name of the setting in conf.py is different from its usage (since appconf automatically adds the MYAPP_ prefix). External/opaque dependency.
I've just created django-app-defaults based on all of the requirements. It's basically a generalization of the second approach highlighted in the question (class Conf(object):).
Usage:
# my_app/defaults.py
# `django.conf.settings` or any other module can be imported if needed
# required
DEFAULT_SETTINGS_MODULE = True
# define default settings below
MY_DEFAULT_SETTING = "yey"
Then anywhere within your project:
from app_defaults import settings
print(settings.MY_DEFAULT_SETTING)
# yey
# All `django.conf.settings` are also available
print(settings.DEBUG)
# True
To load default settings for a single app instead of all of the apps, just do:
# Note: when apps or modules are explicitly passed,
# the `DEFAULT_SETTINGS_MODULE` var is not required
from app_defaults import Settings
settings = Settings(apps=["my_app"])
# or
from my_app import defaults
settings = Settings(modules=[defaults])
I have written a django package for the management of app settings called django-pluggableappsettings.
It's a package that allows you to define sensible defaults for your settings but also adds advanced features like type checking or callable settings. It makes use of metaclasses to allow for an easy definition of the app's settings. Of course this adds an external dependency to your project.
Edit 1:
Example usage could be as follows:
Install the package using pip:
pip install django-pluggableappsettings
Create your AppSettings class in any of your project's files. E.g. in 'app_settings.py'.
app_settings.py
from django_pluggableappsettings import AppSettings, Setting, IntSetting

class MyAppSettings(AppSettings):
    MY_SETTING = Setting('DEFAULT_VALUE')
    # Checks for "MY_SETTING" in django.conf.settings and
    # returns 'DEFAULT_VALUE' if it is not present
    ANOTHER_SETTING = IntSetting(42, aliases=['OTHER_SETTING_NAME'])
    # Checks for "ANOTHER_SETTING" or "OTHER_SETTING_NAME" in
    # django.conf.settings and returns 42 if it is not present.
    # It also validates that the given value is of type int
Import your MyAppSettings in any file and access its attributes
from mypackage.app_settings import MyAppSettings
MyAppSettings.MY_SETTING
MyAppSettings.ANOTHER_SETTING
Note that the settings are only initialized on first access, so if you never access a setting, its presence in django.conf.settings is never checked.
Problem #1: When running unit tests for the app there is no site
however, so settings wouldn't have any of the myapp.defaults.
This problem is solved by using the testing framework that comes with django (see the documentation) as it bootstraps the test environment correctly for you.
Keep in mind that django tests always run with DEBUG=False
Problem #2: There is also a big problem if myapp.defaults needs to use
anything from settings (e.g. settings.DEBUG), since you can't import
settings from defaults.py (since that would be a circular import).
This is not really a problem; once you import from myapp.defaults in myapp.settings, everything in settings will be in scope. So you don't need to import DEBUG; it is already available to you in the global scope.
When I am structuring apps, I try to define functionality in the form of mixins. Each setting should be picked up by one functionality mixin, if possible.
So in your example from above:
from django.conf import settings

class AppHomeRootMixin:
    home_root = getattr(settings, "MYAPP_HOME_ROOT", "/default/path/here")
Usage:
class MyView(AppHomeRootMixin, TemplateView):
    def dispatch(self, *args, **kwargs):
        print(self.home_root)
        return super().dispatch(*args, **kwargs)
This is really easy for another developer to see exactly what is going on, alleviates the need for third-party or first-party "magic", and allows us to think about things in terms of functionalities, rather than in terms of individual settings.
I just think that the settings layer of Django should be kept as simple as possible, and any complexity should be the responsibility of the view layer. I have run into a lot of confusing settings configurations that were created by other developers, and those configurations consumed a lot of my time when there was nobody there to explain what was going on under the hood.

Pass-through/export whole third party module (using __all__?)

I have a module that wraps another module to insert some shim logic in some functions. The wrapped module uses a settings module mod.settings which I want to expose, but I don't want the users to import it from there, in case I would like to shim something there as well in the future. I want them to import wrapmod.settings.
Importing the module and exporting it works, but is a bit verbose on the client side. It results in having to write settings.thing instead of just thing.
I want the users to be able to do from wrapmod.settings import * and get the same results as if they did from mod.settings import * but right now, only from wrapmod import settings is available. How do I work around this?
If I understand the situation correctly, you're writing a module wrapmod that is intended to transform parts of an existing package mod. The specific part you're transforming is the submodule mod.settings. You've imported the settings module and made your changes to it, but even though it is available as wrapmod.settings, you can't use that module name in an from ... import ... statement.
I think the best way to fix that is to insert the modified module into sys.modules under the new dotted name. This makes Python accept that name as valid even though wrapmod isn't really a package.
So wrapmod would look something like:
import sys
from mod import settings
# modify settings here
sys.modules['wrapmod.settings'] = settings # add this line!
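This is the same trick the six library uses for six.moves: the import machinery checks sys.modules before looking for a real package. A self-contained sketch, where the settings module is synthesized with types.ModuleType instead of coming from mod:

```python
import sys
import types

# Stand-in for the (possibly shimmed) mod.settings module.
settings = types.ModuleType("wrapmod.settings")
settings.CONSTANT = "shimmed value"

# Register it under the dotted name; "from wrapmod.settings import ..."
# now resolves via sys.modules, no real wrapmod package needed.
sys.modules["wrapmod.settings"] = settings

from wrapmod.settings import CONSTANT
print(CONSTANT)  # shimmed value
```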
I ended up making a code-generator for a thin wrapper module instead, since the sys.modules hacking broke all IDE integration.
from ... import mod

# this is just a pass-through wrapper around mod.settings
__all__ = mod.settings.__all__

# generate pass-through wrapper around mod.settings; doesn't break IDE
# integration, unlike manual sys.modules editing.
if __name__ == "__main__":
    for thing in mod.settings.__all__:
        print(thing + " = mod.settings." + thing)
which, when run as a script, outputs code that can then be appended to the end of this file.

import * from module named as a variable

I've got a situation where I need to import * from a list of modules which may or may not exist. So far, I've got a working solution that goes a little something like this:
for module in modules:
    try:
        imported = __import__(module, globals(), locals(), [], -1)
        for sub in module.split(".")[1:]:
            imported = getattr(imported, sub)
        for name in [n for n in dir(imported) if not n.startswith("_")]:
            globals()[name] = getattr(imported, name)
    except (ImportError, AttributeError):
        pass
This works, but is a total mess and confuses linters no end when they're looking for the variables that get imported in this way. I'm certain I'm missing something in the way __import__ works, since surely from foo import * doesn't generate such a horrible call as I've got above. Can anyone enlighten me?
Update:
The actual use case here is refactoring a django project settings file, to move app-specific settings into a settings file within that app. Currently, all settings are in the main settings file, which has become unmaintainably large. I need to retain the ability to override any of the settings defined in that way from another settings file (client_settings.py), which gets imported as from foo_project.client_settings import * at the bottom of the settings file.
Doing from app.settings import * for each app would work, but I want to drive this from the installed apps setting to maintain DRY.
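For what it's worth, the loop can be made less horrible with importlib.import_module, which handles the dotted-name traversal that the extra getattr loop does above, and can honor a module's __all__ when present. import_star and the installed-apps loop here are a sketch, not tested against a real Django settings file:

```python
import importlib

def import_star(module_name, namespace):
    """Roughly emulate "from <module_name> import *" into namespace,
    silently skipping modules that do not exist."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return
    names = getattr(module, "__all__", None)
    if names is None:
        # No __all__: fall back to all public (non-underscore) names.
        names = [n for n in dir(module) if not n.startswith("_")]
    for name in names:
        namespace[name] = getattr(module, name)

# Driven from the installed-apps list to stay DRY, e.g.:
# for app in INSTALLED_APPS:
#     import_star(app + ".settings", globals())

import_star("math", globals())                   # stdlib stand-in for an app
import_star("no_such_app.settings", globals())   # missing module: skipped
print(pi)  # 3.141592653589793
```

It still won't satisfy linters, since the names only appear at runtime, but it reads more clearly than the raw __import__ call.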

My settings.py file in Django is getting too long. How do I split it into two?

How do I do that? Split it into two files?
Although my solution is not as complex as the ones above, it fits my simple needs: I have some imports in my settings.py file:
try:
    from settings_local import *
except ImportError:
    pass

try:
    from settings_production import *
except ImportError:
    pass
And then I have a settings_local.py file in my local development folder (which I don't upload to the server) where I override local settings. Then I have a settings_production.py file on the server where I keep settings needed for the production environment.
You can use this technique to import other settings files as well.
Just put it in any file you like and import it somewhere in your main settings file.
So you could set up new settings my_new_settings.py anywhere django can reach, and import it at the end of your real settings.py.
# settings.py
# ...
from my_new_settings import *
These pages might help: discussion on SO, discussion on djangoproject.com
Create new_settings.py file to contain part of settings.py, and import that file wherever you need it.
I'm not a pythonista, so my take might just be the most un-pythonic way to do this. Having said that, this is how I split the settings for our Django app:
Instead of a single settings.py file, we have settings/__init__.py. It contains imports for all the settings sections, which are placed in the settings directory.
# settings/__init__.py
from .foo import *
from .bar import *

# settings/foo.py
FOO = "test"

# settings/bar.py
BAR = "quz"
From the perspective of the application, this is still the same old settings module; from yours, it's a clean structure of configuration data.
