I have a very simple model that has data in it that I need to use in various places in my application:
class Setting(models.Model):
    name = models.CharField(max_length=200)  # CharField requires a max_length
    value = models.TextField()
I'd like to be able to load this information into a dictionary, then ship that data around my application so I don't have to make duplicate calls to the database. My attempt at doing so was wrapping the logic in a module like so (the print statement is there for debugging):
my_settings.py
from myapp import models

class Settings:
    __settings = {}

    def __init__(self):
        if not self.__class__.__settings:
            print("===== Loading settings from table =====")
            qs = models.Setting.objects.all()
            for x in qs:
                self.__class__.__settings[x.name] = x.value

    def get(self, key, default=None):
        return self.__class__.__settings.get(key, default)

    def getint(self, key, default=0):
        return int(self.__class__.__settings.get(key, default))
Using this module would then look like the following:
from my_settings import Settings

# Down in some view somewhere...
settings = Settings()
data = settings.get("some_key")
...

# Now we might be in a helper function somewhere, but still in the
# same view context as above. Note that we should not have made
# a database round trip here; we're using our memory store instead.
settings = Settings()
data = settings.get("another_key")
This seems to work fine, but it has the drawback that the data is loaded once (and only once) at the initial instantiation. If any of the data in the settings database table should change, those changes won't be reflected in the corresponding dictionary held by this class.
Is there a better approach here? I don't mind having a single database query per request, but I also don't want to have to pass the dictionary around from function to function. I was hoping a module-level wrapper would get me the "singleton"-ness that I desire, but it's apparently caching things more aggressively than I thought it would.
I would just not worry about it for now. Once you go into production, either do blanket caching with cachalot or write your own rough cache for just this model.
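For instance, a rough cache for just this model might look like the sketch below; the 60-second TTL and the reload-on-expiry behaviour are my own choices, not anything Django provides:

import time
from myapp import models

class Settings:
    __settings = {}
    __loaded_at = 0.0
    __ttl = 60  # seconds before a reload is allowed; tune to taste

    def __init__(self):
        cls = self.__class__
        now = time.monotonic()
        if not cls.__settings or now - cls.__loaded_at > cls.__ttl:
            print("===== Loading settings from table =====")
            cls.__settings = {s.name: s.value for s in models.Setting.objects.all()}
            cls.__loaded_at = now

    def get(self, key, default=None):
        return self.__class__.__settings.get(key, default)

Within one TTL window every Settings() construction is served from memory; after a change to the table you wait at most one TTL to see it.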
In my endless quest in over-complicating simple stuff, I am researching the most 'Pythonic' way to provide global configuration variables inside the typical 'config.py' found in Python egg packages.
The traditional way (aah, good ol' #define!) is as follows:
MYSQL_PORT = 3306
MYSQL_DATABASE = 'mydb'
MYSQL_DATABASE_TABLES = ['tb_users', 'tb_groups']
The global variables are then imported in one of the following ways:
from config import *

dbname = MYSQL_DATABASE
for table in MYSQL_DATABASE_TABLES:
    print(table)
or:
import config
dbname = config.MYSQL_DATABASE
assert(isinstance(config.MYSQL_PORT, int))
It makes sense, but it can sometimes be a little messy, especially when you're trying to remember the names of certain variables. Besides, providing a 'configuration' object, with variables as attributes, might be more flexible. So, taking a lead from the bpython config.py file, I came up with:
class Struct(object):
    def __init__(self, *args):
        self.__header__ = str(args[0]) if args else None

    def __repr__(self):
        if self.__header__ is None:
            return super(Struct, self).__repr__()
        return self.__header__

    def next(self):
        """ Fake iteration functionality.
        """
        raise StopIteration

    def __iter__(self):
        """ Fake iteration functionality.
        We skip magic attributes and Structs, and return the rest.
        """
        ks = self.__dict__.keys()
        for k in ks:
            # check the attribute's value, not the key, so nested Structs are skipped
            if not k.startswith('__') and not isinstance(getattr(self, k), Struct):
                yield getattr(self, k)

    def __len__(self):
        """ Don't count magic attributes or Structs.
        """
        ks = self.__dict__.keys()
        return len([k for k in ks
                    if not k.startswith('__')
                    and not isinstance(getattr(self, k), Struct)])
and a 'config.py' that imports the class and reads as follows:
from _config import Struct as Section

mysql = Section("MySQL specific configuration")
mysql.user = 'root'
mysql.passwd = 'secret'  # 'pass' is a reserved word in Python, so use another name
mysql.host = 'localhost'
mysql.port = 3306
mysql.database = 'mydb'

mysql.tables = Section("Tables for 'mydb'")
mysql.tables.users = 'tb_users'
mysql.tables.groups = 'tb_groups'
and is used in this way:
from sqlalchemy import MetaData, Table
import config as CONFIG

assert(isinstance(CONFIG.mysql.port, int))

mdata = MetaData(
    "mysql://%s:%s@%s:%d/%s" % (
        CONFIG.mysql.user,
        CONFIG.mysql.passwd,
        CONFIG.mysql.host,
        CONFIG.mysql.port,
        CONFIG.mysql.database,
    )
)

tables = []
for name in CONFIG.mysql.tables:
    tables.append(Table(name, mdata, autoload=True))
This seems a more readable, expressive, and flexible way of storing and fetching global variables inside a package.
Lamest idea ever? What is the best practice for coping with these situations? What is your way of storing and fetching global names and variables inside your package?
How about just using the built-in types like this:
config = {
    "mysql": {
        "user": "root",
        "pass": "secret",
        "tables": {
            "users": "tb_users"
        }
        # etc
    }
}
You'd access the values as follows:
config["mysql"]["tables"]["users"]
If you are willing to sacrifice the potential to compute expressions inside your config tree, you could use YAML and end up with a more readable config file like this:
mysql:
  user: root
  pass: secret
  tables:
    users: tb_users
and use a library like PyYAML to conveniently parse and access the config file.
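A minimal sketch of the loading step, assuming the config lives in a file named config.yaml next to your code:

import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)  # safe_load avoids constructing arbitrary Python objects

print(config["mysql"]["tables"]["users"])  # tb_users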
I like this solution for small applications:
class App:
    __conf = {
        "username": "",
        "password": "",
        "MYSQL_PORT": 3306,
        "MYSQL_DATABASE": 'mydb',
        "MYSQL_DATABASE_TABLES": ['tb_users', 'tb_groups']
    }
    __setters = ["username", "password"]

    @staticmethod
    def config(name):
        return App.__conf[name]

    @staticmethod
    def set(name, value):
        if name in App.__setters:
            App.__conf[name] = value
        else:
            raise NameError("Name not accepted in set() method")
And then usage is:
if __name__ == "__main__":
    # from config import App
    App.config("MYSQL_PORT")     # return 3306
    App.set("username", "hi")    # set new username value
    App.config("username")       # return "hi"
    App.set("MYSQL_PORT", "abc") # this raises NameError
.. you should like it because it:

- uses class variables (no object to pass around, no singleton required),
- uses encapsulated built-in types and looks like (is) a method call on App,
- has control over individual config immutability; mutable globals are the worst kind of globals,
- promotes conventional, well-named access and readability in your source code,
- is a simple class but enforces structured access (an alternative is to use @property, but that requires more variable-handling code per item and is object-based),
- requires minimal changes to add new config items and set their mutability.
--Edit--:
For large applications, storing values in a YAML (i.e. properties) file and reading that in as immutable data is a better approach (i.e. blubb/ohaal's answer).
For small applications, this solution above is simpler.
How about using classes?
# config.py
class MYSQL:
    PORT = 3306
    DATABASE = 'mydb'
    DATABASE_TABLES = ['tb_users', 'tb_groups']

# main.py
from config import MYSQL

print(MYSQL.PORT) # 3306
Let's be honest, we should probably consider using a Python Software Foundation maintained library:
https://docs.python.org/3/library/configparser.html
Config example (INI format; the standard library also offers json if you prefer JSON):
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes
[bitbucket.org]
User = hg
[topsecret.server.com]
Port = 50022
ForwardX11 = no
Code example:
>>> import configparser
>>> config = configparser.ConfigParser()
>>> config.read('example.ini')
>>> config['DEFAULT']['Compression']
'yes'
>>> config['DEFAULT'].getboolean('MyCompression', fallback=True) # get_or_else
Making it globally-accessible:
import configparser

class App:
    __conf = None

    @staticmethod
    def config():
        if App.__conf is None:  # Read only once, lazily.
            App.__conf = configparser.ConfigParser()
            App.__conf.read('example.ini')
        return App.__conf

if __name__ == '__main__':
    App.config()['DEFAULT']['MYSQL_PORT']
    # or, better:
    App.config().get(section='DEFAULT', option='MYSQL_PORT', fallback=3306)
    ....
Downsides:
Uncontrolled global mutable state.
A small variation on Husky's idea that I use. Make a file called 'globals' (or whatever you like) and then define multiple classes in it, as such:
# globals.py
class dbinfo:    # for database globals
    username = 'abcd'
    password = 'xyz'

class runtime:   # for runtime globals
    debug = False
    output = 'stdio'
Then, if you have two code files c1.py and c2.py, both can have at the top
import globals as gl
Now all code can access and set values, as such:
gl.runtime.debug = False
print(gl.dbinfo.username)
People forget classes exist, even if no object belonging to that class is ever instantiated. Variables in a class that aren't preceded by 'self.' are class attributes, shared across all instances of the class, even if there are none. Once 'debug' is changed by any code, all other code sees the change.
By importing it as gl, you can have multiple such files and variables that lets you access and set values across code files, functions, etc., but with no danger of namespace collision.
This lacks some of the clever error checking of other approaches, but is simple and easy to follow.
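For instance, the sharing described above can be seen directly in the two files mentioned earlier (assuming c1.py runs its assignment before c2.py reads it):

# c1.py
import globals as gl
gl.runtime.debug = True   # sets the attribute on the class itself

# c2.py
import globals as gl
print(gl.dbinfo.username) # 'abcd'
print(gl.runtime.debug)   # True -- both files see the same class attribute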
Similar to blubb's answer. I suggest building them with lambda functions to reduce code. Like this:
User = lambda passwd, hair, name: {'password':passwd, 'hair':hair, 'name':name}
#Col  Username   Password     Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          #...
         }
#...
config['st3v3']['password'] #> password
config['blubb']['hair'] #> black
This does smell like you may want to make a class, though.
Or, as MarkM noted, you could use a namedtuple:

from collections import namedtuple
#...
User = namedtuple('User', ['password', 'hair', 'name'])

#Col  Username   Password     Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          #...
         }
#...
config['st3v3'].password #> password
config['blubb'].hair     #> black
I did that once. Ultimately I found my simplified basicconfig.py adequate for my needs. You can pass in a namespace with other objects for it to reference if you need to. You can also pass in additional defaults from your code. It also maps attribute and mapping style syntax to the same configuration object.
Please check out the IPython configuration system, implemented via traitlets, which gives you the type enforcement you are doing manually.
The relevant documentation is cut and pasted here to comply with SO guidelines against just dropping links, since the content of links changes over time.
traitlets documentation
Here are the main requirements we wanted our configuration system to have:
Support for hierarchical configuration information.
Full integration with command line option parsers. Often, you want to read a configuration file, but then override some of the values with command line options. Our configuration system automates this process and allows each command line option to be linked to a particular attribute in the configuration hierarchy that it will override.
Configuration files that are themselves valid Python code. This accomplishes many things. First, it becomes possible to put logic in your configuration files that sets attributes based on your operating system, network setup, Python version, etc. Second, Python has a super simple syntax for accessing hierarchical data structures, namely regular attribute access (Foo.Bar.Bam.name). Third, using Python makes it easy for users to import configuration attributes from one configuration file to another.
Fourth, even though Python is dynamically typed, it does have types that can be checked at runtime. Thus, a 1 in a config file is the integer ‘1’, while a '1' is a string.
A fully automated method for getting the configuration information to the classes that need it at runtime. Writing code that walks a configuration hierarchy to extract a particular attribute is painful. When you have complex configuration information with hundreds of attributes, this makes you want to cry.
Type checking and validation that doesn’t require the entire configuration hierarchy to be specified statically before runtime. Python is a very dynamic language and you don’t always know everything that needs to be configured when a program starts.
To achieve this they basically define 3 object classes and their relations to each other:

1) Configuration - basically a ChainMap / basic dict with some enhancements for merging.
2) Configurable - base class to subclass all things you'd wish to configure.
3) Application - object that is instantiated to perform a specific application function, or your main application for single-purpose software.
In their words:
Application
An application is a process that does a specific job. The most obvious application is the ipython command line program. Each application reads one or more configuration files and a single set of command line options and then produces a master configuration object for the application. This configuration object is then passed to the configurable objects that the application creates. These configurable objects implement the actual logic of the application and know how to configure themselves given the configuration object.
Applications always have a log attribute that is a configured Logger. This allows centralized logging configuration per-application.
Configurable
A configurable is a regular Python class that serves as a base class for all main classes in an application. The Configurable base class is lightweight and only does one thing.
This Configurable is a subclass of HasTraits that knows how to configure itself. Class level traits with the metadata config=True become values that can be configured from the command line and configuration files.
Developers create Configurable subclasses that implement all of the logic in the application. Each of these subclasses has its own configuration information that controls how instances are created.
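A small sketch of what this looks like in practice; the Database class and its traits here are made up for illustration, and a real Application would assemble the Config from files and command-line flags rather than by hand:

from traitlets import Int, Unicode
from traitlets.config import Config, Configurable

class Database(Configurable):
    # Traits tagged config=True can be set from config files or the command line
    host = Unicode('localhost', help="database host").tag(config=True)
    port = Int(3306, help="database port").tag(config=True)

# Build a configuration object by hand for demonstration purposes
c = Config()
c.Database.host = 'db.example.com'
c.Database.port = 3307

db = Database(config=c)
print(db.host, db.port)  # db.example.com 3307

Traits also give you runtime type checking: assigning db.port = "abc" raises a TraitError instead of silently storing a string.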
I'm somewhat new to Python (but not at all to programming; I'm already fluent in Perl, PHP, and Ruby).
I'm running into a major difference between all my previous languages and Python: namely, how to easily read data from another file into an entire project.
There are two major examples of this that I would like to solve:
1) I would like to have a settings/config file (in YAML) that gets read and parsed in a config.py file. I then want a resulting dict() to be accessible to all my other files in the project. I have figured out how to do this via from lib.project.config import cfg, but that means that for each file that imports the configs, the system has to re-parse the YAML. That seems just silly to me. Is there no way to have that file parsed just once and then have the results accessible to any other file in my project?
2) I would like to import a database.py file that then looks at my configs to see if we need to import a sqlite3, mysql, or postgresql version of a database class. Again, I can manage this by putting the logic directly in each file that needs the database class. But I hate having to paste code like
if cfg.get('db_type') == 'sqlite':
    from lib.project.databases.sqlite3 import database
elif cfg.get('db_type') == 'mysql':
    from lib.project.databases.mysql import database
at the top of each file that needs the database class. I'd much rather just add:
import lib.project.database
Any help would be very much appreciated.
I've done a good deal of googling and SO searching and haven't found my answers. Hopefully one of you wizzes out there can help.
Thanks.
UPDATE:
The reason I'm doing things this way (for #2) is because I'm also trying to make other classes inherit the database class.
So lib/project/databases/sqlite.py is a definition of a database class.
And so is lib/project/databases/mysql.py.
The idea being that after this import is complete I can import classes like the users class and define it like so:
class user(database):
    ...
And thus inherit all of the structure and methods of the database class.
Your suggestions to simply create an instance based on the sqlite/mysql logic/decision and then pass that where it needs to be is a good solution for that. But I need a bit more...
Ideas?
Thank you all for your help in understanding more about Python and how to get what I'm looking for done.
Here is what I ended up doing:
1) The config file issue:
This apparently was mostly solved to begin with. I confirmed what everyone was saying: that you can "import" a file as many times as you like, but it is only ever parsed/processed/compiled once. As for making it reachable to any file that needs it:
Config class:
import os
import yaml

class Config:
    def __init__(self):
        self.path = os.getcwd()
        with open(self.path + "/conf/config.yaml", 'r') as stream:
            data = yaml.safe_load(stream)  # safe_load: no arbitrary object construction
        for k in data.keys():
            setattr(self, k, data.get(k))
        # Optional local overrides
        if os.path.isfile(self.path + "/conf/config-override.yaml"):
            with open(self.path + "/conf/config-override.yaml", 'r') as stream:
                data = yaml.safe_load(stream)
            for k in data.keys():
                setattr(self, k, data.get(k))

config = Config()
And then any file that wants to use it:
from lib.project.Config import config
This is all working swimmingly so far.
2) Dynamic database type for Database class:
I just altered my overall design a tiny bit to make a Database class (mostly empty) inherit from either the Sqlite or Mysql classes (both custom builds which are wrappers to existing Sqlite3 and mysql-connector classes). This way there is always a solid Database class to inherit from, I only load in what files I need to, and it's all defined by my config file. Example:
Database class:
from lib.project.Config import config
if config.db_type == 'sqlite' :
from lib.project.databases.Sqlite import Sqlite
elif config.db_type == 'mysql':
from lib.project.databases.Mysql import Mysql
class Database(Sqlite if config.db_type == 'sqlite' else Mysql):
''' documentation '''
I'd still love to hear people's feedback on this code/method.
As I said, I'm still newish to Python and could still be missing something.
Thanks again everyone.
1) You're all set - the file will only be read once.
2) Agreed, you don't want to copy-and-paste code like that. If you want to use one instance of the database throughout your project, you'd do something like:

import lib.project

db = lib.project.database

where db is just a local variable used to access your already-created database. You would instantiate your database as database (resolving whether to use sqlite3 or mysql with the if/elif code, as you've done) in lib/project/databases.py, and then in your lib/project/__init__.py file you would do a from .databases import database.
If you want to use multiple databases (of the same sqlite3/mysql type) throughout your project, you would resolve which class Database is bound to (or inherits from) in databases.py. Note that you must import the class, not the module, for the subclassing to work:

from lib.project.Config import config

if config.db_type == 'sqlite':
    from lib.project.databases.Sqlite import Sqlite as Db
elif config.db_type == 'mysql':
    from lib.project.databases.Mysql import Mysql as Db

class Database(Db):
    '''docs'''
I'm trying to use the Django LocMemCache for storing a few simple values, but I'm not sure how to initialize the cache when Django starts. Django 1.7 comes with Applications, allowing you to run some code in AppConfig.ready(); that place would be perfect to initialize the cache, but according to the Django 1.7 documentation:
" ... Although you can access model classes as described above, avoid
interacting with the database in your ready() implementation."
So, let's say I want to store some DB queries from my model:
x = MyModel.objects.count()
y = MyModel.(a really expensive query)
How and when should I init the cache? Is there a recommended "best practice" for doing that?
Currently, I have just added the following cache.py to my application, but I'm not sure whether my code hits the database once (i.e. on the first request) and then uses the cached value for the following requests.
# cache.py
from django.core.cache import caches
from .models import MyModel

class Cache(object):
    def __init__(self):
        self.__count = MyModel.objects.count()
        self.cache = caches['cache-storage']

    @property
    def total_count(self):
        return self.cache.get('total_count', self.__count)
Then I use the cached values in this way:
# view.py
from .cache import Cache

cache = Cache()
...
# (some view)
counter = cache.total_count
That tip about using databases in ready() is generally good advice but isn't always going to work out. Some applications need to access the database based on a schedule instead of a view. The only way I've seen to implement that is from within ready().
This does create an issue because all manage.py commands will run the code in ready(). This means that the application would be accessing the database with your runtime code while running migrations or creating super users which isn't ideal.
I avoid this issue by adding a sys.argv check in ready().
import sys
from django.apps import AppConfig

class MyApp(AppConfig):
    name = 'MyApp'

    def ready(self):
        if 'runserver' in sys.argv:
            pass  # your code here
I am trying to implement a simple database program in python. I get to the point where I have added elements to the db, changed the values, etc.
class db:
    def __init__(self):
        self.database = {}

    def dbset(self, name, value):
        self.database[name] = value

    def dbunset(self, name):
        self.dbset(name, 'NULL')

    def dbnumequalto(self, value):
        mylist = [v for k, v in self.database.items() if v == value]
        return mylist

def main():
    mydb = db()
    cmd = input().rstrip().split(" ")
    while cmd[0] != 'end':
        if cmd[0] == 'set':
            mydb.dbset(cmd[1], cmd[2])
        elif cmd[0] == 'unset':
            mydb.dbunset(cmd[1])
        elif cmd[0] == 'numequalto':
            print(len(mydb.dbnumequalto(cmd[1])))
        elif cmd[0] == 'list':
            print(mydb.database)
        cmd = input().rstrip().split(" ")

if __name__ == '__main__':
    main()
Now, as a next step, I want to be able to do nested transactions within this code. I begin a set of commands with a BEGIN command and then commit them with a COMMIT statement. A COMMIT should commit all the transactions that began. However, a ROLLBACK should revert the changes back to the most recent BEGIN. I am not able to come up with a suitable solution for this.
A simple approach is to keep a "transaction" list containing all the information you need to be able to roll back pending changes (note that self.transaction must be initialized in __init__):

def __init__(self):
    self.database = {}
    self.transaction = []  # undo log of (name, old_value) pairs

def dbset(self, name, value):
    self.transaction.append((name, self.database.get(name)))
    self.database[name] = value

def rollback(self):
    # undo all changes, most recent first
    while self.transaction:
        name, old_value = self.transaction.pop()
        if old_value is None:
            self.database.pop(name, None)  # the key did not exist before
        else:
            self.database[name] = old_value

def commit(self):
    # everything went fine, drop the undo information
    self.transaction = []
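To get the nested BEGIN/ROLLBACK/COMMIT semantics the question asks for, one sketch is to keep a stack of such undo logs, pushing a fresh one on each BEGIN (this assumes self.transactions = [] is initialized in __init__, and the method names mirror the ones above):

def begin(self):
    self.transactions.append([])  # each BEGIN opens a fresh undo log

def dbset(self, name, value):
    if self.transactions:  # only record undo info inside a transaction
        self.transactions[-1].append((name, self.database.get(name)))
    self.database[name] = value

def rollback(self):
    # revert changes back to the most recent BEGIN
    if self.transactions:
        for name, old_value in reversed(self.transactions.pop()):
            if old_value is None:
                self.database.pop(name, None)
            else:
                self.database[name] = old_value

def commit(self):
    # commit everything that began: drop all pending undo logs
    self.transactions.clear()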
If you are doing this as an academic exercise, you might want to check out the Rudimentary Database Engine recipe on the Python Cookbook. It includes quite a few classes to facilitate what you might expect from a SQL engine.
Database is used to create database instances without transaction support.
Database2 inherits from Database and provides for table transactions.
Table implements database tables along with various possible interactions.
Several other classes act as utilities, supporting database actions that other engines would normally provide:
Like and NotLike implement the LIKE operator found in other engines.
date and datetime are special data types usable for database columns.
DatePart, MID, and FORMAT allow information selection in some cases.
In addition to the classes, there are functions for JOIN operations along with tests / demonstrations.
This is all available for free in the built-in sqlite3 module. The commits and rollbacks for sqlite3 are discussed in more detail in the Python documentation.
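A minimal sketch of those semantics with sqlite3; the table and values here are made up for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (name TEXT PRIMARY KEY, value TEXT)")
conn.commit()

# Using the connection as a context manager commits on success
# and rolls back automatically if an exception escapes the block.
try:
    with conn:
        conn.execute("INSERT INTO kv VALUES (?, ?)", ("a", "1"))
        conn.execute("INSERT INTO kv VALUES (?, ?)", ("a", "2"))  # violates PRIMARY KEY
except sqlite3.IntegrityError:
    pass

# [] -- the whole transaction, including the first insert, was rolled back
print(conn.execute("SELECT * FROM kv").fetchall())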