I have a Django application that loads a huge array into memory after Django is started, for further processing.
What I do now is: after a specific view is loaded, I execute the code to load the array as follows:
try:
    load_model = load_model_func()
except:
    # some exception handling
The code is quite bad right now. I want to create a class that loads this model once after Django starts, and I want to be able to get this model from all the other methods of the application.
So, is there a good practice for loading data into memory after Django starts?
@Ken4scholar's answer is a good response in terms of "good practice for loading data AFTER Django starts". It doesn't really address storing/accessing it.
If your data doesn't expire/change unless you reload the application, then using a cache as suggested in the comments is redundant, and depending on the cache storage you use you might still end up pulling from some db, creating overhead.
You can simply store it in-memory, like you are already doing. Adding an extra class to the mix won't really add anything; a simple global variable will do.
Consider something like this:
# my_app/heavy_model.py
data = None

def load_model():
    global data
    data = load_your_expensive_model()

def get_data():
    if data is None:
        raise Exception("Expensive model not loaded")
    return data
and in your config (you can learn more about applications and config in the Django applications documentation):
# my_app/app_config.py
from django.apps import AppConfig
from my_app.heavy_model import load_model

class MyAppConfig(AppConfig):
    # ...
    def ready(self):
        load_model()
then you can simply call my_app.heavy_model.get_data directly in your views.
⚠️ You can also access the global data variable directly, but a wrapper around it may be nicer to have (also in case you want to create more abstractions around it later).
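For example, a minimal view sketch (the view name and response shape here are made up):

# my_app/views.py -- sketch; view name and response shape are hypothetical
from django.http import JsonResponse
from my_app.heavy_model import get_data

def model_info(request):
    model = get_data()  # raises if ready() never loaded the model
    return JsonResponse({"size": len(model)})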
You can run a startup script when apps are initialized. To do this, call the script in the ready method of your AppConfig class in apps.py, like this:
class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        <import and run script>
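A minimal sketch of what that could look like, assuming a hypothetical myapp/startup.py module exposing an init() function:

# myapp/apps.py -- sketch; the startup module and init() are assumptions
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # import lazily so the app registry is fully populated first
        from myapp import startup
        startup.init()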
I have a module models.py with some data logic:
from peewee import PostgresqlDatabase  # assuming peewee here

db = PostgresqlDatabase(database='database', user='user')
# models logic
and a Flask app which actually interacts with the database:
from models import db, User, ...
But I want to move all of the initialization settings into the Flask app's config file, so that I can separate importing db from the other stuff (I need this to get access to the module variable db inside models):
import models
from models import User, ...

app.config.from_object(os.environ['APP_SETTINGS'])
models.db = PostgresqlDatabase(database=app.config['db'],
                               user=app.config['db'])
and then use the db as models.db from there on.
But this seems kind of ugly: duplicated imports, inconsistent usage of module members...
Is there a better way to handle this situation?
I'd recommend one level of indirection, so that your code becomes like this:
import const
import runtime

def foo():
    runtime.db.execute("SELECT ... LIMIT ?", (..., const.MAX_ROWS))
You get:
clear separation of leaf module const
mocking and/or reloading is possible
uniform and concise access in all user modules
To get a richer API on runtime, use the "replace module with object at import" trick (see __getattr__ on a module, PEP 562).
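For instance, a minimal sketch of such a runtime module using PEP 562's module-level __getattr__ (the init() helper and the _db name are invented for illustration):

# runtime.py -- sketch; init() and _db are made-up names
_db = None

def init(db):
    """Install the real connection once at startup."""
    global _db
    _db = db

def __getattr__(name):
    # PEP 562: called for attributes not found in the module
    if name == "db":
        if _db is None:
            raise RuntimeError("runtime.init() has not been called yet")
        return _db
    raise AttributeError(name)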
In my endless quest to over-complicate simple stuff, I am researching the most 'Pythonic' way to provide global configuration variables inside the typical 'config.py' found in Python egg packages.
The traditional way (aah, good ol' #define!) is as follows:
MYSQL_PORT = 3306
MYSQL_DATABASE = 'mydb'
MYSQL_DATABASE_TABLES = ['tb_users', 'tb_groups']
The global variables are then imported in one of the following ways:
from config import *

dbname = MYSQL_DATABASE
for table in MYSQL_DATABASE_TABLES:
    print(table)
or:
import config
dbname = config.MYSQL_DATABASE
assert(isinstance(config.MYSQL_PORT, int))
It makes sense, but sometimes can be a little messy, especially when you're trying to remember the names of certain variables. Besides, providing a 'configuration' object, with variables as attributes, might be more flexible. So, taking a lead from bpython config.py file, I came up with:
class Struct(object):
    def __init__(self, *args):
        self.__header__ = str(args[0]) if args else None

    def __repr__(self):
        if self.__header__ is None:
            return super(Struct, self).__repr__()
        return self.__header__

    def next(self):
        """ Fake iteration functionality.
        """
        raise StopIteration

    def __iter__(self):
        """ Fake iteration functionality.
        We skip magic attributes and Structs, and return the rest.
        """
        ks = self.__dict__.keys()
        for k in ks:
            if not k.startswith('__') and not isinstance(getattr(self, k), Struct):
                yield getattr(self, k)

    def __len__(self):
        """ Don't count magic attributes or Structs.
        """
        ks = self.__dict__.keys()
        return len([k for k in ks if not k.startswith('__')
                    and not isinstance(getattr(self, k), Struct)])
and a 'config.py' that imports the class and reads as follows:
from _config import Struct as Section

mysql = Section("MySQL specific configuration")
mysql.user = 'root'
mysql.passwd = 'secret'  # 'pass' is a reserved word, so it can't be used as an attribute name
mysql.host = 'localhost'
mysql.port = 3306
mysql.database = 'mydb'

mysql.tables = Section("Tables for 'mydb'")
mysql.tables.users = 'tb_users'
mysql.tables.groups = 'tb_groups'
and is used in this way:
from sqlalchemy import MetaData, Table
import config as CONFIG

assert(isinstance(CONFIG.mysql.port, int))

mdata = MetaData(
    "mysql://%s:%s@%s:%d/%s" % (
        CONFIG.mysql.user,
        CONFIG.mysql.passwd,
        CONFIG.mysql.host,
        CONFIG.mysql.port,
        CONFIG.mysql.database,
    )
)

tables = []
for name in CONFIG.mysql.tables:
    tables.append(Table(name, mdata, autoload=True))
Which seems a more readable, expressive and flexible way of storing and fetching global variables inside a package.
Lamest idea ever? What is the best practice for coping with these situations? What is your way of storing and fetching global names and variables inside your package?
How about just using the built-in types like this:
config = {
    "mysql": {
        "user": "root",
        "pass": "secret",
        "tables": {
            "users": "tb_users"
        }
        # etc
    }
}
You'd access the values as follows:
config["mysql"]["tables"]["users"]
If you are willing to sacrifice the potential to compute expressions inside your config tree, you could use YAML and end up with a more readable config file like this:
mysql:
  user: root
  pass: secret
  tables:
    users: tb_users
and use a library like PyYAML to conveniently parse and access the config file.
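For example, a minimal loading sketch, assuming the snippet above is saved as config.yml:

import yaml  # pip install pyyaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

print(config["mysql"]["tables"]["users"])  # tb_users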
I like this solution for small applications:
class App:
    __conf = {
        "username": "",
        "password": "",
        "MYSQL_PORT": 3306,
        "MYSQL_DATABASE": 'mydb',
        "MYSQL_DATABASE_TABLES": ['tb_users', 'tb_groups']
    }
    __setters = ["username", "password"]

    @staticmethod
    def config(name):
        return App.__conf[name]

    @staticmethod
    def set(name, value):
        if name in App.__setters:
            App.__conf[name] = value
        else:
            raise NameError("Name not accepted in set() method")
And then usage is:
if __name__ == "__main__":
    # from config import App
    App.config("MYSQL_PORT")     # returns 3306
    App.set("username", "hi")    # sets new username value
    App.config("username")       # returns "hi"
    App.set("MYSQL_PORT", "abc") # this raises NameError
.. you should like it because it:
uses class variables (no object to pass around / no singleton required),
uses encapsulated built-in types and looks like (is) a method call on App,
has control over individual config immutability; mutable globals are the worst kind of globals,
promotes conventional and well-named access / readability in your source code,
is a simple class but enforces structured access; an alternative is to use @property, but that requires more variable-handling code per item and is object-based,
requires minimal changes to add new config items and set their mutability.
--Edit--:
For large applications, storing values in a YAML (i.e. properties) file and reading that in as immutable data is a better approach (i.e. blubb/ohaal's answer).
For small applications, this solution above is simpler.
How about using classes?
# config.py
class MYSQL:
    PORT = 3306
    DATABASE = 'mydb'
    DATABASE_TABLES = ['tb_users', 'tb_groups']

# main.py
from config import MYSQL

print(MYSQL.PORT)  # 3306
Let's be honest, we should probably consider using a Python Software Foundation maintained library:
https://docs.python.org/3/library/configparser.html
Config example (INI format, but JSON is available too):
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes
[bitbucket.org]
User = hg
[topsecret.server.com]
Port = 50022
ForwardX11 = no
Code example:
>>> import configparser
>>> config = configparser.ConfigParser()
>>> config.read('example.ini')
>>> config['DEFAULT']['Compression']
'yes'
>>> config['DEFAULT'].getboolean('MyCompression', fallback=True) # get_or_else
Making it globally-accessible:
import configparser

class App:
    __conf = None

    @staticmethod
    def config():
        if App.__conf is None:  # read only once, lazily
            App.__conf = configparser.ConfigParser()
            App.__conf.read('example.ini')
        return App.__conf

if __name__ == '__main__':
    App.config()['DEFAULT']['MYSQL_PORT']
    # or, better:
    App.config().get(section='DEFAULT', option='MYSQL_PORT', fallback=3306)
Downsides:
Uncontrolled global mutable state.
A small variation on Husky's idea that I use. Make a file called 'globals' (or whatever you like) and then define multiple classes in it, as such:
# globals.py
class dbinfo:     # for database globals
    username = 'abcd'
    password = 'xyz'

class runtime:    # for runtime globals
    debug = False
    output = 'stdio'
Then, if you have two code files c1.py and c2.py, both can have at the top
import globals as gl
Now all code can access and set values, as such:
gl.runtime.debug = False
print(gl.dbinfo.username)
People forget classes exist, even if no object belonging to that class is ever instantiated. And variables in a class that aren't preceded by 'self.' are shared across all instances of the class, even if there are none. Once 'debug' is changed by any code, all other code sees the change.
By importing it as gl, you can have multiple such files and variables that lets you access and set values across code files, functions, etc., but with no danger of namespace collision.
This lacks some of the clever error checking of other approaches, but is simple and easy to follow.
Similar to blubb's answer. I suggest building them with lambda functions to reduce code. Like this:
User = lambda passwd, hair, name: {'password': passwd, 'hair': hair, 'name': name}

#         Username  Password       Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          # ...
          }
# ...
config['st3v3']['password']  #> password
config['blubb']['hair']      #> black
This does smell like you may want to make a class, though.
Or, as MarkM noted, you could use namedtuple
from collections import namedtuple
# ...
User = namedtuple('User', ['password', 'hair', 'name'])

#         Username  Password       Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          # ...
          }
# ...
config['st3v3'].password  #> password
config['blubb'].hair      #> black
I did that once. Ultimately I found my simplified basicconfig.py adequate for my needs. You can pass in a namespace with other objects for it to reference if you need to. You can also pass in additional defaults from your code. It also maps attribute and mapping style syntax to the same configuration object.
Please check out the IPython configuration system, implemented via traitlets, for the type enforcement you are doing manually.
It is cut and pasted here to comply with SO guidelines against just dropping links, as the content of links changes over time.
traitlets documentation
Here are the main requirements we wanted our configuration system to have:
Support for hierarchical configuration information.
Full integration with command line option parsers. Often, you want to read a configuration file, but then override some of the values with command line options. Our configuration system automates this process and allows each command line option to be linked to a particular attribute in the configuration hierarchy that it will override.
Configuration files that are themselves valid Python code. This accomplishes many things. First, it becomes possible to put logic in your configuration files that sets attributes based on your operating system, network setup, Python version, etc. Second, Python has a super simple syntax for accessing hierarchical data structures, namely regular attribute access (Foo.Bar.Bam.name). Third, using Python makes it easy for users to import configuration attributes from one configuration file to another.
Fourth, even though Python is dynamically typed, it does have types that can be checked at runtime. Thus, a 1 in a config file is the integer ‘1’, while a '1' is a string.
A fully automated method for getting the configuration information to the classes that need it at runtime. Writing code that walks a configuration hierarchy to extract a particular attribute is painful. When you have complex configuration information with hundreds of attributes, this makes you want to cry.
Type checking and validation that doesn’t require the entire configuration hierarchy to be specified statically before runtime. Python is a very dynamic language and you don’t always know everything that needs to be configured when a program starts.
To achieve this they basically define 3 object classes and their relations to each other:
1) Configuration - basically a ChainMap / basic dict with some enhancements for merging.
2) Configurable - base class to subclass all things you'd wish to configure.
3) Application - object that is instantiated to perform a specific application function, or your main application for single purpose software.
In their words:
Application:
An application is a process that does a specific job. The most obvious application is the ipython command line program. Each application reads one or more configuration files and a single set of command line options and then produces a master configuration object for the application. This configuration object is then passed to the configurable objects that the application creates. These configurable objects implement the actual logic of the application and know how to configure themselves given the configuration object.
Applications always have a log attribute that is a configured Logger. This allows centralized logging configuration per-application.
Configurable:
A configurable is a regular Python class that serves as a base class for all main classes in an application. The Configurable base class is lightweight and only does one thing.
This Configurable is a subclass of HasTraits that knows how to configure itself. Class level traits with the metadata config=True become values that can be configured from the command line and configuration files.
Developers create Configurable subclasses that implement all of the logic in the application. Each of these subclasses has its own configuration information that controls how instances are created.
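For a taste of the API, a minimal sketch of a Configurable subclass (the class and trait names are invented for illustration):

# sketch only -- class and trait names are made up
from traitlets import Int, Unicode
from traitlets.config import Configurable

class MySQLConfig(Configurable):
    # config=True exposes these traits to config files and the command line
    port = Int(3306, help="MySQL port").tag(config=True)
    database = Unicode('mydb', help="Database name").tag(config=True)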
In particular I want to use pystache, but any guide for another template engine should be good enough to set it up.
If I understood correctly, I have to register the renderer factory in the __init__.py of my pyramid application.
config = Configurator(settings=settings)
config.add_renderer(None, 'pystache_renderer_factory')
Now I need to create the renderer factory and don't know how.
Even though I found the documentation about how to add a template engine, I didn't manage to set it up.
Finally I was able to add the pystache template engine following this guide:
https://groups.google.com/forum/#!searchin/pylons-discuss/add_renderer/pylons-discuss/Y4MoKwWKiUA/cyqldA-vHjkJ
What I did:
created the file mustacherenderer.py:
from pyramid.asset import abspath_from_asset_spec
import pystache
import os

def pystache_renderer_factory(info):
    template = os.path.join(abspath_from_asset_spec('myproj:templates', False),
                            info.name)
    with open(template) as f:
        s = f.read()

    def _render(value, system):
        return pystache.render(s, value)
    return _render
added this to the __init__.py:
config.add_renderer('.pmt', 'myproj.mustacherenderer.pystache_renderer_factory')
working :)
add_renderer's second argument is supposed to be a class that implements the interface shown in "Adding a New Renderer". Pyramid will take pystache_renderer_factory and attempt to import it, so in your code the line import pystache_renderer_factory would have to work. This example won't ever resolve to a class, only a module or package, so you'll have to fix that first. It should be something like mypackage.pystache_renderer_factory.
The best way to learn how to write a renderer is probably to look at some that have been written already. Specifically the pyramid_jinja2 package, or in Pyramid's source there are very simple implementations of json and jsonp renderers. Notice how they all provide fairly unique ways to implement the required interface. Each factory accepts an info object, and returns a callable that accepts value and system objects.
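A minimal sketch of that interface (the factory name is made up; the body mimics a json renderer):

import json

def my_renderer_factory(info):
    # info carries attributes such as name, package, registry and settings
    def _render(value, system):
        # value: whatever the view returned
        # system: dict with 'request', 'context', 'renderer_name', etc.
        request = system.get('request')
        if request is not None:
            request.response.content_type = 'application/json'
        return json.dumps(value)
    return _render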
https://github.com/Pylons/pyramid_jinja2/blob/master/pyramid_jinja2/__init__.py#L260
https://github.com/Pylons/pyramid/blob/master/pyramid/renderers.py#L135
Note that this answer works well until you create your Pyramid project with a scaffold. Once you do so, this related answer will prove more useful when constructing your Pystache/Mustache_Renderer_Factory: How to integrate pystache with pyramid?.
If I have existing classes (generated off some UML model) from legacy code, what is the best way to integrate them with the Django model classes?
I have so far considered using the Django custom fields to serialize the class and let that handle persistence. (The disadvantage of this being that other applications accessing the database directly - if it ever came to that being a requirement - would have to deserialize this field in order to access the data.)
If anyone can suggest some alternatives to the aforementioned, in a way that the persistence around my existing classes can be 'swapped out', it would be greatly appreciated!
If you are trying to move a legacy app to Django, I agree with @chrisdpratt that you should try to convert your classes to Django models. It will require a large effort, so when you are ready, you can follow this trail:
1. Create a legacy app and put your legacy code there.
2. If you decide your code is not so important and you just want to grab the data, and it is stored in an SQL-based server, you can try using inspectdb to create "legacy models" that will read your data from your legacy server. I suggest configuring a second DB connection called "legacy" for this (see the settings sketch after this list); see: https://docs.djangoproject.com/en/dev/topics/db/multi-db/
3. Create a command line test script to make sure you can load your legacy classes from the database. (Make sure to set/export the environment variable DJANGO_SETTINGS_MODULE from the shell prompt to run scripts from the command line, or see https://docs.djangoproject.com/en/dev/ref/django-admin/?from=olddocs#running-management-commands-from-your-code ).
4. Create your new django models in a new app ("myapp"). Optionally, you can use inspectdb again to get basic models automatically from a db. However, make sure you rename the models to the standard Django look and feel and delete any unnecessary fields and attributes.
5. Create a script which reads the legacy data and writes it to the new models.
6. Migrate required logic from old classes to new classes.
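A sketch of the second DB connection from step 2 (engine, names and credentials are placeholders):

# settings.py -- sketch; engine and database names are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'newdb',
    },
    'legacy': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'legacydb',
    },
}

# generate the legacy models with:
#   python manage.py inspectdb --database=legacy > legacy/models.py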
You can use this as a skeleton for the script for step 5:
# legacy/management/commands/importlegacydb.py
from django.core.management.base import BaseCommand  # NoArgsCommand was removed in Django 1.10
import myapp.models as M
import legacy.models as L
import sys

write = sys.stdout.write

def copy_fields(old, new, mapping):
    for old_key, new_key in mapping.items():
        value = getattr(old, old_key)
        if type(value) is str:
            value = value.strip()
        if type(new_key) is tuple:
            # a tuple maps to (converter_function, new_field_name)
            value = new_key[0](value)
            new_key = new_key[1]
        else:
            if new_key == "name":
                value = value[0].upper() + value[1:]
        setattr(new, new_key, value)

def import_table(old_class, new_class, mapping):
    write("importing %s " % old_class.__name__)
    lookup = {}
    for old in old_class.objects.all():
        new = new_class()
        copy_fields(old, new, mapping)
        new.save()
        lookup[old.id] = new.id
        write(".")
    print(" Done.")
    return lookup

class Command(BaseCommand):
    help = "Import data from legacy db."

    def handle(self, *args, **options):
        """
        Read data from legacy db to new db.
        """
        print("Importing legacy data")
        import_table(L.X, M.X, {'old_field': 'new_field', 'old_field2': 'new_field2'})
        import_table(L.Y, M.Y, {'old_field': 'new_field'})
        print("Done.")