I have a shortcut function declared in the __init__.py of my module in order to simplify importing it, i.e.
from app.logger import log
instead of:
from app.logger.shortcuts import log
This function is actually a class instance, intended to be used as a singleton, in this way:
# __init__.py
from app.logger.backends import LogDatabaseBackend
log = LogDatabaseBackend()
In backends.py I need to import some models in this way:
# backends.py
from app.logger.models import Model1, Model2
class LogDatabaseBackend(object):
    ...
These models are needed in some methods of LogDatabaseBackend. The problem is that Django shows warnings like:
"RemovedInDjango19Warning: Model class app.logger.models.Model1 doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9."
I think the reason is importing the models in __init__.py, before the app is loaded. How should I change my code to avoid these warnings and maintain Django 1.9 compatibility?
Define an application configuration in <app_name>/apps.py:
From the Application Configuration documentation:
Application configuration objects store metadata for an application.
Some attributes can be configured in AppConfig subclasses. Others are
set by Django and read-only.
Here is an example from the documentation :
# rock_n_roll/apps.py
from django.apps import AppConfig

class RockNRollConfig(AppConfig):
    name = 'rock_n_roll'
    verbose_name = "Rock ’n’ roll"
You can make your application load this AppConfig subclass by default as
follows:
# rock_n_roll/__init__.py
default_app_config = 'rock_n_roll.apps.RockNRollConfig'
That will cause RockNRollConfig to be used when INSTALLED_APPS just
contains 'rock_n_roll'. This allows you to make use of AppConfig
features without requiring your users to update their INSTALLED_APPS
setting. Besides this use case, it’s best to avoid using
default_app_config and instead specify the app config class directly in INSTALLED_APPS, as shown below.
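A sketch of that INSTALLED_APPS variant (using the documentation's example app; the dotted path points at the AppConfig subclass):
# settings.py
INSTALLED_APPS = [
    # ...
    'rock_n_roll.apps.RockNRollConfig',
]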
If Model1 and Model2 are used with ForeignKey, you don't need to import them. Instead use ForeignKey("app.Model1", ...); the string reference defers the model lookup until the app registry is ready. Otherwise, consider using a function-level import, as in the sketch below.
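A minimal sketch of the function-level variant (the write() method and its body are hypothetical, since the original class body is elided):
# backends.py
class LogDatabaseBackend(object):
    def write(self, message):
        # Import at call time, after Django has populated the app
        # registry, so no model is touched at import time.
        from app.logger.models import Model1
        Model1.objects.create(message=message)  # hypothetical field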
The docs say to put this somewhere:
from sqlalchemy import event
from colanderalchemy import setup_schema
event.listen(mapper, 'mapper_configured', setup_schema)
Where should this go in Pyramid? Should I be using Pyramid events instead of SQLAlchemy's?
When I tried putting it at the top of the models.py file, it complained about mapper not existing; should I still be using that?
You need to use the SQLAlchemy events, as they report what is happening inside SQLAlchemy (they do not relate to Pyramid events at all).
The ColanderAlchemy documentation is confusing; what it calls a mapper here is your model class (it is not a mapper).
Thus in your models you should be doing something like:
class MyModelClass(Base):
    ...

event.listen(
    MyModelClass,
    "mapper_configured",
    setup_schema)
The test suite shows it working like this:
from sqlalchemy import event
from colanderalchemy import setup_schema
from sqlalchemy.orm import mapper
event.listen(mapper, 'mapper_configured', setup_schema)
Please let me know if that fixes it for you and I can go update the documentation accordingly.
I'm somewhat new to Python (but not at all to programming, already fluent in Perl, PHP, & Ruby).
I'm running into a major difference between all my previous languages and Python: mainly, how to read in data in one file and then make it available across an entire project.
There are two major examples of this that I would like to solve:
1) I would like to have a settings/config file (in YAML) that gets read in and parsed in a config.py file. I then want the resulting dict() to be accessible to all the other files in the project. I have figured out how to do this by from lib.project.config import cfg, but that means that for each file importing the config, the system has to re-parse the YAML. That seems just silly to me. Is there no way to have that file parsed just once and then have the results accessible to any other file in my project?
2) I would like to import a database.py file that then looks at my config to see whether to import the sqlite3, mysql, or postgresql version of a database class. Again, I can manage this by putting the logic directly in each file where I need the database class. But I hate having to paste code like
if cfg.get('db_type') == 'sqlite':
    from lib.project.databases.sqlite3 import database
elif cfg.get('db_type') == 'mysql':
    from lib.project.databases.mysql import database
at the top of each file that needs the database class. I'd much rather just add:
import lib.project.database
Any help would be very much appreciated.
I've done a good deal of googling and SO searching and not found my answers. Hopefully one of you wizzes out there can help.
Thanks.
UPDATE:
The reason I'm doing things this way (for #2) is because I'm also trying to make other classes inherit the database class.
So lib/project/databases/sqlite.py is a definition of a database class.
And so is lib/project/databases/mysql.py.
The idea being that after this import is complete I can import classes like the users class and define it like so:
class user(database):
    ...
And thus inherit all of the structure and methods of the database class.
Your suggestion to simply create an instance based on the sqlite/mysql logic/decision and then pass that where it needs to be is a good solution for that. But I need a bit more...
Ideas?
Thank you all for your help in understanding more about Python and how to get what I'm looking for done.
Here is what I ended up doing:
1) The config file issue:
This apparently was mostly solved to begin with. I confirmed what everyone was saying: that you can "import" a file as many times as you like, but it is only ever parsed/processed/compiled once. As for making it reachable to any file that needs it:
Config class:
import os
import yaml

class Config:
    def __init__(self):
        self.path = os.getcwd()
        stream = open(self.path + "/conf/config.yaml", 'r')
        data = yaml.load(stream)
        for k in data.keys():
            setattr(self, k, data.get(k))
        # Optional override file replaces individual keys.
        if os.path.isfile(self.path + "/conf/config-override.yaml"):
            stream = open(self.path + "/conf/config-override.yaml", 'r')
            data = yaml.load(stream)
            for k in data.keys():
                setattr(self, k, data.get(k))

config = Config()
And then any file that wants to use it:
from lib.project.Config import config
This is all working swimmingly so far.
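The parse-once behaviour is easy to verify directly: Python caches imported modules in sys.modules, so module-level code runs only on the first import. A tiny sketch (file names hypothetical):
# demo_config.py
print("parsing config ...")   # module-level code: runs once per process
cfg = {'db_type': 'sqlite'}

# elsewhere.py
import demo_config            # prints "parsing config ..."
import demo_config            # silent: reuses sys.modules['demo_config']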
2) Dynamic database type for Database class:
I just altered my overall design a tiny bit to make a Database class (mostly empty) inherit from either the Sqlite or the Mysql class (both custom wrappers around the existing sqlite3 and mysql-connector modules). This way there is always a solid Database class to inherit from, I only load the files I need, and it's all driven by my config file. Example:
Database class:
from lib.project.Config import config

if config.db_type == 'sqlite':
    from lib.project.databases.Sqlite import Sqlite
elif config.db_type == 'mysql':
    from lib.project.databases.Mysql import Mysql

class Database(Sqlite if config.db_type == 'sqlite' else Mysql):
    ''' documentation '''
I'd still love to hear people's feedback on this code/method.
As I said, I'm still newish to Python and could still be missing something.
Thanks again everyone.
1) You're all set - the file will only be read once.
2) Agreed, you don't want to copy and paste code like that. If you want to use one instance of the database throughout your project, you'd do something like:
import lib.project
db = lib.project.database
where db is just a local variable used to access your already-created database. You would instantiate your database as database (resolving whether to use sqlite3 or mysql with the if/elif code, as you've done) in lib/project/databases.py, and then in your lib/project/__init__.py file you would do a from .databases import database.
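A minimal sketch of that layout; since the question already keeps the wrappers in a lib/project/databases/ package, the shared instance can be created in that package's __init__.py rather than a flat databases.py:
# lib/project/databases/__init__.py
from lib.project.config import cfg

# Resolve the backend class once, at import time.
if cfg.get('db_type') == 'sqlite':
    from lib.project.databases.sqlite3 import database as DatabaseClass
elif cfg.get('db_type') == 'mysql':
    from lib.project.databases.mysql import database as DatabaseClass

# The single shared instance the rest of the project imports.
database = DatabaseClass()
Any other module then just does from lib.project.databases import database and gets the same instance, because the module body only runs once.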
If you want to use multiple databases (of the same sqlite3/mysql type) throughout your project, you would resolve which class Database is bound to (or inherits from) in databases.py:
from lib.project.Config import config

# Import the class itself (not just the module) so Database can inherit from it.
if config.db_type == 'sqlite':
    from lib.project.databases.Sqlite import Sqlite as Db
elif config.db_type == 'mysql':
    from lib.project.databases.Mysql import Mysql as Db

class Database(Db):
    '''docs'''
I'm trying to use the Django LocMemCache for storing a few simple values, but I'm not sure how to initialize the cache when Django starts. Django 1.7 comes with Applications, allowing you to run some code in AppConfig.ready(). That place would be perfect for initializing the cache, but according to the Django 1.7 documentation:
" ... Although you can access model classes as described above, avoid
interacting with the database in your ready() implementation."
So, let's say I want to store some DB queries from my model:
x = MyModel.objects.count()
y = MyModel.(a really expensive query)
How and when should I init the cache? Is there a recommended "best practice" for doing that?
Currently, I have just added the following cache.py to my application, but I'm not sure whether my code hits the database once (on the first request) and then uses the cached value thereafter (on the following requests).
# cache.py
from django.core.cache import caches
from .models import MyModel

class Cache(object):
    def __init__(self):
        self.__count = MyModel.objects.count()
        self.cache = caches['cache-storage']

    @property
    def total_count(self):
        return self.cache.get('total_count', self.__count)
Then I use the cached values in this way:
# view.py
from .cache import Cache
cache = Cache()

# ... inside some view:
counter = cache.total_count
That tip about using databases in ready() is generally good advice, but it isn't always going to work out. Some applications need to access the database on a schedule instead of from a view, and the only way I've seen to implement that is from within ready().
This does create an issue, because all manage.py commands will run the code in ready(). This means that the application would be accessing the database with your runtime code while running migrations or creating superusers, which isn't ideal.
I avoid this issue by adding a sys.argv check in ready().
import sys
from django.apps import AppConfig

class MyApp(AppConfig):
    name = 'MyApp'

    def ready(self):
        if 'runserver' in sys.argv:
            pass  # your code here
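Combined with the cache-warming case from the question, the ready() body could look something like this (a sketch; the 'cache-storage' alias and MyModel come from the question's code):
import sys
from django.apps import AppConfig

class MyApp(AppConfig):
    name = 'MyApp'

    def ready(self):
        # Only warm the cache for an actual server run, so migrations
        # and other manage.py commands skip the database access.
        if 'runserver' in sys.argv:
            # Imported here, after the app registry is populated.
            from django.core.cache import caches
            from .models import MyModel

            cache = caches['cache-storage']
            cache.set('total_count', MyModel.objects.count())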
I am used to developing web applications with Django and gunicorn.
In the case of Django, any application module can get the deployment settings through django.conf.settings. That "settings.py" is written in Python, so arbitrary settings and pre-processing can be defined dynamically.
In the case of gunicorn, there are three configuration places, in order of precedence, and one settings-registry class instance combines them. (But usually these settings are used only for gunicorn, not the application.)
1) Command line parameters.
2) Configuration file (like Django, written in Python, which can have arbitrary settings dynamically).
3) Paster application settings.
In the case of Pyramid, according to the Pyramid documentation, deployment settings are usually put into pyramid.registry.Registry().settings. But they seem to be accessible only once a pyramid.router.Router() instance exists.
That is, pyramid.threadlocal.get_current_registry().settings returns None during the startup process in the application's "main.py".
For example, I usually define some business logic in SQLAlchemy model modules, which require deployment settings, as follows.
myapp/models.py
from sqlalchemy import Table, Column, types
from sqlalchemy.orm import mapper
from pyramid.threadlocal import get_current_registry
from myapp.db import session, metadata

settings = get_current_registry().settings

mytable = Table('mytable', metadata,
    Column('id', types.INTEGER, primary_key=True),
    # (other columns) ...
)

class MyModel(object):
    query = session.query_property()
    external_api_endpoint = settings['external_api_endpoint']
    timezone = settings['timezone']

    def get_api_result(self):
        ...  # (interact with external api)

mapper(MyModel, mytable)
But, "settings['external_api_endpoint']" raises a TypeError exception because the "settings" is None.
I thought two solutions.
Define a callable which accepts "config" argument in "models.py" and "main.py" calls it with a
Configurator() instance.
myapp/models.py
from sqlalchemy import Table, Column, types
from sqlalchemy.orm import mapper
from myapp.db import session, metadata

# Stash the module globals so initialize() can export names into them.
_g = globals()

def initialize(config):
    settings = config.get_settings()

    mytable = Table('mytable', metadata,
        Column('id', types.INTEGER, primary_key=True),
        # (other columns ...)
    )

    class MyModel(object):
        query = session.query_property()
        external_api_endpoint = settings['external_api_endpoint']

        def get_api_result(self):
            ...  # (interact with external api)

    mapper(MyModel, mytable)
    _g['MyModel'] = MyModel
    _g['mytable'] = mytable
2) Or, create an empty module "myapp/settings.py", and put the settings into it later:
myapp/__init__.py
from pyramid.config import Configurator
from .resources import RootResource

def main(global_config, **settings):
    config = Configurator(
        settings=settings,
        root_factory=RootResource,
    )
    import myapp.settings
    myapp.settings.settings = config.get_settings()
    # (other configurations ...)
    return config.make_wsgi_app()
Both of these solutions (and others) meet the requirement, but I find them troublesome. What I want is the following:
development.ini
defines rough settings, because development.ini can contain only string-type constants:
[app:myapp]
use = egg:myapp
env = dev0
api_signature = xxxxxx
myapp/settings.py
defines detailed settings based on development.ini, because arbitrary variables (of any type) can be set:
import datetime, urllib
from pytz import timezone
from pyramid.threadlocal import get_current_registry

pyramid_settings = get_current_registry().settings

if pyramid_settings['env'] == 'production':
    api_endpoint_uri = 'http://api.external.com/?{0}'
    timezone = timezone('US/Eastern')
elif pyramid_settings['env'] == 'dev0':
    api_endpoint_uri = 'http://sandbox0.external.com/?{0}'
    timezone = timezone('Australia/Sydney')
elif pyramid_settings['env'] == 'dev1':
    api_endpoint_uri = 'http://sandbox1.external.com/?{0}'
    timezone = timezone('JP/Tokyo')

api_endpoint_uri = api_endpoint_uri.format(
    urllib.urlencode({'signature': pyramid_settings['api_signature']}))
Then, other modules can get arbitrary deployment settings through "import myapp.settings".
Or, if Registry().settings is preferable to a "settings.py", the **settings kwargs and "settings.py" could be combined and registered into Registry().settings during the "main.py" startup process.
Anyway, how do I get the settings dictionary at startup time? Or does Pyramid gently force us to put every piece of code that requires deployment settings into "view" callables, which can get the settings dictionary at any time through request.registry.settings?
EDIT
Thanks, Michael and Chris.
I at last understand why Pyramid uses threadlocal variables (registry and request), and in particular the registry object, to support more than one Pyramid application.
In my opinion, however, deployment settings usually affect business logic that may define application-specific behaviour. That logic is usually put in one or more Python modules other than "app/__init__.py" or "app/views.py", which can easily get access to Config() or Registry(). Those Python modules are normally "global" at the Python process level.
That is, even when more than one Pyramid application coexists, despite their own threadlocal variables, they have to share those "global" Python modules, which may contain application-specific things, at the Python process level.
Of course, every such module can have an "initialize()" callable which is called with a Configurator() by the application's "main" callable, or passing the Registry() or Request() object through a long series of function calls can meet the usual requirements. But I guess Pyramid beginners (like me), or developers who have a large application or very many settings, may find that troublesome, although that is Pyramid's design.
So I think Registry().settings should hold only real "thread-local" variables, and should not hold normal business-logic settings. Responsibility for segregating multiple applications' specific modules, classes, callables, variables, etc. should be taken by the developer.
As of now, from my viewpoint, I will take Chris's answer. Or, in the "main" callable, do "execfile('settings.py', settings, settings)" and put the result in some "global" space.
Another option, if you enjoy global configuration via Python: create a settings.py file. If it needs values from the ini file, parse the ini file and grab them out (at module scope, so it runs at import time):
from paste.deploy.loadwsgi import appconfig
from pytz import timezone  # assumed, for the timezone values below

config = appconfig('config:development.ini', 'myapp', relative_to='.')

if config['env'] == 'production':
    api_endpoint_uri = 'http://api.external.com/?{0}'
    timezone = timezone('US/Eastern')
# .. and so on ...
'config:development.ini' is the name of the ini file (prefixed with 'config:'). 'myapp' is the name of the section in the config file that represents your app (e.g. [app:myapp]). 'relative_to' is the directory in which the config file can be found.
The pattern that I use is to pass the Configurator to modules that need to be initialized. Pyramid doesn't use any global variables because a design goal is to be able to run multiple instances of Pyramid in the same process. The threadlocals are global, but they are local to the current request, so different Pyramid apps can push to them at the same time from different threads.
With this in mind, if you do want a global settings dictionary you'll have to take care of that yourself. You could even push the registry onto the threadlocal manager yourself by calling config.begin().
I think the major thing to take away here is that you shouldn't be calling get_current_registry() at the module level, because at import time you aren't really guaranteed that the threadlocals are initialized. However, if you call get_current_registry() in your init_model() function, you'd be fine, provided you previously called config.begin().
Sorry this is a little convoluted, but it's a common question and the best answer is: pass the configurator to your submodules that need it and allow them to add stuff to the registry/settings objects for use later.
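A minimal sketch of the config.begin() variant mentioned above (module names are illustrative):
from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.begin()   # push this registry onto the threadlocal stack
    # From here on, get_current_registry() returns config.registry,
    # so code imported or called below can read settings through
    # the threadlocal API.
    import myapp.models          # hypothetical module
    myapp.models.init_model()    # may call get_current_registry()
    config.end()     # pop the threadlocal registry again
    return config.make_wsgi_app()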
Pyramid uses static configuration via PasteDeploy, unlike Django.
Your [EDIT] part is a nice solution; I think the Pyramid community should consider such usage.