I need to create a class instance (let's say a backend requests session) at app startup (runserver), and I don't want this session to be recreated when other management commands run. How can I achieve this? I tried several approaches and I'm not sure why something like this doesn't work.
# app/apps.py
class MyConfig(AppConfig):
    # ...
    requests_session = None
    # ...

    def ready(self):
        if MyConfig.requests_session is None:
            MyConfig.requests_session = requests.Session()
Unfortunately, the condition is always met and the session is recreated, even though this approach is recommended in the documentation.
Another solution for me would be to run MyConfig.ready() only after a selected subset of management commands. Is that possible?
Or is there a completely different, better way to store a requests session?
TIA
I think it should work if you use an instance variable instead of a class variable:
# app/apps.py
class MyConfig(AppConfig):
    def __init__(self, app_name, app_module):
        super(MyConfig, self).__init__(app_name, app_module)
        self.requests_session = None

    def ready(self):
        if self.requests_session is None:
            self.requests_session = requests.Session()
The question now is how to access this instance variable elsewhere. You can do that like so:
from django.apps import apps
# Here myapp is the label of your app - change it as required
# This returns the instance of your app config that was initialised
# at startup.
my_app_config = apps.get_app_config('myapp')
# Use the stored request session
req = my_app_config.requests_session
Note that this instance variable only exists in the context of the current process. If you run a management command in a separate process (e.g. manage.py ...), that process will create a new instance of each app config.
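The same lazy, once-per-process idea also works without AppConfig at all, as a module-level singleton. This is only a sketch; the module layout and the injectable `factory` parameter are my additions (in the Django app you would pass `requests.Session`), used here so the snippet does not depend on `requests`:

```python
# services.py (hypothetical module) -- a lazily created,
# per-process singleton.
_session = None

def get_session(factory):
    """Create the shared session on first use, then reuse it."""
    global _session
    if _session is None:
        _session = factory()
    return _session
```

As with the AppConfig instance variable, each process (runserver, each management command) gets its own copy, so the session is shared within a process but never across processes.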
I want to load a global menu in every template. The structure of the query is always the same...
class ProductCats(db.Model):
    def __init__(self,
                 lang=None,
                 catname=None,
                 catid=None,
                 update=False,
                 active=False,
                 create=False,
                 ):
        self.categories = db.session.query(ProductCats, ProductCatsTranslations)
        if lang:
            self.categories = self.categories.filter(ProductCatsTranslations.language_code == lang)
        if catid:
            self.categories = self.categories.filter(ProductCatsTranslations.catid == catid)
        if active:
            self.categories = self.categories.filter(ProductCats.active == active)
        self.categories = self.categories.filter(ProductCatsTranslations.catid == ProductCats.catid).\
            order_by(ProductCatsTranslations.sort)
        self.dicted_by_catid = {}
        for c, ct in self.categories:
            c.ct = ct
            self.dicted_by_catid[c.catid] = c

    # then I have a method within this class
    def by_catid(self, catid=None):
        return self.dicted_by_catid[catid]
In the __init__.py of my Flask app:
@app.before_request
def before_request():
    from models import ProductCats
    g.productcats = ProductCats(lang=lang)
So I can easily access my productcats everywhere in the Flask app. But it runs one query every time instead of getting the result from the dictionary dicted_by_catid. I call something like this, e.g. in a template, in a view, or anywhere else in the app:
g.productcats.by_catid(catid=12).name
g.productcats.by_catid(catid=172).name
This produces two queries instead of the single one from the init of ProductCats.
When I manually set up the same db.query from the init of ProductCats() somewhere else in my application, e.g. in my home view, I can access the attribute any time without a new query being created each time.
Is there something special about g that makes it call the init method of ProductCats again every time? When I add a simple debug output to my ProductCats init, it appears only ONE time. Also, the relationship to ProductCatsTranslations is lazy-joined. And it does work in my home view when not using the Flask global (g).
How can I have my productcats available all over the app without generating one query for every category, while still using the instance of my ProductCats in my templates, e.g.:
g.productcats.by_catid(catid=12).name
g.productcats.by_catid(catid=14).otherattribute
g.productcats.by_catid(catid=12).active
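One possible direction (a sketch of my own, not an answer from the thread): build the category mapping once per process and have before_request only attach the cached object to g, instead of re-instantiating ProductCats on every request. The helper names here are hypothetical, and `loader` stands in for "run the ProductCats query and build dicted_by_catid":

```python
# Per-process, per-language cache; built on first request, reused after.
_catalog_cache = {}

def get_catalog(lang, loader):
    """Build the catalog for `lang` once via loader(), then reuse it."""
    if lang not in _catalog_cache:
        _catalog_cache[lang] = loader()
    return _catalog_cache[lang]

# In before_request one would then do, roughly:
#   g.productcats = get_catalog(lang, lambda: ProductCats(lang=lang))
```

Note that caching live ORM instances across requests can interact badly with SQLAlchemy session lifetimes (detached instances); caching plain values such as ids and names is safer.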
I use MongoEngine as an ODM in my Flask application. Depending on the passed configuration, MongoEngine should use a different collection.
At the moment I achieve this by changing the internal meta variable model._meta['collection']. Is there an alternative way to select the collection?
from mongoengine import connect
from api_service.model import MyModel
def create_app(config):
    app = Flask(__name__)
    # load app.config
    connect(app.config['MONGODB_DB'],
            host=app.config['MONGODB_HOST'],
            port=app.config['MONGODB_PORT'],
            username=app.config['MONGODB_USERNAME'],
            password=app.config['MONGODB_PASSWORD'],
            )
    MyModel._meta['collection'] = app.config['MONGODB_MYMODEL_COLLECTION']
I know that you can define the collection via meta = {} in the class body of the model (see here). But I am not in the app context there, and therefore I cannot access `app.config`.
You can simply modify the meta attribute inside the class itself:
class MyModel(Document):
    meta = {"collection": "my_actual_collection_name"}
    ...
Check this for more meta attributes you can use.
Solution Update
I defined a helper class that provides access to the application's configuration:
class AppConfigHelper:
    from flask import current_app
    APP_CONFIG = current_app.config
and in the document, import and use that class to get the collection name:
class MyModel(Document):
    meta = {'collection': AppConfigHelper.APP_CONFIG['MONGODB_MYMODEL_COLLECTION']}
    ...
This is not the best solution I can think of, but it does the job.
Caution: this is not going to work if you run it separately from Flask; it will crash. You can run it inside the app itself, or via flask shell.
I am using an application factory to add views to my Flask application like so (this is not my actual application factory; it has been shortened for the sake of brevity):
def create_app(config_name='default'):
    app = Flask(__name__, template_folder="templates", static_folder='static')
    admin_instance = Admin(app, name='Admin')
    admin_instance.add_view(EntityAdmin(Entity, db.session))
My EntityAdmin class looks like this:
class EntityAdmin(ModelView):
    column_filters = [
        MyCustomFilter(column=None, name='Custom')
    ]
My custom filter looks like this:
class MyCustomFilter(BaseSQLAFilter):
    def get_options(self, view):
        entities = Entity.query.filter(Entity.active == True).all()
        return [(entity.id, entity.name) for entity in entities]
The problem is that the get_options function is called when the app is instantiated, running a select query every time the create_app function gets called.
So if I update my database schema and then run the flask db migrate command, I get an error, because the new column I added does not exist yet when the select query runs. The query fails because my model definitions are not in sync with the actual database.
Can I register my views only when an actual HTTP request is made? How can I differentiate between a request and a command?
You have one more problem with this filter: its options are created at application instantiation, so if your list of entities changes while the application is running, the filter will still return the same stale list of options.
To fix both problems you don't need to postpone view registration. You need the filter to fetch the list of options every time it is used.
This SO answer to the question "Resetting generator object in Python" describes a way to reuse a generator (in your case, a database query):
from flask import has_app_context

def get_entities():
    # has_app_context is used to prevent database access
    # when the application is not ready yet
    if has_app_context():
        for entity in Entity.query.filter(Entity.active.is_(True)):
            yield entity.id, entity.name

class ReloadingIterator:
    def __init__(self, iterator_factory):
        self.iterator_factory = iterator_factory

    def __iter__(self):
        return self.iterator_factory()

class MyCustomFilter(BaseSQLAFilter):
    def get_options(self, view):
        # This will return a generator which is
        # reloaded every time it is used
        return ReloadingIterator(get_entities)
One remaining issue is that the query to the Entity table can be issued multiple times during a single request, so I usually cache the result for the request using Flask globals:
def get_entities():
    if has_app_context():
        if not hasattr(g, 'entities'):
            query = Entity.query.filter(Entity.active.is_(True))
            g.entities = [(entity.id, entity.name) for entity in query]
        for entity_id, entity_name in g.entities:
            yield entity_id, entity_name
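The reloading behaviour is easy to verify without Flask or SQLAlchemy; in this self-contained sketch the list `data` stands in for the database table:

```python
class ReloadingIterator:
    def __init__(self, iterator_factory):
        self.iterator_factory = iterator_factory

    def __iter__(self):
        # A fresh iterator is produced on every iteration,
        # so the underlying "query" is re-run each time.
        return self.iterator_factory()

data = [(1, 'a'), (2, 'b')]

def get_options():
    for row in data:
        yield row

options = ReloadingIterator(get_options)
first = list(options)    # [(1, 'a'), (2, 'b')]
data.append((3, 'c'))
second = list(options)   # picks up the new row
```

Because the filter holds the ReloadingIterator rather than a materialised list, Flask-Admin sees fresh options each time it iterates.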
I want to work with multiple databases in the Python Pyramid framework with SQLAlchemy.
I have one database with user information, and multiple databases (all with the same structure) where the application information is stored. Each user selects a database at login time and is only shown information from that database (not the others).
How should I structure my code?
I was thinking of saving the database name in the session and, on every request, checking the user's permissions on the selected database and generating a new DB session. So my view would look like this (PSEUDO CODE):
@view_config(route_name='home', renderer='json')
def my_view_ajax(request):
    try:
        database = int(request.GET['database'])
        # check user permissions from the user database
        engine = create_engine('postgresql://XXX:XXX@localhost/' + str(database))
        DBSession.configure(bind=engine)
        items = DBSession.query('table').all()
    except DBAPIError:
        return 'error'
    return items
Should I generate a new db session with the user information on each request? Or is there a better way?
Thanks
This is quite easy to do in Pyramid + SQLAlchemy, but you'll likely want to
switch to a heavier-boilerplate, more manual session management style, and you'll want to be up on the session management docs for SQLAlchemy, because you can easily trip up when working with multiple concurrent sessions. Also, things like connection management should stay out of views and live in components created during server start-up and shared across request threads. If you're doing it right in Pyramid, your views should be pretty small, and you should have lots of components that work together through the ZCA (the registry).
In my apps, I have db factory objects that produce sessions when asked, and I instantiate these objects in the server start-up code (the stuff in __init__.py) and save them on the registry. Then you can attach sessions for each db to your request object with the reify decorator, and also attach a housekeeping end-of-request cleanup method to close them. This can be done either with a custom request factory or with the methods for attaching to the request right from __init__; I personally wind up using the custom factory, as I find it easier to read and I usually end up adding more there.
# our DBFactory component, from some model package
class DBFactory(object):
    def __init__(self, db_url, **kwargs):
        db_echo = kwargs.get('db_echo', False)
        self.engine = create_engine(db_url, echo=db_echo)
        self.DBSession = sessionmaker(autoflush=False)
        self.DBSession.configure(bind=self.engine)
        self.metadata = Base.metadata
        self.metadata.bind = self.engine

    def get_session(self):
        session = self.DBSession()
        return session

# we instantiate them in the __init__.py file, and save them on the registry
def main(global_config, **settings):
    """runs on server start, returns a Pyramid WSGI application"""
    config = Configurator(
        settings=settings,
        # ask for a custom request factory
        request_factory=MyRequest,
    )
    config.registry.db1_factory = DBFactory(db_url=settings['db_1_url'])
    config.registry.db2_factory = DBFactory(db_url=settings['db_2_url'])

# and our custom request class, probably in another file
class MyRequest(Request):
    "override the pyramid request object to add explicit db session handling"

    @reify
    def db1_session(self):
        "returns the db session at start of request lifecycle"
        # register callback to close the session automatically after
        # everything else in the request lifecycle is done
        self.add_finished_callback(self.close_dbs_1)
        return self.registry.db1_factory.get_session()

    @reify
    def db2_session(self):
        self.add_finished_callback(self.close_dbs_2)
        return self.registry.db2_factory.get_session()

    def close_dbs_1(self, request):
        request.db1_session.close()

    def close_dbs_2(self, request):
        request.db2_session.close()

# now view code can be very simple
def my_view(request):
    # get from db 1
    stuff = request.db1_session.query(Stuff).all()
    other_stuff = request.db2_session.query(OtherStuff).all()
    # the above sessions will be closed at end of request when
    # pyramid calls your close methods on the request factory
    return Response("all done, no need to manually close sessions here!")
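The @reify decorator is what makes request.db1_session both lazy and cached for the request's lifetime. Here is a minimal, framework-free sketch of its mechanics (Pyramid's real implementation lives in pyramid.decorator; the FakeRequest class is purely illustrative):

```python
class reify:
    """Non-data descriptor: runs the wrapped method once per instance,
    then replaces itself with the computed value on that instance."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __get__(self, inst, objtype=None):
        if inst is None:
            return self
        val = self.wrapped(inst)
        # The instance attribute now shadows the descriptor,
        # so later accesses are plain attribute lookups.
        setattr(inst, self.wrapped.__name__, val)
        return val

class FakeRequest:
    sessions_made = 0

    @reify
    def db_session(self):
        FakeRequest.sessions_made += 1
        return object()

r = FakeRequest()
assert r.db_session is r.db_session     # cached after first access
assert FakeRequest.sessions_made == 1   # the factory ran only once
```

This is why a view that never touches request.db2_session never opens a connection to the second database: the session is only created on first attribute access.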
I'm trying to set the namespace for all DB operations in Google App Engine in Python, but I can't get it to work.
Currently my code looks something like this:
""" Set Google namespace """
if user:
namespace = thisUser.namespace
namespace_manager.set_namespace(namespace)
""" End Google namespace """
#Then i have all sorts of classes:
class MainPage(BaseHandler):
def get(self):
#code with DB operations like get and put...
class MainPage2(BaseHandler):
def get(self):
#code with DB operations like get and put...
class MainPage3(BaseHandler):
def get(self):
#code with DB operations like get and put...
app = webapp2.WSGIApplication([ ... ], debug=True, config=webapp2_config)
The problem with this is that in the classes all DB operations are still done in the default namespace (as if no namespace were set), even though I set the namespace at the very top of my code.
When I print the variable "namespace", which I also set at the top of the code, I do see the namespace that I want to use.
But it looks like Google App Engine resets the namespace to empty somewhere before running the code in the classes.
So now I'm wondering if there's a good way to set the namespace once somewhere.
Currently I set it like this in every def:
class MainPage(BaseHandler):
    def get(self):
        namespace_manager.set_namespace(namespace)
        # code with DB operations like get and put...

class MainPage2(BaseHandler):
    def get(self):
        namespace_manager.set_namespace(namespace)
        # code with DB operations like get and put...

etc...
It's just not a very elegant solution.
You need to write middleware that intercepts the request and sets the namespace according to your app's logic.
A good solution is to add a hook. Something like this should work:
from google.appengine.api import apiproxy_stub_map

NAMESPACE_NAME = 'noname'

def namespace_call(service, call, request, response):
    if hasattr(request, 'set_name_space'):
        request.set_name_space(NAMESPACE_NAME)

apiproxy_stub_map.apiproxy.GetPreCallHooks().Append(
    'datastore-hooks', namespace_call, 'datastore_v3')
You can add it in your main.py or appengine_config.py. This way the hook is configured while the instance is loading and keeps its state.
You can use appengine_config.py and define namespace_manager_default_namespace_for_request().
Have a read of https://developers.google.com/appengine/docs/python/multitenancy/multitenancy, in particular the first section of "Setting the Current Namespace".
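That hook is a function you define in appengine_config.py; App Engine calls it before your handlers run to pick the default namespace for each request. A sketch, assuming a per-user-namespace policy (the user-to-namespace mapping here is illustrative, not from the question):

```python
# appengine_config.py
from google.appengine.api import users

def namespace_manager_default_namespace_for_request():
    # Illustrative policy: give each signed-in user their own namespace.
    # Returning None falls back to the default (empty) namespace.
    user = users.get_current_user()
    return user.user_id() if user else None
```

Because the runtime applies the returned namespace before any handler code executes, this removes the need to call namespace_manager.set_namespace() in every def.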