Is there a way to change the database for a whole block of code? For example:
with using_db('my_other_db_conf'):
    MyModel.objects.all()
which would be equivalent to:
MyModel.objects.using('my_other_db_conf').all()
I just need to use a different DB depending on context, and I don't like the idea of calling the using() method every time.
I'd use managers. In your models.py:
class DB_one_ItemsManager(models.Manager):
    def get_query_set(self):
        return super(DB_one_ItemsManager, self).get_query_set().using("database1")

class DB_two_ItemsManager(models.Manager):
    def get_query_set(self):
        return super(DB_two_ItemsManager, self).get_query_set().using("database2")

class YourModel(models.Model):
    # Some fields here
    # ...
    objects_db_one = DB_one_ItemsManager()
    objects_db_two = DB_two_ItemsManager()
Or, if you want to use objects_db_one or objects_db_two as the default manager, simply rename it to objects.
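With these managers in place, routing a queryset is just a matter of which manager you call (a minimal usage sketch, assuming the YourModel definition above):

items_one = YourModel.objects_db_one.all()  # runs against "database1"
items_two = YourModel.objects_db_two.all()  # runs against "database2"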
The behavior needs to modify some global value, IMO, so a with statement is not the appropriate way. Of course, it can be done, in an implicit, dirty, and thread-unsafe way:
from contextlib import contextmanager

@contextmanager
def unsafe_modify_queryset_db(model_dbs):
    """model_dbs => sequence of tuples (model, db for the model to use).
    For example ((User, 'slice_2'), ...)
    """
    prev_dbs = [model.objects._db for model, _ in model_dbs]
    for model, db in model_dbs:
        model.objects._db = db
    try:
        yield
    finally:
        # restore the previous dbs
        for (model, _), prev_db in zip(model_dbs, prev_dbs):
            model.objects._db = prev_db
# then
with unsafe_modify_queryset_db(((User, 'slice_2'), ...)):
    User.objects.filter(...)
You could also use db_manager to operate at the QuerySet level, which achieves the same goal as luke14free's code:
qs = User.objects.db_manager('slice_2')
foo = qs.filter(...)
bar = qs.filter(...)
Remember that explicit is better than implicit; it would be better to arrange your code so that querysets sharing the same db are enclosed in a single function call.
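For instance, a minimal sketch of that arrangement (the function name and filter here are hypothetical):

def active_users_on(db_alias):
    # every queryset built from this manager targets db_alias
    manager = User.objects.db_manager(db_alias)
    return manager.filter(is_active=True)

users = active_users_on('slice_2')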
I'm trying to create a back-end app using FastAPI and SQLAlchemy. I have a lot of entities, each mapped to the database. So, my question is: how do I speed up development? Right now I write code like this for each entity:
#app.get("/holidays")
def getHolidays():
session = Session(bind=engine)
holidays: List[Holiday] = session.query(Holiday).all()
return [x.to_json() for x in holidays]
#app.get("/exclusive_operations")
def getExclusiveOperations():
session = Session(bind=engine)
exclusive_operations: List[ExclusiveOperation] = session.query(ExclusiveOperation).all()
return [x.to_json() for x in exclusive_operations]
#app.get('/category_descriptions')
def getCategoryDescr():
session = Session(bind=engine)
category_descrs: List[CategoryDescr] = session.query(CategoryDescr).all()
return [x.to_json() for x in category_descrs]
So if I want to create all CRUD operations, I need to write 12 nearly identical methods for 3 entities. Does another solution exist?
It is Python - as a dynamic language, functions and methods are created at runtime. The "@app.get" decorator is what registers your views in the application, not their existence at the top level of a module.
Therefore, you can create a for loop that simply recreates and registers the view for each of your entities - it can be done either at the module level or inside a function.
(It is nice to keep in mind that the "@xxxx" decorator syntax is just syntactic sugar for calling the decorator, passing the decorated function as its sole parameter.)
for Entity, name in [(Holiday, "holidays"), (ExclusiveOperation, "exclusive_operations"), (CategoryDescr, "category_descriptions")]:
    def freeze_var_wrapper(Entity, name):
        # this intermediary function is needed, otherwise the Entity and name
        # variables would always be up-to-date inside the view function
        # and would always point to the last value in the external for-loop
        # after it finished execution:
        def view():
            session = Session(bind=engine)
            entities = session.query(Entity).all()
            return [x.to_json() for x in entities]
        # optional, may facilitate debugging:
        view.__name__ = f"get{Entity.__name__}s"
        # actually registers the view function with the framework:
        # (could be done in the same line, without the "view_registrar" var)
        view_registrar = app.get(f"/{name}")
        view_registrar(view)
    freeze_var_wrapper(Entity, name)
There are other ways of doing this that might remove the boilerplate and look more elegant - for example, with class inheritance and an appropriate __init_subclass__ in a base class (even if the framework does not use "class views", we can register the bound method for each class, which is just a callable):
class BaseView:
    Entity: type
    view_name: str

    def __init_subclass__(cls, *args, **kw):
        super().__init_subclass__(*args, **kw)
        app.get(f"/{cls.view_name}")(cls.view)
        # above, cls.view is bound to the subclass being processed, therefore
        # the class attributes as defined in each class body are used inside the method
        # this could easily register post, delete and detail views as well

    @classmethod
    def view(cls):
        session = Session(bind=engine)
        entities = session.query(cls.Entity).all()
        return [x.to_json() for x in entities]

class HolidayView(BaseView):
    Entity = Holiday
    view_name = "holidays"
    # that's all there is to it.

class ExclusiveOperationView(BaseView):
    Entity = ExclusiveOperation
    view_name = "exclusive_operations"

class CategoryDescrView(BaseView):
    Entity = CategoryDescr
    view_name = "category_descriptions"
I wrote unit tests first, then made all the tests pass; now I am looking at how to refactor the code to avoid repetition.
I have a function which returns different values depending on the context. All context is extracted on-the-fly from the Django models.
Currently my code is structured like this:
from django.test import TestCase

class MyTest(TestCase):
    def test_case1(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model1.objects.create(...)  # one type of context
        self.assertEqual(...)  # test the class method for this type of context

    def test_case2(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model2.objects.create(...)  # another type of context
        self.assertEqual(...)  # test the class method for this type of context

    def test_case3(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model1.objects.create(...)  # yet another type of context
        Model2.objects.create(...)
        self.assertEqual(...)  # test the class method for this type of context
Obviously, the code is quite repetitive: the first two lines are the same in each function.
My first idea was to use a shared setUp method:
def setUp(self):
    self.user = User.objects.create(username='user')
    self.tested_class = MyClass(self.user)
but this solution didn't work: all model updates were shared, and tests became dependent on each other.
What I need instead is a clean state ("empty database") before starting each test.
What else can I try?
Why don't you just destroy all the objects you don't want in your tearDown? It looks like Django allows you to do this type of thing pretty easily.
def tearDown(self):
    User.objects.all().delete()
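The tests above also create Model1 and Model2 rows, so a fuller tearDown (assuming those are the only context models involved) would clear them too:

def tearDown(self):
    # assumption: Model1 and Model2 are the context models from the question
    Model1.objects.all().delete()
    Model2.objects.all().delete()
    User.objects.all().delete()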
For a Django model, I'm using the django-import-export package.
If you need to export more than just the available model fields, such as properties or custom fields, new ones can be added with the import_export.fields.Field class and, optionally, a dehydrate_<field> method.
from import_export import resources, fields

class ProductResource(resources.ModelResource):
    categories = fields.Field()
    price = fields.Field(attribute='unit_price')

    class Meta:
        model = Product

    def dehydrate_categories(self, product):
        return ';'.join(
            '/%s' % '/'.join([c.name for c in cat.parents()] + [cat.name])
            for cat in product.category.iterator())
It does work well, but only for exporting. What about import, the reverse process? Is there some counterpart to the dehydrate_ method?
So far I've overridden the get_or_init_instance method:
class ProductResource(resources.ModelResource):
    def get_or_init_instance(self, instance_loader, row):
        row['unit_price'] = row.pop('price')
        return super(ProductResource, self).get_or_init_instance(instance_loader, row)
but I doubt this is the right way.
I would appreciate any hint on how to handle imports of custom fields.
You can override import_obj instead. See Import workflow for more details.
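A minimal sketch of that override (the import_obj(self, obj, data, dry_run) signature shown here matches older django-import-export releases and may differ in newer ones):

class ProductResource(resources.ModelResource):
    def import_obj(self, obj, data, dry_run):
        super(ProductResource, self).import_obj(obj, data, dry_run)
        # map the raw "price" column onto the model attribute by hand
        obj.unit_price = data.get('price')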
Another approach is to subclass Field, override its export and save methods, and do all the required data manipulation in the field itself.
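A rough sketch of that approach; the save and export signatures shown follow older django-import-export versions, so treat them as assumptions:

from import_export import fields

class PriceField(fields.Field):
    def save(self, obj, data, is_m2m=False):
        # on import, write the raw CSV "price" value onto unit_price
        obj.unit_price = data.get('price')

    def export(self, obj):
        # on export, mirror the import mapping
        return str(obj.unit_price)

You would then declare price = PriceField(column_name='price') on the resource instead of a plain Field.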
I know this is very old, but I came across the same problem, and this is how I fixed it (based on the direction the original asker was heading).
First, you can add any custom/modified fields you need by overriding the before_import_row method, like so:
def before_import_row(self, row, **kwargs):
    row['extra_info'] = 'Some Info'
    return super(RetailLocationResource, self).before_import_row(row, **kwargs)
Then you can pass this into your instance by overriding get_or_init_instance like so:
def get_or_init_instance(self, instance_loader, row):
    instance, created = super(RetailLocationResource, self).get_or_init_instance(instance_loader, row)
    instance.extra_info = row['extra_info']
    return instance, created
Hope this helps anyone!
Is there a way to create custom methods on the query object so you can do something like this?
User.query.all_active()
Where all_active() is essentially .filter(User.is_active == True)
And be able to filter off of it?
User.query.all_active().filter(User.age == 30)
You can subclass the base Query class to add your own methods:
from sqlalchemy.orm import Query

class MyQuery(Query):
    def all_active(self):
        return self.filter(User.is_active == True)
You then tell SQLAlchemy to use this new query class when you create the session (docs here). From your code it looks like you might be using Flask-SQLAlchemy, so you would do it as follows:
db = SQLAlchemy(session_options={'query_cls': MyQuery})
Otherwise you would pass the argument directly to the sessionmaker:
sessionmaker(bind=engine, query_cls=MyQuery)
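Either way, every query the session produces now carries the custom method, so it chains like any other filter (a hypothetical usage sketch):

session = Session()  # a session created with query_cls=MyQuery as above
active_thirty = session.query(User).all_active().filter(User.age == 30).all()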
As of right now, this new query object isn't that interesting because we hardcoded the User class in the method, so it won't work for anything else. A better implementation would use the query's underlying class to determine which filter to apply. This is slightly tricky but can be done as well:
from sqlalchemy.orm import Mapper

class MyOtherQuery(Query):
    def _get_models(self):
        """Returns the query's underlying model classes."""
        if hasattr(self, 'attr'):
            # we are dealing with a subquery
            return [self.attr.target_mapper]
        else:
            return [
                d['expr'].class_
                for d in self.column_descriptions
                if isinstance(d['expr'], Mapper)
            ]

    def all_active(self):
        model_class = self._get_models()[0]
        return self.filter(model_class.is_active == True)
Finally, this new query class won't be used by dynamic relationships (if you have any). To let those also use it, you can pass it as argument when you create the relationship:
users = relationship(..., query_class=MyOtherQuery)
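For example, on a hypothetical parent model (lazy='dynamic' is what makes the relationship return a query object in the first place):

class Group(Base):
    __tablename__ = 'group'
    id = Column(Integer, primary_key=True)
    users = relationship('User', lazy='dynamic', query_class=MyOtherQuery)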
This works fine for me:
from flask_sqlalchemy import BaseQuery

class ParentQuery(BaseQuery):
    def _get_models(self):
        if hasattr(self, 'attr'):
            # dynamic relationship query
            return self.attr.target_mapper.class_
        else:
            return self._mapper_zero().class_

    def FilterByCustomer(self):
        model_class = self._get_models()
        return self.filter(model_class.customerId == int(g.customer.get('customerId')))

# using it like this:
class AccountWorkflowModel(db.Model):
    query_class = ParentQuery
    ...
To provide a custom method that will be used by all your models that inherit from a particular parent, first, as mentioned before, inherit from the Query class:
from flask_sqlalchemy import SQLAlchemy, BaseQuery
from sqlalchemy.inspection import inspect

class MyCustomQuery(BaseQuery):
    def all_active(self):
        # get the class
        modelClass = self._mapper_zero().class_
        # get the primary key column
        ins = inspect(modelClass)
        # get a list of passing objects
        passingObjs = []
        for modelObj in self:
            if modelObj.is_active == True:
                # add to passing object list
                passingObjs.append(modelObj.__dict__[ins.primary_key[0].name])
        # change to tuple
        passingObjs = tuple(passingObjs)
        # run a filter on the query object
        return self.filter(ins.primary_key[0].in_(passingObjs))

# add this to the constructor for your DB object
myDB = SQLAlchemy(query_class=MyCustomQuery)
This is for flask-sqlalchemy, for which people will still get here when looking for this answer.
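A hypothetical usage, assuming SomeModel is one of your own models defined on myDB above:

# returns only the rows whose is_active flag is set
active_rows = SomeModel.query.all_active().all()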
I'm a web application developer, and in using SQLAlchemy I find it clumsy to do this in many of my controllers when I want a specific row from (say) the users table:
from model import dbsession # SQLAlchemy SessionMaker instance
from model import User
user = dbsession().query(User).filter_by(some_kw_args).first()
Or say I want to add a user to the table (assuming another controller):
from model import dbsession # SQLAlchemy SessionMaker instance
from model import User
user = User("someval", "anotherval", "yanv")
dbsession().add(user)
So, because of that clumsiness (I won't go into some of my other personal idioms), I didn't like having to do all of that just to add a record to the table or to get a record from it. So I decided (after a lot of nasty hacking on SQLAlchemy, and deciding I was doing too many "magical" things) that this was appropriate for the proxy pattern.
I (at first) did something like this inside of the model module:
def proxy_user(delete=False, *args, **kwargs):
    session = DBSession()
    # Keyword args? Let's instantiate it...
    if (len(kwargs) > 0) and delete:
        obj = session.query(User).filter_by(**kwargs).first()
        session.delete(obj)
        return True
    elif len(kwargs) > 0:
        kwargs.update({'removed': False})
        return session.query(User).filter_by(**kwargs).first()
    else:
        # Otherwise, let's create an empty one and add it to the session...
        obj = User()
        session.add(obj)
        return obj
I did this for all of my models (nasty duplication of code, I know) and it works quite well. I can pass in keyword arguments to the proxy function and it handles all of the session querying for me (even providing a default filter keyword for the removed flag). I can initialize an empty model object and then add data to it by updating the object attributes and all of those changes are tracked (and committed/flushed) because the object has been added to the SQLAlchemy session.
So, to reduce duplication, I put the majority of the logic in a decorator function, and am now doing this:
def proxy_model(proxy):
    """Decorator for the proxy_model pattern."""
    def wrapper(delete=False, *args, **kwargs):
        model = proxy()
        session = DBSession()
        # Keyword args? Let's instantiate it...
        if (len(kwargs) > 0) and delete:
            obj = session.query(model).filter_by(**kwargs).first()
            session.delete(obj)
            return True
        elif len(kwargs) > 0:
            kwargs.update({'removed': False})
            return session.query(model).filter_by(**kwargs).first()
        else:
            # Otherwise, let's create an empty one and add it to the session...
            obj = model()
            session.add(obj)
            return obj
    return wrapper

# The proxy_model decorator is then used like so:
@proxy_model
def proxy_user(): return User
So now, in my controllers I can do this:
from model import proxy_user

# Fetch a user
user = proxy_user(email="someemail@ex.net")  # Returns a user model filtered by that email

# Creating a new user; ZopeTransaction will handle the commit so I don't do it manually
new_user = proxy_user()
new_user.email = 'anotheremail@ex.net'
new_user.password = 'just an example'
If I need to do other, more complex queries, I will usually write a function that handles them if I use them often. If it is a one-time thing, I will just import the dbsession instance and then do the "standard" SQLAlchemy ORM query.
This is much cleaner and works wonderfully, but I still feel like it isn't quite "locked in". Can anyone (especially more experienced Python programmers) provide a better idiom that would achieve a similar amount of lucidity while being a clearer abstraction?
You mention "didn't like having to do 'all of that'", where 'all of that' looks an awful lot like only 1-2 lines of code, so I feel this isn't really necessary. Basically, I don't think either statement you started with is all that verbose or confusing.
However, if I had to come up with a way to express this, I wouldn't use a decorator here, as you aren't really decorating anything. The function proxy_user really doesn't do anything without the decorator applied, IMO. Since you need to provide the name of the model somehow, I think you're better off just using a function and passing the model class to it. I also think that rolling the delete functionality into your proxy is out of place. And depending on how you've configured your Session, the repeated calls to DBSession() may be creating new, unrelated sessions, which is going to cause problems if you need to work with multiple objects in the same transaction.
Anyway, here's a quick stab at how I would refactor your decorator into a pair of functions:
def find_or_add(model, session, **kwargs):
    if len(kwargs) > 0:
        obj = session.query(model).filter_by(**kwargs).first()
        if not obj:
            obj = model(**kwargs)
            session.add(obj)
    else:
        # Otherwise, let's create an empty one and add it to the session...
        obj = model()
        session.add(obj)
    return obj

def find_and_delete(model, session, **kwargs):
    deleted = False
    obj = session.query(model).filter_by(**kwargs).first()
    if obj:
        session.delete(obj)
        deleted = True
    return deleted
Again, I'm not convinced this is necessary but I think I can agree that:
user = find_or_add(User, mysession, email="bob@localhost.com")
Is perhaps nicer looking than the straight SQLAlchemy code necessary to find / create a user and add them to session.
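And the deletion counterpart reads just as plainly (same hypothetical session and model):

was_deleted = find_and_delete(User, mysession, email="bob@localhost.com")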
I like the above functions better than your current decorator approach because:
The names clearly denote what your intent is here, where I feel proxy_user doesn't really make it clear that you want a user object if it exists, and otherwise want to create it.
The session is managed explicitly
They don't require me to wrap every model in a decorator
The find_or_add function always returns an instance of model instead of sometimes returning True, a query result set, or a model instance.
The find_and_delete function always returns a boolean indicating whether or not it was able to find and delete the record specified in kwargs.
Of course, you might consider using a class decorator to add these functions as methods on your model classes, or perhaps deriving your models from a base class that includes this functionality, so that you can do something like:
# let's add a classmethod to User or its base class:
class User(...):
    ...
    @classmethod
    def find_or_add(cls, session, **kwargs):
        if len(kwargs) > 0:
            obj = session.query(cls).filter_by(**kwargs).first()
            if not obj:
                obj = cls(**kwargs)
                session.add(obj)
        else:
            # Otherwise, let's create an empty one and add it to the session...
            obj = cls()
            session.add(obj)
        return obj
    ...

user = User.find_or_add(session, email="someone@tld.com")