SQLAlchemy hybrid method as an attribute in the query result?

In a search for more syntactically pleasant models, I've created something like (omitted not so relevant parts):
class Post(BaseModel):
    acl = db.relationship("ACL", cascade="all, delete-orphan")

    @hybrid_method
    def is_editable_by(self, user):
        if user_is_admin(user):
            return True
        else:
            return any(acl.user_id == user.id for acl in self.acl)

    @is_editable_by.expression
    def is_editable_by(cls, user):
        if user_is_admin(user):
            return true().label("is_editable")
        else:
            return and_(true(), cls.acl.any(user_id=user.id)).label("is_editable")
That works nicely with a query like:
results = db.session.query(Post, Post.is_editable_by(current_user)).all()
usage = [(item.Post.name, item.is_editable) for item in results]
However, I am getting a list of tuples, so the question is: does SQLAlchemy have any mechanism to make each result a single object (some kind of proxy/adapter maybe)? If it were not for the parameter (user), hybrid_property would be the exact answer:
[(item.name, item.is_editable) for item in results]
It is possible to achieve this with yet another class, but I wonder if the ORM already has something for this case. The main idea is that when a lot of objects are fetched, there is no need to calculate something for each object individually, or at least that calculation will not require extra calls to the database. In other words, how do I get result objects with derived attributes that depend on some parameter?
Maybe there is a totally different pattern for this kind of thing. This case is in some sense similar to load_only, but it's about adding a "virtual" column to an object, not removing one.
A solution that works with version 0.9 of SQLAlchemy would also be nice...
Subtlety:
results = db.session.query(Post, Post.is_editable_by(user1)).all()
[(item.name, item.is_editable) for item in results]
...
results = db.session.query(Post, Post.is_editable_by(user2)).all()
[(item.name, item.is_editable) for item in results]
should not cause problems.
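(For reference, the "yet another class" workaround I mean would be something like the sketch below. PostView is a made-up name, not an ORM feature; it just folds the computed column back onto the entity:)
class PostView:
    """Hypothetical read-only wrapper pairing a Post with a per-query flag."""
    def __init__(self, post, is_editable):
        self._post = post
        self.is_editable = is_editable

    def __getattr__(self, name):
        # Delegate everything else to the wrapped Post instance.
        return getattr(self._post, name)

results = db.session.query(Post, Post.is_editable_by(current_user)).all()
posts = [PostView(row.Post, row.is_editable) for row in results]
usage = [(p.name, p.is_editable) for p in posts]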

Related

Pass a queryset as the argument to __in in django?

I have a list of object ID's that I am getting from a query in a model's method, then I'm using that list to delete objects from a different model:
class SomeObject(models.Model):
    # [...]

    def do_stuff(self, some_param):
        # [...]
        ids_to_delete = {item.id for item in self.items.all()}
        # get_or_create returns an (object, created) tuple, so unpack it
        other_object, _ = OtherObject.objects.get_or_create(some_param=some_param)
        other_object.items.filter(item_id__in=ids_to_delete).delete()
What I don't like is that this takes 2 queries (well, technically 3 for the get_or_create(), but in the real code it's actually .filter(some_param=some_param).first() instead of the .get(), so I don't think there's any easy way around that).
How do I pass in an unevaluated queryset as the argument to an __in lookup?
I would like to do something like:
ids_to_delete = self.items.all().values("id")
other_object.items.filter(item_id__in=ids_to_delete).delete()
You can pass a QuerySet to the query:
other_object.items.filter(id__in=self.items.all()).delete()
This will turn it into a subquery. But not all databases, especially MySQL, handle such subqueries well. Furthermore, Django handles .delete() manually: it will first make a query to fetch the primary keys of the items, and then trigger the delete logic (also removing items that have a CASCADE dependency). So .delete() is not done as one query, but as at least two, and often more due to ForeignKeys with an on_delete trigger.
Note however that this removes the Item objects themselves; it does not just "unlink" them from other_object. For the latter, .remove(…) [Django-doc] can be used.
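A minimal sketch of that unlink variant, assuming the items relation supports .remove() (a reverse ForeignKey with null=True, or a ManyToManyField):
# Unlink instead of delete; for a reverse FK this requires null=True on the FK,
# for a ManyToManyField it works as-is.
other_object.items.remove(*self.items.all())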
I should've tried the code sample I posted; you can in fact do this. It's given as an example in the documentation, but the docs say to "be cautious about using nested queries and understand your database server's performance characteristics" and recommend forcing the subquery into a list instead:
values = Blog.objects.filter(
    name__contains='Cheddar').values_list('pk', flat=True)
entries = Entry.objects.filter(blog__in=list(values))

With SQLAlchemy, how can I convert a row to a "real" Python object?

I've been using SQLAlchemy with Alembic to simplify the database access I use and any data structure changes I make to the tables. This has been working out really well up until I started to notice more and more issues with SQLAlchemy "expiring" fields, from my point of view, nearly at random.
A case in point would be this snippet,
class HRDecimal(Model):
    dec_id = Column(String(50), index=True)

    @staticmethod
    def qfilter(*filters):
        """
        :rtype: list[HRDecimal]
        """
        return list(HRDecimal.query.filter(*filters))


class Meta(Model):
    dec_id = Column(String(50), index=True)

    @staticmethod
    def qfilter(*filters):
        """
        :rtype: list[Meta]
        """
        return list(Meta.query.filter(*filters))
Code:
ids = ['1', '2', '3']  # obviously fake list of ids

decs = HRDecimal.qfilter(
    HRDecimal.dec_id.in_(ids))
metas = Meta.qfilter(
    Meta.dec_id.in_(ids))

combined = []
for ident in ids:
    combined.append((
        ident,
        [dec for dec in decs if dec.dec_id == ident],
        [hm for hm in metas if hm.dec_id == ident]
    ))
For the above, there wasn't a problem, but when I process a list that may contain a few thousand ids, this started taking a huge amount of time, and if done from a web request in Flask, the thread would often be killed.
When I started poking around to find out why this was happening, the key area was
[dec for dec in decs if dec.dec_id == ident],
[hm for hm in metas if hm.dec_id == ident]
At some point during the combining of these (what I thought were) plain Python objects, the calls to dec.dec_id and hm.dec_id drop into the SQLAlchemy code. At best, we go into
def __get__(self, instance, owner):
    if instance is None:
        return self
    dict_ = instance_dict(instance)
    if self._supports_population and self.key in dict_:
        return dict_[self.key]
    else:
        return self.impl.get(instance_state(instance), dict_)
of InstrumentedAttribute in sqlalchemy/orm/attributes.py, which seems to be very slow. But even worse than this, I've observed times when fields had expired, and then we enter
def get(self, state, dict_, passive=PASSIVE_OFF):
    """Retrieve a value from the given object.

    If a callable is assembled on this object's attribute, and
    passive is False, the callable will be executed and the
    resulting value will be set as the new value for this attribute.
    """
    if self.key in dict_:
        return dict_[self.key]
    else:
        # if history present, don't load
        key = self.key
        if key not in state.committed_state or \
                state.committed_state[key] is NEVER_SET:
            if not passive & CALLABLES_OK:
                return PASSIVE_NO_RESULT
            if key in state.expired_attributes:
                value = state._load_expired(state, passive)
of AttributeImpl in the same file. The horrible issue here is that state._load_expired re-runs the SQL query completely. So in a situation like this, with a big list of idents, we end up running thousands of "small" SQL queries against the database, where I think we should only have been running the two "large" ones at the top.
Now, I've gotten around the expired issue by how I initialise the database for flask with session-options, changing
app = Flask(__name__)
CsrfProtect(app)
db = SQLAlchemy(app)
to
app = Flask(__name__)
CsrfProtect(app)
db = SQLAlchemy(
    app,
    session_options=dict(
        autoflush=False, autocommit=False, expire_on_commit=False))
This has definitely improved the situation where a row's fields seemed (from my observations) to expire at random, but the "normal" slowness of attribute access in SQLAlchemy is still an issue for what we're currently running.
Is there any way with SQLAlchemy, to get a "real" Python object returned from a query, instead of a proxied one like it is now, so it isn't being affected by this?
The randomness is probably related to either explicitly committing or rolling back at an inconvenient time, or to autocommit of some kind. In its default configuration a SQLAlchemy session expires all ORM-managed state when a transaction ends. This is usually a good thing, since when a transaction ends you've no idea what the current state of the DB is. This can be disabled, as you've done with expire_on_commit=False.
The ORM is also ill suited for extremely large bulk operations in general, as explained here. It is very well suited for handling complex object graphs and persisting them to a relational database with much less effort on your part, as it organizes the required inserts etc. for you. An important part of that is tracking changes to instance attributes. The SQLAlchemy Core is better suited for bulk operations.
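For illustration, a sketch of the same fetch through Core in 1.x-style select() syntax, bypassing ORM instrumentation entirely (assumes the HRDecimal model and ids list from the question):
from sqlalchemy import select

tbl = HRDecimal.__table__
rows = db.session.execute(
    select([tbl]).where(tbl.c.dec_id.in_(ids))
).fetchall()
# rows are lightweight tuple-like objects: no instrumentation,
# no expiration, no change tracking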
It looks like you're performing 2 queries that produce a potentially large number of results and then doing a manual "group by" on the data, but in a rather inefficient way: for each id you scan the entire list of results, which is O(nm), where n is the number of ids and m the number of results. Instead you should group the results into lists of objects by id first and then perform the "join". On some other database systems you could handle the grouping in SQL directly, but alas MySQL has no notion of arrays, other than JSON.
A possibly more performant version of your grouping could be for example:
from itertools import groupby
from operator import attrgetter

ids = ['1', '2', '3']  # obviously fake list of ids

# Order the results by `dec_id` for Python itertools.groupby. Cannot
# use your `qfilter()` method as it produces lists, not queries.
decs = HRDecimal.query.\
    filter(HRDecimal.dec_id.in_(ids)).\
    order_by(HRDecimal.dec_id).\
    all()
metas = Meta.query.\
    filter(Meta.dec_id.in_(ids)).\
    order_by(Meta.dec_id).\
    all()

key = attrgetter('dec_id')
decs_lookup = {dec_id: list(g) for dec_id, g in groupby(decs, key)}
metas_lookup = {dec_id: list(g) for dec_id, g in groupby(metas, key)}

combined = [(ident,
             decs_lookup.get(ident, []),
             metas_lookup.get(ident, []))
            for ident in ids]
Note that since in this version we iterate over the queries only once, all() is not strictly necessary, but it should not hurt much either. The grouping could also be done without the SQL ordering, using defaultdict(list):
from collections import defaultdict

decs = HRDecimal.query.filter(HRDecimal.dec_id.in_(ids)).all()
metas = Meta.query.filter(Meta.dec_id.in_(ids)).all()

decs_lookup = defaultdict(list)
metas_lookup = defaultdict(list)

for d in decs:
    decs_lookup[d.dec_id].append(d)

for m in metas:
    metas_lookup[m.dec_id].append(m)

combined = [(ident, decs_lookup[ident], metas_lookup[ident])
            for ident in ids]
And finally to answer your question, you can fetch "real" Python objects by querying for the Core table instead of the ORM entity:
decs = HRDecimal.query.\
    filter(HRDecimal.dec_id.in_(ids)).\
    with_entities(HRDecimal.__table__).\
    all()
which will result in a list of namedtuple-like objects that can easily be converted to dicts with _asdict().
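For example, assuming the decs list from the query just above:
# The keyed-tuple rows expose _asdict() in SQLAlchemy 1.x
plain = [row._asdict() for row in decs]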

Building Django Q() objects from other Q() objects, but with relation crossing context

I commonly find myself writing the same criteria in my Django application(s) more than once. I'll usually encapsulate it in a function that returns a Django Q() object, so that I can maintain the criteria in just one place.
I will do something like this in my code:
def CurrentAgentAgreementCriteria(useraccountid):
    '''Returns Q that finds agent agreements that give the useraccountid account current delegated permissions.'''
    AgentAccountMatch = Q(agent__account__id=useraccountid)
    StartBeforeNow = Q(start__lte=timezone.now())
    EndAfterNow = Q(end__gte=timezone.now())
    NoEnd = Q(end=None)
    # Now put the criteria together
    AgentAgreementCriteria = AgentAccountMatch & StartBeforeNow & (NoEnd | EndAfterNow)
    return AgentAgreementCriteria
This makes it so that I don't have to think through the DB model more than once, and I can combine the return values from these functions to build more complex criteria. That works well so far, and has saved me time already when the DB model changes.
Something I have realized as I start to combine the criteria from these functions is that a Q() object is inherently tied to the type of object .filter() is being called on. That is what I would expect.
I occasionally find myself wanting to use a Q() object from one of my functions to construct another Q object that is designed to filter a different, but related, model's instances.
Let's use a simple/contrived example to show what I mean. (It's simple enough that normally this would not be worth the overhead, but remember that I'm using a simple example here to illustrate what is more complicated in my app.)
Say I have a function that returns a Q() object that finds all Django users whose username starts with an 'a':
def UsernameStartsWithAaccount():
    return Q(username__startswith='a')
Say that I have a related model that is a user profile with settings including whether they want emails from us:
class UserProfile(models.Model):
    account = models.OneToOneField(User, unique=True, related_name='azendalesappprofile')
    emailMe = models.BooleanField(default=False)
Say I want to find all UserProfiles that have a username starting with 'a' AND want us to send them our email newsletter. I can easily write a Q() object for the latter:
wantsEmails = Q(emailMe=True)
but find myself wanting to do something like this for the former:
startsWithA = Q(account=UsernameStartsWithAaccount())
# And then
UserProfile.objects.filter(startsWithA & wantsEmails)
Unfortunately, that doesn't work (it generates invalid PSQL syntax when I tried it).
To put it another way, I'm looking for a syntax along the lines of Q(account=Q(id=9)) that would return the same results as Q(account__id=9).
So, a few questions arise from this:
Is there a syntax with Django Q() objects that allows you to add "context" to them to allow them to cross relational boundaries from the model you are running .filter() on?
If not, is this logically possible? (Since I can write Q(account__id=9) when I want something like Q(account=Q(id=9)), it seems like it should be.)
Maybe someone will suggest something better, but I ended up passing the context manually to such functions. I don't think there is an easy solution, as you might need to traverse a whole chain of related tables to get to your field, like table1__table2__table3__profile__user__username; how would you guess that? The User table could be linked to table2 too, but you don't need that path in this case, so I think you can't avoid setting the path manually.
Also, you can pass a dictionary to Q() and a list or a dictionary to the filter() functions, which is much easier to work with than using keyword parameters and applying &.
def UsernameStartsWithAaccount(context=''):
    field = 'username__startswith'
    if context:
        field = context + '__' + field
    return Q(**{field: 'a'})
Then, if you simply need to AND your conditions, you can combine them into a list and pass it to filter():
UserProfile.objects.filter(*[startsWithA, wantsEmails])
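For instance, applied to the UserProfile example above, where the path from UserProfile to the user's username goes through the account field, so context='account':
# Hedged usage sketch for the UserProfile example
startsWithA = UsernameStartsWithAaccount(context='account')
wantsEmails = Q(emailMe=True)
UserProfile.objects.filter(*[startsWithA, wantsEmails])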

Django method returning

With a method to get all related connections, is it better to simply return a queryset, or iterate through the queryset to return a list of IDs?
example code:
class Foo(models.Model):
    bar = models.ForeignKey(Bar, related_name="foos")


class Bar(models.Model):
    is_active = models.BooleanField(default=True)

    def get_foos(self):
        # this is the method to focus on for the question
To return a queryset, all that would need to be in get_foos is the following:
return self.foos.all()
but to get a list of ids, it would have to look like this:
foos = self.foos.all()
ids = []
for foo in foos:
    ids.append(foo.pk)
return ids
I was looking through an API, and they used the latter, but I don't really understand why you would ever actually do that if you could just do a one-liner like the former! Can someone please explain the benefits of each method and whether there are specific cases in which one is better than the other?
You can get Foo ids in one line:
ids = self.foos.values_list('pk', flat=True)
Using an id list is more efficient if it's used in other expressions, e.g.:
my_query.filter(foo__pk__in=ids)
If you really saw something like your second snippet in some published app, then you can send them a patch (and probably more than one, since chances are you'll find a lot of bad code in this app). Point is: there are a lot of very poorly written Django apps around. As Niekas posted, you can get the same result with way better performance in a no-brainer one-liner.
Now, as to why you'd return a list of pks instead of a full queryset: sometimes that's really just what you need, and sometimes it's just silly; it depends on the context, really.
I don't think this is a better vs worse scenario. The ID list can also be generated with a one-liner:
list(self.foos.values_list('pk', flat=True))
It's sometimes good practice for a function to return the minimal or least powerful representation of the data requested. Maybe you don't want callers to have access to the queryset object.
Maybe those IDs are going to the template rendering engine, and you'd rather not have templates with write access to querysets, for example.

get_or_create django model with ManyToMany field

Suppose I have three django models:
class Section(models.Model):
    name = models.CharField()


class Size(models.Model):
    section = models.ForeignKey(Section)
    size = models.IntegerField()


class Obj(models.Model):
    name = models.CharField()
    sizes = models.ManyToManyField(Size)
I would like to import a large amount of Obj data where many of the sizes fields will be identical. However, since Obj has a ManyToMany field, I can't just test for existence like I normally would. I would like to be able to do something like this:
x = Obj(name='foo')
x.sizes.add(sizemodel1)  # these can be looked up with get_or_create
...
x.sizes.add(sizemodelN)  # these can be looked up with get_or_create

# Now test whether x already exists, so I don't add a duplicate
try:
    Obj.objects.get(x)
except Obj.DoesNotExist:
    x.save()
However, I'm not aware of a way to get an object this way; you just have to pass in keyword parameters, which don't work for ManyToManyFields.
Is there any good way I can do this? The only idea I've had is to build up a set of Q objects to pass to get:
myq = myq & Q(sizes__id=sizemodelN.id)
But I am not sure this will even work...
Use a through model and then .get() against that.
http://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships
Once you have a through model, you can use .get(), .filter(), or .exists() to determine the existence of an object that you might otherwise want to create. Note that .get() is really intended for columns where uniqueness is enforced by the DB; you might get better performance with .exists() for your purposes.
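Since the models above don't declare an explicit through model, the auto-generated one is reachable as Obj.sizes.through. A hedged sketch of the existence check for a single link (x and some_size stand in for the objects from the question):
# Existence check via the auto-generated through model; obj_id and size_id
# are the column names Django generates for an implicit m2m table.
ObjSize = Obj.sizes.through
already_linked = ObjSize.objects.filter(obj_id=x.pk, size_id=some_size.pk).exists()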
If this is too radical or inconvenient a solution, you can also just grab the ManyRelatedManager and iterate through to determine if the object exists:
object_sizes = obj.sizes.all()
exists = object_sizes.filter(id__in=some_bunch_of_size_object_ids_you_are_curious_about).exists()
if not exists:
    # (your creation code here)
Your example doesn't quite work, because you can't add m2m relationships before x is saved, but it illustrates what you are trying to do pretty well: you have a list of Size objects created via get_or_create(), and want to create an Obj only if no Obj with exactly those sizes exists?
Unfortunately, this is not easily possible. Chaining Q objects with &, as in your myq & Q(sizes__id=sizemodelN.id) example, doesn't work for m2m.
You could certainly use Obj.objects.filter(sizes__in=sizes), but that would match an Obj that has just one size from a huge list of sizes.
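For completeness, a hedged ORM-only sketch of an exact-set match: separate .filter() calls on an m2m each add their own join (unlike &-ed Q objects in a single call), so requiring every size and then checking the total count pins down an exact set. size_ids is an assumed list of Size pks:
from django.db.models import Count

qs = Obj.objects.all()
for sid in size_ids:
    # each .filter() on an m2m adds a separate join: Obj must have ALL sizes
    qs = qs.filter(sizes__id=sid)
# ...and no extras: the distinct size count must equal len(size_ids)
qs = qs.annotate(n=Count('sizes', distinct=True)).filter(n=len(size_ids))
exact_exists = qs.exists()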
Check out this post for a similar __in exact question; it was answered by Malcolm, so I trust it quite a bit.
I wrote some Python for fun that could take care of this. (This is a one-time import, right?)
from django.db.models.query import QuerySet

def has_exact_m2m_match(match_list):
    """
    Return True if an Obj exists whose sizes exactly match `match_list`.
    """
    if isinstance(match_list, QuerySet):
        match_list = [x.id for x in match_list]

    results = {}
    match = set(match_list)

    # note: we are accessing the auto-generated through model for the sizes m2m
    for obj, size in \
            Obj.sizes.through.objects.filter(size__in=match).values_list('obj', 'size'):
        try:
            results[obj].append(size)
        except KeyError:
            results[obj] = [size]

    # If any Obj has exactly the same set of size IDs, it's a match, meaning
    # an Obj exists with exactly the sizes you provided, no more, no less.
    # (any() instead of bool(filter(...)) so this also works on Python 3.)
    return any(set(sizes) == match for sizes in results.values())


sizes = [size1, size2, size3, sizeN...]
if not has_exact_m2m_match(sizes):
    x = Obj.objects.create(name='foo')  # saves, so you can use x.sizes.add
    x.sizes.add(*sizes)
