I'm using SQLAlchemy with Elixir and I'm having some trouble building a query.
I have two entities, Customer and CustomerList, with a many-to-many relationship.
customer_lists_customers_table = Table('customer_lists_customers',
    metadata,
    Column('id', Integer, primary_key=True),
    Column('customer_list_id', Integer, ForeignKey("customer_lists.id")),
    Column('customer_id', Integer, ForeignKey("customers.id")))

class Customer(Entity):
    [...]
    customer_lists = ManyToMany('CustomerList', table=customer_lists_customers_table)

class CustomerList(Entity):
    [...]
    customers = ManyToMany('Customer', table=customer_lists_customers_table)
I'm trying to find the CustomerLists that contain a given customer:
customer = [...]
CustomerList.query.filter_by(customers.contains(customer)).all()
But I get the error:
NameError: global name 'customers' is not defined
customers doesn't seem to resolve to the entity's field here. Is there a special query form for working with relationships (or ManyToMany relationships in particular)?
Thanks
You can use a regular filter: query.filter(CustomerList.customers.contains(customer)). See the SQLAlchemy documentation for more examples. It's actually filter_by that's the special case: the query.filter_by(**kwargs) shorthand works only for simple equality comparisons. Under the hood, query.filter_by(foo="bar", baz=42) is delegated to the equivalent of query.filter(and_(MyClass.foo == "bar", MyClass.baz == 42)). (There's actually slightly more magic to figure out which class the property belongs to when the query involves several entities, but it still boils down to simple delegation.)
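To make that equivalence concrete, here is a minimal self-contained sketch; the MyClass model, its foo/baz columns, and the in-memory SQLite engine are illustrative assumptions, not part of the original question:

from sqlalchemy import Column, Integer, String, and_, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class MyClass(Base):
    __tablename__ = 'my_class'
    id = Column(Integer, primary_key=True)
    foo = Column(String)
    baz = Column(Integer)

engine = create_engine('sqlite://')  # throwaway in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# These two queries render the same WHERE clause (foo = :foo_1 AND baz = :baz_1):
by_kwargs = session.query(MyClass).filter_by(foo="bar", baz=42)
explicit = session.query(MyClass).filter(and_(MyClass.foo == "bar", MyClass.baz == 42))
print(by_kwargs)
print(explicit)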
Read the error message carefully; it points to the source of the problem. Did you mean
CustomerList.query.filter(CustomerList.customers.contains(customer)).all()?
Update: when using a declarative definition you can refer to a just-defined attribute inside the class body, but these attributes are not visible outside the class:

class MyClass(object):
    prop1 = 'somevalue'
    prop2 = prop1.upper()  # prop1 is visible here

val2 = MyClass.prop1  # This is OK
val1 = prop1.lower()  # And this will raise NameError, since there is no
                      # variable `prop1` in the global scope
CustomerList.query.filter(CustomerList.customers.contains(customer)).all() should work fine.
I have found many explanations for how to create a self-referential many-to-many relationship (for user followers or friends) using a separate table or class:
Below are three examples, one from Mike Bayer himself:
Many-to-many self-referential relationship in sqlalchemy
How can I achieve a self-referencing many-to-many relationship on the SQLAlchemy ORM back referencing to the same attribute?
Miguel Grinberg's Flask Megatutorial on followers
But in every example I've found, the syntax for defining the primaryjoin and secondaryjoin in the relationship is an early-binding one:
# this relationship is used for persistence
friends = relationship("User", secondary=friendship,
                       primaryjoin=id == friendship.c.friend_a_id,
                       secondaryjoin=id == friendship.c.friend_b_id,
                       )
This works great, except in one circumstance: when you use a Base class to define the id column for all of your models, as shown in Mixins: Augmenting the Base in the docs.
My Base class and followers table are defined as follows:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Base(db.Model):
    __abstract__ = True
    id = db.Column(db.Integer, primary_key=True)

user_flrs = db.Table(
    'user_flrs',
    db.Column('follower_id', db.Integer, db.ForeignKey('user.id')),
    db.Column('followed_id', db.Integer, db.ForeignKey('user.id')))
But now I have trouble with the followers relationship that served me loyally for a while before I moved the ids to the mixin:
class User(Base):
    __tablename__ = 'user'
    followed_users = db.relationship(
        'User', secondary=user_flrs, primaryjoin=(user_flrs.c.follower_id == id),
        secondaryjoin=(user_flrs.c.followed_id == id),
        backref=db.backref('followers', lazy='dynamic'), lazy='dynamic')

db.class_mapper(User)  # trigger class mapper configuration
Presumably this is because id is not present in the local scope, though the error it throws seems strange for that:
ArgumentError: Could not locate any simple equality expressions involving locally mapped foreign key columns for primary join condition 'user_flrs.follower_id = :follower_id_1' on relationship User.followed_users. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation. To allow comparison operators other than '==', the relationship can be marked as viewonly=True.
And it throws the same error if I change the parentheses to quotes to take advantage of late binding. I have no idea how to annotate this with foreign() and remote(), because I simply don't know what SQLAlchemy wants me to describe as foreign and remote on a self-referential relationship that crosses a secondary table. I've tried many combinations, but none of them has worked so far.
I had a very similar (though not identical) problem with a self-referential relationship that did not span a separate table and the key was simply to convert the remote_side argument to a late-binding one. This makes sense to me, as the id column isn't present during an early-binding process.
If it is not late-binding that I am having trouble with, please advise. In the current scope, though, my understanding is that id is mapped to the Python builtin id() and thus will not work as an early-binding relationship.
Converting id to Base.id in the joins results in the following error:
ArgumentError: Could not locate any simple equality expressions involving locally mapped foreign key columns for primary join condition 'user_flrs.follower_id = "<name unknown>"' on relationship User.followed_users. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation. To allow comparison operators other than '==', the relationship can be marked as viewonly=True.
You can't use id in your join filters, no, because that's the built-in id() function, not the User.id column.
You have three options:
Define the relationship after creating your User model, assigning it to a new User attribute; you can then reference User.id as it has been pulled in from the base:
class User(Base):
    # ...

User.followed_users = db.relationship(
    User,
    secondary=user_flrs,
    primaryjoin=user_flrs.c.follower_id == User.id,
    secondaryjoin=user_flrs.c.followed_id == User.id,
    backref=db.backref('followers', lazy='dynamic'),
    lazy='dynamic'
)
Use strings for the join expressions. Any argument to relationship() that is a string is evaluated as a Python expression when configuring the mapper, not just the first argument:
class User(Base):
    # ...
    followed_users = db.relationship(
        'User',
        secondary=user_flrs,
        primaryjoin="user_flrs.c.follower_id == User.id",
        secondaryjoin="user_flrs.c.followed_id == User.id",
        backref=db.backref('followers', lazy='dynamic'),
        lazy='dynamic'
    )
Define the relationships as callables; these are called at mapper configuration time to produce the final object:
class User(Base):
    # ...
    followed_users = db.relationship(
        'User',
        secondary=user_flrs,
        primaryjoin=lambda: user_flrs.c.follower_id == User.id,
        secondaryjoin=lambda: user_flrs.c.followed_id == User.id,
        backref=db.backref('followers', lazy='dynamic'),
        lazy='dynamic'
    )
For the latter two options, see the sqlalchemy.orm.relationship() documentation:
Some arguments accepted by relationship() optionally accept a callable function, which when called produces the desired value. The callable is invoked by the parent Mapper at “mapper initialization” time, which happens only when mappers are first used, and is assumed to be after all mappings have been constructed. This can be used to resolve order-of-declaration and other dependency issues, such as if Child is declared below Parent in the same file[.]
[...]
When using the Declarative extension, the Declarative initializer allows string arguments to be passed to relationship(). These string arguments are converted into callables that evaluate the string as Python code, using the Declarative class-registry as a namespace. This allows the lookup of related classes to be automatic via their string name, and removes the need to import related classes at all into the local module space[.]
[...]
primaryjoin –
[...]
primaryjoin may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.
[...]
secondaryjoin –
[...]
secondaryjoin may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.
Both the string and the lambda define the same user_flrs.c.followed_id == User.id / user_flrs.c.follower_id == User.id expressions as used in the first option, but because they are given as a string and callable function, respectively, you postpone evaluation until SQLAlchemy needs to have those declarations finalised.
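For completeness, a hedged usage sketch of what the finished mapping gives you; the alice/bob objects are invented here and the snippet assumes a configured Flask-SQLAlchemy app context:

alice = User()
bob = User()
alice.followed_users.append(bob)   # appending through the dynamic relationship
db.session.add_all([alice, bob])
db.session.commit()

# lazy='dynamic' means both sides hand back queries rather than plain lists:
alice.followed_users.filter(User.id == bob.id).count()  # -> 1
bob.followers.all()                                      # -> [alice]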
I'm using a declarative SQLAlchemy class to perform computations. Part of the work requires running the computation for every configuration provided by a different table, and there is no foreign key relationship between the two tables.
This analogy is nothing like my real application, but hopefully it helps convey what I want to happen.
I have a set of cars and a list of paint colors.
The car object has a factory method which yields the car painted in every possible color:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

def PaintACar(car, color):
    pass

Base = declarative_base()

class Colors(Base):
    __tablename__ = u'colors'
    id = Column('id', Integer, primary_key=True)
    color = Column('color', Unicode)

class Car(Base):
    __tablename__ = u'car'
    id = Column('id', Integer, primary_key=True)
    model = Column('model', Unicode)

    # is this somehow possible?
    all_color_objects = collection(...)

    # I know this is possible, but would like to know if there's another way
    @property
    def all_colors(self):
        s = Session.object_session(self)
        return s.query(Colors).all()

    def CarColorFactory(self):
        for color in self.all_color_objects:
            yield PaintACar(self, color)
My question: is it possible to produce all_color_objects somehow, without resorting to finding the session and manually issuing a query as in the all_colors property?
It's been a while, so I'm providing the best answer I saw (as a comment by zzzeek). Basically, I was looking for one-off syntactic sugar. My original 'ugly' implementation works just fine.
What better way would there be here besides getting a Session and producing the query you want? Are you looking for being able to add to the collection and have that automatically flush things? (Just add the objects to the Session.) Do you not like using object_session(self)? (You can build some mixin class or something that hides that for you.) It's not really clear what the problem is. The objects here have no relationship to the parent class, so there's no particular intelligence SQLAlchemy would be able to add.
– zzzeek Jun 17 at 5:03
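As a hedged sketch of the mixin zzzeek hints at, the object_session() lookup can be hidden behind a reusable property; the HasAllColors name is made up, and Colors is the model from the question:

from sqlalchemy.orm import Session

class HasAllColors(object):
    @property
    def all_color_objects(self):
        # Find the Session this instance is attached to and query through it.
        return Session.object_session(self).query(Colors).all()

# Car would then simply inherit the property:
# class Car(HasAllColors, Base):
#     ...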
I'm a beginner with SQLAlchemy and have found that queries can be written in two ways:
Approach 1:
DBSession = scoped_session(sessionmaker())

class _Base(object):
    query = DBSession.query_property()

Base = declarative_base(cls=_Base)

class SomeModel(Base):
    __tablename__ = 'some_model'
    key = Column(Unicode, primary_key=True)
    value = Column(Unicode)

# When querying
result = SomeModel.query.filter(...)
Approach 2:
DBSession = scoped_session(sessionmaker())

Base = declarative_base()

class SomeModel(Base):
    __tablename__ = 'some_model'
    key = Column(Unicode, primary_key=True)
    value = Column(Unicode)

# When querying
session = DBSession()
result = session.query(SomeModel).filter(...)
Is there any difference between them?
In the code above, there is no difference. That's because, in the first example, the line query = DBSession.query_property():
- binds the query property explicitly to DBSession, and
- passes no custom Query class to query_property.
As @petr-viktorin points out in the answer here, in the first example there must be a session available before you define your model, which might be problematic depending on the structure of your application.
If, however, you need a custom query that adds additional query parameters automatically to all queries, then only the first example will allow that. A custom query class that inherits from sqlalchemy.orm.query.Query can be passed as an argument to query_property. This question shows an example of that pattern.
Even if a model object has a custom query property defined on it, that property is not used when querying with session.query, as in the last line of the second example. This means something like the first example is the only option if you need a custom query class.
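A hedged sketch of that pattern, using a soft-delete convention as the motivating case; the FilteredQuery class, the deleted column, and the model name are all invented for illustration:

from sqlalchemy import Boolean, Column, Unicode, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, scoped_session, sessionmaker

class FilteredQuery(Query):
    # Every model using this query class gets this helper on Model.query.
    def get_active(self):
        return self.filter_by(deleted=False)

DBSession = scoped_session(sessionmaker())

class _Base(object):
    # query_property accepts the Query subclass to use for Model.query
    query = DBSession.query_property(FilteredQuery)

Base = declarative_base(cls=_Base)

class SomeModel(Base):
    __tablename__ = 'some_model'
    key = Column(Unicode, primary_key=True)
    value = Column(Unicode)
    deleted = Column(Boolean, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
DBSession.configure(bind=engine)

# SomeModel.query is a FilteredQuery, so the helper is available everywhere:
active_rows = SomeModel.query.get_active().all()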
I see these downsides to query_property:
- You cannot use it on a different Session than the one you've configured (though you could always use session.query then).
- You need a session object available before you define your schema.
These could bite you when you want to write tests, for example.
Also, session.query fits better with how SQLAlchemy works; query_property looks like it's just added on top for convenience (or similarity with other systems?).
I'd recommend you stick to session.query.
An answer (here) to a different SQLAlchemy question might help. That answer starts with:
You can use Model.query, because the Model (or usually its base class, especially in cases where declarative extension is used) is assigned Session.query_property. In this case the Model.query is equivalent to Session.query(Model).
I'm using SQLAlchemy under Flask. I have a table that represents a mode my system can be in. I also have a table that contains lists of elevations that are applicable to each mode. (So this is a many-to-many relationship.) In the past when I've added an attribute (like elevations inside ScanModes here) to a class mapped to a table, the target was also a class.
Does it make more sense to wrap the elevations in a class (and make a corresponding table?) so I can use relationship to make ScanModes.elevations work, or should I use a query-enabled property? I'm also open to other suggestions.
elevation_mode_table = db.Table(
    'elevation_mode', db.metadata,
    db.Column('scan_mode', db.String,
              db.ForeignKey('scan_modes.scan_mode'),
              nullable=False),
    db.Column('elevation', db.Float, nullable=False),
    db.PrimaryKeyConstraint('scan_mode', 'elevation'))
class ScanModes(db.Model):
    scan_mode = db.Column(db.String, primary_key=True)
    elevations = ?

    def __init__(self, scan_mode):
        self.scan_mode = scan_mode
The most straightforward approach would be to just map elevation_mode_table to a class:
from sqlalchemy.orm import mapper, relationship

class ElevationMode(object):
    pass

mapper(ElevationMode, elevation_mode_table)

class ScanModes(db.Model):
    scan_mode = db.Column(db.String, primary_key=True)
    elevations = relationship(ElevationMode)
Of course, even easier is to just have ElevationMode be a declarative class in the first place; if you need to deal with elevation_mode_table you'd get that from ElevationMode.__table__...
The "query enabled property" idea here, sure you could do that too though you'd lose the caching benefits of relationship.
If I have an SQLAlchemy-mapped instance, can I get an underlying dynamic query object corresponding to an attribute of that instance?
For example:
e = Employee()
e.projects
#how do I get a query object loaded with the underlying sql of e.projects
I think you're describing the lazy="dynamic" option of relationship(), something like:
class Employee(Base):
    __tablename__ = "employees"
    ...
    projects = relationship(..., lazy="dynamic")
which will cause Employee().projects to return a sqlalchemy.orm.Query instance instead of a collection containing the related items. However, that means there's no (simple) way to access the collection directly. If you still need that (most likely you really do want it to be lazily loaded), set up two relationship()s instead:
class Employee(Base):
    __tablename__ = "employees"
    ...
    projects_query = relationship(..., lazy="dynamic")
    projects = relationship(..., lazy="select")
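A hedged usage sketch of what the two attributes then give you; the Project model and its name column are assumptions, and the elided relationship arguments above would need to be filled in:

e = session.query(Employee).first()

# `projects` behaves like a normal lazily loaded collection:
e.projects                                                 # -> [<Project>, <Project>, ...]

# `projects_query` hands back a Query you can keep refining:
e.projects_query.filter(Project.name.like('%Q3%')).all()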
edit: You said
I need somehow to get the dynamic query object of an already lazy relationship mapped property.
Suppose we have an instance i of class Foo, related to a class Bar by the property bars. First, we need to get the property that handles the relationship:
from sqlalchemy.orm.attributes import manager_of_class
p = manager_of_class(Foo).mapper.get_property('bars')
We'd like an expression that and_s together all of the columns on i that relate it to bars. If you need to operate on Foo through an alias, substitute it in here.
e = sqlalchemy.and_(*[getattr(Foo, c.key) == getattr(i, c.key)
                      for c in p.local_side])
Now we can create a query that expresses this relationship. Substitute aliases for Foo and Bar here as needed.
q = session.query(Foo) \
        .filter(e) \
        .join(Foo.bars) \
        .with_entities(Bar)
Not sure about the question in general, but you can definitely enable SQL logging by setting echo=True, which will log the SQL statement as soon as you try to get the value of the attribute.
Depending on your relationship configuration, it might have been eagerly pre-loaded.
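A minimal sketch of the echo=True suggestion; the SQLite URL is just an example:

from sqlalchemy import create_engine

# echo=True makes the engine log every emitted SQL statement to stdout,
# so accessing e.projects shows exactly which query the relationship runs.
engine = create_engine('sqlite:///example.db', echo=True)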