Using Python 3.6 and SQLAlchemy 1.2.
I have a package called events which defines Match as a type of Event, using joined table inheritance to distinguish it from other types of Event. The only other event type is Competition; every event is one or the other:
class Event(Base):
    __tablename__ = 'tbl_events'

    event_id = Column(Integer, primary_key=True)
    event_type_id = Column(String(50))  # 0 is Match, 1 is Competition

    __mapper_args__ = {'polymorphic_on': event_type_id}


class Match(Event):
    __tablename__ = 'tbl_match_details'

    event_id = Column(Integer,
                      ForeignKey('tbl_events.event_id'),
                      primary_key=True)
    team_1 = Column(String(50))
    team_2 = Column(String(50))

    __mapper_args__ = {'polymorphic_identity': 0}
I'm using Match in another package that distinguishes multiple types of Match. That package relies on the Match object's attributes and methods to pull event info from the database, but otherwise operates away from the database:
from events import Match

class BaseMatch(Match):
    # define common methods and attrs
    pass

class TennisMatch(BaseMatch):
    # do tennis-based stuff
    pass

class FootballMatch(BaseMatch):
    # do football-based things
    pass
Any differences between events.Match and the classes that inherit from it only matter in this package, and this package only reads from the database; it never inserts or updates.
The issue I'm having is that attempting to instantiate any of the classes that inherit from Match results in a NULL value being passed into the query for the event_type_id field. This is the WHERE part of the query:
WHERE tbl_match_details.event_id = %s AND tbl_events.match_comp_id IN (NULL)
I can't simply give each class their own polymorphic identifier as those identifiers won't exist in the database.
I tried this:
class BaseMatch(Match):
    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_identity': 0}

class TennisMatch(BaseMatch):
    # do tennis stuff
    pass

class FootballMatch(BaseMatch):
    # do footy stuff
    pass
but importing the module, I get warnings like:
SAWarning: Reassigning polymorphic association for identity 0 from <Mapper at 0x7f80197f0550; Match> to <Mapper at 0x7f80197a9fd0; BaseModel>: Check for duplicate use of 0 as value for polymorphic_identity.
SAWarning: Reassigning polymorphic association for identity 0 from <Mapper at 0x7f80197a9fd0; BaseModel> to <Mapper at 0x7f800dfdf940; TennisMatch>: Check for duplicate use of 0 as value for polymorphic_identity.
I get one of those warnings for each class that inherits from Match, and when I attempt to instantiate any of the match types, I get an instance of whichever type was last associated with that polymorphic id.
I'd really appreciate a nudge in the right direction!
Thanks.
Here's what I've done to work around this. I'm not sure it is 'right', but it has allowed me to move forward with what I'm doing and helped me understand a bit more of the goings-on under the hood.
I've created factory methods on my Event, Competition and Match classes, and a class attribute on Competition and Match that gives me access to each event type's event_type_id value:
from sqlalchemy import inspect

class Event(Base):
    __tablename__ = 'tbl_events'

    event_id = Column(Integer, primary_key=True)
    event_type_id = Column(String(50))  # 0 is Match, 1 is Competition

    __mapper_args__ = {'polymorphic_on': event_type_id}

    @classmethod
    def from_id(cls, id, session):
        mapper = inspect(cls).mapper
        mapper.polymorphic_map[cls.EVENT_TYPE_ID] = mapper
        mapper.polymorphic_identity = cls.EVENT_TYPE_ID
        return session.query(cls).filter_by(event_id=id).one()


class Match(Event):
    EVENT_TYPE_ID = 0
    __tablename__ = 'tbl_match_details'

    event_id = Column(Integer,
                      ForeignKey('tbl_events.event_id'),
                      primary_key=True)
    team_1 = Column(String(50))
    team_2 = Column(String(50))

    __mapper_args__ = {'polymorphic_identity': EVENT_TYPE_ID}
This way, whenever classes that inherit from Match or Competition are instantiated through the factory method, the polymorphic identity is forced to the identity defined on the parent class, and the polymorphic map points that identity at the class the factory is called on.
An obvious disadvantage is that this only works when objects are instantiated through the factory methods. That's fine in this case, but maybe not for all.
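To see the workaround in motion, here is a self-contained sketch run against an in-memory SQLite database. The TennisMatch subclass, the integer discriminator column (the original used String(50)), and the team names are all illustrative assumptions, not part of the original packages:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, inspect
from sqlalchemy.orm import Session
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # older releases

Base = declarative_base()

class Event(Base):
    __tablename__ = 'tbl_events'
    event_id = Column(Integer, primary_key=True)
    event_type_id = Column(Integer)  # integer discriminator for simplicity
    __mapper_args__ = {'polymorphic_on': event_type_id}

    @classmethod
    def from_id(cls, id_, session):
        # Point the discriminator value at whichever class the factory
        # is called on, then query as usual.
        mapper = inspect(cls).mapper
        mapper.polymorphic_map[cls.EVENT_TYPE_ID] = mapper
        mapper.polymorphic_identity = cls.EVENT_TYPE_ID
        return session.query(cls).filter_by(event_id=id_).one()

class Match(Event):
    EVENT_TYPE_ID = 0
    __tablename__ = 'tbl_match_details'
    event_id = Column(Integer, ForeignKey('tbl_events.event_id'),
                      primary_key=True)
    team_1 = Column(String(50))
    team_2 = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': EVENT_TYPE_ID}

class TennisMatch(Match):
    """Adds behaviour only; no table or polymorphic identity of its own."""

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Match(event_id=1, team_1='Federer', team_2='Nadal'))
session.commit()
session.expunge_all()

match = TennisMatch.from_id(1, session)
print(type(match).__name__, match.team_1)
```

Calling the factory on TennisMatch repoints the shared polymorphic map before the query runs, so the loaded row comes back as a TennisMatch rather than a plain Match.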
Would appreciate any feedback on how I've gone about this and any pointers toward a cleaner solution.
Thanks
I am working with SQLAlchemy's ORM to create classes mapped to SQL tables. I am running into issues generating the relationships between these classes, since they reference each other before the class is declared. When I run the code, the interpreter complains: NameError: name 'Account' is not defined
I've included a code sample below that demonstrates how I am declaring these classes.
class Location(Base):
    __tablename__ = 'locations'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    address = Column(String)
    city = Column(String)
    state = Column(String)
    zip_code = Column(String)
    account = sa.orm.relationship('Account', order_by=Account.id, back_populates='location')
    entity = sa.orm.relationship('Entity', order_by=Entity.id, back_populates='location')

    def __repr__(self):
        return "<Location(name='{}', address='{}', city='{}', state='{}', zip_code='{}')>".\
            format(self.name, self.address, self.city, self.state, self.zip_code)


class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    number = Column(String)
    institution = Column(String)
    # entity_id = Column(Integer, sa.ForeignKey('entities.id'))
    entity = sa.orm.relationship('Entity', back_populates='accounts')
    location = sa.orm.relationship('Location', order_by=Location.id, back_populates='account')

    def __repr__(self):
        return "<Account(name='{}', account={}, institution={}, entity={})>".\
            format(self.name, self.number, self.institution, self.entity)


class Entity(Base):
    __tablename__ = 'entities'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    accounts = sa.orm.relationship('Account', order_by=Account.id, back_populates='entity')
    location = sa.orm.relationship('Location', order_by=Location.id, back_populates='entity')

    def __repr__(self):
        return "<Entity(name='{}', location='{}')>".format(self.name, self.location)
What am I missing here? Is there a way to define all the classes first and then reference them later, as you can with functions? For example, with functions it's simple to call main at the bottom after all the functions are defined:
def main():
    foo()

def foo():
    pass

if __name__ == '__main__':
    main()
Define your orderings either as callables or as expression strings, as explained in the relationship API documentation:
class Location(Base):
    ...
    account = sa.orm.relationship('Account',
                                  order_by=lambda: Account.id, ...)
or
class Location(Base):
    ...
    account = sa.orm.relationship('Account',
                                  order_by='Account.id', ...)
The problem is that during evaluation of the Location class' body, the name Account does not yet exist in the global scope and was not defined in the local scope of the class body. Passing a function/lambda defers the evaluation to "mapper initialization time":
Some arguments accepted by relationship() optionally accept a callable function, which when called produces the desired value. The callable is invoked by the parent Mapper at “mapper initialization” time, which happens only when mappers are first used, and is assumed to be after all mappings have been constructed. This can be used to resolve order-of-declaration and other dependency issues, such as if Child is declared below Parent in the same file
Passing a string will also resolve the order-of-declaration issue, and provides another feature:
These string arguments are converted into callables that evaluate the string as Python code, using the Declarative class-registry as a namespace. This allows the lookup of related classes to be automatic via their string name, and removes the need to import related classes at all into the local module space
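To make the deferral concrete, here is a minimal self-contained sketch. The ForeignKey column is added here because relationship() needs one to join on (the original posting had it commented out), and the attribute names are trimmed to one pair:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, relationship
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Location(Base):
    __tablename__ = 'locations'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    # 'Account' does not exist yet; the lambda defers evaluation until
    # mapper initialization, after all classes have been declared.
    accounts = relationship('Account', order_by=lambda: Account.id,
                            back_populates='location')

class Account(Base):
    __tablename__ = 'accounts'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    location_id = sa.Column(sa.Integer, sa.ForeignKey('locations.id'))
    location = relationship('Location', back_populates='accounts')

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
loc = Location(name='HQ')
loc.accounts = [Account(id=2, name='b'), Account(id=1, name='a')]
session.add(loc)
session.commit()
session.expunge_all()

loc = session.query(Location).one()
print([a.id for a in loc.accounts])
```

The string form order_by='Account.id' would behave identically; both resolve Account only once all mappings are configured.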
Working on a project with SQLAlchemy, I was attempting to employ what I believe is a composition pattern. I have a class of "owner" objects; I encapsulate some functionality in component classes and give the owners different capabilities by assigning components to them. The owners and the components all have state that needs to be serialized, so they're all SQLAlchemy objects. Here's a simple example:
class Employee(DeclarativeBase):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, name):
        self.name = name


class SalesRole(DeclarativeBase):
    __tablename__ = 'sales_role'
    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship(
        'Employee',
        backref=backref('sales_role', uselist=False)
    )

    def __init__(self, employee):
        self.employee = employee
        self.total_sales = 0

    def __repr__(self):
        return "<SalesRole(employee='%s')>" % self.employee.name

    # Sales-specific data and behavior
    total_sales = Column(Float)


class CustomerSupportRole(DeclarativeBase):
    __tablename__ = 'support_role'
    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship(
        'Employee',
        backref=backref('support_role', uselist=False)
    )

    def __init__(self, employee):
        self.employee = employee
        self.tickets_resolved = 0

    def __repr__(self):
        return "<CustomerSupportRole(employee='%s')>" % self.employee.name

    # Support-specific data and behavior
    tickets_resolved = Column(Integer)
What I would like to be able to do is to define a property on the owner class that returns a collection (of whatever kind) containing all such components that have been assigned to the owner, i.e.,
# Define an Employee object bob and assign it some roles
>>> bob.roles
[<SalesRole(employee='Bob')>, <CustomerSupportRole(employee='Bob')>]
I want to accomplish this without hardcoding any references to the types of components that can exist on the owner class -- I want to define new components without changing code anywhere else.
I can more or less accomplish this with an intermediary table mapping owner instances to their components using generic_relationship from sqlalchemy-utils. Unfortunately, generic_relationship severs SQLAlchemy's automatic cascading of child objects.
Another approach I tried, with help elsewhere, was to use SQLAlchemy's event framework to listen for mappings of relationships targeting the owner class ('mapper_configured' events). Each component would define a backref from itself to the owner class and use the info parameter to set an arbitrary flag marking the relationship as one of the components to expose via this collection. The function registered to catch mapping events would test for this flag and, hypothetically, build the collection containing those relationships, but we could never figure out how to make that work.
It's not important to me that this collection be a SQLAlchemy object through which I can actually write to the database (i.e. bob.roles.append(SalesRole())). That would be very cool, but a property serving as a read-only iterable view would suffice.
It's not important whether the named backref attributes exist on the owner class (e.g., bob.sales_role). It's fine if they do, but the collection matters more to me.
Like I mentioned earlier, cascading is important (unless you want to convince me it's not!).
Again, it is important that I don't have to hardcode a list of component types anywhere. Whatever magic distinguishes the classes I want to show up in this collection of components from everything else should live in the definition of the components themselves. I want this to be readily extensible as I define new components.
Is there a way to make this work? Or should I be taking a different approach in general -- feel free to tell me this sounds like an XY problem.
And the answer is something I suspected but didn't look into hard enough: class inheritance. Simple, elegant, and it accomplishes everything I want.
class Role(DeclarativeBase):
    __tablename__ = 'role'
    id = Column(Integer, primary_key=True)
    role_type = Column(String)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship('Employee', backref='roles')

    __mapper_args__ = {
        'polymorphic_identity': 'employee',
        'polymorphic_on': role_type
    }


class SalesRole(Role):
    __tablename__ = 'sales_role'
    id = Column(Integer, ForeignKey('role.id'), primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'sales_role'
    }

    # Sales-specific attributes, etc.


class CustomerSupportRole(Role):
    __tablename__ = 'support_role'
    id = Column(Integer, ForeignKey('role.id'), primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'support_role'
    }

    # Support-specific attributes, etc.
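A quick round trip shows the payoff: a new component only has to subclass Role to appear in bob.roles. This sketch fills in a minimal Employee and gives the base mapper its own identity label (an assumption on my part), then runs against in-memory SQLite:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, relationship
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Role(Base):
    __tablename__ = 'role'
    id = Column(Integer, primary_key=True)
    role_type = Column(String)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship('Employee', backref='roles')
    __mapper_args__ = {'polymorphic_identity': 'role',
                       'polymorphic_on': role_type}

class SalesRole(Role):
    __tablename__ = 'sales_role'
    id = Column(Integer, ForeignKey('role.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'sales_role'}

class CustomerSupportRole(Role):
    __tablename__ = 'support_role'
    id = Column(Integer, ForeignKey('role.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'support_role'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

bob = Employee(name='Bob')
session.add_all([bob, SalesRole(employee=bob),
                 CustomerSupportRole(employee=bob)])
session.commit()
session.expunge_all()

# The backref collection loads each row as its concrete subclass.
bob = session.query(Employee).one()
print(sorted(type(r).__name__ for r in bob.roles))
```

Because the discriminator drives instantiation, bob.roles yields SalesRole and CustomerSupportRole instances without Employee ever naming them.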
I'm using SQLAlchemy to represent a relationship between authors. I'd like to have authors related to other authors (coauthorship), with extra data in the relation, so that from an author a I can find their coauthors.
Between two different classes, this is done like so:
class Association(Base):
    __tablename__ = 'association'
    left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
    extra_data = Column(String(80))
    child = relationship('Child', backref='parent_assocs')


class Parent(Base):
    __tablename__ = 'left'
    id = Column(Integer, primary_key=True)
    children = relationship('Association', backref='parent')


class Child(Base):
    __tablename__ = 'right'
    id = Column(Integer, primary_key=True)
but how would I do this in my case?
The nature of a coauthorship is that it is bidirectional. So, when you insert the tuple (id_left, id_right) into the coauthorship table through a coauthorship object, is there a way to also easily insert the reverse relation? I'm asking because I want to use association proxies.
If you'd like to literally have pairs of rows in association (that is, for every (id_left, id_right) that's inserted, you also insert (id_right, id_left)), you'd use an attribute event to listen for append events on either side and produce an append in the other direction.
If you just want to be able to navigate between Parent/Child in either direction, a single row of (id_left, id_right) is sufficient. The examples in the docs on this kind of mapping illustrate the whole thing.
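The attribute-event wiring for the pairs-of-rows variant takes some care, so as a simpler illustration of the same idea, this sketch inserts both directions with an explicit helper instead of events. The Author/Coauthorship names and the helper are assumptions for the example:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, relationship
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Author(Base):
    __tablename__ = 'author'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Coauthorship(Base):
    __tablename__ = 'coauthorship'
    left_id = Column(Integer, ForeignKey('author.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('author.id'), primary_key=True)
    extra_data = Column(String(80))
    # two foreign keys to the same table, so each relationship
    # must name the column it joins on
    left = relationship('Author', foreign_keys=[left_id],
                        backref='coauthorships')
    right = relationship('Author', foreign_keys=[right_id])

def add_coauthorship(session, a, b, extra=None):
    """Insert the (a, b) row together with its (b, a) mirror."""
    session.add_all([
        Coauthorship(left=a, right=b, extra_data=extra),
        Coauthorship(left=b, right=a, extra_data=extra),
    ])

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
alice, bob = Author(id=1, name='alice'), Author(id=2, name='bob')
session.add_all([alice, bob])
add_coauthorship(session, alice, bob, extra='paper #1')
session.commit()

print([c.right.name for c in alice.coauthorships])
print([c.right.name for c in bob.coauthorships])
```

With the mirrored rows in place, each author reaches all coauthors through the one collection, which is what an association proxy over coauthorships would expose.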
Basically, I have this model, where I mapped a "BaseNode" class and two subclasses in a single table. The point is that I need one of the subclasses to have a one-to-many relationship with the other subclass.
So, in short, it is a relationship with another row of a different class (subclass), but in the same table.
How do you think I could write it using declarative syntax?
Note: due to other relationships in my model, if at all possible, I really need to stick with single table inheritance.
class BaseNode(DBBase):
    __tablename__ = 'base_node'
    id = Column(Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}


class NodeTypeA(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeA'}
    typeB_children = relationship('NodeTypeB', backref='parent_node')


class NodeTypeB(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeB'}
    parent_id = Column(Integer, ForeignKey('base_node.id'))
Using this code will throw:
sqlalchemy.exc.ArgumentError: NodeTypeA.typeB_children and back-reference
NodeTypeB.parent_node are both of the same direction. Did you mean to set
remote_side on the many-to-one side?
Any ideas or suggestions?
I was struggling through this myself earlier. I was able to get this self-referential relationship working:
class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), nullable=False)

Employee.manager_id = Column(Integer, ForeignKey(Employee.id))
Employee.manager = relationship(Employee, backref='subordinates',
                                remote_side=Employee.id)
Note that the manager and manager_id are "monkey-patched" because you cannot make self-references within a class definition.
So in your example, I would guess this:
class NodeTypeA(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeA'}
    typeB_children = relationship('NodeTypeB', backref='parent_node',
                                  remote_side='NodeTypeB.parent_id')
EDIT: Basically, what your error is telling you is that the relationship and its backref are identical. So whatever rules SA applies to figure out the table-level relationships, they don't jive with the information you are providing.
I learned that simply saying mycolumn = relationship(OtherTable) in your declarative class will result in mycolumn being a list, assuming SA can detect an unambiguous relationship. So if you really want an object to have a link to its parent, rather than to its children, you can define parent = relationship(OtherTable, backref='children', remote_side=OtherTable.id) in the child class. That defines both directions of the parent-child relationship.
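As a footnote to the monkey-patching above: under declarative, string arguments to relationship() are evaluated at mapper-initialization time, so the same self-referential mapping can also live inside the class body. A minimal sketch (names assumed for the example):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, relationship
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String(64), nullable=False)
    manager_id = Column(Integer, ForeignKey('employee.id'))
    # remote_side as a string is resolved late, so the
    # self-reference works inside the class definition
    manager = relationship('Employee', remote_side='Employee.id',
                           backref='subordinates')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
boss = Employee(name='boss')
worker = Employee(name='worker', manager=boss)
session.add_all([boss, worker])
session.commit()

print(worker.manager.name, [e.name for e in boss.subordinates])
```

remote_side=[id] with the local column object works equally well here; the string form just mirrors the deferred-evaluation idiom used elsewhere in these answers.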
I'm using SQLAlchemy, and many classes in my object model have the same two attributes: id (an integer primary key) and name (a string). I'm trying to avoid declaring them in every class like so:
class C1(declarative_base()):
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # ...


class C2(declarative_base()):
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # ...
What's a good way to do that? I tried using metaclasses, but haven't gotten it to work yet.
You could factor out your common attributes into a mixin class, and multiply inherit it alongside declarative_base():
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

class IdNameMixin(object):
    id = Column(Integer, primary_key=True)
    name = Column(String)

class C1(declarative_base(), IdNameMixin):
    __tablename__ = 'C1'

class C2(declarative_base(), IdNameMixin):
    __tablename__ = 'C2'

print(C1.__dict__['id'] is C2.__dict__['id'])
print(C1.__dict__['name'] is C2.__dict__['name'])
EDIT: You might think this would result in C1 and C2 sharing the same Column objects, but as noted in the SQLAlchemy docs, Column objects are copied when originating from a mixin class. I've updated the code sample to demonstrate this behavior.
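The copying behavior is easy to check directly. This variant uses a single shared Base, the more common setup than one declarative_base() call per class, and compares the mapped Column objects:

```python
from sqlalchemy import Column, Integer, String
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class IdNameMixin(object):
    id = Column(Integer, primary_key=True)
    name = Column(String)

class C1(Base, IdNameMixin):
    __tablename__ = 'c1'

class C2(Base, IdNameMixin):
    __tablename__ = 'c2'

# Declarative copies mixin columns per subclass, so each table
# owns distinct Column objects rather than sharing one.
print(C1.__table__.c.id is C2.__table__.c.id)
print(C1.__table__.c.name is C2.__table__.c.name)
```

Both comparisons are False: the mixin acts as a template, not as shared state between the mapped tables.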
Could you also use the Column's copy method? This way, fields can be defined independently of tables, and reused fields are just copied with field.copy():
id = Column(Integer, primary_key=True)
name = Column(String)

class C1(declarative_base()):
    id = id.copy()
    name = name.copy()
    # ...


class C2(declarative_base()):
    id = id.copy()
    name = name.copy()
    # ...
I think I got it to work.
I created a metaclass deriving from DeclarativeMeta and made it the metaclass of C1 and C2. In that new metaclass, I simply said:
def __new__(mcs, name, base, attr):
    attr['__tablename__'] = name.lower()
    attr['id'] = Column(Integer, primary_key=True)
    attr['name'] = Column(String)
    return super().__new__(mcs, name, base, attr)
And it seems to work fine.