SQLAlchemy One-to-Many relationship on single table inheritance - declarative - python

Basically, I have this model, where I mapped a "BaseNode" class and two subclasses onto a single table. The point is that I need one of the subclasses to have a one-to-many relationship with the other subclass.
So in short, it is a relationship with another row of a different class (subclass), but in the same table.
How do you think I could write it using declarative syntax?
Note: due to other relationships in my model, I really need to stick with single table inheritance if at all possible.
class BaseNode(DBBase):
    __tablename__ = 'base_node'

    id = Column(Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}

class NodeTypeA(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeA'}
    typeB_children = relationship('NodeTypeB', backref='parent_node')

class NodeTypeB(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeB'}
    parent_id = Column(Integer, ForeignKey('base_node.id'))
Using this code will throw:

sqlalchemy.exc.ArgumentError: NodeTypeA.typeB_children and back-reference NodeTypeB.parent_node are both of the same direction. Did you mean to set remote_side on the many-to-one side?
Any ideas or suggestions?

I was struggling through this myself earlier. I was able to get this self-referential relationship working:
class Employee(Base):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String(64), nullable=False)

Employee.manager_id = Column(Integer, ForeignKey(Employee.id))
Employee.manager = relationship(Employee, backref='subordinates',
                                remote_side=Employee.id)
Note that manager and manager_id are "monkey-patched" onto the class because you cannot reference Employee itself from within its own class definition.
So in your example, I would guess this:
class NodeTypeA(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeA'}
    typeB_children = relationship('NodeTypeB', backref='parent_node',
                                  remote_side='NodeTypeB.parent_id')
EDIT: Basically, what your error is telling you is that the relationship and its backref resolve to the same direction. So whatever rules SA applies to figure out the table-level relationships don't jibe with the information you are providing.
I learned that simply saying mycolumn=relationship(OtherTable) in your declarative class will result in mycolumn being a list, assuming that SA can detect an unambiguous relationship. So if you really want an object to have a link to its parent, rather than its children, you can define parent=relationship(OtherTable, backref='children', remote_side=OtherTable.id) in the child table. That defines both directions of the parent-child relationship.
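Applied to the model in the question, that parent-side approach might look like the following sketch (hedged: untested against the original schema, with remote_side on the many-to-one side as the error message suggests):

class NodeTypeA(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeA'}

class NodeTypeB(BaseNode):
    __mapper_args__ = {'polymorphic_identity': 'NodeTypeB'}
    parent_id = Column(Integer, ForeignKey('base_node.id'))
    # The many-to-one side carries remote_side (the adjacency-list pattern);
    # the backref gives NodeTypeA its typeB_children collection.
    parent_node = relationship('NodeTypeA',
                               remote_side='BaseNode.id',
                               backref='typeB_children')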

Related

Combining the Results of SQLAlchemy JOINs

When you do something like,
session.query(BaseModel, JoinModel).join(JoinModel, BaseModel.id == JoinModel.id, isouter=True)
The result is similar to the following:
(<__main__.BaseModel object at 0x000001E32BC81220>, <__main__.JoinModel object at 0x000001E32A15BE50>)
(<__main__.BaseModel object at 0x000001E32BC81220>, <__main__.JoinModel object at 0x000001E32A15BC70>)
(<__main__.BaseModel object at 0x000001E32BC81220>, <__main__.JoinModel object at 0x000001E32A317670>)
When you do .__dict__ on one of these objects (either BaseModel or JoinModel) you can get the values of the attributes.
Is there an efficient way to combine the attributes of both these objects for one result? I mean in the end, ideally it should be just one set of values, right?
You can define relationships and backrefs in your model. This will let you access the attributes without writing the join yourself. Here is sample code that might help you:
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import backref, relationship

class BaseModel(Base):
    __tablename__ = 'base'

    id = Column(Integer, primary_key=True, autoincrement=True)
    # One-to-many collection; the delete-orphan cascade belongs on this "one" side.
    join_model = relationship(
        "JoinModel",
        cascade="all, delete-orphan",
        backref=backref("base_model", lazy=True))

class JoinModel(Base):
    __tablename__ = 'join'  # table name assumed

    id = Column(Integer, primary_key=True, autoincrement=True)
    base_id = Column(Integer, ForeignKey("base.id", ondelete="CASCADE"), nullable=True)
    name = Column(String)
Then you can do something like:
base_model = BaseModel.query.first()   # or session.query(BaseModel).first()
print(base_model.join_model[0].name)   # join_model is a collection
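If you do want to flatten the (BaseModel, JoinModel) tuples from the original join query instead, a minimal sketch (assuming the session and models from the question) could merge each pair's column values into one dict per row:

rows = (
    session.query(BaseModel, JoinModel)
    .join(JoinModel, BaseModel.id == JoinModel.id, isouter=True)
    .all()
)

combined = []
for base, joined in rows:
    # vars() exposes the instance __dict__; drop SQLAlchemy's internal state key.
    merged = {k: v for k, v in vars(base).items() if not k.startswith('_sa_')}
    if joined is not None:  # the outer join can yield None on the right side
        merged.update(
            {k: v for k, v in vars(joined).items() if not k.startswith('_sa_')})
    combined.append(merged)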

Inherit From SQLAlchemy Joined Table Inheritance Class

Python 3.6 and SQLAlchemy 1.2.
I have a package called events which defines Match as a type of Event and uses joined table inheritance to distinguish it from other types of Event. The other type of event is Competition; all events are one or the other:
class Event(Base):
    __tablename__ = 'tbl_events'

    event_id = Column(Integer, primary_key=True)
    event_type_id = Column(String(50))  # 0 is Match, 1 is Competition

    __mapper_args__ = {'polymorphic_on': event_type_id}

class Match(Event):
    __tablename__ = 'tbl_match_details'

    event_id = Column(Integer,
                      ForeignKey('tbl_events.event_id'),
                      primary_key=True)
    team_1 = Column(String(50))
    team_2 = Column(String(50))

    __mapper_args__ = {'polymorphic_identity': 0}
I'm using Match in another package which distinguishes multiple types of Match and relies on the Match object's attributes and methods to pull event info from the database, but otherwise operates away from the database:
from events import Match

class BaseMatch(Match):
    # define common methods and attrs
    pass

class TennisMatch(BaseMatch):
    # do Tennis based stuff
    pass

class FootballMatch(BaseMatch):
    # do football based things
    pass
Any differences between events.Match and the classes that inherit from it only matter in this package, and this package doesn't otherwise insert into or update the database; it only reads from it.
The issue I'm having is that attempting to instantiate an instance of any of the classes that inherits from Match results in a NULL value being passed into the query for the event_type_id field. This is the WHERE part of the query:
WHERE tbl_match_details.event_id = %s AND tbl_events.match_comp_id IN (NULL)
I can't simply give each class their own polymorphic identifier as those identifiers won't exist in the database.
I tried this:
class BaseMatch(Match):
    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_identity': 0}

class TennisMatch(BaseMatch):
    # do tennis stuff
    pass

class FootballMatch(BaseMatch):
    # do footy stuff
    pass
but importing the module, I get warnings like:
SAWarning: Reassigning polymorphic association for identity 0 from <Mapper at 0x7f80197f0550; Match> to <Mapper at 0x7f80197a9fd0; BaseModel>: Check for duplicate use of 0 as value for polymorphic_identity.
SAWarning: Reassigning polymorphic association for identity 0 from <Mapper at 0x7f80197a9fd0; BaseModel> to <Mapper at 0x7f800dfdf940; TennisMatch>: Check for duplicate use of 0 as value for polymorphic_identity.
I get one of those for each class that inherits from Match and when I attempt to instantiate any of the match types, I get an instance of the type last to have been associated with that polymorphic id.
I'd really appreciate a nudge in the right direction!
Thanks.
Here's what I've done to work around this - I'm not sure if it is 'right' but it has allowed me to move forward with what I'm doing and helped me to understand a bit more of the goings on under the hood.
I've created factory methods on my Event, Competition and Match classes and a class attribute on Competition and Match that gives me access to each event type's event_type_id value:
from sqlalchemy import inspect

class Event(Base):
    __tablename__ = 'tbl_events'

    event_id = Column(Integer, primary_key=True)
    event_type_id = Column(String(50))  # 0 is Match, 1 is Competition

    __mapper_args__ = {'polymorphic_on': event_type_id}

    @classmethod
    def from_id(cls, id, session):
        mapper = inspect(cls).mapper
        mapper.polymorphic_map[cls.EVENT_TYPE_ID] = mapper
        mapper.polymorphic_identity = cls.EVENT_TYPE_ID
        return session.query(cls).filter_by(event_id=id).one()

class Match(Event):
    EVENT_TYPE_ID = 0

    __tablename__ = 'tbl_match_details'

    event_id = Column(Integer,
                      ForeignKey('tbl_events.event_id'),
                      primary_key=True)
    team_1 = Column(String(50))
    team_2 = Column(String(50))

    __mapper_args__ = {'polymorphic_identity': EVENT_TYPE_ID}
This way, whenever classes that inherit from Match or Competition are instantiated using the factory methods, the polymorphic identity is forced to the identity defined on the parent class and the polymorphic map points that identity to the class that the factory is being called upon.
A disadvantage obvious to me is that this will only work when objects are instantiated through the factory methods. Fine in this case but maybe not for all.
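For illustration, instantiation through the factory might look like this (hypothetical values; 42 is a made-up event_id):

# The factory repoints the polymorphic map at TennisMatch before loading,
# so the returned instance is a TennisMatch rather than a plain Match.
match = TennisMatch.from_id(42, session)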
Would appreciate any feedback on how I've gone about this and any pointers toward a cleaner solution.
Thanks

Sqlalchemy: Querying on m2m relations for polymorphic classes

I've got two classes that are connected through a many-to-many relationship: Parent and Tag.
Base = declarative_base()

association_table = Table('associations', Base.metadata,
    Column('parent_id', Integer, ForeignKey('parents.id')),
    Column('tag_id', Integer, ForeignKey('tags.id')),
)
class Tag(Base):
    __tablename__ = 'tags'

    id = Column(Integer, Sequence('tag_id_seq'), primary_key=True)
    name = Column(String)

class Parent(Base):
    __tablename__ = 'parents'

    id = Column(Integer, Sequence('parent_id_seq'), primary_key=True)
    tags = relationship('Tag', secondary=association_table, backref='parents')
If I want to query for all the Tag objects that have one or more relationships to a Parent, I'd do:
session.query(Tag).filter(Tag.parents.any()).all()
However, this Parent class is parent to child classes, Alice and Bob:
class Alice(Parent):
    __tablename__ = 'alices'
    __mapper_args__ = {'polymorphic_identity': 'alice'}

    alice_id = Column(Integer, ForeignKey('parents.id'), primary_key=True)

class Bob(Parent):
    __tablename__ = 'bobs'
    __mapper_args__ = {'polymorphic_identity': 'bob'}

    bob_id = Column(Integer, ForeignKey('parents.id'), primary_key=True)
Now I'd like to be able to retrieve all the Tag objects that have one or more relations to an Alice object. The previous query, session.query(Tag).filter(Tag.parents.any()).all(), would not do, as it doesn't discriminate between Alice and Bob objects; it doesn't even know about their existence.
I've messed around with queries for a while with no success, so I assume that if it can be done, it must involve some extra lines of code in the mapped classes like those shown above. While the documentation holds info about polymorphic classes and many-to-many relations, and Mike Bayer himself offered someone an answer to a seemingly related question which looks interesting but which I'm far from understanding, I'm kind of stuck.
The code samples may disgust the Python interpreter, but hopefully they get my point across. Candy for those who can help me on my way!
While writing a little MWE I actually found a solution, which is almost the same as what I had tried already. budder gave me new hope for the approach though, thanks :)
from sqlalchemy import create_engine, ForeignKey, Column, String, Integer, Sequence, Table
from sqlalchemy.orm import sessionmaker, relationship, backref
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

association_table = Table('associations', Base.metadata,
    Column('parent_id', Integer, ForeignKey('parents.id')),
    Column('tag_id', Integer, ForeignKey('tags.id')),
)

class Tag(Base):
    __tablename__ = 'tags'

    id = Column(Integer, Sequence('tag_id_seq'), primary_key=True)
    name = Column(String)

class Parent(Base):
    __tablename__ = 'parents'

    id = Column(Integer, Sequence('parent_id_seq'), primary_key=True)
    tags = relationship('Tag', secondary=association_table, backref='parents')

class Alice(Parent):
    __tablename__ = 'alices'
    __mapper_args__ = {'polymorphic_identity': 'alice'}

    alice_id = Column(Integer, ForeignKey('parents.id'), primary_key=True)

class Bob(Parent):
    __tablename__ = 'bobs'
    __mapper_args__ = {'polymorphic_identity': 'bob'}

    bob_id = Column(Integer, ForeignKey('parents.id'), primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

tag_a = Tag(name='a')
tag_b = Tag(name='b')
tag_c = Tag(name='c')
session.add(tag_a)
session.add(tag_b)
session.add(tag_c)
session.commit()

session.add(Alice(tags=[tag_a]))
session.add(Bob(tags=[tag_b]))
session.commit()

for tag in session.query(Tag).\
        filter(Tag.parents.any(Parent.id == Alice.alice_id)).\
        all():
    print(tag.name)
If there's a good alternative approach, I'd still be interested. I can imagine sqlalchemy offering a more direct and elegant approach so that one could do, for example:
for tag in session.query(Tag).\
        filter(Tag.alices.any()).\
        all():
    print(tag.name)
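One alternative sketch (not from the original answers, so treat it as an untested suggestion) is to join from Tag through the association table to Alice explicitly; because Alice requires a row in alices, only tags attached to an Alice survive the join:

alice_tags = (
    session.query(Tag)
    .join(association_table, Tag.id == association_table.c.tag_id)
    .join(Alice, Alice.alice_id == association_table.c.parent_id)
    .distinct()  # a tag attached to several Alices should appear once
    .all()
)
for tag in alice_tags:
    print(tag.name)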
If you write your object classes a certain way, you can use the blunt force method and just search for them, kind of like a crude query:
all_tag_objects = session.query(Tag).all()  # all Tag objects in your database

flagged_tags = []
for tag in all_tag_objects:
    for parent in tag.parents:
        if parent.alices != []:  # if the parent's "alices" relationship is non-empty, the tag meets our criteria
            flagged_tags.append(tag)
Sounds like you found a better way though, but I guess the ultimate test would be to actually do a speed test.

How to define (or simulate) a collection containing different types in SQLAlchemy?

Working on a project with SQLAlchemy, I was attempting to employ what I believe is a composition pattern. I have a class of "owner" objects; I encapsulate some functionality in component classes and give the owners different capabilities by assigning components to them. The owners and the components all have state that needs to be serialized, so they're all SQLAlchemy objects. Here's a simple example:
class Employee(DeclarativeBase):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, name):
        self.name = name

class SalesRole(DeclarativeBase):
    __tablename__ = 'sales_role'

    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship(
        'Employee',
        backref=backref('sales_role', uselist=False)
    )

    def __init__(self, employee):
        self.employee = employee
        self.total_sales = 0

    def __repr__(self):
        return "<SalesRole(employee='%s')>" % self.employee.name

    # Sales-specific data and behavior
    total_sales = Column(Float)

class CustomerSupportRole(DeclarativeBase):
    __tablename__ = 'support_role'

    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship(
        'Employee',
        backref=backref('support_role', uselist=False)
    )

    def __init__(self, employee):
        self.employee = employee
        self.tickets_resolved = 0

    def __repr__(self):
        return "<CustomerSupportRole(employee='%s')>" % self.employee.name

    # Support-specific data and behavior
    tickets_resolved = Column(Integer)
What I would like to be able to do is to define a property on the owner class that returns a collection (of whatever kind) containing all such components that have been assigned to the owner, i.e.,
# Define an Employee object bob and assign it some roles
>>> bob.roles
[<SalesRole(employee='Bob')>, <CustomerSupportRole(employee='Bob')>]
I want to accomplish this without hardcoding any references to the types of components that can exist on the owner class -- I want to define new components without changing code anywhere else.
I can more or less accomplish this with an intermediary table mapping owner instances to their components using generic_relationship from sqlalchemy-utils. Unfortunately, generic_relationship severs SQLAlchemy's automatic cascading of child objects.
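For context, that intermediary-table approach with sqlalchemy-utils might be sketched roughly like this (hedged: the ComponentAssignment name and columns are illustrative, and as noted this route loses SQLAlchemy's cascading):

from sqlalchemy_utils import generic_relationship

class ComponentAssignment(DeclarativeBase):
    __tablename__ = 'component_assignment'

    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship('Employee', backref='component_assignments')

    # Discriminator/id pair that generic_relationship resolves to any mapped class.
    component_type = Column(String)
    component_id = Column(Integer)
    component = generic_relationship(component_type, component_id)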
Another approach I've tried with help elsewhere was to use SQLAlchemy's event framework to listen for mappings of relationships targeting the owner class ('mapper_configured' events). The components would define a backref from themselves to the owner class, and use the info parameter to set an arbitrary flag marking the relationship as referring to one of the components we want made available via this collection. The function registered to catch mapping events would test for this flag and hypothetically build the collection containing those relationships, but we could never figure out how to make that work.
It's not important to me that this collection be a SQLAlchemy object via which I can actually write to the database (e.g. bob.roles.append(SalesRole())). That would be very cool, but just a property serving as a read-only iterable view would suffice.
It's not important whether the named attribute backrefs exist on the owner class (e.g., bob.sales_role). It's fine if they do, but I think the collection is actually more important to me.
Like I mentioned earlier, cascading is important (unless you want to convince me it's not!).
Again, it is important that I don't have to hardcode a list of component types anywhere. Whatever magic distinguishes the classes I want to show up in this collection of components from everything else should live in the definition of the components themselves. I want this to be readily extensible as I define new components.
Is there a way to make this work? Or should I be taking a different approach in general -- feel free to tell me this sounds like an XY problem.
And the answer is something I suspected but didn't look for hard enough: class inheritance. Simple, elegant, and accomplishes everything I want.
class Role(DeclarativeBase):
    __tablename__ = 'role'

    id = Column(Integer, primary_key=True)
    role_type = Column(String)
    employee_id = Column(Integer, ForeignKey('employee.id'))
    employee = relationship('Employee', backref='roles')

    __mapper_args__ = {
        'polymorphic_identity': 'employee',
        'polymorphic_on': role_type
    }

class SalesRole(Role):
    __tablename__ = 'sales_role'

    id = Column(Integer, ForeignKey('role.id'), primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'sales_role'
    }

    # Sales-specific attributes, etc.

class CustomerSupportRole(Role):
    __tablename__ = 'support_role'

    id = Column(Integer, ForeignKey('role.id'), primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'support_role'
    }

    # Support-specific attributes, etc.
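A minimal usage sketch (assuming a configured session alongside the classes above); every role type lands in the single roles collection via the backref:

bob = Employee(name='Bob')
bob.roles.append(SalesRole())             # persisted via the save-update cascade on 'roles'
bob.roles.append(CustomerSupportRole())
session.add(bob)
session.commit()

print(bob.roles)   # mixed list of SalesRole and CustomerSupportRole instances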

Many-to-many relations from table to itself in SQLAlchemy

I'm using SQLAlchemy to represent a relationship between authors. I'd like to have authors related to other authors (coauthorship), with extra data in the relation, such that with an author a I can find their coauthors.
Between two different classes, this is done like so:
class Association(Base):
    __tablename__ = 'association'

    left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
    extra_data = Column(String(80))
    child = relationship('Child', backref='parent_assocs')

class Parent(Base):
    __tablename__ = 'left'

    id = Column(Integer, primary_key=True)
    children = relationship('Association', backref='parent')

class Child(Base):
    __tablename__ = 'right'

    id = Column(Integer, primary_key=True)
but how would I do this in my case?
The nature of a coauthorship is that it is bidirectional. So, when you insert the tuple (id_left, id_right) into the coauthorship table through a coauthorship object, is there a way to also insert the reverse relation easily? I'm asking because I want to use association proxies.
If you'd like to literally have pairs of rows in the association table, that is, for every (id_left, id_right) that's inserted you also insert (id_right, id_left), you'd use an attribute event to listen for append events on either side and produce an append in the other direction.
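A sketch of that attribute-event idea, adapting the association pattern above to a single author table (hedged: the Author/Coauthorship names and the _mirrored guard are illustrative, not from the original answer):

from sqlalchemy import Column, ForeignKey, Integer, String, event
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Coauthorship(Base):
    __tablename__ = 'coauthorship'

    left_id = Column(Integer, ForeignKey('author.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('author.id'), primary_key=True)
    extra_data = Column(String(80))
    coauthor = relationship('Author', foreign_keys=[right_id])

class Author(Base):
    __tablename__ = 'author'

    id = Column(Integer, primary_key=True)
    name = Column(String(80))
    coauthorships = relationship('Coauthorship',
                                 foreign_keys='Coauthorship.left_id',
                                 backref='author')

@event.listens_for(Author.coauthorships, 'append')
def mirror_coauthorship(author, assoc, initiator):
    # When (a -> b) is appended, also create the reverse row (b -> a).
    if getattr(assoc, '_mirrored', False):
        return  # this append is itself a mirror; stop the recursion here
    reverse = Coauthorship(coauthor=author, extra_data=assoc.extra_data)
    reverse._mirrored = True
    assoc._mirrored = True
    # assoc.coauthor must already be set at append time, e.g.
    # a.coauthorships.append(Coauthorship(coauthor=b, extra_data='paper X'))
    if assoc.coauthor is not None:
        assoc.coauthor.coauthorships.append(reverse)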
If you just want to be able to navigate between Parent/Child in either direction, just a single row of id_left, id_right is sufficient. The examples in the docs regarding this kind of mapping illustrate the whole thing.
