Cascade delete from self-referential many-to-many relationship - python

I have a SQLite database with the following tables:
fits_table = Table("fits", saveddata_meta,
    Column("ID", Integer, primary_key = True),
    Column("ownerID", ForeignKey("users.ID"), nullable = True, index = True),
    Column("shipID", Integer, nullable = False, index = True),
    Column("name", String, nullable = False),
    Column("timestamp", Integer, nullable = False),
    Column("characterID", ForeignKey("characters.ID"), nullable = True),
    Column("damagePatternID", ForeignKey("damagePatterns.ID"), nullable = True),
    Column("booster", Boolean, nullable = False, index = True, default = 0))

projectedFits_table = Table("projectedFits", saveddata_meta,
    Column("sourceID", ForeignKey("fits.ID"), primary_key = True),
    Column("victimID", ForeignKey("fits.ID"), primary_key = True),
    Column("amount", Integer))

mapper(Fit, fits_table,
    properties = {
        "_Fit__projectedFits" : relation(Fit,
            primaryjoin = projectedFits_table.c.victimID == fits_table.c.ID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList)
    })
It's basically a relationship table that links a Fit to another Fit.
I've been trying to figure out the proper way to cascade a delete, but I cannot get it to work. I would like it so that when a Fit is deleted, any rows in the relationship table where that fit's ID appears in either the source or the victim column are deleted as well.
EDIT: I forgot to add what cascade flags I tried.
cascade='all, delete, delete-orphan', single_parent=True, - did not work. In fact, manually deleting the relationship row also deleted the parent (whatever matched the sourceID)
cascade='delete', single_parent=True, - did not have the issue the above setting had, but still did not delete the relationship record when the Fit was deleted
cascade='all, delete', single_parent=True, - same as above
EDIT 2:
I kept fiddling with it, and without adding a cascade attribute, it kinda works. Let me explain:
If I have Fit B linked to Fit A (A is the parent in this case) and then delete Fit B, it does not delete the relationship. However, if I delete Fit A, it does delete the relationship.
I am assuming that I have just been thinking about this completely wrong. When I have a fit instance, it gathers the relationships it has and deletes them when that fit is deleted. However, when I delete Fit B, it technically doesn't have any fits that are linked to it as children. So it never deletes them.
I guess a workaround would be to assign the fits that B is a child of to a dummy attribute so that those relationships get deleted as well. Or do some sort of post-processing in the middle layer of the application. Will post back with results, though I still welcome any thoughts. =)

I figured it out. As stated in an edit to the OP, I simply had to create a new relation with the criteria swapped. This way both relationships are loaded and deleted when the fit is deleted:
mapper(Fit, fits_table,
    properties = {
        "_Fit__projectedFits" : relation(Fit,
            primaryjoin = projectedFits_table.c.victimID == fits_table.c.ID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList),
        "_Fit__projectedOnto" : relation(Fit,
            primaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.victimID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList)
    })
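For completeness, another way to handle this (not what the answer above uses, just a sketch) is to push the cleanup down to the database by declaring ON DELETE CASCADE on the association table's foreign keys, so deleting a fit row removes the matching projectedFits rows itself. With SQLite this only takes effect if foreign key enforcement is switched on per connection:

projectedFits_table = Table("projectedFits", saveddata_meta,
    Column("sourceID", ForeignKey("fits.ID", ondelete="CASCADE"), primary_key = True),
    Column("victimID", ForeignKey("fits.ID", ondelete="CASCADE"), primary_key = True),
    Column("amount", Integer))

# SQLite ignores foreign keys unless enabled on each connection:
# connection.execute("PRAGMA foreign_keys=ON")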

Related

SQLAlchemy relationship fields name constructor

I'm using SQLAlchemy 1.4 to build my database models (PostgreSQL).
I've established relationships between my models, which I follow using the different SQLAlchemy capabilities. When doing so, the fields of the related models get aliases that don't work for me.
Here's an example of one of my models:
from datetime import datetime

from sqlalchemy import Column, DateTime, ForeignKey, Integer, func
from sqlalchemy.orm import relationship

class Process(declarative_model()):
    """Process database table class.

    Process model. It contains all the information about one process
    iteration. This is the process of capturing an image with all the
    provided cameras, preprocessing the images and making a prediction for
    them, as well as computing the results.
    """
    id: int = Column(Integer, primary_key=True, index=True, autoincrement=True)
    """Model primary key."""
    petition_id: int = Column(Integer, ForeignKey("petition.id", ondelete="CASCADE"))
    """Foreign key to the related petition."""
    petition: "Petition" = relationship("Petition", backref="processes", lazy="joined")
    """Related petition object."""
    camera_id: int = Column(Integer, ForeignKey("camera.id", ondelete="CASCADE"))
    """Foreign key to the related camera."""
    camera: "Camera" = relationship("Camera", backref="processes", lazy="joined")
    """Related camera object."""
    n: int = Column(Integer, comment="Iteration number for the given petition.")
    """Iteration number for the given petition."""
    image: "Image" = relationship(
        "Image", back_populates="process", uselist=False, lazy="joined"
    )
    """Related image object."""
    datetime_init: datetime = Column(DateTime(timezone=True), server_default=func.now())
    """Datetime when the process started."""
    datetime_end: datetime = Column(DateTime(timezone=True), nullable=True)
    """Datetime when the process finished if so."""
The model works perfectly and joins the data by default as expected, so far so good.
My problem comes when I make a query and I extract the results through query.all() or through pd.read_sql(query.statement, db).
Reading the documentation, I expected aliases for my fields like "{table_name}.{field}", but instead I'm getting something like "{field}_{counter}". Here's an example of a query.statement for my model:
SELECT process.id, process.petition_id, process.camera_id, process.n, process.datetime_init, process.datetime_end, asset_quality_1.id AS id_2, asset_quality_1.code AS code_1, asset_quality_1.name AS name_1, asset_quality_1.active AS active_1, asset_quality_1.stock_quality_id, pit_door_1.id AS id_3, pit_door_1.code AS code_2, petition_1.id AS id_4, petition_1.user_id, petition_1.user_code, petition_1.load_code, petition_1.provider_code, petition_1.origin_code, petition_1.asset_quality_initial_id, petition_1.pit_door_id, petition_1.datetime_init AS datetime_init_1, petition_1.datetime_end AS datetime_end_1, mask_1.id AS id_5, mask_1.camera_id AS camera_id_1, mask_1.prefix_path, mask_1.position, mask_1.format, camera_1.id AS id_6, camera_1.code AS code_3, camera_1.pit_door_id AS pit_door_id_1, camera_1.position AS position_1, image_1.id AS id_7, image_1.prefix_path AS prefix_path_1, image_1.format AS format_1, image_1.process_id
FROM process LEFT OUTER JOIN petition AS petition_1 ON petition_1.id = process.petition_id LEFT OUTER JOIN asset_quality AS asset_quality_1 ON asset_quality_1.id = petition_1.asset_quality_initial_id LEFT OUTER JOIN stock_quality AS stock_quality_1 ON stock_quality_1.id = asset_quality_1.stock_quality_id LEFT OUTER JOIN pit_door AS pit_door_1 ON pit_door_1.id = petition_1.pit_door_id LEFT OUTER JOIN camera AS camera_1 ON camera_1.id = process.camera_id LEFT OUTER JOIN mask AS mask_1 ON camera_1.id = mask_1.camera_id LEFT OUTER JOIN image AS image_1 ON process.id = image_1.process_id
Does anybody know how I can change this behavior and make it alias the fields like "{table_name}_{field}"?
SQLAlchemy uses label styles to configure how columns are labelled in SQL statements. The default in 1.4.x is LABEL_STYLE_DISAMBIGUATE_ONLY, which will add a "counter" for columns with the same name in a query. LABEL_STYLE_TABLENAME_PLUS_COL is closer to what you want.
Default:
q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_DISAMBIGUATE_ONLY)
print(q)
gives
SELECT t1.id, t1.child_id, t2.id AS id_1
FROM t1 JOIN t2 ON t2.id = t1.child_id
whereas
q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
print(q)
generates
SELECT t1.id AS t1_id, t1.child_id AS t1_child_id, t2.id AS t2_id
FROM t1 JOIN t2 ON t2.id = t1.child_id
If you want to enforce a style for all ORM queries, you could subclass Session:
class MySession(orm.Session):
_label_style = LABEL_STYLE_TABLENAME_PLUS_COL
and use this class for your sessions, or pass it to a sessionmaker, if you use one:
Session = orm.sessionmaker(engine, class_=MySession)
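Putting that together with the pandas usage from the question, a minimal sketch (db and the Process model are the ones from the question; in 1.4 the label-style constants can be imported from the top-level sqlalchemy package):

from sqlalchemy import LABEL_STYLE_TABLENAME_PLUS_COL
import pandas as pd

query = db.query(Process).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
# columns now come back as process_id, process_petition_id, petition_id, ...
df = pd.read_sql(query.statement, db)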
Alternatively, if you only need custom names for a few fields, you can apply labels to the columns in the query itself rather than changing the label style. Note that Column() and relationship() do not accept a label argument; labels belong on the column expressions you select. For example:
query = db.query(
    Process.id.label("process_id"),
    Process.petition_id.label("process_petition_id"),
    Petition.id.label("petition_id"),
).join(Process.petition)
With this, the selected fields come out as process_id, process_petition_id and petition_id.

SqlAlchemy Classes Declaration of two dependent classes

I have a problem in the file where I declare all of my class mappers.
class Application(AbstractId):
    .........
    key_event_id = ORM.column_property(
        SA.select([ApplicationEvent.id],
                  correlate = True,
                  from_obj = [Application.__table__.join(ApplicationEvent.__table__)]
        ).as_scalar().label("tag").where(ApplicationEvent.key_event == 1)
    )

    SA.select([ApplicationEvent]).filter(
        ApplicationEvent.key_event)

class ApplicationEvent(AbstractId):
    __tablename__ = 'applications_events'

    application_id = SA.Column(SA.Integer, SA.ForeignKey(Application.id), primary_key = True)
    application = ORM.relationship(Application, backref = 'events')

    event_id = SA.Column(SA.Integer, SA.ForeignKey(Event.id), primary_key = True)
    event = ORM.relationship(Event)
This won't work since ApplicationEvent is declared after Application, so it doesn't exist yet when the column_property is built. How can I make this work? I need key_event_id as a column of Application.
This won't work either:
@declarative.declared_attr
def key_event_id(cls):
    return ORM.column_property(
        SA.select(['ApplicationEvent.id'],
                  correlate = True,
                  from_obj = ['Application.__table__'.join('ApplicationEvent.__table__')]
        ).as_scalar().where('ApplicationEvent.key_event' == 1).label("key_event_id")
    )
You can simply pass the model name as a string to the relationship() call.
From the documentation of relationship()'s argument parameter:
argument – a mapped class, or actual Mapper instance, representing the target of the relationship. argument may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.
You can do
application = ORM.relationship("Application", backref = 'events')
and
event = ORM.relationship("Event" , order_by="Event.id")
You can also write it with strings throughout; note that the string given to ForeignKey must be the table name and column rather than the class (assuming here that the tables are named applications and events):
application_id = SA.Column(SA.Integer, SA.ForeignKey("applications.id"), primary_key = True)
application = ORM.relationship("Application", backref = 'events')
event_id = SA.Column(SA.Integer, SA.ForeignKey("events.id"), primary_key = True)
event = ORM.relationship("Event")
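Beyond the relationship() strings, one way to get key_event_id onto Application itself is to attach the column_property after both classes have been declared, when ApplicationEvent actually exists. This is only a sketch (not from the answers above): it assumes ApplicationEvent has the application_id and key_event columns used in the question, and relies on declarative allowing mapped attributes to be added to a class after its definition.

# placed after both Application and ApplicationEvent have been declared
Application.key_event_id = ORM.column_property(
    SA.select([ApplicationEvent.id])
    .where(ApplicationEvent.application_id == Application.id)
    .where(ApplicationEvent.key_event == 1)
    .correlate(Application.__table__)
    .as_scalar()
    .label("key_event_id")
)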

How to use make_transient() to duplicate an SQLAlchemy mapped object?

I know the question of how to duplicate or copy an SQLAlchemy mapped object has been asked many times. The answer always depends on the needs, or on how "duplicate" or "copy" is interpreted.
This is a specialized version of the question, because I got the tip to use make_transient() for that.
But I have some problems with it. I don't really know how to handle the primary key (PK) here. In my use cases the PK is always autogenerated by SQLAlchemy (or by the DB in the background). But this doesn't happen with a newly duplicated object.
The code below is a little bit pseudo-code.
import sqlalchemy as sa
import sqlalchemy.orm as sao
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.session import make_transient
from sqlalchemy.orm.util import has_identity

_Base = declarative_base()
_engine = sa.create_engine('postgres://...')
_session = sao.sessionmaker(bind=_engine)()

class MachineData(_Base):
    __tablename__ = 'Machine'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)

class TUnitData(_Base):
    __tablename__ = 'TUnit'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)
    _machine_fk = sa.Column('machine', sa.Integer, sa.ForeignKey('Machine.oid'))
    _machine = sao.relationship("MachineData")

    def __str__(self):
        return '{}.{}: oid={}(hasIdentity={}) machine={}(fk={})' \
            .format(type(self), id(self),
                    self._oid, has_identity(self),
                    self._machine, self._machine_fk)

if __name__ == '__main__':
    # any query resulting in one persistent object
    obj = GetOneMachineDataFromDatabase()

    # there is a valid 'oid', has_identity == True
    print(obj)

    # should i call expunge() first?
    # remove the association with any session
    # and remove its "identity key"
    make_transient(obj)

    # 'oid' is still there but has_identity == False
    print(obj)

    # THIS causes an error because the 'oid' still exists
    # and is not auto-generated anew (which is what I would
    # expect to happen)
    _session.add(obj)
    _session.commit()
After making an object instance transient, you have to remove its object id (the primary key). Without an object id, you can add it to the database again, and a new one will be generated for it.
if __name__ == '__main__':
    # the persistent object with an identity in the database
    obj = GetOneMachineDataFromDatabase()

    # make it transient
    make_transient(obj)

    # remove the identity / object id
    obj._oid = None

    # adding the object again generates a new identity / object id
    _session.add(obj)

    # this includes a flush() and creates a new primary key
    _session.commit()
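The same steps, wrapped up as a small reusable helper (just a sketch; _oid is the primary-key attribute from the question's models):

def duplicate_row(session, obj):
    # drop the object's session association and its identity key
    make_transient(obj)
    # clear the primary key so the database generates a fresh one on flush
    obj._oid = None
    # re-adding and committing inserts the data as a new row with a new oid
    session.add(obj)
    session.commit()
    return obj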

SQLAlchemy Error Appending to Relationship

I've been using SQLAlchemy 0.9.2 with Python Version 2.7.3 and have run into a bit of an odd problem that I can't quite seem to explain. Here is my relevant code:
Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    __table_args__ = (UniqueConstraint('first_name', 'last_name', name='_name_constraint'),)

    id = Column(Integer, primary_key=True)
    first_name = Column(String(32), nullable=False)
    last_name = Column(String(32), nullable=False)
    children = relationship('Child', cascade='all,delete', backref='parent')

    ## Constructors and other methods ##

class Child(Base):
    __tablename__ = 'children'

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    foo = Column(String(32), nullable=False)

    ## Constructors and other methods ##
So a pretty basic set of models. The problem I'm experiencing is that I want to add a child to a parent that is saved to the database. The kicker is that the child is currently related to a parent that is not in the database. Consider the following example:
database_engine = create_engine("mysql://user:password@localhost/db", echo=False)
session = scoped_session(sessionmaker(bind=database_engine, autoflush=True, autocommit=False))
p1 = Parent("Foo", "Bar") # Create a parent and append a child
c1 = Child("foo")
p1.children.append(c1)
session.add(p1)
session.commit() # This works without a problem
db_parent = session.query(Parent).first()
db_parent.children.append(Child("bar"))
session.commit() # This also works without a problem
p2 = Parent("Foo", "Bar")
c3 = Child("baz")
p2.children.append(c3)
db_parent = session.query(Parent).first()
db_parent.children.append(p2.children[0])
session.commit() # ERROR: This blows up
The error I'm receiving is that I'm violating an integrity constraint, namely '_name_constraint'. SQLAlchemy is telling me that it is trying to insert a Parent with the same information. My question is: why in the world is it trying to add a second parent?
These are the steps I've taken so far and don't have a good answer for:
I've inspected db_parent.children[2]. It points to the same memory address as p1 once I have appended it to the list.
I've inspected p2.children after the append. Oddly, p2 has no children once I have appended its child to db_parent. I think this has something to do with what is going on, I just don't understand why it's happening.
Any help would be much appreciated, as I simply don't understand what's going on here. If you need me to post more please let me know. Thanks in advance.
Okay, after some more digging I think I have found a solution to my problem. I don't yet have the answer as to why it's happening the way it is, but I may have a guess. The solution I discovered was to use session.expunge(p2) before session.commit().
I started exploring SQLAlchemy's internals, particularly the instance state. I found that once you add the child to the parent, the original parent's state becomes pending. Here is an example:
from sqlalchemy import inspect
p2 = Parent("Foo", "Bar")
p2_inst = inspect(p2)
c3 = Child("Baz")
c3_inst = inspect(c3)
db_parent = session.query(Parent).first()
db_parent_inst = inspect(db_parent)
print("Pending State before append:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
db_parent.children.append(p2.children[0])
print("Pending State after append:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
session.expunge(p2)
print("Pending State after expunge:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
session.commit()
The result of running this will be:
Pending State before append:
p2_inst : False
c3_inst : False
db_parent_inst : False
Pending State after append:
p2_inst : True
c3_inst : True
db_parent_inst : False
Pending State after expunge:
p2_inst : False
c3_inst : True
db_parent_inst : False
And there you have it. Once I thought about it a bit, I suppose it makes sense. There is no reason for db_parent to ever enter a "pending" state because you're not actually doing anything to the record in MySQL. My guess on why p2 becomes pending is that it is due to an order of operations: in order for c3 to become pending, all of its relationships must exist (including p2), and so even when you change the child's parent, the session still thinks that it needs to add the parent.
I'd love for someone more knowledgeable on SQLAlchemy to correct me, but to the best of my knowledge, that's my best explanation :)
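Applied to the failing snippet from the question, the fix looks like this (same objects and session as above):

p2 = Parent("Foo", "Bar")
c3 = Child("baz")
p2.children.append(c3)
db_parent = session.query(Parent).first()
db_parent.children.append(p2.children[0])
# p2 was pulled into the session as pending by the append; detach it
# so that only the Child row is inserted
session.expunge(p2)
session.commit()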

SQLAlchemy - Problem with an association table and dates in primary join [duplicate]

This question already has an answer here:
Sqlalchemy - Can we use date comparison in relation definition? (closed as a duplicate)
I am working on defining my mapping with SQLAlchemy and I am pretty much done except for one thing.
I have a 'resource' object and an association table 'relation' with several properties and a relationship between two resources.
What I have been trying to do, almost successfully so far, is to provide two properties on the resource object, parent and children, to traverse the tree stored by the association table.
A relation between two resources only lasts for a while, so there is a start and an end date. Only one resource can be the parent of another resource at a time.
My problem is that if I expire one relation and create a new one, the parent property is not refreshed. I am thinking maybe there is an issue with the primaryjoin for the parent property of resource.
Here is some code:
resource_table = model.tables['resource']
relation_table = model.tables['resource_relation']

mapper(Resource, resource_table,
    properties = {
        'type' : relation(ResourceType, lazy = False),
        'groups' : relation(Group,
            secondary = model.tables['resource_group'],
            backref = 'resources'),
        'parent' : relation(Relation, uselist = False,
            primaryjoin = and_(
                relation_table.c.res_id == resource_table.c.res_id,
                relation_table.c.end_date > func.now())),
        'children' : relation(Relation,
            primaryjoin = and_(
                relation_table.c.parent_id == resource_table.c.res_id,
                relation_table.c.end_date > func.now()))
    }
)

mapper(Relation, relation_table,
    properties = {
        'resource' : relation(Resource,
            primaryjoin = (relation_table.c.res_id == resource_table.c.res_id)),
        'parent' : relation(Resource,
            primaryjoin = (relation_table.c.parent_id == resource_table.c.res_id))
    }
)
oldrelation = resource.parent
oldrelation.end_date = datetime.today()
relation = self.createRelation(parent, resource)
# Here the relation object has not replaced oldrelation in the resource object
Any idea?
Thanks,
Richard Lopes
Consider using >= instead of > in date comparison.
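Applied to the mapping above, only the comparison in the parent (and children) primaryjoin changes; for example:

'parent' : relation(Relation, uselist = False,
    primaryjoin = and_(
        relation_table.c.res_id == resource_table.c.res_id,
        relation_table.c.end_date >= func.now())),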
