SQLAlchemy - Problem with an association table and dates in primary join [duplicate] - python

This question already has an answer here:
Sqlalchemy - Can we use date comparison in relation definition?
(1 answer)
Closed 2 years ago.
I am working on defining my mapping with SQLAlchemy and I am pretty much done, except for one thing.
I have a 'resource' object and an association table 'relation' that carries several properties and links two resources.
What I have been trying to do, almost successfully so far, is to provide two properties on the resource object, parent and children, to traverse the tree stored by the association table.
A relation between two resources only lasts for a while, so there is a start date and an end date. Only one resource can be the parent of another resource at a time.
My problem is that if I expire one relation and create a new one, the parent property is not refreshed. I am thinking there may be an issue with the primaryjoin for the parent property of resource.
Here is some code:
resource_table = model.tables['resource']
relation_table = model.tables['resource_relation']

mapper(Resource, resource_table,
    properties = {
        'type' : relation(ResourceType, lazy = False),
        'groups' : relation(Group,
            secondary = model.tables['resource_group'],
            backref = 'resources'),
        'parent' : relation(Relation, uselist=False,
            primaryjoin = and_(
                relation_table.c.res_id == resource_table.c.res_id,
                relation_table.c.end_date > func.now())),
        'children' : relation(Relation,
            primaryjoin = and_(
                relation_table.c.parent_id == resource_table.c.res_id,
                relation_table.c.end_date > func.now()))
    }
)

mapper(Relation, relation_table,
    properties = {
        'resource' : relation(Resource,
            primaryjoin = (relation_table.c.res_id == resource_table.c.res_id)),
        'parent' : relation(Resource,
            primaryjoin = (relation_table.c.parent_id == resource_table.c.res_id))
    }
)
oldrelation = resource.parent
oldrelation.end_date = datetime.today()
relation = self.createRelation(parent, resource)
# Here the relation object has not replaced oldrelation in the resource object
Any idea?
Thanks,
Richard Lopes

Consider using >= instead of > in the date comparison.
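A minimal sketch of that suggestion applied to the 'parent' property from the question (same tables and columns as above, nothing else changed):

'parent' : relation(Relation, uselist=False,
    primaryjoin = and_(
        relation_table.c.res_id == resource_table.c.res_id,
        relation_table.c.end_date >= func.now()))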

Related

How to update object returned in query

So I'm a Flask/SQLAlchemy newbie, but this seems like it should be pretty simple. Yet for the life of me I can't get it to work, and I can't find any documentation for this anywhere online. I have a somewhat complex query that returns a list of database objects.
items = db.session.query(X, func.count(Y.x_id).label('total')).\
    filter(X.size >= size).\
    outerjoin(Y, X.x_id == Y.x_id).\
    group_by(X.x_id).\
    order_by('total ASC').\
    limit(20).all()
After I get this list of items, I want to loop through it and update some property on each item.
for it in items:
    it.some_property = 'xyz'
    db.session.commit()
However, what's happening is that I'm getting an error:
it.some_property = 'xyz'
AttributeError: 'result' object has no attribute 'some_property'
I'm not crazy. I'm positive that the property does exist on model X, which is subclassed from db.Model. Something about the query is preventing me from accessing the attributes, even though I can clearly see they exist in the debugger. Any help would be appreciated.
class X(db.Model):
    x_id = db.Column(db.Integer, primary_key=True)
    size = db.Column(db.Integer, nullable=False)
    oords = db.relationship('Oords', lazy=True, backref=db.backref('x', lazy='joined'))

    def __init__(self, size):
        self.size = size
Given your example, your result objects do not have the attribute some_property, just like the exception says. (Neither do model X objects, but I hope that's just an error in the example.)
They have the explicitly labelled total as the second column and the model X instance as the first. If you mean to access a property of the X instance, fetch it from the result row first, either by index or by the implicit label X:
items = db.session.query(X, func.count(Y.x_id).label('total')).\
    filter(X.size >= size).\
    outerjoin(Y, X.x_id == Y.x_id).\
    group_by(X.x_id).\
    order_by('total ASC').\
    limit(20).\
    all()

# Unpack each result row into the X instance and the count
for x, total in items:
    x.some_property = 'xyz'

# Please commit after *all* the changes.
db.session.commit()
As noted in the other answer, you could use bulk operations as well, though your limit(20) will make that a lot more challenging.
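A rough sketch of that idea, assuming x_id is the primary key and some_property really is a mapped column on X: run the limited query first, then issue a single bulk UPDATE against the collected ids.

# Collect the ids from the already-limited result set
ids = [x.x_id for x, total in items]

# One bulk UPDATE instead of per-row attribute assignments
db.session.query(X).filter(X.x_id.in_(ids)).update(
    {'some_property': 'xyz'}, synchronize_session=False)
db.session.commit()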
You should use the update function, like this:
from sqlalchemy import update

stmt = update(users).where(users.c.id == 5).\
    values(name='user #5')
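The Core statement above still has to be executed; a minimal sketch, assuming a session object is at hand:

session.execute(stmt)
session.commit()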
Or:
session = self.db.get_session()

session.query(Organisation).\
    filter_by(id_organisation = organisation.id_organisation).\
    update(
        {
            "name" : organisation.name,
            "type" : organisation.type,
        }, synchronize_session = False)

session.commit()
session.close()
The SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/core/dml.html

How to use make_transient() to duplicate an SQLAlchemy mapped object?

I know the question of how to duplicate or copy an SQLAlchemy mapped object has been asked many times. The answer always depends on the needs, or on how "duplicate" or "copy" is interpreted.
This is a specialized version of the question, because I got the tip to use make_transient() for that.
But I have some problems with it. I don't really know how to handle the primary key (PK) here. In my use cases the PK is always auto-generated by SQLAlchemy (or by the database in the background), but this doesn't happen for the duplicated object.
The code below is somewhat pseudo-code.
import sqlalchemy as sa
import sqlalchemy.orm as sao
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.session import make_transient
from sqlalchemy.orm.util import has_identity

_Base = declarative_base()
_engine = sa.create_engine('postgres://...')
_session = sao.sessionmaker(bind=_engine)()

class MachineData(_Base):
    __tablename__ = 'Machine'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)

class TUnitData(_Base):
    __tablename__ = 'TUnit'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)
    _machine_fk = sa.Column('machine', sa.Integer, sa.ForeignKey('Machine.oid'))
    _machine = sao.relationship("MachineData")

    def __str__(self):
        return '{}.{}: oid={}(hasIdentity={}) machine={}(fk={})' \
            .format(type(self), id(self),
                    self._oid, has_identity(self),
                    self._machine, self._machine_fk)
if __name__ == '__main__':
    # any query resulting in one persistent object
    obj = GetOneMachineDataFromDatabase()

    # there is a valid 'oid', has_identity == True
    print(obj)

    # should I call expunge() first?
    # remove the association with any session
    # and remove its "identity key"
    make_transient(obj)

    # 'oid' is still there, but has_identity == False
    print(obj)

    # THIS causes an error because the 'oid' still exists
    # and is not newly auto-generated (which is what I would
    # expect to happen)
    _session.add(obj)
    _session.commit()
After making an object instance transient, you have to remove its object id (the primary key value). Without an object id you can add it to the session again, and the database will generate a new object id for it.
if __name__ == '__main__':
    # the persistent object with an identity in the database
    obj = GetOneMachineDataFromDatabase()

    # make it transient
    make_transient(obj)

    # remove the identity / object id
    obj._oid = None

    # adding the object again generates a new identity / object id
    _session.add(obj)

    # this includes a flush() and creates a new primary key
    _session.commit()

SQLAlchemy Error Appending to Relationship

I've been using SQLAlchemy 0.9.2 with Python Version 2.7.3 and have run into a bit of an odd problem that I can't quite seem to explain. Here is my relevant code:
Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    __table_args__ = (UniqueConstraint('first_name', 'last_name', name='_name_constraint'),)

    id = Column(Integer, primary_key=True)
    first_name = Column(String(32), nullable=False)
    last_name = Column(String(32), nullable=False)
    children = relationship('Child', cascade='all,delete', backref='parent')

    ## Constructors and other methods ##

class Child(Base):
    __tablename__ = 'children'

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    foo = Column(String(32), nullable=False)

    ## Constructors and other methods ##
So, a pretty basic set of models. The problem I'm experiencing is that I want to add a child to a parent that is already saved in the database. The kicker is that the child is currently related to a parent that is not in the database. Consider the following example:
database_engine = create_engine("mysql://user:password@localhost/db", echo=False)
session = scoped_session(sessionmaker(bind=database_engine, autoflush=True, autocommit=False))

p1 = Parent("Foo", "Bar")  # Create a parent and append a child
c1 = Child("foo")
p1.children.append(c1)
session.add(p1)
session.commit()  # This works without a problem

db_parent = session.query(Parent).first()
db_parent.children.append(Child("bar"))
session.commit()  # This also works without a problem

p2 = Parent("Foo", "Bar")
c3 = Child("baz")
p2.children.append(c3)
db_parent = session.query(Parent).first()
db_parent.children.append(p2.children[0])
session.commit()  # ERROR: This blows up
The error I'm receiving is that I'm breaking an integrity constraint, namely '_name_constraint'. SQLAlchemy is telling me that it is trying to insert a Parent with the same information. My question is, why in the world is it trying to add a second parent?
These are the steps I've taken so far that I don't have a good answer for:
I've inspected db_parent.children[2]. It points to the same memory address as p1 once I have appended it to the list.
I've inspected p2.children after the append. Oddly, p2 has no children once I have appended its child to db_parent. I think this has something to do with what is going on; I just don't understand why it's happening.
Any help would be much appreciated, as I simply don't understand what's going on here. If you need me to post more, please let me know. Thanks in advance.
Okay, after some more digging I think I have found a solution to my problem. I don't yet have the answer as to why it's happening the way it is, but I have a guess. The solution I discovered was to use session.expunge(p2) before session.commit().
I started exploring SQLAlchemy's internals, particularly the instance state. I found that once you append the child to db_parent, the original parent's state becomes pending. Here is an example:
from sqlalchemy import inspect
p2 = Parent("Foo", "Bar")
p2_inst = inspect(p2)
c3 = Child("Baz")
c3_inst = inspect(c3)
db_parent = session.query(Parent).first()
db_parent_inst = inspect(db_parent)
print("Pending State before append:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
db_parent.children.append(p2.children[0])
print("Pending State after append:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
session.expunge(p2)
print("Pending State after expunge:")
print("p2_inst : {}".format(p2_inst.pending))
print("c3_inst : {}".format(c3_inst.pending))
print("db_parent_inst : {}".format(db_parent_inst.pending))
session.commit()
The result of running this will be:
Pending State before append:
p2_inst : False
c3_inst : False
db_parent_inst : False
Pending State after append:
p2_inst : True
c3_inst : True
db_parent_inst : False
Pending State after expunge:
p2_inst : False
c3_inst : True
db_parent_inst : False
And there you have it. Once I thought about it a bit, I suppose it makes sense. There is no reason for db_parent to ever enter a "pending" state, because you're not actually doing anything to the record in MySQL. My guess is that p2 becomes pending due to an order of operations: in order for c3 to become pending, all of its relationships must exist (including p2), so even when you change the child's parent, the session still thinks that it needs to add the original parent.
I'd love for someone more knowledgeable on SQLAlchemy to correct me, but to the best of my knowledge, that's my best explanation :)
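For reference, a minimal sketch of the fix described above, applied to the failing snippet from the question:

p2 = Parent("Foo", "Bar")
c3 = Child("baz")
p2.children.append(c3)

db_parent = session.query(Parent).first()
db_parent.children.append(p2.children[0])  # re-parents c3 onto db_parent

session.expunge(p2)  # drop the throwaway parent from the session
session.commit()     # only c3 is inserted now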

Cascade delete from self-referential many-to-many relationship

I have a SQLite database with the following tables:
fits_table = Table("fits", saveddata_meta,
    Column("ID", Integer, primary_key = True),
    Column("ownerID", ForeignKey("users.ID"), nullable = True, index = True),
    Column("shipID", Integer, nullable = False, index = True),
    Column("name", String, nullable = False),
    Column("timestamp", Integer, nullable = False),
    Column("characterID", ForeignKey("characters.ID"), nullable = True),
    Column("damagePatternID", ForeignKey("damagePatterns.ID"), nullable = True),
    Column("booster", Boolean, nullable = False, index = True, default = 0))

projectedFits_table = Table("projectedFits", saveddata_meta,
    Column("sourceID", ForeignKey("fits.ID"), primary_key = True),
    Column("victimID", ForeignKey("fits.ID"), primary_key = True),
    Column("amount", Integer))

mapper(Fit, fits_table,
    properties = {
        "_Fit__projectedFits" : relation(Fit,
            primaryjoin = projectedFits_table.c.victimID == fits_table.c.ID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList)
    })
It's basically a relationship table that links a Fit to another Fit.
I've been trying to figure out the proper way to cascade a delete, but I cannot get it to work. I would like that, when a Fit is deleted, any rows in the relationship table where the fit's ID is in either the source or the victim column are also deleted.
EDIT: I forgot to add which cascade flags I tried.
cascade='all, delete, delete-orphan', single_parent=True - did not work. In fact, manually deleting the relationship row also deleted the parent (whatever matched the sourceID).
cascade='delete', single_parent=True - did not have the issue the above setting had, but still did not delete the relationship record when the Fit was deleted.
cascade='all, delete', single_parent=True - same as above.
EDIT 2:
I kept fiddling with it, and without adding a cascade attribute it kind of works. Let me explain:
If I have Fit B linked to Fit A (A is the parent in this case) and then delete Fit B, it does not delete the relationship. However, if I delete Fit A, it does delete the relationship.
I am assuming that I have just been thinking about this completely wrong. When I have a fit instance, it gathers the relationships it has, and deletes them when that fit is deleted. However, when I delete Fit B, it technically doesn't have any fits linked to it as children, so it never deletes them.
I guess a workaround would be to assign the fits that B is a child of to a dummy attribute so that they get deleted as well, or to do some sort of post-processing in the middle layer of the application. Will post back with results, though I still welcome any thoughts. =)
I figured it out. As stated in an edit to the OP, I simply had to create a new relation with the criteria swapped. This way both relationships are loaded and deleted when the fit is deleted:
mapper(Fit, fits_table,
    properties = {
        "_Fit__projectedFits" : relation(Fit,
            primaryjoin = projectedFits_table.c.victimID == fits_table.c.ID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList),
        "_Fit__projectedOnto" : relation(Fit,
            primaryjoin = fits_table.c.ID == projectedFits_table.c.sourceID,
            secondaryjoin = fits_table.c.ID == projectedFits_table.c.victimID,
            secondary = projectedFits_table,
            collection_class = HandledProjectedFitList)
    })
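A short usage sketch of that claim, assuming a session bound to the same metadata and fit_id as a placeholder: deleting a Fit should now also remove its rows in projectedFits on both sides.

fit = session.query(Fit).get(fit_id)  # fit_id is a placeholder
session.delete(fit)
session.commit()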

SQLAlchemy - Searching Three Tables at Once

EDIT:
Please excuse me, as I have just realised I've made an error with the example below. Here's what I want to achieve:
Say I have the three tables as described below. When a user enters a query, it will search all three tables for results where name is LIKE %query%, but only return unique results.
Here's some example data and output:
Data:
**Grandchild:**
id: 1
name: John
child_id: 1
**Grandchild:**
id: 2
name: Jesse
child_id: 2
**Child:**
id: 1
name: Joshua
parent_id: 1
**Child:**
id: 2
name: Jackson
parent_id: 1
**Parent:**
id: 1
name: Josie
If a user searches for "j", it will return the two Grandchild entries: John and Jesse.
If a user searches for "j, Joshua", it will return only the Grandchildren whose Child is Joshua - in this case, only John.
Essentially, I want to search all the Grandchild entries, and then, if the user types in more keywords, filter those Grandchildren down based on their related Child entry's name. "j" will return all grandchildren starting with "j"; "j, Josh" will return all grandchildren starting with "j" whose Child's name is LIKE %Josh%.
So, I have a setup like this:
Grandchild {
    id
    name
    child_id
}

Child {
    id
    name
    parent_id
}

Parent {
    id
    name
}
Grandchild is linked/mapped to Child, and Child is mapped to Parent.
What I want to do is something like the query below, where I search all three tables at once:
return Grandchild.query.filter(or_(
    Grandchild.name.like("%" + query + "%"),
    Grandchild.child.name.like("%" + query + "%"),
    Grandchild.child.parent.name.like("%" + query + "%")
)).all()
Obviously the query above is incorrect, and returns an error:
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'name'
What would be the correct way to go about what I'm attempting?
I am running MySQL, Flask, and Flask-SQLAlchemy (which I believe extends SQLAlchemy).
In my opinion, it is better to modify your data model (if that is possible).
You can create a self-referencing table 'People' like this:
People {
    id,
    name,
    parent_id,
    grandparent_id,
}
class People(Base):
    __tablename__ = "people"

    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(Unicode(255), nullable=False)
    parent_id = Column(Integer, ForeignKey('people.id'))       # parent in the hierarchy
    grandparent_id = Column(Integer, ForeignKey('people.id'))  # grandparent in the hierarchy

    # relationships
    parent = relationship("People", primaryjoin="People.parent_id==People.id",
                          remote_side=[id])
    grandparent = relationship("People", primaryjoin="People.grandparent_id==People.id",
                               remote_side=[id])
Then things become more obvious:
session.query(People).filter(People.name.like("%" + query + "%"))
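And, as a hedged sketch of the two-term case from the question ("j, Josh"), assuming the second term is meant to match the parent's name (parent_query is a placeholder), the many-to-one parent relationship can be probed with has():

session.query(People).filter(
    People.name.like("%" + query + "%"),
    People.parent.has(People.name.like("%" + parent_query + "%")))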
