My SQLAlchemy application (running on top of MariaDB) includes two models, MyModelA and MyModelB, where the latter is a child record of the former:
class MyModelA(db.Model):
    a_id = db.Column(db.Integer, nullable=False, primary_key=True)
    my_field1 = db.Column(db.String(1024), nullable=True)

class MyModelB(db.Model):
    b_id = db.Column(db.Integer, nullable=False, primary_key=True)
    a_id = db.Column(db.Integer, db.ForeignKey(MyModelA.a_id), nullable=False)
    my_field2 = db.Column(db.String(1024), nullable=True)
These are the instances of MyModelA and MyModelB that I create:
>>> my_a = MyModelA(my_field1="A1")
>>> my_a.a_id
1
>>> MyModelB(a_id=my_a.a_id, my_field2="B1")
I have the following code that deletes the instance of MyModelA where a_id==1:
from sqlalchemy.exc import IntegrityError  # needed for the except clause below

db.session.commit()
try:
    my_a = MyModelA.query.get(1)  # query.get() takes the primary key value positionally
    assert my_a is not None
    print("#1) Number of MyModelAs: %s\n" % MyModelA.query.count())
    db.session.delete(my_a)
    db.session.commit()
except IntegrityError:
    print("#2) Cannot delete instance of MyModelA because it has child record(s)!")
    db.session.rollback()

print("#3) Number of MyModelAs: %s\n" % MyModelA.query.count())
When I run this code, these are the unexpected results I get:
#1) Number of MyModelAs: 1
#2) Cannot delete instance of MyModelA because it has child record(s)!
#3) Number of MyModelAs: 0
The delete supposedly fails and the DB throws an exception, which causes a rollback. However, even after the rollback, the number of rows in the table indicates that the row -- which supposedly wasn't deleted -- is actually gone!
Why is this happening? How can I fix this? It seems like a bug in SQLAlchemy.
TL;DR
Your problem might be related to the lack of an explicit relationship declaration.
For example, here is a sample of an object relationship. In addition to using a ForeignKey column, the class explicitly uses the relationship() directive to define the connection. In the session API documentation, the following text appears:
object references should be constructed at the object level, not at the foreign key level
This hints at the way SQLAlchemy manages relations. I am not deeply familiar with the underlying mechanisms, but it is possible that this is what happens: your session only includes the MyModelA object. Since you did not use the relationship() directive in the definition of MyModelB, objects of type MyModelA are not aware that some other object might refer to them through a ForeignKey. Hence, when the session is about to commit, it does not know that deleting the object affects some other MyModelB object, and its transaction mechanism does not take that into account.
I suggest adding the relationship explicitly; it might prevent that behavior.
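A minimal sketch of what that could look like (the my_a attribute and the my_bs backref name are illustrative additions, not from the original models):

from flask_sqlalchemy import SQLAlchemy

class MyModelB(db.Model):
    b_id = db.Column(db.Integer, nullable=False, primary_key=True)
    a_id = db.Column(db.Integer, db.ForeignKey(MyModelA.a_id), nullable=False)
    my_field2 = db.Column(db.String(1024), nullable=True)

    # Object-level link: with this declared, the session knows a MyModelA
    # row can be referenced by MyModelB rows before it tries to delete one.
    my_a = db.relationship(MyModelA, backref='my_bs')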
Related
I am using Flask-SQLAlchemy with Postgres. I noticed that when I delete a record, the next record will reuse that one's id, which is not ideal for my purposes. Another SO question confirms that this is the default behavior; in particular, that question discusses the SQL behind the scenes. However, when I tested the solution from that question, it did not work: Postgres was not using SERIAL for the primary key, and I had to edit it in pgAdmin myself. Solutions for other setups mention using a Sequence, but it is not shown where the sequence comes from.
So I would hope this code:
class Test1(db.Model):
    __tablename__ = "test1"
    # id = ... this is what needs to change
    id = db.Column(db.Integer, primary_key=True)
would not reuse, say, 3 if record 3 was deleted and another was created, like so:
i1 = Invoice()
db.session.add(i1)
i2 = Invoice()
db.session.add(i2)
i3 = Invoice()
db.session.add(i3)
db.session.commit()
invs = Invoice.query.all()
for i in invs:
print(i.id) # Should print 1,2,3
Invoice.query.filter_by(id=3).delete()  # no 3 now
db.session.commit()
i4 = Invoice()
db.session.add(i4)
db.session.commit()
invs = Invoice.query.all()
for i in invs:
print(i.id) # Should print 1,2,4
Other solutions said to use autoincrement=False. Okay, but then how do I determine what number to set the id to? Is there a way to save a variable in the class without it being a column?
class Test2(db.Model):
    __tablename__ = 'test2'
    id = ...
    last_id = 3
    # code to set last_id when a record is deleted
Edit:
So I could (although I do not think I should) use Python to do this. I think this more clearly illustrates what I am trying to do.
class Test1(db.Model):
    __tablename__ = "test1"
    # id = ... this is what needs to change
    id = db.Column(db.Integer, primary_key=True)
    last_used_id = 30

    def __init__(self):
        self.id = Test1.last_used_id + 1
        self.last_used_id += 1  # note: creates an instance attribute that shadows the class attribute
        # Not sure if this somehow messes with SQLAlchemy / the db making the id first.
This will make any new record not touch an id that was already used.
However, with this approach I run into Python's class variable behavior. See this SO question.
Note to future self: see UUIDs, per @net's comment here:
You should use autoincrement=True. This will automatically increment the id every time you add a new row.
class Test1(db.Model):
    __tablename__ = "test1"
    id = db.Column(db.Integer, primary_key=True, autoincrement=True, unique=True, nullable=False)
    ....
By default Postgres will not reuse ids, for performance reasons: attempting to avoid gaps or to re-use deleted IDs creates horrible performance problems. See the PostgreSQL wiki FAQ.
You don't need to keep track of the id. When you call db.session.add(i4) and db.session.commit(), it will automatically insert with the next incremented id.
I have some models with a relationship defined between them like so:
class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True, nullable=False)
    children = relationship('Child', lazy='joined')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True, nullable=False)
    father_id = Column(Integer, ForeignKey('parent.id'), nullable=False)
If I add a child within the session (using session.add(Child(...))), I would expect its father's children relationship to update to include this child after flushing the session. However, I'm not seeing that.
parent = session.query(Parent).get(parent_id)
num_children = len(parent.children)
# num_children == 3, for example
session.add(Child(father_id=parent_id))
session.flush()
new_num_children = len(parent.children)
# new_num_children == 3, but it should be 4!
Any help would be much appreciated!
I can add the new child to the parent.children list directly and flush the session, but due to other existing code, I want to add it using session.add.
I can also commit after adding the child, which does correctly update the parent.children relationship, but I don't want to commit the transaction at that point.
I've tried adding a backref to the children relationship, but that doesn't seem to make any difference.
I've just run into this problem myself. SQLAlchemy does some internal memoisation to prevent it emitting a new SQL query every time you access a relationship. The problem is that it doesn't seem to realise that updating the foreign key directly could have an effect on the relationship. While SQLAlchemy probably could be patched to deal with this for simple joins, it would be very difficult for complex joins and I presume this is why it behaves the way it does.
When you do session.flush(), you're sending the changes back to the database, but SQLAlchemy doesn't realise it needs to query the database to update the relationship.
If you call session.expire_all() after the flush, then you force SQLAlchemy to reload every model instance and relationship when they're next accessed - this solves the problem.
You can also use session.expire(obj) to do this more selectively or session.refresh(obj) to do it selectively and immediately re-query the database.
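Applied to the snippet from the question, a minimal sketch (same names as above):

session.add(Child(father_id=parent_id))
session.flush()

# Mark the cached relationship stale; the next access re-queries the database.
session.expire(parent, ['children'])  # or session.expire_all(), or session.refresh(parent)

new_num_children = len(parent.children)  # now 4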
For more information about these methods and how they differ, I found a helpful blog post: https://www.michaelcho.me/article/sqlalchemy-commit-flush-expire-refresh-merge-whats-the-difference
Official docs: https://docs.sqlalchemy.org/en/13/orm/session_api.html
Given a simple declarative-based class:
class Entity(db.Model):
    __tablename__ = 'brand'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255), nullable=False)
And the following script:
entity = Entity()
entity.name = 'random name'
db.session.add(entity)
db.session.commit()

# Just by accessing the property name of the created object a
# SELECT statement is sent to the database.
print(entity.name)
When I enable echo mode in SQLAlchemy, I can see the INSERT statement in the terminal, plus an extra SELECT issued just when I access a property (column) of the model (table row).
If I don't access any property, the query is not issued.
What is the reason for that behavior? In this basic example, we already have the value of the name property assigned to the object, so why is an extra query needed? Is it to ensure an up-to-date value, or something like that?
By default, SQLAlchemy expires objects in the session when you commit. This is controlled via the expire_on_commit parameter.
The reasoning behind this is that the row behind the instance could have been modified outside of the transaction, so if you are not careful you could run into data races, but if you know what you are doing you can safely turn it off.
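If you know turning it off is safe in your case, a minimal sketch with plain SQLAlchemy (Flask-SQLAlchemy accepts the same flag via session_options={'expire_on_commit': False}):

from sqlalchemy.orm import sessionmaker

# expire_on_commit=False keeps attribute values loaded after commit,
# so accessing entity.name no longer triggers a refresh SELECT.
Session = sessionmaker(bind=engine, expire_on_commit=False)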
I'm trying to use association proxies to make dealing with tag-style records a little simpler, but I'm running into a problem enforcing uniqueness and getting objects to reuse existing tags rather than always create new ones.
Here is a setup similar to what I have. The examples in the documentation have a few recipes for enforcing uniqueness, but they all rely on having access to a session, and usually require a single global session, which I cannot use in my case.
from sqlalchemy import Column, Integer, String, create_engine, ForeignKey
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()
engine = create_engine('sqlite://', echo=True)
Session = sessionmaker(bind=engine)

def _tag_find_or_create(name):
    # can't use global objects here, may be multiple sessions and engines
    # ?? No access to session here, how to do a query?
    tag = session.query(Tag).filter_by(name=name).first()  # if a session were available
    tag = Tag.query.filter_by(name=name).first()           # or with a query property
    if not tag:
        tag = Tag(name=name)
    return tag

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    tags = relationship('Tag', secondary='itemtag')
    tagnames = association_proxy('tags', 'name', creator=_tag_find_or_create)

class ItemTag(Base):
    __tablename__ = 'itemtag'
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey('item.id'))
    tag_id = Column(Integer, ForeignKey('tag.id'))

class Tag(Base):
    __tablename__ = 'tag'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)

# Scenario 1
session = Session()
item = Item()
session.add(item)
item.tagnames.append('red')

# Scenario 2
item2 = Item()
item2.tagnames.append('blue')
item2.tagnames.append('red')
session.add(item2)
Without the creator function, I just get tons of duplicate Tag items. The creator function seems like the most obvious place to put this type of check, but I'm unsure how to do a query from inside the creator function.
Consider the two scenarios provided at the bottom of the example. In the first example, it seems like there should be a way to get access to the session in the creator function, since the object the tags are being added to is already associated with a session.
In the second example, the Item object isn't yet associated with a session, so the validation check can't happen in the creator function. It would have to happen later when the object is actually added to a session.
For the first scenario, how would I go about getting access to the session object in the creator function?
For the second scenario, is there a way to "listen" for when the parent object is added to a session and validate the association proxies at that point?
For the first scenario, you can use object_session.
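For reference, object_session returns the session an object is already attached to; a minimal sketch (this assumes you have a handle on an attached object such as item, which the creator function itself does not receive):

from sqlalchemy.orm import object_session

# Returns the Session that `item` belongs to, or None if it is detached.
session = object_session(item)
tag = session.query(Tag).filter_by(name='red').first()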
As for the question overall: true, you need access to the current session. If using scoped_session is appropriate in your application, then the second part of the recipe you link to should work fine. See Contextual/Thread-local Sessions for more info.
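Under that assumption, a rough sketch of the creator using a module-level scoped_session (the Session name and adding on creation are my choices, not part of the recipe):

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine))

def _tag_find_or_create(name):
    session = Session()  # the current thread-local session
    tag = session.query(Tag).filter_by(name=name).first()
    if not tag:
        tag = Tag(name=name)
        session.add(tag)  # add immediately, as advised below
    return tag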
Working with events and catching objects as they change from transient to persistent state will not make your code pretty or very robust. So I would add new Tag objects to the session immediately; if the transaction is rolled back, they simply won't end up in the database.
Note that in a multi-user environment you are likely to have a race condition: the same tag is new and gets created simultaneously by two users. The user who commits last will fail (if you have a unique constraint on the database).
In this case you might consider doing without the unique constraint and having a (daily) procedure that cleans up those duplicates (and reassigns relations). Over time there would be fewer and fewer new tags, and fewer chances for such clashes.
I have two tables, say A and B. Both have a primary key id. They have a many-to-many relationship, SEC.
SEC = Table('sec', Base.metadata,
    Column('a_id', Integer, ForeignKey('A.id'), primary_key=True, nullable=False),
    Column('b_id', Integer, ForeignKey('B.id'), primary_key=True, nullable=False)
)
class A(Base):
    ...
    id = Column(Integer, primary_key=True)
    ...
    rels = relationship('B', secondary=SEC)

class B(Base):
    ...
    id = Column(Integer, primary_key=True)
    ...
Let's consider this piece of code.
a = A()
b1 = B()
b2 = B()
a.rels = [b1, b2]
...
#some place later
b3 = B()
a.rels = [b1, b3] # errors sometimes
Sometimes, I get an error at the last line saying
duplicate key value violates unique constraint a_b_pkey
In my understanding, it tries to add (a.id, b1.id) into the 'sec' table again, resulting in a unique constraint error. Is that what is happening? If so, how can I avoid it? If not, why do I get this error?
The problem is that you want to make sure the instances you create are unique. We can create an alternate constructor that checks a cache of existing uncommitted instances, or queries the database for an existing committed instance, before returning a new instance.
Here is a demonstration of such a method:
from sqlalchemy import Column, Integer, String, ForeignKey, Table
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship

engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(engine)
Base = declarative_base(engine)
session = Session()

class Role(Base):
    __tablename__ = 'role'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)

    @classmethod
    def get_unique(cls, name):
        # get the session cache, creating it if necessary
        cache = session._unique_cache = getattr(session, '_unique_cache', {})
        # create a key for memoizing
        key = (cls, name)
        # check the cache first
        o = cache.get(key)
        if o is None:
            # check the database if it's not in the cache
            o = session.query(cls).filter_by(name=name).first()
            if o is None:
                # create a new one if it's not in the database
                o = cls(name=name)
                session.add(o)
            # update the cache
            cache[key] = o
        return o

Base.metadata.create_all()

# demonstrate cache check
r1 = Role.get_unique('admin')  # this is new
r2 = Role.get_unique('admin')  # from cache
session.commit()  # doesn't fail

# demonstrate database check
r1 = Role.get_unique('mod')  # this is new
session.commit()
session._unique_cache.clear()  # empty cache
r2 = Role.get_unique('mod')  # from database
session.commit()  # nop

# show final state
print(session.query(Role).all())  # two unique instances from four create calls
The get_unique method was inspired by the example from the SQLAlchemy wiki. This version is much less convoluted, favoring simplicity over flexibility. I have used it in production systems with no problems.
There are obviously improvements that can be added; this is just a simple example. The get_unique method could be inherited from a UniqueMixin, to be used for any number of models, and more flexible memoizing of arguments could be implemented. This also sets aside the problem of multiple threads inserting conflicting data, mentioned by Ants Aasma; handling that is more complex but should be an obvious extension. I leave that to you.
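For illustration, a rough sketch of what such a UniqueMixin might look like (the mixin name and the explicit session parameter are my additions, not part of the answer above):

class UniqueMixin(object):
    @classmethod
    def get_unique(cls, session, **kwargs):
        # same cache-then-database check as above, generalized over columns
        cache = session._unique_cache = getattr(session, '_unique_cache', {})
        key = (cls, tuple(sorted(kwargs.items())))
        o = cache.get(key)
        if o is None:
            o = session.query(cls).filter_by(**kwargs).first()
            if o is None:
                o = cls(**kwargs)
                session.add(o)
            cache[key] = o
        return o

# usage, assuming Role now inherits UniqueMixin:
# r = Role.get_unique(session, name='admin')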
The error you mention is indeed from inserting a conflicting value into the sec table. To be sure that it comes from the operation you think it does, and not from some previous change, turn on SQL logging and check what values it is trying to insert before erroring out.
When overwriting a many-to-many collection value, SQLAlchemy compares the new contents of the collection with the state in the database and correspondingly issues delete and insert statements. Unless you are poking around in SQLAlchemy internals, there should be two ways to encounter this error.
First is concurrent modification: process 1 fetches the value a.rels and sees that it is empty; meanwhile process 2 also fetches a.rels, sets it to [b1, b2] and commits, flushing the (a, b1), (a, b2) tuples; process 1 then sets a.rels to [b1, b3], believing the previous contents were empty, and when it tries to flush the sec tuple (a, b1) it gets a duplicate key error. The correct action in such cases is usually to retry the transaction from the top. You can use serializable transaction isolation to instead get a serialization error in this case, which is distinct from a duplicate key error caused by a business logic bug.
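A minimal sketch of requesting that isolation level (the connection URL is a placeholder):

from sqlalchemy import create_engine

# Under SERIALIZABLE isolation, the concurrent overwrite above surfaces as a
# serialization failure that can be caught and retried, instead of showing up
# as a duplicate key error.
engine = create_engine('postgresql://user:password@localhost/mydb',
                       isolation_level='SERIALIZABLE')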
The second case happens when you have convinced SQLAlchemy that it doesn't need to know the database state, by setting the loading strategy of the rels attribute to noload. This can be done when defining the relationship, by adding the lazy='noload' parameter, or when querying, by calling .options(noload(A.rels)) on the query. SQLAlchemy will assume that the sec table has no matching rows for objects loaded with this strategy in effect.
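Both forms, sketched with the names from the question (a_id here is a placeholder primary key value):

from sqlalchemy.orm import noload

# at definition time:
#     rels = relationship('B', secondary=SEC, lazy='noload')

# or per query:
a = session.query(A).options(noload(A.rels)).get(a_id)
# Overwriting a.rels now emits only INSERTs into sec, because SQLAlchemy
# assumes there are no existing rows to delete.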