Cascade updates after SQLAlchemy "after_update" event - python

I am trying to cascade updates after a model has been updated:
@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object = target.object
    if object.is_finished():
        object.status = COMPLETED
The issue is that I can't seem to get the object status committed to the actual db. I can tell that the status does get set, because I have a listener attached to object.status.
I have tried calling commit() on the object but that results in:
ResourceClosedError: This transaction is closed
Is there a better way to accomplish this? Should I use the connection object directly?

For those who find this later on: I was able to successfully use the connection directly:
@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object_table = Object.__table__
    object = target.object
    if object.is_finished():
        connection.execute(
            object_table.update().
            where(object_table.c.id == object.id).
            values(status=COMPLETED)
        )
If there is a more elegant way, I am all for it.

When after_update is called, the session is in the middle of a flush, so you won't be able to commit any changes with that session. Another way is to create a new session inside the after_update event handler:
@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object = target.object
    if object.is_finished():
        session = db.create_scoped_session()  # or any other way to get a fresh session
        session.query(Object).filter(Object.id == object.id).update({"status": COMPLETED})
        session.commit()
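A third option is to hook in before the flush rather than after the update: attribute changes made inside a before_flush listener are written out by the same flush, so no extra connection or session is needed. A minimal sketch, reusing the Task, COMPLETED, and is_finished() names from the question:
from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def propagate_status(session, flush_context, instances):
    # before_flush runs before pending changes are written, so ORM
    # attribute changes made here join the same flush and commit
    for obj in session.dirty:
        if isinstance(obj, Task) and obj.object.is_finished():
            obj.object.status = COMPLETED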

Related

How does a session work with SQLAlchemy with a pool?

self.engine = db.create_engine(self.url, convert_unicode=True, pool_size=5, pool_recycle=1800, max_overflow=10)
connection = self.engine.connect()
Session = scoped_session(sessionmaker(bind=self.engine, autocommit=False, autoflush=True))
I initialize my session like this. And
def first():
    with Session() as session:
        second(session)

def second(session):
    session.add(obj)
    third(session)

def third(session):
    session.execute(query)
I use my session like this.
I think the pool assigns one connection per session, so I think the above code should work fine with pool_size=1, max_overflow=0. But when I set it up like that, it gets stuck and raises an exception like:
descriptor '__init__' requires a 'super' object but received a 'tuple'
Why is that? Does the pool assign more than one connection per session rather than one each?
Also, when using the session in a with block, can I skip handling commit and rollback when an exception occurs?
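For reference, a session is not permanently assigned a pool slot: it checks out a single connection from the engine's pool when it first emits SQL and returns it when the transaction ends or the session closes. A rough sketch of that lifecycle, using the names from the snippets above (note that the separate self.engine.connect() call above also holds a pooled connection, which matters with pool_size=1, max_overflow=0):
with Session() as session:    # no connection checked out yet
    session.add(obj)          # still none: pending objects live in memory
    session.execute(query)    # first SQL checks out the pooled connection
    session.commit()          # returns the connection to the pool
# closing the session (implicit at the end of the with block) also releases it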

How to create a new database connection in Django

I need to create a new database connection (session) to avoid an unexpected commit from a MySQL procedure inside my Django transaction. How do I set this up in Django?
I have tried duplicating the database configuration in the settings file. It worked for me, but it doesn't seem like a good solution. See my code for more detail.
@classmethod
def get_sequence_no(cls, name='', enable_transaction=False):
    """
    return the sequence no for the given key name
    """
    if enable_transaction:
        valobj = cls.objects.using('sequence_no').raw("call sp_get_next_id('%s')" % name)
        return valobj[0].current_val
    else:
        valobj = cls.objects.raw("call sp_get_next_id('%s')" % name)
        return valobj[0].current_val
Does anyone know how to use a custom database connection to call the procedure?
If you have a look at the django.db module, you can see that django.db.connection is a proxy for django.db.connections[DEFAULT_DB_ALIAS] and django.db.connections is an instance of django.db.utils.ConnectionHandler.
Putting this together, you should be able to get a new connection like this:
from django.db import connections
from django.db.utils import DEFAULT_DB_ALIAS, load_backend

def create_connection(alias=DEFAULT_DB_ALIAS):
    connections.ensure_defaults(alias)
    connections.prepare_test_settings(alias)
    db = connections.databases[alias]
    backend = load_backend(db['ENGINE'])
    return backend.DatabaseWrapper(db, alias)
Note that this function will open a new connection every time it is called and you are responsible for closing it. Also, the APIs it uses are probably considered internal and might change without notice.
To close the connection, it should be enough to call .close() on the object returned by the create_connection function:
conn = create_connection()
# do some stuff
conn.close()
With modern Django you can just do:
from django.db import connections
new_connection = connections.create_connection('default')
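Either way, the object you get back is a regular DatabaseWrapper, so the usual cursor API applies; just remember that Django will not close this connection for you. A small usage sketch:
from django.db import connections

new_connection = connections.create_connection('default')
try:
    with new_connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
finally:
    new_connection.close()  # not managed by Django's request lifecycle, so close it yourself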

Closing a session in SQLAlchemy

I have created a method in a separate Python file. Whenever I have to get any data from the database, I call this method.
Now I have a for loop in which, on every iteration, a db call is made to the method below, for example:
def get_method(self, identifier):
    sess = session.get_session()
    id = sess.query(..).filter(I.. == ..)
    return list(id)[0]

def get_session():
    engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost/db', echo=True)
    Session = sessionmaker(engine)
    sess = Session()
    return sess
I am getting FATAL: sorry, too many clients already, probably because I am not closing the sess object. Even after closing it I am getting the same issue.
How do I handle this?
You shouldn't be opening your session within the for loop. Do that before your loop begins, and close it after you're finished with your transactions. The documentation is helpful here: when to open and close sessions
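There is also a second leak in the code above: get_session() calls create_engine() on every call, and every engine owns its own connection pool that is never disposed. The usual layout creates the engine and sessionmaker once at module level and opens one session per batch of work. A sketch, needing SQLAlchemy 1.4+ for the context-manager form (Item and get_many are stand-in names for whatever mapped class and caller you have):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# created once at import time: the engine owns the connection pool,
# so re-creating it per call leaks pooled connections
engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost/db')
Session = sessionmaker(engine)

def get_many(identifiers):
    # one session (and one pooled connection) for the whole loop
    with Session() as sess:
        return [sess.get(Item, i) for i in identifiers]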

Zombie Connection in SQLAlchemy

DBSession = sessionmaker(bind=self.engine)

def add_person(name):
    s = DBSession()
    s.add(Person(name=name))
    s.commit()
Every time I run add_person(), another connection to my PostgreSQL DB is created.
Looking at:
SELECT count(*) FROM pg_stat_activity;
I see the count going up, until I get a "Remaining connection slots are reserved for non-replication superuser connections" error.
How do I kill those connections? Am I wrong in opening a new session everytime I want to add a Person record?
In general, you should keep your Session object (here DBSession) separate from any functions that make changes to the database. So in your case you might try something like this instead:
DBSession = sessionmaker(bind=self.engine)
session = DBSession()  # create your session outside of functions that will modify the database

def add_person(name):
    session.add(Person(name=name))
    session.commit()
Now you will not get new connections every time you add a person to the database.
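If you would rather keep a short-lived session per call, that also works, provided each session is closed so its connection returns to the engine's pool. On SQLAlchemy 1.4+ the context-manager form handles close, commit, and rollback for you; a sketch:
def add_person(name):
    # the session borrows one pooled connection and returns it on exit;
    # s.begin() commits on success and rolls back if an exception is raised
    with DBSession() as s, s.begin():
        s.add(Person(name=name))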

Should SQLAlchemy session.begin_nested() be committed with transaction.commit() when using pyramid_tm?

I'm developing a Pyramid application and am currently in the process of moving from SQLite to PostgreSQL. I've found that PostgreSQL's more restrictive transaction management is giving me a hard time.
I am using pyramid_tm because I find it convenient. Most of my problems occur during my async calls. What I have is views that serve up dynamic forms. The idea is: if we got an id that corresponds to a database row, we edit the existing row; otherwise, we are adding a new one.
@view_config(route_name='contact_person_form',
             renderer='../templates/ajax/contact_person_form.pt',
             permission='view',
             request_method='POST')
def contact_person_form(request):
    try:
        contact_person_id = request.params['contact_person']
        DBSession.begin_nested()
        contact_person = DBSession.query(ContactPerson).filter(ContactPerson.id == contact_person_id).one()
        transaction.commit()
    except (NoResultFound, DataError):
        DBSession.rollback()
        contact_person = ContactPerson(name='', email='', phone='')
    return dict(contact_person=contact_person)
I need to begin a nested transaction because otherwise my lazy request method, which is registered with config.add_request_method(get_user, 'user', reify=True) and is called when rendering my view,
def get_user(request):
    userid = unauthenticated_userid(request)
    if userid is not None:
        user = DBSession.query(Employee).filter(Employee.id == userid).first()
        return user
complains that the transaction has been interrupted and the SELECT on employees will be skipped.
I have two questions:
Is it okay to do transaction.commit() on a session.begin_nested() nested transaction? I don't know exactly where SQLAlchemy ends and pyramid_tm begins. If I try to commit the session, I get an exception that says I can only commit using the transaction manager. On the other hand, DBSession.rollback() works fine.
Does handling this like
try:
    # do something with db
except:
    # oops, let's do something else
make sense? I have the feeling this is 'pythonic', but I'm not sure if this underlying transaction thing calls for non-pythonic means.
Calling transaction.commit() in your code commits the session and causes your contact_person object to be expired when you try to use it later, after the commit. Similarly, if your user object is touched on both sides of the commit, you'll have problems.
As you said, if there is an exception (NoResultFound) then your session is now invalidated. What you're looking for is a savepoint, which transaction supports, but not directly through begin_nested. Rather, you can use transaction.savepoint() combined with DBSession.flush() to handle errors.
The logic here is that flush executes the SQL on the database, raising any errors and allowing you to rollback the savepoint. After the rollback, the session is recovered and you can go on your merry way. Nothing has been committed yet, leaving that job for pyramid_tm at the end of the request.
try:
    sp = transaction.savepoint()
    contact_person = DBSession.query(ContactPerson)\
        .filter(ContactPerson.id == contact_person_id)\
        .one()
    DBSession.flush()
except (NoResultFound, DataError):
    sp.rollback()
    contact_person = ContactPerson(name='', email='', phone='')
return dict(contact_person=contact_person)
