SQLAlchemy - rollback when exception - Python

I need to roll back a transaction in the Core event 'handle_error' (to catch 'KeyboardInterrupt'), but the parameter passed to this event is an ExceptionContext. How do I do this?

I usually have this kind of pattern when working with SQLAlchemy:
session = get_the_session_one_way_or_another()
try:
    # do something with the session
except:  # * see comment below
    session.rollback()
    raise
else:
    session.commit()
To make things easier to use, it is useful to have this as a context manager:
from contextlib import contextmanager

@contextmanager
def get_session():
    session = get_the_session_one_way_or_another()
    try:
        yield session
    except:
        session.rollback()
        raise
    else:
        session.commit()
And then:
with get_session() as session:
    # do something with the session
If an exception is raised within the block, the transaction will be rolled back by the context manager.
*There is a bare except: which catches literally everything. That is usually not what you want, but here the exception is always re-raised, so it's fine.
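For illustration, a quick sketch of the rollback path using the get_session() helper above ('user' here stands for any mapped object, an assumption for the example):

with get_session() as session:
    session.add(user)
    raise RuntimeError("boom")  # the context manager rolls back, then re-raises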

Related

PSQL connection closing time delay using SQLAlchemy

Why does there appear to be a significant delay between calling session.close() and the session actually closing?
I'm "using up" connections in a way that doesn't feel right. Is there a better way to do this or a design pattern I'm missing?
Following the guide here I use the following code, copied for completeness:
from contextlib import contextmanager

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
def run_my_program():
    with session_scope() as session:
        ThingOne().go(session)
        ThingTwo().go(session)
This works great for reliably committing data and avoiding invalid sessions.
The problem is with hitting connection limits.
For example, say I have a page that makes 5 asynchronous calls per visit. If I visit the page and hit refresh in quick succession, it will spawn 5 * number_times_refreshed connections. It will eventually close them, but there is a non-negligible time delay.
The issue was the use of sessionmaker() and binding the db engine inside the request path. Specifically, this is the problematic version:
def newSession():
    engine = create_engine(settings.DATABASE_URL)
    Base.metadata.bind = engine
    DBSession = sessionmaker(bind=engine)
    session = DBSession()
    return session

@contextmanager
def session_scope():
    session = newSession()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
This creates a new engine with each request. In addition to being a poor pattern, it caused confusion about which "connection" was being closed. In this context we have both a "session" and a "DBSession". The session does get closed with session.close(), but this does not touch the "DBSession".
A better way (Solution):
engine = create_engine(settings.DATABASE_URL)
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)

@contextmanager
def session_scope():
    session = DBSession()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
This creates one engine (and its connection pool) for the lifetime of the app.
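If connection limits are still a concern, the engine's pool can be sized explicitly. A minimal sketch, reusing settings.DATABASE_URL from the snippets above; the specific numbers are illustrative, not recommendations:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine(
    settings.DATABASE_URL,
    pool_size=5,        # connections kept open in the pool
    max_overflow=10,    # extra connections allowed under burst load
    pool_recycle=3600,  # recycle idle connections after an hour
)
DBSession = sessionmaker(bind=engine)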

How can I better handle this Flask-SQLAlchemy commit/rollback?

I'm reviewing some old code I wrote and was looking at a shared commit function I had written to handle responses to the user on certain failures when attempting to commit changes to the database (such as deletes):
def _commit_to_database():
    """A shared function to make a commit to the database and handle
    exceptions if encountered.
    """
    flask.current_app.logger.info('Committing changes to database...')
    try:
        db.session.commit()
    except AssertionError as err:
        flask.abort(409, err)
    except (exc.IntegrityError, sqlite3.IntegrityError) as err:
        flask.abort(409, err.orig)
    except Exception as err:
        flask.abort(500, err)
    finally:
        db.session.rollback()
I think I understand my thought process: attempt the commit, and on certain failures trigger flask.abort to send the response back. However, I found that the database was left with an open session requiring a rollback when I did this, and adding the rollback in a finally block resolved that while still letting me use flask.abort.
The questions I have around my code are:
1) Is this a bug? Will the Flask-SQLAlchemy extension not close out the session as normal, and will calling rollback in the finally block (which triggers after the abort) affect successful commits?
2) If this is a bug, what should I be doing differently in handling the try-except-finally and the db session?
You need to roll back when an exception occurs and finally close the session:
def _commit_to_database():
    """A shared function to make a commit to the database and handle
    exceptions if encountered.
    """
    flask.current_app.logger.info('Committing changes to db...')
    try:
        db.session.commit()
    except AssertionError as err:
        db.session.rollback()
        flask.abort(409, err)
    except (exc.IntegrityError, sqlite3.IntegrityError) as err:
        db.session.rollback()
        flask.abort(409, err.orig)
    except Exception as err:
        db.session.rollback()
        flask.abort(500, err)
    finally:
        db.session.close()
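For context, a hypothetical call site for this helper; the route, the Item model, and the db handle are assumptions from a typical Flask-SQLAlchemy setup, not from the question:

@app.route('/items/<int:item_id>', methods=['DELETE'])
def delete_item(item_id):
    item = Item.query.get_or_404(item_id)
    db.session.delete(item)
    _commit_to_database()  # aborts with 409/500 on failure; closes the session either way
    return '', 204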

Is safe to pass session as parameter into inner function to avoid zombie sessions and pool overflow?

I am fairly new to SQLAlchemy and I wonder what the right style is for writing code with sessions, splitting SQLAlchemy queries across a couple of functions, and avoiding zombie sessions in case of any exception (to avoid overflowing the pool and making the server unresponsive).
So, my question is: is it okay to create the session in one function and pass it into another as a parameter, calling flush in the inner function and commit (with finally) in the outer one? Is this a safe way to do it, or is there a better way?
For example
class Fetcher(object):
    def main(self, name):
        try:
            session = Session()
            user = session.query(UserModel).filter(UserModel.name.like(name)).first()
            if user and user.active:
                relatives = self._fetch_relatives(session, user.id)
                user.active = utc_time()
            session.commit()
        except Exception as e:
            print e
            session.rollback()
        finally:
            session.close()

    def _fetch_relatives(self, session, id):
        relatives = []
        try:
            for r in session.query(RelativesModel).filter(RelativesModel.relative_id == id).all():
                relatives.append({'name': r.name, 'age': r.age})
                r.readed = utc_time()
            session.flush()
        except Exception as e:
            print e
            session.rollback()
        finally:
            session.close()
        return relatives
The best approach is to have just one outermost transactional scope for an entire operation. Where you demarcate this scope usually depends on how the application works, and there are some thoughts on this here.
For the example given, having just one outermost scope would probably look like this. Seeing that your object is called a "fetcher", I'd assume a typical use case in your application has to fetch more than one thing, so it's best to keep the scope of transactions and sessions outside of the scope of objects that work with specific parts of the database:
class Fetcher(object):
    def main(self, session, name):
        user = session.query(UserModel).filter(UserModel.name.like(name)).first()
        if user and user.active:
            relatives = self._fetch_relatives(session, user.id)
            user.active = utc_time()

    def _fetch_relatives(self, session, id):
        relatives = []
        for r in session.query(RelativesModel).filter(RelativesModel.relative_id == id).all():
            relatives.append({'name': r.name, 'age': r.age})
            r.readed = utc_time()
        session.flush()
        return relatives


def run_my_program():
    session = Session()
    try:
        f1 = Fetcher()
        f1.main(session, "somename")
        # work with other fetchers, etc.
        session.commit()
    except Exception as e:
        session.rollback()
    finally:
        session.close()
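The same outermost scope can also be expressed with a context manager such as the session_scope() shown in the earlier answers; a brief sketch, reusing Session and Fetcher from the code above:

def run_my_program():
    with session_scope() as session:  # commits on success, rolls back on exception
        f1 = Fetcher()
        f1.main(session, "somename")
        # work with other fetchers, etc.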

Do we need to do session.begin() explicitly?

Specifically, do I need to call begin after doing a commit or rollback? I saw suggestions that a new session always starts in the begin state, but I was wondering about auto-committed transactions happening when a session is begun.
When must I issue a begin? Will multiple begins in the same session behave the same as in a MySQL terminal?
I have cases like these (see the comments):
--1 A method that does transactions in a loop:

for ...:  # EACH ONE DESERVES TO HAVE ITS OWN TRANSACTION
    session.begin()
    for ....:
        session.execute("insert into...")
    session.commit()
--2 A function that calls another function in the same session:

def f1():  # can be done standalone
    session = Session()
    session.begin()
    ...  # do stuff
    session.commit()

def f2():
    session = Session()
    session.begin()
    a = session.execute("select...")
    if stuff_not_fine():
        session.rollback()  # KILL OFF THE CURRENT TRANSACTION
        f1()
        session.begin()  # CONTINUE WHERE IT LEFT OFF
    a = session.execute("select...")
    ...  # do rest of stuff
A SQLAlchemy session can also be used as a context manager, so you can do:
session = Session()
with session as cursor:
    # do stuff
In order to roll back, you might introduce an exception which, if raised, causes the context manager to roll back the transaction. You should remember to catch the exception afterwards, however.
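As a concrete sketch of this idea, assuming SQLAlchemy 1.4+, where Session.begin() returns a context manager that commits on success and rolls back if an exception escapes (the engine URL is a placeholder for the example):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # placeholder URL, an assumption for the example

with Session(engine) as session:      # closes the session on exit
    with session.begin():             # commits on success, rolls back on exception
        session.execute(text("SELECT 1"))
        # raising anything inside this block rolls the transaction back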

Trying to catch integrity error with SQLAlchemy

I'm having problems trying to catch an error. I'm using Pyramid/SQLAlchemy and made a sign-up form with email as the primary key. The problem is that when a duplicate email is entered it raises an IntegrityError, so I'm trying to catch that error and provide a message, but no matter what I do I can't catch it; the error keeps appearing.
try:
    new_user = Users(email, firstname, lastname, password)
    DBSession.add(new_user)
    return HTTPFound(location=request.route_url('new'))
except IntegrityError:
    message1 = "Yikes! Your email already exists in our system. Did you forget your password?"
I get the same message when I tried except exc.SQLAlchemyError (although I want to catch specific errors and not a blanket catch all). I also tried exc.IntegrityError but no luck (although it exists in the API).
Is there something wrong with my Python syntax, or is there something I need to do special in SQLAlchemy to catch it?
I don't know how to solve this problem, but I have a few ideas of what could be causing it. Maybe the try statement isn't failing but succeeding, because SQLAlchemy raises the exception itself while Pyramid is generating the view, so the except IntegrityError: never gets activated. Or, more likely, I'm catching this error completely wrong.
In Pyramid, if you've configured your session (which the scaffold does for you automatically) to use the ZopeTransactionExtension, then the session is not flushed/committed until after the view has executed. If you want to catch any SQL errors yourself in your view, you need to force a flush to send the SQL to the engine. DBSession.flush() should do it after the add(...).
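A sketch of what that looks like in the view from the question; only the DBSession.flush() call and the IntegrityError import are added here:

from sqlalchemy.exc import IntegrityError

try:
    new_user = Users(email, firstname, lastname, password)
    DBSession.add(new_user)
    DBSession.flush()  # forces the INSERT now, so IntegrityError surfaces inside the try
    return HTTPFound(location=request.route_url('new'))
except IntegrityError:
    message1 = "Yikes! Your email already exists in our system. Did you forget your password?"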
Update
I'm updating this answer with an example of a savepoint just because there are very few examples around of how to do this with the transaction package.
def create_unique_object(db, max_attempts=3):
    while True:
        sp = transaction.savepoint()
        try:
            obj = MyObject()
            obj.identifier = uuid.uuid4().hex
            db.add(obj)
            db.flush()
        except IntegrityError:
            sp.rollback()
            max_attempts -= 1
            if max_attempts < 1:
                raise
        else:
            return obj

obj = create_unique_object(DBSession)
obj = create_unique_object(DBSession)
Note that even this is susceptible to duplicates between transactions if no table-level locking is used, but it at least shows how to use a savepoint.
What you need to do is catch a general exception and output its class; then you can make the exception more specific.
except Exception as ex:
    print ex.__class__
There might be no database operations until DBSession.commit(), so the IntegrityError is raised later in the stack, after the controller code containing the try/except has already returned.
This is how I do it.
import sqlalchemy.exc
from contextlib import contextmanager

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

def create_user(email, firstname, lastname, password):
    new_user = Users(email, firstname, lastname, password)
    try:
        with session_scope() as session:
            session.add(new_user)
    except sqlalchemy.exc.IntegrityError as e:
        pass
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
Edit: The edited answer above is a better way of doing this, using rollback.
--
If you want to handle transactions in the middle of a Pyramid application, or anywhere an automatic transaction commit is performed at the end of a sequence, there's no magic that needs to happen.
Just remember to start a new transaction if the previous transaction has failed.
Like this:
def my_view(request):
    ...  # Do things
    if success:
        try:
            instance = self._instance(**data)
            DBSession.add(instance)
            transaction.commit()
            return {'success': True}
        except IntegrityError as e:  # <--- Oh no! Duplicate unique key
            transaction.abort()
            transaction.begin()  # <--- Start new transaction
            return {'success': False}
Notice that calling .commit() on a successful transaction is fine, so it is not necessary to start a new transaction after a successful call.
You only need to abort the transaction and start a new one if the transaction is in a failed state.
(If transaction wasn't such a poc, you could use a savepoint and roll back to the savepoint rather than starting a new transaction; sadly, that is not possible, as attempting a commit invalidates a known previous savepoint. Great stuff huh?) (edit: <--- Turns out I'm wrong about that...)
Catch the exception in a finally block after flushing the session.
try:
    new_user = Users(email, firstname, lastname, password)
    DBSession.add(new_user)
    return HTTPFound(location=request.route_url('new'))
finally:
    try:
        DBSession.flush()
    except IntegrityError:
        message1 = "Yikes! Your email already exists in our system. Did you forget your password?"
