Retry on deadlock for MySQL / SQLAlchemy - python

I have searched for quite some time now and can't find a solution to my problem. We are using SQLAlchemy in conjunction with MySQL for our project, and we regularly hit the dreaded error:
1213, 'Deadlock found when trying to get lock; try restarting transaction'.
We would like to try to restart the transaction at most three times in this case.
I have started to write a decorator that does this, but I don't know how to save the session state before the failure and retry the same transaction afterwards. (SQLAlchemy requires a rollback whenever an exception is raised.)
My work so far:
def retry_on_deadlock_decorator(func):
    lock_messages_error = ['Deadlock found', 'Lock wait timeout exceeded']

    @wraps(func)
    def wrapper(*args, **kwargs):
        attempt_count = 0
        while attempt_count < settings.MAXIMUM_RETRY_ON_DEADLOCK:
            try:
                return func(*args, **kwargs)
            except OperationalError as e:
                if any(msg in e.message for msg in lock_messages_error) \
                        and attempt_count <= settings.MAXIMUM_RETRY_ON_DEADLOCK:
                    logger.error('Deadlock detected. Trying sql transaction once more. Attempts count: %s'
                                 % (attempt_count + 1))
                else:
                    raise
            attempt_count += 1
    return wrapper

You can't really do that with the Session from the outside. Session would have to support this internally. It would involve saving a lot of private state, so this may not be worth your time.
I completely ditched most ORM stuff in favour of the lower level SQLAlchemy Core interface. Using that (or even any dbapi interface) you can trivially use your retry_on_deadlock_decorator decorator (see question above) to make a retry-aware db.execute wrapper.
@retry_on_deadlock_decorator
def deadlock_safe_execute(db, stmt, *args, **kw):
    return db.execute(stmt, *args, **kw)
And instead of
db.execute("UPDATE users SET active=0")
you do
deadlock_safe_execute(db, "UPDATE users SET active=0")
which will retry automatically if a deadlock happens.
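For completeness, here is a minimal, self-contained sketch of how the retry decorator and the Core-level wrapper could fit together. The retry limit, the logger, and the short delay between attempts are assumptions for illustration, not part of the original code:
import logging
import time
from functools import wraps

from sqlalchemy.exc import OperationalError

logger = logging.getLogger(__name__)
MAX_RETRIES = 3    # assumed, stands in for settings.MAXIMUM_RETRY_ON_DEADLOCK
RETRY_DELAY = 0.5  # assumed back-off in seconds between attempts

def retry_on_deadlock_decorator(func):
    lock_messages_error = ['Deadlock found', 'Lock wait timeout exceeded']

    @wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                return func(*args, **kwargs)
            except OperationalError as e:
                if attempt < MAX_RETRIES and any(msg in str(e) for msg in lock_messages_error):
                    logger.error('Deadlock detected, retrying (attempt %s of %s)', attempt, MAX_RETRIES)
                    time.sleep(RETRY_DELAY)  # let the competing transaction finish
                else:
                    raise
    return wrapper

@retry_on_deadlock_decorator
def deadlock_safe_execute(db, stmt, *args, **kw):
    # db can be a Core Connection or anything else that exposes .execute()
    return db.execute(stmt, *args, **kw)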

Did you use code like this?
try:
    perform table transaction
    break
except:
    rollback
    delay
    try again to perform table transaction
The only way to truly handle deadlocks is to write your code to expect
them. This generally isn't very difficult if your database code is
well written. Often you can just put a try/catch around the query
execution logic and look for a deadlock when errors occur. If you
catch one, the normal thing to do is just attempt to execute the
failed query again.
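To make that concrete, a minimal sketch of such a retry loop might look like the following; the retry limit, the delay, and the perform_table_transaction() helper are illustrative placeholders, not from the original answer:
import time

from sqlalchemy.exc import OperationalError

MAX_RETRIES = 3  # illustrative

for attempt in range(MAX_RETRIES):
    try:
        perform_table_transaction(conn)  # hypothetical function doing the table updates
        conn.commit()
        break
    except OperationalError as e:
        conn.rollback()                  # the deadlock aborted the transaction, so roll back first
        if 'Deadlock found' not in str(e) or attempt == MAX_RETRIES - 1:
            raise
        time.sleep(0.5)                  # small delay before trying again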
Useful links:
How to Cope with Deadlocks
Understanding Autocommit
Property MySQLConnection.autocommit

Related

SQLAlchemy use Transactions for E2E Testing

I am writing E2E tests for our software and would like to figure out why the rollback call does not roll the DB back to the state it was in when the test started.
I use a decorator for my pytest test functions.
The issue I get is that data that I write to the DB during the tests persists, even though I call rollback() in the final statement. This indicates that the transaction is not set up, or that SQLAlchemy is doing something else in the background.
I see SQLAlchemy has the SAVEPOINT feature but I am not sure if this is what I really need. I think my request is pretty simple yet the framework obfuscates it. Or simply that I am not too experienced with it...
Note - the functions tested can have multiple commit calls...
def get_postgres_db():
    v2_session = sessionmaker(
        autocommit=False,
        autoflush=False,
        bind=v2_engine
    )
    try:
        yield v2_session
    finally:
        v2_session.close()

def postgres_test_decorator(test_function):
    """
    Decorator to open db connection and roll back regardless of test outcome
    Can be ported into pytest later
    """
    def the_decorator(*args, **kwargs):
        try:
            postgres_session = list(get_postgres_db())[0]
            # IN MY SQL MIND I WOULD LIKE TO DO HERE
            # BEGIN TRANSACTION
            test_function(postgres_session)
        finally:
            # THIS SHOULD ROLLBACK TO ORIGINAL STATE
            # ROLLBACK
            postgres_session.rollback()
    return the_decorator
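Since the functions under test call commit() themselves, rolling back the session afterwards is not enough: every commit() ends the transaction the session had open, so there is nothing left to roll back. The usual way to get the behaviour described above is the recipe from the SQLAlchemy docs for joining a session into an external transaction: open a connection, begin an outer transaction on it, bind the session to that connection, run the test inside a SAVEPOINT, and roll the outer transaction back at the end. A minimal sketch, assuming the v2_engine from the question and an older 1.x-style setup (newer SQLAlchemy versions can replace the event listener with Session(..., join_transaction_mode="create_savepoint")):
from sqlalchemy import event
from sqlalchemy.orm import sessionmaker

def postgres_test_decorator(test_function):
    def the_decorator(*args, **kwargs):
        connection = v2_engine.connect()
        outer_tx = connection.begin()              # outer transaction owned by the test
        session = sessionmaker(bind=connection)()  # session runs on that same connection
        session.begin_nested()                     # SAVEPOINT, so commit() inside the test stays contained

        @event.listens_for(session, "after_transaction_end")
        def restart_savepoint(sess, transaction):
            # whenever the code under test commits or rolls back, open a fresh SAVEPOINT
            if transaction.nested and not transaction._parent.nested:
                sess.begin_nested()

        try:
            test_function(session)
        finally:
            session.close()
            outer_tx.rollback()                    # discard everything the test did
            connection.close()
    return the_decorator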

Recover from PendingRollbackError and allow subsequent queries

We have a pyramid web application.
We use SQLAlchemy#1.4 with Zope transactions.
In our application, it is possible for an error to occur during flush as described here which causes any subsequent usage of the SQLAlchemy session to throw a PendingRollbackError. The error which occurs during a flush is unintentional (a bug), and is raised to our exception handling view... which tries to use data from the SQLAlchemy session, which then throws a PendingRollbackError.
Is it possible to "recover" from a PendingRollbackError if you have not framed your transaction management correctly? The SQLAlchemy documentation says that to avoid this situation you essentially "just need to do things the right way". Unfortunately, this is a large codebase, and developers don't always follow correct transaction management. The issue is also complicated if savepoints/nested transactions are used.
def some_view():
    # constraint violation
    session.add_all([Foo(id=1), Foo(id=1)])
    session.commit()  # Error is raised during flush
    return {'data': 'some data'}

def exception_handling_view():  # Wired in via pyramid framework, error ^ enters here.
    session.query(... does a query to get some data)  # This throws a `PendingRollbackError`
I am wondering if we can do something like the below, but don't understand pyramid + SQLAlchemy + Zope transactions well enough to know the implications (when considering the potential for nested transactions etc).
def exception_handling_view():  # Wired in via pyramid framework, error ^ enters here.
    def _query():
        session.query(... does a query to get some data)

    try:
        _query()
    except PendingRollbackError:
        session.rollback()
        _query()
Instead of trying to execute your query, just try to get the connection:
def exception_handling_view():
    try:
        _ = session.connection()
    except PendingRollbackError:
        session.rollback()
    session.query(...)
session.rollback() only rolls back the innermost transaction, as is usually expected — assuming nested transactions are used intentionally via the explicit session.begin_nested().
You don't have to rollback parent transactions, but if you decide to do that, you can:
while session.registry().in_transaction():
    session.rollback()
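Putting the two pieces together, the exception view could defensively probe and reset the session before querying. This is only a sketch of the approach above; it assumes session is the scoped session used elsewhere in the application (so session.registry() is available), and SomeModel is a placeholder:
from sqlalchemy.exc import PendingRollbackError

def exception_handling_view():
    try:
        # session.connection() raises PendingRollbackError if an earlier
        # flush failed and the transaction was never rolled back
        session.connection()
    except PendingRollbackError:
        # unwind the innermost and any parent transactions
        while session.registry().in_transaction():
            session.rollback()
    return session.query(SomeModel).all()  # now safe to query; SomeModel is a placeholder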

simple example of working with neo4j python driver?

Is there a simple example of working with the neo4j python driver?
How do I just pass a Cypher query to the driver to run and return a cursor?
If I'm reading, for example, this, it seems the demo has a class wrapper, with a private member function that I pass to session.write_transaction:
session.write_transaction(self._create_and_return_greeting, ...
That then gets called with a transaction as its first parameter...
def _create_and_return_greeting(tx, message):
which in turn runs the Cypher:
result = tx.run("CREATE (a:Greeting) "
This seems 10X more complicated than it needs to be.
I did just try a simpler:
def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        try:
            result = session.run(query, **kwargs)
            return result.data()
But this results in a socket error on the query, probably because the session goes out of scope?
[dfcx/__init__] ERROR | Underlying socket connection gone (_ssl.c:2396)
[dfcx/__init__] ERROR | Failed to write data to connection IPv4Address(('neo4j-core-8afc8558-3.production-orch-0042.neo4j.io', 7687)) (IPv4Address(('34.82.120.138', 7687)))
Also I can't return a cursor/iterator, just the data()
When the session goes out of scope, the query result seems to die with it.
If I manually open and close a session, then I'd have the same problems?
Python must be the most popular language this DB is used with; does everyone use a different driver?
Py2neo seems cute, but it is completely lacking ORM wrapper functions for most of the Cypher language features, so you have to drop down to raw Cypher anyway. And I'm not sure it supports **kwargs argument interpolation in the same way.
I guess that big raise should help iron out some kinks :D
Slightly longer version trying to get a working DB wrapper:
def neo_connect() -> Union[neo4j.BoltDriver, neo4j.Neo4jDriver]:
    global raw_driver
    if raw_driver:
        # print('reuse driver')
        return raw_driver
    neoconfig = NEOCONFIG
    raw_driver = neo4j.GraphDatabase.driver(
        neoconfig['url'], auth=(
            neoconfig['user'], neoconfig['pass']))
    if raw_driver is None:
        raise BaseException("cannot connect to neo4j")
    else:
        return raw_driver

def raw_query(query, **kwargs):
    # just get data, no cursor
    neodriver = neo_connect()
    session = neodriver.session()
    # logging.info('neoquery %s', query)
    # with neodriver.session() as session:
    try:
        result = session.run(query, **kwargs)
        data = result.data()
        return data
    except neo4j.exceptions.CypherSyntaxError as err:
        logging.error('neo error %s', err)
        logging.error('failed query: %s', query)
        raise err
    # finally:
    #     logging.info('close session')
    #     session.close()
Update: someone pointed me to this example, which is another way to use the tx wrapper.
https://github.com/neo4j-graph-examples/northwind/blob/main/code/python/example.py#L16-L21
def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        try:
            result = session.run(query, **kwargs)
            return result.data()
This is perfectly fine and works as intended on my end.
The error you're seeing indicates a connection problem, so there must be something going on between the server and the driver that's outside the driver's control.
Also, please note that there is a difference between these ways of running a query:
with driver.session() as session:
    result = session.run("<SOME CYPHER>")

def work(tx):
    result = tx.run("<SOME CYPHER>")

with driver.session() as session:
    session.write_transaction(work)
The latter one might be three lines longer, and the team working on the drivers has collected some feedback regarding this. However, there are more things to consider here. Firstly, changing the API surface is something that needs careful planning and cannot be done in, say, a patch release. Secondly, there are technical hurdles to overcome. Here are the semantics, anyway:
Auto-commit transaction. Runs only that query as one unit of work.
If you run a new auto-commit transaction within the same session, the previous result will buffer all available records for you (depending on the query, this will consume a lot of memory). This can be avoided by calling result.consume(). However, if the session goes out of scope, the result will be consumed automatically. This means you cannot extract further records from it. Lastly, any error will be raised and needs handling in the application code.
Managed transaction. Runs whatever unit of work you want within that function. A transaction is implicitly started and committed (unless you rollback explicitly) around the function.
If the transaction ends (end of function or rollback), the result will be consumed and become invalid. You'll have to extract all records you need before that.
This is the recommended way of using the driver because it will not raise all errors but handle some internally (where appropriate) and retry the work function (e.g. if the server is only temporarily unavailable). Since the function might be executed multiple times, you must make sure it's idempotent.
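Applied to the raw_query() helper from the question, a managed-transaction version might look like the sketch below. It assumes the neo_connect() helper from the question and a 4.x driver (in driver 5.x the methods are named execute_read/execute_write instead of read_transaction/write_transaction):
def raw_query(query, **kwargs):
    # managed transaction: the driver can retry `work` on transient failures
    def work(tx):
        result = tx.run(query, **kwargs)
        return result.data()  # materialise records before the transaction closes

    neodriver = neo_connect()  # cached driver from the question
    with neodriver.session() as session:
        return session.read_transaction(work)  # use write_transaction for queries that modify data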
Closing thoughts:
Please remember that Stack Overflow is monitored on a best-effort basis, and what can be perceived as hasty comments may get in the way of getting helpful answers to your questions.

Preserve an aborted transaction when SQLAlchemy raises a ProgrammingError

I have a slightly unusual problem with transaction state and error handling in SQLAlchemy. The short version: is there any way of preserving a transaction when SQLAlchemy raises a ProgrammingError and aborts it?
Background
I'm working on an integration test suite for a legacy codebase. Right now, I'm designing a set of fixtures that will allow us to run all tests inside transactions, inspired by the SQLAlchemy documentation. The general paradigm involves opening a connection, starting a transaction, binding a session to that connection, and then mocking out most database access methods so that they make use of that transaction. (To get a sense of what this looks like, see the code provided in the docs link above, including the note at the end.) The goal is to allow ourselves to run methods from the codebase that perform a lot of database updates in the context of a test, with the assurance that any side effects that happen to alter the test database will get rolled back after the test has completed.
My problem is that the code often relies on handling DBAPI errors to accomplish control flow when running queries, and those errors automatically abort transactions (per the psycopg2 docs). This poses a problem, since I need to preserve the work that has been done in that transaction up to the point that the error is raised, and I need to continue using the transaction after the error handling is done.
Here's a representative method that uses error handling for control flow:
from api.database import engine
from sqlalchemy.exc import ProgrammingError

def entity_count():
    """
    Count the entities in a project.
    """
    get_count = '''
        SELECT COUNT(*) AS entity_count FROM entity_browser
    '''
    with engine.begin() as conn:
        try:
            count = conn.execute(get_count).first().entity_count
        except ProgrammingError:
            count = 0
    return count
In this example, the error handling provides a quick way of determining if the table entity_browser exists: if not, Postgres will throw an error that gets caught at the DBAPI level (psycopg2) and passed up to SQLAlchemy as a ProgrammingError.
In the tests, I mock out engine.begin() so that it always returns the connection with the ongoing transaction that was established in the test setup. Unfortunately, this means that when the code continues execution after SQLAlchemy has raised a ProgrammingError and psycopg2 has aborted the transaction, SQLAlchemy will raise an InternalError the next time a database query runs using the open connection, complaining that the transaction has been aborted.
Here's a sample test exhibiting this behavior:
import pytest
import sqlalchemy as sa

def test_entity_count(session):
    """
    Test the `entity_count` method.

    `session` is a fixture that sets up the transaction and mocks out
    database access, returning a Flask-SQLAlchemy `scoped_session` object
    that we can use for queries.
    """
    # Make a change to a table that we can observe later
    session.execute('''
        UPDATE users
        SET name = 'in a test transaction'
        WHERE id = 1
    ''')
    # Drop `entity_browser` in order to raise a `ProgrammingError` later
    session.execute('''DROP TABLE entity_browser''')
    # Run the `entity_count` method, making sure that it raises an error
    with pytest.raises(sa.exc.ProgrammingError):
        count = entity_count()
        assert count == 0
    # Make sure that the changes we made earlier in the test still exist
    altered_name = session.execute('''
        SELECT name
        FROM users
        WHERE id = 1
    ''')
    assert altered_name == 'in a test transaction'
Here's the type of output I get:
>       altered_name = session.execute('''
            SELECT name
            FROM users
            WHERE id = 1
        ''')

[... traceback history ...]

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       sqlalchemy.exc.InternalError: (psycopg2.InternalError) current transaction is
        aborted, commands ignored until end of transaction block
Attempted solutions
My first instinct was to try to interrupt the error handling and force a rollback using SQLAlchemy's handle_error event listener. I added a listener into the test fixture that would roll back the raw connection (since SQLAlchemy Connection instances have no rollback API, as far as I understand it):
@sa.event.listens_for(connection, 'handle_error')
def raise_error(context):
    dbapi_conn = context.connection.connection
    dbapi_conn.rollback()
This successfully keeps the transaction open for further use, but ends up rolling back all of the previous changes made in the test. Sample output:
>       assert altered_name == 'in a test transaction'
E       AssertionError
Clearly, rolling back the raw connection is too aggressive of an approach. Thinking that I might be able to roll back to the last savepoint, I tried rolling back the scoped session, which has an event listener attached to it that automatically opens up a new nested transaction when a previous one ends. (See the note at the end of the SQLAlchemy doc on transactions in tests for a sample of what this looks like.)
Thanks to the mocks set up in the session fixture, I can import the scoped session directly into the event listener and roll it back:
@sa.event.listens_for(connection, 'handle_error')
def raise_error(context):
    from api.database import db
    db.session.rollback()
However, this approach also raises an InternalError on the next query. It seems that it doesn't actually roll back the transaction to the satisfaction of the underlying cursor.
Summary question
Is there any way of preserving the transaction after a ProgrammingError gets raised? On a more abstract level, what is happening when psycopg2 "aborts" the transaction, and how can I work around it?
The root of the problem is that you're hiding the exception from the context manager. You catch the ProgrammingError too soon and so the with-statement never sees it. Your entity_count() should be:
def entity_count():
    """
    Count the entities in a project.
    """
    get_count = '''
        SELECT COUNT(*) AS entity_count FROM entity_browser
    '''
    try:
        with engine.begin() as conn:
            count = conn.execute(get_count).first().entity_count
    except ProgrammingError:
        count = 0
    return count
And then if you provide something like
@contextmanager
def fake_begin():
    """ Begin a nested transaction and yield the global connection.
    """
    with connection.begin_nested():
        yield connection
as the mocked engine.begin(), the connection stays usable. But @JL Peyret raises a good point about the logic of your test. Engine.begin() usually¹ provides a new connection with an armed transaction from the pool, so your session and entity_count() probably shouldn't even be using the same connection.
¹ Depends on pool configuration.
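For reference, here is one way fake_begin could be wired into the test fixture. The unittest.mock patching, the connection fixture, and the make_scoped_session() helper are assumptions about the test setup, not part of the answer:
from contextlib import contextmanager
from unittest import mock

import pytest

from api.database import engine  # the same engine entity_count() imports

@pytest.fixture
def session(connection):
    """Assumed fixture: `connection` carries the outer test transaction."""

    @contextmanager
    def fake_begin():
        # hand the code under test a SAVEPOINT on the test connection instead of
        # a fresh pooled connection, so a ProgrammingError only aborts the savepoint
        with connection.begin_nested():
            yield connection

    with mock.patch.object(engine, 'begin', fake_begin):
        yield make_scoped_session(connection)  # hypothetical helper binding a scoped_session to `connection`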

Using decorators for database access with psycopg2

I am constructing a model that does large parts of its calculations in a Postgresql database (for performance reasons). It looks somewhat like this:
import psycopg2

def sql_func1(conn):
    # prepare some data, crunch some numbers, etc.
    curs = conn.cursor()
    curs.execute("SOME SQL COMMAND")
    conn.commit()  # commit happens on the connection; cursors have no commit()
    curs.close()

if __name__ == "__main__":
    connection = psycopg2.connect(dbname='name', user='user', password='pass',
                                  host='localhost', port=1234)
    sql_func1(connection)
    sql_func2(connection)
    sql_func3(connection)
    connection.close()
The script uses around 30 individual functions like sql_func1. Obviously it is a little awkward to manage the connection and cursor in each function all the time. Thus I started using a decorator as described here. Now I can simply wrap sql_func1 with a decorator @db_connect and pass the connection from there. However, that means I am opening and closing the connection all the time, which is not good practice either. The psycopg2 FAQ says:
Creating a connection can be slow (think of SSL over TCP) so the best
practice is to create a single connection and keep it open as long as
required. It is also good practice to rollback or commit frequently
(even after a single SELECT statement) to make sure the backend is
never left “idle in transaction”. See also psycopg2.pool for
lightweight connection pooling.
Could you please give me some insight into what would be an ideal practice in my case? Should I rather use a decorator that passes the cursor object instead of the connection? If so, please provide a code sample for the decorator. As I am rather new to programming, please also let me know if you think my overall approach is wrong.
What about storing the connection in a global variable without closing it in the finally block? Something like this (according to the example you linked):
cnn = None

def with_connection(f):
    def with_connection_(*args, **kwargs):
        global cnn
        if not cnn:
            cnn = psycopg2.connect(DSN)
        try:
            rv = f(cnn, *args, **kwargs)
        except Exception:
            cnn.rollback()
            raise
        else:
            cnn.commit()  # or maybe not
        return rv
    return with_connection_
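Since the question also asks about a decorator that passes a cursor rather than the connection, a minimal sketch along the same lines might look like this. The single module-level connection and the DSN value are assumptions carried over from the example above:
import psycopg2

DSN = "dbname=name user=user password=pass host=localhost port=1234"  # assumed
cnn = None

def with_cursor(f):
    """Open a cursor per call on one shared connection; commit on success, roll back on error."""
    def with_cursor_(*args, **kwargs):
        global cnn
        if not cnn:
            cnn = psycopg2.connect(DSN)
        curs = cnn.cursor()
        try:
            rv = f(curs, *args, **kwargs)
        except Exception:
            cnn.rollback()
            raise
        else:
            cnn.commit()
            return rv
        finally:
            curs.close()
    return with_cursor_

@with_cursor
def sql_func1(curs):
    curs.execute("SOME SQL COMMAND")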
