Recover from PendingRollbackError and allow subsequent queries - python

We have a pyramid web application.
We use SQLAlchemy 1.4 with Zope transactions.
In our application, it is possible for an error to occur during flush as described here which causes any subsequent usage of the SQLAlchemy session to throw a PendingRollbackError. The error which occurs during a flush is unintentional (a bug), and is raised to our exception handling view... which tries to use data from the SQLAlchemy session, which then throws a PendingRollbackError.
Is it possible to "recover" from a PendingRollbackError if you have not framed your transaction management correctly? The SQLAlchemy documentation says that to avoid this situation you essentially "just need to do things the right way". Unfortunately, this is a large codebase, and developers don't always follow correct transaction management. This issue is also complicated if savepoints/nested transactions are used.
def some_view():
    # constraint violation
    session.add_all([Foo(id=1), Foo(id=1)])
    session.commit()  # Error is raised during flush
    return {'data': 'some data'}

def exception_handling_view():  # Wired in via pyramid framework, error ^ enters here.
    session.query(... does a query to get some data)  # This throws a `PendingRollbackError`
I am wondering if we can do something like the below, but don't understand pyramid + SQLAlchemy + Zope transactions well enough to know the implications (when considering the potential for nested transactions etc).
def exception_handling_view():  # Wired in via pyramid framework, error ^ enters here.
    def _query():
        session.query(... does a query to get some data)

    try:
        _query()
    except PendingRollbackError:
        session.rollback()
        _query()

Instead of trying to execute your query, just try to get the connection:
def exception_handling_view():
    try:
        _ = session.connection()
    except PendingRollbackError:
        session.rollback()
    session.query(...)
session.rollback() only rolls back the innermost transaction, as is usually expected — assuming nested transactions are used intentionally via the explicit session.begin_nested().
You don't have to rollback parent transactions, but if you decide to do that, you can:
while session.registry().in_transaction():
    session.rollback()
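Putting the pieces together, a minimal sketch of what the exception view could look like (assuming `session` is the same scoped session used above and `SomeModel` is a placeholder model; adapt it to however pyramid_tm/zope manages the transaction in your application):

from sqlalchemy.exc import PendingRollbackError

def exception_handling_view():
    try:
        session.connection()            # cheap probe; raises if the transaction is doomed
    except PendingRollbackError:
        while session.registry().in_transaction():
            session.rollback()          # unwind savepoints and the outer transaction
    # Safe to query again on a fresh transaction
    data = [row.name for row in session.query(SomeModel).all()]
    return {'error_details': data}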

How to use self.env.commit() properly?

Could someone please explain to me how self.env.cr.commit() works, when to use it and some good practices?
From the Odoo documentation it seems the use of cr.commit() is very dangerous. This is my first time using it and I am not sure how to use it properly for my use case.
Edit:
More information for my use case: I am creating shipments through a shipping provider API. Let's say my API call is successful and I have created the shipment, but during handling of the response I have to raise a UserError for some reason and my changes are rolled back. So now the state of the shipment is different in Odoo and on the shipping provider's server, which is unacceptable.
So if I am calling the method create_dhl_shipment() and the flag variable is True (an error occurred during the last API call), then I would like to delete the original shipment and create a new one.
And my problem is: how do I make a change in the database and keep it from being rolled back?
While searching the internet, I came across cr.commit(), but the Odoo documentation really discourages using it.
A very simplified example:
class StockPickingInherited(models.Model):
    _inherit = 'stock.picking'

    remnant_shipment = fields.Boolean("Possible remnant shipment")
    packages = fields.One2many("stock.shipment.package", "picking_id")

    def create_dhl_shipment(self):
        response_from_shipping_provider = requests.get("API URL")
        if response_from_shipping_provider.status_code != 200:
            if not self.remnant_shipment:
                raise UserError("Shipment creation failed")
            else:
                self.write({"packages": [(5, 0, 0)]})
                self.env.cr.commit()  # write data into the db and keep the change from being rolled back due to raising UserError
                raise UserError("Shipment creation failed")
Am I doing it right? Are there some potential dangers?
It is true that self.env.cr.commit() should be used sparingly. However, there are some legitimate use cases for it. For example:
@api.model
def some_cron_job(self):
    for record in self.env[...].search(...):
        record.do_some_process()
        self.env.cr.commit()
The above is fine because you are doing some process on a batch of records, and to avoid the cron job redoing everything over and over because of an error on one of the records, you can commit after each record has been processed. (PS: the above could be made safer with try...except..., or by marking a record as "failed", for example. This can be done in conjunction with commit().)
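For instance, a rough sketch of that safer variant (the model name, search domain, and the state/do_some_process fields are placeholders, not from the original answer):

@api.model
def some_cron_job(self):
    for record in self.env['my.model'].search([('state', '=', 'pending')]):
        try:
            record.do_some_process()
            self.env.cr.commit()       # keep this record's finished work
        except Exception:
            self.env.cr.rollback()     # discard only the failed record's changes
            record.write({'state': 'failed'})
            self.env.cr.commit()       # persist the failure marker for later inspection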
In your use case you want to display a message to the user, but your problem is that once you raise, the transaction will roll back, so to counter this you call commit().
I have had similar situations and I sometimes find that using an @api.onchange works well for showing a message to the user.
@api.onchange('your_field')
def onchange_your_field(self):
    if self.your_field > 100:
        raise UserError("Thanks for entering the field")
However, I am no longer a fan of that approach since Odoo has gotten rid of most @api.onchange in favour of computed fields. (There are good reasons for the move, so try and avoid it too.)
Luckily there is a new way to do this, introduced in Odoo Version 13.0.
def some_method(self):
    # Do something
    self.write({"something": False})
    # Display message
    return {
        'type': 'ir.actions.client',
        'tag': 'display_notification',
        'params': {
            'title': title,
            'message': message,
            'sticky': False,
        }
    }
The above is taken from here in the Odoo repo where a message is displayed to the user if the mail server credentials are correct.
The following rule is taken from the Odoo guidelines (never commit the transaction):
You should NEVER call cr.commit() yourself, UNLESS you have created your own database cursor explicitly! And the situations where you need to do that are exceptional! And by the way, if you did create your own cursor, then you need to handle error cases and proper rollback, as well as properly close the cursor when you're done with it.
In the following example, we create a new cursor to avoid rollback that could be caused by an upper method:
registry = odoo.registry(self.env.cr.dbname)
with registry.cursor() as cr:
    env = api.Environment(cr, SUPERUSER_ID, {})
For more details check the well-documented Odoo guidelines
You can find an example in the auth_ldap module.
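Applied to the shipment case from the question, a rough sketch along those lines might look like this (the field and model names come from the question; the odoo, api, and SUPERUSER_ID imports are assumptions, and note that a fresh cursor only sees data that has already been committed):

import odoo
from odoo import api, SUPERUSER_ID

def create_dhl_shipment(self):
    response = requests.get("API URL")
    if response.status_code != 200:
        if self.remnant_shipment:
            # Do the cleanup in its own cursor so the rollback triggered by
            # the UserError below cannot undo it.
            registry = odoo.registry(self.env.cr.dbname)
            with registry.cursor() as cr:
                env = api.Environment(cr, SUPERUSER_ID, {})
                env['stock.picking'].browse(self.id).write({"packages": [(5, 0, 0)]})
                # the cursor is committed and closed when the `with` block exits cleanly
        raise UserError("Shipment creation failed")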

simple example of working with neo4j python driver?

Is there a simple example of working with the neo4j python driver?
How do I just pass a Cypher query to the driver to run and return a cursor?
If I'm reading, for example, this, it seems the demo has a class wrapper with a private member function that I pass to session.write_transaction:
session.write_transaction(self._create_and_return_greeting, ...
That then gets called with a transaction as its first parameter...
def _create_and_return_greeting(tx, message):
which in turn runs the Cypher:
result = tx.run("CREATE (a:Greeting) "
This seems 10X more complicated than it needs to be.
I did just try a simpler:
def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        try:
            result = session.run(query, **kwargs)
            return result.data()
But this results in a socket error on the query, probably because the session goes out of scope?
[dfcx/__init__] ERROR | Underlying socket connection gone (_ssl.c:2396)
[dfcx/__init__] ERROR | Failed to write data to connection IPv4Address(('neo4j-core-8afc8558-3.production-orch-0042.neo4j.io', 7687)) (IPv4Address(('34.82.120.138', 7687)))
Also I can't return a cursor/iterator, just the data()
When the session goes out of scope, the query result seems to die with it.
If I manually open and close a session, then I'd have the same problems?
Python must be the most popular language this DB is used with; does everyone use a different driver?
Py2neo seems cute, but completely lacking ORM wrapper functions for most of the Cypher language features, so you have to drop down to raw Cypher anyway. And I'm not sure it supports **kwargs argument interpolation in the same way.
I guess that big raise should help iron out some kinks :D
Slightly longer version trying to get a working DB wrapper:
def neo_connect() -> Union[neo4j.BoltDriver, neo4j.Neo4jDriver]:
    global raw_driver
    if raw_driver:
        # print('reuse driver')
        return raw_driver
    neoconfig = NEOCONFIG
    raw_driver = neo4j.GraphDatabase.driver(
        neoconfig['url'], auth=(
            neoconfig['user'], neoconfig['pass']))
    if raw_driver is None:
        raise BaseException("cannot connect to neo4j")
    else:
        return raw_driver
def raw_query(query, **kwargs):
    # just get data, no cursor
    neodriver = neo_connect()
    session = neodriver.session()
    # logging.info('neoquery %s', query)
    # with neodriver.session() as session:
    try:
        result = session.run(query, **kwargs)
        data = result.data()
        return data
    except neo4j.exceptions.CypherSyntaxError as err:
        logging.error('neo error %s', err)
        logging.error('failed query: %s', query)
        raise err
    # finally:
    #     logging.info('close session')
    #     session.close()
update: someone pointed me to this example which is another way to use the tx wrapper.
https://github.com/neo4j-graph-examples/northwind/blob/main/code/python/example.py#L16-L21
def raw_query(query, **kwargs):
    neodriver = neo_connect()  # cached dbconn
    with neodriver.session() as session:
        try:
            result = session.run(query, **kwargs)
            return result.data()
This is perfectly fine and works as intended on my end.
The error you're seeing is stating that there is a connection problem. So there must be something going on between the server and the driver that's outside of its influence.
Also, please note, that there is a difference between all of these ways to run a query:
with driver.session() as session:
    result = session.run("<SOME CYPHER>")

def work(tx):
    result = tx.run("<SOME CYPHER>")

with driver.session() as session:
    session.write_transaction(work)
The latter one might be 3 lines longer, and the team working on the drivers has collected some feedback regarding this. However, there are more things to consider here. Firstly, changing the API surface is something that needs careful planning and cannot be done in, say, a patch release. Secondly, there are technical hurdles to overcome. Here are the semantics, anyway:
Auto-commit transaction. Runs only that query as one unit of work.
If you run a new auto-commit transaction within the same session, the previous result will buffer all available records for you (depending on the query, this will consume a lot of memory). This can be avoided by calling result.consume(). However, if the session goes out of scope, the result will be consumed automatically. This means you cannot extract further records from it. Lastly, any error will be raised and needs handling in the application code.
Managed transaction. Runs whatever unit of work you want within that function. A transaction is implicitly started and committed (unless you rollback explicitly) around the function.
If the transaction ends (end of function or rollback), the result will be consumed and become invalid. You'll have to extract all records you need before that.
This is the recommended way of using the driver because it will not raise all errors but handle some internally (where appropriate) and retry the work function (e.g. if the server is only temporarily unavailable). Since the function might be executed multiple times, you must make sure it's idempotent.
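To make the managed-transaction pattern concrete, here is a small hedged sketch (it assumes `driver` is an already-created neo4j.Driver; the Person label and min_age parameter are purely illustrative); note how all records are extracted before the transaction function returns:

def get_names(tx, min_age):
    result = tx.run(
        "MATCH (p:Person) WHERE p.age >= $min_age RETURN p.name AS name",
        min_age=min_age)
    # Pull everything out while the transaction is still open;
    # the result becomes invalid once this function returns.
    return [record["name"] for record in result]

with driver.session() as session:
    names = session.read_transaction(get_names, 21)  # retried on transient failures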
Closing thoughts:
Please remember that Stack Overflow is monitored on a best-effort basis, and what can be perceived as hasty comments may get in the way of getting helpful answers to your questions.

Preserve an aborted transaction when SQLAlchemy raises a ProgrammingError

I have a slightly unusual problem with transaction state and error handling in SQLAlchemy. The short version: is there any way of preserving a transaction when SQLAlchemy raises a ProgrammingError and aborts it?
Background
I'm working on an integration test suite for a legacy codebase. Right now, I'm designing a set of fixtures that will allow us to run all tests inside transactions, inspired by the SQLAlchemy documentation. The general paradigm involves opening a connection, starting a transaction, binding a session to that connection, and then mocking out most database access methods so that they make use of that transaction. (To get a sense of what this looks like, see the code provided in the docs link above, including the note at the end.) The goal is to allow ourselves to run methods from the codebase that perform a lot of database updates in the context of a test, with the assurance that any side effects that happen to alter the test database will get rolled back after the test has completed.
My problem is that the code often relies on handling DBAPI errors to accomplish control flow when running queries, and those errors automatically abort transactions (per the psycopg2 docs). This poses a problem, since I need to preserve the work that has been done in that transaction up to the point that the error is raised, and I need to continue using the transaction after the error handling is done.
Here's a representative method that uses error handling for control flow:
from api.database import engine

def entity_count():
    """
    Count the entities in a project.
    """
    get_count = '''
        SELECT COUNT(*) AS entity_count FROM entity_browser
    '''
    with engine.begin() as conn:
        try:
            count = conn.execute(get_count).first().entity_count
        except ProgrammingError:
            count = 0
    return count
In this example, the error handling provides a quick way of determining if the table entity_browser exists: if not, Postgres will throw an error that gets caught at the DBAPI level (psycopg2) and passed up to SQLAlchemy as a ProgrammingError.
In the tests, I mock out engine.begin() so that it always returns the connection with the ongoing transaction that was established in the test setup. Unfortunately, this means that when the code continues execution after SQLAlchemy has raised a ProgrammingError and psycopg2 has aborted the transaction, SQLAlchemy will raise an InternalError the next time a database query runs using the open connection, complaining that the transaction has been aborted.
Here's a sample test exhibiting this behavior:
import pytest
import sqlalchemy as sa

def test_entity_count(session):
    """
    Test the `entity_count` method.

    `session` is a fixture that sets up the transaction and mocks out
    database access, returning a Flask-SQLAlchemy `scoped_session` object
    that we can use for queries.
    """
    # Make a change to a table that we can observe later
    session.execute('''
        UPDATE users
        SET name = 'in a test transaction'
        WHERE id = 1
    ''')

    # Drop `entity_browser` in order to raise a `ProgrammingError` later
    session.execute('''DROP TABLE entity_browser''')

    # Run the `entity_count` method, making sure that it raises an error
    with pytest.raises(sa.exc.ProgrammingError):
        count = entity_count()
        assert count == 0

    # Make sure that the changes we made earlier in the test still exist
    altered_name = session.execute('''
        SELECT name
        FROM users
        WHERE id = 1
    ''')
    assert altered_name == 'in a test transaction'
Here's the type of output I get:
>       altered_name = session.execute('''
            SELECT name
            FROM users
            WHERE id = 1
        ''')

[... traceback history ...]

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       sqlalchemy.exc.InternalError: (psycopg2.InternalError) current transaction is
        aborted, commands ignored until end of transaction block
Attempted solutions
My first instinct was to try to interrupt the error handling and force a rollback using SQLAlchemy's handle_error event listener. I added a listener into the test fixture that would roll back the raw connection (since SQLAlchemy Connection instances have no rollback API, as far as I understand it):
@sa.event.listens_for(connection, 'handle_error')
def raise_error(context):
    dbapi_conn = context.connection.connection
    dbapi_conn.rollback()
This successfully keeps the transaction open for further use, but ends up rolling back all of the previous changes made in the test. Sample output:
>       assert altered_name == 'in a test transaction'
E       AssertionError
Clearly, rolling back the raw connection is too aggressive of an approach. Thinking that I might be able to roll back to the last savepoint, I tried rolling back the scoped session, which has an event listener attached to it that automatically opens up a new nested transaction when a previous one ends. (See the note at the end of the SQLAlchemy doc on transactions in tests for a sample of what this looks like.)
Thanks to the mocks set up in the session fixture, I can import the scoped session directly into the event listener and roll it back:
@sa.event.listens_for(connection, 'handle_error')
def raise_error(context):
    from api.database import db
    db.session.rollback()
However, this approach also raises an InternalError on the next query. It seems that it doesn't actually roll back the transaction to the satisfaction of the underlying cursor.
Summary question
Is there any way of preserving the transaction after a ProgrammingError gets raised? On a more abstract level, what is happening when psycopg2 "aborts" the transaction, and how can I work around it?
The root of the problem is that you're hiding the exception from the context manager. You catch the ProgrammingError too soon and so the with-statement never sees it. Your entity_count() should be:
def entity_count():
    """
    Count the entities in a project.
    """
    get_count = '''
        SELECT COUNT(*) AS entity_count FROM entity_browser
    '''
    try:
        with engine.begin() as conn:
            count = conn.execute(get_count).first().entity_count
    except ProgrammingError:
        count = 0
    return count
And then if you provide something like
@contextmanager
def fake_begin():
    """ Begin a nested transaction and yield the global connection.
    """
    with connection.begin_nested():
        yield connection
as the mocked engine.begin(), the connection stays usable. But @JL Peyret raises a good point about the logic of your test. Engine.begin() usually[1] provides a new connection with an armed transaction from the pool, so your session and entity_count() probably shouldn't even be using the same connection.
[1]: Depends on pool configuration.
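For illustration, here is a rough, simplified sketch of how the fake could be wired into a pytest fixture (the api.database.engine path follows the question; pytest-mock's `mocker` and the session setup details are assumptions, and the real fixture in the question also mocks out other database access):

from contextlib import contextmanager

import pytest
from sqlalchemy.orm import scoped_session, sessionmaker

from api.database import engine

@pytest.fixture
def session(mocker):
    connection = engine.connect()
    outer = connection.begin()                  # outermost transaction for the test

    @contextmanager
    def fake_begin():
        # Yield the shared connection inside a SAVEPOINT, so a failed
        # statement only dooms the nested transaction, not the outer one.
        with connection.begin_nested():
            yield connection

    mocker.patch('api.database.engine.begin', side_effect=fake_begin)

    db_session = scoped_session(sessionmaker(bind=connection))
    yield db_session

    db_session.close()
    outer.rollback()                            # undo everything the test did
    connection.close()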

Retry on deadlock for MySQL / SQLAlchemy

I have searched for quite some time now and can't find a solution to my problem. We are using SQLAlchemy in conjunction with MySQL for our project, and we encounter the dreaded error several times:
1213, 'Deadlock found when trying to get lock; try restarting transaction'.
We would like to try to restart the transaction at most three times in this case.
I have started to write a decorator that does this, but I don't know how to save the session state before the failure and retry the same transaction after it. (SQLAlchemy requires a rollback whenever an exception is raised.)
My work so far:
def retry_on_deadlock_decorator(func):
    lock_messages_error = ['Deadlock found', 'Lock wait timeout exceeded']

    @wraps(func)
    def wrapper(*args, **kwargs):
        attempt_count = 0
        while attempt_count < settings.MAXIMUM_RETRY_ON_DEADLOCK:
            try:
                return func(*args, **kwargs)
            except OperationalError as e:
                if any(msg in e.message for msg in lock_messages_error) \
                        and attempt_count <= settings.MAXIMUM_RETRY_ON_DEADLOCK:
                    logger.error('Deadlock detected. Trying sql transaction once more. Attempts count: %s'
                                 % (attempt_count + 1))
                else:
                    raise
            attempt_count += 1
    return wrapper
You can't really do that with the Session from the outside. Session would have to support this internally. It would involve saving a lot of private state, so this may not be worth your time.
I completely ditched most ORM stuff in favour of the lower level SQLAlchemy Core interface. Using that (or even any dbapi interface) you can trivially use your retry_on_deadlock_decorator decorator (see question above) to make a retry-aware db.execute wrapper.
@retry_on_deadlock_decorator
def deadlock_safe_execute(db, stmt, *args, **kw):
    return db.execute(stmt, *args, **kw)
And instead of
db.execute("UPDATE users SET active=0")
you do
deadlock_safe_execute(db, "UPDATE users SET active=0")
which will retry automatically if a deadlock happens.
Did you use code like this?
try:
    perform table transaction
    break
except:
    rollback
    delay
    try again to perform table transaction
The only way to truly handle deadlocks is to write your code to expect
them. This generally isn't very difficult if your database code is
well written. Often you can just put a try/catch around the query
execution logic and look for a deadlock when errors occur. If you
catch one, the normal thing to do is just attempt to execute the
failed query again.
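In SQLAlchemy terms, that retry-around-the-unit-of-work advice might look roughly like this (a hedged sketch: perform_table_transaction, session, and the attempt limit are placeholders, not part of the original answer):

import time
from sqlalchemy.exc import OperationalError

MAX_ATTEMPTS = 3

for attempt in range(1, MAX_ATTEMPTS + 1):
    try:
        perform_table_transaction(session)   # your unit of work
        session.commit()
        break
    except OperationalError as exc:
        session.rollback()                   # mandatory after the failure
        if 'Deadlock found' not in str(exc) or attempt == MAX_ATTEMPTS:
            raise
        time.sleep(0.5 * attempt)            # brief back-off before retrying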
Useful links:
How to Cope with Deadlocks
Understanding Autocommit
Property MySQLConnection.autocommit

How do I efficiently do a bulk insert-or-update with SQLAlchemy?

I'm using SQLAlchemy with a Postgres backend to do a bulk insert-or-update. To try to improve performance, I'm attempting to commit only once every thousand rows or so:
trans = engine.begin()
for i, rec in enumerate(records):
    if i % 1000 == 0:
        trans.commit()
        trans = engine.begin()
    try:
        inserter.execute(...)
    except sa.exceptions.SQLError:
        my_table.update(...).execute()
trans.commit()
However, this isn't working. It seems that when the INSERT fails, it leaves things in a weird state that prevents the UPDATE from happening. Is it automatically rolling back the transaction? If so, can this be stopped? I don't want my entire transaction rolled back in the event of a problem, which is why I'm trying to catch the exception in the first place.
The error message I'm getting, BTW, is "sqlalchemy.exc.InternalError: (InternalError) current transaction is aborted, commands ignored until end of transaction block", and it happens on the update().execute() call.
You're hitting some weird Postgresql-specific behavior: if an error happens in a transaction, it forces the whole transaction to be rolled back. I consider this a Postgres design bug; it takes quite a bit of SQL contortionism to work around in some cases.
One workaround is to do the UPDATE first. Detect if it actually modified a row by looking at cursor.rowcount; if it didn't modify any rows, it didn't exist, so do the INSERT. (This will be faster if you update more frequently than you insert, of course.)
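With SQLAlchemy Core, that UPDATE-first workaround could look something like this hedged sketch (my_table, records, and the id key are illustrative; as with the savepoint approach below, a row inserted concurrently between the two statements can still break it):

with engine.begin() as conn:
    for rec in records:
        result = conn.execute(
            my_table.update()
            .where(my_table.c.id == rec['id'])
            .values(**rec))
        if result.rowcount == 0:              # no row matched, so it doesn't exist yet
            conn.execute(my_table.insert().values(**rec))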
Another workaround is to use savepoints:
SAVEPOINT a;
INSERT INTO ....;
-- on error:
ROLLBACK TO SAVEPOINT a;
UPDATE ...;
-- on success:
RELEASE SAVEPOINT a;
This has a serious problem for production-quality code: you have to detect the error accurately. Presumably you're expecting to hit a unique constraint check, but you may hit something unexpected, and it may be next to impossible to reliably distinguish the expected error from the unexpected one. If this hits the error condition incorrectly, it'll lead to obscure problems where nothing will be updated or inserted and no error will be seen. Be very careful with this. You can narrow down the error case by looking at Postgresql's error code to make sure it's the error type you're expecting, but the potential problem is still there.
Finally, if you really want to do batch-insert-or-update, you actually want to do many of them in a few commands, not one item per command. This requires trickier SQL: SELECT nested inside an INSERT, filtering out the right items to insert and update.
This error is from PostgreSQL. PostgreSQL doesn't allow you to execute commands in the same transaction if one command creates an error. To fix this you can use nested transactions (implemented using SQL savepoints) via conn.begin_nested(). Here's something that might work. I made the code use explicit connections, factored out the chunking part, and made the code use the context manager to manage transactions correctly.
from itertools import chain, islice

def chunked(seq, chunksize):
    """Yields items from an iterator in chunks."""
    it = iter(seq)
    while True:
        yield chain([next(it)], islice(it, chunksize - 1))

conn = engine.connect()
for chunk in chunked(records, 1000):
    with conn.begin():
        for rec in chunk:
            try:
                with conn.begin_nested():
                    conn.execute(inserter, ...)
            except sa.exceptions.SQLError:
                conn.execute(my_table.update(...))
This still won't have stellar performance though due to nested transaction overhead. If you want better performance try to detect which rows will create errors beforehand with a select query and use executemany support (execute can take a list of dicts if all inserts use the same columns). If you need to handle concurrent updates, you'll still need to do error handling either via retrying or falling back to one by one inserts.
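As a hedged sketch of that faster path, one could first select the keys that already exist and then split the batch into an executemany INSERT and per-row UPDATEs (this assumes each record dict carries an id primary key and my_table is the Table used above):

from sqlalchemy import select

with engine.begin() as conn:
    ids = [rec['id'] for rec in records]
    existing = {row.id for row in
                conn.execute(select(my_table.c.id).where(my_table.c.id.in_(ids)))}
    to_insert = [rec for rec in records if rec['id'] not in existing]
    to_update = [rec for rec in records if rec['id'] in existing]
    if to_insert:
        conn.execute(my_table.insert(), to_insert)     # executemany over the new rows
    for rec in to_update:
        conn.execute(my_table.update()
                     .where(my_table.c.id == rec['id'])
                     .values(**rec))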
