I'm having problems trying to catch an error. I'm using Pyramid/SQLAlchemy and made a sign-up form with email as the primary key. The problem is that when a duplicate email is entered it raises an IntegrityError, so I'm trying to catch that error and provide a message, but no matter what I do I can't catch it; the error keeps appearing.
try:
    new_user = Users(email, firstname, lastname, password)
    DBSession.add(new_user)
    return HTTPFound(location=request.route_url('new'))
except IntegrityError:
    message1 = "Yikes! Your email already exists in our system. Did you forget your password?"
I get the same message when I try except exc.SQLAlchemyError (although I want to catch specific errors, not a blanket catch-all). I also tried exc.IntegrityError, but no luck (although it exists in the API).
Is there something wrong with my Python syntax, or is there something I need to do special in SQLAlchemy to catch it?
I don't know how to solve this, but I have a few ideas about the cause. Maybe the try statement isn't failing at all: SQLAlchemy raises the exception itself and Pyramid generates the error view, so the except IntegrityError: never gets activated. Or, more likely, I'm catching this error completely wrong.
In Pyramid, if you've configured your session (which the scaffold does for you automatically) to use the ZopeTransactionExtension, then the session is not flushed/committed until after the view has executed. If you want to catch any SQL errors yourself in your view, you need to force a flush to send the SQL to the engine: a DBSession.flush() after the add(...) should do it.
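To make the timing concrete, here's a minimal, self-contained sketch using plain SQLAlchemy with an in-memory SQLite database standing in for the Pyramid setup (the Users model from the question is reduced to a single email column). The point is that flush() sends the INSERT immediately, so the IntegrityError surfaces inside the try block instead of at the transaction manager's commit after the view has returned:

```python
from sqlalchemy import create_engine, Column, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    email = Column(String, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(email="a@example.com"))
session.commit()

message = None
try:
    session.add(User(email="a@example.com"))  # duplicate primary key
    session.flush()  # forces the INSERT now; without it, the error fires at commit
except IntegrityError:
    session.rollback()
    message = "Yikes! Your email already exists in our system."

print(message)
```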
Update
I'm updating this answer with an example of a savepoint just because there are very few examples around of how to do this with the transaction package.
import uuid

import transaction
from sqlalchemy.exc import IntegrityError

def create_unique_object(db, max_attempts=3):
    while True:
        sp = transaction.savepoint()
        try:
            obj = MyObject()
            obj.identifier = uuid.uuid4().hex
            db.add(obj)
            db.flush()
        except IntegrityError:
            sp.rollback()
            max_attempts -= 1
            if max_attempts < 1:
                raise
        else:
            return obj

obj = create_unique_object(DBSession)
Note that even this is susceptible to duplicates between transactions if no table-level locking is used, but it at least shows how to use a savepoint.
What you need to do is catch a general exception and print its class; then you can narrow the except clause to the specific exception.

except Exception as ex:
    print(ex.__class__)
There might be no database operations until DBSession.commit(), so the IntegrityError is raised later in the stack, after the controller code containing the try/except has already returned.
This is how I do it.
from contextlib import contextmanager

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
def create_user(email, firstname, lastname, password):
    new_user = Users(email, firstname, lastname, password)
    try:
        with session_scope() as session:
            session.add(new_user)
    except sqlalchemy.exc.IntegrityError:
        pass
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
Edit: The edited answer above is a better way of doing this, using rollback.
--
If you want to handle transactions in the middle of a pyramid application or something where an automatic transaction commit is performed at the end of a sequence, there's no magic that needs to happen.
Just remember to start a new transaction if the previous transaction has failed.
Like this:
def my_view(request):
    ...  # Do things
    if success:
        try:
            instance = self._instance(**data)
            DBSession.add(instance)
            transaction.commit()
            return {'success': True}
        except IntegrityError:  # <--- Oh no! Duplicate unique key
            transaction.abort()
            transaction.begin()  # <--- Start new transaction
            return {'success': False}
Notice that calling .commit() on a successful transaction is fine, so it is not necessary to start a new transaction after a successful call.
You only need to abort the transaction and start a new one if the transaction is in a failed state.
(If transaction wasn't such a poc, you could use a savepoint and roll back to the savepoint rather than starting a new transaction; sadly, that is not possible, as attempting a commit invalidates a known previous savepoint. Great stuff huh?) (edit: <--- Turns out I'm wrong about that...)
Catch the exception in a finally block, after flushing the session.
try:
    new_user = Users(email, firstname, lastname, password)
    DBSession.add(new_user)
    return HTTPFound(location=request.route_url('new'))
finally:
    try:
        DBSession.flush()
    except IntegrityError:
        message1 = "Yikes! Your email already exists in our system. Did you forget your password?"
Related
I need to roll back a transaction in the core event 'handle_error' (to catch 'KeyboardInterrupt'), but the parameter passed to this event is an ExceptionContext. How do I do this?
I usually use this kind of pattern when working with SQLAlchemy:
session = get_the_session_one_way_or_another()
try:
    # do something with the session
except:  # * see comment below
    session.rollback()
    raise
else:
    session.commit()
To make things easier to use, it is useful to have this as a context manager:
@contextmanager
def get_session():
    session = get_the_session_one_way_or_another()
    try:
        yield session
    except:
        session.rollback()
        raise
    else:
        session.commit()
And then:
with get_session() as session:
    # do something with the session
If an exception is raised within the block, the transaction will be rolled back by the context manager.
*There is an empty except: which catches literally everything. That is usually not what you want, but here the exception is always re-raised, so it's fine.
I am getting the error InterfaceError (0, ''). Is there a way in the PyMySQL library to check whether the connection or the cursor is closed? For the cursor I am already using a context manager, like this:
with db_connection.cursor() as cursor:
    ....
You can use the Connection.open attribute.

The Connection.open field will be 1 if the connection is open and 0 otherwise, so you can say:

if conn.open:
    # do something
The conn.open attribute will tell you whether the connection has been explicitly closed or whether a remote close has been detected. However, it's always possible that you will try to issue a query and suddenly the connection is found to have given out; there is no way to detect this ahead of time (indeed, it might happen during the process of issuing the query), so the only truly safe thing is to wrap your calls in a try/except block.
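A sketch of that advice: treat every query as fallible and retry once after reconnecting. The `connect` parameter here is a hypothetical factory; with PyMySQL it would be a function returning pymysql.connect(...), and the caught exception would be pymysql.err.OperationalError rather than the broad Exception used to keep this example server-free:

```python
def run_query(conn, query, connect):
    """Execute `query`; if the connection has died, reconnect and retry once."""
    try:
        cur = conn.cursor()
        cur.execute(query)
        return conn, cur.fetchall()
    except Exception:
        conn = connect()  # the old connection gave out (possibly mid-query)
        cur = conn.cursor()
        cur.execute(query)
        return conn, cur.fetchall()
```

The caller keeps the returned connection for the next call, so a reconnect is remembered rather than repeated on every query.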
Use conn.connection in an if statement.
import pymysql

def conn():
    mydb = pymysql.Connect('localhost', 'root', 'password', 'demo_db', autocommit=True)
    return mydb.cursor()

def db_exe(query, c):
    try:
        if c.connection:
            print("connection exists")
            c.execute(query)
            return c.fetchall()
        else:
            print("trying to reconnect")
            c = conn()
    except Exception as e:
        return str(e)

dbc = conn()
print(db_exe("select * from users", dbc))
This is how I did it, because I want to still run the query even if the connection goes down:
def reconnect():
    mydb = pymysql.Connect(host='localhost', user='root', password='password', database='demo_db', ssl={"fake_flag_to_enable_tls": True}, autocommit=True)
    return mydb.cursor()

try:
    if not c.connection.open:
        c = reconnect()  # reconnect
    if c.connection.open:
        c.execute(query)
        return c.fetchall()
except Exception as e:
    return str(e)
I think try and except might do the trick, instead of checking only the cursor.
try:
    c = db_connection.cursor()
except OperationalError:
    connected = False
else:
    connected = True
    # code here
I initially went with the solution from AKHIL MATHEW and called conn.open, but later during testing I found that conn.open sometimes returned a positive result even though the connection was lost. To be certain, I found I could call conn.ping(), which actually tests the connection. The method also accepts an optional parameter (reconnect=True) that causes it to reconnect automatically if the ping fails.
Of course there is a cost to this - as implied by the name, ping actually goes out to the server and tests the connection. You don't want to do this before every query, but in my case I have an AWS lambda spinning up on warm start and trying to reuse the connection, so I think I can justify testing my connection once on each warm start and reconnecting if it's been lost.
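The warm-start pattern described above can be sketched like this. The names (`_conn`, `get_connection`, `factory`) are illustrative; the only PyMySQL call being relied on is Connection.ping(reconnect=True), and the factory would be a function returning pymysql.connect(...):

```python
_conn = None  # module-level connection, reused across warm invocations

def get_connection(factory):
    """Return a live connection, pinging (and reconnecting) a cached one."""
    global _conn
    if _conn is None:
        _conn = factory()  # cold start: open a fresh connection
    else:
        # warm start: one round-trip to the server; with pymysql,
        # reconnect=True transparently re-opens a dead link
        _conn.ping(reconnect=True)
    return _conn
```

This keeps the ping cost to one per invocation instead of one per query.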
I have a small function for database connection in Django as follows:
def db_connection(query_name):
    try:
        cursor = connection.cursor()
        cursor.execute(query_name)
        descr = cursor.description
        rows = cursor.fetchall()
        result = [dict(zip([column[0] for column in descr], row)) for row in rows]
        return result
    except Exception as e:
        return e
    finally:
        cursor.close()
This function is used in my function-based views for Django REST framework to execute SQL queries. The general syntax is as follows:
@api_view(['GET'])
def foo_bar(request):
    ....
    ....
    ....
    query1 = "Select name from table"
    result = db_connection(query1)
    return Response(result, status=status.HTTP_200_OK)
However, what I need is to change the status value in the Response depending on the return value of my db_connection function, i.e. return a 200 OK if no exception occurs, else a 500. How can I check whether the return value of a function is an exception?
You are explicitly bypassing all the nice control flow that exceptions already give you. Don't do that.
If you just want a 500 to be returned, then don't catch the exception at all; just let it bubble up, Django's exception handling middleware will catch it and return a 500 response.
In any case, you should never catch the base Exception class; only catch the things you can actually deal with. And when you do catch an exception, you should actually deal with it; returning it instead of raising it is of no use whatsoever.
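A sketch of the let-it-bubble-up version of the helper, using the stdlib sqlite3 module as a stand-in for Django's `connection` so it is self-contained. The except clause is gone entirely: a bad query raises, and in Django the exception middleware would turn that into a 500 response.

```python
import sqlite3

# stand-in for Django's `connection`
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE t (name TEXT)")
connection.execute("INSERT INTO t VALUES ('alice')")

def db_connection(query_name):
    cursor = connection.cursor()
    try:
        cursor.execute(query_name)  # an invalid query raises here and propagates
        descr = cursor.description
        rows = cursor.fetchall()
        return [dict(zip([col[0] for col in descr], row)) for row in rows]
    finally:
        cursor.close()  # still runs whether the query succeeded or raised

print(db_connection("SELECT name FROM t"))  # [{'name': 'alice'}]
```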
I would move the try/except block into the foo_bar function, around the result = db_connection(query1) line, and return a 500 status from there, along with some kind of errors dict containing a message explaining why it failed. I say this because I assume you are building a REST API, which must always return an answer rather than crash the normal Django way.
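The shape of that call-site handling, sketched with plain tuples so it runs anywhere; in the real view you would return DRF's Response(...) with status.HTTP_500_INTERNAL_SERVER_ERROR instead, and catch a narrower exception such as django.db.DatabaseError rather than Exception:

```python
def call_with_status(run_query):
    """Run the query function; map success to 200 and a failure to 500 plus an errors dict."""
    try:
        result = run_query()
    except Exception as e:  # in Django, catch DatabaseError here instead
        return 500, {'errors': str(e)}
    return 200, result
```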
I'm developing a Pyramid application and am currently in the process of moving from SQLite to PostgreSQL. I've found that PostgreSQL's more restrictive transaction management is giving me a bad time.
I am using pyramid_tm because I find it convenient. Most of my problems occur during my async calls. What I have are views that serve up dynamic forms. The idea is: if we get an id that corresponds to a database row, we edit the existing row; otherwise, we add a new one.
@view_config(route_name='contact_person_form',
             renderer='../templates/ajax/contact_person_form.pt',
             permission='view',
             request_method='POST')
def contact_person_form(request):
    try:
        contact_person_id = request.params['contact_person']
        DBSession.begin_nested()
        contact_person = DBSession.query(ContactPerson).filter(ContactPerson.id == contact_person_id).one()
        transaction.commit()
    except (NoResultFound, DataError):
        DBSession.rollback()
        contact_person = ContactPerson(name='', email='', phone='')
    return dict(contact_person=contact_person)
I need to begin a nested transaction because otherwise my lazy request method, which is registered with config.add_request_method(get_user, 'user', reify=True) and called when rendering my view,
def get_user(request):
    userid = unauthenticated_userid(request)
    if userid is not None:
        user = DBSession.query(Employee).filter(Employee.id == userid).first()
        return user
complains that the transaction has been interrupted and the SELECT on employees will be skipped.
I have two questions:
Is it okay to do transaction.commit() on a session.begin_nested() nested transaction? I don't know exactly where SQLAlchemy ends and pyramid_tm begins. If I try to commit the session I get an exception saying I can only commit using the transaction manager. On the other hand, DBSession.rollback() works fine.
Does handling this like
try:
    # do something with db
except:
    # oops, let's do something else
make sense? I have the feeling this is 'pythonic', but I'm not sure if this underlying transaction thing calls for non-pythonic means.
Calling transaction.commit() in your code commits the session, which expires your contact_person object when you try to use it later, after the commit. Similarly, if your user object is touched on both sides of the commit, you'll have problems.
As you said, if there is an exception (NoResultFound), your session is now invalidated. What you're looking for is a savepoint, which transaction supports, but not directly through begin_nested. Instead, you can use transaction.savepoint() combined with DBSession.flush() to handle errors.
The logic here is that flush executes the SQL on the database, raising any errors and allowing you to rollback the savepoint. After the rollback, the session is recovered and you can go on your merry way. Nothing has been committed yet, leaving that job for pyramid_tm at the end of the request.
try:
    sp = transaction.savepoint()
    contact_person = DBSession.query(ContactPerson)\
        .filter(ContactPerson.id == contact_person_id)\
        .one()
    DBSession.flush()
except (NoResultFound, DataError):
    sp.rollback()
    contact_person = ContactPerson(name='', email='', phone='')
return dict(contact_person=contact_person)
I'm trying to recover after a primary key violation in an insert, by logging the duplicate user to another table. However, my code produces the error InvalidRequestError: This transaction is inactive.
Unfortunately the traceback doesn't show the specific line within this function where the error occurs; it only goes as deep as the function calling this one (which is strange).
Does my try/except begin/rollback/commit pattern look correct?
def convert_email(db, user_id, response):
    """Map a submitted email address to a user, if the user
    does not yet have an email address.
    """
    email = response['text_response']
    trans = db.begin()
    try:
        update_stmt = users_tbl.update(
            and_(users_tbl.c.user_id == user_id,
                 users_tbl.c.email == None))
        db.execute(update_stmt.values(dict(email=email)))
        trans.commit()
    except sqlalchemy.exc.IntegrityError as e:
        trans.rollback()
        if e.orig.pgcode == UNIQUE_VIOLATION:
            trans = db.begin()
            try:
                user = db.execute(users_tbl.select(users_tbl.c.email == email))
                parent_user_id = user.first()['user_id']
                insert_stmt = duplicate_public_users_tbl.insert().values(
                    user_id=user_id,
                    parent_id=parent_user_id)
                db.execute(insert_stmt)
                trans.commit()
            except sqlalchemy.exc.IntegrityError as e:
                trans.rollback()
                if e.orig.pgcode != UNIQUE_VIOLATION:
                    raise
The exception was being produced by the calling function, which was itself wrapped in a transaction:
with engine.begin() as db:
    convert_email(db, user_id, response)
The inner rollback() call must terminate the outer transaction as well. This is hinted at by the documentation for Transaction.close(),
... This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
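As an illustration of the enclosing-transaction problem and one way around it: instead of opening a sibling db.begin() inside the caller's engine.begin() (whose rollback ends the outer transaction too), open a SAVEPOINT with Connection.begin_nested() and roll that back on error. This is a self-contained sketch against in-memory SQLite; the two event hooks are the workaround documented in SQLAlchemy's SQLite dialect notes that pysqlite needs before SAVEPOINT works (on PostgreSQL, as in the question, they're unnecessary):

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.exc import IntegrityError

engine = create_engine("sqlite://")

@event.listens_for(engine, "connect")
def _no_implicit_begin(dbapi_conn, record):
    dbapi_conn.isolation_level = None  # stop pysqlite's implicit transaction handling

@event.listens_for(engine, "begin")
def _explicit_begin(conn):
    conn.exec_driver_sql("BEGIN")  # emit our own BEGIN instead

with engine.begin() as db:
    db.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY)"))

with engine.begin() as db:          # outer transaction (the caller's engine.begin())
    nested = db.begin_nested()      # SAVEPOINT, not a sibling transaction
    try:
        db.execute(text("INSERT INTO users (id) VALUES (1)"))
        db.execute(text("INSERT INTO users (id) VALUES (1)"))  # duplicate key
        nested.commit()
    except IntegrityError:
        nested.rollback()           # rolls back to the SAVEPOINT only
    db.execute(text("INSERT INTO users (id) VALUES (2)"))      # outer txn still usable

with engine.begin() as db:
    rows = db.execute(text("SELECT id FROM users ORDER BY id")).fetchall()
```

Both inserts made under the savepoint are undone, yet the outer transaction survives and commits the follow-up insert, which is exactly what the inner trans.rollback() in convert_email could not do.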