How to catch OperationalError raised anywhere in Pyramid+SQLAlchemy? - python

I have a typical Pyramid+SQLAlchemy+Postgres app. Under stress testing, or during moments of exceptional load combined with a low max_connections setting in PG, it can happen that an OperationalError is raised:
OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
Now, obviously I do not want to do this everywhere:
try:
    DBSession.query(Item)...
except OperationalError as e:
    log.error(...)
Is there some way of catching this exception "globally" to be properly handled?
My app uses the ZopeTransactionExtension in the typical Pyramid manner:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
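One mechanism I am aware of is Pyramid's exception views, which the built-in excview tween consults for uncaught exceptions raised in view code. A minimal sketch (the view name and the 503 response are my assumptions), though I believe it will not see errors raised later, e.g. at commit time inside pyramid_tm:

import logging

from pyramid.response import Response
from pyramid.view import view_config
from sqlalchemy.exc import OperationalError

log = logging.getLogger(__name__)

@view_config(context=OperationalError)
def operational_error_view(exc, request):
    # Pyramid passes the raised exception as the view's context.
    log.error('database unavailable: %s', exc)
    return Response('Service temporarily unavailable', status=503)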

I managed to develop a tween that can do this (example):
def catch_pg_exc_tween_factory(handler, registry):
    def catch_pg_exc_tween_clos(request):
        response = None
        try:
            response = handler(request)
        except Exception as e:
            log.error('\n\n\n +++ problem: %s', e)
        return response
    return catch_pg_exc_tween_clos
The strange thing is that nothing but explicit tween ordering in development.ini works; no amount of over= or under= tuning of the config.add_tween call seems to have any effect. (This appears to be expected: when pyramid.tweens is set explicitly, Pyramid ignores the implicit ordering hints given to add_tween.)
pyramid.tweens = pyramid_debugtoolbar.toolbar_tween_factory
                 pyramid.tweens.excview_tween_factory
                 pyramid_tm.tm_tween_factory
                 mypkg.util.tweens.catch_pg_exc_tween_factory
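For completeness, a narrower variant of the tween (a sketch; the 503 response is an assumption) that catches only OperationalError and lets everything else propagate, rather than swallowing every exception:

import logging

from pyramid.response import Response
from sqlalchemy.exc import OperationalError

log = logging.getLogger(__name__)

def catch_pg_exc_tween_factory(handler, registry):
    def catch_pg_exc_tween(request):
        try:
            return handler(request)
        except OperationalError as e:
            # Only the database-unavailable case is handled here.
            log.error('database unavailable: %s', e)
            return Response('Service temporarily unavailable', status=503)
    return catch_pg_exc_tween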

Related

Custom Exception in Core Application Unrecognised in Module

In my core application I have a custom exception, timeout, which when caught should print a message and trigger sys.exit() to terminate the application.
timeout Exception:
class timeout(Exception):
    def __init__(self, msg):
        super().__init__(msg)
Core Application:
try:
    s3Client.put_object(Bucket=bucket, Key=key, Body=body)
except timeout as error:
    print(error)
    sys.exit()
In the above example I am using the boto3 AWS module, but it could be substituted with any other. During the execution of this boto3 call, the timeout error will be raised by the core application.
What I would expect is for the timeout error to be raised in the core application (triggered by an alarm from the signal module, FYI), the error message to be printed, and sys.exit() to be triggered, terminating the application.
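For context, this is roughly how the alarm-driven timeout is set up (a sketch; the 30-second limit is illustrative, and signal.alarm is Unix-only):

import signal

class timeout(Exception):
    def __init__(self, msg):
        super().__init__(msg)

def _raise_timeout(signum, frame):
    # Raise the custom exception when the alarm fires.
    raise timeout("Integration Timeout")

signal.signal(signal.SIGALRM, _raise_timeout)
signal.alarm(30)  # deliver SIGALRM after 30 seconds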
Instead, the core application raises the custom timeout exception, but the boto3 function is not written to handle it and instead surfaces HTTPClientError('An HTTP Client raised an unhandled exception: Integration Timeout'). My core application in turn is not written to handle HTTPClientError, so the timeout logic never executes.
I want a way for my timeout exception to be handled in the core application, even if the module throws a different exception.
I considered modifying the module to add my custom exception, but that seems hacky.
Thanks
A bit hacky, but if you don't want to modify the module, this could be an option:
timeout Exception:

class timeout(Exception):
    exception_identifier = "timeout"

    def __init__(self, msg):
        # Embed a marker in the message so the original cause can still be
        # recognised if another library wraps this exception.
        super().__init__(msg + f" exception_identifier={self.exception_identifier}")

Core Application:

import sys

from botocore.exceptions import HTTPClientError
from myexceptions import timeout  # wherever the timeout class lives (module name assumed)

try:
    s3Client.put_object(Bucket=bucket, Key=key, Body=body)
except timeout as error:
    print(error)
    sys.exit()
except HTTPClientError as error:
    # The wrapped message survives inside the HTTPClientError text.
    if f"exception_identifier={timeout.exception_identifier}" in str(error):
        print(error)
        sys.exit()
    else:
        raise
There may be a better way to determine whether the original exception was a timeout than comparing strings. You can also use a long ID string as the exception_identifier, one unlikely to clash with other exceptions that happen to use "timeout" in their message.
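One alternative to string matching, as a sketch: if the wrapping library preserves Python's exception chain (an assumption that depends on how it wraps errors), you can walk __cause__/__context__ instead:

def caused_by(exc, exc_type):
    # Walk the exception chain looking for an instance of exc_type.
    while exc is not None:
        if isinstance(exc, exc_type):
            return True
        exc = exc.__cause__ or exc.__context__
    return False

The handler then becomes: except HTTPClientError as error: if caused_by(error, timeout): ... with no marker string needed.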

flask caching - handle exception when redis service is down

I have some simple code using Flask:
@app.route('/foo/<arg>')
@app.cache.memoize()
def foo_response(arg):
    return 'Hello ' + arg
This is working great while my redis server (cache server) is up.
If the redis server goes down, an exception is raised every time I query /foo/<arg>, which is understandable.
How (and where) can I handle that exception (à la try/except) so that the redis server is simply bypassed when it is down?
It is actually already implemented this way. Checking the source of memoize() in the Flask-Cache package, you see:
try:
    cache_key = decorated_function.make_cache_key(f, *args, **kwargs)
    rv = self.cache.get(cache_key)
except Exception:
    if current_app.debug:
        raise
    logger.exception("Exception possibly due to cache backend.")
    return f(*args, **kwargs)
This means that in production, i.e. with app.debug=False, the exception is only logged and the function is called normally.
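To make the pattern explicit, here is the same fallback logic in isolation (an illustrative sketch; the cache object and key handling are assumed):

import logging

logger = logging.getLogger(__name__)

def cached_call(cache, key, fn, *args, **kwargs):
    # Try the cache first; on any backend error (e.g. redis down), log it
    # and fall back to computing the value directly.
    try:
        rv = cache.get(key)
        if rv is not None:
            return rv
    except Exception:
        logger.exception("Cache backend unavailable; computing directly.")
    return fn(*args, **kwargs)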

flask-sqlalchemy lost connection to MySQL db

Working with a MySQL database and flask-sqlalchemy, I am encountering a lost-connection error ('Lost connection to MySQL server during query'). I have already adapted app.config['SQLALCHEMY_POOL_RECYCLE'] to be smaller than the engine timeout, and I added a pool_pre_ping to ensure the database does not go away between two requests. Now I have no idea left how this can still be an issue, since my understanding is that flask-sqlalchemy takes care of opening and closing sessions correctly.
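For reference, the relevant configuration looks roughly like this (the values are illustrative):

from flask import Flask

app = Flask(__name__)
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280  # below MySQL's wait_timeout
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}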
As a workaround, I thought about telling flask-sqlalchemy to catch lost-connection errors and restart the connection on the fly. But I have no idea how to do this. So, my questions are:
Do you know what could possibly cause my connection loss?
Do you think, my recent approach of catching is a good idea or do you have a better suggestion?
If it is a good idea, how can I do it most conveniently? I don't want to wrap every request in try/except statements, since I have a lot of code.
I do not know the answer to your 1st and 2nd questions, but for the 3rd one: I used a decorator to wrap all my functions instead of using try/except directly inside them. The explicit pre-ping and session rollback/close somehow also solved the lost-connection problem for me (MariaDB was the backend I was using):
import functools

def manage_session(f):
    @functools.wraps(f)  # preserve the wrapped function's name for Flask
    def inner(*args, **kwargs):
        # MANUAL PRE PING
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except Exception:
            db.session.rollback()
        finally:
            db.session.close()

        # SESSION COMMIT, ROLLBACK, CLOSE
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception:
            db.session.rollback()
            # OR: return traceback.format_exc()
            raise
        finally:
            db.session.close()
    return inner
and then wrapping my functions with the decorator:
@manage_session
def my_function(*args, **kwargs):
    return "result"

Using Raven, how to save exception to file and sent later to Sentry?

In my Python app I have been using Raven+Sentry to catch exceptions and send them to Sentry at the moment they occur; code below:
if __name__ == '__main__':
    from raven import Client

    client = Client(dsn='<MY SENTRY DSN>')
    try:
        MyApp().run()
    except:
        import traceback
        traceback.print_exc()
        ident = client.get_ident(client.captureException())
        print("Exception caught; reference is %s" % ident)
It sends the exception directly to the Sentry backend right after the application crashes. What I want to accomplish is to save the exception to a local file first, and then send it later, when the app next starts.
Do Sentry and Raven support this kind of functionality?

How to read www-site properly?

I am trying to read a website in my Python project. However, the code crashes if I can't connect to the Internet. How can I catch the exception if the connection is lost at some point while reading the site?
import time
import urllib3

# Gets the weather from foreca.fi.
def get_weather(url):
    http = urllib3.PoolManager()
    r = http.request('GET', url)
    return r.data

date = time.strftime("%Y%m%d")  # renamed so it does not shadow the time module
url = "http://www.foreca.fi/Finland/Kuopio/Saaristokaupunki/details/" + date
weather_all = get_weather(url)
print(weather_all)
I tested your code with no connection; when there is no connection it raises a MaxRetryError ("Raised when the maximum number of retries is exceeded."), so you can handle the exception like this:
try:
    weather_all = get_weather(url)  # your code here
except urllib3.exceptions.MaxRetryError:
    print("Could not connect")  # handle the exception here
Another thing you can do is use a timeout and do something special when it times out, so that you have additional control; in a sense that is what the raised exception is telling you, that it hit the maximum number of retries.
Also, consider working with the requests library.
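A sketch of the timeout idea (the timeout values and retry count are just illustrative):

import urllib3

url = "http://www.foreca.fi/Finland/Kuopio/Saaristokaupunki/details/"
http = urllib3.PoolManager()
try:
    r = http.request(
        'GET', url,
        timeout=urllib3.Timeout(connect=2.0, read=5.0),
        retries=3,
    )
    print(r.data)
except urllib3.exceptions.MaxRetryError:
    print("Could not reach the server")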
I presume urllib3 would throw a URLError exception if there is no route to the specified server (i.e. the Internet connection is lost), so perhaps you could use a simple try/except? I'm not particularly well versed in urllib3, but for urllib it would be something like:
E.g.
import urllib.error

try:
    weather_all = get_weather(url)
except urllib.error.URLError as e:
    print("No connection to host")
