I have some simple code using Flask:
@app.route('/foo/<arg>')
@app.cache.memoize()
def foo_response(arg):
    return 'Hello ' + arg
This is working great while my redis server (cache server) is up.
If the redis server goes down, an exception is raised every time I query /foo/<arg>, which is understandable.
How (and where) can I handle that exception (à la try-except) in order to not use the redis server if it is down at that moment?
It is actually already handled this way. Checking the source of memoize() in the Flask-Cache package, you see:
try:
    cache_key = decorated_function.make_cache_key(f, *args, **kwargs)
    rv = self.cache.get(cache_key)
except Exception:
    if current_app.debug:
        raise
    logger.exception("Exception possibly due to cache backend.")
    return f(*args, **kwargs)
This means that if you are in production, i.e. app.debug=False, you will only see the exception in the log and the function will be called normally.
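For reference, the fallback behaviour Flask-Cache implements can be sketched as a standalone decorator. This is a hedged illustration, not Flask-Cache's actual API: the name cached_or_direct and the cache interface (objects with get/set) are assumptions.

```python
import functools
import logging

logger = logging.getLogger(__name__)

def cached_or_direct(cache):
    """Decorator: try the cache, but fall back to calling the function
    directly if the cache backend raises (e.g. redis is down)."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            key = (f.__name__,) + args
            try:
                rv = cache.get(key)
            except Exception:
                # cache backend is down: compute the value directly
                logger.exception("Exception possibly due to cache backend.")
                return f(*args, **kwargs)
            if rv is None:
                rv = f(*args, **kwargs)
                try:
                    cache.set(key, rv)
                except Exception:
                    logger.exception("Exception possibly due to cache backend.")
            return rv
        return wrapper
    return decorator
```

The key point is the same as in the Flask-Cache source: the try/except lives inside the decorator, so callers never see the backend exception.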
I would like to instrument the boto3 client so that it blocks a request when a rate limit is reached.
My code is based on the Instana instrumentation for the boto client, which acts like a 'read-only' instrumentation that only collects data and exports it.
I would like to add logic that blocks a request on rate limit in a way that causes the boto client to automatically retry sending the request (according to the boto retry settings).
Instana code snippet (with my raise_429_if_rate_limit_reached addition):
@wrapt.patch_function_wrapper('botocore.client', 'BaseClient._make_api_call')
def make_api_call_with_instana(wrapped, instance, arg_list, kwargs):
    # pylint: disable=protected-access
    active_tracer = get_active_tracer()

    # If we're not tracing, just return
    if active_tracer is None:
        return wrapped(*arg_list, **kwargs)

    with active_tracer.start_active_span("boto3", child_of=active_tracer.active_span) as scope:
        try:
            <... collect request info ...>
        except Exception as exc:
            logger.debug("make_api_call_with_instana: collect error", exc_info=True)

        # check for rate limit and block request on 429 Status
        raise_429_if_rate_limit_reached()

        try:
            result = wrapped(*arg_list, **kwargs)
            <... collect response info ...>
            return result
        except Exception as exc:
            scope.span.mark_as_errored({'error': exc})
            raise
So with this patching, if we raise an exception instead of calling the wrapped method, no retry takes place.
The retry logic of boto only takes place further down, inside the BaseClient._send_request method. The call hierarchy is:
_make_api_call -> _make_request -> endpoint.make_request -> endpoint._send_request -> endpoint._get_response
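To illustrate why the raise has to happen at (or below) the retried layer, here is a minimal stand-in for the hierarchy above. The names (RateLimited, send_with_retries, make_api_call) are hypothetical, not botocore's; this only sketches the structural point.

```python
class RateLimited(Exception):
    """Stand-in for a 429 rate-limit response surfaced as an exception."""

def send_with_retries(do_request, max_attempts=3):
    # Plays the role of _send_request: it only retries exceptions raised
    # by the layer it calls (the _get_response level below it).
    for attempt in range(1, max_attempts + 1):
        try:
            return do_request()
        except RateLimited:
            if attempt == max_attempts:
                raise

def make_api_call(do_request):
    # Plays the role of _make_api_call: an exception raised *here*,
    # before send_with_retries is entered, would bypass the retry loop.
    return send_with_retries(do_request)
```

This suggests the blocking logic needs to be injected inside the function that the retry loop wraps, not in _make_api_call itself.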
So where exactly can I add my patching logic?
inside endpoint._get_response method?
inside endpoint._do_get_response method?
elsewhere?
Thanks a lot.
Working with a MySQL database and flask-sqlalchemy, I am encountering a lost-connection error ('Lost connection to MySQL server during query'). I have already adapted app.config['SQLALCHEMY_POOL_RECYCLE'] to be smaller than the engine timeout. I also added a pool_pre_ping, to ensure the database has not gone away between two requests. Now I have no idea left how this can still be an issue, since my understanding is that flask-sqlalchemy should take care of opening and closing sessions correctly.
As a workaround, I thought about a way to tell flask-sqlalchemy to catch lost connection responses and restart the connection on the fly. But I have no idea how to do this. So, my questions are:
Do you know what could possibly cause my connection loss?
Do you think, my recent approach of catching is a good idea or do you have a better suggestion?
If this is a good idea, how can I do this most conveniently? I don't want to wrap all requests in try-catch-statements, since I have a lot of code.
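For reference, the pool settings described in the question are typically configured like this in Flask-SQLAlchemy (a sketch; 280 is an assumed value chosen to sit below MySQL's wait_timeout, adjust to your server):

```python
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_pre_ping': True,   # test each connection before handing it out
    'pool_recycle': 280,     # recycle connections below MySQL's wait_timeout
}
```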
I do not know the answer to your 1st and 2nd questions, but for the 3rd question, I used a decorator to wrap all my functions instead of using try / except directly inside them. The explicit pre-ping and session rollback / close somehow also solved the Lost Connection problem for me (MariaDB was the backend I was using)!
import functools

def manage_session(f):
    @functools.wraps(f)  # preserve the wrapped function's name and docstring
    def inner(*args, **kwargs):
        # MANUAL PRE PING
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except:
            db.session.rollback()
        finally:
            db.session.close()

        # SESSION COMMIT, ROLLBACK, CLOSE
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception:
            db.session.rollback()
            raise  # re-raise, preserving the original traceback
            # OR return traceback.format_exc()
        finally:
            db.session.close()
    return inner
and then wrapping my functions with the decorator:
@manage_session
def my_function(*args, **kwargs):
    return "result"
I wrote a general error handler for a flask application like this
def error_handler(error):
    if isinstance(error, HTTPException):
        description = error.get_description(request.environ)
        code = error.code
        name = error.name
    else:
        description = ("We encountered an error "
                       "while trying to fulfill your request")
        code = 500
        name = 'Internal Server Error'

    templates_to_try = ['errors/error{}.html'.format(code), 'errors/generic_error.html']
    return render_template(templates_to_try,
                           code=code,
                           name=Markup(name),
                           description=Markup(description),
                           error=error)

def init_app(app):
    ''' Function to register error_handler in app '''
    for exception in default_exceptions:
        app.register_error_handler(exception, error_handler)
    app.register_error_handler(Exception, error_handler)
which I registered with the app as
error_handler.init_app(app)
but in the case of a 413 error (Request Entity Too Large) I do not get routed to the error handler. Instead, I can create an additional error handler like this
@app.errorhandler(413)
def request_entity_too_large(error):
    return 'File Too Large', 413
which catches the error fine.
I found that when I raise the RequestEntityTooLarge error artificially within my app, the error handler works fine. So it must have to do with the fact that the error gets raised within the werkzeug package:
RequestBase._load_form_data(self)
File "/usr/local/lib/python2.7/site-packages/werkzeug/wrappers.py", line 385, in _load_form_data
mimetype, content_length, options)
File "/usr/local/lib/python2.7/site-packages/werkzeug/formparser.py", line 197, in parse
raise exceptions.RequestEntityTooLarge()
RequestEntityTooLarge: 413 Request Entity Too Large: The data value transmitted exceeds the capacity limit.
Does anybody know why my first solution cannot capture 413 errors? But my second solution can? How would I need to modify my error_handler to capture the 413 error?
OK, I found the solution. Changing the error_handler to
return render_template(templates_to_try,
                       code=code,
                       name=Markup(name),
                       description=Markup(description),
                       error=error), code
does solve the problem, though I am not sure exactly why.
The problem lies in the Flask development server. It is not a fully fledged server and falls short in this respect. You don't have to worry about it, because with a production WSGI server it will work as expected with a normal error handler.
To quote flask documentation:
When using the local development server, you may get a connection
reset error instead of a 413 response. You will get the correct status
response when running the app with a production WSGI server.
I'm developing a Python service (class) for accessing a Redis server. I want to know how to check whether the Redis server is running, and also how to handle the case where I somehow cannot connect to it.
Here is a part of my code
import redis
rs = redis.Redis("localhost")
print rs
It prints the following
<redis.client.Redis object at 0x120ba50>
even if my Redis Server is not running.
I found that my Python code connects to the server only when I do a set() or get() on my Redis instance.
So I don't want other services using my class to get an exception saying
redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused.
I want to return a proper message/error code instead. How can I do that?
If you want to test redis connection once at startup, use the ping() command.
from redis import Redis
redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1) # short timeout for the test
r.ping()
print('connected to redis "{}"'.format(redis_host))
The command ping() checks the connection and if invalid will raise an exception.
Note - the connection may still fail after you perform the test so this is not going to cover up later timeout exceptions.
The official way to check redis server availability is ping ( http://redis.io/topics/quickstart ).
One solution is to subclass redis and do 2 things:
check for a connection at instantiation
write an exception handler in the case of no connectivity when making requests
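A hedged sketch of those two steps, written as a wrapper rather than a subclass so it works with any redis-like client. GuardedCache is a made-up name, not part of redis-py; it catches broad Exception for illustration, where you would likely narrow this to redis.exceptions.ConnectionError.

```python
class GuardedCache:
    """Wrap a redis-like client: check connectivity at instantiation,
    and degrade gracefully if the connection is lost later."""

    def __init__(self, client):
        self.client = client
        try:
            client.ping()          # step 1: check the connection up front
            self.available = True
        except Exception:
            self.available = False

    def get(self, key):
        if not self.available:
            return None            # step 2: no exception leaks to callers
        try:
            return self.client.get(key)
        except Exception:
            self.available = False
            return None
```

Callers then get None on a dead cache instead of a ConnectionError, which matches what the question asks for.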
As you said, the connection to the Redis Server is only established when you try to execute a command on the server. If you do not want to go head forward without checking that the server is available, you can just send a random query to the server and check the response. Something like :
try:
    response = rs.client_list()
except redis.ConnectionError:
    # your error handling code here
There are already good solutions here, but here's my quick and dirty for django_redis which doesn't seem to include a ping function (though I'm using an older version of django and can't use the newest django_redis).
# assuming rs is your redis connection
def is_redis_available():
    # ... get redis connection here, or pass it in. up to you.
    try:
        rs.get(None)  # getting None returns None or throws an exception
    except (redis.exceptions.ConnectionError,
            redis.exceptions.BusyLoadingError):
        return False
    return True
This seems to work just fine. Note that if redis is restarting and still loading the .rdb file that holds the cache entries on disk, it will throw the BusyLoadingError, though its base class is ConnectionError, so it's fine to just catch that.
You can also simply except on redis.exceptions.RedisError which is the base class of all redis exceptions.
Another option, depending on your needs, is to create get and set functions that catch the ConnectionError exceptions when setting/getting values. Then you can continue or wait or whatever you need to do (raise a new exception or just throw out a more useful error message).
This might not work well if you absolutely depend on setting/getting the cache values (for my purposes, if cache is offline for whatever we generally have to "keep going") in which case it might make sense to have the exceptions and let the program/script die and get the redis server/service back to a reachable state.
I have also come across a ConnectionRefusedError (the built-in exception raised at the socket level) when redis was not running, therefore I had to add that to the availability check.
r = redis.Redis(host='localhost', port=6379, db=0)

def is_redis_available(r):
    try:
        r.ping()
        print("Successfully connected to redis")
    except (redis.exceptions.ConnectionError, ConnectionRefusedError):
        print("Redis connection error!")
        return False
    return True

if is_redis_available(r):
    print("Yay!")
Redis server connection can be checked by executing ping command to the server.
>>> import redis
>>> r = redis.Redis(host="127.0.0.1", port="6379")
>>> r.ping()
True
Using the ping method, we can handle reconnection, etc. To find out the reason for a connection error, exception handling can be used as suggested in other answers:
try:
    is_connected = r.ping()
except redis.ConnectionError:
    # handle error
    pass
Use ping()
import redis
from redis import Redis

conn_pool = Redis(redis_host)
# Connection=Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
try:
    conn_pool.ping()
    print('Successfully connected to redis')
except redis.exceptions.ConnectionError as r_con_error:
    print('Redis connection error')
    # handle exception
I've recently started developing my first web app with GAE and Python, and it is a lot of fun.
One problem I've been having is exceptions being raised when I don't expect them (since I'm new to web apps). I want to:
Prevent users from ever seeing exceptions
Properly handle exceptions so they don't break my app
Should I put a try/except block around every call to put and get?
What other operations could fail that I should wrap with try/except?
You can create a method called handle_exception on your request handlers to deal with unexpected situations.
The webapp framework will call this automatically when it hits an issue:
class YourHandler(webapp.RequestHandler):

    def handle_exception(self, exception, mode):
        # run the default exception handling
        webapp.RequestHandler.handle_exception(self, exception, mode)
        # note the error in the log
        logging.error("Something bad happened: %s" % str(exception))
        # tell your users a friendly message
        self.response.out.write("Sorry lovely users, something went wrong")
You can wrap your views in a method that will catch all exceptions, log them and return a handsome 500 error page.
def prevent_error_display(fn):
    """Returns either the original request or a 500 error page"""
    def wrap(self, *args, **kwargs):
        try:
            return fn(self, *args, **kwargs)
        except Exception, e:
            # ... log ...
            self.response.set_status(500)
            self.response.out.write('Something bad happened back here!')
    wrap.__doc__ = fn.__doc__
    return wrap
# A sample request handler
class PageHandler(webapp.RequestHandler):

    @prevent_error_display
    def get(self):
        # process your page request
        pass