Working with a MySQL database and Flask-SQLAlchemy, I keep running into a lost connection error ('Lost connection to MySQL server during query'). I have already set app.config['SQLALCHEMY_POOL_RECYCLE'] to a value smaller than the engine timeout, and I added pool_pre_ping to make sure the database has not gone away between two requests. I am out of ideas as to how this can still be an issue, since my understanding is that Flask-SQLAlchemy takes care of opening and closing sessions correctly.
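For reference, the relevant configuration looks roughly like this (a sketch; the values are placeholders, and SQLALCHEMY_ENGINE_OPTIONS assumes Flask-SQLAlchemy 2.4+):
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280  # smaller than MySQL's wait_timeout
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}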
As a workaround, I thought about a way to tell Flask-SQLAlchemy to catch lost-connection responses and restart the connection on the fly, but I have no idea how to do this. So, my questions are:
Do you know what could possibly cause my connection loss?
Do you think my approach of catching the error is a good idea, or do you have a better suggestion?
If it is a good idea, how can I do it most conveniently? I don't want to wrap every request in a try/except block, since I have a lot of code.
I do not know the answer to your first and second questions, but for the third, I used a decorator to wrap all my functions instead of using try/except directly inside them. The explicit pre-ping plus the session rollback/close also solved the lost-connection problem for me (MariaDB was the backend I was using)!
import functools

def manage_session(f):
    @functools.wraps(f)  # keep the wrapped function's name (matters for Flask view functions)
    def inner(*args, **kwargs):
        # MANUAL PRE PING: probe the connection, discarding any stale session
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except:
            db.session.rollback()
        finally:
            db.session.close()
        # SESSION COMMIT, ROLLBACK, CLOSE
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception as e:
            db.session.rollback()
            raise e
            # OR return traceback.format_exc()
        finally:
            db.session.close()
    return inner
and then wrapping my functions with the decorator:
@manage_session
def my_function(*args, **kwargs):
    return "result"
I'm developing a basic flask application to receive an input request from the user and insert it into a MongoDB Atlas cluster.
I have a route /save of type POST. This endpoint receives the request, opens a new Mongo connection, inserts into Mongo, and finally closes the connection. This approach is slow, with an average response latency of 700-800 ms, even though I am only inserting one document.
Note: bulk insert does not make sense for my use case.
Sample Code
import logging

import flask
from flask import Flask
from pymongo import MongoClient

app = Flask(__name__)
app.logger.setLevel(logging.INFO)

DBNAME = 'DBNAME as String'
CONNSTRING = 'CONNECTION as String'

class mongoDB:
    def __init__(self):
        try:
            self.client = MongoClient(CONNSTRING, maxPoolSize=None)
            self.database = self.client[DBNAME]
            app.logger.info('Mongo Connection Established')
        except Exception as e:
            app.logger.warning('Mongo Connection could not be established')
            app.logger.warning('Error Message: ' + str(e))

    def close_connection(self):
        try:
            self.client.close()
        except Exception as e:
            app.logger.warning('connection failed to close')
            app.logger.warning('Error Message: ' + str(e))

@app.route('/save', methods=['POST'])
def save():
    data_info = flask.request.get_json()
    try:
        db = mongoDB()
        image_collection = db.database['DUMMY_COLLECTION']
        image_collection.insert_one({'VALUE_ID': data_info['value1'],
                                     'VALUE_STRING': data_info['value2']})
        app.logger.info('Inserted Successfully')
        return {'message': 'Success'}, 200, {'Content-Type': 'application/json'}
    except Exception as e:
        app.logger.error('Error Adding data to Mongo: ' + str(e))
        return {'message': 'Error'}, 500, {'Content-Type': 'application/json'}
    finally:
        db.close_connection()
        app.logger.info('connection closed')

if __name__ == '__main__':
    app.run()
However, if I establish the Mongo connection at application initialization, keep it open, and never close it, the latency drops to 70-80 ms.
Could someone please help me understand the consequences of keeping one connection open instead of establishing a new connection for each request? Or is there any method to reduce latency while still opening a connection per request?
Note: with the connection-per-request approach, I tried writeConcern=0, maxPoolSize=None, and journal=False, but none of these improved the latency much.
Any help would be appreciated. Thanks.
MongoClient(CONNSTRING, maxPoolSize=None) is not just a single connection but a connection pool, so that one object can already serve multiple concurrent requests to MongoDB. Setting maxPoolSize=None makes the pool unbounded (which can have implications under heavy load).
It is an antipattern to create a connection pool per request (as you noticed from the high latency), because every request then pays the cost of creating the pool and performing the handshake with the database.
The best way is to create one client on startup and keep it around. That means handling the exceptions that might arise from DB or network failures, although most of this is already handled by MongoClient itself.
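A minimal sketch of that recommendation applied to the question's code (DBNAME, CONNSTRING, and the collection name are taken from the question; the rest is an assumption, not a definitive implementation):

from flask import Flask, request
from pymongo import MongoClient

DBNAME = 'DBNAME as String'
CONNSTRING = 'CONNECTION as String'

app = Flask(__name__)
client = MongoClient(CONNSTRING)  # created once; holds a pool for the whole process
collection = client[DBNAME]['DUMMY_COLLECTION']

@app.route('/save', methods=['POST'])
def save():
    data_info = request.get_json()
    collection.insert_one({'VALUE_ID': data_info['value1'],
                           'VALUE_STRING': data_info['value2']})
    return {'message': 'Success'}, 200, {'Content-Type': 'application/json'}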
I am trying to create a simple HTTP server using Python's http.server module (HTTPServer and BaseHTTPRequestHandler): https://github.com/python/cpython/blob/main/Lib/http/server.py
There are numerous examples of this approach online and I don't believe I am doing anything unusual.
I am simply importing the classes via "from http.server import HTTPServer, BaseHTTPRequestHandler" in my code.
My code overrides the do_GET() method to parse the path variable to determine what page to show.
However, if I start this server and connect to it locally (e.g. http://127.0.0.1:50000), the first page loads fine. If I navigate to another page (via links on the first page), that too works fine. However, on occasion (and this is somewhat sporadic), there is a delay and the server log shows a Request timed out: timeout('timed out') error. I have tracked this down to the handle_one_request method in the BaseHTTPRequestHandler class:
def handle_one_request(self):
    """Handle a single HTTP request.

    You normally don't need to override this method; see the class
    __doc__ string for information on how to handle specific HTTP
    commands such as GET and POST.

    """
    try:
        self.raw_requestline = self.rfile.readline(65537)
        if len(self.raw_requestline) > 65536:
            self.requestline = ''
            self.request_version = ''
            self.command = ''
            self.send_error(HTTPStatus.REQUEST_URI_TOO_LONG)
            return
        if not self.raw_requestline:
            self.close_connection = True
            return
        if not self.parse_request():
            # An error code has been sent, just exit
            return
        mname = 'do_' + self.command  ## the name of the method is created
        if not hasattr(self, mname):  ## checking that we have that method defined
            self.send_error(
                HTTPStatus.NOT_IMPLEMENTED,
                "Unsupported method (%r)" % self.command)
            return
        method = getattr(self, mname)  ## getting that method
        method()  ## finally calling it
        self.wfile.flush()  # actually send the response if not already done.
    except socket.timeout as e:
        # a read or a write timed out. Discard this connection
        self.log_error("Request timed out: %r", e)
        self.close_connection = True
        return
You can see where the exception is thrown in the "except socket.timeout as e:" clause.
I have tried overriding this method by including it in my code, but it is not clear what is causing the error, so I keep running into dead ends. I've tried serving very basic HTML pages to see if there was something in the page itself, but even "blank" pages cause the same sporadic issue.
What's odd is that sometimes a page loads instantly, and then, almost randomly, a request times out. Sometimes it's the same page, sometimes a different page.
I've played with the handler's timeout setting, but it makes no difference. I suspect it's some underlying socket issue, but I am unable to diagnose it further.
This is on a Mac running Big Sur 11.3.1, with Python version 3.9.4.
Any ideas on what might be causing this timeout, and in particular any suggestions on a resolution. Any pointers would be appreciated.
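For reference, a stripped-down sketch of the setup described above (the page content and port are assumptions):

from http.server import HTTPServer, BaseHTTPRequestHandler

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # parse self.path to decide which page to show, as described in the question
        body = b'<html><body>Hello</body></html>'
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 50000), MyHandler).serve_forever()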
After further investigation, this appears to be an issue with Safari. Running the exact same code with Firefox does not show the same issue.
I have a simple code using flask:
@app.route('/foo/<arg>')
@app.cache.memoize()
def foo_response(arg):
    return 'Hello ' + arg
This is working great while my redis server (cache server) is up.
If the redis server goes down, an exception is raised every time I query /foo/<arg>, which is understandable.
How (and where) can I handle that exception (à la try/except) so that the app simply skips the redis server while it is down?
It is actually already implemented this way. Checking the source of memoize() in the Flask-Cache package, you see:
try:
    cache_key = decorated_function.make_cache_key(f, *args, **kwargs)
    rv = self.cache.get(cache_key)
except Exception:
    if current_app.debug:
        raise
    logger.exception("Exception possibly due to cache backend.")
    return f(*args, **kwargs)
This means that in production, i.e. with app.debug=False, you will only see the exception in the log and the function will be called normally.
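In other words, with debug disabled the decorated view degrades gracefully on its own. A minimal sketch of the assumed production setting:

# Assumption: production config. With DEBUG off, a dead cache backend is
# logged ("Exception possibly due to cache backend.") and the memoized
# function runs uncached instead of raising.
app.config['DEBUG'] = False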
I have a typical Pyramid+SQLAlchemy+Postgres app. In stress testing, or during moments of exceptional load combined with a low max_connections setting in PG, it can happen that an OperationalError is raised:
OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
Now, obviously I do not want to do this everywhere:
try:
    DBSession.query(Item)...
except OperationalError as e:
    log.error(...)
Is there some way of catching this exception "globally" to be properly handled?
My app uses ZopeTransactionExtension in typical Pyramid manner:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
I managed to develop a tween that can do this (example):
def catch_pg_exc_tween_factory(handler, registry):
    def catch_pg_exc_tween_clos(request):
        response = None
        try:
            response = handler(request)
        except Exception as e:
            log.error('\n\n\n +++ problem: %s', e)
        return response
    return catch_pg_exc_tween_clos
The strange thing is that nothing but explicit tween ordering in development.ini works (no amount of over= or under= tuning of the config.add_tween call seemed to have any effect):
pyramid.tweens = pyramid_debugtoolbar.toolbar_tween_factory
pyramid.tweens.excview_tween_factory
pyramid_tm.tm_tween_factory
mypkg.util.tweens.catch_pg_exc_tween_factory
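For completeness, a hedged alternative sketch: Pyramid's exception views can also catch a specific exception type globally, without a custom tween (the status code and message below are assumptions, and this only covers errors raised inside view code, not failures at commit time inside pyramid_tm):

from pyramid.view import view_config
from sqlalchemy.exc import OperationalError

@view_config(context=OperationalError)
def operational_error_view(exc, request):
    # exc is the OperationalError instance; return a 503 instead of a traceback
    request.response.status_int = 503
    request.response.text = 'Database temporarily unavailable'
    return request.response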
I'm developing a Python service (class) for accessing a Redis server, and I want to know how to check whether the Redis server is running, and how to handle the case where I cannot connect to it.
Here is a part of my code
import redis

rs = redis.Redis("localhost")
print(rs)
It prints the following
<redis.client.Redis object at 0x120ba50>
even if my Redis Server is not running.
I found that my Python code connects to the server only when I do a set() or get() on my redis instance.
So I don't want other services using my class to get an exception saying
redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused.
I want to return a proper message/error code instead. How can I do that?
If you want to test redis connection once at startup, use the ping() command.
from redis import Redis
redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1) # short timeout for the test
r.ping()
print('connected to redis "{}"'.format(redis_host))
The ping() command checks the connection and raises an exception if it is invalid.
Note: the connection may still fail after you perform the test, so this does not protect against later timeout exceptions.
The official way to check redis server availability is ping (http://redis.io/topics/quickstart).
One solution is to subclass redis and do two things (see the sketch after this list):
check for a connection at instantiation
write an exception handler in the case of no connectivity when making requests
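A minimal sketch of that idea (the class name and the None fallback are my assumptions, not a definitive implementation):

import redis

class SafeRedis(redis.Redis):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.ping()  # 1. verify connectivity at instantiation; raises ConnectionError

    def get(self, name):
        try:
            return super().get(name)  # 2. handle lost connectivity per request
        except redis.exceptions.ConnectionError:
            return None  # or return a proper message/error code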
As you said, the connection to the Redis server is only established when you try to execute a command on the server. If you do not want to go ahead without checking that the server is available, you can send a simple query to the server and check the response. Something like:
try:
    response = rs.client_list()
except redis.ConnectionError:
    # your error handling code here
    pass
There are already good solutions here, but here's my quick and dirty one for django_redis, which doesn't seem to include a ping function (though I'm using an older version of Django and can't use the newest django_redis).
# assuming rs is your redis connection
def is_redis_available():
    # ... get redis connection here, or pass it in. up to you.
    try:
        rs.get(None)  # getting None returns None or throws an exception
    except (redis.exceptions.ConnectionError,
            redis.exceptions.BusyLoadingError):
        return False
    return True
This seems to work just fine. Note that if redis is restarting and still loading the .rdb file that holds the cache entries on disk, it will throw a BusyLoadingError, though its base class is ConnectionError, so it's fine to just catch that.
You can also simply except on redis.exceptions.RedisError which is the base class of all redis exceptions.
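For example (a minimal sketch; rs is assumed to be a redis.Redis instance as in the question):

import redis

rs = redis.Redis('localhost')

try:
    rs.get('some-key')
except redis.exceptions.RedisError as e:
    # one handler covers ConnectionError, BusyLoadingError, TimeoutError, ...
    print('redis unavailable: {}'.format(e))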
Another option, depending on your needs, is to create get and set functions that catch the ConnectionError exceptions when setting/getting values. Then you can continue, wait, or do whatever else you need (raise a new exception, or just produce a more useful error message).
This might not work well if you absolutely depend on setting/getting the cache values (for my purposes, if the cache is offline we generally have to "keep going"), in which case it may make more sense to let the exceptions propagate, let the program/script die, and bring the redis server/service back to a reachable state.
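A hedged sketch of such wrapper functions (the names and the None fallback are assumptions):

import redis

rs = redis.Redis('localhost')

def safe_set(key, value):
    try:
        return rs.set(key, value)
    except redis.exceptions.ConnectionError:
        return None  # cache offline: keep going

def safe_get(key):
    try:
        return rs.get(key)
    except redis.exceptions.ConnectionError:
        return None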
I have also come across a ConnectionRefusedError (raised at the socket level) when redis was not running, so I had to add that to the availability check.
r = redis.Redis(host='localhost', port=6379, db=0)

def is_redis_available(r):
    try:
        r.ping()
        print("Successfully connected to redis")
    except (redis.exceptions.ConnectionError, ConnectionRefusedError):
        print("Redis connection error!")
        return False
    return True

if is_redis_available(r):
    print("Yay!")
A Redis server connection can be checked by executing the ping command against the server.
>>> import redis
>>> r = redis.Redis(host="127.0.0.1", port="6379")
>>> r.ping()
True
Using the ping method, we can also handle reconnection, etc. To find out the reason for a connection error, exception handling can be used, as suggested in other answers:
try:
    is_connected = r.ping()
except redis.ConnectionError:
    pass  # handle error
Use ping()
import redis
from redis import Redis

redis_host = 'localhost'  # define the host before connecting
conn_pool = Redis(redis_host)
# Connection=Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>

try:
    conn_pool.ping()
    print('Successfully connected to redis')
except redis.exceptions.ConnectionError as r_con_error:
    print('Redis connection error')
    # handle exception