Force django to reopen database connection if lost - python

In my Python/Django web application, the database sometimes disconnects (problems related to my test environment, which is not very stable...) and my web app gives me this error:
File "/usr/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 222, in create_cursor,
django.db.utils.InterfaceError: connection already closed,
cursor = self.connection.cursor()
Now, how can I tell Django to retry opening the connection and continue? It seems that Django remains stuck at this point...
Thanks.

There's no way to tell Django that it should retry on connection error. It's instead designed to simply fail on that one request. From the documentation:
If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
However, this shouldn't be a problem if you follow this advice in the documentation:
If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server.
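For reference, a minimal sketch of where that setting lives (database name and credentials here are placeholders, not from the question):

# settings.py - with CONN_MAX_AGE = 0, Django opens a fresh connection per
# request and closes it afterwards, so a connection the server has already
# killed is never reused.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',        # hypothetical database name/credentials
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
        'CONN_MAX_AGE': 0,     # or any value below the server's idle timeout
    }
}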

Related

Taking mongoengine.connect out of the setting.py in django

Most of the blog posts and examples on the web about connecting to MongoDB using Mongoengine in Python/Django suggest adding these lines to the app's settings.py file:
from mongoengine import connect
connect('project1', host='localhost')
It works fine in most cases, except one I faced recently:
When the database is down!
Let's say the DB goes down: the process supervising the web server (in my case, Supervisord) will stop running the app because of the exception that connect throws. It may retry a few more times, but once its timeout is reached it stops trying.
So even the parts of your app that are not tied to the DB will break down.
A quick solution to this is adding a try/exception block to the connect code:
try:
    connect('project1', host='localhost')
except Exception as e:
    print(e)
but I am looking for a better, cleaner way to handle this.
Unfortunately this is not really possible with mongoengine unless you go with the try-except solution like you did.
You could try to connect with pure pymongo version 3.0+ using MongoClient and registering the connection manually in the mongoengine.connection._connection_settings dictionary (quite hacky but should work). From pymongo documentation:
Changed in version 3.0: MongoClient is now the one and only client class for a standalone server, mongos, or replica set. It includes the functionality that had been split into MongoReplicaSetClient: it can connect to a replica set, discover all its members, and monitor the set for stepdowns, elections, and reconfigs.
The MongoClient constructor no longer blocks while connecting to the server or servers, and it no longer raises ConnectionFailure if they are unavailable, nor ConfigurationError if the user’s credentials are wrong. Instead, the constructor returns immediately and launches the connection process on background threads.
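A rough sketch of that hack, hedged heavily since it pokes at mongoengine internals (the exact keys expected in _connection_settings vary between mongoengine versions, so treat this as an assumption rather than a supported API):

# pymongo 3.0+: MongoClient returns immediately and connects in the background,
# so this no longer throws at startup even if the DB is down.
from pymongo import MongoClient
import mongoengine.connection as me_conn

client = MongoClient('localhost', 27017)

# Register the client under mongoengine's default alias (internal, unsupported).
alias = me_conn.DEFAULT_CONNECTION_NAME  # usually 'default'
me_conn._connection_settings[alias] = {'name': 'project1', 'host': 'localhost', 'port': 27017}
me_conn._connections[alias] = client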

Django shell(plus) timeout on remote database connection

I'm migrating a legacy DB into a bunch of models I have running locally. I connected to the legacy DB and ran inspectdb to recreate the models. Now I'm writing functions to pair the numerous fields with their equivalents in the new models. I've been using shell_plus, and the first minute or so of queries go great, but my connections keep timing out with the following:
RemoteArticle.objects.using("remote_mysql").all()
django.db.utils.OperationalError: (2013, 'Lost connection to MySQL server during query')
Is there a command I can run to either a) reconnect to the db before running a query (so I don't have to reopen shell_plus), or ideally b) make it so that all of my queries automatically reconnect each time I run them?
I've seen timeout issues on other platforms, but I wasn't sure if Django had a built-in way of handling such things.
Thanks!
There is a page in the MySQL docs on this. Since you're apparently trying to migrate a big database, this part may apply to you:
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
The timeout makes sense, because .all() is a single query that retrieves every row, so reconnecting before each query is not the solution. If changing net_read_timeout is not an option, you might want to think about paging instead.
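If paging fits your migration script, a minimal sketch along those lines (batch size assumed; model and alias taken from the question):

# Assumes the inspectdb-generated RemoteArticle model is importable.
def iter_remote_articles(batch_size=1000):
    qs = RemoteArticle.objects.using("remote_mysql").order_by("pk")
    start = 0
    while True:
        batch = list(qs[start:start + batch_size])  # one bounded query per batch
        if not batch:
            break
        for article in batch:
            yield article
        start += batch_size

# usage: for article in iter_remote_articles(): ...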
I believe Lost connection to MySQL server during query happens because you exhaust a MySQL resource such as a timeout, sessions, or memory.
If the problem is the timeout, try increasing it on the DB server, e.g. --net_read_timeout=100.
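If you can't change the server-side setting, a hedged alternative (assuming the MySQLdb backend; the credentials here are placeholders) is to raise the timeout per session from Django's settings:

DATABASES = {
    "remote_mysql": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "legacydb",      # hypothetical credentials
        "HOST": "legacy-host",
        "USER": "user",
        "PASSWORD": "secret",
        "OPTIONS": {
            # Executed on each new connection; raises the read timeout
            # for that session only.
            "init_command": "SET SESSION net_read_timeout=100",
        },
    },
}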

How to implement callback on connection drop event using SQLObject?

I'm using a Python script that does certain flow control on our outgoing mail messages, mostly checking whether a user is sending spam.
The script establishes a persistent connection with a database via SQLObject. Under certain circumstances the connection is dropped by a third party (e.g. our firewall closes it due to excess idle time), and SQLObject doesn't notice it has been closed; it keeps sending queries over a dead TCP handle, resulting in log entries like these:
Feb 06 06:56:07 mailsrv2 flow: ERROR Processing request error: [Failure instance: Traceback: <class 'psycopg2.InterfaceError'>: connection already closed
/usr/lib/python2.7/threading.py:524:__bootstrap
/usr/lib/python2.7/threading.py:551:__bootstrap_inner
/usr/lib/python2.7/threading.py:504:run
--- <exception caught here> ---
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/threadpool.py:191:_worker
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/context.py:118:callWithContext
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/twisted/python/context.py:81:callWithContext
/opt/scripts/flow/server.py:91:check
/opt/scripts/flow/flow.py:252:check
/opt/scripts/flow/flow.py:155:append_to_log
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1226:__init__
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1274:_create
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/main.py:1298:_SO_finishCreate
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/dbconnection.py:468:queryInsertID
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/dbconnection.py:327:_runWithConnection
/opt/scripts/virtualenv/local/lib/python2.7/site-packages/SQLObject-1.3.1-py2.7.egg/sqlobject/postgres/pgconnection.py:191:_queryInsertID]
This makes me think that there must indeed be some callback for this kind of situation; otherwise that log entry wouldn't be written. I'd use that callback to establish a new connection to the database, but I've been unable to find any documentation about it.
Does anyone know if it's even possible to implement that callback and how to declare it?
Thanks.
We're more regular users of SQLAlchemy than SQLObject. According to this thread from 2010 (http://sourceforge.net/p/sqlobject/mailman/message/26460439), SQLObject does not support reconnection logic for PostgreSQL. It's an old thread, but there does not appear to be any later discussion about solving this from within SQLObject.
I have three suggested solutions.
The first solution is to explore connection pools. They might provide a way to open a new connection object when SQLObject detects that psycopg2 has disconnected. I can't guarantee they will, but if they do, this solution would be your best bet, as it requires the fewest changes on your part.
The second solution is to switch your backend from Postgres to MySQL. The SQLObject docs provide information on how to use the reconnection logic of the MySQL driver - http://sourceforge.net/p/mysql-python/feature-requests/9
The third solution is to switch to SQLAlchemy as your ORM and use its version of connection pools. According to the core documentation, when using pools, if a connection is dropped or closed, a new connection will be opened -- http://docs.sqlalchemy.org/en/rel_0_9/core/exceptions.html#sqlalchemy.exc.DisconnectionError
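As a rough illustration of that third option (the DSN and numbers here are assumptions, not from the question), SQLAlchemy's pool can recycle idle connections before the firewall cuts them, and SQLAlchemy 1.2+ can additionally ping each connection before handing it out:

from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://user:secret@dbhost/flowdb",  # hypothetical DSN
    pool_recycle=280,    # recycle connections before the firewall's idle cutoff
    pool_pre_ping=True,  # SQLAlchemy 1.2+: test each connection before use
)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # a dead connection is replaced transparently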
Best of luck

Flask-SQLAlchemy "MySQL server has gone away" when using HAproxy

I've built a small python REST service using Flask, with Flask-SQLAlchemy used for talking to the MySQL DB.
If I use HAproxy (which handles HA/failover, though in this dev environment there is only one DB server), I constantly get MySQL server has gone away errors when the application doesn't talk to the DB frequently enough.
My HAproxy client timeout is set to 50 seconds, so what I think is happening is that HAproxy cuts the stream, but the application isn't aware of this and tries to use an invalid connection.
Is there a setting I should be using when using services like HAproxy?
Also, it doesn't seem to reconnect automatically; if I then issue a request manually I get Can't reconnect until invalid transaction is rolled back, which is odd since I'm only making a select() call, so I don't think there's a commit() I'm missing - or should I be calling commit() after every ORM-based query?
Just to tidy up this question with an answer I'll post what I (think I) did to solve the issues.
Problem 1: HAproxy
Either increase the HAproxy client timeout value (globally, or in the frontend definition) to a value longer than what MySQL is set to reset on (see this interesting and related SF question)
Or set SQLALCHEMY_POOL_RECYCLE = 30 (30 in my case being less than the HAproxy client timeout) in Flask's app.config, so that when the DB is initialised it pulls in that setting and recycles connections before HAproxy cuts them itself, roughly as sketched below. Similar to this issue on SO.
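In config terms, that second option looks roughly like this (the URI is a placeholder; the import style matches the module shown further down):

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:secret@haproxy-host/appdb'  # hypothetical URI
app.config['SQLALCHEMY_POOL_RECYCLE'] = 30  # seconds; below the 50 s HAproxy client timeout

db = SQLAlchemy(app)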
Problem 2: Can't reconnect until invalid transaction is rolled back
I believe I fixed this by tweaking the way the DB is initialised and imported across various modules. I basically now have a module that simply has:
from flask.ext.sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Then in my main application factory I simply:
from common.database import db
db.init_app(app)
Also, since I wanted to load table structures automatically, I initialised the metadata binds within the app context. I think this is what cleanly handled the commit() issue/error I was getting, as I believe the database sessions are now correctly terminated after each request.
with app.app_context():
    # Setup DB binding
    db.metadata.bind = db.engine

Periodic OperationalError: (2006, 'MySQL server has gone away')

I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I get OperationalError: (2006, 'MySQL server has gone away'). However, if I re-save my wsgi.py file the error disappears, only to reappear some half an hour later...
During the loading of the main page, my app checks a BOOL field in a 1x1 table (basically whether sign-ups should be open or closed). The only other MySQL actions are inserts into another small table, but none of these appear to be associated with the problem.
Any ideas for how I can fix this? I can provide more information as is necessary. Thanks in advance for your help.
EDIT
The problem turned out to be a matter of knowing when certain portions of code run. I had assumed that a new connection was opened every time a page loaded; that was not the case, but I have fixed it now (see the sketch below).
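For anyone hitting the same thing, a hypothetical sketch of the kind of fix described (the Config model and view names are mine, not from the question): run the query per request, rather than once at import time when the WSGI process starts.

from django.http import HttpResponse

from myapp.models import Config  # hypothetical 1x1 table holding the BOOL flag

# BAD: a module-level query runs once at import; its connection can go stale.
# SIGNUPS_OPEN = Config.objects.get(pk=1).signups_open

def index(request):
    # GOOD: runs on every request, on Django's per-request connection.
    signups_open = Config.objects.get(pk=1).signups_open
    return HttpResponse("open" if signups_open else "closed")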
This normally happens because your MySQL network connection has been disconnected, possibly by your network gateway/router, so you have two options. One is to always open a fresh MySQL connection before every query (i.e. not using a connection pool). The other is to catch this error, then reconnect and query the DB again, roughly as sketched below.
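A minimal sketch of that second option in Django terms (the helper and model names are mine, not a Django API): catch the error, close the stale connection so Django lazily opens a fresh one, and retry.

from django.db import connection
from django.db.utils import OperationalError

def run_with_reconnect(query_fn, retries=1):
    for attempt in range(retries + 1):
        try:
            return query_fn()
        except OperationalError:
            if attempt == retries:
                raise
            connection.close()  # the next ORM query opens a fresh connection

# usage, e.g.: run_with_reconnect(lambda: Signup.objects.count())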
