Python loses connection to MySQL database after about a day

I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people who receive these errors have huge or corrupted tables, or something to that effect. That is not the case here: the old PHP application still runs fine, the tables all have fewer than about 50,000 rows and fewer than 30 columns, and the Python application runs fine until it has sat for about a day.
So, here's hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can point me in the right direction.

I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# Create the pool once at startup; the first connect() call with a
# pool_name sets it up.
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool."""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() at the top of each query function and then close the cursor and connection before returning, as in the sketch below. Seems to work. Still open to a better solution though.
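For illustration, a query function using this pattern might look like the following; the table and column names are hypothetical:

def get_user(user_id):
    """Hypothetical query function: fetch one row, then return the connection to the pool."""
    db, cursor = connection()
    try:
        cursor.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        return cursor.fetchone()
    finally:
        cursor.close()
        db.close()  # for a pooled connection, close() returns it to the pool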
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections.) There is actually a dedicated Flask extension for handling MySQL connections, which is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)
And to get a connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
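Putting the pieces together, a minimal setup might look like the following sketch; the host, credentials, route, and query are all placeholders:

from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)
app.config['MYSQL_DATABASE_HOST'] = '10.0.0.25'  # placeholders
app.config['MYSQL_DATABASE_USER'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_PASSWORD'] = 'xxxxxxx'
app.config['MYSQL_DATABASE_DB'] = 'xxxxxxx'

mysql = MySQL()
mysql.init_app(app)

@app.route('/count')
def count():
    # get_db() returns the connection managed by the extension
    cursor = mysql.get_db().cursor()
    cursor.execute("SELECT COUNT(*) FROM users")  # placeholder query
    (total,) = cursor.fetchone()
    return str(total)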

Related

Heroku Postgresql local connection

Everything worked great until today, even though I did not change anything in the code.
I can't connect to the database from the application or from the IDE. Did something go wrong on the Heroku side? I have not seen any news about global updates on the Heroku website over the past couple of days. Can anyone advise how to solve this problem?
I use the free dyno and PostgreSQL tiers, and I definitely still have plenty of free space (fewer than a thousand records). It looks as though access to the database is being blocked locally, not on the service side.
What I would try in your situation:
Go to https://data.heroku.com/, select your Datastore and check everything there: Health, number of connections, number of rows, data size.
If everything is fine: go to Settings -> Database Credentials and try setting up a connection from any desktop tool such as Navicat or pgAdmin (or a quick script, as sketched below). What error message do you get?
Set up another database on Heroku and try the same. If the second DB works, there is an issue with the first one. If it does not, the problem is more likely in your setup/settings.
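For the credentials check, a quick script works as well as a desktop tool; a minimal psycopg2 sketch, where every value is a placeholder to be filled in from the Heroku dashboard:

import psycopg2

# Heroku Postgres requires SSL for external connections.
conn = psycopg2.connect(
    host="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",  # placeholder
    dbname="dxxxxxxxx",
    user="uxxxxxxxx",
    password="xxxxxxxx",
    port=5432,
    sslmode="require",
)
print("connected:", conn.closed == 0)
conn.close()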
Hope that helps

SQLAlchemy: Engine/Connection disconnects after some hours of inactivity

Situation
I have a plotly-dash application running in a docker container (based on python3.7-slim).
The app is accessing a postgres database and visualizes the queried data.
However, if the app has not been used for some time (I would estimate around 24-48 hours; we first noticed this issue on Mondays after nobody used the app during the weekend), i.e. if no data has been queried from the database, the app freezes and the logs show some errors related to the database.
I cannot fully access the logs, but they contain this error:
AttributeError: 'Connection' object has no attribute '_Connection__connection'
followed by all the pieces of code that tried to query data from the database (but not what exactly went wrong).
The problem was always solved by restarting the app (and thus making a new connection to the database).
Assumption
As stated above, this always occurred after a period of inactivity, so my assumption is that the engine disconnects after some idle time.
Code Sample
For accessing the database, I have a DatabaseConnection class. The relevant part of the code contains something like this:
from sqlalchemy import create_engine
...
engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
self.engine = engine.connect()
...
Question
What is the best solution for overcoming the issue of the disconnect after some inactivity?
How could I possibly check whether the database connection is still active and if not, reconnect it somehow?
Is there a better way to access the database than through an engine object?
Is there something wrong with my approach in general?
Please let me know if you require further information. Thanks in Advance.
There is an error in my code. It should be
self.engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
and the second line should be omitted. I misunderstood what engine.connect() is doing: It returns a Connection object (not an engine, as the attribute name suggests).
Then, for each query I execute, I use the context manager like this:
with self.engine.connect() as conn:
    table1 = pd.read_sql_table("table1", con=conn)
That way, the Connection object is closed after it has been used. But the engine object may open new connections whenever necessary.
In my previous solution, the Connection was killed after some idle time.
(Based on this GitHub Discussion)
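If stale connections ever resurface, SQLAlchemy can also test connections on checkout; a minimal sketch, with a placeholder URL:

from sqlalchemy import create_engine

# pool_pre_ping issues a lightweight ping on each checkout and transparently
# replaces connections that the server has dropped in the meantime.
engine = create_engine(
    "postgresql+psycopg2://user:passw@host:5432/db_name",  # placeholder
    pool_pre_ping=True,
)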

Django shell(plus) timeout on remote database connection

I'm migrating a legacy db into a bunch of models I have running locally. I connected to the legacy db and ran inspectdb to recreate the models. Now I'm writing functions to pair the numerous fields to their equivalents in new models. I've been using shell_plus, and the first minute or so of queries go great, but my connections keep timing out with the following:
RemoteArticle.objects.using("remote_mysql").all()
django.db.utils.OperationalError: (2013, 'Lost connection to MySQL server during query')
Is there a command I can run to either a) reconnect to the db before running a query (so I don't have to reopen shell_plus), or ideally b) make it so that all of my queries automatically reconnect each time I run them?
I've seen timeout issues on other platforms, but I wasn't sure if Django had a built-in way of handling such things.
Thanks!
There is a page in the MySQL docs on this. Since you're apparently trying to migrate a big database, this part may apply to you:
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
The timeout makes sense, because the all() is just one query that retrieves every row. So reconnecting before each query is not the solution. If changing net_read_timeout is not an option, you might want to think about paging; a sketch follows.
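A minimal keyset-paging sketch using the model and database alias from the question; the chunk size and per-row processing are illustrative:

CHUNK = 1000
qs = RemoteArticle.objects.using("remote_mysql").order_by("pk")
last_pk = 0
while True:
    # Fetch one bounded page per query instead of every row at once.
    batch = list(qs.filter(pk__gt=last_pk)[:CHUNK])
    if not batch:
        break
    for article in batch:
        pass  # map fields to the new models here
    last_pk = batch[-1].pk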
I believe Lost connection to MySQL server during query happens because you exhaust a MySQL resource such as the timeout, sessions, or memory.
If the problem is the timeout, try increasing it with --net_read_timeout=100 on the DB server.

SQLAlchemy hangs while connecting to SQL Azure, but not always

I have a Django application which makes use of SQLAlchemy to connect to a SQL Server instance on Windows Azure. The app worked perfectly for 3 months on a local SQL Server instance, and for over a month on an Azure instance. The issues appeared this Monday, after a week without any code changes.
The site uses:
Python 2.7
Django 1.6
Apache/Nginx
SQLAlchemy 0.9.3
pyODBC 3.0.7
FreeTDS
The application appears to lock up right after a connection is pulled out of the Pool (I have setup verbose logging at every point in the workflow). I assumed this had something to do with the connections going stale. So we tried making the pool_recycle incredibly short (5 secs), all the way up to an hour. That did not help.
We also tried using the NullPool to force a new connection on every page view. However that does not help either. After about 15 minutes the site will completely lock up again (meaning no pages that use the database are viewable).
The weird thing is that half the computers that experience the "hang" will end up loading the page about 15 minutes later.
Has anyone had any experience with SQL Azure and SQLAlchemy?
I found a workaround for this issue. Please note that this is definitely not a fix, since the site worked perfectly fine before. We could not determine what the actual issue was, because SQL Azure has no error log (one of the 100 reasons I would suggest never choosing SQL Azure over a real database server).
I got around the problem by turning off all Connection Pooling, at the application level, AND at the driver level.
Things started consistently working after making my /etc/odbcinst.ini look like:
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
# Some installations may differ in the paths
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
CPReuse =
CPTimeout = 0
FileUsage = 1
Pooling = No
The key is setting CPTimeout (Connection Pool Timeout) to 0 and Pooling to No. Just turning pooling off at the application level (in SQLAlchemy) did not work; only after setting it at the driver level did things start working smoothly.
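For reference, turning pooling off at the application level in SQLAlchemy looks roughly like the sketch below (on its own it was not sufficient here; the DSN is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool opens a fresh connection for every checkout and closes it on release.
engine = create_engine("mssql+pyodbc://user:pass@azure_dsn", poolclass=NullPool)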
I am now at 4 days without a problem after changing that setting.

How to manage db connections using webpy with SQLObject?

Web.py has its own database API, web.db. It's possible to use SQLObject instead, but I haven't been able to find documentation describing how to do this properly. I'm especially interested in managing database connections. It would be best to establish a connection at the wsgi entry point, and reuse it. Webpy cookbook contains an example how to do this with SQLAlchemy. I'd be interested to see how to properly do a similar thing using SQLObject.
This is how I currently do it:
class MyPage(object):
    def GET(self):
        ConnectToDatabase()
        ....
        return render.MyPage(...)
This is obviously inefficient, because it establishes a new database connection on each query. I'm sure there's a better way.
As far as I understand the SQLAlchemy example given, a processor is used: a session is created for each request and committed when the handler completes (or rolled back if an error has occurred).
I don't see any simple way to do what you propose, i.e. open a connection at the WSGI entry point. You will probably need a connection pool to serve multiple clients at the same time. (I have no idea what the requirements are for efficiency, code simplicity, and so on, though. Please comment.)
Inserting ConnectToDatabase calls into each handler is of course ugly. I suggest adapting the cookbook example, replacing the SQLAlchemy session with a SQLObject connection; a rough sketch follows.
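Such an adaptation might look roughly like this; it assumes SQLObject's sqlhub, a placeholder MySQL URI, and your usual web.py URL mapping:

import web
from sqlobject import connectionForURI, sqlhub

urls = ('/', 'MyPage')  # placeholder URL mapping

# Open the connection once at the WSGI entry point; SQLObject pools the
# underlying DB-API connections for you.
sqlhub.processConnection = connectionForURI('mysql://user:pass@localhost/dbname')

def sqlobject_processor(handler):
    """Wrap each request in a transaction, mirroring the SQLAlchemy cookbook processor."""
    trans = sqlhub.processConnection.transaction()
    sqlhub.threadConnection = trans
    try:
        result = handler()
        trans.commit()
        return result
    except web.HTTPError:
        # web.py signals redirects etc. via HTTPError; commit before re-raising
        trans.commit()
        raise
    except Exception:
        trans.rollback()
        raise

app = web.application(urls, globals())
app.add_processor(sqlobject_processor)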