Periodic OperationalError: (2006, 'MySQL server has gone away') - python

I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I am getting the OperationalError: (2006, 'MySQL server has gone away'). However, if I resave my wsgi.py file, the error disappears. And then appears again some half-an-hour later...
During the loading of the main page, my app checks a BOOL field in a 1x1 table (basically whether sign-ups should be open or closed). The only other MySQL actions are inserts into another small table, but none of these appear to be associated with the problem.
Any ideas for how I can fix this? I can provide more information as is necessary. Thanks in advance for your help.
EDIT
Problem turned out to be a matter of knowing when certain portions of code run. I assumed that every time a page loaded a new connection was opened. This was not the case; however, I have fixed it now.

This usually happens because the network connection to MySQL has been dropped, possibly by a gateway or router between you and the server. You have two options. One is to open a fresh connection before every query (rather than holding one long-lived connection or a pool). The other is to catch this error, reconnect, and run the query again.
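A minimal sketch of the second option, assuming PyMySQL as the driver (the connection settings are placeholders):

import pymysql

def get_connection():
    # Placeholder credentials; substitute your own.
    return pymysql.connect(host="localhost", user="user",
                           password="secret", database="mydb")

conn = get_connection()

def run_query(sql, params=None):
    """Run a query, reconnecting once if the server has gone away."""
    global conn
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    except pymysql.err.OperationalError as exc:
        if exc.args[0] != 2006:  # 2006 = "MySQL server has gone away"
            raise
        conn = get_connection()  # re-establish and retry once
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()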

Related

SQLalchemy: Engine/Connection disconnects after some hours of inactivity

Situation
I have a plotly-dash application running in a docker container (based on python3.7-slim).
The app is accessing a postgres database and visualizes the queried data.
However, if the app has not been used for some time (I would estimate around 24-48 hours; we first noticed this issue on Mondays, after nobody had used the app over the weekend), i.e. if no data has been queried from the database, the app freezes and the logs show errors related to the database.
I cannot fully access the logs, but they contain this error:
AttributeError: 'Connection' object has no attribute '_Connection__connection'
and after it, all the pieces of code that tried to query data from the database are listed (but not what exactly went wrong).
The problem was always solved by restarting the app (and thus opening a new connection to the database).
Assumption
As stated above, this always occurred after a period of inactivity, so my assumption is that the engine disconnects after some idle time.
Code Sample
For accessing the database, I have a DatabaseConnection class. The relevant part of the code contains something like this:
from sqlalchemy import create_engine
...
engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
self.engine = engine.connect()
...
Question
What is the best solution for overcoming the issue of the disconnect after some inactivity?
How could I check whether the database connection is still active and, if not, reconnect it somehow?
Is there a better way to access the database than through an engine object?
Is there something wrong with my approach in general?
Please let me know if you require further information. Thanks in advance.
There is an error in my code. It should be
self.engine = create_engine(f"postgresql+psycopg2://{user}:{passw}@{url}:{port}/{db_name}")
and the second line should be omitted. I misunderstood what engine.connect() does: it returns a Connection object (not an engine, as my attribute name suggests).
Then, for each query I execute, I use the context manager like this:
with self.engine.connect() as conn:
    table1 = pd.read_sql_table("table1", con=conn)
That way, the Connection object is closed after it has been used. But the engine object may open new connections whenever necessary.
In my previous solution, the Connection was killed after some idle time.
(Based on this GitHub Discussion)
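If connections may still sit idle for long stretches, SQLAlchemy can also test each connection before handing it out. A minimal sketch using the engine's pool_pre_ping option (the credentials are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# pool_pre_ping issues a lightweight ping on checkout, so a connection the
# database has silently dropped is replaced instead of failing mid-query.
engine = create_engine(
    "postgresql+psycopg2://user:secret@localhost:5432/mydb",
    pool_pre_ping=True,
)

with engine.connect() as conn:
    table1 = pd.read_sql_table("table1", con=conn)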

Python loses connection to MySQL database after about a day

I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people that have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here. The old PHP application still runs fine, the tables all have less than about 50,000 rows, and less than 30 columns, and the Python application runs fine until it has sat for about a day.
So, here's to hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can point me in the right direction.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# Create the pool once at startup; later connect() calls that pass the same
# pool_name draw a connection from this pool.
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool."""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections). There is actually a dedicated library for Flask to handle mysql connections, which is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)
And to get a cursor: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
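Putting the pieces together, a minimal sketch of the setup (the host, credentials, and table name are placeholders):

from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)

# Placeholder credentials; substitute your own.
app.config["MYSQL_DATABASE_HOST"] = "10.0.0.25"
app.config["MYSQL_DATABASE_USER"] = "user"
app.config["MYSQL_DATABASE_PASSWORD"] = "secret"
app.config["MYSQL_DATABASE_DB"] = "mydb"

mysql = MySQL()
mysql.init_app(app)

@app.route("/count")
def count():
    # get_db() hands back a connection that the extension keeps healthy.
    cursor = mysql.get_db().cursor()
    cursor.execute("SELECT COUNT(*) FROM some_table")
    (n,) = cursor.fetchone()
    return str(n)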

Force django to reopen database connection if lost

In my Python/Django web application, the database sometimes disconnects (problems related to my test environment, which is not very stable...), and my web app gives me this error:
File "/usr/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 222, in create_cursor,
django.db.utils.InterfaceError: connection already closed,
cursor = self.connection.cursor()
Now, how can I tell Django to retry opening the connection and continue? It seems that Django remains stuck at this point...
Thanks.
There's no way to tell Django that it should retry on a connection error; it's instead designed to simply fail on that one request. From the documentation:
If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
However, this shouldn't be a problem if you follow this advice in the documentation:
If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server.
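For example, a sketch of the relevant settings.py entry (the engine, credentials, and the 300-second value are placeholders; pick a value shorter than your server's idle timeout):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "user",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
        # Recycle connections after 300 seconds so Django never tries to
        # reuse one the database has already terminated.
        "CONN_MAX_AGE": 300,
    }
}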

Django shell(plus) timeout on remote database connection

I'm migrating a legacy db into a bunch of models I have running locally. I connected to the legacy db and ran inspectdb to recreate the models. Now I'm writing functions to pair the numerous fields to their equivalents in new models. I've been using shell_plus, and the first minute or so of queries go great, but my connections keep timing out with the following:
RemoteArticle.objects.using("remote_mysql").all()
django.db.utils.OperationalError: (2013, 'Lost connection to MySQL server during query')
Is there a command I can run to either a) reconnect to the db before running a query (so I don't have to reopen shell_plus), or ideally b) make it so that all of my queries automatically reconnect each time I run them?
I've seen timeout issues on other platforms, but I wasn't sure if Django had a built-in way of handling such things.
Thanks!
There is a page in the MySQL docs on this. Since you're apparently trying to migrate a big database, this part may apply to you:
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
The timeout makes sense, because the all() call is just one query that retrieves all rows. So reconnecting before each query is not the solution. If changing net_read_timeout is not an option, you might want to think about paging, as sketched below.
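A sketch of what paging could look like here, using keyset pagination on the primary key so each loop issues one small, bounded query (RemoteArticle and the database alias come from the question; the batch size is a placeholder):

BATCH = 1000
last_pk = 0
while True:
    batch = list(
        RemoteArticle.objects.using("remote_mysql")
        .filter(pk__gt=last_pk)
        .order_by("pk")[:BATCH]
    )
    if not batch:
        break
    for article in batch:
        ...  # map the legacy fields onto the new models here
    last_pk = batch[-1].pk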
I believe Lost connection to MySQL server during query happens because you exhaust a MySQL resource such as a timeout, sessions, or memory.
If the problem is the timeout, try increasing it with --net_read_timeout=100 on the DB server.

pymongo: "OperationFailure: database error: error querying server"

We're occasionally getting the following error when doing queries:
OperationFailure: database error: error querying server
There is no specific query causing this, and when we repeat the process, things work. Has anybody else seen this error?
Our setup is a cluster of Ubuntu VMs on Amazon EC2, we're using Python 2.7.3 and pymongo v2.3. We're also using Mongoengine, however we still get this exception from non-Mongoengine code.
To those discovering this question:
We were never able to fully diagnose the problem with this, our hunch is that the database connection tends to fail every once in a while for whatever reason. From our research into distributed computing, this is a common problem and needs to be handled explicitly.
In the end, we adapted our system to become robust to DB connection failures by catching OperationFailure exceptions along with similar ones and re-establishing the database connection. This resolved the problem along with a number of similar ones we were having.
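A minimal sketch of that retry-and-reconnect pattern, using the modern MongoClient API (the URI, database name, and retry numbers are placeholders):

import time
from pymongo import MongoClient
from pymongo.errors import AutoReconnect, OperationFailure

MONGO_URI = "mongodb://localhost:27017"  # placeholder
client = MongoClient(MONGO_URI)

def robust_find(collection, query, retries=3, delay=1.0):
    """Run a find(), rebuilding the client if the connection drops."""
    global client
    for attempt in range(retries):
        try:
            return list(client["mydb"][collection].find(query))
        except (AutoReconnect, OperationFailure):
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            client = MongoClient(MONGO_URI)  # re-establish the connection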
Seems the query failed on the server - to diagnose you'd need to check the server logs.
