python flask mysql using only one connection

When I get over 100 concurrent requests, mysql.connect() produces a "too many connections" error. I'm using a managed database which doesn't give me a root user to increase the connection limit. Below is the temporary fix that I need to replace.
import time

import flaskext.mysql

@app.route("/filter")
def filter_ep():
    # FIXME: hot fix for "too many connections" error
    conn = None
    errs = 0
    while not conn and errs < 100:
        try:
            conn = mysql.connect()
        except Exception as e:
            errs += 1
            time.sleep(0.001)
    cur = conn.cursor()
    # pull `results` from database
    cur.close()
    conn.close()
    return results
When I tried the same code with a single global connection, I got "packet out of order" errors, suggesting that the cursors were reading each other's responses.
I think the correct solution is some sort of task queue for queries, but I'm not sure how to implement it.

This is my current solution. It's still bad, but at least it doesn't consume all available connections and cause other things to break.
# FIXME: this is still bad
conn = None
errs = 0
# retry for ~1s until there's an open conn in the pool
while not conn and errs < 100:
    try:
        conn = mysql.connector.connect(pool_name="ropool", pool_size=4, **db.mysql_connection_args)
    except Exception as e:
        errs += 1
        time.sleep(0.01)
if not conn:
    return json.dumps({"errorMessage": "failed to connect to database"}), 500
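For reference, the direction I'm experimenting with next is to build the pool once at startup and borrow a connection per request, instead of retrying connect() in a loop. A minimal sketch (the pool size, `db.mysql_connection_args`, and the elided `results` query are carried over from the snippets above):
import json

from mysql.connector import pooling
from mysql.connector.errors import PoolError

# Created once at startup, not per request.
pool = pooling.MySQLConnectionPool(
    pool_name="ropool",
    pool_size=4,  # stays well below the managed database's connection limit
    **db.mysql_connection_args,
)

@app.route("/filter")
def filter_ep():
    try:
        conn = pool.get_connection()  # raises PoolError if all 4 are in use
    except PoolError:
        return json.dumps({"errorMessage": "failed to connect to database"}), 500
    try:
        cur = conn.cursor()
        # pull `results` from database
        cur.close()
    finally:
        conn.close()  # returns the connection to the pool rather than closing it
    return results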

Related

What is a good strategy for database connection in a continuous Python script?

I have a continuous Python script that parses certain websites or XML files every 30 seconds and adds records to a database when there is something new.
At first, I was connecting to the database every time, which I knew wasn't the ideal way to do it. I had something like this:
def job():
    try:
        cnx = mysql.connector.connect(user=DB_USER, password=DB_PASSWORD, host='XYZ', database=DB_NAME)
        cursor = cnx.cursor()
        # CALLS OF PARSERS:
        run_parser1(cnx, cursor)
        run_parser2(cnx, cursor)
        # etc...
    except Exception as e:
        cursor.close()
        cnx.close()

schedule.every(30).seconds.do(job)
while 1:
    schedule.run_pending()
    time.sleep(1)
Now I've edited my code so the connection stays open until there is an exception, either while connecting to the database or while parsing:
try:
    cnx = mysql.connector.connect(user=DB_USER, password=DB_PASSWORD, host='XYZ', database=DB_NAME)
    cursor = cnx.cursor()
except Exception as e:
    cursor.close()
    cnx.close()

def job():
    try:
        alert_list = CAPParser(getxml()).as_dict()
        # CALL OF PARSERS:
        run_parser1(cnx, cursor)
        run_parser2(cnx, cursor)
        # etc...
    except Exception as e:
        cursor.close()
        cnx.close()

schedule.every(30).seconds.do(job)
while 1:
    schedule.run_pending()
    time.sleep(1)
However, there is a problem. While I know this is a safe way to do it, it means I need to restart the script several times per day: there are a lot of exceptions, either from lost database connections or from unavailable URLs of the parsed sites or files.
Any advice on a better solution?
You could try to create a function which verifies whether the cnx is open and, if it's not, recreates it. Something like:
def job():
    global cnx
    global cursor
    if not cnx.is_connected():  # uses ping to verify the connection
        try:
            cnx = mysql.connector.connect(user=DB_USER, password=DB_PASSWORD, host='XYZ', database=DB_NAME)
            cursor = cnx.cursor()
        except Exception as e:
            cursor.close()
            cnx.close()
        # You could put this try in a while loop with a delay to keep retrying the connection
    ...  # job here
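If you're on mysql.connector, you can also let the driver do the retrying: MySQLConnection.ping() accepts reconnect, attempts, and delay arguments. A minimal sketch of job() using it (same placeholder credentials as above):
def job():
    global cnx, cursor
    try:
        # Reconnect automatically, retrying up to 3 times with 5s between attempts.
        cnx.ping(reconnect=True, attempts=3, delay=5)
    except mysql.connector.Error:
        return  # still down after the retries; skip this run and try again in 30s
    cursor = cnx.cursor()
    ...  # job here
    cursor.close()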

Python sending data to a MySQL DB

I have a script running, updating certain values in a DB once a second.
At the start of the script I first connect to the DB:
conn = pymysql.connect(host="server",
                       user="user",
                       passwd="pw",
                       db="db",
                       charset='utf8')
x = conn.cursor()
I leave the connection open for the running time of the script (around 30 minutes).
With this code I update certain values once every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
x.execute(query)
conn.ping(True)
However, when my Internet connection breaks, the script crashes since it can't update the values. The connection normally re-establishes within one minute. (The script runs on a moving vehicle; the Internet connection is established via a GSM modem.)
Is it better to re-open the connection to the server before each update, so I can check that it is established, or is there a better way?
You could just ping the connection first, instead of after the query, as that should reconnect if necessary.
Setup:
conn = pymysql.connect(host="server",
                       user="user",
                       passwd="pw",
                       db="db",
                       charset='utf8')
and every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
conn.ping()
x = conn.cursor()
x.execute(query)
Ref https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/connections.py#L872
It's still possible that the connection could drop after the ping() but before the execute(), which would then fail. For handling that you would need to trap the error, something similar to
from time import sleep

MAX_ATTEMPTS = 10

# every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
inserted = False
attempts = 0
while (not inserted) and attempts < MAX_ATTEMPTS:
    attempts += 1
    try:
        conn.ping()
        x = conn.cursor()
        x.execute(query)
        inserted = True
    except StandardError:  # it would be better to use the specific error, not StandardError
        sleep(10)  # however long is appropriate between tries
        # you could also do a whole re-connection here if you wanted
if not inserted:
    # do something
    #raise RuntimeError("Couldn't insert the record after {} attempts.".format(MAX_ATTEMPTS))
    pass
I'm guessing the script fails with an exception at the line x.execute(query) when the connection drops.
You could trap the exception and retry opening the connection. The following 'pseudo-python' demonstrates the general technique, but will obviously need to be adapted to use real function, method, and exception names:
def open_connection(retries, delay):
    for x in range(retries):
        conn = pymysql.connection()  # placeholder: use the real connect call here
        if conn.isOpen():            # placeholder: use the real liveness check
            return conn
        sleep(delay)
    return None

conn = open_connection(30, 3)
x = conn.cursor()

while conn is not None and more_data:
    # read data here
    query = ...
    while conn is not None:  # loop until the data is definitely saved
        try:
            x.execute(query)
            break  # data saved, exit inner loop
        except SomeException:
            conn = open_connection(30, 3)
            x = conn.cursor()
The general idea is that you need to loop and retry until either the data is definitely saved, or until you encounter an unrecoverable error.
Hm. If you're sampling or receiving data at a constant rate, but are only able to send it irregularly because of network failures, you've created a classic producer-consumer problem. You'll need one thread to read or receive the data, a queue to hold any backlog, and another thread to store the data. Fun! ;-)
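If you want a skeleton for that shape, the standard library's threading and queue modules are enough. A rough sketch; read_sample() and save_to_db() are hypothetical stand-ins for your sampling and UPDATE steps:
import queue
import threading
import time

backlog = queue.Queue()  # holds samples while the network is down

def producer():
    while True:
        backlog.put(read_sample())  # hypothetical: read the vehicle's position
        time.sleep(1.0)             # constant sampling rate

def consumer():
    while True:
        sample = backlog.get()      # blocks until a sample is available
        while True:
            try:
                save_to_db(sample)  # hypothetical: the UPDATE shown above
                break               # saved; move on to the next sample
            except OSError:         # network down: keep the sample and retry
                time.sleep(10)

threading.Thread(target=producer, daemon=True).start()
consumer()  # run the consumer in the main thread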

python mysql.connector write failure on connection disconnection stalls for 30 seconds

I use the Python module mysql.connector to connect to an AWS RDS instance.
Now, as we know, if we do not send a request to the SQL server for a while, the connection disconnects.
To handle this, I reconnect to SQL in case a read/write request fails.
My problem is with the "request fails" part: it takes a significant amount of time to fail, and only then can I reconnect and retry my request. (I have pointed this out in a comment in the code snippet.)
For a real-time application such as mine, this is a problem. How could I solve it? Is it possible to find out whether the disconnection has already happened, so that I can open a new connection without having to wait on a read/write request?
Here is how I handle it in my code right now:
def fetchFromDB(self, vid_id):
    fetch_query = "SELECT * FROM <db>"
    success = False
    attempts = 0
    output = []
    while not success and attempts < self.MAX_CONN_ATTEMPTS:
        try:
            if self.cnx is None:
                self._connectDB_()
            if self.cnx:
                cursor = self.cnx.cursor()  # MY PROBLEM: this step takes too long to fail when the connection has expired
                cursor.execute(fetch_query)
                output = []
                for entry in cursor:
                    output.append(entry)
                cursor.close()
                success = True
            attempts = attempts + 1
        except Exception as ex:
            logging.warning("Error")
            if self.cnx is not None:
                try:
                    self.cnx.close()
                except Exception as ex:
                    pass
                finally:
                    self.cnx = None
    return output
In my application I cannot tolerate a delay of more than 1 second while reading from mysql.
While configuring mysql, I set just the following:
SQL.user = '<username>'
SQL.password = '<password>'
SQL.host = '<AWS RDS HOST>'
SQL.port = 3306
SQL.raise_on_warnings = True
SQL.use_pure = True
SQL.database = '<database-name>'
There are some contrivances like generating an ALARM signal or similar if a function call takes too long. Those can be tricky with database connections or not work at all. There are other SO questions that go there.
One approach would be to set connection_timeout to a known value when you create the connection, making sure it's shorter than the server-side timeout. Then, if you track the age of the connection yourself, you can preemptively reconnect before it gets too old and clean up the previous connection.
Alternatively you could occasionally execute a no-op query like select now(); to keep the connection open. You would still want to recycle the connection every so often.
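A rough sketch of that age-tracking idea, assuming mysql.connector, a made-up 300-second server-side timeout, and a SQL_ARGS dict holding the settings from the question:
import time

import mysql.connector

SERVER_TIMEOUT = 300           # assumed server-side wait_timeout, in seconds
MAX_AGE = SERVER_TIMEOUT - 60  # recycle well before the server would drop us

_conn = None
_born = 0.0

def get_connection():
    # Return a live connection, preemptively recycled before it gets too old.
    global _conn, _born
    if _conn is None or time.time() - _born > MAX_AGE:
        if _conn is not None:
            try:
                _conn.close()  # clean up the previous connection
            except mysql.connector.Error:
                pass  # it may already be dead; nothing to do
        _conn = mysql.connector.connect(connection_timeout=10, **SQL_ARGS)
        _born = time.time()
    return _conn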
But if there are long enough periods between queries (where they might expire) why not open a new connection for each query?

connect to mysql in a loop

I have to connect to a MySQL server and grab some data forever, so I have two ways:
1) connect to MySQL, then grab data in a while loop
conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
cursor = conn.cursor(buffered=True)
while True:
    cursor.execute("statements")
    sqlData = cursor.fetchone()
    print(sqlData)
    sleep(0.5)
This works fine, but if the script crashes due to a MySQL connection problem, it stays down.
2) connect to MySQL inside the while loop
while True:
    try:
        conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
        cursor = conn.cursor(buffered=True)
        cursor.execute("statements")
        sqlData = cursor.fetchone()
        print(sqlData)
        cursor.close()
        conn.close()
        sleep(0.5)
    except:
        print("recoverable error..")
Both versions work, but my question is: which one is better?
Among these two, the better way is to use a single connection but create a new cursor for each statement, because creating a new connection takes time while creating a new cursor is fast. You may update the code as:
conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
while True:
    try:
        cursor = conn.cursor(buffered=True)
        cursor.execute("statements")
        sqlData = cursor.fetchone()
        print(sqlData)
    except Exception:  # catch the exception raised on connection loss
        conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
        cursor = conn.cursor(buffered=True)
    finally:
        cursor.close()  # close only the cursor; keep the single connection open for reuse
Also read Defining Clean-up Actions regarding the usage of the try/finally block.
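As a footnote on that clean-up pattern: contextlib.closing gives you the per-statement cursor cleanup without writing try/finally by hand. A small sketch (reconnection handling omitted; same placeholder credentials as above):
from contextlib import closing
from time import sleep

import mysql.connector

conn = mysql.connector.connect(user='root', password='password', host='localhost',
                               database='db', charset='utf8', autocommit=True)
while True:
    # A fresh cursor per statement, closed automatically when the block exits.
    with closing(conn.cursor(buffered=True)) as cursor:
        cursor.execute("statements")
        print(cursor.fetchone())
    sleep(0.5)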

How to properly use try/except in Python

I have a function that returns the DB connection handler for MongoDB. Various other functions make calls to the DB, so I figured I'd throw the connection handler into a function so I don't have to define it in every function.
Does this look right? I guess my question is: if it can't make a connection to the DB server, it prints both messages, "Could not connect to server" and "No hosts found". How can I go about printing only "Could not connect to server"?
def mongodb_conn():
    try:
        conn = pymongo.MongoClient()
    except pymongo.errors.ConnectionFailure, e:
        print "Could not connect to server: %s" % e
    return conn

def get_hosts():
    try:
        conn = mongodb_conn()
        mongodb = conn.dbname.collection
        b = []
        hosts_obj = mongodb.find({'_id': 'PR'})
        for x in hosts_obj:
            print x
    except:
        print "No hosts found"

get_hosts()
Move your conn = mongodb_conn() call out of the try .. except handler, and test if None was returned:
def get_hosts():
    conn = mongodb_conn()
    if conn is None:
        # no connection, exit early
        return
    try:
        mongodb = conn.dbname.collection
        b = []
        hosts_obj = mongodb.find({'_id': 'PR'})
        for x in hosts_obj:
            print x
    except:
        print "No hosts found"
You should, at all costs, avoid using a blanket except, however; you are catching everything now, including memory errors and keyboard interrupts; see Why is "except: pass" a bad programming practice?
Use specific exceptions only; you can use one except statement to catch multiple exception types:
except (AttributeError, pymongo.errors.OperationFailure):
or you can use multiple except statements to handle different exceptions in different ways.
Limit the exception handler to just those parts of the code where the exception can be thrown. The for x in hosts_obj: loop for example is probably not going to throw an AttributeError exception, so it should probably not be part of the try block.
Note that you'll need to adjust your mongodb_conn() function to not try to use the conn local if it has never been set; you'll get an UnboundLocalError if you do:
def mongodb_conn():
    try:
        return pymongo.MongoClient()
    except pymongo.errors.ConnectionFailure, e:
        print "Could not connect to server: %s" % e
Now the function returns the connection if successful, None if the connection failed.
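On Python 3, the same function needs only the modern except-as syntax and the print() function:
def mongodb_conn():
    try:
        return pymongo.MongoClient()
    except pymongo.errors.ConnectionFailure as e:
        print("Could not connect to server: %s" % e)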
You can also check whether the server is available, like this:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")
