How to reconnect to MySQL after receiving an MQTT message in Python?

I am using Python to subscribe to one MQTT topic, parse the JSON payload, and store it in a database. The problem is that I keep losing the connection to MySQL because it cannot stay open for too long. The error I receive is below:
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
I managed to get rid of it by increasing the timeout, but that is not a good solution, because I cannot know how long the system will need to wait for a message.
Is there a way to create the connection only when a message is received?
I tried to move the connection handling into on_message and close it afterwards, but I still have the same problem:
def on_message(client, userdata, msg):
    sql = """INSERT INTO data(something) VALUES (%s)"""
    data = ("some value",)  # note: query parameters must be a tuple
    with db:
        try:
            cursor.execute(sql, data)
        except MySQLdb.Error:
            db.ping(True)  # reconnect if the connection was lost
            cursor.execute(sql, data)
        except:
            print("error")
            print(cursor._last_executed)
but then the connection variable is not visible outside this function. What is the best practice for this?
The part of the code that creates the connection is below:
import paho.mqtt.client as mqtt
import MySQLdb
import json
import time

# MySQL config
try:
    db = MySQLdb.connect(host="localhost",   # your host
                         user="admin",       # username
                         passwd="somepass",  # password
                         db="mydb")          # name of the database
except:
    print("error")
So as you can see, I create one connection to MySQL at the beginning, and if no message arrives for longer than the defined timeout, my script stops working.

Try:
cur = db.cursor()
try:
    cur.execute(query, params)
except MySQLdb.Error:
    db.ping(True)  # reconnect if the connection was lost
    cur.execute(query, params)
db.ping(True) tells the client to reconnect to the DB if the connection was lost. You can also call db.ping(True) right after MySQLdb.connect, but to be on the safe side it is better to wrap execute() in a try block and call db.ping(True) in the except block.
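Combining this answer with the callback from the question, a minimal sketch might look like this (execute_with_retry is a hypothetical helper, not part of either library, and the table and column names are the placeholders from the question):
import MySQLdb
import paho.mqtt.client as mqtt

db = MySQLdb.connect(host="localhost", user="admin",
                     passwd="somepass", db="mydb")

def execute_with_retry(sql, params):
    """Run a statement, reconnecting once if the server has gone away."""
    try:
        cur = db.cursor()
        cur.execute(sql, params)
    except MySQLdb.OperationalError:
        db.ping(True)      # reconnect if the connection was dropped
        cur = db.cursor()  # the old cursor is stale after a reconnect
        cur.execute(sql, params)
    db.commit()

def on_message(client, userdata, msg):
    sql = "INSERT INTO data(something) VALUES (%s)"
    execute_with_retry(sql, (msg.payload.decode(),))
This way the single long-lived connection stays, but any statement that hits "MySQL server has gone away" is retried once on a fresh connection.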

Related

Creating a method to connect to postgres database in python

I'm working on a Python program with functionality such as inserting and retrieving values from a Postgres database using psycopg2. The issue is that every time I want to run a query I have to connect to the database, so the following code snippet is present multiple times throughout the file:
# Instantiate Connection
try:
    conn = psycopg2.connect(
        user=userName,
        password=passwrd,
        host=hostAddr,
        database=dbName
    )
    # Instantiate Cursor
    cur = conn.cursor()
    return cur
except psycopg2.Error as e:
    print(f"Error connecting to Postgres Platform: {e}")
    sys.exit(1)
My question is:
Is there a way I could just create a method to call every time I wish to connect to the database? I've tried creating one, but I get a bunch of errors since the variables cur and conn are not global.
Could I just connect to the database once at the beginning of the program and keep the connection open for the entire time the program is running? This seems like the easiest option, but I am not sure whether it would be bad practice (for reference, the program will run 24/7, so I assumed it would be better to connect only when a query is being made).
Thanks for the help.
You could wrap your own database handling class in a context manager, so you can manage the connections in a single place:
import psycopg2
import traceback
from psycopg2.extras import RealDictCursor

class Postgres(object):
    def __init__(self, *args, **kwargs):
        self.dbName = args[0] if len(args) > 0 else 'prod'
        self.args = args

    def _connect(self, msg=None):
        if self.dbName == 'dev':
            dsn = 'host=127.0.0.1 port=5556 user=xyz password=xyz dbname=development'
        else:
            dsn = 'host=127.0.0.1 port=5557 user=xyz password=xyz dbname=production'
        try:
            self.con = psycopg2.connect(dsn)
            self.cur = self.con.cursor(cursor_factory=RealDictCursor)
        except:
            traceback.print_exc()

    def __enter__(self, *args, **kwargs):
        self._connect()
        return (self.con, self.cur)

    def __exit__(self, *args):
        for c in ('cur', 'con'):
            try:
                obj = getattr(self, c)
                obj.close()
            except:
                pass  # handle it silently!?
        self.args, self.dbName = None, None
Usage:
with Postgres('dev') as (con, cur):
    print(con)
    print(cur.execute('select 1+1'))
print(con)  # verify connection gets closed!
Out:
<connection object at 0x109c665d0; dsn: '...', closed: 0>
[RealDictRow([('sum', 2)])]
<connection object at 0x109c665d0; dsn: '...', closed: 1>
It shouldn't be too bad to keep a connection open. The server itself should be responsible for closing connections it thinks have been around for too long or that are too inactive. We then just need to make our code resilient in case the server has closed the connection:
import psycopg2

CONN = None

def create_or_get_connection():
    global CONN
    if CONN is None or CONN.closed:
        CONN = psycopg2.connect(...)
    return CONN
I have been down this road many times before, and you may be reinventing the wheel. If you need to interact with a database, I would highly recommend you use an ORM like Django's: it handles all of this for you using best practices. It is some learning up front, but I promise it pays off.
If you don't want to use Django, you can use this code to get or create the connection, together with the cursor's context manager, to avoid connection and cursor errors:
import psycopg2

CONN = None

def create_or_get_connection():
    global CONN
    if CONN is None or CONN.closed:
        CONN = psycopg2.connect(...)
    return CONN

def run_sql(sql):
    conn = create_or_get_connection()
    with conn.cursor() as curs:
        return curs.execute(sql)
This allows you to run SQL statements directly against the DB without worrying about connection or cursor issues.
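One caveat worth adding: in psycopg2, cursor.execute() returns None, so for a SELECT you would fetch the rows inside the with block before the cursor closes. A variant sketch (run_query is a hypothetical helper, not part of the answer above):
def run_query(sql, params=None):
    conn = create_or_get_connection()
    with conn.cursor() as curs:
        curs.execute(sql, params)
        return curs.fetchall()  # fetch before the cursor is closed

rows = run_query("SELECT 1 + 1")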
If I wrap your code-fragment into a function definition, I don't get "a bunch of errors since variables cur and conn are not global". Why would they need to be global? Whatever the error was, you removed it from your code fragment before posting it.
Your try-catch doesn't make any sense to me. Catching an error just to hide the calling site and then bail out seems like the opposite of helpful.
When to connect depends on how you structure your transactions, how often you do them, and what you want to do if your database ever restarts in the middle of a program execution.

Reuse the connection pool created in psycopg2

The code is below
import psycopg2
from psycopg2 import pool

try:
    postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,
                                                         user="postgres",
                                                         password="pass##29",
                                                         host="127.0.0.1",
                                                         port="5432",
                                                         database="postgres_db")
    if postgreSQL_pool:
        print("Connection pool created successfully")

    # Use getconn() to get a connection from the connection pool
    ps_connection = postgreSQL_pool.getconn()

    if ps_connection:
        print("Successfully received connection from connection pool")
        ps_cursor = ps_connection.cursor()
        ps_cursor.execute("select * from mobile")
        mobile_records = ps_cursor.fetchall()
        print("Displaying rows from mobile table")
        for row in mobile_records:
            print(row)

        ps_cursor.close()

        # Use this method to release the connection object back to the connection pool
        postgreSQL_pool.putconn(ps_connection)
        print("Put away a PostgreSQL connection")

except (Exception, psycopg2.DatabaseError) as error:
    print("Error while connecting to PostgreSQL", error)

finally:
    # Closing database connections:
    # use closeall() to close all active connections when shutting down the application
    if postgreSQL_pool:
        postgreSQL_pool.closeall()  # must be called with parentheses, otherwise nothing happens
So let's say we have wrapped this code in a function. If we create another function the same way, we would need to create the connection pool again.
I am thinking of not closing the connection pool and reusing it in different functions.
How can we find the existing pool and reuse it?
Thanks
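One common pattern for this (a sketch reusing the question's own credentials and table names, not an answer from the original thread) is to create the pool once at module level and have every function borrow connections from that single pool:
import psycopg2
from psycopg2 import pool

# Created once at import time; every function below reuses the same pool.
postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(
    1, 20,
    user="postgres", password="pass##29",
    host="127.0.0.1", port="5432", database="postgres_db")

def fetch_mobiles():
    conn = postgreSQL_pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("select * from mobile")
            return cur.fetchall()
    finally:
        postgreSQL_pool.putconn(conn)  # return the connection, don't close it

def fetch_count():
    conn = postgreSQL_pool.getconn()  # same pool, no re-creation
    try:
        with conn.cursor() as cur:
            cur.execute("select count(*) from mobile")
            return cur.fetchone()
    finally:
        postgreSQL_pool.putconn(conn)
As long as the pool lives at module scope (or on an application object), every function sees the same pool and no function needs to create its own.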

What is the best way to handle sql connection in http server (Flask) without ORM in Python?

I am using Flask with a MySQL (MariaDB) database. To handle the SQL connection and cursor I use a self-made context manager. I open and close the connection inside each Flask HTTP request handling function, so I can be sure that the number of connections to the DB will not exceed a certain number, but this creates overhead. Since the same MySQL connections could be reused by other users, what other approach can I use to handle the SQL connection and cursor if I do not use an ORM?
Context managers to handle the cursor and connection:
from contextlib import contextmanager
import mysql.connector
from mysql.connector.errors import Error

@contextmanager
def mysql_connection(user, password, host, database, auth_plugin):
    _conn = mysql.connector.connect(user=user, password=password, host=host,
                                    database=database, auth_plugin=auth_plugin)
    try:
        yield _conn
    except (Exception, Error) as ex:
        # if an error happened, all changes made during the connection are rolled back:
        _conn.rollback()
        # re-raise the error so it can be handled in the outer scope:
        raise
    else:
        # if everything is fine, commit all changes to save them in the db:
        _conn.commit()
    finally:
        # close the connection to the db without waiting for a timeout:
        _conn.close()

@contextmanager
def mysql_curs(user, password, host, database, auth_plugin) -> "curs":
    with mysql_connection(user=user, password=password, host=host,
                          database=database, auth_plugin=auth_plugin) as _conn:
        _curs = _conn.cursor()
        try:
            yield _curs
        finally:
            _curs.close()  # close the cursor when everything is done
Some random Flask http handler function:
@app.route('/admin_panel/repair', methods=["GET"])
def repair_show_all_menu_webpages():
    """The page exists to repair the menu if a non-existent Flask function was added"""
    try:
        with common_db_ops.mysql_curs() as curs:
            left_side_menu = []
            webpages = admin_ops.show_all_menu_webpages_to_repair(curs)
    except (Exception, Error) as err:
        app.logger.error(f"Failed to repair website: {err}")
        abort(500)
    return render_template('repair_menu.html', webpages=webpages, left_side_menu=left_side_menu)
Edit: I would like to add that I found the following article, which discusses how to use Flask with PostgreSQL and create your own customized SQL connection context manager, but I still have the question of where in Flask I should declare the SQL connection pool:
Manage RAW database connection pool in Flask
Try to pool connections
From the official docs:
A pool opens a number of connections and handles thread safety when providing connections to requesters.
By implementing connection pooling, you can reuse existing connections:
import mysql.connector
from mysql.connector import pooling

dbconfig = {
    "database": "test",
    "user": "joe"
}

# an explicit pool object is needed for get_connection();
# mysql.connector.connect(pool_name=...) would return a pooled
# connection directly rather than the pool itself
cnxpool = pooling.MySQLConnectionPool(pool_name="mypool",
                                      pool_size=3,  # or any number to suit your need
                                      **dbconfig)

# then, to get a connection from the pool, use
cnx = cnxpool.get_connection()
For more see: https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html
If anybody is interested in the approach of handling the SQL connection without an ORM, I took the following steps to combine a MySQL connection pool, context managers, and Flask:
from contextlib import contextmanager
from mysql.connector import pooling
from mysql.connector.errors import Error

SQL_CONN_POOL = pooling.MySQLConnectionPool(
    pool_name="mysqlpool",
    pool_size=10,
    user=DB_USER,
    password=DB_PASS,
    host=DB_HOST,
    database=DATABASE,
    auth_plugin=DB_PLUGIN
)

@contextmanager
def mysql_connection_from_pool() -> "conn":
    conn_pool = SQL_CONN_POOL  # get a connection from the pool, all the rest is the same
    # you can add print(conn_pool) here to make sure the pool
    # is the same for each http request
    _conn = conn_pool.get_connection()
    try:
        yield _conn
    except (Exception, Error) as ex:
        # if an error happened, all changes made during the connection are rolled back:
        _conn.rollback()
        # re-raise the error so it can be handled in the outer scope:
        raise
    else:
        # if everything is fine, commit all changes to save them in the db:
        _conn.commit()
    finally:
        # this actually returns the connection to the pool, rather than closing it
        _conn.close()

@contextmanager
def mysql_curs_from_pool() -> "curs":
    with mysql_connection_from_pool() as _conn:
        _curs = _conn.cursor()
        try:
            yield _curs
        finally:
            _curs.close()
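A Flask handler can then borrow a pooled cursor much like the per-request handler shown in the question (the route and helper names below are illustrative, taken from that example):
@app.route('/admin_panel/repair', methods=["GET"])
def repair_show_all_menu_webpages():
    try:
        with mysql_curs_from_pool() as curs:  # pooled instead of per-request connection
            webpages = admin_ops.show_all_menu_webpages_to_repair(curs)
    except (Exception, Error) as err:
        app.logger.error(f"Failed to repair website: {err}")
        abort(500)
    return render_template('repair_menu.html', webpages=webpages, left_side_menu=[])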
I used the following links to answer the question:
Manage RAW database connection pool in Flask
MySQL docs

python mysql.connector write failure on connection disconnection stalls for 30 seconds

I use the Python module mysql.connector to connect to an AWS RDS instance.
Now, as we know, if we do not send a request to the SQL server for a while, the connection drops.
To handle this, I reconnect to SQL in case a read/write request fails.
My problem is with the "request fails" part: it takes a significant amount of time to fail, and only then can I reconnect and retry my request. (I have pointed this out in a comment in the code snippet.)
For a real-time application such as mine, this is a problem. How could I solve this? Is it possible to find out if the disconnection has already happened so that I can try a new connection without having to wait on a read/write request?
Here is how I handle it in my code right now:
def fetchFromDB(self, vid_id):
    fetch_query = "SELECT * FROM <db>"
    success = False
    attempts = 0
    output = []
    while not success and attempts < self.MAX_CONN_ATTEMPTS:
        try:
            if self.cnx is None:
                self._connectDB_()
            if self.cnx:
                cursor = self.cnx.cursor()  # MY PROBLEM: This step takes too long to fail in case the connection has expired.
                cursor.execute(fetch_query)
                output = []
                for entry in cursor:
                    output.append(entry)
                cursor.close()
                success = True
            attempts = attempts + 1
        except Exception as ex:
            logging.warning("Error")
            if self.cnx is not None:
                try:
                    self.cnx.close()
                except Exception as ex:
                    pass
                finally:
                    self.cnx = None
    return output
In my application I cannot tolerate a delay of more than 1 second when reading from MySQL.
When configuring MySQL, I only apply the following settings:
SQL.user = '<username>'
SQL.password = '<password>'
SQL.host = '<AWS RDS HOST>'
SQL.port = 3306
SQL.raise_on_warnings = True
SQL.use_pure = True
SQL.database = '<database-name>'
There are some contrivances like generating an ALARM signal or similar if a function call takes too long. Those can be tricky with database connections or not work at all. There are other SO questions that go there.
One approach would be to set the connection_timeout to a known value when you create the connection, making sure it's shorter than the server-side timeout. Then, if you track the age of the connection yourself, you can preemptively reconnect before it gets too old and clean up the previous connection.
Alternatively you could occasionally execute a no-op query like select now(); to keep the connection open. You would still want to recycle the connection every so often.
But if there are long enough periods between queries (where they might expire) why not open a new connection for each query?
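A rough sketch of the age-tracking approach described above (MAX_CONN_AGE and the 5-second connection_timeout are assumptions; both should stay well below the server-side wait_timeout):
import time
import mysql.connector

MAX_CONN_AGE = 60  # seconds; assumed to be below the server's wait_timeout

class DB:
    def __init__(self, **config):
        self.config = config
        self.cnx = None
        self.connected_at = 0.0

    def _connect(self):
        if self.cnx is not None:
            try:
                self.cnx.close()  # clean up the previous connection
            except Exception:
                pass
        # connection_timeout bounds how long connect() itself can block
        self.cnx = mysql.connector.connect(connection_timeout=5, **self.config)
        self.connected_at = time.monotonic()

    def get_connection(self):
        # preemptively reconnect before the server drops an idle connection
        if self.cnx is None or time.monotonic() - self.connected_at > MAX_CONN_AGE:
            self._connect()
        return self.cnx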

Python DBAPI time out for connections?

I was attempting to test for connection failure, and unfortunately it's not failing if the IP address of the host is firewalled.
This is the code:
def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr, password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except pgdb.Error as e:
        logger.exception("Error trying to connect to the server.")
        return False
if self.get_connection(conn_data):
    # Do stuff here:
If I try to connect to a known server but give an incorrect user name, it will trigger the exception and fail.
However, if I try to connect to a machine that does not respond (firewalled), it never gets past self.conn = pgdb.connect().
How do I wait for or test for a timeout, rather than have my app appear to hang when a user mistypes an IP address?
What you are experiencing is the pain of firewalls, and the timeout is the normal TCP timeout.
You can usually pass a timeout argument to the connect function. If it doesn't exist, you can fall back to the socket module's default timeout:
import socket
socket.setdefaulttimeout(10)  # sets the timeout to 10 seconds
This setting applies to all socket-based connections you make, which will then fail after 10 seconds of waiting.
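Applied to the code in the question, this could look like the sketch below (whether pgdb.connect() itself honours a timeout argument depends on the PyGreSQL version, so this relies on the socket-level default as the answer suggests):
import socket
import pgdb  # PyGreSQL

def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    socket.setdefaulttimeout(10)  # fail after 10 s instead of the long TCP default
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr,
                                 password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except (pgdb.Error, OSError) as e:  # socket.timeout is a subclass of OSError
        logger.exception("Error trying to connect to the server.")
        return False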
