The code is below:
import psycopg2
from psycopg2 import pool

try:
    postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,
                                                         user="postgres",
                                                         password="pass##29",
                                                         host="127.0.0.1",
                                                         port="5432",
                                                         database="postgres_db")
    if postgreSQL_pool:
        print("Connection pool created successfully")

    # Use getconn() to get a connection from the connection pool
    ps_connection = postgreSQL_pool.getconn()

    if ps_connection:
        print("Successfully received connection from connection pool")
        ps_cursor = ps_connection.cursor()
        ps_cursor.execute("select * from mobile")
        mobile_records = ps_cursor.fetchall()

        print("Displaying rows from mobile table")
        for row in mobile_records:
            print(row)

        ps_cursor.close()

        # Use putconn() to release the connection object back to the connection pool
        postgreSQL_pool.putconn(ps_connection)
        print("Put away a PostgreSQL connection")

except (Exception, psycopg2.DatabaseError) as error:
    print("Error while connecting to PostgreSQL", error)

finally:
    # Closing database connections.
    # Use closeall() to close all active connections when shutting down the application
    if postgreSQL_pool:
        postgreSQL_pool.closeall()
Say we have wrapped this code in a function. If we create another function written the same way, we would have to create the connection pool again. I am thinking of not closing the connection pool and instead reusing it across different functions.
How can we find the existing pool and reuse it?
Thanks
I am connecting to a MySQL (MariaDB) database from a Python script using MySQLConnectionPool. I use a context manager to handle connections in the pool. I wonder whether the pool can expire if it is not used for a long time, or if my program crashes. I've found that a connection to the MySQL db expires, so it is released even if you forgot or were not able to close the connection in your program; what is the situation with connection pools?
from contextlib import contextmanager
import mysql.connector
from mysql.connector.errors import Error
from mysql.connector import pooling
SQL_CONN_POOL = pooling.MySQLConnectionPool(
    pool_name="mysqlpool",
    pool_size=1,
    user=DB_USER,
    password=DB_PASS,
    host=DB_HOST,
    database=DATABASE,
    auth_plugin=DB_PLUGIN
)

@contextmanager
def mysql_connection_from_pool() -> "conn":
    conn_pool = SQL_CONN_POOL  # get connection from the pool, all the rest is the same
    _conn = conn_pool.get_connection()
    try:
        yield _conn
    except (Exception, Error) as ex:
        # if an error happened, all changes made during the connection are rolled back:
        _conn.rollback()
        # this statement re-raises the error to let it be handled in the outer scope:
        raise
    else:
        # if everything is fine, commit all changes to save them in the db:
        _conn.commit()
    finally:
        # this actually returns the connection to the pool, rather than closing it
        _conn.close()

@contextmanager
def mysql_curs_from_pool() -> "curs":
    with mysql_connection_from_pool() as _conn:
        _curs = _conn.cursor()
        try:
            yield _curs
        finally:
            _curs.close()
Yes, it can time out. There are two timeout configuration variables:
see wait_timeout and interactive_timeout.
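For reference, both values can be inspected and adjusted from any MySQL client; the 28800-second value below is only MySQL's documented default, not a recommendation:

```sql
-- Inspect the server's idle-connection timeouts
SHOW VARIABLES LIKE 'wait_timeout';         -- non-interactive clients
SHOW VARIABLES LIKE 'interactive_timeout';  -- interactive clients

-- Raise the limit for the current session only (in seconds)
SET SESSION wait_timeout = 28800;
```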
I am using Flask with a MySQL (MariaDB) database. To handle the sql connection and cursor I use a self-made context manager. I open and close the connection inside each Flask http request handling function, so I can be sure that the number of connections to the db will not exceed a certain number, but it creates overhead. I am sure that the same mysql connections could be reused by other users. What other approach can I use to handle the sql connection and cursor, if I do not use an ORM?
Context managers to handle cursor and connection:
from contextlib import contextmanager
import mysql.connector
from mysql.connector.errors import Error
@contextmanager
def mysql_connection(user, password, host, database, auth_plugin):
    _conn = mysql.connector.connect(user=user, password=password, host=host, database=database, auth_plugin=auth_plugin)
    try:
        yield _conn
    except (Exception, Error) as ex:
        # if an error happened, all changes made during the connection are rolled back:
        _conn.rollback()
        # this statement re-raises the error to let it be handled in the outer scope:
        raise
    else:
        # if everything is fine, commit all changes to save them in the db:
        _conn.commit()
    finally:
        # close the connection to the db, do not wait for the timeout release:
        _conn.close()

@contextmanager
def mysql_curs(user, password, host, database, auth_plugin) -> "curs":
    with mysql_connection(user=user, password=password, host=host, database=database, auth_plugin=auth_plugin) as _conn:
        _curs = _conn.cursor()
        try:
            yield _curs
        finally:
            _curs.close()  # close the cursor when everything is done
Some random Flask http handler function:
@app.route('/admin_panel/repair', methods=["GET"])
def repair_show_all_menu_webpages():
    """The page exists to repair the menu if a non-existent flask function was added"""
    try:
        with common_db_ops.mysql_curs() as curs:
            left_side_menu = []
            webpages = admin_ops.show_all_menu_webpages_to_repair(curs)
    except (Exception, Error) as err:
        app.logger.error(f"Failed to repair website: {err}")
        abort(500)
    return render_template('repair_menu.html', webpages=webpages, left_side_menu=left_side_menu)
Edit: I would like to add that I found the following article, which discusses how to use Flask with PostgreSQL and create your own customized sql connection context manager, but I still have the question of where in Flask I should declare the sql connection pool:
Manage RAW database connection pool in Flask
Try pooling connections.
From the official docs:
A pool opens a number of connections and handles thread safety when
providing connections to requesters
By implementing connection pooling, you can reuse existing connections:
from mysql.connector import pooling

dbconfig = {
    "database": "test",
    "user": "joe"
}

# MySQLConnectionPool creates the pool object explicitly. (Calling
# mysql.connector.connect() with pool_name/pool_size arguments would instead
# hand back a pooled connection directly, not a pool object.)
cnxpool = pooling.MySQLConnectionPool(pool_name="mypool",
                                      pool_size=3,  # or any number to suit your need
                                      **dbconfig)

# then, to get a connection from the pool, use
cnx = cnxpool.get_connection()
For more see: https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html
If anybody is interested in the approach of handling sql connections without an ORM, I took the following steps to combine a MySQL connection pool, a context manager, and Flask:
SQL_CONN_POOL = pooling.MySQLConnectionPool(
    pool_name="mysqlpool",
    pool_size=10,
    user=DB_USER,
    password=DB_PASS,
    host=DB_HOST,
    database=DATABASE,
    auth_plugin=DB_PLUGIN
)

@contextmanager
def mysql_connection_from_pool() -> "conn":
    conn_pool = SQL_CONN_POOL  # get connection from the pool, all the rest is the same
    # you can add print(conn_pool) here to be sure that the pool
    # is the same for each http request
    _conn = conn_pool.get_connection()
    try:
        yield _conn
    except (Exception, Error) as ex:
        # if an error happened, all changes made during the connection are rolled back:
        _conn.rollback()
        # this statement re-raises the error to let it be handled in the outer scope:
        raise
    else:
        # if everything is fine, commit all changes to save them in the db:
        _conn.commit()
    finally:
        # this actually returns the connection to the pool, rather than closing it
        _conn.close()

@contextmanager
def mysql_curs_from_pool() -> "curs":
    with mysql_connection_from_pool() as _conn:
        _curs = _conn.cursor()
        try:
            yield _curs
        finally:
            _curs.close()
I used the following links to answer the question:
Manage RAW database connection pool in Flask
MySQL docs
I have a Python script that indefinitely connects to a SQL server and an ActiveMQ server and I am trying to build something that can handle disconnects for both separately. Whenever a connection breaks, I want to reconnect to the server. However, the ActiveMQ connection disconnects much more frequently than the SQL connection and I don't want to reconnect to the SQL server a bunch of times just because the ActiveMQ one is broken.
This is what I've got so far:
def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()

        def connectActiveMQ(host, port):
            try:
                time.sleep(5)
                conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
                conn.set_listener('', MyListener(conn))
                connect_and_subscribe(conn)
                print("Deployed ActiveMQ listener ...")
                while True:
                    time.sleep(10)
            except:
                print("ActiveMQ connection broke, redeploying listener")
                connectActiveMQ(host, port)

        connectActiveMQ(host, port)

        # Here is a ValueError representing a SQL disconnect
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

connectSQL(host, port)
This works perfectly for reconnecting to ActiveMQ, but it doesn't work for SQL. Once it has connected to SQL, any errors become unreachable because of the ActiveMQ loop (the raise ValueError('SQL connection broke') is unreachable in this code if both connections go through even for a moment). I need the connections to run indefinitely, but I don't know where else I can put my while True wait loop.
How can I rewrite this so I can catch both ActiveMQ and SQL disconnects in parallel, indefinitely?
Quick fix: use threading or multiprocessing. Here is a snippet using threading.
import threading

def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

def connectActiveMQ(host, port):
    try:
        time.sleep(5)
        conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
        conn.set_listener('', MyListener(conn))
        connect_and_subscribe(conn)
        print("Deployed ActiveMQ listener ...")
        while True:
            time.sleep(10)
    except:
        print("ActiveMQ connection broke, redeploying listener")
        connectActiveMQ(host, port)

t1 = threading.Thread(target=connectActiveMQ, args=(host, port))
t2 = threading.Thread(target=connectSQL, args=(host, port))
t1.start()
t2.start()
P.S. Given the quickfix, you should definitely look into the comments above to refactor the individual functions connectSQL and connectActiveMQ. If you need to share data between the methods, have a look here.
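As a hedged illustration of sharing data between two such threads (all names below are invented for the sketch, not taken from the answer above), a `queue.Queue` lets one worker hand results to another without any manual locking:

```python
import queue
import threading

# A thread-safe queue carries items from the "listener" thread to the
# "writer" thread; Queue handles all the locking internally.
messages = queue.Queue()

def fake_listener():
    # Stand-in for the ActiveMQ listener: produce a few messages.
    for i in range(3):
        messages.put(f"msg-{i}")
    messages.put(None)  # sentinel telling the consumer to stop

def fake_writer(results):
    # Stand-in for the SQL writer: consume until the sentinel arrives.
    while True:
        item = messages.get()
        if item is None:
            break
        results.append(item)

results = []
t1 = threading.Thread(target=fake_listener)
t2 = threading.Thread(target=fake_writer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The same shape works here: the ActiveMQ listener thread would `put()` each received message, and the SQL thread would `get()` and insert them, so neither thread touches the other's connection.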
I am using python to subscribe to one topic, parse JSON, and store the results in a database. I have problems with losing the connection to MySQL because it can't stay open for too long. The message that I receive is below:
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
I managed to remove it by increasing the timeout, but that is not a good solution, because I can't know how long the system will need to wait for a message.
Is there a possibility I could create the connection only when a message is received?
I tried to add the connection details into on_message, and then close the connection, but I still have the same problem:
def on_message(client, userdata, msg):
    sql = """INSERT INTO data(something) VALUES (%s)"""
    data = ("some value")
    with db:
        try:
            cursor.execute(sql, data)
        except MySQLdb.Error:
            db.ping(True)
            cursor.execute(sql, data)
        except:
            print("error")
            print(cursor._last_executed)
But then that variable is not visible outside this function. What is the best practice for this?
The part of the code for making the connection is below:
import paho.mqtt.client as mqtt
import MySQLdb
import json
import time
# mysql config
try:
    db = MySQLdb.connect(host="localhost",   # your host
                         user="admin",       # username
                         passwd="somepass",  # password
                         db="mydb")          # name of the database
except:
    print("error")
So as you see, I have created one connection to mysql at the beginning, and if there is no message for longer than the defined timeout my script stops working.
Try:
cur = db.cursor()
try:
    cur.execute(query, params)
except MySQLdb.Error:
    db.ping(True)
    cur.execute(query, params)
db.ping(True) tells the driver to reconnect to the DB if the connection was lost. You can also call db.ping(True) right after MySQLdb.connect. But to be on the safe side, I'd rather wrap execute() in a try block and call db.ping(True) in the except block.
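The retry-on-error shape of that advice can be sketched generically. The FlakyConnection class below is a hypothetical stand-in for a MySQLdb connection that has gone away, so the example runs without a server; with MySQLdb you would catch MySQLdb.Error instead of ConnectionError.

```python
class FlakyConnection:
    """Hypothetical stand-in for a MySQLdb connection that has timed out."""
    def __init__(self):
        self.alive = False  # the first execute() will fail, like a stale conn

    def ping(self, reconnect=False):
        if reconnect:
            self.alive = True  # emulate MySQLdb's ping(True) reconnecting

    def execute(self, query, params):
        if not self.alive:
            raise ConnectionError("2006, MySQL server has gone away")
        return f"ran {query} with {params}"

def execute_with_retry(conn, query, params):
    """Try once; on failure, ping(True) to reconnect, then retry once."""
    try:
        return conn.execute(query, params)
    except ConnectionError:
        conn.ping(True)
        return conn.execute(query, params)

db = FlakyConnection()
result = execute_with_retry(db, "INSERT INTO data VALUES (%s)", ("x",))
```

A single retry is usually enough for the "gone away" case, since the reconnect either succeeds (and the statement runs) or raises again and propagates to the caller.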
I have to connect to a mysql server and grab some data forever, so I have two ways:
1) connect to mysql, then grab data in a while loop
conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
cursor = conn.cursor(buffered=True)

while True:
    cursor.execute("statements")
    sqlData = cursor.fetchone()
    print(sqlData)
    sleep(0.5)
This works well, but if the script crashes due to a mysql connection problem, the script goes down.
2) connect to mysql inside the while loop
while True:
    try:
        conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
        cursor = conn.cursor(buffered=True)
        cursor.execute("statements")
        sqlData = cursor.fetchone()
        print(sqlData)
        cursor.close()
        conn.close()
        sleep(0.5)
    except:
        print("recoverable error..")
Both pieces of code work, but my question is: which is better?
Of these two, the better way is to use a single connection but create a new cursor for each statement, because creating a new connection takes time while creating a new cursor is fast. You may update the code as:
conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)

while True:
    try:
        cursor = conn.cursor(buffered=True)
        cursor.execute("statements")
        sqlData = cursor.fetchone()
        print(sqlData)
    except Exception:  # catch the exception raised on connection loss
        conn = mysql.connector.connect(user='root', password='password', host='localhost', database='db', charset='utf8', autocommit=True)
        cursor = conn.cursor(buffered=True)
    finally:
        cursor.close()  # close only the cursor; keep the connection open across iterations
Also read Defining Clean-up Actions regarding the usage of the try...finally block.
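To make the "one connection, many cursors" idea concrete without a server, here is a hedged sketch with stand-in classes (all names invented for the illustration): only one connection is ever created, while each statement gets a fresh cursor that is closed in a finally block.

```python
class StubConnection:
    """Hypothetical stand-in for a mysql.connector connection."""
    created = 0  # counts how many connections were opened

    def __init__(self):
        StubConnection.created += 1
        self.cursors_opened = 0

    def cursor(self):
        self.cursors_opened += 1
        return StubCursor()

class StubCursor:
    """Hypothetical stand-in for a cursor."""
    def execute(self, query):
        self.result = f"ok: {query}"

    def close(self):
        pass

conn = StubConnection()        # expensive step: do this once
for i in range(5):             # cheap step: a new cursor per statement
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT {i}")
    finally:
        cur.close()            # always release the cursor, even on error
```

Swapping `StubConnection()` for a real `mysql.connector.connect(...)` call gives the loop the answer above describes: five statements, five cursors, one connection.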