pyMySQL: How to check if a connection is already open or closed - python

I am getting the error InterfaceError (0, ''). Is there a way in the PyMySQL library to check whether the connection or cursor is closed? For the cursor I am already using a context manager, like this:
with db_connection.cursor() as cursor:
    ...

You can use the Connection.open attribute.
The Connection.open attribute will be truthy if the connection is open and falsy otherwise, so you can say:
if conn.open:
    # do something
The conn.open attribute will tell you whether the connection has been explicitly closed or whether a remote close has been detected. However, it's always possible that you will try to issue a query and suddenly find the connection has given out; there is no way to detect this ahead of time (indeed, it might happen during the process of issuing the query), so the only truly safe thing is to wrap your calls in a try/except block.
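For example, a minimal sketch of that defensive pattern (the retry-once policy and the helper name are my own choices, not from the answer):

import pymysql

def run_query(conn, sql):
    # conn.open may have been True a moment ago, yet the server can still
    # drop the connection mid-query, so guard the call itself.
    try:
        with conn.cursor() as cursor:
            cursor.execute(sql)
            return cursor.fetchall()
    except (pymysql.OperationalError, pymysql.InterfaceError):
        conn.ping(reconnect=True)  # re-establish the connection
        with conn.cursor() as cursor:  # then retry once
            cursor.execute(sql)
            return cursor.fetchall()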

Use conn.connection in an if statement.
import pymysql

def conn():
    mydb = pymysql.connect(host='localhost', user='root',
                           password='password', database='demo_db',
                           autocommit=True)
    return mydb.cursor()

def db_exe(query, c):
    try:
        if c.connection:
            print("connection exists")
        else:
            print("trying to reconnect")
            c = conn()
        c.execute(query)
        return c.fetchall()
    except Exception as e:
        return str(e)

dbc = conn()
print(db_exe("select * from users", dbc))
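For what it's worth, the reason this check works at all is that PyMySQL (as far as I know) sets a cursor's connection attribute to None when the cursor is closed; a dropped server connection, by contrast, only surfaces when a query is actually executed:

dbc = conn()
dbc.close()
print(dbc.connection is None)  # True: `if c.connection:` is now falsy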

This is how I did it, because I want the query to still run even if the connection goes down:
def reconnect():
    mydb = pymysql.Connect(host='localhost', user='root', password='password',
                           database='demo_db',
                           ssl={"fake_flag_to_enable_tls": True},
                           autocommit=True)
    return mydb.cursor()

try:
    if not c.connection.open:
        c = reconnect()  # reconnect
    if c.connection.open:
        c.execute(query)
        return c.fetchall()
except Exception as e:
    return str(e)

I think a try/except might do the trick instead of only checking the cursor.
try:
    c = db_connection.cursor()
except pymysql.OperationalError:
    connected = False
else:
    connected = True
    # code here

I initially went with the solution from AKHIL MATHEW to call conn.open, but later during testing found that sometimes conn.open returned positive results even though the connection had been lost. To be certain, I found I could call conn.ping(), which actually tests the connection. The function also accepts an optional parameter (reconnect=True) that causes it to automatically reconnect if the ping fails.
Of course there is a cost to this: as implied by the name, ping actually goes out to the server and tests the connection. You don't want to do this before every query, but in my case I have an AWS Lambda spinning up on a warm start and trying to reuse the connection, so I think I can justify testing the connection once on each warm start and reconnecting if it's been lost.
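In that Lambda scenario, the warm-start check can be as small as this (a sketch; the handler shape and the credential placeholders are mine, not from the answer):

import pymysql

# Created once per container; reused across warm invocations.
connection = pymysql.connect(host='...', user='...', password='...', database='...')

def lambda_handler(event, context):
    # ping() round-trips to the server; with reconnect=True it transparently
    # re-opens the connection if it has been lost since the last invocation.
    connection.ping(reconnect=True)
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return cursor.fetchone()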

Related

The proper way to ensure Python (Flask) closes the DB connection (pymysql) on error

Below is my way of handling the database connection, but it is clumsier than I'd like. So the question is whether there are more proper ways to close the database connection while returning an error message to the client if a DB operation fails.
@app.route('/get-data/', methods=['GET'])
def get_data():
    db_error = False
    try:
        conn = pymysql.connect(db_url, db_username, db_password, db_name)
        cursor = conn.cursor()
        cursor.execute('SELECT a_column FROM a_table WHERE a_condition = 0')
        results = cursor.fetchall()
    except Exception as ex:
        logging.error(f'{ex}')
        db_error = True  # Cannot simply return here; otherwise the DB connection is left open
    finally:
        cursor.close()
        conn.close()
    if db_error:
        return Response('Database error', 500)
    return jsonify(results)  # Let's assume the jsonify() function will not throw an error...
Suppose I use a context manager: does it mean that both conn and cursor will definitely be closed even when an exception is thrown? Or is it implementation-dependent, i.e., some packages, say pymysql, will make sure all cursors and connections are closed regardless of whether errors are thrown, while other packages, say pyodbc, will not ensure this? (Here pymysql and pyodbc are just two examples, of course.)
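Partly answering the last question: whether with closes things is indeed driver-dependent (PyMySQL's connection context manager, for instance, has changed behavior between versions). A driver-agnostic way to guarantee both get closed is contextlib.closing, which simply calls .close() on exit; a sketch reusing the names from the question:

from contextlib import closing

import pymysql

with closing(pymysql.connect(db_url, db_username, db_password, db_name)) as conn:
    with closing(conn.cursor()) as cursor:
        cursor.execute('SELECT a_column FROM a_table WHERE a_condition = 0')
        results = cursor.fetchall()
# Both the cursor and the connection are closed here, even on an exception.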

Connecting to MySQL DB using mysql.connector.connect fails with no error to catch

I'm using Python to try to connect to a DB. This code worked, and then something in my environment changed so that the host is not present/accessible. This is as expected. The thing I'm trying to work out is that I can't seem to catch the error when this happens. This is my code:
def create_db_connection(self):
    try:
        message('try...')
        DB_HOST = os.environ['DB_HOST']
        DB_USERNAME = os.environ['DB_USERNAME']
        DB_PASSWORD = os.environ['DB_PASSWORD']
        message('connecting...')
        db = mysql.connector.connect(
            host=DB_HOST,
            user=DB_USERNAME,
            password=DB_PASSWORD,
            auth_plugin='mysql_native_password'
        )
        message('connected...')
        return db
    except mysql.connector.Error as err:
        log.info('bad stuff happened...')
        log.info("Something went wrong: {}".format(err))
        message('exception connecting...')
    except Exception as ex:
        log.info('something bad happened')
        message("Exception: {}".format(ex))
    message('returning false connection...')
    return False
I see output up to the message('connecting...') call, but nothing afterwards. Also, I don't see any of the except messages/logs at all.
Is there something else I need to catch/check in order to know that a DB connection attempt has failed?
This is running inside an AWS Lambda and was working until I changed some subnets/etc. The key thing is that I want to catch the moment it can no longer connect.
The issue is most likely that your Lambda function is timing out before the database connection attempt does.
First, modify the Lambda function to allow it to execute for 60 seconds and test. You should find that after about 30 seconds the connection to the database times out.
To resolve the issue, modify the security group on the database instance to include the security group configured for the Lambda, opening the correct port (3306).
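Separately, you can make the client give up before the Lambda does. mysql.connector accepts a connection_timeout argument (in seconds), so the connect call fails fast enough for your except blocks to actually run; a sketch based on the question's code:

db = mysql.connector.connect(
    host=DB_HOST,
    user=DB_USERNAME,
    password=DB_PASSWORD,
    auth_plugin='mysql_native_password',
    connection_timeout=5,  # fail after 5s instead of hanging past the Lambda timeout
)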

pyodbc not committing changes to db2 database

I am trying to update my DB2 database using pyodbc in Python. The SQL statement runs normally, without errors, when run directly against the database. When I run the code below, I get no errors and the code executes successfully, but when I query the database, the changes have not been saved.
try:
    conn2 = pyodbc.connect("DRIVER={iSeries Access ODBC Driver};SYSTEM=" + Config_Main.iseriesServer + ";DATABASE=" + Config_Main.iseriesDB + ";UID=" + Config_Main.iseriesUser + ";PWD=" + Config_Main.iseriesPass)
    db2 = conn2.cursor()
    for row in encludeData:
        count = len(str(row[2]))
        srvid = row[2]
        if count < 10:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid LIKE '%?' and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], "%" + str(srvid), row[5]))
        else:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid = ? and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], srvid, row[5]))
    conn2.commit()
except pyodbc.Error as e:
    logging.error(e)
I have tried setting conn2.autocommit = True, and I have also tried moving the conn2.commit() inside the for loop to commit after each iteration. I also tried a different driver, {IBM i Access ODBC Driver}.
EDIT:
Sample of encludeData
['4567890001','4567890001','1234567890','1234567890','foo','bar']
After changing the except statement to catch general exceptions, the code above now produces this error:
IntegrityError('23000', '[23000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0803 - Duplicate key value specified. (-803) (SQLExecDirectW)')
As the OP found out, the application-layer language, Python, may not surface specific database exceptions such as duplicate index or foreign key violations; these can fail silently or be logged only on the server side. Usually, errors that prevent SQL queries from running at all, such as incorrect identifiers and syntax errors, do raise an error on the client side.
Therefore, as a best practice, it is necessary to use exception handling like Python's try/except/finally (or the equivalent in other general-purpose languages) when interfacing with any external API such as a database connection, in order to catch and properly handle runtime issues.
The code below will print any exception raised by the statements in the try block, including connection and query execution, and regardless of success or failure it will run the finally statements.
try:
    conn2 = pyodbc.connect(...)
    db2 = conn2.cursor()
    sql = "..."
    db2.execute(sql, params)
    conn2.commit()
except Exception as e:
    print(e)
finally:
    db2.close()
    conn2.close()

MySQL server has gone away python MySQLdb

In my Python script, I've subscribed to a web socket. Whenever data is received, I insert it into a MySQL DB. Every second there are about 100-200 queries. The problem is that it works for some time and then gives the error "error 2006: MySQL server has gone away".
I've increased max_allowed_packet up to 512M, but it didn't help.
Here's my code.
def db_entry(threadName, _data):
    _time = time.strftime('%Y-%m-%d %H:%M:%S')
    # print("starting new thread...")
    for data in _data:
        # print(data)
        sql = ("INSERT INTO %s (Script_Name, Lot_Size, Date, Time, Last_Price, "
               "Price_Change, Open, High, Low, Close, Volume, Buy_Quantity, Sell_Quantity) "
               "VALUES ('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s')"
               % ("_" + str(data['instrument_token']), data['instrument_token'], 1,
                  datetime.datetime.today().strftime("%Y-%m-%d"), _time,
                  data['last_price'], data['change'], data['ohlc']['open'],
                  data['ohlc']['high'], data['ohlc']['low'], data['ohlc']['close'],
                  data['volume'], data['buy_quantity'], data['sell_quantity']))
        cursor.execute(sql)
        # Commit your changes in the database
        db.commit()

def on_tick(tick, ws):
    thread_name = "Thread" + str(thread_count + 1)
    try:
        _thread.start_new_thread(db_entry, (thread_name, tick,))
    except Exception as e:
        print(e)
        raise

def on_connect(ws):
    # Subscribe to a list of instrument_tokens (RELIANCE and ACC here).
    ws.subscribe(instrument_token)
    # Set RELIANCE to tick in `full` mode.
    ws.set_mode(ws.MODE_FULL, instrument_token)

# Assign the callbacks.
kws.on_tick = on_tick
kws.on_connect = on_connect
kws.enable_reconnect(reconnect_interval=5, reconnect_tries=50)

# Infinite loop on the main thread. Nothing after this will run.
# You have to use the pre-defined callbacks to manage subscriptions.
kws.connect()
Thanks in advance. :)
The MySQL developer docs are very clear on this point. Odds are, some of those MySQL queries are running slower than others because they're waiting for their turn to insert data. If they wait too long, MySQL will just close their connection. By default, MySQL's wait_timeout is eight hours (28800s). Has the MySQL configuration been tweaked? How much hardware is allocated to MySQL?
Generally, look at all the timeout configurations. Read them and understand them. Do not simply copy and paste all the performance tweaks that bloggers like blogging about.
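If you're not sure how the server is configured, you can inspect the relevant settings from Python itself; a small sketch, assuming the existing db connection from the question:

cursor = db.cursor()
cursor.execute("SHOW VARIABLES LIKE '%timeout%'")
for name, value in cursor.fetchall():
    print(name, value)  # e.g. wait_timeout 28800 (seconds)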
Finally, it's solved.
I was keeping the DB connection open, which was causing the problem.
Now I close the DB connection as soon as the query has been fired, and open it again when I want to insert something.
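A minimal sketch of that pattern (the table and helper names here are made up for illustration): open a short-lived connection per batch of inserts and always close it.

import MySQLdb

def insert_rows(rows):
    db = MySQLdb.connect(host='localhost', user='root', passwd='password', db='mydb')
    try:
        cursor = db.cursor()
        cursor.executemany(
            "INSERT INTO ticks (instrument_token, last_price) VALUES (%s, %s)",
            rows,  # a list of (token, price) tuples
        )
        db.commit()
    finally:
        db.close()  # never leave the connection idling between batches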
You need to create an object with its own connection-handling methods. I use this and it works well.
class DB():
    def __init__(self, **kwargs):
        self.conn = MySQLdb.connect('host', 'user', 'pass', 'db')
        try:
            if self.conn:
                status = "DB init success"
            else:
                status = "DB init failed"
            self.conn.autocommit(True)
            # self.conn.select_db(DB_NAME)
            self.cursor = self.conn.cursor()
        except Exception as e:
            status = "DB init fail %s " % str(e)

    def execute(self, query):
        try:
            if self.conn is None:
                self.__init__()
            else:
                self.conn.ping(True)  # reconnect if the connection has gone away
            self.cursor.execute(query)
            return self.cursor.fetchall()
        except Exception as e:
            import traceback
            traceback.print_exc()
            # error occurs, rollback
            self.conn.rollback()
            return False
Usage
data = DB().execute("SELECT * FROM Users")
print(data)
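Note that DB().execute(...) as written builds a brand-new connection on every call; to get any benefit from the ping(True) reconnect logic, create one instance and reuse it:

db = DB()
print(db.execute("SELECT * FROM Users"))
print(db.execute("SELECT COUNT(*) FROM Users"))  # reuses the pinged connection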

Set database connection timeout in Python

I'm creating a RESTful API which needs to access the database. I'm using Restish, Oracle, and SQLAlchemy. However, I'll try to frame my question as generically as possible, without taking Restish or other web APIs into account.
I would like to be able to set a timeout for a connection executing a query. This is to ensure that long-running queries are abandoned and the connection discarded (or recycled). This query timeout can be a global value, meaning I don't need to change it per query or per connection.
Given the following code:
import cx_Oracle
import sqlalchemy.pool as pool

conn_pool = pool.manage(cx_Oracle)
conn = conn_pool.connect("username/p4ss@dbname")
conn.ping()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM really_slow_query")
    print(cursor.fetchone())
finally:
    cursor.close()
How can I modify the above code to set a query timeout on it?
Will this timeout also apply to connection creation?
This is similar to what java.sql.Statement's setQueryTimeout(int seconds) method does in Java.
Thanks
For the query, you can use a timer together with the conn.cancel() call.
Something along these lines:
import threading

t = threading.Timer(timeout, conn.cancel)
t.start()
cursor = conn.cursor()
cursor.execute(query)
res = cursor.fetchall()
t.cancel()
On Linux, see /etc/oracle/sqlnet.ora and set sqlnet.outbound_connect_timeout = value.
You also have the options tcp.connect_timeout and sqlnet.expire_time. Good luck!
You could look at setting up PROFILEs in Oracle to terminate the queries after a certain number of logical_reads_per_call and/or cpu_per_call
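For illustration, a profile along those lines could be created like this (a hypothetical sketch: the profile name, user, and limits are invented, and Oracle typically only enforces such limits when the RESOURCE_LIMIT parameter is enabled):

import cx_Oracle

conn = cx_Oracle.connect("admin/p4ss@dbname")
cursor = conn.cursor()
# CPU_PER_CALL is in hundredths of a second; LOGICAL_READS_PER_CALL in blocks.
cursor.execute("CREATE PROFILE limited_queries LIMIT "
               "CPU_PER_CALL 3000 LOGICAL_READS_PER_CALL 100000")
cursor.execute("ALTER USER report_user PROFILE limited_queries")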
Timing Out with the System Alarm
Here's how to use the operating system timeout to do this. It's generic and works for things other than Oracle.
import signal

class TimeoutExc(Exception):
    """This exception is raised when there's a timeout."""
    def __init__(self):
        Exception.__init__(self)

def alarmhandler(signame, frame):
    """SIGALRM handler: raises a TimeoutExc."""
    raise TimeoutExc()

nsecs = 5
signal.signal(signal.SIGALRM, alarmhandler)  # set the signal handler function
signal.alarm(nsecs)  # in 5s, the process receives a SIGALRM
try:
    cx_Oracle.connect(...)  # do your thing: connect, query, etc.
    signal.alarm(0)  # if successful, turn off the alarm
except TimeoutExc:
    print("timed out!")
