In Oracle, how can I cancel a broken connection? - python

This code is in Python, but since it goes through OCI it should be reproducible in any other language:
import cx_Oracle as db
dsn = '(DESCRIPTION =(CONNECT_TIMEOUT=3)(RETRY_COUNT=1)(TRANSPORT_CONNECT_TIMEOUT=3)(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = SOME_HOST)(PORT = 1531)))(CONNECT_DATA =(SERVICE_NAME = SOME_NAME)))'
connect_string = "LOGIN/PASSWORD#%s" % dsn
conn = db.connect(connect_string)
conn.ping() # WILL HANG FOREVER!!!
If SOME_HOST is down, this will hang forever!
And it's not related to OCIPing - if I replace:
ping()
with:
cursor = conn.cursor()
cursor.execute('SELECT 1 FROM DUAL') # HANG FOREVER AS WELL
This will hang as well.
I'm using SQL*Plus: Release 11.2.0.3.0 Production on Wed Nov 6 12:17:09 2013.
I tried wrapping this code in a thread, waiting for some time, and then killing the thread, but that doesn't work. The call creates a thread itself, and it's impossible to kill it from Python. Do you have any ideas how to recover?

The short answer is to use try/except/finally blocks, but if part of your code is genuinely waiting on a condition that will never be satisfied, what you need to do is implement an internal timeout. There are numerous ways to do this; you can adapt an existing timeout solution to your needs.
Hope this helps.
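One concrete way to get such a timeout, assuming you can use a newer cx_Oracle (7.0 or later, built against 18c or newer Oracle client libraries), is the Connection.callTimeout attribute, which caps how long any single round trip to the database may take. A minimal sketch (the 3000 ms value is just an example):
import cx_Oracle as db

conn = db.connect(connect_string)   # same connect string as in the question
conn.callTimeout = 3000             # milliseconds; calls exceeding this raise an error
try:
    conn.ping()
except db.DatabaseError:
    # the hanging call is interrupted instead of blocking forever;
    # reconnect or give up here
    pass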

I had the same problems with interrupting conn.ping().
Now I use the following construction:
from threading import Timer

pingTimeout = 10  # sec
# ...

def breakConnection():
    conn.cancel()  # interrupts the hanging call; the execute below then raises

connection = True
try:
    t = Timer(pingTimeout, breakConnection)
    t.start()
    cursor = conn.cursor()
    cursor.execute('SELECT 1 FROM DUAL')
    t.cancel()
    cursor.close()
except Exception:
    connection = False

if not connection:
    print 'Trying to reconnect...'
    # ...
It's a dirty way, but it works.
And the real way to check whether a connection is usable is to execute the application statement you actually want to run (I don't mean SELECT 1 FROM DUAL), and then retry if you catch an exception.
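A minimal sketch of that execute-and-retry idea (conn_factory, the statement, and the retry count are placeholders, not part of the original answer):
import cx_Oracle as db

def execute_with_retry(conn_factory, sql, params=None, retries=3):
    # conn_factory is assumed to be a function that opens a fresh connection
    conn = conn_factory()
    for attempt in range(retries):
        try:
            cursor = conn.cursor()
            cursor.execute(sql, params or {})
            return cursor.fetchall()
        except db.DatabaseError:
            conn = conn_factory()  # connection looks broken: reconnect and retry
    raise RuntimeError("query failed after %d retries" % retries)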

If you want to close the connection, try:
conn.close()

Related

Creating a method to connect to postgres database in python

I'm working on a python program with functionality such as inserting and retrieving values from a postgres database using psycopg2. The issue is that every time I want to create a query I have to connect to the database, so the following code snippet is present multiple times throughout the file:
# Instantiate Connection
try:
    conn = psycopg2.connect(
        user=userName,
        password=passwrd,
        host=hostAddr,
        database=dbName
    )
    # Instantiate Cursor
    cur = conn.cursor()
    return cur
except psycopg2.Error as e:
    print(f"Error connecting to Postgres Platform: {e}")
    sys.exit(1)
My question is:
Is there a way I could just create a method to call every time I wish to connect to the database? I've tried creating one, but I get a bunch of errors since the variables cur and conn are not global.
Could I just connect to the database once at the beginning of the program and keep the connection open for the entire time that the program is running? This seems like the easiest option but I am not sure if it would be bad practice (for reference the program will be running 24/7 so I assumed it would be better to only connect when a query is being made).
Thanks for the help.
You could wrap your own database handling class in a context manager, so you can manage the connections in a single place:
import psycopg2
import traceback
from psycopg2.extras import RealDictCursor

class Postgres(object):
    def __init__(self, *args, **kwargs):
        self.dbName = args[0] if len(args) > 0 else 'prod'
        self.args = args

    def _connect(self, msg=None):
        if self.dbName == 'dev':
            dsn = 'host=127.0.0.1 port=5556 user=xyz password=xyz dbname=development'
        else:
            dsn = 'host=127.0.0.1 port=5557 user=xyz password=xyz dbname=production'
        try:
            self.con = psycopg2.connect(dsn)
            self.cur = self.con.cursor(cursor_factory=RealDictCursor)
        except:
            traceback.print_exc()

    def __enter__(self, *args, **kwargs):
        self._connect()
        return (self.con, self.cur)

    def __exit__(self, *args):
        for c in ('cur', 'con'):
            try:
                obj = getattr(self, c)
                obj.close()
            except:
                pass  # handle it silently!?
        self.args, self.dbName = None, None
Usage:
with Postgres('dev') as (con, cur):
    print(con)
    cur.execute('select 1+1')
    print(cur.fetchall())
print(con)  # verify connection gets closed!
Out:
<connection object at 0x109c665d0; dsn: '...', closed: 0>
[RealDictRow([('sum', 2)])]
<connection object at 0x109c665d0; dsn: '...', closed: 1>
It shouldn't be too bad to keep a connection open. The server itself should be responsible for closing connections it thinks have been around for too long or that are too inactive. We then just need to make our code resilient in case the server has closed the connection:
import psycopg2

CONN = None

def create_or_get_connection():
    global CONN
    if CONN is None or CONN.closed:
        CONN = psycopg2.connect(...)
    return CONN
I have been down this road lots of times before, and you may be reinventing the wheel. I would highly recommend you use an ORM like Django if you need to interact with a database - it handles all this stuff for you using best practices. It is some learning up front but I promise it pays off.
If you don't want to use Django, you can use this code to get or create the connection, together with the cursor's context manager, to avoid errors from cursors that are left open:
import psycopg2

CONN = None

def create_or_get_connection():
    global CONN
    if CONN is None or CONN.closed:
        CONN = psycopg2.connect(...)
    return CONN

def run_sql(sql):
    conn = create_or_get_connection()
    with conn.cursor() as curs:
        return curs.execute(sql)
This will let you run SQL statements directly against the DB without worrying about connection or cursor issues.
If I wrap your code fragment into a function definition, I don't get "a bunch of errors since variables cur and conn are not global". Why would they need to be global? Whatever the error was, you removed it from your code fragment before posting it.
Your try/except doesn't make sense to me: catching an error just to hide the calling site and then bailing out seems like the opposite of helpful.
When to connect depends on how you structure your transactions, how often you do them, and what you want to do if your database ever restarts in the middle of a program execution.
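If you do decide to connect per transaction, a minimal sketch with psycopg2 could look like this (the credentials are placeholders); the connection's with block commits or rolls back the transaction, and the cursor's with block closes the cursor:
import psycopg2

def run_query(sql, params=None):
    conn = psycopg2.connect(host="localhost", dbname="mydb",
                            user="user", password="secret")
    try:
        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:  # cursor is closed when the block exits
                cur.execute(sql, params)
                return cur.fetchall()
    finally:
        conn.close()                    # the with block does not close the connection itself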

Closing a connection in Python with asyncio

I am trying to retrieve data from a database for use in an API context. However, I noticed that conn.close() was taking a relatively long time to execute (in this context conn is a connection from a MySQL connection pool). Since closing the connection does not block the API's ability to return data, I figured I would use asyncio to close the connection asynchronously so it wouldn't block the data being returned.
async def get_data(stuff):
    conn = api.db.get_connection()
    cursor = conn.cursor(dictionary=True)
    data = execute_query(stuff, conn, cursor)
    cursor.close()
    asyncio.ensure_future(close_conn(conn))
    return data

async def close_conn(conn):
    conn.close()

results = asyncio.run(get_data(stuff))
However, despite the fact that asyncio.ensure_future(close_conn(conn)) itself is not blocking (I put timing statements in to see how long everything was taking, and the ones before and after this call differ by about 1 ms), the actual result isn't returned until close_conn has completed. (I verified this with timing statements: the difference between when the return statement in get_data is reached and when the line after results = asyncio.run(get_data(stuff)) runs is about 200 ms.)
So my question is: how do I make this code close the connection in the background, so I am free to go ahead and process the data without having to wait for it?
Since conn.close() is not a coroutine, it blocks the event loop while close_conn runs. If you want to do what you described, use an async SQL client and do await conn.close().
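The answer doesn't name a specific client; as one example (an assumption on my part, with placeholder connection parameters), aiomysql exposes awaitable cursor and close operations, so closing yields to the event loop instead of blocking it:
import asyncio
import aiomysql

async def get_data(query):
    # placeholder credentials; replace with your own
    conn = await aiomysql.connect(host="127.0.0.1", user="user",
                                  password="secret", db="mydb")
    cur = await conn.cursor(aiomysql.DictCursor)
    await cur.execute(query)
    rows = await cur.fetchall()
    await cur.close()
    await conn.ensure_closed()   # awaitable close: the event loop stays free
    return rows

results = asyncio.run(get_data("SELECT 1"))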
You could try using an asynchronous context manager (an async with statement):
async def get_data(stuff):
    async with api.db.get_connection() as conn:
        cursor = conn.cursor(dictionary=True)
        data = execute_query(stuff, conn, cursor)
        cursor.close()
        return data  # the context manager closes the connection when the block exits

results = asyncio.run(get_data(stuff))
If the SQL client you are using doesn't support that, try aiosqlite:
https://github.com/omnilib/aiosqlite
import aiosqlite
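import asyncio

# A minimal usage sketch ("example.db" is just a placeholder file name).
# aiosqlite runs the SQLite calls in a background thread, so awaiting them
# keeps the event loop free.
async def main():
    async with aiosqlite.connect("example.db") as db:
        async with db.execute("SELECT 1") as cursor:
            row = await cursor.fetchone()
            print(row)

asyncio.run(main())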

Python sending data to a MySQL DB

I have a script running, updating certain values in a DB once a second.
At the start of the script I first connect to the DB:
conn = pymysql.connect(host="server",
                       user="user",
                       passwd="pw",
                       db="db",
                       charset='utf8')
x = conn.cursor()
I leave the connection open for the running time of the script (around 30 min).
With this code I update certain values once every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
x.execute(query)
conn.ping(True)
However, when my Internet connection breaks, the script crashes since it can't update the values. The connection normally re-establishes within one minute. (The script runs on a moving vehicle; the Internet connection is established via a GSM modem.)
Is it better to re-open the connection to the server before every update, so I can check whether the connection has been established, or is there a better way?
You could just ping the connection first, instead of after the query, as that should reconnect if necessary.
Setup:
conn = pymysql.connect(host="server",
                       user="user",
                       passwd="pw",
                       db="db",
                       charset='utf8')
and every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
conn.ping()
x = conn.cursor()
x.execute(query)
Ref https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/connections.py#L872
It's still possible that the connection could drop after the ping() but before the execute(), which would then fail. To handle that you would need to trap the error, with something similar to:
from time import sleep

MAX_ATTEMPTS = 10

# every second:
query = "UPDATE missionFilesDelDro SET landLat = '%s', landLon='%s',landHea='%s' WHERE displayName='%s'" % (lat, lon, heading, mission)
inserted = False
attempts = 0
while (not inserted) and attempts < MAX_ATTEMPTS:
    attempts += 1
    try:
        conn.ping()
        x = conn.cursor()
        x.execute(query)
        inserted = True
    except StandardError:  # it would be better to use the specific error, not StandardError
        sleep(10)  # however long is appropriate between tries
        # you could also do a whole re-connection here if you wanted

if not inserted:
    # do something
    #raise RuntimeError("Couldn't insert the record after {} attempts.".format(MAX_ATTEMPTS))
    pass
I'm guessing the script fails with an exception at the line x.execute(query) when the connection drops.
You could trap the exception and retry opening the connection. The following 'pseudo-python' demonstrates the general technique, but will obviously need to be adapted to use real function, method, and exception names:
def open_connection(retries, delay):
    for x in range(retries):
        conn = pymysql.connection()
        if conn.isOpen():
            return conn
        sleep(delay)
    return None

conn = open_connection(30, 3)
x = conn.cursor()

while conn is not None and more_data:
    # read data here
    query = ...
    while conn is not None:  # Loop until data is definitely saved
        try:
            x.execute(query)
            break  # data saved, exit inner loop
        except SomeException:
            conn = open_connection(30, 3)
            x = conn.cursor()
The general idea is that you need to loop and retry until either the data is definitely saved, or until you encounter an unrecoverable error.
Hm. If you're sampling or receiving data at a constant rate, but are only able to send it irregularly because of network failures, you've created a classic producer-consumer problem. You'll need one thread to read or receive the data, a queue to hold any backlog, and another thread to store the data. Fun! ;-)
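A minimal sketch of that producer-consumer setup, using only the standard library; read_sample() and save_to_db() are hypothetical helpers standing in for the once-a-second read and the pymysql UPDATE from the question:
import queue
import threading
import time

backlog = queue.Queue()              # holds samples while the GSM link is down

def producer():
    while True:
        backlog.put(read_sample())   # hypothetical: returns (lat, lon, heading, mission)
        time.sleep(1)

def consumer():
    while True:
        sample = backlog.get()
        while True:
            try:
                save_to_db(sample)   # hypothetical: runs the UPDATE, raises on network failure
                break
            except Exception:
                time.sleep(10)       # wait for the connection to come back, then retry

if __name__ == "__main__":
    threading.Thread(target=producer, daemon=True).start()
    consumer()                       # run the consumer in the main thread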

python mysql.connector write failure on connection disconnection stalls for 30 seconds

I use python module mysql.connector for connecting to an AWS RDS instance.
Now, as we know, if we do not send a request to SQL server for a while, the connection disconnects.
To handle this, I reconnect to SQL in case a read/write request fails.
Now, my problem is with the "request fails" part: it takes a significant amount of time to fail. And only then can I reconnect and retry my request. (I have pointed this out with a comment in the code snippet.)
For a real-time application such as mine, this is a problem. How could I solve this? Is it possible to find out if the disconnection has already happened so that I can try a new connection without having to wait on a read/write request?
Here is how I handle it in my code right now:
def fetchFromDB(self, vid_id):
    fetch_query = "SELECT * FROM <db>"
    success = False
    attempts = 0
    output = []
    while not success and attempts < self.MAX_CONN_ATTEMPTS:
        try:
            if self.cnx == None:
                self._connectDB_()
            if self.cnx:
                cursor = self.cnx.cursor()  # MY PROBLEM: This step takes too long to fail in case the connection has expired.
                cursor.execute(fetch_query)
                output = []
                for entry in cursor:
                    output.append(entry)
                cursor.close()
                success = True
            attempts = attempts + 1
        except Exception as ex:
            logging.warning("Error")
            if self.cnx != None:
                try:
                    self.cnx.close()
                except Exception as ex:
                    pass
                finally:
                    self.cnx = None
    return output
In my application I cannot tolerate a delay of more than 1 second while reading from mysql.
While configuring mysql, I'm doing just the following settings:
SQL.user = '<username>'
SQL.password = '<password>'
SQL.host = '<AWS RDS HOST>'
SQL.port = 3306
SQL.raise_on_warnings = True
SQL.use_pure = True
SQL.database = <database-name>
There are some contrivances, like generating an ALARM signal (SIGALRM) if a function call takes too long. Those can be tricky with database connections, or may not work at all; there are other SO questions that cover them.
One approach would be to set connection_timeout to a known value when you create the connection, making sure it's shorter than the server-side timeout. Then, if you track the age of the connection yourself, you can preemptively reconnect before it gets too old and clean up the previous connection.
Alternatively you could occasionally execute a no-op query like select now(); to keep the connection open. You would still want to recycle the connection every so often.
But if there are long enough periods between queries (where they might expire) why not open a new connection for each query?
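If you do want to keep a long-lived connection, here is a minimal sketch of the age-tracking idea above (the 60-second limit and the placeholder credentials are assumptions):
import time
import mysql.connector

MAX_AGE = 60            # seconds; assumed shorter than the server-side idle timeout
_conn = None
_conn_created = 0.0

def get_connection():
    # preemptively reconnect before the server is likely to have dropped us
    global _conn, _conn_created
    if _conn is None or time.time() - _conn_created > MAX_AGE:
        if _conn is not None:
            try:
                _conn.close()
            except Exception:
                pass
        _conn = mysql.connector.connect(host="...", user="...", password="...",
                                        database="...", connection_timeout=5)
        _conn_created = time.time()
    return _conn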

Python threading test not working

EDIT
I solved the issue by forking the process instead of using threads. From the comments and links in the comments, I don't think threading is the right move here.
Thanks everyone for your assistance.
FINISHED EDIT
I haven't done much with threading before. I've created a few simple example "Hello World" scripts but nothing that actually did any work.
To help me grasp it, I wrote a simple script using the binaries from Nagios to query services like HTTP. This script works, although with a timeout of 1 second, if I have 10 services that time out the script will take over 10 seconds to complete.
What I want to do is run all checks in parallel to each other. This should reduce the time it takes to complete.
I'm currently getting segfaults, but not all the time. Strangely, at the point where I check the host in the processCheck function, I can print out all hosts. Just after checking the host, though, the hosts variable only prints one or two of the hosts in the set. I have a feeling it's a namespace issue, but I'm not sure how to resolve it.
I've posted the entire code here, sans the MySQL DB, but a result from the service_list view looks like:
Any assistance is greatly appreciated.
(6543L, 'moretesting.com', 'smtp')
(6543L, 'moretesting.com', 'ping')
(6543L, 'moretesting.com', 'http')
from commands import getstatusoutput
import MySQLdb
import threading
import Queue
import time

def checkHost(x, service):
    command = {}
    command['http'] = './plugins/check_http -t 1 -H '
    command['smtp'] = './plugins/check_smtp -t 1 -H '
    cmd = command[service]
    cmd += x
    retval = getstatusoutput(cmd)
    if retval[0] == 0:
        return 0
    else:
        return retval[1]

def fetchHosts():
    hostList = []
    cur.execute('SELECT veid, hostname, service from service_list')
    for row in cur.fetchall():
        hostList.append(row)
    return hostList

def insertFailure(veid, hostname, service, error):
    query = 'INSERT INTO failures (veid, hostname, service, error) '
    query += "VALUES ('%s', '%s', '%s', '%s')" % (veid, hostname, service, error)
    cur.execute(query)
    cur.execute('COMMIT')

def processCheck(host):
    # If I print the host tuple here I get all hosts/services
    retval = checkHost(host[1], host[2])
    # If I print the host tuple here, I get one host maybe two
    if retval != 0:
        try:
            insertFailure(host[0], host[1], host[2], retval)
        except:
            pass
    else:
        try:
            # if service is back up, remove old failure entry
            query = "DELETE FROM failures WHERE veid='%s' AND service='%s' AND hostname='%s'" % (host[0], host[2], host[1])
            cur.execute(query)
            cur.execute('COMMIT')
        except:
            pass
    return 0

class ThreadClass(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        processCheck(queue.get())
        time.sleep(1)

def main():
    for host in fetchHosts():
        queue.put(host)
        t = ThreadClass(queue)
        t.setDaemon(True)
        t.start()

if __name__ == '__main__':
    conn = MySQLdb.connect('localhost', 'root', '', 'testing')
    cur = conn.cursor()
    queue = Queue.Queue()
    main()
    conn.close()
conn.close()
The MySQL DB driver isn't thread safe. You're using the same cursor concurrently from all threads.
Try creating a new connection in each thread, or create a pool of connections that the threads can use (e.g. keep them in a Queue; each thread gets a connection and puts it back when it's done).
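A minimal sketch of the pool-in-a-Queue idea (the pool size is an assumption; the connection settings are the ones from the question):
import Queue            # "queue" on Python 3
import MySQLdb

POOL_SIZE = 5
pool = Queue.Queue()

# create the connections up front and park them in the pool
for _ in range(POOL_SIZE):
    pool.put(MySQLdb.connect('localhost', 'root', '', 'testing'))

def with_connection(func, *args):
    # check a connection out of the pool, use it, and always put it back
    conn = pool.get()
    try:
        return func(conn, *args)
    finally:
        pool.put(conn)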
You should construct and populate your queue first. When the entire queue is populated, construct a number of threads, each of which, in a loop, pulls an item off the queue and processes it (see the sketch below).
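A sketch of that pattern, reusing fetchHosts() and processCheck() from the question (the worker count is an assumption, and per the previous answer each worker should really use its own DB connection):
import threading
import Queue            # "queue" on Python 3

work = Queue.Queue()
for host in fetchHosts():        # populate the whole queue first
    work.put(host)

def worker():
    while True:
        try:
            host = work.get_nowait()
        except Queue.Empty:
            return               # queue drained: this worker is done
        processCheck(host)
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
work.join()                      # wait until every host has been processed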
You realize Python doesn't do true multi-threading the way you would expect on a multi-core processor, because of the Global Interpreter Lock (GIL).
Don't expect those 10 one-second checks to complete in just 1 second overall when run in parallel. Besides, even with true multi-threading there is a little overhead associated with the threads. I'd like to add that this isn't a slur against Python.
