I have a Python script that maintains long-running connections to a SQL server and an ActiveMQ server, and I am trying to build something that can handle disconnects for each of them separately. Whenever a connection breaks, I want to reconnect to that server. However, the ActiveMQ connection drops much more frequently than the SQL connection, and I don't want to reconnect to the SQL server a bunch of times just because the ActiveMQ one is broken.
This is what I've got so far:
def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()

        def connectActiveMQ(host, port):
            try:
                time.sleep(5)
                conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
                conn.set_listener('', MyListener(conn))
                connect_and_subscribe(conn)
                print("Deployed ActiveMQ listener ...")
                while True:
                    time.sleep(10)
            except:
                print("ActiveMQ connection broke, redeploying listener")
                connectActiveMQ(host, port)

        connectActiveMQ(host, port)

        # Here is a ValueError representing a SQL disconnect
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

connectSQL(host, port)
This works perfectly for reconnecting to ActiveMQ, but it doesn't work for SQL. Once the SQL connection has been made, SQL errors can never be handled because execution is stuck inside the ActiveMQ loop (the raise ValueError('SQL connection broke') is unreachable in this code once both connections succeed, even for a moment). I need both connections to run indefinitely, but I don't know where else I can put my while True wait loop.
How can I rewrite this so I can catch both ActiveMQ and SQL disconnects in parallel, indefinitely?
Quick fix: use threading or multiprocessing. Here is a snippet using threading.
import threading

def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

def connectActiveMQ(host, port):
    try:
        time.sleep(5)
        conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
        conn.set_listener('', MyListener(conn))
        connect_and_subscribe(conn)
        print("Deployed ActiveMQ listener ...")
        while True:
            time.sleep(10)
    except:
        print("ActiveMQ connection broke, redeploying listener")
        connectActiveMQ(host, port)

t1 = threading.Thread(target=connectActiveMQ, args=(host, port))
t2 = threading.Thread(target=connectSQL, args=(host, port))
t1.start()
t2.start()
P.S. Given that this is a quick fix, you should definitely look into the comments above to refactor the individual functions connectSQL and connectActiveMQ. If you need to share data between the two threads, have a look here.
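If you do need to share data between the two threads, one minimal sketch (my illustration, not tied to the link above; activemq_worker and sql_worker are hypothetical stand-ins for the functions above) is to pass a thread-safe queue.Queue into both workers:
import queue
import threading

shared = queue.Queue()  # thread-safe; no extra locking needed for put()/get()

def activemq_worker(q):
    # hypothetical stand-in for connectActiveMQ: push each received message onto the queue
    q.put("message from ActiveMQ")

def sql_worker(q):
    # hypothetical stand-in for connectSQL: pull messages off the queue and write them to SQL
    msg = q.get()
    print("would INSERT:", msg)

threading.Thread(target=activemq_worker, args=(shared,)).start()
threading.Thread(target=sql_worker, args=(shared,)).start()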
The code is below
import psycopg2
from psycopg2 import pool

try:
    postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,
                                                         user="postgres",
                                                         password="pass##29",
                                                         host="127.0.0.1",
                                                         port="5432",
                                                         database="postgres_db")
    if postgreSQL_pool:
        print("Connection pool created successfully")

    # Use getconn() to get a connection from the connection pool
    ps_connection = postgreSQL_pool.getconn()

    if ps_connection:
        print("Successfully received connection from connection pool")
        ps_cursor = ps_connection.cursor()
        ps_cursor.execute("select * from mobile")
        mobile_records = ps_cursor.fetchall()

        print("Displaying rows from mobile table")
        for row in mobile_records:
            print(row)

        ps_cursor.close()

        # Use this method to release the connection object and send it back to the connection pool
        postgreSQL_pool.putconn(ps_connection)
        print("Put away a PostgreSQL connection")

except (Exception, psycopg2.DatabaseError) as error:
    print("Error while connecting to PostgreSQL", error)

finally:
    # Closing database connections.
    # Use closeall() to close all active connections when shutting the application down.
    if postgreSQL_pool:
        postgreSQL_pool.closeall()
So let's say we have wrapped this code in a function. If we create another function the same way, we would have to create the connection pool again inside it.
I would rather not close the connection pool, and instead reuse it from different functions.
How can we get hold of the existing pool and reuse it?
Thanks
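For illustration only (this is not part of the original post, and fetch_mobiles / fetch_something_else are made-up names): one common pattern is to create the pool once at module level and have every function borrow and return connections from that single shared object:
import psycopg2
from psycopg2 import pool

# Created once when the module is imported; every function below reuses it.
postgreSQL_pool = psycopg2.pool.SimpleConnectionPool(1, 20,
                                                     user="postgres",
                                                     password="pass##29",
                                                     host="127.0.0.1",
                                                     port="5432",
                                                     database="postgres_db")

def fetch_mobiles():
    conn = postgreSQL_pool.getconn()   # borrow a connection from the shared pool
    try:
        cur = conn.cursor()
        cur.execute("select * from mobile")
        return cur.fetchall()
    finally:
        postgreSQL_pool.putconn(conn)  # always return it to the pool

def fetch_something_else():
    conn = postgreSQL_pool.getconn()   # same pool, no new pool is created
    try:
        cur = conn.cursor()
        cur.execute("select 1")
        return cur.fetchall()
    finally:
        postgreSQL_pool.putconn(conn)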
I'm a beginner in Python. I want the client to keep trying to connect whenever the server shuts down or disconnects, until the server is up again. I get this error:
File "C:\Users\Laurynas\Desktop\project\client.py", line 24, in reconnect server1.connect((HOST, PORT)) OSError: [WinError 10056] A connect request was made on an already connected socket
Current client.py code:
import socket
import time

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

HOST = socket.gethostbyname(socket.gethostname())
PORT = 8888

# Check at the first try
def connect():
    try:
        server.connect((HOST, PORT))
        messages()
    except ConnectionRefusedError:
        print("reconnecting, please wait...")
        time.sleep(0.1)
        connect()

# Check at the second, third, etc.
def reconnect():
    try:
        server1.connect((HOST, PORT))
        messages()
    except ConnectionRefusedError:
        print("reconnecting, please wait...")
        time.sleep(0.1)
        reconnect()

def messages():
    while True:
        try:
            command = server.recv(1024).decode()
            print(command)
        except:
            reconnect()
            pass

connect()
With the exception of listening sockets, which are reused across many accept() calls, data sockets cannot be reconnected and reused. On the client side a new socket needs to be created for each new connection, and on the server side a new accept() needs to be made. The old sockets should also be closed so the kernel can release them.
This poses a difficulty because the server won't automatically know which client is reconnecting or which higher-level activity should be restarted. That has to be baked into the protocol you implement on top of the connection. In HTTP, for instance, each GET/PUT/POST re-identifies itself, perhaps via a cookie-based session id, so that the web server knows what to resume.
Bottom line, you can't keep on calling server.connect to start it up again.
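As a rough sketch of that idea (my code, not the original poster's; connect_forever and the exception list are assumptions), the client can build a brand-new socket on every attempt instead of reusing the module-level server / server1 objects:
import socket
import time

HOST = socket.gethostbyname(socket.gethostname())
PORT = 8888

def connect_forever():
    # hypothetical rewrite of connect()/reconnect(): build a fresh socket each attempt
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((HOST, PORT))
            messages(sock)                 # run until the connection drops
        except (ConnectionRefusedError, ConnectionResetError, OSError):
            print("reconnecting, please wait...")
            time.sleep(0.1)
        finally:
            sock.close()                   # get the old socket out of the kernel

def messages(sock):
    while True:
        command = sock.recv(1024).decode()
        if not command:                    # empty read means the server closed the connection
            raise ConnectionResetError("server closed the connection")
        print(command)

connect_forever()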
I am using Python to subscribe to one topic, parse the JSON and store it in the database. I have problems with losing the connection to MySQL because it can't stay open for too long. The message I receive is below:
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
I managed to remove it by increasing the timeout, but that is not a good solution, because I can't know how long the system will need to wait for a message.
Is it possible to create the connection only when a message is received?
I tried to add the connection details into on_message, and then close it afterwards, but I still have the same problem:
def on_message(client, userdata, msg):
    sql = """INSERT INTO data(something) VALUES (%s)"""
    data = ("some value")
    with db:
        try:
            cursor.execute(sql, data)
        except MySQLdb.Error:
            db.ping(True)
            cursor.execute(sql, data)
        except:
            print("error")
            print(cursor._last_executed)
But then that variable is not visible outside this function. What is the best practice for this?
The part of the code that makes the connection is below:
import paho.mqtt.client as mqtt
import MySQLdb
import json
import time

# mysql config
try:
    db = MySQLdb.connect(host="localhost",   # your host
                         user="admin",       # username
                         passwd="somepass",  # password
                         db="mydb")          # name of the database
except:
    print("error")
So as you can see, I create one connection to MySQL at the beginning, and if there is no message for longer than the defined timeout, my script stops working.
Try:
cur = db.cursor()
try:
    cur.execute(query, params)
except MySQLdb.Error:
    db.ping(True)
    cur.execute(query, params)
db.ping(True) tells the driver to reconnect to the DB if the connection was lost. You can also call db.ping(True) right after MySQLdb.connect. But to be on the safe side, I'd wrap execute() in a try block and call db.ping(True) in the except block.
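A minimal sketch of how that could look inside the on_message handler from the question (my illustration, reusing the db object from the question; the commit() calls and cursor handling are my assumptions):
def on_message(client, userdata, msg):
    sql = "INSERT INTO data(something) VALUES (%s)"
    data = ("some value",)
    db.ping(True)                # reconnect first if the connection was dropped while idle
    cur = db.cursor()
    try:
        cur.execute(sql, data)
        db.commit()
    except MySQLdb.Error:
        db.ping(True)            # one more attempt after forcing a reconnect
        cur = db.cursor()
        cur.execute(sql, data)
        db.commit()
    finally:
        cur.close()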
I am struggling to get my python socket to behave.
There are two major problems:
1) While it waits for the client connection, the program stalls. This is a problem because it runs inside an IRC client's Python interpreter, so the IRC client stops responding until the client connects.
2) When the client disconnects, the entire script has to be stopped and restarted in order to get the socket server to listen again.
I thought the way around this might be to start the socket listening in a separate thread, so the IRC client can continue while it waits for the client connection. Also, once the client has decided to close the connection, I need a way to restart it.
The following code is terrible and doesn't work but it might give you an idea as to what I'm attempting:
__module_name__ = "Forward Module"
__module_version__ = "1.0.0"
__module_description__ = "Forward To Flash Module by Xcom"

# Echo client program
import socket
import sys
import xchat
import thread
import time

HOST = None    # Symbolic name meaning all available interfaces
PORT = 7001    # Arbitrary non-privileged port
s = None
socketIsOpen = False

def openSocket():
    # start server
    global s, socketIsOpen
    print "starting to listen"
    for res in socket.getaddrinfo(HOST, PORT, socket.AF_UNSPEC,
                                  socket.SOCK_STREAM, 0, socket.AI_PASSIVE):
        af, socktype, proto, canonname, sa = res
        try:
            s = socket.socket(af, socktype, proto)
        except socket.error as msg:
            s = None
            continue
        try:
            s.bind(sa)
            s.listen(1)
        except socket.error as msg:
            s.close()
            s = None
            continue
        break
    if s is None:
        print 'could not open socket'
        socketIsOpen = False
        sys.exit(1)
    conn, addr = s.accept()
    print 'Connected by', addr
    socketIsOpen = True

def someone_said(word, word_eol, userdata):
    username = str(word[0])
    message = str(word[1])
    sendMessage = username + " : " + message
    send_to_server(sendMessage)

def send_to_server(message):
    conn.send(message)

def close_connection():
    conn.close()
    print "connection closed"

xchat.hook_print('Channel Message', someone_said)

def threadMethod(arg):
    while 1:
        if not socketIsOpen:
            openSocket()

try:
    thread.start_new_thread(threadMethod, (None,))
except:
    print "Error: unable to start thread"
The Python script is running in an IRC client called HexChat, which is where the xchat import comes from.
The way you usually program a threaded socket server is:
call accept() in a loop
spawn a new thread to handle the new connection
A very minimal example would be something like this:
import socket
import threading
import time

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 9999))
server.listen(1)

def handle(conn):
    conn.send(b'hello')
    time.sleep(1)  # do some "heavy" work
    conn.close()

while True:
    print('listening...')
    conn, addr = server.accept()
    print('handling connection from %s' % (addr,))
    threading.Thread(target=handle, args=(conn,)).start()
In your code, you're spawning new threads in which you create your listening socket, then accept and handle your connection. And while socketIsOpen is True, your program will be using a lot of CPU looping through your while loop doing nothing. (By the way, the way you check socketIsOpen allows for race conditions: you can start multiple threads before it is set.)
And one last thing, you should try to use the threading module instead of the deprecated thread.
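For example, the last lines of the original script could become something like this (my sketch; passing None for the unused arg parameter is an assumption):
import threading

listener = threading.Thread(target=threadMethod, args=(None,))
listener.daemon = True   # don't keep HexChat alive just because of this thread
listener.start()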
I was attempting to test for connection failure, and unfortunately it does not fail if the IP address of the host is firewalled.
This is the code:
def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr, password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except pgdb.Error as e:
        logger.exception("Error trying to connect to the server.")
        return False

if self.get_connection(conn_data):
    # Do stuff here:
If I try to connect to a known server but give an incorrect user name, it will trigger the exception and fail.
However, if I try to connect to a machine that does not respond (firewalled), it never gets past self.conn = pgdb.connect().
How do I wait for, or test for, a timeout rather than have my app appear to hang when a user mistypes an IP address?
What you are experiencing is the pain of firewalls, and the timeout is the normal TCP timeout.
You can usually pass a timeout argument to the connect function. If one doesn't exist, you could try setting the default socket timeout:
import socket
socket.setdefaulttimeout(10) # sets timeout to 10 seconds
This applies the setting to every (socket-based) connection you make, and connection attempts will fail after 10 seconds of waiting.
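A sketch of how that could be applied to the get_connection method from the question (my illustration; it assumes the pgdb driver creates its sockets through Python's socket module, so the default timeout actually takes effect):
import socket

def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    socket.setdefaulttimeout(10)   # give up on unreachable hosts after 10 seconds
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr, password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except (pgdb.Error, socket.timeout, OSError):
        logger.exception("Error or timeout while connecting to the server.")
        return False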