Python DBAPI timeout for connections?

I was attempting to test for connection failure, and unfortunately it's not failing if the IP address of the host is firewalled.
This is the code:
def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst+":"+prt, user=usr, password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except pgdb.Error as e:
        logger.exception("Error trying to connect to the server.")
        return False

if self.get_connection(conn_data):
    # Do stuff here:
If I try to connect to a known server but give an incorrect user name, it will trigger the exception and fail.
However, if I try to connect to a machine that does not respond (firewalled), it never gets past self.conn = pgdb.connect().
How do I wait for, or test for, a timeout rather than have my app appear to hang when a user mistypes an IP address?

What you are experiencing is the pain of firewalls, and the timeout is the normal TCP timeout.

You can usually pass a timeout argument to the connect function. If the driver doesn't support one, you can set a default timeout on the socket module instead:
import socket
socket.setdefaulttimeout(10) # sets timeout to 10 seconds
This will apply the setting to all socket-based connections you make, and they will fail after 10 seconds of waiting.
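Applied to the question's method, that looks roughly like this (a sketch only; note that a timed-out connect may surface as socket.timeout or OSError rather than pgdb.Error, so the except clause is widened):

import socket
import pgdb  # PyGreSQL's DBAPI module, as in the question

socket.setdefaulttimeout(10)  # cap the TCP connect at 10 seconds

def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr,
                                 password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except (pgdb.Error, socket.timeout, OSError):
        logger.exception("Error trying to connect to the server.")
        return False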

Related

Pyst2 - How to reconnect Asterisk manager?

I'm using pyst2 to connect to the AMI (Asterisk Manager Interface). I have an event for shutdown, so it can close the connection and try to reconnect every minute.
My shutdown event:
def handle_shutdown(event, manager, hass, entry):
    _LOGGER.error("Asterisk shutting down.")
    manager.close()
    host = entry.data[CONF_HOST]
    port = entry.data[CONF_PORT]
    username = entry.data[CONF_USERNAME]
    password = entry.data[CONF_PASSWORD]
    while True:
        sleep(30)
        try:
            manager.connect(host, port)
            manager.login(username, password)
            _LOGGER.info("Successfully reconnected.")
            break
        except asterisk.manager.ManagerException as exception:
            _LOGGER.error("Error reconnecting to Asterisk: %s", exception.args[1])
It works fine until Asterisk starts up again and it can connect. Instead of connecting, I get this error: RuntimeError: threads can only be started once.
Does anybody know how to do this correctly?
Here is my entire code.
Thanks!
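No answer was posted here, but the traceback is a strong hint: pyst2's Manager creates its internal worker threads once, and a Python Thread object can never be started twice, so calling connect() again on a Manager that has already run and been closed raises that RuntimeError. A sketch of one possible workaround, creating a fresh Manager for each attempt (register_events is a hypothetical callback standing in for however your event handlers get attached; _LOGGER is the logger from the question):

import asterisk.manager
from time import sleep

def reconnect(host, port, username, password, register_events):
    # Build fresh Manager instances until one connects; reusing the old
    # instance would try to restart its already-finished threads.
    while True:
        sleep(30)
        manager = asterisk.manager.Manager()  # new instance, fresh threads
        register_events(manager)  # hypothetical: re-attach your event handlers
        try:
            manager.connect(host, port)
            manager.login(username, password)
            _LOGGER.info("Successfully reconnected.")
            return manager
        except asterisk.manager.ManagerException as exception:
            _LOGGER.error("Error reconnecting to Asterisk: %s", exception.args[1])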

How to handle two recursive Try Except blocks in parallel (Python)

I have a Python script that indefinitely connects to a SQL server and an ActiveMQ server and I am trying to build something that can handle disconnects for both separately. Whenever a connection breaks, I want to reconnect to the server. However, the ActiveMQ connection disconnects much more frequently than the SQL connection and I don't want to reconnect to the SQL server a bunch of times just because the ActiveMQ one is broken.
This is what I've got so far:
def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()

        def connectActiveMQ(host, port):
            try:
                time.sleep(5)
                conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
                conn.set_listener('', MyListener(conn))
                connect_and_subscribe(conn)
                print("Deployed ActiveMQ listener ...")
                while True:
                    time.sleep(10)
            except:
                print("ActiveMQ connection broke, redeploying listener")
                connectActiveMQ(host, port)

        connectActiveMQ(host, port)

        # Here is a ValueError representing a SQL disconnect
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

connectSQL(host, port)
This works perfectly for reconnecting to ActiveMQ, but it doesn't work for SQL. Once it has connected to SQL, any SQL errors become unreachable because of the ActiveMQ loop (the raise ValueError('SQL connection broke') is never reached in this code if both connections go through even for a moment). I need the connection to run indefinitely, but I don't know where else I can put my while True: wait loop.
How can I rewrite this so I can catch both ActiveMQ and SQL disconnects in parallel indefinitely?
Quick fix: use threading or multiprocessing. Here is a snippet using threading.
import threading

def connectSQL(host, port):
    try:
        time.sleep(5)
        connSQL = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                 server=sqlserver,
                                 database=sqldb,
                                 uid=sqluser, pwd=sqlpassword)
        cursor = connSQL.cursor()
        raise ValueError('SQL connection broke')
    except:
        print("SQL connection broke, reconnecting to SQL")
        connectSQL(host, port)

def connectActiveMQ(host, port):
    try:
        time.sleep(5)
        conn = stomp.Connection(host_and_ports=[(host, port)], heartbeats=(1000, 1000))
        conn.set_listener('', MyListener(conn))
        connect_and_subscribe(conn)
        print("Deployed ActiveMQ listener ...")
        while True:
            time.sleep(10)
    except:
        print("ActiveMQ connection broke, redeploying listener")
        connectActiveMQ(host, port)

t1 = threading.Thread(target=connectActiveMQ, args=(host, port))
t2 = threading.Thread(target=connectSQL, args=(host, port))
t1.start()
t2.start()
P.S. Given the quick fix, you should definitely look into the comments above to refactor the individual functions connectSQL and connectActiveMQ. If you need to share data between the methods, have a look here.
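Also worth noting while refactoring: both functions reconnect by calling themselves, so a server that stays down long enough will eventually kill the thread with RecursionError. A loop-based sketch of the same retry idea (connect_once is a hypothetical stand-in for the body of either connect function):

import time

def keep_connected(connect_once, retry_delay=5):
    # Retry a connection function forever without growing the call stack.
    while True:
        try:
            connect_once()  # blocks until the connection breaks
        except Exception as exc:
            print("Connection broke, reconnecting:", exc)
        time.sleep(retry_delay)

# Used with the threads above, e.g.:
# t1 = threading.Thread(target=keep_connected, args=(activemq_connect_once,))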

python mysql.connector write failure on connection disconnection stalls for 30 seconds

I use the Python module mysql.connector to connect to an AWS RDS instance.
Now, as we know, if we do not send a request to the SQL server for a while, the connection disconnects.
To handle this, I reconnect to SQL in case a read/write request fails.
My problem is with the "request fails" part: it takes a significant amount of time to fail, and only then can I reconnect and retry my request. (I have pointed this out with a comment in the code snippet.)
For a real-time application such as mine, this is a problem. How could I solve this? Is it possible to find out whether the disconnection has already happened, so that I can open a new connection without having to wait on a read/write request?
Here is how I handle it in my code right now:
def fetchFromDB(self, vid_id):
    fetch_query = "SELECT * FROM <db>"
    success = False
    attempts = 0
    output = []
    while not success and attempts < self.MAX_CONN_ATTEMPTS:
        try:
            if self.cnx == None:
                self._connectDB_()
            if self.cnx:
                cursor = self.cnx.cursor()  # MY PROBLEM: This step takes too long to fail in case the connection has expired.
                cursor.execute(fetch_query)
                output = []
                for entry in cursor:
                    output.append(entry)
                cursor.close()
                success = True
            attempts = attempts + 1
        except Exception as ex:
            logging.warning("Error")
            if self.cnx != None:
                try:
                    self.cnx.close()
                except Exception as ex:
                    pass
                finally:
                    self.cnx = None
    return output
In my application I cannot tolerate a delay of more than 1 second while reading from mysql.
While configuring MySQL, I'm using just the following settings:
SQL.user = '<username>'
SQL.password = '<password>'
SQL.host = '<AWS RDS HOST>'
SQL.port = 3306
SQL.raise_on_warnings = True
SQL.use_pure = True
SQL.database = '<database-name>'
There are some contrivances, like generating an ALARM signal if a function call takes too long, but those can be tricky with database connections or may not work at all; other SO questions cover that ground.
One approach would be to set the connection_timeout to a known value when you create the connection, making sure it's shorter than the server-side timeout. Then, if you track the age of the connection yourself, you can preemptively reconnect before it gets too old and clean up the previous connection.
Alternatively, you could occasionally execute a no-op query like select now(); to keep the connection open. You would still want to recycle the connection every so often.
But if there are long enough periods between queries (where they might expire), why not open a new connection for each query?
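A minimal sketch of the age-tracking approach (connection_timeout is a real mysql.connector option; MAX_AGE is an assumed value you would tune to stay safely under the server-side wait_timeout):

import time
import mysql.connector

MAX_AGE = 240  # seconds; assumed value, keep well below the server's wait_timeout

class ManagedConnection:
    def __init__(self, **params):
        self.params = dict(params, connection_timeout=5)
        self.cnx = None
        self.born = 0.0

    def connection(self):
        # Recycle the connection before the server can drop it, so queries
        # never block on a half-dead socket.
        if self.cnx is None or time.time() - self.born > MAX_AGE:
            if self.cnx is not None:
                try:
                    self.cnx.close()
                except Exception:
                    pass
            self.cnx = mysql.connector.connect(**self.params)
            self.born = time.time()
        return self.cnx

With something like this, fetchFromDB would ask the wrapper for a connection (cursor = self.managed.connection().cursor()) instead of caching self.cnx directly.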

Paramiko check login timeout of SSH Server

I want to see how long it takes my SSH server to close the connection if the user does not log in.
What I have so far:
self.sshobj = paramiko.SSHClient()
self.sshobj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.sshobj.connect("192.168.0.1", port=22, username="test", password="test")
self.channel = self.sshobj.invoke_shell()
But the problem is that I don't want to log in, which sshobj.connect does; I want to stay at the login prompt.
And I want to check how long it takes for the server to close the connection.
Is there any way to do this via paramiko?
You do not necessarily need paramiko to check the LoginGraceTime, but since you're specifically asking for it:
Note: banner_timeout is just a timeout for the peer's SSH banner response.
Note: timeout is actually a socket read timeout; None means no timeout. Use this to set a hard timeout for your check.
self.sshobj = paramiko.SSHClient()
self.sshobj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    self.sshobj.connect("192.168.0.1", port=22, look_for_keys=False, timeout=None, banner_timeout=5)
except paramiko.ssh_exception.SSHException as se:
    # paramiko raises SSHException('No authentication methods available') since
    # we did not specify any auth methods. The socket stays open.
    pass

ts_start = time.time()
try:
    self.channel = self.sshobj.invoke_shell()
except EOFError as e:
    # EOFError is raised when the peer terminates the session.
    pass
print(time.time() - ts_start)
You can even get rid of the first try/except for No authentication methods available by overriding self.sshobj._auth with a no-op. Below are some changes to the first variant:
def noauth(username, password, pkey, key_filenames, allow_agent,
           look_for_keys, gss_auth, gss_kex, gss_deleg_creds, gss_host):
    pass

...
sshobj._auth = noauth
sshobj.connect("192.168.0.1", port=22, look_for_keys=False, timeout=None, banner_timeout=5)
...
But, as initially mentioned, you do not even need paramiko to test this timeout, since the LoginGraceTime triggers like a server-side socket read timeout once banners are exchanged. Therefore you just need to establish a TCP connection, send a fake SSH banner, and wait until the remote side disconnects:
import socket
import time

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.0.1", 22))
s.sendall(b"SSH-2.0-MyPythonSSHProbingClient\r\n")  # SSH identification strings are CRLF-terminated
s.settimeout(5 * 60)  # hard limit
print(s.recv(500))  # remote banner
ts_start = time.time()
if not s.recv(100):
    # recv returns empty bytes when the remote side closes the connection,
    # or raises socket.timeout when the hard limit is hit.
    print(time.time() - ts_start)
else:
    raise Exception("whoop, something's gone wrong")
The non-paramiko variant is even more accurate.

Port Scanner python script

I'm a beginner to Python and I'm learning socket objects. I found a script on the internet:
import socket

s = socket.socket()
socket.setdefaulttimeout(2)
try:
    s.connect(("IP_ADD", PORT_NUM))
    print("[+] connection successful")
except Exception as e:
    print("[+] Port closed")
I just wanted to ask whether this script can work as a port scanner? Thanks a lot!
Just change your code a little and it can be used as a TCP port scanner for localhost:
import socket

def scan_port(port_num, host):
    s = socket.socket()
    socket.setdefaulttimeout(2)
    try:
        s.connect((host, port_num))
        print(port_num, "[+] connection successful")
    except Exception as e:
        print(port_num, "[+] Port closed")

host = 'localhost'
for i in range(1024):
    scan_port(i, host)
But it is just a toy; you can't use it for anything real. If you want to scan the ports of someone else's computer, try nmap.
Here is my version of your port scanner. I tried to explain how everything works in the comments.
#-*-coding:utf8;-*-
#qpy:3
#qpy:console
import socket

# This is used to set a default timeout on socket objects.
DEFAULT_TIMEOUT = 0.5

# This is used for checking if a call to socket.connect_ex was successful.
SUCCESS = 0

def check_port(*host_port, timeout=DEFAULT_TIMEOUT):
    '''Try to connect to a specified host on a specified port.
    If the connection takes longer than the timeout we set, we assume
    the host is down. If the connection succeeds, we can safely assume
    the host is up and listening on that port. If the connection fails
    for any other reason, we assume the host is down and the port is closed.'''
    # Create and configure the socket.
    sock = socket.socket()
    sock.settimeout(timeout)
    # The SO_REUSEADDR flag tells the kernel to reuse a local
    # socket in TIME_WAIT state, without waiting for its natural
    # timeout to expire.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # connect_ex is like connect(address), but returns an error indicator
    # instead of raising an exception for errors returned by the C-level
    # connect() call (other problems, such as "host not found", can still
    # raise exceptions). The error indicator is 0 if the operation succeeded,
    # otherwise the value of the errno variable. This is useful to support,
    # for example, asynchronous connects.
    connected = sock.connect_ex(host_port) == SUCCESS
    # Mark the socket closed.
    # The underlying system resource (e.g. a file descriptor)
    # is also closed when all file objects from makefile() are closed.
    # Once that happens, all future operations on the socket object will fail.
    # The remote end will receive no more data (after queued data is flushed).
    sock.close()
    # Return True if the port is open, False if it is closed.
    return connected

con = check_port('www.google.com', 83)
print(con)
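Since check_port spends nearly all of its time waiting on the network, scanning a range of ports goes much faster with a thread pool. A short usage sketch:

from concurrent.futures import ThreadPoolExecutor

host = 'localhost'
ports = range(1, 1025)
with ThreadPoolExecutor(max_workers=100) as pool:
    results = pool.map(lambda port: check_port(host, port), ports)
for port, is_open in zip(ports, results):
    if is_open:
        print(port, "open")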
