Pyst2 - How to reconnect Asterisk manager? - python

I'm using pyst2 to connect to the AMI (Asterisk Manager Interface). I have an event handler for shutdown, so it can close the connection and try to reconnect every minute.
My shutdown event:
def handle_shutdown(event, manager, hass, entry):
    _LOGGER.error("Asterisk shutting down.")
    manager.close()
    host = entry.data[CONF_HOST]
    port = entry.data[CONF_PORT]
    username = entry.data[CONF_USERNAME]
    password = entry.data[CONF_PASSWORD]
    while True:
        sleep(30)
        try:
            manager.connect(host, port)
            manager.login(username, password)
            _LOGGER.info("Successfully reconnected.")
            break
        except asterisk.manager.ManagerException as exception:
            _LOGGER.error("Error reconnecting to Asterisk: %s", exception.args[1])
It works fine until Asterisk starts up again and it can connect. Instead of connecting, I get this error: RuntimeError: threads can only be started once.
Does anybody know how to do this correctly?
Here is my entire code.
Thanks!
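One likely explanation is that pyst2's Manager starts an internal event-handling thread when it connects, and a Python thread object can only be started once, so after manager.close() the same instance cannot reconnect. Below is a rough sketch, not a verified fix, of creating a fresh Manager on every retry, reusing the names from the question; the event re-registration is only illustrative, since the exact wiring depends on the rest of the integration.

import asterisk.manager
from time import sleep

def reconnect(hass, entry):
    host = entry.data[CONF_HOST]
    port = entry.data[CONF_PORT]
    username = entry.data[CONF_USERNAME]
    password = entry.data[CONF_PASSWORD]
    while True:
        sleep(30)
        # A brand-new Manager, so its internal thread is started for the first time.
        manager = asterisk.manager.Manager()
        try:
            manager.connect(host, port)
            manager.login(username, password)
            # Re-register handlers on the new instance (illustrative wiring).
            manager.register_event(
                "Shutdown", lambda event, mgr: handle_shutdown(event, mgr, hass, entry))
            _LOGGER.info("Successfully reconnected.")
            return manager
        except asterisk.manager.ManagerException as exception:
            _LOGGER.error("Error reconnecting to Asterisk: %s", exception.args[1])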

Related

Closing client sockets after the server has exited

I am writing a simple client/server socket program where clients connect to the server and communicate; when a client sends an exit message, the server closes that connection. The code looks like this.
server.py
import socket
import sys
from threading import Thread

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # This is to prevent the socket going into TIME_WAIT status and OSError
    # "Address already in use"
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
except socket.error as e:
    print('Error occurred while creating the socket {}'.format(e))

server_address = ('localhost', 50000)
sock.bind(server_address)
print('**** Server started on {}:{} ****'.format(*server_address))
sock.listen(5)

def client_thread(conn_sock, client_add):
    while True:
        client_msg = conn_sock.recv(1024).decode()
        if client_msg.lower() != 'exit':
            print('[{0}:{1}] {2}'.format(*client_add, client_msg))
            serv_reply = 'Okay ' + client_msg.upper()
            conn_sock.send(bytes(serv_reply, 'utf-8'))
        else:
            conn_sock.close()
            print('{} exited !!'.format(client_add[0]))
            sys.exit()

try:
    # Keep the server running while there are incoming connections
    while True:
        # Wait for connections to accept
        conn_sock, client_add = sock.accept()
        print('Received connection from {}:{}'.format(*client_add))
        conn_sock.send(
            bytes('***** Welcome to {} *****'.format(server_address[0]), 'utf-8'))
        Thread(target=client_thread, args=(
            conn_sock, client_add), daemon=True).start()
except Exception as e:
    print('Some error occurred \n {}'.format(e))
except KeyboardInterrupt as e:
    print('Program execution cancelled by user')
    conn_sock.send(b'exit')
    sys.exit(0)
finally:
    sock.close()
client.py
import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 50000)
print('Connecting to {} on {}'.format(*server_address))
sock.connect(server_address)

def exiting(host=''):
    print('{} exited !!'.format(host))
    sys.exit()

while True:
    serv_msg = sock.recv(1024).decode()
    if serv_msg.lower() != 'exit':
        print('{1}: {0}'.format(serv_msg, server_address[0]))
        client_reply = input('You: ')
        sock.send(bytes(client_reply, 'utf-8'))
        if client_reply.lower() == 'exit':
            exiting()
    else:
        exiting('Server')
What I want is this: if the server exits, either through Ctrl-C or any other way, all client sockets should be closed and a message sent to the clients, upon which they should close their sockets as well.
I am doing the following in the except section, but for some reason the message sent by the server is never received by the client.
except KeyboardInterrupt as e:
    print('Program execution cancelled by user')
    conn_sock.send(b'exit')
    sys.exit(0)
Surprisingly, if I send the 'exit' message from client_thread as serv_reply, the client accepts the message and closes its socket just fine. So I am not sure why the server is not able to send the same message in the except section of the code shown above.
I'm sorry to say that abnormal termination of TCP/IP connections is undetectable unless you try to send data through the connection.
This is known as a "Half Open" socket, and it's also mentioned in the Python documentation.
Usually, when a server process crashes, the OS will close TCP/IP sockets, signaling the client about the closure.
When a client receives the signal, the server's termination can be detected while polling. The polling mechanism (i.e. poll / epoll / kqueue) will test for the HUP (hung up) event.
This is why "Half Open" sockets don't happen in development unless the issue is forced. When both the client and the server run on the same machine, the OS will send the signal about the closure.
But if the server computer crashes, or connectivity is lost (i.e. mobile devices), no such signal is sent and the client never knows.
The only way to detect an abnormal termination is a failed write attempt; read will not detect the issue (it will act as if no data was received).
This is why they invented the ping concept and this is why HTTP/1.1 servers and clients (that don't support pings) use timeouts to assume termination.
There's a good blog post about Half Open sockets here.
EDIT (clarifications due to comments)
How to handle the situation:
I would recommend the following:
Add an explicit Ping message (or an Empty/NULL message) to your protocol (the messages understood by both the clients and the server).
Monitor the socket for inactivity by recording each send or recv operation.
Add timeout monitoring to your code. This means that you will need to implement polling, such as select (or poll or the OS specific epoll/kqueue), instead of blocking on recv.
When connection timeout is reached, send the Ping / empty message.
For an easy solution, reset the timeout after sending the Ping.
The next time you poll the socket, the polling mechanism should alert you about the failed connection. Alternatively, the second time you try to ping the server/client you will get an error message.
Note that the first send operation might succeed even though the connection was lost.
This is because the TCP/IP layer sends the message but the send function doesn't wait for the TCP/IP's ACK confirmation.
However, by the second time you get to the ping, the TCP/IP layer would have probably realized that no ACK is coming and registered the error in the socket (this takes time).
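Here is a minimal sketch of the approach described above, assuming a made-up one-byte Ping message and a plain select loop; the real message only needs to be something both sides understand and ignore.

import select
import socket
import time

PING = b'\x00'           # made-up "empty" message both sides agree to ignore
IDLE_LIMIT = 30          # seconds of silence before probing the peer

def monitored_loop(sock):
    last_activity = time.time()
    while True:
        readable, _, _ = select.select([sock], [], [], 1.0)   # 1-second poll tick
        if readable:
            try:
                data = sock.recv(1024)
            except OSError:
                print('connection error detected')
                return
            if not data:                  # orderly close, or HUP reported by the OS
                print('peer closed the connection')
                return
            last_activity = time.time()
            # handle data here ...
        elif time.time() - last_activity > IDLE_LIMIT:
            try:
                sock.send(PING)           # the first send may still appear to succeed
                last_activity = time.time()
            except OSError:               # a later probe surfaces the failure
                print('peer is gone')
                return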
Why the send failed before exiting the server
The comment I left about this issue is wrong (in part).
The main reason the conn_sock.send(b'exit') failed is because conn_sock is a local variable in the client thread and isn't accessible from the global state where the SIGINT (CTRL+C) is raised.
This makes sense, as what would happen if the server has more than a single client?
However, it is true that socket.send only schedules the data to be sent, so the assumption that the data was actually sent is incorrect.
Also note that socket.send might not send the whole message if there isn't enough room in the kernel's buffer.
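As for closing the client sockets when the server shuts down: since conn_sock is local to each client thread, a sketch of one possible fix (names follow the question's code) is to keep every accepted socket in a shared list and walk it in the shutdown path.

import socket

clients = []                          # sockets returned by sock.accept()

# In the accept loop, after: conn_sock, client_add = sock.accept()
#     clients.append(conn_sock)

def shutdown_all_clients():
    for conn in clients:
        try:
            conn.send(b'exit')        # ask the client to close its end
            conn.close()
        except OSError:
            pass                      # the client may already be gone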

Paramiko check login timeout of SSH Server

I want to see how long it takes my SSH server to close the connection if the user does not log in.
What I have so far:
self.sshobj = paramiko.SSHClient()
self.sshobj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.sshobj.connect("192.168.0.1", port=22, username="test", password="test")
self.channel = self.sshobj.invoke_shell()
But the problem is that I don't want to log in, which sshobj.connect does; I want to stay at the login prompt.
And I want to check how long it takes for the server to close the connection.
Is there any way to do this via paramiko?
You do not necessarily need paramiko to check the LoginGraceTime, but since you're specifically asking for it:
Note: banner_timeout is just a timeout for the peer's SSH banner response.
Note: timeout is actually a socket read timeout; None means no timeout. Use this to set a hard timeout for your check.
self.sshobj = paramiko.SSHClient()
self.sshobj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    self.sshobj.connect("192.168.0.1", port=22, look_for_keys=False, timeout=None, banner_timeout=5)
except paramiko.ssh_exception.SSHException, se:
    # paramiko raises SSHException('No authentication methods available',)
    # since we did not specify any auth methods. The socket stays open.
    pass
ts_start = time.time()
try:
    self.channel = self.sshobj.invoke_shell()
except EOFError, e:
    # EOFError is raised when the peer terminates the session.
    pass
print time.time() - ts_start
You can even get rid of the first try/except for "No authentication methods available" by overriding self.sshobj._auth with a no-op. Below are the changes to the first variant:
def noauth(username, password, pkey, key_filenames, allow_agent,
           look_for_keys, gss_auth, gss_kex, gss_deleg_creds, gss_host): pass
...
sshobj._auth = noauth
sshobj.connect("192.168.0.1", port=22, look_for_keys=False, timeout=None, banner_timeout=5)
...
But, as initially mentioned, you do not even need paramiko to test this timeout since the LoginGraceTime triggers like a server-side socket read timeout once banners are exchanged. Therefore you just need to establish a TCP connection, send a fake ssh banner and wait until the remote side disconnects:
import socket
import time

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.0.1", 22))
s.sendall("SSH-2.0-MyPythonSSHProbingClient")
s.settimeout(5*60)   # hard-limit
print s.recv(500)    # remote banner
ts_start = time.time()
if not s.recv(100):
    # Exits when the remote side closes the connection, or raises
    # socket.timeout when the hard-limit is hit.
    print time.time() - ts_start
else:
    raise Exception("whoop, something's gone wrong")
The non-paramiko variant is even more accurate.

Python Socket Programming: Application freezes when connecting to a server

I am new to socket coding in Python, and I wrote this simple function to connect to a server. It runs in a tkinter window. I have an Entry widget, which is where you input the IP address of the server to connect to. However, when you click the button to connect, the application hangs and freezes. Here is the code:
def Test(self):
    socket.setdefaulttimeout(5)
    lengthInfo = self.lengthEntry.get()
    if self.portEntry.get() != '':
        portInfo = int(self.portEntry.get())
    serverInfo = self.serverEntry.get()
    conn = socket.socket()
    if not self.portEntry.get():
        portInfo = 80
    try:
        conn = socket.socket()
        name = socket.gethostbyname(serverInfo)
        conn.connect((name, portInfo))
        ans = conn.recv(2048)
        self.outputWindow.insert(END, "Connection successful: \n \
port:{}, server:{} ('{}'), '{}' \n".format(portInfo, name, serverInfo, ans))
        return
    except Exception as e:
        self.outputWindow.insert(END, str(e) + "\n")
I originally thought it was because there was no timeout, but as you can see, I added a 5-second timeout in the very first line of the code. I assumed it was because the application was having some sort of trouble connecting, but I checked Windows Task Manager, and under the network section there was no activity. I also ran the program in Ubuntu 14.04 but got the same results.
Any solutions?
The socket waits for data from the server (conn.recv(2048) blocks until something arrives, up to 2,048 bytes) and I guess nothing ever arrives, so the call blocks the tkinter main loop.
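A common way to keep the window responsive, sketched below with made-up names (a Tk root, an output Text widget, and a do_connect helper standing in for the blocking code above), is to run the blocking socket call in a worker thread and hand the result back to the GUI thread with after():

import socket
import threading
import tkinter as tk

def do_connect(server, port):
    """Blocking part: runs in a worker thread, never in the Tk thread."""
    conn = socket.create_connection((server, port), timeout=5)
    try:
        return conn.recv(2048)
    finally:
        conn.close()

def on_connect_clicked(root, output, server, port):
    def worker():
        try:
            ans = do_connect(server, port)
            msg = "Connection successful: {!r}\n".format(ans)
        except Exception as e:
            msg = str(e) + "\n"
        # Tkinter is not thread-safe; schedule the widget update on the GUI thread.
        root.after(0, lambda: output.insert(tk.END, msg))
    threading.Thread(target=worker, daemon=True).start()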

Python: How to interrupt raw_input() in other thread

I am writing a simple client-server program in Python. In the client program, I am creating two threads (using Python's threading module), one for receiving, one for sending. The receiving thread continuously receives strings from the server side, while the sending thread continuously listens for user input (using raw_input()) and sends it to the server side. The two threads communicate using a Queue (which is natively synchronized, LIKE!).
The basic logic is like following:
Receiving thread:
global queue = Queue.Queue(0)

def run(self):
    while 1:
        receive a string from the server side
        if the string is QUIT signal:
            sys.exit()
        else:
            put it into the global queue
Sending thread:
def run(self):
    while 1:
        str = raw_input()
        send str to the server side
        fetch an element from the global queue
        deal with the element
As you can see, in the receiving thread, I have a if condition to test whether the server has sent a "QUIT signal" to the client. If it has, then I want the whole program to stop.
The problem here is that for most of its time, the sending thread is blocked by "raw_input()" and waiting for the user input. When it is blocked, calling "sys.exit()" from the other thread (receiving thread) will not terminate the sending thread immediately. The sending thread has to wait for the user to type something and hit the enter button.
Could anybody suggest how to get around this? I do not mind using alternatives to raw_input(). Actually, I do not even mind changing the whole structure.
-------------EDIT-------------
I am running this on a linux machine, and my Python version is 2.7.5
You could just make the sending thread daemonic:
send_thread = SendThread() # Assuming this inherits from threading.Thread
send_thread.daemon = True # This must be called before you call start()
The Python interpreter won't be blocked from exiting if the only threads left running are daemons. So, if the only thread left is send_thread, your program will exit, even if you're blocked on raw_input.
Note that this will terminate the sending thread abruptly, no matter what it's doing. This could be dangerous if it accesses external resources that need to be cleaned up properly or shouldn't be interrupted (like writing to a file, for example). If you're doing anything like that, protect it with a threading.Lock, and only call sys.exit() from the receiving thread if you can acquire that same Lock.
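For reference, a minimal sketch of the daemonic-thread layout under the question's Python 2.7 setup; names like SendThread and receive_loop are illustrative, and the server socket handling is omitted.

import threading
import Queue

queue = Queue.Queue(0)

class SendThread(threading.Thread):
    def run(self):
        while True:
            line = raw_input()     # blocks until the user presses enter
            # send line to the server side, then fetch from the global queue ...

def receive_loop(sock):
    while True:
        data = sock.recv(1024)
        if data == 'QUIT':
            return                 # returning ends the main thread's work
        queue.put(data)

send_thread = SendThread()
send_thread.daemon = True          # must be set before start()
send_thread.start()
# receive_loop(sock) would run in the main thread; when it returns, only the
# daemonic send_thread is left, so the interpreter exits even though
# raw_input() is still blocking.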
The short answer is you can't. input(), like a lot of such input calls, is blocking, and it stays blocked regardless of whether everything else about the thread has been torn down. You can sometimes call sys.exit() and get it to work depending on the OS, but it's not going to be consistent. Sometimes you can kill the program by deferring out to the local OS, but then you're not going to be widely cross-platform.
What you might want to consider instead is to funnel the functionality through sockets, because unlike input() we can use timeouts and threads and kill things rather easily. It also gives you the ability to handle multiple connections and perhaps accept connections more broadly.
import socket
import time
from threading import Thread

def process(command, connection):
    print("Command Entered: %s" % command)
    # Any responses are written to connection.
    connection.send(bytes('>', 'utf-8'))

class ConsoleSocket:
    def __init__(self):
        self.keep_running_the_listening_thread = True
        self.data_buffer = ''
        Thread(target=self.tcp_listen_handle).start()

    def stop(self):
        self.keep_running_the_listening_thread = False

    def handle_tcp_connection_in_another_thread(self, connection, addr):
        def handle():
            while self.keep_running_the_listening_thread:
                try:
                    data_from_socket = connection.recv(1024)
                    if len(data_from_socket) != 0:
                        self.data_buffer += data_from_socket.decode('utf-8')
                    else:
                        break
                    while '\n' in self.data_buffer:
                        pos = self.data_buffer.find('\n')
                        command = self.data_buffer[0:pos].strip('\r')
                        self.data_buffer = self.data_buffer[pos + 1:]
                        process(command, connection)
                except socket.timeout:
                    continue
                except socket.error:
                    if connection is not None:
                        connection.close()
                    break
        Thread(target=handle).start()
        connection.send(bytes('>', 'utf-8'))

    def tcp_listen_handle(self, port=23, connects=5, timeout=2):
        """This is running in its own thread."""
        sock = socket.socket()
        sock.settimeout(timeout)
        sock.bind(('', port))
        sock.listen(connects)  # We accept more than one connection.
        while self.keep_running_the_listening_thread:
            connection = None
            try:
                connection, addr = sock.accept()
                address, port = addr
                if address != '127.0.0.1':  # Only permit localhost.
                    connection.close()
                    continue
                # Make a thread that deals with that stuff. We only do listening.
                connection.settimeout(timeout)
                self.handle_tcp_connection_in_another_thread(connection, addr)
            except socket.timeout:
                pass
            except OSError:
                # Some other error.
                if connection is not None:
                    connection.close()
        sock.close()

c = ConsoleSocket()

def killsocket():
    time.sleep(20)
    c.stop()

Thread(target=killsocket).start()
This launches a listener thread for connections on port 23 (telnet); when you connect, it passes that connection off to another thread. It also starts a killsocket thread that shuts the various threads down and lets them die peacefully (for demonstration purposes). You cannot, however, drive this from the same program with input(), because you'd need input() to know what to send to the server, which recreates the original problem.

Python DBAPI time out for connections?

I was attempting to test for connection failure, and unfortunately it's not failing if the IP address of the host is firewalled.
This is the code:
def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst+":"+prt, user=usr, password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except pgdb.Error as e:
        logger.exception("Error trying to connect to the server.")
        return False

if self.get_connection(conn_data):
    # Do stuff here:
If I try to connect to a known server but give an incorrect user name, it will trigger the exception and fail.
However, if I try to connect to a machine that does not respond (firewalled), it never gets past self.conn = pgdb.connect().
How do I wait or test for a timeout, rather than have my app appear to hang when a user mistypes an IP address?
What you are experiencing is the pain of firewalls, and the timeout is the normal TCP timeout.
You can usually pass a timeout argument to the connect function. If it doesn't exist, you can fall back to a socket-level default timeout:
import socket
socket.setdefaulttimeout(10) # sets timeout to 10 seconds
This applies the setting to every (socket-based) connection you make, and connection attempts will fail after 10 seconds of waiting.
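A minimal sketch of how that could be wired into the question's get_connection. Note the assumption: this only helps if the driver opens its sockets through Python's socket module; a driver built directly on a C library such as libpq may ignore the default and need its own connect_timeout-style parameter instead.

import logging
import socket
import pgdb  # PyGreSQL DB-API module used in the question

logger = logging.getLogger(__name__)
socket.setdefaulttimeout(10)  # must be set before the connection attempt

def get_connection(self, conn_data):
    rtu, hst, prt, usr, pwd, db = conn_data
    try:
        self.conn = pgdb.connect(host=hst + ":" + prt, user=usr,
                                 password=pwd, database=db)
        self.cur = self.conn.cursor()
        return True
    except (pgdb.Error, OSError) as e:
        # If the driver honors the default timeout, a firewalled host
        # surfaces here after ~10 seconds instead of hanging.
        logger.exception("Error trying to connect to the server.")
        return False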
