Issues with socket programming - Python

I am doing a client-server project for my college project;
we have to allocate the login to the client.
The client system requests its status every 2 seconds (to check whether the client is locked or unlocked), and the server accepts the request and replies with the client's status.
But the problem is that the server thread is not responding to the client's requests.
CLIENT THREAD:
def checkPort():
    while True:
        try:
            s = socket.socket()
            s.connect((host, port))
            s.send('pc1')              # send PC name to the server
            status = s.recv(1024)      # receive the status from the server
            if status == "unlock":
                disableInterrupts()    # system is unlocked
            else:
                enableInterrupts()     # system is locked
            time.sleep(5)
            s.close()
        except Exception:
            pass
SERVER THREAD:
def check_port():
    while True:
        try:
            print "hello loop is repeating"
            conn, addr = s.accept()
            data = conn.recv(1024)
            if exit_on_click == 1:
                break
            if any(sublist[0] == data for sublist in available_sys):
                print "locked"
                conn.send("lock")
            elif any(sublist[0] == data for sublist in occupied_sys):
                conn.send("unlock")
                print "unlocked"
            else:
                print "added to gui for first time"
                available_sys.append([data, addr[0], nameText, usnText, branchText])
                availSysList.insert('end', data)
        except Exception:
            pass
But my problem is that the server thread does not execute more than twice,
so it cannot accept a client request more than once.
Can't we handle multiple client sockets using a single server socket?
How do I handle multiple client requests from the server?
Thanks for any help!

It's because your server will block waiting for a new connection on this line:
conn, addr = s.accept()
Calls like .accept() and .recv() are blocking calls that hold up the process.
You need to consider an alternative design, in which you either:
have one process per connection (heavyweight, usually overkill),
have one thread per connection (lighter than a process per connection, but it still won't scale to many clients), or
have a non-blocking design that serves multiple clients and reads/writes without blocking execution.
To achieve the first, look at multiprocessing; for the second, threading. The third is slightly more complicated to get your head around but will yield the best results; the go-to library for event-driven code in Python is Twisted, but there are others like
gevent
tulip (the project that became asyncio)
tornado
And many more that I haven't listed here.
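For the thread-per-connection option, here is a minimal sketch (not part of the original question's code; the lock/unlock decision is a placeholder) of how the server side could look:
# A minimal sketch (Python 3) of the thread-per-connection idea applied to the
# question above. The lock/unlock decision is a placeholder; plug in your own
# available_sys / occupied_sys lookup.
import socket
import threading

def handle_client(conn, addr):
    try:
        data = conn.recv(1024)              # PC name sent by the client
        status = b"unlock"                  # placeholder decision
        conn.send(status)
    finally:
        conn.close()

def serve(host="", port=12345):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(5)
    while True:
        conn, addr = s.accept()             # blocks, but only this acceptor loop
        t = threading.Thread(target=handle_client, args=(conn, addr))
        t.daemon = True
        t.start()                           # each client is served in its own thread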

Here's a full example of implementing a threaded server. It's fully functional and comes with the benefit of using SSL as well. Further, I use threading event objects to signal another class object after storing my received data in a database.
Please note that _sni and _cams_db are additional modules purely of my own. If you want to see the _sni module (it provides SNI support for pyOpenSSL), let me know.
What follows is a snippet from camsbot.py; there's a whole lot more that far exceeds the scope of this question. What I've built is a centralized message relay system. It listens on tcp/2345 and accepts SSL connections. Each connection passes messages into the system. Short-lived connections connect, pass a message, and disconnect. Long-lived connections pass numerous messages after connecting. Messages are stored in a database, and a threading.Event() object (attached to the DB class) is set to tell the bot to poll the database for new messages and relay them.
The example below shows:
how to set up a threaded TCP server
how to pass information from the listener to the accept handler, such as config data
In addition, this example also shows:
how to employ an SSL socket
how to do some basic certificate validations
how to cleanly wrap and unwrap SSL from a TCP socket
how to use poll() on the socket instead of select()
db.pending is a threading.Event() object in _cams_db.py.
In the main process we start another thread that waits on the pending object with db.pending.wait(). This makes that thread wait until another thread calls db.pending.set(). Once it is set, our waiting thread immediately wakes up and continues to work. When our waiting thread is done, it calls db.pending.clear() and goes back to the beginning of the loop, where it starts waiting again with db.pending.wait():
while True:
    db.pending.wait()
    # after waking up, do work. for example, we wait for incoming messages to
    # be stored in the database. the threaded server will call db.pending.set()
    # which will wake us up. we'll poll the DB for new messages, relay them, clear
    # our event flag and go back to waiting.
    # ...
    db.pending.clear()
snippet from camsbot.py:
import sys, os, time, datetime, threading, select, logging, logging.handlers
import configparser, traceback, re, socket, hashlib

# local .py
sys.path.append('/var/vse/python')
import _util, _webby, _sni, _cams_db, _cams_threaded_server, _cams_bot

# ...

def start_courier(config):
    # default values
    host = '::'
    port = 2345

    configp = config['configp']
    host = configp.get('main', 'relay msp hostport')
    # require ipv6 addresses be specified in [xx:xx:xx] notation, therefore
    # it is safe to look for :nnnn at the end
    if ':' in host and not host.endswith(']'):
        port = host.split(':')[-1]
        try:
            port = int(port, 10)
        except:
            port = 2345
        host = host.split(':')[:-1][0]

    server = _cams_threaded_server.ThreadedTCPServer(
        (host, port), _cams_threaded_server.ThreadedTCPRequestHandler, config)
    t = threading.Thread(target=server.serve_forever, name='courier')
    t.start()
_cams_threaded_server.py:
import socket, socketserver, select, datetime, time, threading
import sys, struct

from OpenSSL.SSL import SSLv23_METHOD, SSLv3_METHOD, TLSv1_METHOD, OP_NO_SSLv2
from OpenSSL.SSL import VERIFY_NONE, VERIFY_PEER, VERIFY_FAIL_IF_NO_PEER_CERT, Context, Connection
from OpenSSL.SSL import FILETYPE_PEM
from OpenSSL.SSL import WantWriteError, WantReadError, WantX509LookupError, ZeroReturnError, SysCallError
from OpenSSL.crypto import load_certificate
from OpenSSL import SSL

# see note at beginning of answer
import _sni, _cams_db

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass, config):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET6
        self.connected = []
        self.logger = config['logger']
        self.config = config

        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

        sc = Context(TLSv1_METHOD)
        sc.set_verify(VERIFY_PEER | VERIFY_FAIL_IF_NO_PEER_CERT, _sni.verify_cb)
        sc.set_tlsext_servername_callback(_sni.pick_certificate)
        self.sc = sc

        self.server_bind()
        self.server_activate()
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        config = self.server.config
        logger = self.server.logger
        connected = self.server.connected
        sc = self.server.sc

        try:
            self.peer_hostname = socket.gethostbyaddr(socket.gethostbyname(self.request.getpeername()[0]))[0]
        except:
            self.peer_hostname = '!' + self.request.getpeername()[0]
        logger.info('peer: {}'.format(self.peer_hostname))

        ssl_s = Connection(sc, self.request)
        ssl_s.set_accept_state()
        try:
            ssl_s.do_handshake()
        except:
            t, v, tb = sys.exc_info()
            logger.warn('handshake failed {}'.format(v))
        ssl_s.setblocking(True)
        self.ssl_s = ssl_s

        try:
            peercert = ssl_s.get_peer_certificate()
        except:
            peercert = False
            t, v, tb = sys.exc_info()
            logger.warn('SSL get peer cert failed: {}'.format(v))
        if not peercert:
            logger.warn('No peer certificate')
        else:
            acl = config['configp']['main'].get('client cn acl', '').split(' ')
            cert_subject = peercert.get_subject().CN
            logger.info('Looking for {} in acl: {}'.format(cert_subject, acl))
            if cert_subject in acl:
                logger.info('{} is permitted'.format(cert_subject))
            else:
                logger.warn('client CN not approved')

        # it's ok to block here, every socket has its own thread
        ssl_s.setblocking(True)

        self.db = config['db']
        msgcount = 0

        p = select.poll()
        # don't want writable, just readable
        p.register(self.request, select.POLLIN | select.POLLPRI | select.POLLERR | select.POLLHUP | select.POLLNVAL)

        peername = ssl_s.getpeername()
        x = peername[0]
        if x.startswith('::ffff:'):
            x = x[7:]
        peer_ip = x
        try:
            host = socket.gethostbyaddr(x)[0]
        except:
            host = peer_ip
        logger.info('{}/{}:{} connected'.format(host, peer_ip, peername[1]))
        connected.append([host, peername[1]])
        if peercert:
            threading.current_thread().setName('{}/port={}/CN={}'.format(host, peername[1], peercert.get_subject().CN))
        else:
            threading.current_thread().setName('{}/port={}'.format(host, peername[1]))

        sockclosed = False
        while not sockclosed:
            keepreading = True
            #logger.debug('starting 30 second timeout for poll')
            pe = p.poll(30.0)
            if not pe:
                # empty list means poll timeout
                # for SSL sockets it means WTF. we get an EAGAIN like return even if the socket is blocking
                continue
            logger.debug('poll indicates: {}'.format(pe))
            #define SSL_NOTHING 1
            #define SSL_WRITING 2
            #define SSL_READING 3
            #define SSL_X509_LOOKUP 4
            while keepreading and not sockclosed:
                data, sockclosed, keepreading = self._read_ssl_data(2, head=True)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                plen = struct.unpack('H', data)[0]
                data, sockclosed, keepreading = self._read_ssl_data(plen)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                # send thank you, ignore any errors since we appear to have gotten
                # the message
                try:
                    self.ssl_s.sendall(b'ty')
                except:
                    pass
                # extract the timestamp
                message_ts = data[0:8]
                msgtype = chr(data[8])
                message = data[9:].decode()
                message_ts = struct.unpack('d', message_ts)[0]
                message_ts = datetime.datetime.utcfromtimestamp(message_ts).replace(tzinfo=datetime.timezone.utc)
                self.db.enqueue(config['group'], peer_ip, msgtype, message, message_ts)
                self.db.pending.set()

        # we're recommended to use the return socket object for any future operations rather than the original
        try:
            s = ssl_s.unwrap()
            s.close()
        except:
            pass
        connected.remove([host, peername[1]])
        t_name = threading.current_thread().getName()
        logger.debug('disconnect: {}'.format(t_name))
    def _read_ssl_data(self, wantsize=16384, head=False):
        _w = ['WANT_NOTHING', 'WANT_READ', 'WANT_WRITE', 'WANT_X509_LOOKUP']
        logger = self.server.logger
        data = b''
        sockclosed = False
        keepreading = True
        while len(data) < wantsize and keepreading and not sockclosed:
            rlen = wantsize - len(data)
            try:
                w, wr = self.ssl_s.want(), self.ssl_s.want_read()
                #logger.debug(' want({}) want_read({})'.format(_w[w], wr))
                x = self.ssl_s.recv(rlen)
                #logger.debug(' recv(): {}'.format(x))
                if not (x or len(x)):
                    raise ZeroReturnError
                data += x
                if not (len(x) == len(data) == wantsize):
                    logger.info(' read={}, len(data)={}, plen={}'.format(len(x), len(data), wantsize))
            except WantReadError:
                # poll(), when ready, read more
                keepreading = False
                logger.info(' got WantReadError')
                continue
            except WantWriteError:
                # poll(), when ready, write more
                keepreading = False
                logger.info(' got WantWriteError')
                continue
            except ZeroReturnError:
                # socket got closed, a '0' bytes read also means the same thing
                keepreading = False
                sockclosed = True
                logger.info(' ZRE, socket closed normally')
                continue
            except SysCallError:
                keepreading = False
                sockclosed = True
                t, v, tb = sys.exc_info()
                if v.args[0] == -1:  # normal EOF
                    logger.info(' EOF found, keepreading=False')
                else:
                    logger.info('{} terminated session abruptly while reading plen'.format(self.peer_hostname))
                    logger.info('t: {}'.format(t))
                    logger.info('v: {}'.format(v))
                continue
            except:
                t, v, tb = sys.exc_info()
                logger.warning(' unexpected error: {}'.format(v))
                raise
        if not head and not len(data) == wantsize:
            logger.warn(' short read {} of {}'.format(len(data), wantsize))
        return data, sockclosed, keepreading

Let's start with a bare-bones threaded TCP server.
import socket
import socketserver
import threading

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is your accepted socket; do all your .recv() and .send() on it
        s = self.request
        request = s.recv(1024)
        # decide locked or unlocked. this example arbitrarily sends back 'locked'
        s.send(b'locked')
        # we're done, close the socket and exit with a default return of None
        s.close()
OK, start your threaded server with this in your main() function:
server = ThreadedTCPServer(('127.0.0.1', 1234), ThreadedTCPRequestHandler)
t = threading.Thread(target=server.serve_forever, name='optional_name')
t.start()
Now you can let the threading module handle the semantics of concurrency and not worry about it.
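For completeness, here is a minimal sketch of a client loop you could point at the server above; the host/port and the 'pc1' payload mirror the original question and are assumptions, not part of this answer:
# A minimal sketch of a client that polls the threaded server above every
# couple of seconds. Adjust host, port and payload to your setup.
import socket
import time

def poll_status(host='127.0.0.1', port=1234):
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            s.send(b'pc1')                 # identify this PC to the server
            status = s.recv(1024)          # e.g. b'locked'
            print('server says:', status)
        finally:
            s.close()
        time.sleep(2)                      # poll interval from the question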

You might want to take a look at 0MQ and concurrent.futures. 0MQ ships a Tornado-based event loop in its library and reduces the complexity of socket programming; concurrent.futures is a high-level interface over threading or multiprocessing.
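As a rough illustration of the concurrent.futures side (a sketch only, not from the original answer; names and port are made up), you can keep a blocking accept loop and hand each accepted connection to a thread pool:
# Sketch: a blocking accept loop that dispatches each connection to a
# ThreadPoolExecutor. The handler is a stand-in; replace it with your own logic.
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    # stand-in handler: echo one request back and close
    with conn:
        data = conn.recv(1024)
        conn.sendall(b'ack: ' + data)

def serve(host='', port=12345, workers=8):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            conn, addr = srv.accept()
            pool.submit(handle, conn, addr)   # the pool caps concurrency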

You can see different concurrent server approaches at
https://bitbucket.org/arco_group/upper/src
These will help you choose the approach that suits you best.
Cheers

Related

Should a TCP client be able to pause the server, when the TCP server reads a non-blocking socket

Overview
I have a simple question with code below. Hopefully I didn't make a mistake in the code.
I'm a network engineer, and I need to test certain Linux behavior of our business application's keepalives during network outages (I'm going to insert some iptables rules later to mess with the connection; first I want to make sure I got the client and server right).
As part of a network failure test I'm conducting, I wrote a non-blocking Python TCP client and server that are supposed to blindly send messages to each other in a loop. To understand what's happening I am using loop counters.
The server's loop should be relatively straightforward. I loop through every fd that select says is ready. I never even import sleep anywhere in my server's code. From this perspective, I don't expect the server's code to pause while it loops over the client's socket, but for some reason the server code pauses intermittently (more detail below).
I initially didn't put a sleep in the client's loop. Without a sleep on the client side, the server and client seem to be as efficient as I want. However, when I put a time.sleep(1) statement after the client does an fd.send() to the server, the TCP server code intermittently pauses while the client is sleeping.
My questions:
Should I be able to write a single-threaded Python TCP server that doesn't pause when the client hits time.sleep() in the client's fd.send() loop? If so, what am I doing wrong? <- ANSWERED
If I wrote this test code correctly and the server shouldn't pause, why is the TCP server intermittently pausing while it polls the client's connection for data?
Reproducing the scenario
I'm running this on two RHEL6 linux machines. To reproduce the issue...
Open two different terminals.
Save the client and server scripts in different files
Change the shebang path to your local python (I'm using Python 2.7.15)
Change the SERVER_HOSTNAME and SERVER_DOMAIN in the client's code to be the hostname and domain of the server you're running this on
Start the server first, then start the client.
After the client connects, you'll see messages as shown in EXHIBIT 1 scrolling quickly in the server's terminal. After a few seconds, the scrolling pauses intermittently when the client hits time.sleep(). I don't expect to see those pauses, but maybe I've misunderstood something.
EXHIBIT 1
---
LOOP_COUNT 0
---
LOOP_COUNT 1
---
LOOP_COUNT 2
---
LOOP_COUNT 3
CLIENTMSG: 'client->server 0'
---
LOOP_COUNT 4
---
LOOP_COUNT 5
---
LOOP_COUNT 6
---
LOOP_COUNT 7
---
LOOP_COUNT 8
---
LOOP_COUNT 9
---
LOOP_COUNT 10
---
LOOP_COUNT 11
---
Summary resolution
If I wrote this test code correctly and the server shouldn't pause, why is the TCP server intermittently pausing while it polls the client's connection for data?
Answering my own question. My blocking problem was caused by calling select() with a non-zero timeout.
When I changed select() to use a zero-second timeout, I got expected results.
Final non-blocking code (incorporating suggestions in answers):
tcp_server.py
#!/usr/bin/python -u
from socket import AF_INET, SOCK_STREAM, SO_REUSEADDR, SOL_SOCKET
from socket import MSG_DONTWAIT
#from socket import MSG_OOB <--- for send()
from socket import socket
import socket as socket_module
import select
import errno
import fcntl
import time
import sys
import os

def get_errno_info(e, op='', debugmsg=False):
    """Return verbose information from errno errors, such as errors returned by python socket()"""
    VALID_OP = set(['accept', 'connect', 'send', 'recv', 'read', 'write'])
    assert op.lower() in VALID_OP, "op must be: {0}".format(
        ','.join(sorted(VALID_OP)))
    ## ref: man 3 errno (in linux)... other systems may be man 2 intro
    ## also see https://docs.python.org/2/library/errno.html
    try:
        retval_int = int(e.args[0])                                   # Example: 32
        retval_str = os.strerror(e.args[0])                           # Example: 'Broken pipe'
        retval_code = errno.errorcode.get(retval_int, 'MODULEFAIL')   # Ex: EPIPE
    except:
        ## I don't expect to get here unless something broke in python errno...
        retval_int = -1
        retval_str = '__somethingswrong__'
        retval_code = 'BADFAIL'
    if debugmsg:
        print "DEBUG: Can't {0}() on socket (errno:{1}, code:{2} / {3})".format(
            op, retval_int, retval_code, retval_str)
    return retval_int, retval_str, retval_code

host = ''
port = 6667      # IRC service
DEBUG = True

serv_sock = socket(AF_INET, SOCK_STREAM)
serv_sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
serv_sock.bind((host, port))
serv_sock.listen(5)
#fcntl.fcntl(serv_sock, fcntl.F_SETFL, os.O_NONBLOCK) # Make the socket non-blocking
serv_sock.setblocking(False)
sock_list = [serv_sock]
from_client_str = '__DEFAULT__'
to_client_idx = 0
loop_count = 0
need_send_select = False

while True:
    if need_send_select:
        # Only do this after send() EAGAIN or EWOULDBLOCK...
        send_sock_list = sock_list
    else:
        send_sock_list = []
    #print "---"
    #print "LOOP_COUNT", loop_count
    recv_ready_list, send_ready_list, exception_ready = select.select(
        sock_list, send_sock_list, [], 0.0)  # Last float is the select() timeout...

    ## Read all sockets which are input-ready... might be client or server...
    for sock_fd in recv_ready_list:
        # accept() if we're reading on the server socket...
        if sock_fd is serv_sock:
            try:
                clientsock, clientaddr = sock_fd.accept()
            except socket_module.error, e:
                errstr, errint, errcode = get_errno_info(e, op='accept',
                    debugmsg=DEBUG)
            assert sock_fd.gettimeout() == 0.0, "client socket should be in non-blocking mode"
            sock_list.append(clientsock)
        # read input from the client socket...
        else:
            try:
                from_client_str = sock_fd.recv(1024, MSG_DONTWAIT)
                if from_client_str == '':
                    # Client closed the socket...
                    print "CLIENT CLOSED SOCKET"
                    sock_list.remove(sock_fd)
            except socket_module.error, e:
                errstr, errint, errcode = get_errno_info(e, op='recv',
                    debugmsg=DEBUG)
                if errcode == 'EAGAIN' or errcode == 'EWOULDBLOCK':
                    # socket unavailable to read()
                    continue
                elif errcode == 'ECONNRESET' or errcode == 'EPIPE':
                    # Client closed the socket...
                    sock_list.remove(sock_fd)
                else:
                    print "UNHANDLED SOCKET ERROR", errcode, errint, errstr
                    sys.exit(1)
            print "from_client_str: '{0}'".format(from_client_str)

    ## Adding dynamic_list, per input from EJP, below...
    if need_send_select is False:
        dynamic_list = sock_list
    else:
        dynamic_list = send_ready_list
    ## NOTE: socket code shouldn't walk this list unless a write is pending...
    ## broadcast the same message to all clients...
    for sock_fd in dynamic_list:
        ## Ignore server's listening socket...
        if sock_fd is serv_sock:
            ## Only send() to accept()ed sockets...
            continue
        try:
            to_client_str = "server->client: {0}\n".format(to_client_idx)
            send_retval = sock_fd.send(to_client_str, MSG_DONTWAIT)
            ## send() returns the number of bytes written, on success
            ## disabling assert check on sent bytes while using MSG_DONTWAIT
            #assert send_retval==len(to_client_str)
            to_client_idx += 1
            need_send_select = False
        except socket_module.error, e:
            errstr, errint, errcode = get_errno_info(e, op='send',
                debugmsg=DEBUG)
            if errcode == 'EAGAIN' or errcode == 'EWOULDBLOCK':
                need_send_select = True
                continue
            elif errcode == 'ECONNRESET' or errcode == 'EPIPE':
                # Client closed the socket...
                sock_list.remove(sock_fd)
            else:
                print "FATAL UNHANDLED SOCKET ERROR", errcode, errint, errstr
                sys.exit(1)
    loop_count += 1
tcp_client.py
#!/usr/bin/python -u
from socket import AF_INET, SOCK_STREAM
from socket import MSG_DONTWAIT   # non-blocking send/recv; see man 2 recv
from socket import gethostname, socket
import socket as socket_module
import select
import fcntl
import errno
import time
import sys
import os

## NOTE: Using this script to simulate a scheduler
SERVER_HOSTNAME = 'myServerHostname'
SERVER_DOMAIN = 'mydomain.local'
PORT = 6667
DEBUG = True

def get_errno_info(e, op='', debugmsg=False):
    """Return verbose information from errno errors, such as errors returned by python socket()"""
    VALID_OP = set(['accept', 'connect', 'send', 'recv', 'read', 'write'])
    assert op.lower() in VALID_OP, "op must be: {0}".format(
        ','.join(sorted(VALID_OP)))
    ## ref: man 3 errno (in linux)... other systems may be man 2 intro
    ## also see https://docs.python.org/2/library/errno.html
    try:
        retval_int = int(e.args[0])                                   # Example: 32
        retval_str = os.strerror(e.args[0])                           # Example: 'Broken pipe'
        retval_code = errno.errorcode.get(retval_int, 'MODULEFAIL')   # Ex: EPIPE
    except:
        ## I don't expect to get here unless something broke in python errno...
        retval_int = -1
        retval_str = '__somethingswrong__'
        retval_code = 'BADFAIL'
    if debugmsg:
        print "DEBUG: Can't {0}() on socket (errno:{1}, code:{2} / {3})".format(
            op, retval_int, retval_code, retval_str)
    return retval_int, retval_str, retval_code

connect_finished = False
while not connect_finished:
    try:
        c2s = socket(AF_INET, SOCK_STREAM)   # Client to server socket...
        # Set socket non-blocking
        #fcntl.fcntl(c2s, fcntl.F_SETFL, os.O_NONBLOCK)
        c2s.connect(('.'.join((SERVER_HOSTNAME, SERVER_DOMAIN,)), PORT))
        c2s.setblocking(False)
        assert c2s.gettimeout() == 0.0, "c2s socket should be in non-blocking mode"
        connect_finished = True
    except socket_module.error, e:
        errstr, errint, errcode = get_errno_info(e, op='connect',
            debugmsg=DEBUG)
        if errcode == 'EINPROGRESS':
            pass

to_srv_idx = 0
need_send_select = False
while True:
    socket_list = [c2s]
    # Get the list sockets which can: take input, output, etc...
    if need_send_select:
        # Only do this after send() EAGAIN or EWOULDBLOCK...
        send_sock_list = socket_list
    else:
        send_sock_list = []
    recv_ready_list, send_ready_list, exception_ready = select.select(
        socket_list, send_sock_list, [])

    for sock_fd in recv_ready_list:
        assert sock_fd is c2s, "Strange socket failure here"
        # incoming message from remote server
        try:
            from_srv_str = sock_fd.recv(1024, MSG_DONTWAIT)
        except socket_module.error, e:
            ## https://stackoverflow.com/a/16745561/667301
            errstr, errint, errcode = get_errno_info(e, op='recv',
                debugmsg=DEBUG)
            if errcode == 'EAGAIN' or errcode == 'EWOULDBLOCK':
                # Busy, try again later...
                print "recv() BLOCKED"
                continue
            elif errcode == 'ECONNRESET' or errcode == 'EPIPE':
                # Server ended normally...
                sys.exit(0)
        ## NOTE: if we get this far, we successfully received from_srv_str.
        ## Anything caught above, is some kind of fail...
        print "from_srv_str: {0}".format(from_srv_str)

    ## Adding dynamic_list, per input from EJP, below...
    if need_send_select is False:
        dynamic_list = socket_list
    else:
        dynamic_list = send_ready_list
    for sock_fd in dynamic_list:
        # outgoing message to remote server
        if sock_fd is c2s:
            try:
                to_srv_str = 'client->server {0}'.format(to_srv_idx)
                sock_fd.send(to_srv_str, MSG_DONTWAIT)
                ##
                time.sleep(1)   ## Client blocks the server here... Why????
                ##
                to_srv_idx += 1
                need_send_select = False
            except socket_module.error, e:
                errstr, errint, errcode = get_errno_info(e, op='send',
                    debugmsg=DEBUG)
                if errcode == 'EAGAIN' or errcode == 'EWOULDBLOCK':
                    ## Try to send() later...
                    print "send() BLOCKED"
                    need_send_select = True
                    continue
                elif errcode == 'ECONNRESET' or errcode == 'EPIPE':
                    # Server ended normally...
                    sys.exit(0)
Original Question Code:
tcp_server.py
#!/usr/bin/python -u
from socket import AF_INET, SOCK_STREAM, SO_REUSEADDR, SOL_SOCKET
#from socket import MSG_OOB <--- for send()
from socket import socket
import socket as socket_module
import select
import fcntl
import os

host = ''
port = 9997
serv_sock = socket(AF_INET, SOCK_STREAM)
serv_sock.setsockopt(SOL_SOCKET, SOCK_STREAM, 1)
serv_sock.bind((host, port))
serv_sock.listen(5)
fcntl.fcntl(serv_sock, fcntl.F_SETFL, os.O_NONBLOCK) # Make the socket non-blocking
sock_list = [serv_sock]
from_client_str = '__DEFAULT__'
to_client_idx = 0
loop_count = 0

while True:
    recv_ready_list, send_ready_list, exception_ready = select.select(sock_list, sock_list,
        [], 5)
    print "---"
    print "LOOP_COUNT", loop_count
    ## Read all sockets which are input-ready... might be client or server...
    for sock_fd in recv_ready_list:
        # accept() if we're reading on the server socket...
        if sock_fd is serv_sock:
            clientsock, clientaddr = sock_fd.accept()
            sock_list.append(clientsock)
        # read input from the client socket...
        else:
            try:
                from_client_str = sock_fd.recv(4096)
                if from_client_str == '':
                    # Client closed the socket...
                    print "CLIENT CLOSED SOCKET"
                    sock_list.remove(sock_fd)
            except socket_module.error, e:
                print "WARNING RECV FAIL"
            print "from_client_str: '{0}'".format(from_client_str)
    for sock_fd in send_ready_list:
        if sock_fd is not serv_sock:
            try:
                to_client_str = "server->client: {0}\n".format(to_client_idx)
                sock_fd.send(to_client_str)
                to_client_idx += 1
            except socket_module.error, e:
                print "TO CLIENT SEND ERROR", e
    loop_count += 1
tcp_client.py
#!/usr/bin/python -u
from socket import AF_INET, SOCK_STREAM
from socket import gethostname, socket
import socket as socket_module
import select
import fcntl
import errno
import time
import sys
import os

## NOTE: Using this script to simulate a scheduler
SERVER_HOSTNAME = 'myHostname'
SERVER_DOMAIN = 'mydomain.local'
PORT = 9997

def handle_socket_error_continue(e):
    ## non-blocking socket info from:
    ## https://stackoverflow.com/a/16745561/667301
    print "HANDLE_SOCKET_ERROR_CONTINUE"
    err = e.args[0]
    if (err == errno.EAGAIN) or (err == errno.EWOULDBLOCK):
        print 'CLIENT DEBUG: No data input from server'
        return True
    else:
        print 'FROM SERVER RECV ERROR: {0}'.format(e)
        sys.exit(1)

c2s = socket(AF_INET, SOCK_STREAM)   # Client to server socket...
c2s.connect(('.'.join((SERVER_HOSTNAME, SERVER_DOMAIN,)), PORT))
# Set socket non-blocking...
fcntl.fcntl(c2s, fcntl.F_SETFL, os.O_NONBLOCK)
to_srv_idx = 0

while True:
    socket_list = [c2s]
    # Get the list sockets which can: take input, output, etc...
    recv_ready_list, send_ready_list, exception_ready = select.select(
        socket_list, socket_list, [])
    for sock_fd in recv_ready_list:
        assert sock_fd is c2s, "Strange socket failure here"
        # incoming message from remote server
        try:
            from_srv_str = sock_fd.recv(4096)
        except socket_module.error, e:
            ## https://stackoverflow.com/a/16745561/667301
            err_continue = handle_socket_error_continue(e)
            if err_continue is True:
                continue
        else:
            if len(from_srv_str) == 0:
                print "SERVER CLOSED NORMALLY"
                sys.exit(0)
        ## NOTE: if we get this far, we successfully received from_srv_str.
        ## Anything caught above, is some kind of fail...
        print "from_srv_str: {0}".format(from_srv_str)
    for sock_fd in send_ready_list:
        # outgoing message to remote server
        if sock_fd is c2s:
            #to_srv_str = raw_input('Send to server: ')
            try:
                to_srv_str = 'client->server {0}'.format(to_srv_idx)
                sock_fd.send(to_srv_str)
                ##
                time.sleep(1)   ## Client blocks the server here... Why????
                ##
                to_srv_idx += 1
            except socket_module.error, e:
                print "TO SERVER SEND ERROR", e
TCP sockets are almost always ready for writing, unless their socket send buffer is full.
It is therefore incorrect to always select on writability for a socket. You should only do so after you've encountered a send failure due to EAGAIN/EWOULDBLOCK. Otherwise your server will spin mindlessly processing writeable sockets, which will usually be all of them.
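A rough sketch of that advice (this is not code from the answer itself): track which sockets actually have queued output, and pass only those to select()'s writable list.
# Sketch: only ask select() about writability for sockets that actually have
# queued output. 'pending' maps socket -> bytes still to be sent.
import select

def serve_once(listening_sock, pending):
    sockets = [listening_sock] + list(pending.keys())
    want_write = [s for s, buf in pending.items() if buf]   # only these
    readable, writable, _ = select.select(sockets, want_write, [], 1.0)
    for s in writable:
        sent = s.send(pending[s])        # may be a partial write
        pending[s] = pending[s][sent:]   # keep the unsent remainder queued
    return readable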
However, when I put a time.sleep(1) statement after the client does an
fd.send() to the server, the TCP server code intermittently pauses
while the client is sleeping.
AFAICT from running the provided code (nice self-contained example, btw), the server is behaving as intended.
In particular, the semantics of the select() call are that select() shouldn't return until there is something for the thread to do. Having the thread block inside select() is a good thing when there is nothing that the thread can do right now anyway, since it prevents the thread from spinning the CPU for no reason.
So in this case, your server program has told select() that it wants select() to return only when at least one of the following conditions is true:
serv_sock is ready-for-read (which is to say, a new client wants to connect to the server now)
serv_sock is ready-for-write (I don't believe this ever actually happens on a listening-socket, so this criterion can probably be ignored)
clientsock is ready-for-read (that is, the client has sent some bytes to the server and they are waiting in clientsock's buffer for the server thread to recv() them)
clientsock is ready-for-write (that is, clientsock has some room in its outgoing-data-buffer that the server could send() data into if it wants to send some data back to the client)
Five seconds have passed since the call to select() started blocking.
I see (via print-debugging) that when your server program blocks, it is blocking inside select(), which indicates that none of the 5 conditions above are being met during the blocking-period.
Why is that? Well, let's go down the list.
Not met because no other clients are trying to connect
Not met because this never happens
Not met because the server has read all of the data that the connected client has sent (and since the connected client is itself sleeping, it's not sending any more data)
Not met because the server has filled up the outgoing-data buffer of its clientsock (because the client program is sleeping, it's only reading the data coming from the server intermittently, and the TCP layer guarantees lossless/in-order transmission, so once clientsock's outgoing-data-buffer is full, clientsock won't select-as-ready-for-write unless/until the client reads at least some data from its end of the connection)
Not met because 5 seconds haven't elapsed yet since select() started blocking.
So is this behavior actually a problem for the server? In fact it is not, because the server will still be responsive to any other clients that connect to the server. In particular, select() will still return right away whenever serv_sock or any other client's socket select()s as ready-for-read (or ready-for-write) and so the server can handle the other clients just fine while waiting for your hacked/slow client to wake up.
The hacked/slow client might be a problem for the user, but there's nothing the server can really do about that (short of forcibly disconnecting the client's TCP connection, or maybe printing out a log message requesting that someone debug the connected client program, I suppose :)).
I agree with EJP, btw -- selecting on ready-for-write should only be done on sockets that you actually want to write some data to. If you don't actually have any desire to write to the socket ASAP, then it's pointless and counterproductive to instruct select() to return as soon as that socket is ready-for-write: the problem with doing so is that you're likely to spin the CPU a lot whenever any socket's outgoing-data-buffer is less-than-full (which in most applications, is most of the time!). The user-visible symptom of the problem would be that your server program is using up 100% of a CPU core even when it ought to be idle or mostly-idle.

Start new process on __init__ (for TCP listener - server)

I'm trying to run a new process for each new instance of class Server. Each Server instance should listen on a specific port. I have this (simplified) code so far:
class Server(object):
    def handle(connection, address):
        print("OK...connected...")
        try:
            while True:
                data = connection.recv(1024)
                if data == "":
                    break
                connection.sendall(data)
        except Exception as e:
            print(e)
        finally:
            connection.close()

    def __init__(self, port, ip):
        self.port = port
        self.ip = ip
        self.socket = socket(AF_INET, SOCK_STREAM)
        self.socket.bind((self.ip, self.port))
        self.socket.listen(1)
        while True:
            print("Listening...")
            conn, address = self.socket.accept()
            process = multiprocessing.Process(target=Pmu.handle, args=(conn, address))
            process.daemon = True
            process.start()

s1 = Server(9001, "127.0.0.1")
s2 = Server(9002, "127.0.0.1")
But when I run this script, only the first server, s1, is running and waiting for a connection. How do I make both servers listen at the same time?
Your current server is effectively a SocketServer.ForkingTCPServer that enters a tight loop in its __init__, forever accepting new connections and creating a new child process for each incoming connection.
The problem is that __init__ never returns, so only one server gets instantiated, one socket gets bound, and only one port will accept new requests.
A common way of solving this type of problem is to move the accept loop into a worker thread. This code would look something like this:
import multiprocessing
import threading
import socket

class Server(object):
    def handle(self, connection, address):
        print("OK...connected...")
        try:
            while True:
                data = connection.recv(1024)
                if data == "":
                    break
                connection.sendall(data)
        except Exception as e:
            print(e)
        finally:
            connection.close()
            print("Connection closed")

    def accept_forever(self):
        while True:
            # Accept a connection on the bound socket and fork a child process
            # to handle it.
            print("Waiting for connection...")
            conn, address = self.socket.accept()
            process = multiprocessing.Process(
                target=self.handle, args=(conn, address))
            process.daemon = True
            process.start()
            # Close the connection fd in the parent, since the child process
            # has its own reference.
            conn.close()

    def __init__(self, port, ip):
        self.port = port
        self.ip = ip
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind((self.ip, self.port))
        self.socket.listen(1)
        # Spin up an acceptor thread
        self.worker = threading.Thread(target=self.accept_forever)
        self.worker.daemon = True
        self.worker.start()

    def join(self):
        # threading.Thread.join() is not interruptible, so tight loop
        # in a sleep-based join
        while self.worker.is_alive():
            self.worker.join(0.5)

# Create two servers that run in the background
s1 = Server(9001, "127.0.0.1")
s2 = Server(9002, "127.0.0.1")

# Wait for servers to shutdown
s1.join()
s2.join()
Note one other change I snuck in here:
# Wait for servers to shutdown
s1.join()
s2.join()
Using the saved reference to the Server's accept worker, we call .join() from the main thread to force things to block while the servers are running. Without this, your main program will exit nearly immediately, due to the workers' .daemon attribute being set.
It's also worth noting that this approach will have some quirks:
Since the handler functions are running in separate processes, you will need to share data structures between them carefully, using Queue, Value, Pipe, and other multiprocessing constructs if they depend on each other (see the sketch after this list).
There is no rate limiting of active concurrent connections; creating a new process for every single request can be expensive, and can create a vector for your service being easily DoSed.
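For instance, here is a minimal sketch (not part of the original answer) of the first point: handler processes report back to the parent through a multiprocessing.Queue rather than touching shared Python objects directly.
# Sketch: handler processes report back through a multiprocessing.Queue,
# since ordinary Python objects are not shared across processes.
import multiprocessing

def handle(conn_id, results):
    # ... handle the connection, then report what happened ...
    results.put((conn_id, 'done'))

if __name__ == '__main__':
    results = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=handle, args=(i, results))
               for i in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        print(results.get())   # collect one result per worker
    for w in workers:
        w.join()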

Python Socket connection class

I'm trying to create a small program that will log information output from a device via TCP
Basically this just streams data out that I want to capture and dump into a database for dealing with later,
but the device reboots, so I need to be able to reconnect when the socket closes without any human interference.
So this is what I have so far:
import socket, time, logging, sys, smtplib   # Import socket module

logging.basicConfig(filename='Tcplogger.log', level=logging.DEBUG, format='%(asctime)s : %(levelname)s : %(message)s')
logging.info('|--------------------------------------|')
logging.info('|--------------- TCP Logger Starting---|')
logging.info('|--------------------------------------|')

host = '127.0.0.01'        # host or IP address
port = 12345               # output port
retrytime = 1              # reconnect time
reconnectattemps = 10      # Number of times to try and reconnect

class TCPLogger:
    def __init__(self):
        logging.debug('****Trying connection****')
        print('****Trying connection****')
        self.initConnection()

    def initConnection(self):
        s = socket.socket()
        try:
            s.connect((host, port))
            logging.debug('****Connected****')
        except IOError as e:
            while 1:
                reconnectcount = 0
                logging.error(format(e.errno) + ' : ' + format(e.strerror))
                while 1:
                    reconnectcount = reconnectcount + 1
                    logging.error('Retrying connection to Mitel attempt : ' + str(reconnectcount))
                    try:
                        s.connect((host, port))
                        connected = True
                        logging.debug('****Connected****')
                    except IOError as e:
                        connected = False
                        logging.error(format(e.errno) + ' : ' + format(e.strerror))
                        if reconnectcount == reconnectattemps:
                            logging.error('******####### Max Reconnect attempts reached logger will Terminate ######******')
                            sys.exit("could Not connect")
                        time.sleep(retrytime)
                    if connected == True:
                        break
                break
        while 1:
            s.recv(1034)

LOGGER = TCPLogger()
Which all works fine on startup: if it tries to connect and the server is not there, it will retry the number of times set by reconnectattemps.
But here is my issue:
while 1:
    s.recv(1034)
When this fails I want to try to reconnect.
I could of course type out or just copy my connection part again, but what I want to be able to do is call a function that handles the connection and the retrying, and hands me back the connection object.
For example, like this:
class tcpclient:
    # set some vars
    # host, port etc....

    def initconnection(self):
        # connect to socket and retry if needed
        # RETURN SOCKET

    def dealwithdata(self):
        initconnection()
        while 1:
            try:
                s.recv()
                # do stuff here, copy to db
            except:
                # log error
                initconnection()
I think this is possible, but I'm really not getting how the class/method system works in Python, so I think I'm missing something here.
FYI, just in case you didn't notice, I'm very new to Python. Any other comments on what I already have are welcome too :)
Thanks
Aj
Recommendation
For this use-case I would recommend something higher-level than sockets. Why? Controlling all these exceptions and errors yourself can be irritating when you just want to retrieve or send data and maintain a connection.
Of course you can achieve what you want with your plain solution, but you end up messing with the code a bit more, I think. Anyway, it will look similar to the class amustafa wrote, with socket errors handled by close/reconnect methods, etc.
Example
I made an example of an easier solution using the asyncore module:
import asyncore
import socket
from time import sleep

class Client(asyncore.dispatcher_with_send):
    def __init__(self, host, port, tries_max=5, tries_delay=2):
        asyncore.dispatcher.__init__(self)
        self.host, self.port = host, port
        self.tries_max = tries_max
        self.tries_done = 0
        self.tries_delay = tries_delay
        self.end = False       # Flag that indicates whether socket should reconnect or quit.
        self.out_buffer = ''   # Buffer for sending.
        self.reconnect()       # Initial connection.

    def reconnect(self):
        if self.tries_done == self.tries_max:
            self.end = True
            return
        print 'Trying connecting in {} sec...'.format(self.tries_delay)
        sleep(self.tries_delay)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            self.connect((self.host, self.port))
        except socket.error:
            pass
        if not self.connected:
            self.tries_done += 1
            print 'Could not connect for {} time(s).'.format(self.tries_done)

    def handle_connect(self):
        self.tries_done = 0
        print 'We connected and can get the stuff done!'

    def handle_read(self):
        data = self.recv(1024)
        if not data:
            return
        # Check for terminator. Can be any action instead of this clause.
        if 'END' in data:
            self.end = True   # Everything went good. Shutdown.
        else:
            print data        # Store to DB or other thing.

    def handle_close(self):
        print 'Connection closed.'
        self.close()
        if not self.end:
            self.reconnect()

Client('localhost', 6666)
asyncore.loop(timeout=1)
The reconnect() method is in a way the core of your case - it's called whenever a connection needs to be made: when the class initializes or the connection breaks.
handle_read() handles any received data; here you log it or do something with it.
You can even send data using the buffer (self.out_buffer += 'message'), which remains untouched after reconnection, so the class will resume sending when connected again.
Setting self.end to True informs the class to quit when possible.
asyncore takes care of exceptions and calls handle_close() when such events occur, which is a convenient way of dealing with connection failures.
You should look at the Python documentation to understand how classes and methods work. The biggest difference between Python methods and methods in most other languages is the addition of the "self" argument. self represents the instance that a method is called against and is automatically fed in by the Python runtime. So:
class TCPClient():
    def __init__(self, host, port, retryAttempts=10):
        # this is the constructor that takes in host and port. retryAttempts is given
        # a default value but can also be fed in.
        self.host = host
        self.port = port
        self.retryAttempts = retryAttempts
        self.socket = None

    def connect(self, attempt=0):
        if attempt < self.retryAttempts:
            # put connecting code here
            if connectionFailed:
                self.connect(attempt + 1)

    def disconnectSocket(self):
        # perform all breakdown operations
        ...
        self.socket = None

    def sendDataToDB(self, data):
        # send data to db
        pass

    def readData(self):
        # read data here
        while True:
            if self.socket is None:
                self.connect()
            ...
Just make sure you properly disconnect the socket and set it to None.
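To tie it together, here is a minimal, hedged sketch of the pattern the question asks for: a helper that connects with retries and hands back a socket, plus a read loop that reconnects whenever recv() fails or returns EOF. Host, port, and retry values are illustrative only.
# Sketch: connect-with-retry helper plus a self-healing read loop.
import socket
import time
import logging

def connect_with_retry(host, port, attempts=10, delay=1):
    for n in range(1, attempts + 1):
        try:
            s = socket.create_connection((host, port))
            logging.debug('connected on attempt %d', n)
            return s
        except IOError as e:
            logging.error('attempt %d failed: %s', n, e)
            time.sleep(delay)
    raise SystemExit('could not connect')

def log_forever(host, port):
    s = connect_with_retry(host, port)
    while True:
        try:
            data = s.recv(1024)
        except IOError:
            data = b''
        if not data:                      # error or EOF: reconnect
            s.close()
            s = connect_with_retry(host, port)
            continue
        # store `data` in the database here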

How to make server accepting connections from multiple ports?

How can I make a simple server (simple as in: accept a connection and print whatever is received to the terminal) accept connections from multiple ports or a port range?
Do I have to use multiple threads, one for each bind call, or is there another solution?
The simple server can look something like this.
def server():
    import sys, os, socket

    port = 11116
    host = ''
    backlog = 5       # Number of clients on wait.
    buf_size = 1024

    try:
        listening_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listening_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listening_socket.bind((host, port))
        listening_socket.listen(backlog)
    except socket.error, (value, message):
        if listening_socket:
            listening_socket.close()
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        accepted_socket, adress = listening_socket.accept()
        data = accepted_socket.recv(buf_size)
        if data:
            accepted_socket.send('Hello, and goodbye.')
        accepted_socket.close()

server()
EDIT:
This is an example of how it can be done. Thanks everyone.
import socket, select

def server():
    import sys, os, socket

    port_wan = 11111
    port_mob = 11112
    port_sat = 11113
    sock_lst = []
    host = ''
    backlog = 5       # Number of clients on wait.
    buf_size = 1024

    try:
        for item in port_wan, port_mob, port_sat:
            sock_lst.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
            sock_lst[-1].setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock_lst[-1].bind((host, item))
            sock_lst[-1].listen(backlog)
    except socket.error, (value, message):
        if sock_lst[-1]:
            sock_lst[-1].close()
            sock_lst = sock_lst[:-1]
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        read, write, error = select.select(sock_lst, [], [])
        for r in read:
            for item in sock_lst:
                if r == item:
                    accepted_socket, adress = item.accept()
                    print 'We have a connection with ', adress
                    data = accepted_socket.recv(buf_size)
                    if data:
                        print data
                        accepted_socket.send('Hello, and goodbye.')
                    accepted_socket.close()

server()
I'm not a Python guy, but the function you are interested in is select. It will allow you to watch multiple sockets and break out when activity occurs on any one of them.
Here's a Python example that uses select.
Since Python's got so much overhead, multithreaded apps are a big point of debate. Then there's the whole blocking-operation-GIL issue too. Luckily, the Python motto of "If it seems like a big issue, someone's probably already come up with a solution (or several!)" holds true here. My favorite solution tends to be the microthread model, specifically gevent.
Gevent is an event-driven single-thread concurrency library that handles most issues for you out of the box via monkey-patching. gevent.monkey.patch_socket() is a function that replaces the normal socket calls with non-blocking variants, polling and sleeping to allow the switch to other greenlets as need be. If you want more control, or it's not cutting it for you, you can easily manage the switching with select and gevent's cooperative yield.
Here's a simple example.
import gevent
import socket
import gevent.monkey; gevent.monkey.patch_socket()

ALL_PORTS = [i for i in xrange(1024, 2048)]
MY_ADDRESS = "127.0.0.1"

def init_server_sock(port):
    try:
        s = socket.socket()
        s.setblocking(0)
        s.bind((MY_ADDRESS, port))
        s.listen(5)
        return s
    except Exception, e:
        print "Exception creating socket at port %i: %s" % (port, str(e))
        return False

def interact(port, sock):
    while 1:
        try:
            csock, addr = sock.accept()
        except:
            continue
        data = ""
        while not data:
            try:
                data = csock.recv(1024)
                print data
            except:
                gevent.sleep(0)   # this is the cooperative yield
        csock.send("Port %i got your message!" % port)
        csock.close()
        gevent.sleep(0)

def main():
    socks = {p: init_server_sock(p) for p in ALL_PORTS}
    greenlets = []
    for k, v in socks.items():
        if not v:
            socks.pop(k)
        else:
            greenlets.append(gevent.spawn(interact, k, v))
    # now we've got our sockets, let's start accepting
    gevent.joinall(greenlets)
That would be a super-simple, completely untested server serving plain text We got your message! on ports 1024-2048. Involving select is a little harder; you'd have to have a manager greenlet which calls select and then starts up the active ones; but that's not massively hard to implement.
Hope this helps! The nice part of the greenlet-based philosophy is that the select call is actually part of their hub module, as I recall, which will allow you to create a much more scalable and complex server more easily. It's pretty efficient too; there are a couple benchmarks floating around.
If you really wanted to be lazy (from a programmer standpoint, not an evaluation standpoint), you could set a timeout on your blocking read and just loop through all your sockets; if a timeout occurs, there wasn't any data available. Functionally, this is similar to what the select is doing, but it is taking that control away from the OS and putting it in your application.
Of course, this implies that as your sleep time gets smaller, your program will approach 100% CPU usage, so you wouldn't use it on a production app. It's fine for a toy though.
It would go something like this: (not tested)
def server():
    import sys, os, socket

    port = 11116
    host = ''
    backlog = 5       # Number of clients on wait.
    buf_size = 1024
    NUM_SOCKETS = 10
    START_PORT = 2000

    try:
        socket.setdefaulttimeout(0.5)   # raise a socket.timeout error after a half second
        listening_sockets = []
        for i in range(NUM_SOCKETS):
            listening_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listening_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listening_socket.bind((host, START_PORT + i))
            listening_socket.listen(backlog)
            listening_sockets.append(listening_socket)
    except socket.error, (value, message):
        if listening_socket:
            listening_socket.close()
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        for sock in listening_sockets:
            try:
                accepted_socket, adress = sock.accept()
                data = accepted_socket.recv(buf_size)
                if data:
                    accepted_socket.send('Hello, and goodbye.')
                accepted_socket.close()
            except socket.timeout:
                pass

server()

How to tell if a connection is dead in python

I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
Short answer:
use a non-blocking recv(), or a blocking recv() / select() with a very
short timeout.
Long answer:
The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.
TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close.
Of these, the timeout cannot really be detected; TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.
Also remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.
So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.
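For illustration (this snippet is not from the original answer), shutdown() is what produces those half-closed states:
# Sketch: half-closing a connection with shutdown(). After SHUT_WR the peer
# sees EOF on its reads, but we can still recv() whatever it sends back.
import socket

def send_then_drain(sock, payload):
    sock.sendall(payload)
    sock.shutdown(socket.SHUT_WR)       # close our outgoing stream only
    chunks = []
    while True:
        data = sock.recv(4096)          # incoming stream is still open
        if not data:                    # peer closed its side too
            break
        chunks.append(data)
    sock.close()
    return b''.join(chunks)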
Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().
Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered ?" To find that out, you just have to read all data that is currently bufferred.
I can see how "reading all buffered data", to get to the end of it, might be a problem for some people, that still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking".
In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.
What you can do is:
set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full
stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do
use select() call or asyncore module with a very short timeout. Error reporting is still system-specific.
For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.
I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:
receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.
perform a successful shutdown() and close() on the socket
The Python socket HOWTO says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something; good luck with that :)
Also, I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection either through close() or the process terminating, you'll find out by reading an end of file or getting a read error, usually with errno set to whatever 'connection reset by peer' is on your operating system. For Python, you'll read a zero-length string, or a socket.error will be thrown when you try to read or write from the socket.
From the link Jweede posted:
exception socket.timeout:
    This exception is raised when a timeout occurs on a socket which has had
    timeouts enabled via a prior call to settimeout(). The accompanying value
    is a string whose value is currently always "timed out".
Here are the demo server and client programs for the socket module from the python docs
# Echo server program
import socket

HOST = ''        # Symbolic name meaning all available interfaces
PORT = 50007     # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()
And the client:
# Echo client program
import socket

HOST = 'daring.cwi.nl'    # The remote host
PORT = 50007              # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
On the docs example page I pulled these from, there are more complex examples that employ this idea, but here is the simple answer:
Assuming you're writing the client program, just put all your code that uses the socket (when it is at risk of being dropped) inside a try block...
try:
    s.connect((HOST, PORT))
    s.send("Hello, World!")
    ...
except socket.timeout:
    # whatever you need to do when the connection is dropped
If I'm not mistaken this is usually handled via a timeout.
I translated the code sample in this blog post into Python: How to detect when the client closes the connection?, and it works well for me:
from ctypes import (
    CDLL, c_int, POINTER, Structure, c_void_p, c_size_t,
    c_short, c_ssize_t, c_char, ARRAY
)

__all__ = 'is_remote_alive',

class pollfd(Structure):
    _fields_ = (
        ('fd', c_int),
        ('events', c_short),
        ('revents', c_short),
    )

MSG_DONTWAIT = 0x40
MSG_PEEK = 0x02

EPOLLIN = 0x001
EPOLLPRI = 0x002
EPOLLRDNORM = 0x040

libc = CDLL('libc.so.6')

recv = libc.recv
recv.restype = c_ssize_t
recv.argtypes = c_int, c_void_p, c_size_t, c_int

poll = libc.poll
poll.restype = c_int
poll.argtypes = POINTER(pollfd), c_int, c_int

class IsRemoteAlive:   # not needed, only for debugging
    def __init__(self, alive, msg):
        self.alive = alive
        self.msg = msg

    def __str__(self):
        return self.msg

    def __repr__(self):
        return 'IsRemoteAlive(%r,%r)' % (self.alive, self.msg)

    def __bool__(self):
        return self.alive

def is_remote_alive(fd):
    fileno = getattr(fd, 'fileno', None)
    if fileno is not None:
        if hasattr(fileno, '__call__'):
            fd = fileno()
        else:
            fd = fileno

    p = pollfd(fd=fd, events=EPOLLIN | EPOLLPRI | EPOLLRDNORM, revents=0)
    result = poll(p, 1, 0)
    if not result:
        return IsRemoteAlive(True, 'empty')

    buf = ARRAY(c_char, 1)()
    result = recv(fd, buf, len(buf), MSG_DONTWAIT | MSG_PEEK)
    if result > 0:
        return IsRemoteAlive(True, 'readable')
    elif result == 0:
        return IsRemoteAlive(False, 'closed')
    else:
        return IsRemoteAlive(False, 'errored')
Trying to improve on kay's response, I made a more Pythonic version.
(Note that it was not yet tested in a "real-life" environment, and only on Linux)
This detects if the remote side closed the connection, without actually consuming the data:
import socket
import errno

def remote_connection_closed(sock: socket.socket) -> bool:
    """
    Returns True if the remote side did close the connection
    """
    try:
        buf = sock.recv(1, socket.MSG_PEEK | socket.MSG_DONTWAIT)
        if buf == b'':
            return True
    except BlockingIOError as exc:
        if exc.errno != errno.EAGAIN:
            # Raise on unknown exception
            raise
    return False
Here is a simple example from an asyncio echo server:
import asyncio

async def handle_echo(reader, writer):
    addr = writer.get_extra_info('peername')
    sock = writer.get_extra_info('socket')
    print(f'New client: {addr!r}')

    # Initial command from the client
    data = await reader.read(100)
    message = data.decode()
    print(f"Received {message!r} from {addr!r}")

    # Simulate a long async process
    for _ in range(10):
        if remote_connection_closed(sock):
            print('Remote side closed early')
            return
        await asyncio.sleep(1)

    # Write the initial message back
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)
    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')
    async with server:
        await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(main())
