What is a good way to communicate between two separate Python runtimes? Things I've tried:
reading/writing on named pipes e.g. os.mkfifo (feels hacky)
dbus services (worked on desktop, but too heavyweight for headless)
sockets (seems too low-level; surely there's a higher level module to use?)
My basic requirement is to be able to run python listen.py like a daemon, able to receive messages from python client.py. The client should just send a message to the existing process and terminate, with return code 0 for success and nonzero for failure (i.e., two-way communication is required).
The multiprocessing library provides listeners and clients that wrap sockets and allow you to pass arbitrary python objects.
Your server could listen to receive python objects:
from multiprocessing.connection import Listener

address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
listener = Listener(address, authkey=b'secret password')
conn = listener.accept()
print('connection accepted from', listener.last_accepted)
while True:
    msg = conn.recv()
    # do something with msg
    if msg == 'close':
        conn.close()
        break
listener.close()
Your client could send commands as objects:
from multiprocessing.connection import Client
address = ('localhost', 6000)
conn = Client(address, authkey=b'secret password')
conn.send('close')
# can also send arbitrary objects:
# conn.send(['a', 2.5, None, int, sum])
conn.close()
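To meet the return-code requirement from the question, the client can wait for a reply and map it to an exit status. A minimal sketch, assuming the server answers each message with an 'ok' string (the reply protocol is my assumption, not part of the multiprocessing API):

# client.py -- hypothetical reply protocol: the server sends back 'ok' on success
import sys
from multiprocessing.connection import Client

try:
    conn = Client(('localhost', 6000), authkey=b'secret password')
    conn.send('do something')
    reply = conn.recv()              # assumes the server replies to each message
    conn.close()
    sys.exit(0 if reply == 'ok' else 1)
except Exception:
    sys.exit(1)                      # nonzero exit code on any failure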
Nah, zeromq is the way to go. Delicious, isn't it?
import argparse
import zmq
parser = argparse.ArgumentParser(description='zeromq server/client')
parser.add_argument('--bar')
args = parser.parse_args()
if args.bar:
    # client
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.connect('tcp://127.0.0.1:5555')
    socket.send_string(args.bar)
    msg = socket.recv_string()
    print(msg)
else:
    # server
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind('tcp://127.0.0.1:5555')
    while True:
        msg = socket.recv_string()
        if msg == 'zeromq':
            socket.send_string('ah ha!')
        else:
            socket.send_string('...nah')
Based on @vsekhar's answer, here is a Python 3 version with more details and multiple connections:
Server
from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret password')
running = True
while running:
    conn = listener.accept()
    print('connection accepted from', listener.last_accepted)
    while True:
        msg = conn.recv()
        print(msg)
        if msg == 'close connection':
            conn.close()
            break
        if msg == 'close server':
            conn.close()
            running = False
            break
listener.close()
Client
from multiprocessing.connection import Client
import time
# Client 1
conn = Client(('localhost', 6000), authkey=b'secret password')
conn.send('foo')
time.sleep(1)
conn.send('close connection')
conn.close()
time.sleep(1)
# Client 2
conn = Client(('localhost', 6000), authkey=b'secret password')
conn.send('bar')
conn.send('close server')
conn.close()
From my experience, rpyc is by far the simplest and most elegant way to go about it.
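For readers who haven't used it, here is a minimal sketch of the rpyc pattern (the service name and port are arbitrary choices of mine): methods prefixed with exposed_ become remotely callable through conn.root.

# server.py
import rpyc
from rpyc.utils.server import ThreadedServer

class MessageService(rpyc.Service):
    def exposed_handle(self, msg):    # callable from the client as conn.root.handle
        print('got', msg)
        return 'ok'

ThreadedServer(MessageService, port=18861).start()

# client.py
import rpyc

conn = rpyc.connect('localhost', 18861)
print(conn.root.handle('hello'))      # prints 'ok'
conn.close()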
I would use sockets; local communication is heavily optimized, so you shouldn't have performance problems, and it gives you the ability to distribute your application across different physical nodes should the need arise.
With regard to the "low-level" approach, you're right. But you can always use a higher-level wrapper depending on your needs. XMLRPC could be a good candidate, but it may be overkill for the task you're trying to perform.
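As a rough illustration of that higher-level route, a sketch using the standard library's xmlrpc modules (module names as in Python 3; on Python 2 they were SimpleXMLRPCServer and xmlrpclib):

# server.py
from xmlrpc.server import SimpleXMLRPCServer

def handle(msg):
    print('got', msg)
    return True

server = SimpleXMLRPCServer(('localhost', 8000), allow_none=True)
server.register_function(handle)
server.serve_forever()

# client.py
import sys
from xmlrpc.client import ServerProxy

proxy = ServerProxy('http://localhost:8000/')
sys.exit(0 if proxy.handle('hello') else 1)   # exit 0 on success, 1 otherwise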
Twisted offers some good, simple protocol implementations, such as LineReceiver (for simple line-based messages) or the more elegant AMP (which was, by the way, standardized and implemented in different languages).
Check out a cross-platform library/server called RabbitMQ. Might be too heavy for two-process communication, but if you need multi-process or multi-codebase communication (with various different means, e.g. one-to-many, queues, etc), it is a good option.
Requirements:
$ pip install pika
$ pip install bson # for sending binary content
$ sudo apt-get install rabbitmq-server # Ubuntu; see the RabbitMQ installation instructions for other platforms
Publisher (sends data):
import pika, time, bson, os

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# 'exchange_type' is the pika 1.x spelling; older pika used type='fanout'
channel.exchange_declare(exchange='logs', exchange_type='fanout')

i = 0
while True:
    data = {'msg': 'Hello %s' % i, b'data': os.urandom(2), 'some': bytes(bytearray(b'\x00\x0F\x98\x24'))}
    channel.basic_publish(exchange='logs', routing_key='', body=bson.dumps(data))
    print("Sent", data)
    i = i + 1
    time.sleep(1)

connection.close()
Subscriber (receives data, can be multiple):
import pika, bson

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs', exchange_type='fanout')

result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='logs', queue=queue_name)

def callback(ch, method, properties, body):
    data = bson.loads(body)
    print("Received", data)

# keyword arguments follow the pika 1.x basic_consume signature
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()
Examples based on https://www.rabbitmq.com/tutorials/tutorial-two-python.html
I would use sockets, but use Twisted to give you some abstraction, and to make things easy. Their Simple Echo Client / Server example is a good place to start.
You would just have to combine the files and instantiate and run either the client or server depending on the passed argument(s).
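A rough sketch of what that combined file could look like (the argument handling is my own assumption; the protocol classes follow Twisted's standard Protocol API):

import sys
from twisted.internet import reactor, protocol

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)          # echo back to the client

class EchoClient(protocol.Protocol):
    def connectionMade(self):
        self.transport.write(b'hello')

    def dataReceived(self, data):
        print('Server said:', data)
        self.transport.loseConnection()
        reactor.stop()

if len(sys.argv) > 1 and sys.argv[1] == 'client':
    factory = protocol.ClientFactory()
    factory.protocol = EchoClient
    reactor.connectTCP('localhost', 8000, factory)
else:
    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
reactor.run()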
I want to introduce a zmq based logging into a Python program. As I was facing ZMQError: Address in use errors I decided to boil it down to a simple proof of concept. I was able to run the boiled down version, but not to receive any log entries. This is the code I used:
Log Publisher:
import time
import logging
from zmq.log import handlers as zmqHandler

logger = logging.getLogger('myapp')
logger.setLevel(logging.ERROR)
zmqH = zmqHandler.PUBHandler('tcp://127.0.0.1:12344')
logger.addHandler(zmqH)

for i in range(50):
    logger.error('error test...')
    print "Send error #%s" % (str(i))
    time.sleep(1)
Result
Send error #0
Send error #1
Send error #2
Send error #3
Send error #4
...
Log Subscriber:
import time
import zmq

def sub_client():
    port = "12344"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://127.0.0.1:%s" % port)
    # Generate 30 entries
    for i in range(30):
        print "Listening to publishers..."
        message = socket.recv()
        print "Received error #%s: %s" % (str(i), message)
        time.sleep(1)

sub_client()
Result
Listening to publishers...
So the subscriber blocks on the call to socket.recv(). I started the publisher and subscriber in different consoles. Both processes appear when I use netstat:
C:\>netstat -a -n -o | findstr 12344
TCP 127.0.0.1:12344 0.0.0.0:0 LISTEN 1336
TCP 127.0.0.1:12344 127.0.0.1:51937 ESTABLISHED 1336
TCP 127.0.0.1:51937 127.0.0.1:12344 ESTABLISHED 8624
I fail to see my mistake here, any ideas?
In addition to the problem at hand, how do I use this zmq listener in general?
Do I have to create one instance of the PUBHandler per process and then add it to all logger instances (logging.getLogger('myapp') creates its own logger instance, right?), or do I have to create a separate PUBHandler for each of the different classes I use? As the PUBHandler class has a createLock(), I assume that it is not thread-safe...
For completeness I want to mention the docs of the PUBHandler class.
I am using a python(x,y) distribution on Win7 with python 2.7.10 and pyzmq 14.7.0-14
[update]
I ruled out the Windows firewall as the source of the missing packets
The problem is on the subscriber side. Initially a subscriber filters out all messages, until a filter is set. Use the socket.setsockopt(opt, value) function to achieve this.
The pyZMQ description is not very clear about the use of this function:
getsockopt(opt) get default socket options for new sockets created by
this Context
But the documentation of the zmq_setsockopt function is quite clear (see here):
int zmq_setsockopt (void *socket, int option_name, const void *option_value, size_t option_len)
...
ZMQ_SUBSCRIBE: Establish message filter The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket.
Newly created ZMQ_SUB sockets shall filter out all incoming
messages, therefore you should call this option to establish an
initial message filter.
So the solution is to set a filter with socket.setsockopt(zmq.SUBSCRIBE, filter), where filter is the string you want to filter on. Use filter='' to show all messages. A filter like filter='ERROR' will only display the error messages and suppress all other levels like WARNING, INFO or DEBUG.
With this the sub_client() function looks like this:
import time
import zmq

def sub_client():
    port = "12344"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://127.0.0.1:%s" % port)
    socket.setsockopt(zmq.SUBSCRIBE, '')
    # Process 30 updates
    for i in range(30):
        print "Listening to publishers..."
        message = socket.recv()
        print "Received error #%s: %s" % (str(i), message)
        time.sleep(1)

sub_client()
I know it is an older post, but if someone lands here, this is what the subscriber looks like:
import time
import zmq

def sub_client():
    port = "12345"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:%s" % port)
    socket.subscribe("")
    # Process 30 updates
    for i in range(30):
        print("Listening to publishers...")
        message = socket.recv()
        print("Received error #%s: %s" % (str(i), message))
        time.sleep(1)

sub_client()
And the publisher looks like:
import zmq
import logging
import time
from zmq.log.handlers import PUBHandler
port = "12345"
context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:%s" % port)
handler = PUBHandler(pub)
logger = logging.getLogger()
logger.setLevel(logging.ERROR)
logger.addHandler(handler)
for i in range(50):
    logger.error('error test...')
    print("publish error", str(i))
    time.sleep(1)
I guess you missed setting up the PUB socket in the server.
Something like this should do:
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:12344')
zmqh = PUBHandler(publisher)
logger = logging.getLogger('myapp')
logger.setLevel(logging.ERROR)
logger.addHandler(zmqh)
I hope this helps.
This is my server program, how can it send the data received from each client to every other client?
import socket
import os
from threading import Thread
import thread
def listener(client, address):
    print "Accepted connection from: ", address
    while True:
        data = client.recv(1024)
        if not data:
            break
        else:
            print repr(data)
            client.send(data)
    client.close()

host = socket.gethostname()
port = 10016
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(3)
th = []
while True:
    print "Server is listening for connections..."
    client, address = s.accept()
    th.append(Thread(target=listener, args=(client, address)).start())
s.close()
If you need to send a message to all clients, you need to keep a collection of all clients in some way. For example:
import threading

clients = set()
clients_lock = threading.Lock()

def listener(client, address):
    print "Accepted connection from: ", address
    with clients_lock:
        clients.add(client)
    try:
        while True:
            data = client.recv(1024)
            if not data:
                break
            else:
                print repr(data)
                with clients_lock:
                    for c in clients:
                        c.sendall(data)
    finally:
        with clients_lock:
            clients.remove(client)
        client.close()
It would probably be clearer to factor parts of this out into separate functions, like a broadcast function that did all the sends.
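For instance, a hypothetical broadcast helper (my naming, not part of the original answer) could look like:

def broadcast(data, sender=None):
    # send data to every connected client, optionally skipping the sender
    with clients_lock:
        for c in clients:
            if c is not sender:
                c.sendall(data)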
Anyway, this is the simplest way to do it, but it has problems:
If one client has a slow connection, everyone else could bog down writing to it. And while they're blocking on their turn to write, they're not reading anything, so you could overflow the buffers and start disconnecting everyone.
If one client has an error, the client whose thread is writing to that client could get the exception, meaning you'll end up disconnecting the wrong user.
So, a better solution is to give each client a queue, and a writer thread servicing that queue, alongside the reader thread. (You can then extend this in all kinds of ways—put limits on the queue so that people stop trying to talk to someone who's too far behind, etc.)
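Here is a rough sketch of that design, assuming we change the clients set from above to hold (socket, queue) pairs; the None shutdown sentinel is my own convention:

import Queue  # 'queue' on Python 3

def writer(client, q):
    # drain this client's queue and write to the socket from a dedicated thread
    while True:
        data = q.get()
        if data is None:      # sentinel: time to shut down
            break
        client.sendall(data)

def listener(client, address):
    q = Queue.Queue()
    threading.Thread(target=writer, args=(client, q)).start()
    with clients_lock:
        clients.add((client, q))
    try:
        while True:
            data = client.recv(1024)
            if not data:
                break
            with clients_lock:
                for _, other_q in clients:
                    other_q.put(data)   # enqueue instead of blocking on send
    finally:
        with clients_lock:
            clients.remove((client, q))
        q.put(None)
        client.close()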
As Anzel points out, there's a different way to design servers besides using a thread (or two) per client: using a reactor that multiplexes all of the clients' events.
Python 3.x has some great libraries for this built in, but 2.7 only has the clunky and out-of-date asyncore/asynchat and the low-level select.
As Anzel says, Python SocketServer: sending to multiple clients has an answer using asyncore, which is worth reading. But I wouldn't actually use that. If you want to write a reactor-based server in Python 2.x, I'd either use a better third-party framework like Twisted, or find or write a very simple one that sits directly on select.
I took a look at the various ZMQ messaging patterns and I'm not sure which one would do for my project. All I want to do is be able to connect to a server and send a command (the client never receives anything). On the server side, I want to be able to check if there is a message, and if there is one, process it, else continue to do other stuff without blocking. That way, the server can continue to work even if there is no client connected.
#client.py
while True:
    select = raw_input()
    if select == "1":
        socket.send(msg1)
    elif select == "2":
        socket.send(msg2)
    ...

#server.py
while True:
    msg = socket.recv()  # should not block
    if msg == ...
        # do stuff
    # do other stuff
So which pattern should I use with ZMQ to do that? Example code would be appreciated.
First, since you want one-way communication with only one socket receiving messages, that generally means PUSH-PULL. Here is a version of the client:
import zmq

ctx = zmq.Context.instance()
s = ctx.socket(zmq.PUSH)
url = 'tcp://127.0.0.1:5555'
s.connect(url)

while True:
    msg = raw_input("msg > ")
    s.send(msg)
    if msg == 'quit':
        break
so a PUSH socket sends the messages we get from raw_input. It should be clear how to change that logic to generate the messages you want. As a bit of a bonus, if you type 'quit', both the client and the server will quit.
There are a variety of ways to do the non-blocking server, depending on the complexity of your application. I'll show a few examples, from the most basic one to the most powerful / extensible.
All of these server examples assume this at the top, setting up the server's PULL socket:
import time
import zmq
ctx = zmq.Context.instance()
s = ctx.socket(zmq.PULL)
url = 'tcp://127.0.0.1:5555'
s.bind(url)
The first example is a simple non-blocking recv, which raises a zmq.Again exception if there is no message ready to be received:
# server0.py
while True:
    try:
        msg = s.recv(zmq.NOBLOCK)  # note NOBLOCK here
    except zmq.Again:
        # no message to recv, do other things
        time.sleep(1)
    else:
        print("received %r" % msg)
        if msg == 'quit':
            break
But that pattern is pretty hard to extend beyond very simple cases. The second example uses a Poller to check for events on the socket:
# server1.py
poller = zmq.Poller()
poller.register(s)

while True:
    events = dict(poller.poll(0))
    if s in events:
        msg = s.recv()
        print("received %r" % msg)
        if msg == 'quit':
            break
    else:
        # no message to recv, do other things
        time.sleep(1)
In this toy example, this is very similar to the first. But, unlike the first, it is easy to extend to many sockets or events with further calls to poller.register, or passing a timeout other than zero to poller.poll.
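For instance, a sketch of that extension with a second socket and a 100 ms timeout (s2 and the handler functions are hypothetical):

# assumes s2 is a second zmq socket you have created and connected/bound
poller = zmq.Poller()
poller.register(s, zmq.POLLIN)
poller.register(s2, zmq.POLLIN)

while True:
    events = dict(poller.poll(100))   # wait up to 100 ms for activity
    if s in events:
        handle_command(s.recv())      # hypothetical handlers
    if s2 in events:
        handle_other(s2.recv())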
The last example uses an eventloop, and actually registers a callback for when messages arrive. You can build very complex applications with this sort of pattern, and it is a fairly straightforward way to write code that only does work when there is work to be done.
# server2.py
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

def print_msg(msg):
    print("received %r" % ' '.join(msg))
    if msg[0] == 'quit':
        ioloop.IOLoop.instance().stop()

# register the print_msg callback to be fired
# whenever there is a message on our socket
stream = ZMQStream(s)
stream.on_recv(print_msg)

# do other things in the meantime
tic = time.time()

def do_other_things():
    print("%.3f" % (time.time() - tic))

pc = ioloop.PeriodicCallback(do_other_things, 1000)
pc.start()

# start the eventloop
ioloop.IOLoop.instance().start()
So that's a few basic ways to deal with zmq messages without blocking. You can grab these examples together as a gist.
I am doing a client-server project for my college project;
we have to allocate the login to the client.
The client system will request its status every 2 seconds (to check whether the client is locked or unlocked), and the server will accept the client's request and reply with the client's status.
But the problem is that the server thread is not responding to the client's requests.
CLIENT THREAD:
def checkPort():
    while True:
        try:
            s = socket.socket()
            s.connect((host, port))
            s.send('pc1')            # send PC name to the server
            status = s.recv(1024)    # receive the status from the server
            if status == "unlock":
                disableIntrrupts()   # enable all the functions of system
            else:
                enableInterrupts()   # enable all the functions of system
            time.sleep(5)
            s.close()
        except Exception:
            pass
SERVER THREAD:
def check_port():
    while True:
        try:
            print "hello loop is repeating"
            conn, addr = s.accept()
            data = conn.recv(1024)
            if exit_on_click == 1:
                break
            if any(sublist[0] == data for sublist in available_sys):
                print "locked"
                conn.send("lock")
            elif any(sublist[0] == data for sublist in occupied_sys):
                conn.send("unlock")
                print "unlocked"
            else:
                print "added to gui for first time"
                available_sys.append([data, addr[0], nameText, usnText, branchText])
                availSysList.insert('end', data)
        except Exception:
            pass
But my problem is that the server thread does not execute more than two times, so it is unable to accept client requests more than once.
Can't we handle multiple client sockets using a single server socket?
How do I handle multiple client requests from the server?
Thanks for any help!
It's because your server will block waiting for a new connection on this line
conn, addr = s.accept()
This is because calls like .accept() and .recv() are blocking calls that hold up the process.
You need to consider an alternative design, wherein you either:
Have one process per connection (this idea is stupid)
Have one thread per connection (this idea is less stupid than the first but still mostly foolish)
Have a non-blocking design that allows multiple clients and read/write without blocking execution (see the sketch after the list below).
To achieve the first, look at multiprocessing; the second is threading; the third is slightly more complicated to get your head around but will yield the best results. The go-to library for event-driven code in Python is twisted, but there are others like
gevent
tulip
tornado
And so many more that I haven't listed here.
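To make the third option concrete, here is a minimal select()-based sketch (a simplified, assumed design, not your full application; the 'unlock' reply is a placeholder):

import select
import socket

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 10016))
server.listen(5)

sockets = [server]
while True:
    # block until at least one socket is readable; no per-client blocking reads
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()
            sockets.append(conn)
        else:
            data = sock.recv(1024)
            if not data:                 # client disconnected
                sockets.remove(sock)
                sock.close()
            else:
                sock.send(b'unlock')     # placeholder reply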
here's a full example of implementing a threaded server. it's fully functional and comes with the benefit of using SSL as well. further, i use threaded event objects to signal another class object after storing my received data in a database.
please note, _sni and _cams_db are additional modules purely of my own. if you want to see the _sni module (provides SNI support for pyOpenSSL), let me know.
what follows this is a snippet from camsbot.py; there's a whole lot more that far exceeds the scope of this question. what i've built is a centralized message relay system. it listens on tcp/2345 and accepts SSL connections. each connection passes messages into the system. short lived connections will connect, pass a message, and disconnect. long lived connections will pass numerous messages after connecting. messages are stored in a database and a threading.Event() object (attached to the DB class) is set to tell the bot to poll the database for new messages and relay them.
the below example shows
how to set up a threaded tcp server
how to pass information from the listener to the accept handler such as config data and etc
in addition, this example also shows
how to employ an SSL socket
how to do some basic certificate validations
how to cleanly wrap and unwrap SSL from a tcp socket
how to use poll() on the socket instead of select()
db.pending is a threading.Event() object in _cams_db.py
in the main process we start another thread that waits on the pending object with db.pending.wait(). this makes that thread wait until another thread does db.pending.set(). once it is set, our waiting thread immediately wakes up and continues to work. when our waiting thread is done, it calls db.pending.clear() and goes back to the beginning of the loop and starts waiting again with db.pending.wait()
while True:
    db.pending.wait()
    # after waking up, do code. for example, we wait for incoming messages to
    # be stored in the database. the threaded server will call db.pending.set()
    # which will wake us up. we'll poll the DB for new messages, relay them, clear
    # our event flag and go back to waiting.
    # ...
    db.pending.clear()
snippet from camsbot.py:
import sys, os, time, datetime, threading, select, logging, logging.handlers
import configparser, traceback, re, socket, hashlib
# local .py
sys.path.append('/var/vse/python')
import _util, _webby, _sni, _cams_db, _cams_threaded_server, _cams_bot
# ...
def start_courier(config):
    # default values
    host = '::'
    port = 2345

    configp = config['configp']
    host = configp.get('main', 'relay msp hostport')

    # require ipv6 addresses be specified in [xx:xx:xx] notation, therefore
    # it is safe to look for :nnnn at the end
    if ':' in host and not host.endswith(']'):
        port = host.split(':')[-1]
        try:
            port = int(port, 10)
        except:
            port = 2345
        host = host.split(':')[:-1][0]

    server = _cams_threaded_server.ThreadedTCPServer((host, port), _cams_threaded_server.ThreadedTCPRequestHandler, config)
    t = threading.Thread(target=server.serve_forever, name='courier')
    t.start()
_cams_threaded_server.py:
import socket, socketserver, select, datetime, time, threading
import sys, struct
from OpenSSL.SSL import SSLv23_METHOD, SSLv3_METHOD, TLSv1_METHOD, OP_NO_SSLv2
from OpenSSL.SSL import VERIFY_NONE, VERIFY_PEER, VERIFY_FAIL_IF_NO_PEER_CERT, Context, Connection
from OpenSSL.SSL import FILETYPE_PEM
from OpenSSL.SSL import WantWriteError, WantReadError, WantX509LookupError, ZeroReturnError, SysCallError
from OpenSSL.crypto import load_certificate
from OpenSSL import SSL
# see note at beginning of answer
import _sni, _cams_db
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass, config):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET6
        self.connected = []
        self.logger = config['logger']
        self.config = config
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sc = Context(TLSv1_METHOD)
        sc.set_verify(VERIFY_PEER | VERIFY_FAIL_IF_NO_PEER_CERT, _sni.verify_cb)
        sc.set_tlsext_servername_callback(_sni.pick_certificate)
        self.sc = sc
        self.server_bind()
        self.server_activate()
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        config = self.server.config
        logger = self.server.logger
        connected = self.server.connected
        sc = self.server.sc

        try:
            self.peer_hostname = socket.gethostbyaddr(socket.gethostbyname(self.request.getpeername()[0]))[0]
        except:
            self.peer_hostname = '!' + self.request.getpeername()[0]
        logger.info('peer: {}'.format(self.peer_hostname))

        ssl_s = Connection(sc, self.request)
        ssl_s.set_accept_state()
        try:
            ssl_s.do_handshake()
        except:
            t, v, tb = sys.exc_info()
            logger.warn('handshake failed {}'.format(v))
        ssl_s.setblocking(True)
        self.ssl_s = ssl_s

        try:
            peercert = ssl_s.get_peer_certificate()
        except:
            peercert = False
            t, v, tb = sys.exc_info()
            logger.warn('SSL get peer cert failed: {}'.format(v))
        if not peercert:
            logger.warn('No peer certificate')
        else:
            acl = config['configp']['main'].get('client cn acl', '').split(' ')
            cert_subject = peercert.get_subject().CN
            logger.info('Looking for {} in acl: {}'.format(cert_subject, acl))
            if cert_subject in acl:
                logger.info('{} is permitted'.format(cert_subject))
            else:
                logger.warn('''client CN not approved''')

        # it's ok to block here, every socket has its own thread
        ssl_s.setblocking(True)

        self.db = config['db']
        msgcount = 0

        p = select.poll()
        # don't want writable, just readable
        p.register(self.request, select.POLLIN | select.POLLPRI | select.POLLERR | select.POLLHUP | select.POLLNVAL)

        peername = ssl_s.getpeername()
        x = peername[0]
        if x.startswith('::ffff:'):
            x = x[7:]
        peer_ip = x
        try:
            host = socket.gethostbyaddr(x)[0]
        except:
            host = peer_ip
        logger.info('{}/{}:{} connected'.format(host, peer_ip, peername[1]))
        connected.append([host, peername[1]])
        if peercert:
            threading.current_thread().setName('{}/port={}/CN={}'.format(host, peername[1], peercert.get_subject().CN))
        else:
            threading.current_thread().setName('{}/port={}'.format(host, peername[1]))

        sockclosed = False
        while not sockclosed:
            keepreading = True
            #logger.debug('starting 30 second timeout for poll')
            pe = p.poll(30.0)
            if not pe:
                # empty list means poll timeout
                # for SSL sockets it means WTF. we get an EAGAIN like return even if the socket is blocking
                continue
            logger.debug('poll indicates: {}'.format(pe))
            #define SSL_NOTHING 1
            #define SSL_WRITING 2
            #define SSL_READING 3
            #define SSL_X509_LOOKUP 4
            while keepreading and not sockclosed:
                data, sockclosed, keepreading = self._read_ssl_data(2, head=True)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                plen = struct.unpack('H', data)[0]
                data, sockclosed, keepreading = self._read_ssl_data(plen)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                # send thank you, ignore any errors since we appear to have gotten
                # the message
                try:
                    self.ssl_s.sendall(b'ty')
                except:
                    pass
                # extract the timestamp
                message_ts = data[0:8]
                msgtype = chr(data[8])
                message = data[9:].decode()
                message_ts = struct.unpack('d', message_ts)[0]
                message_ts = datetime.datetime.utcfromtimestamp(message_ts).replace(tzinfo=datetime.timezone.utc)
                self.db.enqueue(config['group'], peer_ip, msgtype, message, message_ts)
                self.db.pending.set()

        # we're recommended to use the return socket object for any future operations rather than the original
        try:
            s = ssl_s.unwrap()
            s.close()
        except:
            pass
        connected.remove([host, peername[1]])
        t_name = threading.current_thread().getName()
        logger.debug('disconnect: {}'.format(t_name))
    def _read_ssl_data(self, wantsize=16384, head=False):
        _w = ['WANT_NOTHING', 'WANT_READ', 'WANT_WRITE', 'WANT_X509_LOOKUP']
        logger = self.server.logger
        data = b''
        sockclosed = False
        keepreading = True
        while len(data) < wantsize and keepreading and not sockclosed:
            rlen = wantsize - len(data)
            try:
                w, wr = self.ssl_s.want(), self.ssl_s.want_read()
                #logger.debug(' want({}) want_read({})'.format(_w[w],wr))
                x = self.ssl_s.recv(rlen)
                #logger.debug(' recv(): {}'.format(x))
                if not (x or len(x)):
                    raise ZeroReturnError
                data += x
                if not (len(x) == len(data) == wantsize):
                    logger.info(' read={}, len(data)={}, plen={}'.format(len(x), len(data), wantsize))
            except WantReadError:
                # poll(), when ready, read more
                keepreading = False
                logger.info(' got WantReadError')
                continue
            except WantWriteError:
                # poll(), when ready, write more
                keepreading = False
                logger.info(' got WantWriteError')
                continue
            except ZeroReturnError:
                # socket got closed, a '0' bytes read also means the same thing
                keepreading = False
                sockclosed = True
                logger.info(' ZRE, socket closed normally')
                continue
            except SysCallError:
                keepreading = False
                sockclosed = True
                t, v, tb = sys.exc_info()
                if v.args[0] == -1:  # normal EOF
                    logger.info(' EOF found, keepreading=False')
                else:
                    logger.info('{} terminated session abruptly while reading plen'.format(self.peer_hostname))
                    logger.info('t: {}'.format(t))
                    logger.info('v: {}'.format(v))
                continue
            except:
                t, v, tb = sys.exc_info()
                logger.warning(' fucked? {}'.format(v))
                raise
        if not head and not len(data) == wantsize:
            logger.warn(' short read {} of {}'.format(len(data), wantsize))
        return data, sockclosed, keepreading
let's start with a bare bones threaded tcp server.
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is your accepted socket, do all your .recv() and .send() on it
        s = self.request
        request = s.recv(1024)
        # decide locked or unlocked. this example arbitrarily writes back 'locked'
        s.send(b'locked')
        # we're done, close the socket and exit with a default return of None
        s.close()
ok, start your threaded server with this in your main() function:
server = ThreadedTCPServer(('127.0.0.1', 1234), ThreadedTCPRequestHandler)
t = threading.Thread(target=server.serve_forever, name='optional_name')
t.start()
now you can let the threading module handle the semantics of concurrency and not worry about it.
You might want to take a look at 0MQ and concurrent.futures. 0MQ has a Tornado event loop in the library and it reduces the complexity of socket programming. concurrent.futures is a high level interface over threading or multiprocessing.
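A tiny sketch of the concurrent.futures angle, with a thread pool handling accepted connections (the echo handler is hypothetical):

import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    # hypothetical per-connection handler: echo one request back
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 1234))
server.listen(5)

with ThreadPoolExecutor(max_workers=8) as pool:
    while True:
        conn, addr = server.accept()
        pool.submit(handle, conn, addr)   # the pool bounds concurrency for us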
You can see different concurrent server approaches at
https://bitbucket.org/arco_group/upper/src
These will help you choose the best approach for you.
Cheers
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
Short answer:
use a non-blocking recv(), or a blocking recv() / select() with a very
short timeout.
Long answer:
The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.
TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close.
Of these, a timeout cannot really be detected; TCP can only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.
Also remember that, using shutdown(), either you or your peer (the other end of the connection) may close only the incoming byte stream and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.
So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.
Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().
Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered ?" To find that out, you just have to read all data that is currently bufferred.
I can see how "reading all buffered data", to get to the end of it, might be a problem for some people who still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking".
In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.
What you can do is:
set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full
stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do (see the sketch after this list)
use a select() call or the asyncore module with a very short timeout. Error reporting is still system-specific.
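A minimal sketch of the short-timeout "ping" idea (the 50 ms timeout is an arbitrary choice of mine):

import socket

def ping_socket(sock):
    # Return False if the peer has closed; True if the connection looks alive.
    sock.settimeout(0.05)                      # very short, arbitrary timeout
    try:
        data = sock.recv(1, socket.MSG_PEEK)   # peek so we don't consume data
        return data != b''                     # b'' means the peer shut down
    except socket.timeout:
        return True                            # nothing to read, still connected
    except OSError:
        return False                           # reset or other hard error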
For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.
I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:
receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.
perform a successful shutdown() and close() on the socket
The python socket howto says send() will return 0 bytes written if channel is closed. You may use a non-blocking or a timeout socket.send() and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something, good luck with that :)
I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB is not what you meant.
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection, either through close() or the process terminating, you'll find out by reading an end-of-file or getting a read error, usually with errno set to whatever 'connection reset by peer' is on your operating system. In Python, you'll read a zero-length string, or a socket.error will be thrown when you try to read or write from the socket.
From the link Jweede posted:
exception socket.timeout:
This exception is raised when a timeout occurs on a socket
which has had timeouts enabled via a prior call to settimeout().
The accompanying value is a string whose value is currently
always “timed out”.
Here are the demo server and client programs for the socket module from the python docs
# Echo server program
import socket

HOST = ''        # Symbolic name meaning all available interfaces
PORT = 50007     # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print('Connected by', addr)
while True:
    data = conn.recv(1024)
    if not data:
        break
    conn.send(data)
conn.close()
And the client:
# Echo client program
import socket

HOST = 'daring.cwi.nl'    # The remote host
PORT = 50007              # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send(b'Hello, world')
data = s.recv(1024)
s.close()
print('Received', repr(data))
On the docs example page I pulled these from, there are more complex examples that employ this idea, but here is the simple answer:
Assuming you're writing the client program, just put all the code that uses the socket while it is at risk of being dropped inside a try block...
try:
    s.connect((HOST, PORT))
    s.send(b"Hello, World!")
    ...
except socket.timeout:
    pass  # whatever you need to do when the connection is dropped
If I'm not mistaken this is usually handled via a timeout.
I translated the code sample in this blog post into Python: How to detect when the client closes the connection?, and it works well for me:
from ctypes import (
    CDLL, c_int, POINTER, Structure, c_void_p, c_size_t,
    c_short, c_ssize_t, c_char, ARRAY
)

__all__ = 'is_remote_alive',

class pollfd(Structure):
    _fields_ = (
        ('fd', c_int),
        ('events', c_short),
        ('revents', c_short),
    )

MSG_DONTWAIT = 0x40
MSG_PEEK = 0x02

EPOLLIN = 0x001
EPOLLPRI = 0x002
EPOLLRDNORM = 0x040

libc = CDLL('libc.so.6')

recv = libc.recv
recv.restype = c_ssize_t
recv.argtypes = c_int, c_void_p, c_size_t, c_int

poll = libc.poll
poll.restype = c_int
poll.argtypes = POINTER(pollfd), c_int, c_int

class IsRemoteAlive:  # not needed, only for debugging
    def __init__(self, alive, msg):
        self.alive = alive
        self.msg = msg

    def __str__(self):
        return self.msg

    def __repr__(self):
        return 'IsRemoteAlive(%r,%r)' % (self.alive, self.msg)

    def __bool__(self):
        return self.alive

def is_remote_alive(fd):
    fileno = getattr(fd, 'fileno', None)
    if fileno is not None:
        if hasattr(fileno, '__call__'):
            fd = fileno()
        else:
            fd = fileno

    p = pollfd(fd=fd, events=EPOLLIN | EPOLLPRI | EPOLLRDNORM, revents=0)
    result = poll(p, 1, 0)
    if not result:
        return IsRemoteAlive(True, 'empty')

    buf = ARRAY(c_char, 1)()
    result = recv(fd, buf, len(buf), MSG_DONTWAIT | MSG_PEEK)
    if result > 0:
        return IsRemoteAlive(True, 'readable')
    elif result == 0:
        return IsRemoteAlive(False, 'closed')
    else:
        return IsRemoteAlive(False, 'errored')
Trying to improve on @kay's response, I made a more Pythonic version.
(Note that it has not yet been tested in a "real-life" environment, and only on Linux.)
This detects if the remote side closed the connection, without actually consuming the data:
import socket
import errno
def remote_connection_closed(sock: socket.socket) -> bool:
    """
    Returns True if the remote side did close the connection
    """
    try:
        buf = sock.recv(1, socket.MSG_PEEK | socket.MSG_DONTWAIT)
        if buf == b'':
            return True
    except BlockingIOError as exc:
        if exc.errno != errno.EAGAIN:
            # Raise on unknown exception
            raise
    return False
Here is a simple example from an asyncio echo server:
import asyncio

async def handle_echo(reader, writer):
    addr = writer.get_extra_info('peername')
    sock = writer.get_extra_info('socket')
    print(f'New client: {addr!r}')

    # Initial of client command
    data = await reader.read(100)
    message = data.decode()
    print(f"Received {message!r} from {addr!r}")

    # Simulate a long async process
    for _ in range(10):
        if remote_connection_closed(sock):
            print('Remote side closed early')
            return
        await asyncio.sleep(1)

    # Write the initial message back
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)
    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')
    async with server:
        await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(main())