I have a sample client-server program that does non-blocking I/O for several sockets without using processes or threads; it uses select. Unfortunately, the server just shows lots of blank lines and that's all. Where is the mistake?
Running on MacOS.
Thanks in advance.
Server:
import socket
import select

sock = socket.socket()
sock.bind(('', 10001))
sock.listen()

conn1, _ = sock.accept()
conn2, _ = sock.accept()
conn1.setblocking(0)
conn2.setblocking(0)

epoll = select.poll()
epoll.register(conn1.fileno(), select.POLLIN | select.POLLOUT)
epoll.register(conn2.fileno(), select.POLLIN | select.POLLOUT)

conn_map = {
    conn1.fileno(): conn1,
    conn2.fileno(): conn2,
}

while True:
    events = epoll.poll(1)
    for fileno, event in events:
        if event & select.POLLIN:
            data = conn_map[fileno].recv(1024)
            print(data.decode('utf8'))
        elif event & select.POLLOUT:
            conn_map[fileno].send('ping'.encode('utf8'))
Client:
import socket
from multiprocessing import Pool

def create_socket_and_send_data(number):
    with socket.create_connection(('127.0.0.1', 10001)) as sock:
        try:
            sock.sendall(f'client {number}\n'.encode('utf8'))
        except socket.error as ex:
            print('data sending error', ex)
    print(f'data for {number} has been sent')

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        pool.map(create_socket_and_send_data, range(2))
Unfortunately, the server just shows lots of blank lines and that's all.
Actually this is not true.
At the beginning the server does print the lines it got from the clients. After they have sent these lines, the clients close their connections, which means that select.POLLIN is triggered again on the socket and recv returns empty data.
This empty data is the sign that the peer has closed the connection. Once it gets this sign, the server should close the connection to the client and unregister the fileno from the poll object. Instead, your server prints the empty string with a newline and continues to expect new POLLIN events. These will come again and again and will always deliver an empty buffer, which leads to all the empty lines you see.
select is paradoxically easier to use for input than for output. For input, you receive an event each time new data arrives on a socket, so you always ask for all the sockets and have something to process for every new event.
For output, select will just say that a socket is ready to accept new data, which is almost always true except right after you have filled its buffer. So you should only poll a socket for output when you actually have something to write to it.
In practice, register your sockets with select.POLLIN only. For the write part, either write to a socket directly without polling, if you can assume that the peer is always able to receive, or keep a queue of pending output per socket, switch the socket's registration to select.POLLIN | select.POLLOUT while there is something in its queue, and switch it back to select.POLLIN when the queue is empty again. A sketch of that approach follows.
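To make that concrete, here is a minimal sketch of such a loop. It reuses the two-connection setup from your question, but the poller, out_queues and queue_send names and the clean-up on empty reads are illustrative additions, not part of your original code:

import socket
import select

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', 10001))
sock.listen()

conn1, _ = sock.accept()
conn2, _ = sock.accept()

poller = select.poll()
conn_map = {}
out_queues = {}

for conn in (conn1, conn2):
    conn.setblocking(False)
    conn_map[conn.fileno()] = conn
    out_queues[conn.fileno()] = b''
    poller.register(conn.fileno(), select.POLLIN)      # POLLIN only by default

def queue_send(fileno, payload):
    # add data to a socket's output queue and start watching it for writability
    out_queues[fileno] += payload
    poller.modify(fileno, select.POLLIN | select.POLLOUT)

queue_send(conn1.fileno(), b'ping')
queue_send(conn2.fileno(), b'ping')

while conn_map:
    for fileno, event in poller.poll(1000):
        conn = conn_map[fileno]
        if event & (select.POLLIN | select.POLLHUP):
            data = conn.recv(1024)
            if data:
                print(data.decode('utf8'))
            else:
                # empty read: the peer closed the connection, so clean up
                poller.unregister(fileno)
                conn.close()
                del conn_map[fileno]
                del out_queues[fileno]
                continue
        if event & select.POLLOUT and out_queues[fileno]:
            sent = conn.send(out_queues[fileno])
            out_queues[fileno] = out_queues[fileno][sent:]
            if not out_queues[fileno]:
                poller.modify(fileno, select.POLLIN)   # queue drained: back to POLLIN only

The key point is that POLLOUT is only requested while a socket's queue is non-empty, and a socket is unregistered and closed as soon as recv returns empty data.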
I'm trying to write a simple daemon that listens for orders on a Unix socket. The following works, but the connection.recv(1024) line blocks, meaning I can't kill the server gracefully:
import socket, os
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
    server.bind("/tmp/sock")
    server.listen()
    connection, __ = server.accept()
    with connection:
        while True:
            data = connection.recv(1024)
            print("Hi!")  # This line isn't executed 'til data is sent
            if data:
                print(data.decode())
Ideally, I'd like to place all of this inside a Thread that checks a self.should_stop property every self.LOOP_TIME seconds, and if that value is set to True, then exit. However, as that .recv() line blocks, there's no way for my program to be doing anything other than waiting at any given time.
Surely there's a proper way to do this, but as I'm new to sockets, I have no idea what that is.
Edit
Jeremy Friesner's answer put me on the right track. I realised that I could allow the thread to block and simply set .should_stop, then pass a b"" to the socket so that it'd un-block, see that it should stop, and then exit cleanly. Here's the end result:
import os
import socket
from pathlib import Path
from shutil import rmtree
from threading import Thread

class MyThreadThing(Thread):

    RUNTIME_DIR = Path(os.getenv("XDG_RUNTIME_DIR", "/tmp")) / "my-project-name"

    def __init__(self):
        super().__init__(daemon=True)
        self.should_stop = False
        if self.RUNTIME_DIR.exists():
            rmtree(self.RUNTIME_DIR)
        self.RUNTIME_DIR.mkdir(0o700)
        self.socket_path = self.RUNTIME_DIR / "my-project.sock"

    def run(self) -> None:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.bind(self.socket_path.as_posix())
            s.listen()
            while True:
                connection, __ = s.accept()
                action = ""
                with connection:
                    while True:
                        received = connection.recv(1024).decode()
                        action += received
                        if not received:
                            break
                # Handle whatever is in `action`
                if self.should_stop:
                    break
        self.socket_path.unlink()

    def stop(self):
        """
        Trigger this when you want to stop the listener.
        """
        self.should_stop = True
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(self.socket_path.as_posix())
            s.send(b"")
Using arbitrary-length timeouts is always a bit unsatisfactory -- either you set the timeout-value to a relatively long time, in which case your program becomes slow to react to the quit-request, because it is pointlessly waiting for timeout period to expire... or you set the timeout-value to a relatively short time, in which case your program is constantly waking up to see if it should quit, wasting CPU power 24/7 to check for an event which might never arrive.
A more elegant way to deal with the problem is to create a pipe, and send a byte on the pipe when you want your event-loop to exit. Your event loop can simultaneously "watch" both the pipe's reading-end file-descriptor and your networking-socket(s) via select(), and when that file-descriptor indicates it is ready-for-read, your event loop can respond by exiting. This approach is entirely event-driven, so it requires no CPU wakeups except when there is actually something to do.
Below is an example version of your program that implements a signal-handler for SIGINT (aka pressing Control-C) that sends the please-quit-now byte on the pipe:
import socket, os
import select
import signal, sys

# Any bytes written to (writePipeFD) will become available for reading on (readPipeFD)
readPipeFD, writePipeFD = os.pipe()

# Set up a signal-handler to handle SIGINT (aka Ctrl+C) events by writing a byte to the pipe
def signal_handler(sig, frame):
    print("signal_handler() is executing -- SIGINT detected!")
    os.write(writePipeFD, b"\0")  # doesn't matter what we write; a single 0-byte will do

signal.signal(signal.SIGINT, signal_handler)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as serverSock:
    serverSock.bind("/tmp/sock")
    serverSock.listen()

    # Wait for incoming connection (or the please-quit signal, whichever comes first)
    connection = None
    while True:
        readReady, writeReady, exceptReady = select.select([readPipeFD, serverSock], [], [])
        if readPipeFD in readReady:
            print("accept-loop: Someone wrote a byte to the pipe; time to go away!")
            break
        if serverSock in readReady:
            connection, __ = serverSock.accept()
            break

    # Read data from incoming connection (or the please-quit signal, whichever comes first)
    if connection:
        with connection:
            while True:
                readReady, writeReady, exceptReady = select.select([readPipeFD, connection], [], [])
                if readPipeFD in readReady:
                    print("Connection-loop: Someone wrote a byte to the pipe; time to go away!")
                    break
                if connection in readReady:
                    data = connection.recv(1024)
                    print("Hi!")  # This line isn't executed 'til data is sent
                    if data:
                        print(data.decode())
                    else:
                        break  # empty read: the client closed the connection

print("Bye!")
Use a timeout identical to your LOOP_TIME like so:
import socket, os

LOOP_TIME = 10
should_stop = False

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
    server.bind("/tmp/sock")
    server.listen()
    connection, __ = server.accept()
    connection.settimeout(LOOP_TIME)
    with connection:
        while not should_stop:
            try:
                data = connection.recv(1024)
            except socket.timeout:
                continue
            print("Hi!")  # This line isn't executed 'til data is sent
            if data:
                print(data.decode())
You may use select, but if it's only a single simple socket, this way is a bit less complicated.
You can choose to place it in a different thread with a self.should_stop flag, or just run it in the main thread - it will then respond to KeyboardInterrupt.
I am writing a simple client/server socket program where clients connect to the server and communicate with it, then send an exit message, after which the server closes the connection. The code looks like this.
server.py
import socket
import sys
from threading import Thread

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # This is to prevent the socket going into TIME_WAIT status and OSError
    # "Address already in use"
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
except socket.error as e:
    print('Error occurred while creating the socket {}'.format(e))

server_address = ('localhost', 50000)
sock.bind(server_address)
print('**** Server started on {}:{} ****'.format(*server_address))
sock.listen(5)

def client_thread(conn_sock, client_add):
    while True:
        client_msg = conn_sock.recv(1024).decode()
        if client_msg.lower() != 'exit':
            print('[{0}:{1}] {2}'.format(*client_add, client_msg))
            serv_reply = 'Okay ' + client_msg.upper()
            conn_sock.send(bytes(serv_reply, 'utf-8'))
        else:
            conn_sock.close()
            print('{} exited !!'.format(client_add[0]))
            sys.exit()

try:
    # Keep the server running to handle incoming connections
    while True:
        # Wait for connections to accept
        conn_sock, client_add = sock.accept()
        print('Received connection from {}:{}'.format(*client_add))
        conn_sock.send(
            bytes('***** Welcome to {} *****'.format(server_address[0]), 'utf-8'))
        Thread(target=client_thread, args=(
            conn_sock, client_add), daemon=True).start()
except Exception as e:
    print('Some error occurred \n {}'.format(e))
except KeyboardInterrupt as e:
    print('Program execution cancelled by user')
    conn_sock.send(b'exit')
    sys.exit(0)
finally:
    sock.close()
client.py
import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 50000)
print('Connecting to {} on {}'.format(*server_address))
sock.connect(server_address)

def exiting(host=''):
    print('{} exited !!'.format(host))
    sys.exit()

while True:
    serv_msg = sock.recv(1024).decode()
    if serv_msg.lower() != 'exit':
        print('{1}: {0}'.format(serv_msg, server_address[0]))
        client_reply = input('You: ')
        sock.send(bytes(client_reply, 'utf-8'))
        if client_reply.lower() == 'exit':
            exiting()
    else:
        exiting('Server')
What I want is that if the server exits, either through Ctrl-C or any other way, all client sockets are closed and a message is sent to the clients, upon which they should close their sockets as well.
I am doing the following in the except section, but for some reason the message sent by the server is never received by the client.
except KeyboardInterrupt as e:
    print('Program execution cancelled by user')
    conn_sock.send(b'exit')
    sys.exit(0)
Surprisingly, if I send the 'exit' message from client_thread as serv_reply, the client accepts the message and closes its socket on its end just fine. So I am not sure why the server is not able to send the same message in the except section of the code shown above.
I'm sorry to say that abnormal termination of TCP/IP connections is undetectable unless you try to send data through the connection.
This is known as a "Half Open" socket and it's also mentioned in the Python documentation.
Usually, when a server process crashes, the OS will close TCP/IP sockets, signaling the client about the closure.
When a client receives the signal, the server's termination can be detected while polling. The polling mechanism (i.e. poll / epoll / kqueue) will test for the HUP (hung up) event.
This is why "Half Open" sockets don't happen in development unless the issue is forced. When both the client and the server run on the same machine, the OS will send the signal about the closure.
But if the server computer crashes, or connectivity is lost (i.e. mobile devices), no such signal is sent and the client never knows.
The only way to detect an abnormal termination is a failed write attempt; a read will not detect the issue (it will act as if no data was received).
This is why they invented the ping concept and this is why HTTP/1.1 servers and clients (that don't support pings) use timeouts to assume termination.
There's a good blog post about Half Open sockets here.
EDIT (clarifications due to comments)
How to handle the situation:
I would recommend the following:
Add an explicit Ping message (or an Empty/NULL message) to your protocol (the messages understood by both the clients and the server).
Monitor the socket for inactivity by recording each send or recv operation.
Add timeout monitoring to your code. This means that you will need to implement polling, such as select (or poll or the OS specific epoll/kqueue), instead of blocking on recv.
When connection timeout is reached, send the Ping / empty message.
For an easy solution, reset the timeout after sending the Ping.
The next time you poll the socket, the polling mechanism should alert you about the failed connection. Alternatively, the second time you try to ping the server/client you will get an error message.
Note that the first send operation might succeed even though the connection was lost.
This is because the TCP/IP layer sends the message but the send function doesn't wait for the TCP/IP's ACK confirmation.
However, by the second time you get to the ping, the TCP/IP layer would have probably realized that no ACK is coming and registered the error in the socket (this takes time).
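Putting the recommendations above together, here is a rough sketch of an inactivity-timeout loop built on select and a ping probe; the PING_INTERVAL value, the b"PING" payload and the function name are assumptions made for the example, not part of any standard protocol:

import select
import time

PING_INTERVAL = 30  # assumed: seconds of inactivity before probing the peer

def monitored_recv_loop(sock):
    last_activity = time.monotonic()
    while True:
        # Wait at most one second so the inactivity timer is checked regularly
        readable, _, _ = select.select([sock], [], [], 1.0)
        if readable:
            try:
                data = sock.recv(4096)
            except OSError:
                print("connection error detected on recv")
                return
            if not data:
                print("peer closed the connection cleanly")
                return
            last_activity = time.monotonic()
            # ... process data here ...
        elif time.monotonic() - last_activity > PING_INTERVAL:
            try:
                # the first send may still "succeed" on a dead link;
                # a later send or poll will report the failure
                sock.sendall(b"PING")
            except OSError:
                print("connection lost (ping could not be sent)")
                return
            last_activity = time.monotonic()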
Why the send failed before exiting the server
The comment I left about this issue is wrong (in part).
The main reason the conn_sock.send(b'exit') failed is because conn_sock is a local variable in the client thread and isn't accessible from the global state where the SIGINT (CTRL+C) is raised.
This makes sense, as what would happen if the server has more than a single client?
However, it is true that socket.send only schedules the data to be sent, so the assumption that the data was actually sent is incorrect.
Also note that socket.send might not send the whole message if there isn't enough room in the kernel's buffer.
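On that last point, here is a small illustrative helper that keeps calling send until the whole payload has been handed to the kernel (socket.sendall does essentially the same thing); the name send_all_bytes is made up for the example:

def send_all_bytes(sock, payload):
    """Keep calling send() until the whole payload has been handed to the kernel."""
    total_sent = 0
    while total_sent < len(payload):
        sent = sock.send(payload[total_sent:])
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total_sent += sent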
I'm trying to write a fairly simple client-server Python application using socket and SocketServer. To allow for two-way communication between client and server, the client maintains one connected socket with the server so it can listen for messages in a separate thread, while the main thread creates one-time-use sockets to send messages to the server. I want my "listening" socket to be blocking, as it is running in a separate thread whose only purpose is to wait for data without blocking the main program. Here is the function where I create this socket:
def connect(self, alias, serverIP):
    if not alias or not isinstance(alias, str):
        print "ERROR: Must specify an alias"
        return
    self.serverIP = serverIP
    self.downConnection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    self.downConnection.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    self.downConnection.setblocking(1)
    self.downConnection.connect((self.serverIP, 11100))
    self.downConnection.send("SENDSERVER CONNECT %s" % alias)
Here is the loop where the persistent socket listens for messages from the server (with some debugging code thrown in):
i = 0
while True:
    print "LOOP", i,
    if self.closed:
        break
    try:
        data = self.downConnection.recv(1024)
    except socket.timeout, e:
        print "Timeout"
        pass
    else:
        print "Received %d" % len(data)
        if data:
            self.received(data)
    i += 1
I would expect to see "Received ##" messages only when the server sends data, and maybe periodic "Timeout" messages otherwise. Instead, the output grows very rapidly and looks entirely like this:
LOOP 33858 Received 0
LOOP 33859 Received 0
LOOP 33860 Received 0
LOOP 33861 Received 0
LOOP 33862 Received 0
LOOP 33863 Received 0
LOOP 33864 Received 0
LOOP 33865 Received 0
So it seems that self.downConnection.recv() is immediately returning an empty string each time it is called, rather than blocking until it receives substantive data like it's supposed to. This is puzzling, as I'm explicitly setting the socket to be blocking (which I think is also the default setting). Constantly executing this loop instead of the thread spending most of its time waiting for data is wasting a good deal of CPU time. What am I doing wrong in setting up the blocking socket?
Here is the full server code. The Comms class is also the superclass of the client class, to allow for some basic common functionality.
Something does seem to be wrong with the connection from the server's end. The server can receive data from the client, but trying to send data to the client gives a socket.error: [Errno 9] Bad file descriptor exception.
I am writing a client-server program based on Python sockets.
The client sends a command to the server and the server responds.
But now, some clients can broadcast a message to other clients, so a client can receive more than one response at a time.
data = s.recv(1024)
The line of code above will retrieve only one response from the server, but if I use a while loop like this:
while True:
    data = s.recv(1024)
    if not data: break
then the data = s.recv(1024) call will block the program when there is no data left.
I don't want to block the program and want to retrieve all the responses available in the connection at one time. Can anyone find a solution? Thank you.
You can use the select module to wait until the socket is readable or until a timeout has elapsed; you can then perform other processing. For example:
import select

while True:
    # If data can be received without blocking (timeout=0), read it now
    ready = select.select([s], [], [], 0)
    if s in ready[0]:
        data = s.recv(1024)
        # Process data
    else:
        # No data is available, perform other tasks
        pass
You could make the socket (s) non-blocking. This way, recv will retrieve whatever responses have already arrived, and when there is nothing left it returns immediately (with an error you can catch) instead of blocking. Of course, with a non-blocking socket you will have to retry periodically.
You could make the socket (s) non-blocking using the setblocking() method:
s.setblocking(0)
The other option is to use another thread to handle the receive part. This way, your main thread can continue doing its main task and act upon the message only if it receives one.
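A rough sketch of that threaded approach, assuming received chunks are simply pushed onto a queue.Queue that the main thread polls; the start_receiver name and the queue-based hand-off are illustrative choices, not a fixed recipe:

import queue
import threading

def start_receiver(sock):
    """Spawn a background thread that pushes every received chunk onto a queue."""
    inbox = queue.Queue()

    def receiver():
        while True:
            data = sock.recv(1024)   # blocking recv only blocks this helper thread
            if not data:             # empty read: the peer closed the connection
                break
            inbox.put(data)

    threading.Thread(target=receiver, daemon=True).start()
    return inbox

# In the main thread, check for messages without blocking:
# inbox = start_receiver(s)
# try:
#     msg = inbox.get_nowait()
# except queue.Empty:
#     pass  # nothing received yet; keep doing other work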
You can use socket.setblocking or socket.settimeout:

import socket
import sys

HOST = 'www.google.com'
PORT = 80

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.setblocking(0)
s.sendall('Hello, world')

try:
    data = s.recv(1024)
except:
    print 'Oh noes! %s' % sys.exc_info()[0]

s.close()
socket.recv takes two parameters; the second is a set of flags. If you're on a Linux system, you can run man recv for a list of flags you can supply, and their corresponding errors.
Lastly, in general, you can't really know that the other side is done with sending you data (unless you're controlling both sides), even if you're both following a protocol. I believe the right way to go about it is to use timeouts, and quit after sending a reset (how you do this will depend upon what protocol you're using).
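For example, here is a small illustrative sketch that uses a receive timeout to decide that the peer has probably finished sending; the 2-second value and the function name are arbitrary assumptions:

import socket

def recv_until_quiet(sock, timeout=2.0):
    """Collect data until no more arrives within `timeout` seconds."""
    sock.settimeout(timeout)
    chunks = []
    try:
        while True:
            data = sock.recv(1024)
            if not data:        # peer closed the connection
                break
            chunks.append(data)
    except socket.timeout:
        pass                    # nothing arrived for `timeout` seconds; assume we're done
    finally:
        sock.settimeout(None)   # restore blocking mode
    return b''.join(chunks)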
I have a basic client-server script in Python using sockets. The server binds to a specific port and waits for a client connection. When a client connects, they are presented with a raw_input prompt that sends the entered commands to a subprocess on the server and pipes the output back to the client.
Sometimes when I execute commands from the client, the output will hang and not present me with the raw_input prompt until I press the [enter] key.
At first I thought this might have been a buffer problem but it happens when I use commands with a small output, like 'clear' or 'ls', etc.
The client code:
import os, sys
import socket
from base64 import *
import time

try:
    HOST = sys.argv[1]
    PORT = int(sys.argv[2])
except IndexError:
    print("You must specify a host IP address and port number!")
    print("usage: ./handler_client.py 192.168.1.4 4444")
    sys.exit()

socksize = 4096

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

try:
    server.connect((HOST, PORT))
    print("[+] Connection established!")
    print("[+] Type ':help' to view commands.")
except:
    print("[!] Connection error!")
    sys.exit(2)

while True:
    data = server.recv(socksize)
    cmd = raw_input(data)
    server.sendall(str(cmd))

server.close()
Server code:
import os, sys
import socket
import time
from subprocess import Popen, PIPE, STDOUT, call

HOST = ''
PORT = 4444
socksize = 4096
activePID = []

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.bind((HOST, PORT))
conn.listen(5)
print("Listening on TCP port %s" % PORT)

def reaper():
    while activePID:
        pid, stat = os.waitpid(0, os.WNOHANG)
        if not pid: break
        activePID.remove(pid)

def handler(connection):
    time.sleep(3)
    while True:
        cmd = connection.recv(socksize)
        proc = Popen(cmd,
                     shell=True,
                     stdout=PIPE,
                     stderr=PIPE,
                     stdin=PIPE,
                     )
        stdout, stderr = proc.communicate()
        if cmd == ":killme":
            connection.close()
            sys.exit(0)
        elif proc:
            connection.send( stdout )
            connection.send("\nshell => ")
    connection.close()
    os._exit(0)

def accept():
    while 1:
        global connection
        connection, address = conn.accept()
        print "[!] New connection!"
        connection.send("\nshell => ")
        reaper()
        childPid = os.fork()  # forks the incoming connection and sends to conn handler
        if childPid == 0:
            handler(connection)
        else:
            activePID.append(childPid)

accept()
The problem I see is that the final loop in the client only does one server.recv(socksize), and then it calls raw_input(). If that recv() call does not obtain all of the data sent by the server in that single call, then it also won't collect the prompt that follows the command output and therefore won't show that next prompt. The uncollected input will sit in the socket until you enter the next command, and then it will be collected and shown. (In principle it could take many recv() calls to drain the socket and get to the appended prompt, not just two calls.)
If this is what's happening then you would hit the problem if the command sent back more than one buffer's worth (4KB) of data, or if it generated output in small chunks spaced out in time so that the server side could spread that data over multiple sends that are not coalesced quickly enough for the client to collect them all in a single recv().
To fix this, you need to have the client do as many recv() calls as it takes to completely drain the socket. So you need to come up with a way for the client to know that the socket has been drained of everything that the server is going to send in this interaction.
The easiest way to do this is to have the server add boundary markers into the data stream and then have the client inspect those markers to discover when the final data from the current interaction has been collected. There are various ways to do this, but I'd probably have the server insert a "this is the length of the following chunk of data" marker ahead of every chunk it sends, and send a marker with a length of zero after the final chunk.
The client-side main loop then becomes:
forever:
    read a marker;
    if the length carried in the marker is zero then
        break;
    else
        read exactly that many bytes;
Note that the client must be sure to recv() the complete marker before it acts on it; stuff can come out of a stream socket in lumps of any size, completely unrelated to the size of the writes that sent that stuff into the socket at the sender's side.
You get to decide whether to send the marker as variable-length text (with a distinctive delimiter) or as fixed-length binary (in which case you have to worry about endian issues if the client and server can be on different systems). You also get to decide whether the client should show each chunk as it arrives (obviously you can't use raw_input() to do that) or whether it should collect all of the chunks and show the whole thing in one blast after the final chunk has been collected.
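For concreteness, here is a minimal sketch of one possible marker scheme, using a fixed 4-byte big-endian length prefix; the send_msg/recv_msg names and the struct format are illustrative choices, not something your current code uses:

import struct

def send_msg(sock, payload):
    # Prefix each chunk with its length; a zero-length prefix marks the end of the reply
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, count):
    # recv() can return fewer bytes than asked for, so loop until we have them all
    buf = b""
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    # Returns one complete reply: all chunks up to (but not including) the zero-length marker
    parts = []
    while True:
        (length,) = struct.unpack("!I", recv_exactly(sock, 4))
        if length == 0:
            break
        parts.append(recv_exactly(sock, length))
    return b"".join(parts)

With a scheme like this, the server would call send_msg(connection, stdout) for each chunk of output and then send_msg(connection, b"") to mark the end of the reply, and the client would call recv_msg(server) before showing the next prompt. Using network byte order ("!I") sidesteps the endianness concern mentioned above.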