closing the server after all clients are closed - python

I have a very basic socket script which sends a single message to clients.
Part of the server script:
while True:
    con, address = s.accept()
    con.send("Hello from server".encode())
    con.close()
s.close()
Part of the client script:
message = s.recv(5)
while message:
    print("Message", message.decode())
    sleep(1)
    message = s.recv(5)
s.close()
I start 2 clients. They both print the message (5 bytes at a time), then close.
However, the server remains open because it is still waiting for more clients.
What is the correct way to exit the server's while True loop?

You have to specify the condition on which you want your server to exit. Usually a server is programmed like a daemon, i.e., it runs indefinitely. In Python you already have one way to break the infinite while loop: Ctrl-C, which raises a KeyboardInterrupt. Otherwise, think of the following:
After N clients have been handled, break inside the loop. You will need a counter to keep track of the clients handled (see the sketch below).
On some POSIX signal, as in the answers to How do I capture SIGINT in Python? -- this is usually how daemons terminate nicely.
By the way, your server code may need a rewrite: you currently handle only one client at a time, with no parallel processing. It will very easily run into head-of-line blocking once you have many clients.
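For illustration, here is a minimal sketch of the first option, assuming the same kind of listening socket as in the question; the limit of 2 clients and the address are made up for the example:

import socket

MAX_CLIENTS = 2  # made-up limit for this example

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 5000))  # placeholder address/port
s.listen(5)

handled = 0
try:
    while handled < MAX_CLIENTS:
        con, address = s.accept()
        con.send("Hello from server".encode())
        con.close()
        handled += 1
except KeyboardInterrupt:
    pass  # Ctrl-C also breaks out of the loop cleanly
finally:
    s.close()  # now actually reached

Once the counter reaches the limit (or Ctrl-C is pressed), the loop ends and s.close() finally runs.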

Related

How do I keep a UDP server listening in Python 3 without the WHILE loop locking up the program?

Every UDP server example I can find uses a while True loop to listen for incoming data. I'm attempting to use a single UDP socket server as part of a Kivy window that's also doing other things. As soon as I implement the server's while True loop, everything locks up, as I guess I would expect it to.
How do I listen on a UDP port and also have the rest of the program continue functioning?
I've tried moving the UDP server handling to another file (udp_server.py) and then importing the function, but since I'm importing the while loop, nothing changes.
I've also tried assigning the received data to a variable inside udp_server.py and then just importing that variable, with udp_server.py already running separately, but even that locks up my main program.
I'm 99.99% sure it's just some basic thing that I should already know, but I'm new to Python. Thanks in advance for any help.
Thank you Chris!!!!!!
I'm sure I'm understating the complexity of threading, but it works great now and the only thing I had to add was:
import threading

def thread_function():
    from udp_server import amx_rx
    # do stuff with amx_rx...

# class TouchPanel stuff...

if __name__ == '__main__':
    x = threading.Thread(target=thread_function, daemon=True)
    x.start()
    try:
        TouchPanel().run()
    except KeyboardInterrupt:
        raise
Now I have a running program with a UDP socket listening in the background! Thank you!!!
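For completeness, here is a minimal sketch of what the imported udp_server.py might look like; the port number and the idea of exposing received datagrams through an amx_rx queue are assumptions for illustration, not the asker's actual module:

# udp_server.py -- hypothetical sketch of the background listener
import socket
import queue

amx_rx = queue.Queue()  # received datagrams; a queue so the imported name stays shared

def listen(port=10000):  # the port is an assumption
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:  # a blocking loop is fine inside a daemon thread
        data, addr = sock.recvfrom(1024)
        amx_rx.put(data)

Because the thread is started with daemon=True, the blocking recvfrom() loop no longer freezes the Kivy main loop, and it is killed automatically when the main program exits.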

How to fork and exec a server and wait until it's ready?

Suppose I've got a simple Tornado web server, which starts like this:
app = ... # create an Application
srv = tornado.httpserver.HTTPServer(app)
srv.bind(port)
srv.start()
tornado.ioloop.IOLoop.instance().start()
I am writing an "end-to-end" test, which starts the server in a separate process with subprocess.Popen and then calls the server over HTTP. Now I need to make sure the server did not fail to start (e.g. because the port is busy) and then wait until the server is ready.
I wrote a function to wait until the server is ready:
def wait_till_ready(port, n=10, time_out=0.5):
    for i in range(n):
        try:
            requests.get("http://localhost:" + str(port))
            return
        except requests.exceptions.ConnectionError:
            time.sleep(time_out)
    raise Exception("failed to connect to the server")
Is there a better way?
How can the parent process, which forks and execs the server, make sure that the server didn't fail, for example because the server port is busy? (I can change the server code if needed.)
You could approach it in two ways:
Make a pipe / queue before you fork. Then, just before you start the IO loop, notify the parent that everything went fine and you're ready for requests (see the sketch below).
Open the port and bind to it before forking. You should make sure you close that socket on the parent side. But then the only thing that needs to run in the child is the IO loop, and you can handle all the other errors before the fork.
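Since the test already launches the server with subprocess.Popen, one minimal sketch of the first approach is to have the server print a marker line on stdout just before entering the IO loop and have the parent block on that line; the "READY" marker and the server.py file name are made up for this example:

# server side (sketch): signal readiness just before the IO loop starts
srv.bind(port)
srv.start()
print("READY", flush=True)  # hypothetical marker line
tornado.ioloop.IOLoop.instance().start()

# test side (sketch)
import subprocess
proc = subprocess.Popen(["python", "server.py"], stdout=subprocess.PIPE, text=True)
line = proc.stdout.readline()  # blocks until the child prints READY or exits
if proc.poll() is not None or line.strip() != "READY":
    raise RuntimeError("server failed to start")

If the port is busy, srv.bind(port) raises before "READY" is ever printed, so readline() returns an empty string and the check reports the failure instead of waiting.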

Receiving multiple messages via socketserver but one is sent

I have an application with two threads. It's a network-controlled game.
1. thread (Server)
Accepts socket connections and receives messages
When a message is received, creates an event and adds it to the queue
Code:
class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        try:
            while True:
                sleep(0.06)
                message = self.rfile.readline().strip()
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                print message
                pygame.event.post(my_event)
2. thread (pygame)
In charge of game rendering
Receives messages via the event queue, which the Server populates
Renders the game based on messages every 60ms
This is how the game looks. The control messages are just speeds for the little square.
For debugging purposes I connect to the server from a virtual machine with:
ncat 192.168.56.1 2000
And then send control messages. In production, these messages will be sent every 50ms by an Android device.
The problem
In my debug environment, I manually type messages every few seconds. During the time I don't type anything, the game gets rendered many times. What happens is that the message (in the server code) is constantly rendered with the previously received value.
I send the following:
1:0.5
On the console where the app is started, I receive the following, because of the print message line in the Server code:
alan#alan ~/.../py $ python main.py
1:0.5
The game acts as if it is constantly receiving this value (at the period at which it renders, not every few seconds as I type).
Since that is happening, I would expect the print message, which is inside the while True loop, to also output constantly, so that the output would be:
alan#alan ~/.../py $ python main.py
1:0.5
1:0.5
1:0.5
1:0.5
....
However, that is not the case. Please advise (I'm also open to suggestions for a better subject line if this one isn't explanatory enough).
Your while True loop is polling the socket, which is only going to get messages when they are sent; it has no idea of or interest in what the downstream event consumer is doing with those messages, it is just going to dispatch an event for, and print the contents of, the next record on the socket queue every 0.06 seconds. If you want the game to print the current command every render loop, you'll have to put the print statement in the render loop itself, not in the socket poller. Also, since you seem to want the last command to "stick" and not post a new event unless the user actually inputs something, you might want to put an if message: block around the event dispatch code in the socket handler you have here (see the sketch below). Right now, you'll post an empty event every 0.06 seconds if the user hasn't provided any input since the last time you checked.
I also don't think it's advisable to put a sleep, or the loop you have for that matter, in your socket handler. The SocketServer is going to call it every time you receive data on the socket, so that loop is effectively being done for you, and all doing it here does is risk overflowing the buffer, I think. If you want to control how often you post events to pygame, you probably want to do that either by blocking events of a certain type from being added if one is already queued, or by grabbing all events of a given type from the queue each game loop and then ignoring all but the first or last one. You could also control it by checking in the handler whether a certain amount of time has passed since the last event was posted, but then you have to make sure the event consumer can handle an event queue with multiple events waiting on it and flushes the queue appropriately when needed.
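For illustration, here is a minimal sketch of the handler with the if message: guard described above and without the sleep; it reuses the names from the question's snippet and is not a tested drop-in:

class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        while True:
            message = self.rfile.readline()
            if not message:      # empty read: the client closed the connection
                break
            message = message.strip()
            if message:          # only dispatch an event for an actual command
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                pygame.event.post(my_event)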
Edit:
Docs:
The difference is that the readline() call in the second handler will call recv() multiple times until it encounters a newline character, while the single recv() call in the first handler will just return what has been sent from the client in one sendall() call.
So yes, reading the whole line is guaranteed. In fact, I don't think the try is necessary either, since this won't even be called unless there is input to handle.

python socket server/client protocol with unstable client connection

I have a threaded Python socket server that opens a new thread for each connection.
The thread is a very simple communication based on question and answer.
Basically, the client sends an initial data transmission, the server takes it, runs an external app that does stuff to the transmission and returns a reply, which the server sends back, and the loop begins again until the client disconnects.
Now, because the client will be on a mobile phone, and thus an unstable connection, I get left with open threads that are no longer connected, and because the loop starts with recv it is rather difficult to break on lost connectivity this way.
I was thinking of adding a send before the recv to test whether the connection is still alive, but this might not help at all if the client disconnects right after my failsafe send, as the client only sends a data stream every 5 seconds.
I noticed the recv will break sometimes, but not always, and in those cases I am left with zombie threads using resources.
Also, this could be a solid vulnerability, leaving my system open to being DoSed.
I have looked through the Python manual and Googled since Thursday trying to find something for this, but most things I find are related to the client side and to non-blocking mode.
Can anyone point me in the right direction towards a good way of fixing this issue?
Code samples:
Listener:
serversocket = socket(AF_INET, SOCK_STREAM)
serversocket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
serversocket.bind(addr)
serversocket.listen(2)
logg("Binded to port: " + str(port))

# Listening Loop
while 1:
    clientsocket, clientaddr = serversocket.accept()
    threading.Thread(target=handler, args=(clientsocket, clientaddr, port,)).start()

# This is useless as it will never get here
serversocket.close()
Handler:
# Socket connection handler (Threaded)
def handler(clientsocket, clientaddr, port):
    clientsocket.settimeout(15)
    # Loop till client closes connection or connection drops
    while 1:
        stream = ''
        while 1:
            ending = stream[-6:]  # get stream ending
            if ending == '.$$$$.':
                break
            try:
                data = clientsocket.recv(1)
            except:
                sys.exit()
            if not data:
                sys.exit()
                # this is the usual point where the thread is closed when a client closes the connection normally
            stream += data
        # Clear the line ending
        stream = base64.b64encode(stream[:-6])
        # Send data to be processed
        re = getreply(stream)
        # Send response to client
        try:
            clientsocket.send(re + str('.$$$$.'))
        except:
            sys.exit()
As you can see, there are three conditions of which at least one should trigger an exit if the connection fails, but sometimes they do not.
Sorry, but I think the threaded approach is not a good fit in this case. Since you do not need to do much processing in these threads (workers?), and most of the time they are just waiting on the socket (a blocking operation), I would advise reading about event-driven programming. For sockets this pattern is extremely useful, because you can do all the work in one thread. You communicate with one socket at a time, while the rest of the connections are simply waiting for data, so there is almost no loss. When you have sent a few bytes, you just check whether another connection needs servicing. You can read about select and epoll (a minimal select-based sketch follows below).
In Python there are several libraries that let you play with this nicely:
libev (a C library wrapper) - pyev
tornado
twisted
I used tornado in some projects and it handled this task very well. libev is also nice, but it is a C wrapper, so it is a bit lower-level (though very nice for some tasks).
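As a rough illustration of the event-driven idea using only the standard library, here is a minimal select-based loop in a single thread; the address, buffer size, and echo-style reply are placeholders, not the asker's actual protocol:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 2000))  # placeholder address/port
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [], 15)  # wake up at least every 15 s
    for sock in readable:
        if sock is server:
            client, addr = server.accept()
            sockets.append(client)
        else:
            data = sock.recv(1024)
            if not data:  # client vanished: clean up here, no zombie thread left behind
                sockets.remove(sock)
                sock.close()
            else:
                sock.send(data)  # placeholder for the real request/reply logic

Dead connections show up as readable sockets that return empty data, so they are cleaned up in the same loop instead of lingering in blocked threads.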
So you should use socket.settimeout(float) with the clientsocket, as one of the comments suggested.
The reason you don't see any difference is that when you call socket.recv(bufsize[, flags]) and the timeout runs out, a socket.timeout exception is raised, and your bare except catches that exception and exits.
try:
    data = clientsocket.recv(1)
except:
    sys.exit()
should be something like:
try:
    data = clientsocket.recv(1)
except timeout:  # socket.timeout (assuming `from socket import *`, as the listener code suggests)
    # timeout occurred -- handle it
    clientsocket.close()
    sys.exit()

Zeromq with python hangs if connecting to invalid socket

If I connect to a nonexistent socket with pyzmq, I need to hit Ctrl-C to stop the program. Could someone explain why this happens?
import zmq

INVALID_ADDR = 'ipc:///tmp/idontexist.socket'

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect(INVALID_ADDR)

socket.send('hello')

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

conn = dict(poller.poll(1000))
if conn:
    if conn.get(socket) == zmq.POLLIN:
        print "got result: ", socket.recv(zmq.NOBLOCK)
else:
    print 'got no result'
This question was also posted as a pyzmq Issue on GitHub. I will paraphrase my explanation here (I hope that is appropriate, I am fairly new to SO):
A general rule: When in doubt, hangs at the end of your zeromq program are due to LINGER.
The hang here is caused by the LINGER socket option, and happens in the context.term() method called during garbage collection at the very end of the script. The LINGER behavior is described in the zeromq docs, but to put it simply, it is a timeout (in milliseconds) to wait for any pending messages in the queue to be handled after closing the socket before dropping the messages. The default behavior is LINGER=-1, which means to wait forever.
In this case, since no peer was ever started, the 'hello' message that you tried to send is still waiting in the send queue when the socket tries to close. With LINGER=-1, ZeroMQ will wait until a peer is ready to receive that message before shutting down. If you bind a REP socket to 'ipc:///tmp/idontexist.socket' while this script is apparently hanging, the message will be delivered and the script will finish exiting cleanly.
If you do not want your script to wait (as indicated by your print statements that you have already given up on getting a reply), set LINGER to any non-negative value (e.g. socket.linger = 0), and context.term() will return after waiting the specified number of milliseconds.
I should note that the INVALID_ADDR variable name suggests a belief that connecting to an interface that does not yet have a listener is invalid - this is incorrect. zeromq allows bind/connect events to happen in any order, as illustrated by the behavior described above of binding a REP socket to the interface while the sending script is blocking in term().
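For illustration, the fix described above is a single extra line in the question's script; socket.setsockopt(zmq.LINGER, 0) is the equivalent spelling:

socket = context.socket(zmq.REQ)
socket.linger = 0  # drop pending messages on close instead of waiting forever
socket.connect(INVALID_ADDR)
socket.send('hello')
# ... rest of the script unchanged; context.term() at interpreter exit now returns promptly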
In most cases, you can bind and connect ZMQ sockets in either order, so your connect()/send() is simply waiting for the corresponding bind() at the other end, which never comes, so the program appears to hang. Check where the program is hanging by printing out some logging statements...
