I'm using SocketServer.ThreadingMixIn, pretty much as in the docs.
Apart from moving the client into its own script, I've also redefined the handle method, because I want the connection to the client to stay alive and receive more messages:
def handle(self):
    try:
        while True:
            data = self.request.recv(1024)
            if not data:
                break  # quits the thread if the client disconnected
            cur_thread = threading.current_thread()
            print(cur_thread.name)
            self.request.send(data)
    except Exception:
        pass
The problem is that even when I try to terminate the server with server.shutdown() or a KeyboardInterrupt, it stays blocked in handle as long as a client keeps its socket open.
So how can I effectively stop the server even if there are still connected clients?
The best solution I found was to use SocketServer.ForkingMixIn instead of SocketServer.ThreadingMixIn.
This way the daemon actually works, even though using processes instead of threads was not exactly what I wanted.
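For reference, that fix can be sketched like this, using the Python 3 module name socketserver (the class and handler names here are illustrative, and port 0 just asks the OS for a free port):

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Same keep-alive loop as above: echo until the client disconnects.
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

class ForkingEchoServer(socketserver.ForkingMixIn, socketserver.TCPServer):
    # Each client is handled in a child process, so shutting down the
    # parent is not blocked by a client holding its socket open.
    allow_reuse_address = True

server = ForkingEchoServer(("127.0.0.1", 0), EchoHandler)
print("listening on", server.server_address)
# server.serve_forever() would run here; server.shutdown() stops it.
server.server_close()
```

Note that ForkingMixIn is only available on platforms with fork(), i.e. not on Windows.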
Related
I'm making a client/server system in Python that requires the server to run for prolonged periods of time. For each client that connects, it spawns a thread using threading. When a user joins, the thread for that user enters a while loop that forever checks for messages from the client. If the client quits, the while loop breaks and the code inside the thread ends. I thought that this was the safe way to 'close a thread': by simply finishing the code that it had been given to execute.
However, an unrelated error on my server reported "Error on Thread 40". At the time of the error there were only 4 clients connected, so how were 40 threads open? Is this safe, and am I closing my threads properly and safely in order to avoid the server crashing from memory overload?
def acceptConnections():
    while True:
        client, client_address = sock.accept()
        Thread(target=handleClient, args=(client,)).start()

def handleClient(client):
    while True:
        message = client.recv(1024)  # recv needs a buffer size
        if message == 'exit':
            break
    # I thought the thread closes here?
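Returning from the target function is indeed how a thread ends; here is a quick self-contained check (worker is just a stand-in for the recv loop). If 40 threads show up with only 4 clients, something else is likely keeping the handler functions from returning:

```python
import threading
import time

def worker():
    time.sleep(0.1)  # stand-in for the recv loop that ends when the client quits

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Once the target function returns, the thread really is finished:
print(all(not t.is_alive() for t in threads))  # → True
```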
I'm trying to write a server program and I have a thread for listening for new clients:
class ClientFinder(Thread):
    def __init__(self, port):
        Thread.__init__(self)
        self._continue = True
        self._port = port
        # try to create socket

    def run(self):
        # listen for new clients
        while self._continue:
            pass  # add new clients

    def stop(self):
        # stop client
        self._continue = False

client_finder = ClientFinder(8000)
client_finder.start()
client_finder.stop()
client_finder.join()
I can't join client_finder because it never ends. Calling stop() lets the thread stop after the next client is accepted, so the program just hangs forever.
1) Is it okay for my program to just end even if I haven't joined all my threads (such as by removing the join)? Or is this lazy/bad practice?
2) If it is a problem, what's the solution/best practice to avoid this? From what I've found so far, there's no way to force a thread to stop.
Whether waiting for the current clients to finish is a problem is really your choice. It may be a good idea, or you may prefer to kill connections.
Waiting for a new client is probably a worse thing, since it may never happen. An easy solution would be to have some reasonable timeout for the listening - let's say if nobody connects in 5s, you go back to the loop to check the flag. This is short enough for a typical shutdown solution, but long enough that rechecking shouldn't affect your CPU usage.
If you don't want to wait for a short timeout, you can add a pipe/socket between the thread doing shutdown and your ClientFinder and send a notification to shutdown. Instead of only waiting for a new client, you'd need to wait on both fds (I'm assuming ClientFinder uses sockets) and check which of them got a message.
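A minimal sketch of the timeout approach (the class mirrors the question's ClientFinder; poll_timeout and the client handling are illustrative):

```python
import socket
import threading

class ClientFinder(threading.Thread):
    def __init__(self, port, poll_timeout=5.0):
        threading.Thread.__init__(self)
        self._continue = True
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._sock.bind(("127.0.0.1", port))
        self._sock.listen(5)
        # accept() now gives up every poll_timeout seconds so the
        # stop flag can be rechecked.
        self._sock.settimeout(poll_timeout)

    def run(self):
        while self._continue:
            try:
                client, address = self._sock.accept()
            except socket.timeout:
                continue  # nobody connected; recheck the flag
            client.close()  # here you would hand the client off instead
        self._sock.close()

    def stop(self):
        self._continue = False

finder = ClientFinder(0, poll_timeout=0.5)  # port 0 picks a free port
finder.start()
finder.stop()
finder.join()  # returns within about one poll interval
print(finder.is_alive())  # → False
```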
Socket.close() does not stop any blocking socket.accept() calls that are already running on that socket.
I have several threads in my python program that only run a blocking socket.accept() call on a unix domain socket that has been closed already.
I want to kill these threads by making the socket.accept() calls stop or raise an exception.
I am trying to do this by loading new code in the program, without stopping the program.
Therefore, changing the code that spawned these threads or that closed the sockets is not an option.
Is there any way to do this?
This is similar to https://stackoverflow.com/a/10090348/3084431, but the solutions there won't work for my code:
1. This point is not true: closing the socket won't raise an exception on the accept. shutdown() does, but that can no longer be called once the socket is closed.
2. I can not connect to this socket anymore; the socket is closed.
3. The threads with the accept calls are already running, I can't change them.
4. Same as 3.
For clarification, I have written some example code that has this problem.
This code works in both python 2 and python 3.
import socket
import threading
import time
address = "./socket.sock"
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(address)
sock.listen(5)
def accept():
    print(sock.accept())
t = threading.Thread(target=accept, name="acceptorthread")
t.start()
sock.close()
time.sleep(0.5) # give the thread some time to register the closing
print(threading.enumerate()) # the acceptorthread will still be running
What I need is something that I can run after this code has finished that can stop the acceptor thread somehow.
There is no mechanism in the kernel to notify every listener that a socket has been closed. You have to handle it yourself. A simple solution is to use a timeout on the socket:
sock.settimeout(1)

def accept():
    while True:
        try:
            print(sock.accept())
        except socket.timeout:
            continue
        break
Now when you close the socket the next call (after a timeout) to .accept() will throw a "bad descriptor" exception.
Also remember that the socket API in Python is not thread-safe. Wrapping every socket call with a lock (or another synchronization method) is advised in a multi-threaded environment.
A more advanced (and more efficient) approach would be to wrap your socket access in a select call. Note that the socket does not have to be in non-blocking mode in order to use it.
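A sketch of the select approach (the function and should_stop are illustrative names; the socket stays in its default blocking mode):

```python
import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 picks a free port
listener.listen(5)

def accept_when_ready(listener, should_stop, poll_interval=1.0):
    # Block in select() instead of accept(): select wakes up every
    # poll_interval seconds, so a stop condition can be rechecked
    # without ever sitting in an uninterruptible accept().
    while not should_stop():
        readable, _, _ = select.select([listener], [], [], poll_interval)
        if readable:
            return listener.accept()
    return None

# With the stop condition already true, it returns immediately:
print(accept_when_ready(listener, lambda: True))  # → None
```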
Therefore, changing the code that spawned these threads or that closed the sockets is not an option.
If that's the case, then you are doomed. Without changing the code running in threads it is impossible to achieve. It's like asking "how can I fix my broken car without modifying the car". Won't happen, mate.
You should only call .accept() on a socket that has given the "readable" result from some selectors. Then, accept doesn't need to be interrupted.
But in case of spurious wakeup, you should have the listening socket in O_NONBLOCK mode anyway.
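That pattern can be sketched with the standard selectors module (poll_accept is an illustrative name; the non-blocking mode is what makes the spurious-wakeup case safe):

```python
import selectors
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 picks a free port
listener.listen(5)
listener.setblocking(False)  # so a spurious wakeup can't block us in accept()

sel = selectors.DefaultSelector()
sel.register(listener, selectors.EVENT_READ)

def poll_accept(timeout):
    # Only call accept() once the selector reports the socket readable.
    for key, _events in sel.select(timeout):
        if key.fileobj is listener:
            try:
                return listener.accept()
            except BlockingIOError:
                pass  # spurious wakeup: nothing was actually pending
    return None

print(poll_accept(0.1))  # → None (no client has connected yet)
```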
I'm trying to connect to more than one server at the same time. I am currently using loop.create_connection but it freezes up at the first non-responding server.
gsock = loop.create_connection(lambda: opensock(sid), server, port)
transport, protocol = loop.run_until_complete(gsock)
I tried threading this but it created problems with the sid value being used, as well as various errors such as RuntimeError: Event loop is running and RuntimeError: Event loop stopped before Future completed. Also, according to my variables (though they were getting mixed up), the protocol's connection_made() method gets executed when transport, protocol = loop.run_until_complete(gsock) throws an exception.
I don't understand much about the asyncio module so please be as thorough as possible. I dont think I need reader/writer variables, as the reading should be done automatically and trigger data_received() method.
Thank You.
You can connect to many servers at the same time by scheduling all the coroutines concurrently, rather than using loop.run_until_complete to make each connection individually. One way to do that is to use asyncio.gather to schedule them all and wait for each to finish:
import asyncio

# define opensock somewhere

@asyncio.coroutine
def connect_serv(server, port):
    try:
        transport, protocol = yield from loop.create_connection(lambda: opensock(sid), server, port)
    except Exception:
        print("Connection to {}:{} failed".format(server, port))

loop = asyncio.get_event_loop()
loop.run_until_complete(
    asyncio.gather(
        connect_serv('1.2.3.4', 3333),
        connect_serv('2.3.4.5', 5555),
        connect_serv('google.com', 80),
    ))
loop.run_forever()
This will kick off all three coroutines listed in the call to gather concurrently, so that if one of them hangs, the others won't be affected; they'll be able to carry on with their work while the other connection hangs. Then, once all of them complete, loop.run_forever() gets executed, which will allow your program to continue running until you stop the loop or kill the program.
The reader/writer variables you mentioned would only be relevant if you used asyncio.open_connection to connect to the servers, rather than create_connection. It uses the Stream API, which is a higher-level API than the protocol/transport-based API that create_connection uses. It's really up to you to decide which you prefer to use. There are examples of both in the asyncio docs, if you want to see a comparison.
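For comparison, here is a sketch of the same concurrent-connection idea using the Stream API with modern async/await syntax. The local echo server only exists to make the example self-contained; connect_and_echo and handle_echo are illustrative names:

```python
import asyncio

async def handle_echo(reader, writer):
    # Trivial server: echo the first chunk back, then hang up.
    data = await reader.read(100)
    writer.write(data)
    await writer.drain()
    writer.close()

async def connect_and_echo(host, port, message):
    try:
        reader, writer = await asyncio.open_connection(host, port)
    except OSError:
        print("Connection to {}:{} failed".format(host, port))
        return None
    writer.write(message)
    await writer.drain()
    reply = await reader.read(100)
    writer.close()
    return reply

async def main():
    server = await asyncio.start_server(handle_echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # Both connections run concurrently, just like with gather above.
    replies = await asyncio.gather(
        connect_and_echo("127.0.0.1", port, b"one"),
        connect_and_echo("127.0.0.1", port, b"two"),
    )
    server.close()
    await server.wait_closed()
    return replies

print(asyncio.run(main()))  # → [b'one', b'two']
```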
I'd like to create a python socket (or SocketServer) that, once connected to a single device, maintains an open connection in order for regular checks to be made to see if any data has been sent. The socket will only listen for one connection.
E.g.:
def get_data(conn):
    response = 'back atcha'
    data = conn.recv(1024)
    print 'get_data:', data
    if data:
        conn.send(response)

s = open_socket()
conn, addr = s.accept()
while True:
    print 'running'
    time.sleep(1)
    get_data(conn)
    #do other stuff
Once the server socket is bound and the connection has been accepted, the socket blocks when running a .recv until either the connecting client sends some data or closes its socket. As I am waiting for irregular data (could be seconds, could be a day), and the program needs to perform other tasks in the meantime, this blocking is a problem.
I don't want the client to close its socket, as it may need to send (or receive) data at any time to (from) the server. Is the only solution to run this in a separate thread, or is there a simple way to setup the client/server sockets to maintain the connection forever (and is this safe? It'll be running on a VLAN) while not blocking when no data has been received?
You're looking for non-blocking I/O, also called asynchronous I/O. Using a separate thread which blocks on this is very inefficient but it's pretty straightforward.
For a Python asynchronous I/O framework I highly recommend Twisted. Also check out asyncore, which comes with the standard library (though note it is deprecated and was removed in Python 3.12; asyncio is its replacement).
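Without pulling in a framework, the question's get_data can also be made non-blocking by polling the connection with select before calling recv. A sketch (Python 3, hence the bytes literals; the socketpair at the end just demonstrates the behaviour):

```python
import select
import socket

def get_data(conn, timeout=0.0):
    # A zero timeout makes select() a pure poll: it returns immediately,
    # reporting whether recv() would block.
    readable, _, _ = select.select([conn], [], [], timeout)
    if not readable:
        return None  # no data waiting; go do other stuff
    data = conn.recv(1024)
    if data:
        conn.send(b'back atcha')
    return data

# Demonstration with a connected pair of sockets:
server_side, client_side = socket.socketpair()
print(get_data(server_side))       # → None (nothing sent yet)
client_side.sendall(b'hello')
print(get_data(server_side, 1.0))  # → b'hello'
server_side.close()
client_side.close()
```

This keeps the connection open indefinitely while letting the main loop do other work between polls, which is exactly the pattern the question describes.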