I'm making a Client/Server system in Python that requires the server to run for prolonged periods of time; for each client that connects, it spawns a Thread using threading. When a user joins, the Thread for that user enters a while loop that checks forever for messages from the client. If the client quits, the while loop breaks and the code inside the thread ends. I thought this was the safe way to 'close a thread': simply letting it finish the code it was given to execute.
However, an unrelated error on my server produced the message "Error on Thread 40". At the time of the error there were only 4 clients connected, yet apparently 40 threads were alive? Is this safe, and am I closing my Threads properly and safely in order to avoid the server crashing from memory overload?
from threading import Thread

def acceptConnections():
    while True:
        client, client_address = sock.accept()   # sock is the listening socket, created elsewhere
        Thread(target=handleClient, args=(client,)).start()

def handleClient(client):
    while True:
        message = client.recv(1024).decode()   # receives messages (recv needs a buffer size)
        if message == 'exit':
            break
    # I thought the Thread closes here?
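One way to sanity-check this, assuming the acceptConnections/handleClient code above, is to log the live threads periodically with threading.enumerate() (a minimal sketch):

import threading
import time

def reportThreads():
    # A thread disappears from threading.enumerate() as soon as its target
    # function returns, so if handleClient() really ends, the count drops.
    while True:
        names = [t.name for t in threading.enumerate()]
        print("%d live threads: %s" % (len(names), names))
        time.sleep(10)

threading.Thread(target=reportThreads, daemon=True).start()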
I'm trying to write a server program and I have a thread for listening for new clients:
class ClientFinder(Thread):
    def __init__(self, port):
        Thread.__init__(self)
        self._continue = True
        self._port = port
        # try to create socket

    def run(self):
        # listen for new clients
        while self._continue:
            # add new clients
            pass

    def stop(self):
        # stop client
        self._continue = False
client_finder = ClientFinder(8000)
client_finder.start()
client_finder.stop()
client_finder.join()
I can't join client_finder because it never ends. Calling stop() only lets the thread stop after the next client is accepted, so the program just hangs forever.
1) Is it okay for my program to just end even if I haven't joined all my threads (such as by removing the join)? Or is this lazy/bad practice?
2) If it is a problem, what's the solution/best practice to avoid this? From what I've found so far, there's no way to force a thread to stop.
Whether waiting for the current clients to finish is a problem is really your choice. It may be a good idea, or you may prefer to kill connections.
Waiting for a new client is probably worse, since a new client may never connect. An easy solution would be to put a reasonable timeout on the listening: say, if nobody connects within 5s, you go back to the loop and check the flag. This is short enough for a typical shutdown, but long enough that the rechecking shouldn't affect your CPU usage.
If you don't want to wait for a short timeout, you can add a pipe/socket between the thread doing shutdown and your ClientFinder and send a notification to shutdown. Instead of only waiting for a new client, you'd need to wait on both fds (I'm assuming ClientFinder uses sockets) and check which of them got a message.
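A rough sketch of the timeout variant, where an add_client() method stands in for whatever ClientFinder actually does with a new connection:

import socket
from threading import Thread

class ClientFinder(Thread):
    def __init__(self, port):
        Thread.__init__(self)
        self._continue = True
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._socket.bind(('', port))
        self._socket.listen(5)
        self._socket.settimeout(5)    # wake up every 5s to re-check the flag

    def run(self):
        while self._continue:
            try:
                client, address = self._socket.accept()
            except socket.timeout:
                continue              # nobody connected; go re-check the flag
            self.add_client(client)   # placeholder for the real handling code
        self._socket.close()

    def stop(self):
        self._continue = False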
Socket.close() does not stop any blocking socket.accept() calls that are already running on that socket.
I have several threads in my python program that only run a blocking socket.accept() call on a unix domain socket that has been closed already.
I want to kill these threads by making the socket.accept() calls stop or raise an exception.
I am trying to do this by loading new code in the program, without stopping the program.
Therefore, changing the code that spawned these threads or that closed the sockets is not an option.
Is there any way to do this?
This is similar to https://stackoverflow.com/a/10090348/3084431, but these solutions won't work for my code:
1) This point is not true: closing won't raise an exception on the accept. shutdown() does, but that can no longer be called once the socket is closed.
2) I can not connect to this socket anymore. The socket is closed.
3) The threads with the accept calls are already running, I can't change them.
4) Same as 3.
For clarification, I have written some example code that has this problem.
This code works in both python 2 and python 3.
import socket
import threading
import time

address = "./socket.sock"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(address)
sock.listen(5)

def accept():
    print(sock.accept())

t = threading.Thread(target=accept, name="acceptorthread")
t.start()

sock.close()

time.sleep(0.5)  # give the thread some time to register the closing
print(threading.enumerate())  # the acceptorthread will still be running
What I need is something that I can run after this code has finished that can stop the acceptor thread somehow.
There is no mechanism in the kernel to notify every listener that a socket is closed. You have to write something yourself. A simple solution is to use a timeout on the socket:
sock.settimeout(1)

def accept():
    while True:
        try:
            print(sock.accept())
        except socket.timeout:
            continue
        break
Now, when you close the socket, the next .accept() call (after the current timeout expires) will throw a "bad file descriptor" exception.
Also remember that the sockets API in Python is not thread-safe. Wrapping every socket call with a lock (or another synchronization method) is advised in a multi-threaded environment.
More advanced (and efficient) would be to wrap your socket in a select call. Note that the socket does not have to be in non-blocking mode in order to use it.
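For instance, a minimal sketch of the select approach, reusing the module-level sock from the example above together with a stop_requested flag (a name made up here):

import select

stop_requested = False   # another thread sets this to True at shutdown time

def accept():
    while not stop_requested:
        # Wait up to 1s for the listening socket to become readable;
        # only then call accept(), which will return immediately.
        readable, _, _ = select.select([sock], [], [], 1.0)
        if readable:
            print(sock.accept())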
Therefore, changing the code that spawned these threads or that closed the sockets is not an option.
If that's the case, then you are doomed. Without changing the code running in threads it is impossible to achieve. It's like asking "how can I fix my broken car without modifying the car". Won't happen, mate.
You should only call .accept() on a socket that has given the "readable" result from some selectors. Then, accept doesn't need to be interrupted.
But in case of spurious wakeup, you should have the listening socket in O_NONBLOCK mode anyway.
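A sketch of that pattern with Python 3's selectors module, again assuming the listening sock from the example and a hypothetical shutting_down flag:

import selectors

sock.setblocking(False)                # non-blocking, so a spurious wakeup can't hang us
selector = selectors.DefaultSelector()
selector.register(sock, selectors.EVENT_READ)
shutting_down = False                  # set to True elsewhere to stop the thread

def accept():
    while not shutting_down:
        if selector.select(timeout=1.0):      # socket reported readable
            try:
                print(sock.accept())
            except BlockingIOError:
                continue                      # spurious wakeup: nothing to accept after all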
I'm using SocketServer.ThreadingMixIn, pretty much as in the docs.
Other than having extracted the clients to run in their own script, I've also redefined the handle method, as I want the connection to the client to stay alive and receive more messages:
def handle(self):
    cur_thread = threading.current_thread()   # the thread handling this client
    try:
        while True:
            data = self.request.recv(1024)
            if not data:
                break  # quits the thread if the client was disconnected
            else:
                print(cur_thread.name)
                self.request.send(data)
    except:
        pass
The problem is that even when I try to terminate the server with server.shutdown() or by KeyboardInterrupt, it will still be blocked on the handle as long as the client maintains an open socket.
So how can I effectively stop the server even if there are still connected clients?
The best solution I found was to use SocketServer.ForkingMixIn instead of SocketServer.ThreadingMixIn.
This way the daemon actually works, even though using processes instead of threads was not exactly what I wanted.
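A minimal sketch of that ForkingMixIn setup (Python 3 naming, with a hypothetical EchoHandler playing the role of the handler above):

import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break                 # client disconnected
            self.request.send(data)

class ForkingServer(socketserver.ForkingMixIn, socketserver.TCPServer):
    allow_reuse_address = True

server = ForkingServer(('localhost', 9999), EchoHandler)
server.serve_forever()                # each client is handled in its own child process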
I have a simple asynchronous consumer for AMQP/RabbitMQ, written in Python using the Pika library and based on the Asynchronous consumer example from the Pika docs. The main difference is that I want to run mine in a thread and I want it to close the connection properly then exit (i.e. terminate the thread) after a certain time interval. Here are my methods to open a connection and set a timeout. I also open a channel, create an exchange and bind a queue... all that works fine.
def connect(self):
    LOGGER.info('OPEN connection...')
    return pika.SelectConnection(self._parameters, self.on_connection_open, stop_ioloop_on_close=False)

def on_connection_open(self, unused_connection):
    LOGGER.info('Connection opened')
    self.add_on_connection_close_callback()
    self._connection.add_timeout(5, self.timer_tick)
    self.open_recv_channel()
Here's the timeout callback:
def timer_tick(self):
    LOGGER.info('---TICK---')
    self._stop()
Here's the _stop method:
def _stop(self):
    LOGGER.info('Stopping...')
    self._connection.close()
    LOGGER.info('Stopped')
    time.sleep(5)
    self._connection.ioloop.stop()
Here's the run method which launches the thread:
def run(self):
    print "-Run Started-"
    self._connection = self.connect()
    self._connection.ioloop.start()
    print "-Run Finished-"
Here's the main bit of main():
client = TestClient()
client.start()
client.join()
LOGGER.info('Returned.')
time.sleep(30)
My problem is that the "self._connection.close()" won't work properly. I added an on_close callback:
self._connection.add_on_close_callback(self.on_connection_closed)
But on_connection_closed() is never called. Also, the connection is NOT closed. I can see it in the RabbitMQ management web interface, and it remains even after the thread finishes. Here's the output:
-Run Started-
2015-01-28 14:39:28,431: OPEN connection...
2015-01-28 14:39:28,491: Queue bound
(...[snipped] various other messages here...)
2015-01-28 14:39:28,491: Issuing consumer related RPC commands
2015-01-28 14:39:28,491: Adding consumer cancellation callback
(Pause here waiting for timeout callback)
2015-01-28 14:39:33,505: ---TICK---
2015-01-28 14:39:33,505: Stopping...
2015-01-28 14:39:33,505: Closing connection (200): Normal shutdown
2015-01-28 14:39:33,505: Stopped
-Run Finished-
2015-01-28 14:39:39,507: Returned.
"Closing connection (200): Normal shutdown" comes from Pika, but none of my on_close or on_cancel callbacks are called, whether I start by closing the channel, or just close the connection. The only thing that DOES work is stopping the consumer with "basic_cancel", which causes my "on_cancel_callback" to be called.
I want to use a loop in the main program to create and destroy consumer threads, but at the moment, every time I run one I end up with an orphaned connection left over so my number of connections goes up indefinitely. The connections DO disappear when the program closes.
Using connection.close() should work. From the Pika docs:
close(reply_code=200, reply_text='Normal shutdown')
Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel.
If you're sharing the connection between your threads, this can cause problems. Pika is not thread-safe and a connection shouldn't be used by different threads.
First bit of the FAQ:
Q: Is Pika thread safe?
A: Pika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads.
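A minimal sketch of the one-connection-per-thread pattern, using pika's BlockingConnection for brevity instead of the SelectConnection from the question (queue name, URL and thread count are placeholders):

import threading
import pika

def consume(queue_name):
    # Each thread creates, uses and closes its *own* connection and channel.
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue=queue_name)
    for method, properties, body in channel.consume(queue_name, inactivity_timeout=5):
        if body is None:
            break                     # nothing received for 5s; stop consuming
        print(body)
        channel.basic_ack(method.delivery_tag)
    channel.cancel()
    connection.close()                # closing works here: the owning thread does it

threads = [threading.Thread(target=consume, args=('test_queue',)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()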
I'm writing a chat server in Python for an assignment. However, I am having an issue shutting down the server. Here's what's happening:
When a client connects, I spawn two threads: readThread and writeThread. readThread is responsible for reading data from the client and printing it to stdout, and writeThread is responsible for reading a message from stdin and sending it to the client.
When the client sends 'EXIT', I want to shutdown the server. My writeThread runs in a loop like this:
def write(self):
    while self.dowrite:
        data = sys.stdin.readline().strip()
        self.conn.send(data)
    print 'WriteThread loop ended'
Now, when I receive EXIT, I set dowrite to false, but, of course, that doesn't break the while loop because of the blocking call sys.stdin.readline().strip().
So, what happens is: to disconnect, the client needs to send EXIT, and then I need to hit return on the console. Is there any way I can work around this, so that when the client sends the exit message, I immediately break out of the while loop in write()?
EDIT
How it comes together:
The main thread spawns two threads : read and write, and then waits (joins) for read to finish. Read finishes when it reads EXIT. As soon as the read thread ends, the main thread continues and sets dowrite to false in the write thread, which should end the write loop, but that can only happen once the while loop iterates one more time.
Make the child threads daemons:
t.daemon = True
Daemon threads will be stopped automatically when your program exits.
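For example (a rough sketch, where server stands in for the object that owns the read/write methods from the question):

import threading

write_thread = threading.Thread(target=server.write)
write_thread.daemon = True    # killed automatically once only daemon threads remain
write_thread.start()

read_thread = threading.Thread(target=server.read)
read_thread.start()
read_thread.join()            # main thread waits for EXIT; when it returns, the
                              # program ends and the blocked write thread dies with it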
You could use an event to notify other threads about it. In the thread where the event occurred:
event.set() # event occurred
In other threads:
event.wait() # wait for event
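A small self-contained example of that pattern (the worker function and the 1-second timeout are just illustrative):

import threading

stop_event = threading.Event()

def worker():
    # wait() blocks until another thread calls set(), or the timeout expires;
    # it returns True once the event is set, so the loop then exits.
    while not stop_event.wait(timeout=1.0):
        print('still running...')
    print('stop requested, exiting')

t = threading.Thread(target=worker)
t.start()
# ... later, from the thread that detects EXIT:
stop_event.set()
t.join()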