I am writing home automation helpers - they are basically small daemon-like Python applications. Each of them can run as a separate process, but since there will be many of them I decided to put up a small dispatcher that spawns each daemon in its own thread and can act should a thread die in the future.
This is what it looks like (working with two classes):
from daemons import mosquitto_daemon, gtalk_daemon
from threading import Thread
print('Starting daemons')
mq_client = mosquitto_daemon.Client()
gt_client = gtalk_daemon.Client()
print('Starting MQ')
mq = Thread(target=mq_client.run)
mq.start()
print('Starting GT')
gt = Thread(target=gt_client.run)
gt.start()
while mq.isAlive() and gt.isAlive():
    pass
print('something died')
The problem is that the MQ daemon (mosquitto) works fine if I run it directly:
mq_client = mosquitto_daemon.Client()
mq_client.run()
It will start and hang in there listening to all the messages that hit relevant topics - exactly what I'm looking for.
However, running it within the dispatcher makes it act weirdly - it receives a single message and then stops reacting, yet the thread is reported to be alive. Given that it works fine without the threading voodoo, I'm assuming I'm doing something wrong in the dispatcher.
I'm quoting the MQ client code just in case:
import mosquitto
import config
import sys
import logging
class Client():
    mc = None

    def __init__(self):
        logging.basicConfig(format=u'%(filename)s:%(lineno)d %(levelname)-8s [%(asctime)s] %(message)s', level=logging.DEBUG)
        logging.debug('Class initialization...')

        if not Client.mc:
            logging.info('Creating an instance of MQ client...')
            try:
                Client.mc = mosquitto.Mosquitto(config.DEVICE_NAME)
                Client.mc.connect(host=config.MQ_BROKER_ADDRESS)
                logging.debug('Successfully created MQ client...')

                logging.debug('Subscribing to topics...')
                for topic in config.MQ_TOPICS:
                    result, some_number = Client.mc.subscribe(topic, 0)
                    if result == 0:
                        logging.debug('Subscription to topic "%s" successful' % topic)
                    else:
                        logging.error('Failed to subscribe to topic "%s": %s' % (topic, result))

                logging.debug('Setting up callbacks...')
                self.mc.on_message = self.on_message

                logging.info('Finished initialization')
            except Exception as e:
                logging.critical('Failed to complete creating MQ client: %s' % e)
                self.mc = None
        else:
            logging.critical('Instance of MQ Client exists - passing...')
            sys.exit(1)

    def run(self):
        self.mc.loop_forever()

    def on_message(self, mosq, obj, msg):
        print('message!!111')
        logging.info('Message received on topic %s: %s' % (msg.topic, msg.payload))
You are passing Thread another class instance's run method... It doesn't really know what to do with it.
threading.Thread can be used in two general ways: either spawn a Thread that wraps an independent function, or use it as a base class for a class that has a run method.
In your case the base-class approach appears to be the way to go, since your Client class already has a run method.
Replace the following in your MQ class and it should work:
from threading import Thread
class Client(Thread):
    mc = None

    def __init__(self):
        Thread.__init__(self)  # initialize the Thread instance
        ...
        ...

    def stop(self):
        # some sort of command to stop mc
        self.mc.stop()  # not sure what the actual command is, if one exists at all...
Then when calling it, do it without Thread:
mq_client = mosquitto_daemon.Client()
mq_client.start()
print 'Print this line to be sure we get here after starting the thread loop...'
Several things to consider:
zeromq hates being initialized in one thread and run in another. You can rewrite Client() to be a Thread as suggested, or write your own function that creates a Client and run that function in a thread (see the sketch after this list).
Client() has a class-level variable mc. I assume that mosquitto_daemon and gtalk_daemon both use the same Client, so they are in contention for whose Client.mc wins.
"while mq.isAlive() and gt.isAlive(): pass" will eat an entire processor because it just keeps polling over and over without sleeping. Considering that Python is only quasi-threaded (the Global Interpreter Lock (GIL) allows only one thread to run at a time), this will stall out your "daemons".
Also considering the GIL, the original daemon implementation is likely to perform better.
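To illustrate the function-in-a-thread idea and the busy-wait point, here is a minimal sketch (not the asker's actual code; it assumes the daemon modules expose a Client class with a run() method, as in the question): each client is created and run inside its worker thread, and the monitoring loop sleeps instead of spinning.

from threading import Thread
import time

from daemons import mosquitto_daemon, gtalk_daemon


def run_daemon(daemon_module):
    # Create the client inside the worker thread so the underlying library
    # is initialized and run in the same thread.
    client = daemon_module.Client()
    client.run()


threads = [
    Thread(target=run_daemon, args=(mosquitto_daemon,)),
    Thread(target=run_daemon, args=(gtalk_daemon,)),
]
for t in threads:
    t.start()

# Poll gently instead of busy-waiting so the worker threads get the CPU.
while all(t.is_alive() for t in threads):
    time.sleep(1)

print('something died')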
Currently I am writing an application using the SimpleXMLRPCServer module in Python.
The basic aim of this application is to keep running on a server and keep checking a Queue for any task. If it encounters any new request in the Queue, serve the request.
A snapshot of what I am trying to do:
class MyClass():
    """
    This class will have methods which will be exposed to the clients
    """
    def __init__(self):
        self.taskQ = Queue.Queue()

    def do_some_task(self):
        while True:
            logging.info("Checking the Queue for any Tasks..")
            task = self.taskQ.get()
            # Do some processing based on the availability of some task
Main
if __name__ == "__main__":
    server = SimpleXMLRPCServer.SimpleXMLRPCServer((socket.gethostname(), Port))
    classObj = MyClass()
    server.register_function(classObj.do_some_task)
    server.serve_forever()
Once the server is started it remains forever in the loop inside the do_some_task method, checking the Queue for tasks. This is what I wanted to achieve. But now I want to gracefully shut down the server, and I am unable to do so.
So far I have tried using a global flag STOP_SERVER, setting it to True and checking its status in the do_some_task while loop to break out of it and stop the server, but with no luck.
I also tried the shutdown() method of SimpleXMLRPCServer, but it seems to get into an infinite loop of some kind.
Could you suggest a proper way to gracefully shut down the server?
Thanks in advance
You should use handle_request() instead of serve_forever() if you want to close the server manually, because SimpleXMLRPCServer is single-threaded and serve_forever() will make the server instance run in an infinite loop.
You can refer to this article. This is an example cited from there:
from SimpleXMLRPCServer import *
class MyServer(SimpleXMLRPCServer):
    def serve_forever(self):
        self.quit = 0
        while not self.quit:
            self.handle_request()

def kill():
    server.quit = 1
    return 1

server = MyServer(('127.0.0.1', 8000))
server.register_function(kill)
server.serve_forever()
By using handle_request(), this code uses a state variable, self.quit, to indicate whether to leave the infinite loop.
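For completeness, a rough client-side sketch of how the registered kill() function would be triggered (the address and the use of Python 2's xmlrpclib are assumptions chosen to match the example above):

import xmlrpclib

proxy = xmlrpclib.ServerProxy('http://127.0.0.1:8000')
proxy.kill()  # sets server.quit = 1; the serving loop exits after handling this request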
The serve_forever function is inherited from a base class in the socketserver module called BaseServer. If you look at this function you'll see it checks an attribute called __shutdown_request, which can be used to break the serving while loop. Because of the double-underscore name mangling you'll have to access the variable by its mangled name: _BaseServer__shutdown_request.
Putting that all together you can make a very simple quit function as follows:
from xmlrpc.server import SimpleXMLRPCServer
class MyXMLRPCServer(SimpleXMLRPCServer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_function(self.quit)

    def quit(self):
        self._BaseServer__shutdown_request = True
        return 0
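A rough usage sketch (the address, port and the surrounding driver code are assumptions for illustration, not part of the original answer):

server = MyXMLRPCServer(('127.0.0.1', 8000))
server.serve_forever()   # returns once a client calls quit() and that request finishes
server.server_close()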
I'm currently working to add support for gevent-socketio to an existing Django project. I'm finding that the gevent.monkey.patch_all() call breaks the cancellation mechanism of a thread which is responsible for receiving data from a socket; we'll call the class SocketReadThread for now.
SocketReadThread is pretty simple: it calls recv() on a blocking socket. When it receives data it processes it and calls recv() again. The thread stops when an exception occurs or when recv() returns 0 bytes, as happens when socket.shutdown(SHUT_RDWR) is called in SocketReadThread.stop_reading().
The problem occurs when the gevent.monkey.patch_all() replaces the default socket implementation. Instead of shutting down nicely I get the following exception:
error: [Errno 9] File descriptor was closed in another greenlet
I'm assuming this is occurring because gevent makes my socket non-blocking in order to work its magic. This means that when I call socket.shutdown(socket.SHUT_RDWR) the greenlet that was doing the work for the monkey patched socket.recv call tried to read from the closed file descriptor.
I coded an example to isolate this issue:
from gevent import monkey
monkey.patch_all()
import socket
import sys
import threading
import time
class SocketReadThread(threading.Thread):
    def __init__(self, socket):
        super(SocketReadThread, self).__init__()
        self._socket = socket

    def run(self):
        connected = True
        while connected:
            try:
                print "calling socket.recv"
                data = self._socket.recv(1024)
                if (len(data) < 1):
                    print "received nothing, assuming socket shutdown"
                    connected = False
                else:
                    print "Received something: {}".format(data)
            except socket.timeout as e:
                print "Socket timeout: {}".format(e)
                connected = False
            except:
                ex = sys.exc_info()[1]
                print "Unexpected exception occurred: {}".format(str(ex))
                raise ex

    def stop_reading(self):
        self._socket.shutdown(socket.SHUT_RDWR)
        self._socket.close()
if __name__ == '__main__':
    sock = socket.socket()
    sock.connect(('127.0.0.1', 4242))
    st = SocketReadThread(sock)
    st.start()
    time.sleep(3)
    st.stop_reading()
    st.join()
If you open a terminal and run nc -lp 4242 & (to give this program something to connect to) and then run this program, you will see the exception mentioned above. If you remove the call to monkey.patch_all() you will see that it works just fine.
My question is: how can I support cancellation of the SocketReadThread in a way that works with or without gevent monkey patching and doesn't require an arbitrary timeout that would make cancellation slow (i.e. calling recv() with a timeout and checking a condition)?
I found that there were two different workarounds for this. The first was to simply catch and suppress the exception (see the sketch below). This appears to work fine, since it is common practice for one thread to close a socket in order to cause another thread to exit from a blocking read. I don't know why greenlets would complain about this other than as a debugging aid; it is really just an annoyance.
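As a minimal sketch of that first workaround, this is roughly what the run() method of the SocketReadThread above would look like; it assumes the error surfaces as a socket.error inside recv(), which is what the traceback above suggests:

    def run(self):
        connected = True
        while connected:
            try:
                data = self._socket.recv(1024)
                if len(data) < 1:
                    # 0 bytes: the socket was shut down cleanly
                    connected = False
            except socket.error:
                # Raised when stop_reading() shuts the socket down from another
                # thread/greenlet; treat it as a normal request to stop reading.
                connected = False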
The second option was to use the self-pipe trick (a quick search yields many explanations) as a mechanism to wake up a blocked thread. Essentially we create a second file descriptor (a socket is a kind of file descriptor to the OS) for signaling cancellation. We then use select as our blocking call, waiting for either incoming data on the socket or a cancellation request arriving on the cancellation file descriptor. See the example code below.
from gevent import monkey
monkey.patch_all()
import os
import select
import socket
import sys
import threading
import time
class SocketReadThread(threading.Thread):
    def __init__(self, socket):
        super(SocketReadThread, self).__init__()
        self._socket = socket
        self._socket.setblocking(0)
        r, w = os.pipe()
        self._cancelpipe_r = os.fdopen(r, 'r')
        self._cancelpipe_w = os.fdopen(w, 'w')

    def run(self):
        connected = True
        read_fds = [self._socket, self._cancelpipe_r]
        while connected:
            print "Calling select"
            read_list, write_list, x_list = select.select(read_fds, [], [])
            print "Select returned"
            if self._cancelpipe_r in read_list:
                print "exiting"
                self._cleanup()
                connected = False
            elif self._socket in read_list:
                print "calling socket.recv"
                data = self._socket.recv(1024)
                if (len(data) < 1):
                    print "received nothing, assuming socket shutdown"
                    connected = False
                    self._cleanup()
                else:
                    print "Received something: {}".format(data)

    def stop_reading(self):
        print "writing to pipe"
        self._cancelpipe_w.write("\n")
        self._cancelpipe_w.flush()
        print "joining"
        self.join()
        print "joined"

    def _cleanup(self):
        self._cancelpipe_r.close()
        self._cancelpipe_w.close()
        self._socket.shutdown(socket.SHUT_RDWR)
        self._socket.close()

if __name__ == '__main__':
    sock = socket.socket()
    sock.connect(('127.0.0.1', 4242))
    st = SocketReadThread(sock)
    st.start()
    time.sleep(3)
    st.stop_reading()
Again, before running the above program run netcat -lp 4242 & to give it a listening socket to connect to.
Let's consider this code in Python:
import socket
import threading
import sys
import select
class UDPServer:
    def __init__(self):
        self.s=None
        self.t=None

    def start(self,port=8888):
        if not self.s:
            self.s=socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.s.bind(("",port))
            self.t=threading.Thread(target=self.run)
            self.t.start()

    def stop(self):
        if self.s:
            self.s.close()
            self.t.join()
            self.t=None

    def run(self):
        while True:
            try:
                #receive data
                data,addr=self.s.recvfrom(1024)
                self.onPacket(addr,data)
            except:
                break
        self.s=None

    def onPacket(self,addr,data):
        print addr,data
us=UDPServer()

while True:
    sys.stdout.write("UDP server> ")
    cmd=sys.stdin.readline()
    if cmd=="start\n":
        print "starting server..."
        us.start(8888)
        print "done"
    elif cmd=="stop\n":
        print "stopping server..."
        us.stop()
        print "done"
    elif cmd=="quit\n":
        print "Quitting ..."
        us.stop()
        break

print "bye bye"
It runs an interactive shell with which I can start and stop a UDP server.
The server is implemented through a class which launches a thread; in that thread there is an infinite recvfrom/onPacket loop inside a try/except block, which should detect the error and then exit the loop.
What I expect is that when I type "stop" in the shell, the socket is closed and an exception is raised by the recvfrom function because of the invalidation of the file descriptor.
Instead, it seems that recvfrom still blocks the thread waiting for data even after the close call.
Why this strange behavior?
I've always used this pattern to implement a UDP server in C++ and Java and it has always worked.
I've also tried select, passing a list containing the socket as the exceptional-conditions argument, in order to get a file-descriptor-error event from select instead of from recvfrom, but select seems to be insensitive to the close too.
I need a single code base which maintains the same behavior on Linux and Windows with Python 2.5 - 2.6.
Thanks.
The usual solution is to have a pipe tell the worker thread when to die.
Create a pipe using os.pipe. This gives you a pipe with both the reading and writing ends in the same program. It returns raw file descriptors, which you can use as-is (os.read and os.write) or turn into Python file objects using os.fdopen.
The worker thread waits on both the network socket and the read end of the pipe using select.select. When the pipe becomes readable, the worker thread cleans up and exits. Don't read the data, ignore it: its arrival is the message.
When the master thread wants to kill the worker, it writes a byte (any value) to the write end of the pipe. The master thread then joins the worker thread, then closes the pipe (remember to close both ends).
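A minimal sketch of this applied to the UDPServer from the question (note that on Windows select.select() only accepts sockets, so there you would use a local socket pair instead of os.pipe(); that substitution is left out here):

import os, select, socket, threading

class UDPServer:
    def __init__(self):
        self.s = None
        self.t = None
        self.pipe_r, self.pipe_w = os.pipe()  # wakeup pipe (raw file descriptors)

    def start(self, port=8888):
        if not self.s:
            self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.s.bind(("", port))
            self.t = threading.Thread(target=self.run)
            self.t.start()

    def stop(self):
        if self.t:
            os.write(self.pipe_w, "x")  # wake up the worker thread
            self.t.join()
            self.t = None

    def run(self):
        while True:
            rlist, _, _ = select.select([self.s, self.pipe_r], [], [])
            if self.pipe_r in rlist:
                os.read(self.pipe_r, 1)  # drain the wakeup byte so restart works
                break
            data, addr = self.s.recvfrom(1024)
            self.onPacket(addr, data)
        self.s.close()
        self.s = None

    def onPacket(self, addr, data):
        print addr, data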
P.S. Closing an in-use socket is a bad idea in a multi-threaded program. The Linux close(2) manpage says:
It is probably unwise to close file descriptors while they may be in use by system calls in other threads in the same process. Since a file descriptor may be re-used, there are some obscure race conditions that may cause unintended side effects.
So it's lucky your first approach did not work!
This is not Java. Good hints:
Don't use threads. Use asynchronous IO.
Use a higher level networking framework
Here's an example using twisted:
from twisted.internet.protocol import DatagramProtocol
from twisted.internet import reactor, stdio
from twisted.protocols.basic import LineReceiver
class UDPLogger(DatagramProtocol):
    def datagramReceived(self, data, (host, port)):
        print "received %r from %s:%d" % (data, host, port)

class ConsoleCommands(LineReceiver):
    delimiter = '\n'
    prompt_string = 'myserver> '

    def connectionMade(self):
        self.sendLine('My Server Admin Console!')
        self.transport.write(self.prompt_string)

    def lineReceived(self, line):
        line = line.strip()
        if line:
            if line == 'quit':
                reactor.stop()
            elif line == 'start':
                reactor.listenUDP(8888, UDPLogger())
                self.sendLine('listening on udp 8888')
            else:
                self.sendLine('Unknown command: %r' % (line,))
        self.transport.write(self.prompt_string)

stdio.StandardIO(ConsoleCommands())
reactor.run()
Example session:
My Server Admin Console!
myserver> foo
Unknown command: 'foo'
myserver> start
listening on udp 8888
myserver> quit
I've written a very simple Python class which waits for connections on a socket. The intention is to stick this class into an existing app and asynchronously send data to connecting clients.
The problem is that when waiting on a socket.accept(), I cannot end my application by pressing ctrl-c. Neither can I detect when my class goes out of scope and notify it to end.
Ideally the application below should quit after the time.sleep(4) expires. As you can see below, I tried using select, but this also prevents the app from responding to ctrl-c. If I could detect that the variable 'a' has gone out of scope in the main method, I could set the quitting flag (and reduce the timeout on select to make it responsive).
Any ideas?
thanks
import sys
import socket
import threading
import time
import select
class Server( threading.Thread ):
    def __init__( self, i_port ):
        threading.Thread.__init__( self )
        self.quitting = False
        self.serversocket = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
        self.serversocket.bind( (socket.gethostname(), i_port ) )
        self.serversocket.listen(5)
        self.start()

    def run( self ):
        # Wait for connection
        while not self.quitting:
            rr,rw,err = select.select( [self.serversocket],[],[], 20 )
            if rr:
                (clientsocket, address) = self.serversocket.accept()
                clientsocket.close()

def main():
    a = Server( 6543 )
    time.sleep(4)

if __name__=='__main__':
    main()
Add self.setDaemon(True) to the __init__ before self.start().
(In Python 2.6 and later, self.daemon = True is preferred).
The key idea is explained here:
The entire Python program exits when no alive non-daemon threads are left.
So, you need to make "daemons" of those threads that should not keep the whole process alive just by being alive themselves. The main thread is always non-daemon, by the way.
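In other words, in the question's Server.__init__ the added line goes right after the Thread initialization; a minimal sketch (the rest of __init__ is unchanged and elided):

class Server( threading.Thread ):
    def __init__( self, i_port ):
        threading.Thread.__init__( self )
        self.setDaemon(True)  # mark the thread as a daemon before it starts
        # ... the rest of __init__ (socket setup, self.start()) is unchanged ...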
I don't recommend the setDaemon feature for normal shutdown. It's sloppy; instead of having a clean shutdown path for threads, it simply kills the thread with no chance for cleanup. It's good to set it, so your program doesn't get stuck if the main thread exits unexpectedly, but it's not a good normal shutdown path except for quick hacks.
import sys, os, socket, threading, time, select
class Server(threading.Thread):
    def __init__(self, i_port):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self.quitting = False
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket.bind((socket.gethostname(), i_port))
        self.serversocket.listen(5)
        self.start()

    def shutdown(self):
        if self.quitting:
            return
        self.quitting = True
        self.join()

    def run(self):
        # Wait for connection
        while not self.quitting:
            rr,rw,err = select.select([self.serversocket],[],[], 1)
            print rr
            if rr:
                (clientsocket, address) = self.serversocket.accept()
                clientsocket.close()

        print "shutting down"
        self.serversocket.close()

def main():
    a = Server(6543)
    try:
        time.sleep(4)
    finally:
        a.shutdown()

if __name__=='__main__':
    main()
Note that this will delay for up to a second after calling shutdown(), which is poor behavior. This is normally easy to fix: create a wakeup pipe that you can write to and include it in the select (os.pipe() returns raw file descriptors rather than file objects, which can be written to with os.write or wrapped into file objects with os.fdopen). I haven't dug deeper, since it's tangential to the question.
I have a class that I wish to test via SimpleXMLRPCServer in Python. The way I have my unit test set up is that I create a new thread and start SimpleXMLRPCServer in it. Then I run all the tests, and finally shut down.
This is my ServerThread:
class ServerThread(Thread):
    running = True

    def run(self):
        self.server = #Creates and starts SimpleXMLRPCServer
        while (self.running):
            self.server.handle_request()

    def stop(self):
        self.running = False
        self.server.server_close()
The problem is that calling ServerThread.stop(), followed by Thread.join(), will not cause the thread to stop properly if it's already waiting for a request in handle_request. And since there doesn't seem to be any interrupt or timeout mechanism here that I can use, I am at a loss as to how I can cleanly shut down the server thread.
I had the same problem, and after hours of research I solved it by switching from my own handle_request() loop to serve_forever() to start the server.
serve_forever() starts an internal loop like yours. This loop can be stopped by calling shutdown(). After stopping the loop it is possible to stop the server with server_close().
I don't know exactly why this works and the handle_request() loop doesn't, but it does ;P (The internal serve_forever() loop polls the listening socket with a timeout and checks a shutdown flag on every pass, which is what shutdown() sets, whereas a bare handle_request() loop blocks until the next request arrives, so a flag like self.running is only re-checked after another request comes in.)
Here is my code:
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer
from pyWebService.server.service.WebServiceRequestHandler import WebServiceRquestHandler
class WebServiceServer(Thread):
    def __init__(self, ip, port):
        super(WebServiceServer, self).__init__()
        self.running = True
        self.server = SimpleXMLRPCServer((ip, port), requestHandler=WebServiceRquestHandler)
        self.server.register_introspection_functions()

    def register_function(self, function):
        self.server.register_function(function)

    def run(self):
        self.server.serve_forever()

    def stop_server(self):
        self.server.shutdown()
        self.server.server_close()

print("starting server")
webService = WebServiceServer("localhost", 8010)
webService.start()

print("stopping server")
webService.stop_server()
webService.join()
print("server stopped")
Two suggestions.
Suggestion One is to use a separate process instead of a separate thread.
Create a stand-alone XMLRPC server program.
Start it with subprocess.Popen().
Kill it when the test is done. On standard OSes (not Windows) the kill works nicely. On Windows, however, there's no trivial kill function, but there are recipes for this (a minimal sketch of this approach follows below).
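A minimal sketch of the separate-process approach (the script name, port and startup delay are placeholders for illustration; Popen.terminate() exists since Python 2.6 and also works on Windows, which covers the caveat above):

import subprocess
import time

# Start a stand-alone XMLRPC server script in its own process.
# "my_xmlrpc_server.py" is a hypothetical script name.
server_proc = subprocess.Popen(["python", "my_xmlrpc_server.py"])
time.sleep(1)  # crude: give the server a moment to start listening

try:
    pass  # ... run the unit tests against the server here ...
finally:
    server_proc.terminate()  # kill the server process when the tests are done
    server_proc.wait()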
The other suggestion is to have a function in your XMLRPC server which causes server self-destruction. You define a function that calls sys.exit() or os.abort() or raises a similar exception that will stop the process.
This is my way: send SIGTERM to self. (Works for me.)
Server code
import os
import signal
import xmlrpc.server
server = xmlrpc.server.SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(lambda: os.kill(os.getpid(), signal.SIGTERM), 'quit')
server.serve_forever()
Client code
import xmlrpc.client
c = xmlrpc.client.ServerProxy("http://localhost:8000")
try:
    c.quit()
except ConnectionRefusedError:
    pass