Shut down a SimpleXMLRPCServer server in Python

Currently I am writing an application using the SimpleXMLRPCServer module in Python.
The basic aim of this application is to keep running on a server and keep checking a Queue for tasks. If it encounters a new request in the Queue, it serves that request.
A snapshot of what I am trying to do:
import logging
import socket
import Queue
import SimpleXMLRPCServer

class MyClass():
    """
    This class will have methods which will be exposed to the clients
    """
    def __init__(self):
        self.taskQ = Queue.Queue()

    def do_some_task(self):
        while True:
            logging.info("Checking the Queue for any Tasks..")
            task = self.taskQ.get()
            # Do some processing based on the availability of some task

# Main
if __name__ == "__main__":
    server = SimpleXMLRPCServer.SimpleXMLRPCServer((socket.gethostname(), Port))
    classObj = MyClass()
    server.register_function(classObj.do_some_task)
    server.serve_forever()
Once the server is started, it stays in the loop inside the do_some_task method forever, checking the Queue for tasks. This is what I wanted to achieve. But now I want to shut the server down gracefully, and I am unable to do so.
So far I have tried setting a global flag STOP_SERVER to True and checking it in the do_some_task while loop to break out of it and stop the server, but that did not help.
I also tried the shutdown() method of SimpleXMLRPCServer, but it seems to get stuck in an infinite loop of some kind.
Could you suggest a proper way to gracefully shut down the server?
Thanks in advance

You should use handle_request() instead of serve_forever() if you want to close the server manually, because SimpleXMLRPCServer is single-threaded and serve_forever() puts the server instance into an infinite loop.
You can refer to this article. This is an example cited from there:
from SimpleXMLRPCServer import *

class MyServer(SimpleXMLRPCServer):
    def serve_forever(self):
        self.quit = 0
        while not self.quit:
            self.handle_request()

def kill():
    server.quit = 1
    return 1

server = MyServer(('127.0.0.1', 8000))
server.register_function(kill)
server.serve_forever()
By using handle_request(), this code uses a state variable, self.quit, to indicate whether to leave the loop.
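For completeness, a client could then stop the server remotely. A minimal sketch, assuming the example server above is listening on 127.0.0.1:8000:

import xmlrpclib  # Python 2, matching the server example above

proxy = xmlrpclib.ServerProxy("http://127.0.0.1:8000")
proxy.kill()  # sets server.quit = 1, so serve_forever() returns after this request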

The serve_forever function is inherited from a base class in the socketserver module called BaseServer. If you look at this function you'll see it has an attribute called __shutdown_request, which can be used to break out of the serving while loop. Because of the double underscore you'll have to access the variable through its mangled name: _BaseServer__shutdown_request.
Putting that all together you can make a very simple quit function as follows:
from xmlrpc.server import SimpleXMLRPCServer

class MyXMLRPCServer(SimpleXMLRPCServer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_function(self.quit)

    def quit(self):
        self._BaseServer__shutdown_request = True
        return 0
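A minimal usage sketch building on the class above; the host, port and the extra registered function are illustrative, not part of the answer:

def ping():
    return "pong"

server = MyXMLRPCServer(("127.0.0.1", 8000))
server.register_function(ping)  # register normal functions as usual
server.serve_forever()          # returns once a client calls quit()

# From a client in a separate process, shutting the server down would look like:
#   import xmlrpc.client
#   xmlrpc.client.ServerProxy("http://127.0.0.1:8000").quit()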

Related

Pass data asynchronously to a Python Server class

I have to pass data from my test cases to a mock server.
What is the best way to do that?
This is what I have so far:
mock_server.py
import threading
import SocketServer

class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
    pass

class ThreadedUDPRequestHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)

    def handle(self):
        print self.server.data  # this is where I need the data

class server_wrap:
    def __init__(self):
        self.server = ThreadedUDPServer(("127.0.0.1", 49555), ThreadedUDPRequestHandler)

    def set_data(self, data):
        self.server.data = data

    def start(self):
        server_thread = threading.Thread(target=self.server.serve_forever())

    def stop(self):
        self.server.shutdown()
test_mock.py
server_inst = server_wrap()
server_inst.start()
#code which sets the data and expects the handle method to print the data set
server_inst.stop()
The problem I have with this code is that execution stops at server_inst.start(), where the server goes into an infinite listening mode.
Other solutions that I have tried, but which failed:
Using global variables
Using queues
Starting mock_server.py with its own main
Let me know about any other possible solutions. Thanks in advance.
Update 1:
Using separate threads to send data to the socket:
Changes
test_mock.py
def test_set_data(data):
    server_inst = server_wrap()
    server_inst.set_data(data)
    server_inst.start()

if __name__ == "__main__":
    thread = Thread(target=test_set_data, args=("foo_data",))
    thread.setDaemon(True)
    thread.start()
    # test code which verifies that the data set is the same
    # works so far, able to pass data

    # problem starts now
    thread = Thread(target=test_set_data, args=("bar_data",))
    thread.setDaemon(True)
    thread.start()
    # says address already in use error
    # Tried calling server.shutdown() in handle, but the error persists.
    # Also there is no thread.stop in the threading.Thread object
Thanks
The server should go into listening mode.
You don't need server_inst.stop() until all the data has been sent and the test finishes, perhaps in your test teardown or when the test suite is completed.
To send data to the server and let handle() pick it up, you should open a socket on another thread, then send the data to the server via that socket.
The code should look something like this:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("127.0.0.1", 49555))
sock.send(... the data ...)
received = sock.recv(1024)  # the handler can send a response
sock.close()
Add a function to your Django code that runs on another thread. This function will open the socket, connect, send the data and get the response. You can call it from a view, a middleware, etc.
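To illustrate that last point, here is a hedged sketch of such a helper; the function name and payload are made up, and a datagram socket is used because the mock server above is a UDP server:

import socket
import threading

def send_to_mock_server(data):
    # Open a UDP socket, push the data to the mock server and wait briefly for a reply.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)  # don't block forever if the handler never replies
    sock.connect(("127.0.0.1", 49555))
    sock.send(data)
    try:
        received = sock.recv(1024)  # only if the handler sends a response
    except socket.timeout:
        received = None
    sock.close()
    return received

# Run it on another thread so the test (or view) is not blocked.
sender = threading.Thread(target=send_to_mock_server, args=("foo_data",))
sender.start()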

proper threading in python

I am writing home automation helpers - they are basically small daemon-like Python applications. Each could run as a separate process, but since there will be many of them, I decided to put up a small dispatcher that spawns each of the daemons in its own thread and can act should a thread die in the future.
This is what it looks like (working with two classes):
from daemons import mosquitto_daemon, gtalk_daemon
from threading import Thread
print('Starting daemons')
mq_client = mosquitto_daemon.Client()
gt_client = gtalk_daemon.Client()
print('Starting MQ')
mq = Thread(target=mq_client.run)
mq.start()
print('Starting GT')
gt = Thread(target=gt_client.run)
gt.start()
while mq.isAlive() and gt.isAlive():
    pass
print('something died')
The problem is that the MQ daemon (mosquitto) works fine if I run it directly:
mq_client = mosquitto_daemon.Client()
mq_client.run()
It will start and sit there listening to all the messages that hit the relevant topics - exactly what I'm looking for.
However, running it within the dispatcher makes it act weirdly - it receives a single message and then stops responding, yet the thread is reported to be alive. Given that it works fine without the threading voodoo, I'm assuming I'm doing something wrong in the dispatcher.
I'm quoting the MQ client code just in case:
import mosquitto
import config
import sys
import logging

class Client():
    mc = None

    def __init__(self):
        logging.basicConfig(format=u'%(filename)s:%(lineno)d %(levelname)-8s [%(asctime)s] %(message)s', level=logging.DEBUG)
        logging.debug('Class initialization...')
        if not Client.mc:
            logging.info('Creating an instance of MQ client...')
            try:
                Client.mc = mosquitto.Mosquitto(config.DEVICE_NAME)
                Client.mc.connect(host=config.MQ_BROKER_ADDRESS)
                logging.debug('Successfully created MQ client...')
                logging.debug('Subscribing to topics...')
                for topic in config.MQ_TOPICS:
                    result, some_number = Client.mc.subscribe(topic, 0)
                    if result == 0:
                        logging.debug('Subscription to topic "%s" successful' % topic)
                    else:
                        logging.error('Failed to subscribe to topic "%s": %s' % (topic, result))
                logging.debug('Setting up callbacks...')
                self.mc.on_message = self.on_message
                logging.info('Finished initialization')
            except Exception as e:
                logging.critical('Failed to complete creating MQ client: %s' % e.message)
                self.mc = None
        else:
            logging.critical('Instance of MQ Client exists - passing...')
            sys.exit(1)

    def run(self):
        self.mc.loop_forever()

    def on_message(self, mosq, obj, msg):
        print('message!!111')
        logging.info('Message received on topic %s: %s' % (msg.topic, msg.payload))
You are passing Thread another class instance's run method... It doesn't really know what to do with it.
threading.Thread can be used in two general ways: spawn a Thread-wrapped independent function, or use it as a base class for a class with a run method.
In your case the base class route appears to be the way to go, since your Client class already has a run method.
Replace the following in your MQ class and it should work:
from threading import Thread

class Client(Thread):
    mc = None

    def __init__(self):
        Thread.__init__(self)  # initialize the Thread instance
        ...

    ...

    def stop(self):
        # some sort of command to stop mc
        self.mc.stop()  # not sure what the actual command is, if one exists at all...
Then when calling it, do it without Thread:
mq_client = mosquitto_daemon.Client()
mq_client.start()
print 'Print this line to be sure we get here after starting the thread loop...'
Several things to consider:
zeromq hates being initialized in one thread and run in another. You can rewrite Client() to be a Thread as suggested, or write your own function that creates a Client and run that function in a thread.
Client() has a class-level variable mc. I assume that mosquitto_daemon and gtalk_daemon both use the same Client, so they are in contention over which Client.mc wins.
"while mq.isAlive() and gt.isAlive(): pass" will eat an entire processor because it just keeps polling over and over without sleeping; see the sketch below for a gentler loop. Considering that Python is only quasi-threaded (the Global Interpreter Lock (GIL) allows only one thread to run at a time), this will stall out your "daemons".
Also considering the GIL, the original daemon implementation is likely to perform better.
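A sketch of the gentler dispatcher loop mentioned above; it keeps the original mq and gt thread objects and simply avoids spinning:

import time

# Check the daemon threads once a second instead of busy-waiting.
while mq.isAlive() and gt.isAlive():
    time.sleep(1)

print('something died')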

requestLoop(loopCondition) doesn't return even after loopCondition is False

I have some issues with the requestLoop method of the Pyro4.Daemon object.
What I want is to remotely call a stop() method to release the requestLoop function and shut down my daemon.
This small example doesn't work:
SERVER
#!/usr/bin/python
# -*- coding: utf-8 -*-
from daemon import Pyro4

class Audit(object):
    def start_audit(self):
        with Pyro4.Daemon() as daemon:
            self_uri = daemon.register(self)
            ns = Pyro4.locateNS()
            ns.register("Audit", self_uri)
            self.running = True
            print("starting")
            daemon.requestLoop(loopCondition=self.still_running)
            print("stopped")
            self.running = None

    def hi(self, string):
        print(string)

    def stop(self):
        self.running = False

    def still_running(self):
        return self.running

def main():
    # start the auditor
    auditor = Audit()
    auditor.start_audit()

if __name__ == "__main__":
    main()
CLIENT
import Pyro4

def main():
    with Pyro4.Proxy("PYRONAME:Audit") as au:
        au.hi("hello")
        au.hi("another hi")
        au.stop()
What I expect is to see the server print "hello" and "another hi" and then shut down.
But the shutdown doesn't happen; the server is still blocked in the requestLoop method.
I can use my proxy as long as I want.
BUT, if I create another client, then at the first remote call the server will shut down and the client will throw an error:
Pyro4.errors.ConnectionClosedError: receiving: not enough data
All my tests suggest that I need to create a second proxy and trigger that exception to get past the requestLoop on my server.
Does anyone have an idea of how to clean up this issue?
If you look at the examples/callback/client.py in the sources you'll see the following comment:
# We need to set either a socket communication timeout,
# or use the select based server. Otherwise the daemon requestLoop
# will block indefinitely and is never able to evaluate the loopCondition.
Pyro4.config.COMMTIMEOUT=0.5
Hence, what you need to do is set COMMTIMEOUT in your server file, and it will work fine according to my tests.
Note: you can also add a print statement to the still_running method to check when it's being called. Without the configuration above, you'll see that the method is only evaluated when a new request is received, so the server doesn't shut down until the request after the one that set running to False arrives. For example, if you execute the client program twice, the server will shut down.
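Applied to the server code in the question, the fix is a one-liner placed before the daemon is created; 0.5 seconds is just the value used in the Pyro4 example:

import Pyro4

# Set once, at module level, before Pyro4.Daemon() is constructed in start_audit();
# requestLoop will then wake up every 0.5 s and re-evaluate loopCondition.
Pyro4.config.COMMTIMEOUT = 0.5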

Integrating a simple web server into a custom main loop in python?

I have an application in python with a custom main loop (I don't believe the details are important). I'd like to integrate a simple non-blocking web server into the application which can introspect the application objects and possibly provide an interface to manipulate them. What's the best way to do this?
I'd like to avoid anything that uses threading. The ideal solution would be a server with a "stepping" function that can be called from my main loop, do its thing, then return program control until the next go-round.
The higher-level the solution, the better (though something as monolithic as Django might be overkill).
Ideally, a solution will look like this:
def main():
    """My main loop."""
    http_server = SomeCoolHttpServer(port=8888)
    while True:
        # Do my stuff here...
        # ...
        http_server.next()  # Server gets its turn.
        # Do more of my stuff here...
        # ...
Twisted is designed to make stuff like that fairly simple
import time
from twisted.web import server, resource
from twisted.internet import reactor

class Simple(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return "<html>%s Iterations!</html>" % n

def main():
    global n
    site = server.Site(Simple())
    reactor.listenTCP(8080, site)
    reactor.startRunning(False)
    n = 0
    while True:
        n += 1
        if n % 1000 == 0:
            print n
        time.sleep(0.001)
        reactor.iterate()

if __name__ == "__main__":
    main()
I'd suggest creating a new thread and running a web server (such as Python's built-in SimpleHTTPServer or BaseHTTPServer). Threads really aren't that scary when it comes down to it.
from threading import Event, Thread
import BaseHTTPServer

shut_down = Event()

def http_server():
    server_address = ('', 8000)
    httpd = BaseHTTPServer.HTTPServer(server_address, BaseHTTPServer.BaseHTTPRequestHandler)
    while not shut_down.is_set():
        httpd.handle_request()

thread = Thread(target=http_server)
thread.start()
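One caveat with this loop: handle_request() blocks until a request arrives, so the shut_down event is only re-checked after each request. A sketch of a variation that sets a timeout on the server so the flag is polled even when the server is idle:

def http_server():
    server_address = ('', 8000)
    httpd = BaseHTTPServer.HTTPServer(server_address,
                                      BaseHTTPServer.BaseHTTPRequestHandler)
    httpd.timeout = 0.5  # handle_request() returns after 0.5 s of inactivity
    while not shut_down.is_set():
        httpd.handle_request()

# To stop the server thread later:
#   shut_down.set()
#   thread.join()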

Running SimpleXMLRPCServer in separate thread and shutting down

I have a class that I wish to test via SimpleXMLRPCServer in Python. The way I have my unit test set up is that I create a new thread and start SimpleXMLRPCServer in it. Then I run all the tests, and finally shut it down.
This is my ServerThread:
class ServerThread(Thread):
    running = True

    def run(self):
        self.server = #Creates and starts SimpleXMLRPCServer
        while self.running:
            self.server.handle_request()

    def stop(self):
        self.running = False
        self.server.server_close()
The problem is that calling ServerThread.stop(), followed by Thread.stop() and Thread.join(), will not cause the thread to stop properly if it's already waiting for a request in handle_request. And since there don't seem to be any interrupt or timeout mechanisms here that I can use, I am at a loss as to how I can cleanly shut down the server thread.
I had the same problem, and after hours of research I solved it by switching from my own handle_request() loop to serve_forever() to start the server.
serve_forever() starts an internal loop like yours. This loop can be stopped by calling shutdown(). After stopping the loop it is possible to stop the server with server_close().
I don't know exactly why this works when the handle_request() loop doesn't, but it does ;P (Most likely it is because handle_request() blocks until a request arrives, so the running flag is only re-checked after one more request, whereas serve_forever() polls its socket with a short timeout and checks an internal shutdown flag on every iteration.)
Here is my code:
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer
from pyWebService.server.service.WebServiceRequestHandler import WebServiceRquestHandler

class WebServiceServer(Thread):
    def __init__(self, ip, port):
        super(WebServiceServer, self).__init__()
        self.running = True
        self.server = SimpleXMLRPCServer((ip, port), requestHandler=WebServiceRquestHandler)
        self.server.register_introspection_functions()

    def register_function(self, function):
        self.server.register_function(function)

    def run(self):
        self.server.serve_forever()

    def stop_server(self):
        self.server.shutdown()
        self.server.server_close()

print("starting server")
webService = WebServiceServer("localhost", 8010)
webService.start()
print("stopping server")
webService.stop_server()
webService.join()
print("server stopped")
Two suggestions.
Suggestion One is to use a separate process instead of a separate thread.
Create a stand-alone XMLRPC server program.
Start it with subprocess.Popen().
Kill it when the test is done; a rough sketch follows below. On standard OSes (not Windows) the kill works nicely. On Windows, however, there's no trivial kill function, but there are recipes for this.
The other suggestion is to have a function in your XMLRPC server that causes server self-destruction: define a function that calls sys.exit() or os.abort(), or raises a similar exception that will stop the process.
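A rough sketch of the first suggestion, assuming a hypothetical stand-alone server script named rpc_server.py:

import subprocess

# Start the hypothetical stand-alone XMLRPC server script as its own process.
proc = subprocess.Popen(["python", "rpc_server.py"])

# ... run the unit tests against the server here ...

# Tear the server down once the tests are finished.
proc.terminate()
proc.wait()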
This is my way: send SIGTERM to self. (Works for me.)
Server code
import os
import signal
import xmlrpc.server
server = xmlrpc.server.SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(lambda: os.kill(os.getpid(), signal.SIGTERM), 'quit')
server.serve_forever()
Client code
import xmlrpc.client

c = xmlrpc.client.ServerProxy("http://localhost:8000")
try:
    c.quit()
except ConnectionRefusedError:
    pass
