How to run a blocking process on a background thread in Python

I have a unit test script that needs to test all the REST APIs. At the same time I also have an XMPP server that generates messages.
I need to run an instance of the XMPP client inside my unit test to receive those messages. But the problem is that the XMPP client is a blocking process.
---> self.process(block=True)
This results in the unit test stalling.
Is there any way I can run this XMPP client on a background thread, keep receiving messages there, and run the unit test on the main thread? If yes, could I have a code snippet I could implement?
Thanks in advance.

One solution is to start the server in the background in your setUp() routine -- i.e. os.system('myserver &') -- then kill it when the test is over, in tearDown().
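A minimal sketch of that setUp()/tearDown() pattern, using subprocess rather than os.system so the process is easy to kill; 'myserver' is a placeholder for your real server command:
import subprocess
import time
import unittest

class TestRestApis(unittest.TestCase):
    def setUp(self):
        # 'myserver' stands in for the real XMPP/REST server command
        self.server = subprocess.Popen(['myserver'])
        time.sleep(1)  # give the server a moment to start listening

    def tearDown(self):
        # kill the server when the test is over
        self.server.terminate()
        self.server.wait()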
If you want direct control over the server, use fork() and follow roughly the same pattern as #1.
Example of the fork() approach:
import os, signal, time, unittest

def server():
    try:
        for _ in xrange(5, 0, -1):
            print 'ding'
            time.sleep(1)
    except KeyboardInterrupt:
        pass

class TestClient(unittest.TestCase):
    def setUp(self):
        self.server_pid = None
        pid = os.fork()
        if not pid:  # child
            return server()
        # parent
        self.server_pid = pid

    def test1(self):
        print 'test server, PID', self.server_pid
        time.sleep(2)

    def tearDown(self):
        if not self.server_pid:
            return
        os.kill(self.server_pid, signal.SIGINT)
Run with:
python -m unittest ptest
Output:
test server, PID 16490
ding
ding
.test server, PID None
----------------------------------------------------------------------
Ran 1 test in 2.003s

OK

Related

python zerorpc and multiprocessing issue

I'm implementing a bi-directional ping-pong demo app between an Electron app and a Python backend.
This is the code for the Python part, which causes the problems:
import sys
import zerorpc
import time
from multiprocessing import Process

def ping_response():
    print("Sleeping")
    time.sleep(5)
    c = zerorpc.Client()
    c.connect("tcp://127.0.0.1:4243")
    print("sending pong")
    c.pong()

class Api(object):
    def echo(self, text):
        """echo any text"""
        return text

    def ping(self):
        p = Process(target=ping_response, args=())
        p.start()
        print("got ping")
        return

def parse_port():
    port = 4242
    try:
        port = int(sys.argv[1])
    except Exception:
        pass
    return '{}'.format(port)

def main():
    addr = 'tcp://127.0.0.1:' + parse_port()
    s = zerorpc.Server(Api())
    s.bind(addr)
    print('start running on {}'.format(addr))
    s.run()

if __name__ == '__main__':
    main()
Each time ping() is called from the JavaScript side, it starts a new process that simulates some work (sleeping for 5 seconds) and replies by calling pong on the Node.js server to indicate the work is done.
The issue is that the pong() request never reaches the JavaScript side. If, instead of spawning a new process, I create a new thread using _thread and execute the same code in ping_response(), the pong request arrives on the JavaScript side. Also, if I manually run the bash command zerorpc tcp://localhost:4243 pong, I can see that the pong request is received by the Node.js script, so the server on the JavaScript side works fine.
What happens to the zerorpc client when I create a new process, such that it doesn't manage to send the request?
Thank you.
EDIT
It seems it gets stuck in c.pong()
Try using gipc.start_process() from the gipc module (via pip) instead of multiprocessing.Process(). It creates a fresh gevent context in the child, whereas multiprocessing would accidentally inherit the parent's.
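A minimal sketch of that change against the Api class from the question (gipc.start_process() mirrors the multiprocessing.Process interface, so only the spawn call changes):
import gipc  # pip install gipc

class Api(object):
    def ping(self):
        # gipc spawns the child with a fresh gevent context instead of
        # the broken one multiprocessing would inherit from the server
        p = gipc.start_process(target=ping_response)
        print("got ping")
        return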

RQ Timeout does not kill multi-threaded jobs

I'm having problems running multithreaded tasks using Python RQ (tested on v0.5.6 and v0.6.0).
Consider the following piece of code, as a simplified version of what I'm trying to achieve:
thing.py
from threading import Thread

class MyThing(object):
    def say_hello(self):
        while True:
            print "Hello World"

    def hello_task(self):
        t = Thread(target=self.say_hello)
        t.daemon = True  # seems like it makes no difference
        t.start()
        t.join()
main.py
from rq import Queue
from redis import Redis
from thing import MyThing
conn = Redis()
q = Queue(connection=conn)
q.enqueue(MyThing().say_hello, timeout=5)
When executing main.py (while rqworker is running in the background), the job is killed by the timeout, as expected, within 5 seconds.
The problem is that when I enqueue a task that spawns a thread, such as MyThing().hello_task, the thread runs forever and nothing happens when the 5-second timeout is over.
How can I run a multithreaded task with RQ, such that the timeout will kill the task, its sons, grandsons and their wives?
When you run t.join(), the hello_task thread blocks and waits until the say_hello thread returns, and thus never receives the timeout signal from RQ. You can let the main thread run and properly receive the timeout signal by calling Thread.join with a timeout, in a loop, while waiting for the thread to finish. Like so:
def hello_task(self):
    t = Thread(target=self.say_hello)
    t.start()
    while t.is_alive():
        t.join(1)  # block for 1 second at a time
That way you could also catch the timeout exception and handle it, if you wish:
from rq.timeouts import JobTimeoutException

def hello_task(self):
    t = Thread(target=self.say_hello)
    t.start()
    try:
        while t.is_alive():
            t.join(1)  # block for 1 second at a time
    except JobTimeoutException:
        print "Thread killed due to timeout"
        raise

How to really test signal handling in Python?

My code is simple:
def start():
    signal(SIGINT, lambda signum, frame: sys.exit())
    startTCPServer()
So I register a SIGINT handler in my application, then I start a TCP listener.
Here are my questions:
How can I send a SIGINT signal from Python code?
How can I test that the application raises a SystemExit exception when it receives SIGINT?
If I run start() in my test, it will block, so how can I send a signal to it?
First off, testing the signal itself is a functional or integration test, not a unit test. See What's the difference between unit, functional, acceptance, and integration tests?
You can run your Python script as a subprocess with subprocess.Popen(), then use the Popen.send_signal() method to send signals to that process, then test that the process has exited with Popen.poll().
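A minimal sketch of that approach, assuming a hypothetical stand-alone script start_server.py that just calls start():
import signal
import subprocess
import sys
import time
import unittest

class TestSignalHandling(unittest.TestCase):
    def test_sigint_causes_clean_exit(self):
        # start_server.py is a hypothetical wrapper that calls start()
        proc = subprocess.Popen([sys.executable, 'start_server.py'])
        time.sleep(1)  # crude; ideally wait until the port is listening
        proc.send_signal(signal.SIGINT)
        time.sleep(1)  # give the process a moment to exit
        # poll() returns the exit code once the process has terminated;
        # an unhandled SystemExit translates to exit code 0 here
        self.assertEqual(proc.poll(), 0)

if __name__ == '__main__':
    unittest.main()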
How can I send a SIGINT signal from Python code?
You can use os.kill which, slightly misleadingly, can be used to send any signal to any process by its ID. The process ID of the application/test can be found with os.getpid(), so you would have...
pid = os.getpid()
# ... other code discussed later in the answer ...
os.kill(pid, SIGINT)
How can I test that the application raises a SystemExit exception when it receives SIGINT?
The usual way in a test to check that some code raises SystemExit is with unittest.TestCase.assertRaises...
import unittest

import start

class TestStart(unittest.TestCase):
    def test_signal_handling(self):
        # ... other code discussed later in the answer ...
        with self.assertRaises(SystemExit):
            start.start()
If I run start() in my test, it will block, so how can I send a signal to it?
This is the trick: you can start another thread which then sends a signal back to the main thread which is blocking.
Putting it all together, assuming your production start function is in start.py:
from signal import (
    SIGINT,
    signal,
)
import socketserver

def startTCPServer():
    # Taken from https://docs.python.org/3.4/library/socketserver.html#socketserver-tcpserver-example
    class MyTCPHandler(socketserver.BaseRequestHandler):
        def handle(self):
            self.data = self.request.recv(1024).strip()
            self.request.sendall(self.data.upper())

    HOST, PORT = "localhost", 9999
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
    server.serve_forever()

def start():
    def raiseSystemExit(_, __):
        raise SystemExit

    signal(SIGINT, raiseSystemExit)
    startTCPServer()
Then your test code could be like the following, say in test.py
import os
from signal import (
    SIGINT,
)
import threading
import time
import unittest

import start

class TestStart(unittest.TestCase):
    def test_signal_handling(self):
        pid = os.getpid()

        def trigger_signal():
            # You could do something more robust, e.g. wait until port is listening
            time.sleep(1)
            os.kill(pid, SIGINT)

        thread = threading.Thread(target=trigger_signal)
        thread.daemon = True
        thread.start()

        with self.assertRaises(SystemExit):
            start.start()

if __name__ == '__main__':
    unittest.main()
and run using
python test.py
The above is the same technique as in the answer at https://stackoverflow.com/a/49500820/1319998

Making sure a worker process always terminates in ZeroMQ

I am implementing a pipeline pattern with ZeroMQ, using the Python bindings.
Tasks are fanned out to workers, which listen for new tasks in an infinite loop like this:
while True:
    socks = dict(self.poller.poll())
    if self.receiver in socks and socks[self.receiver] == zmq.POLLIN:
        msg = self.receiver.recv_unicode(encoding='utf-8')
        self.process(msg)
    if self.hear in socks and socks[self.hear] == zmq.POLLIN:
        msg = self.hear.recv()
        print self.pid, ":", msg
        sys.exit(0)
They exit when they get a message from the sink node confirming that it has received all the expected results.
However, a worker may miss such a message and never finish. What is the best way to make workers always finish, when they have no way of knowing (other than through the message already mentioned) that there are no further tasks to process?
Here is the testing code I wrote for checking the workers status:
# -*- coding: utf-8 -*-
"""
Test module containing tests for all modules of pypln
"""
import unittest
from servers.ventilator import Ventilator
from subprocess import Popen, PIPE
import time

class testWorkerModules(unittest.TestCase):
    def setUp(self):
        self.nw = 4
        # spawn 4 workers
        self.ws = [Popen(['python', 'workers/dummy_worker.py'], stdout=None) for i in range(self.nw)]
        # spawn a sink
        self.sink = Popen(['python', 'sinks/dummy_sink.py'], stdout=None)
        # start a ventilator
        self.V = Ventilator()
        # wait for workers and sinks to connect
        time.sleep(1)

    def test_send_unicode(self):
        '''
        Pushing unicode strings through workers to sinks.
        '''
        self.V.push_load([u'são joão' for i in xrange(80)])
        time.sleep(1)
        #[p.wait() for p in self.ws]  # wait for the workers to terminate
        wsr = [p.poll() for p in self.ws]
        while None in wsr:
            # these are the unfinished workers
            print wsr, [p.pid for p in self.ws if p.poll() == None]
            time.sleep(0.5)
            wsr = [p.poll() for p in self.ws]
        self.sink.wait()
        self.sink = self.sink.returncode
        self.assertEqual([0] * self.nw, wsr)
        self.assertEqual(0, self.sink)

if __name__ == '__main__':
    unittest.main()
All messaging designs eventually end up needing heartbeats. If you (as a worker, a sink, or whatever) discover that a component you need to work with is dead, you can basically either try to connect somewhere else or kill yourself. So if you, as a worker, discover that the sink is no longer there, just exit. This also means that you may exit even though the sink is still there but the connection is broken. I am not sure you can do much more, other than perhaps setting all the timeouts more sensibly...
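A minimal sketch of that idea, built on the worker loop from the question: give poll() a timeout and treat prolonged silence as a dead pipeline (the 5000 ms threshold is an arbitrary choice):
import sys
import zmq

HEARTBEAT_TIMEOUT_MS = 5000  # arbitrary; tune to your workload

while True:
    # poll() returns an empty dict if nothing arrives before the timeout
    socks = dict(self.poller.poll(HEARTBEAT_TIMEOUT_MS))
    if not socks:
        # neither a task nor a control message for a while: assume the
        # ventilator and sink are gone and exit instead of waiting forever
        sys.exit(0)
    if self.receiver in socks and socks[self.receiver] == zmq.POLLIN:
        msg = self.receiver.recv_unicode(encoding='utf-8')
        self.process(msg)
    if self.hear in socks and socks[self.hear] == zmq.POLLIN:
        self.hear.recv()  # the sink confirms all results arrived
        sys.exit(0)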

Running SimpleXMLRPCServer in separate thread and shutting down

I have a class that I wish to test via SimpleXMLRPCServer in Python. The way I have my unit test set up is that I create a new thread and start SimpleXMLRPCServer in it. Then I run all the tests, and finally shut down.
This is my ServerThread:
class ServerThread(Thread):
    running = True

    def run(self):
        self.server = ...  # creates and starts SimpleXMLRPCServer
        while self.running:
            self.server.handle_request()

    def stop(self):
        self.running = False
        self.server.server_close()
The problem is that calling ServerThread.stop() followed by Thread.join() will not cause the thread to stop properly if it's already waiting for a request in handle_request(). And since there doesn't seem to be any interrupt or timeout mechanism here that I can use, I am at a loss as to how I can cleanly shut down the server thread.
I had the same problem and, after hours of research, I solved it by switching from my own handle_request() loop to serve_forever() to start the server.
serve_forever() starts an internal loop like yours. This loop can be stopped by calling shutdown(). After stopping the loop it is possible to stop the server with server_close().
I don't know why this works and the handle_request() loop doesn't, but it does ;P (Presumably because serve_forever() polls with a timeout and re-checks an internal shutdown flag on every pass, while a handle_request() loop blocks indefinitely waiting for the next request, so your running flag is only re-checked after one more request arrives.)
Here is my code:
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer

from pyWebService.server.service.WebServiceRequestHandler import WebServiceRquestHandler

class WebServiceServer(Thread):
    def __init__(self, ip, port):
        super(WebServiceServer, self).__init__()
        self.running = True
        self.server = SimpleXMLRPCServer((ip, port), requestHandler=WebServiceRquestHandler)
        self.server.register_introspection_functions()

    def register_function(self, function):
        self.server.register_function(function)

    def run(self):
        self.server.serve_forever()

    def stop_server(self):
        self.server.shutdown()
        self.server.server_close()

print("starting server")
webService = WebServiceServer("localhost", 8010)
webService.start()
print("stopping server")
webService.stop_server()
webService.join()
print("server stopped")
Two suggestions.
Suggestion One is to use a separate process instead of a separate thread.
Create a stand-alone XMLRPC server program.
Start it with subprocess.Popen().
Kill it when the test is done. On standard OSes (not Windows) the kill works nicely. On Windows, however, there's no trivial kill function, but there are recipes for this.
The other suggestion is to have a function in your XMLRPC server which causes server self-destruction: define a function that calls sys.exit() or os.abort(), or raises a similar exception that will stop the process.
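A minimal sketch of that second suggestion (names are illustrative); this variant stops the server via shutdown() from a helper thread rather than sys.exit(), since shutdown() deadlocks if called from the serving thread itself:
import threading
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("localhost", 8010))

def quit():
    # shutdown() blocks until serve_forever() returns, so invoke it
    # from a helper thread instead of from inside this request handler
    threading.Thread(target=server.shutdown).start()
    return True  # XML-RPC handlers must return a marshallable value

server.register_function(quit, 'quit')
server.serve_forever()
server.server_close()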
This is my way: send SIGTERM to self. (Works for me.)
Server code
import os
import signal
import xmlrpc.server
server = xmlrpc.server.SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(lambda: os.kill(os.getpid(), signal.SIGTERM), 'quit')
server.serve_forever()
Client code
import xmlrpc.client
c = xmlrpc.client.ServerProxy("http://localhost:8000")
try:
    c.quit()
except ConnectionRefusedError:
    pass
