I'm starting a web server in a new thread. After all tests have run I want to kill the child thread with the server running inside it. The only solution I have found is to interrupt the entire process, with all its threads, by calling os.system('kill %d' % os.getpid()) (see the code below). I'm not sure it's the smartest solution, and I'm not sure all threads will actually be killed. Could I send some kind of KeyboardInterrupt signal to stop the thread before exiting the main thread?
import http.server
import os
import sys
import unittest
import time
import requests
import threading

from addresses import handle_get_addresses, load_addresses
from webserver import HTTPHandler


def run_in_thread(fn):
    def run(*k, **kw):
        t = threading.Thread(target=fn, args=k, kwargs=kw)
        t.start()
        return t
    return run


@run_in_thread
def start_web_server():
    web_host = 'localhost'
    print("starting server...")
    web_port = 8808
    httpd = http.server.HTTPServer((web_host, web_port), HTTPHandler)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass


class TestAddressesApi(unittest.TestCase):
    WEB_SERVER_THREAD: threading.Thread = None

    @classmethod
    def setUpClass(cls):
        cls.WEB_SERVER_THREAD = start_web_server()

    @classmethod
    def tearDownClass(cls):
        print("shutting down the webserver...")
        # here something like cls.WEB_SERVER_THREAD.terminate()
        # instead of the line below
        os.system('kill %d' % os.getpid())

    def test_get_all_addresses(self):
        pass

    def test_1(self):
        pass


if __name__ == "__main__":
    unittest.main()
Maybe threading.Event is what you want.
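serve_forever() never checks a flag, but the same idea works if the server handles one request per loop pass. A minimal sketch of that, assuming the httpd object from the question; BaseServer.timeout bounds how long handle_request() blocks:

import threading

stop_event = threading.Event()

def serve_until_stopped(httpd):
    httpd.timeout = 0.5              # handle_request() gives up after 0.5 s
    while not stop_event.is_set():
        httpd.handle_request()       # serves at most one request per pass

# in tearDownClass: call stop_event.set(), then join the server thread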
Just found a solution: daemon threads stop executing when the main thread exits.
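Building on that, the daemon flag can be combined with an explicit stop. A minimal sketch, assuming the HTTPHandler from the question; HTTPServer inherits shutdown() from socketserver.BaseServer, which makes serve_forever() return:

import http.server
import threading

def start_web_server():
    httpd = http.server.HTTPServer(('localhost', 8808), HTTPHandler)
    # daemon=True: this thread cannot keep the process alive by itself
    t = threading.Thread(target=httpd.serve_forever, daemon=True)
    t.start()
    return httpd, t

# in tearDownClass:
#     cls.httpd.shutdown()        # makes serve_forever() return
#     cls.server_thread.join()    # wait for the server thread to exit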
I need to run a GStreamer pipeline to perform video streaming. The GStreamer pipeline requires a GObject.MainLoop object, whose run() method does not terminate until quit() is called.
For this I create a process (P2) from my main application process (P1), which runs the GObject.MainLoop instance in its main thread. The problem is that the loop runs indefinitely within process P2 and I'm unable to exit/quit it from the main application process (P1).
The following section of code might help in understanding the scenario.
'''
start() spawns a new process P2 that runs the MainLoop within its main thread.
stop() is called from P1, but does not quit the MainLoop. This is probably
because processes do not have shared memory.
'''
from multiprocessing import Process

import gi
from gi.repository import GObject


class Main:
    def __init__(self):
        self.process = None
        self.loop = GObject.MainLoop()

    def worker(self):
        self.loop.run()

    def start(self):
        self.process = Process(target=self.worker, args=())
        self.process.start()

    def stop(self):
        self.loop.quit()
Next, I tried using a multiprocessing Queue to share the loop variable between the processes, but I am still unable to quit the MainLoop.
'''
start() spawns a new process and puts the loop object into a multiprocessing
Queue. stop() gets the loop from the queue and calls its quit() method, but
it still does not quit the MainLoop.
'''
from multiprocessing import Process, Queue

import gi
from gi.repository import GObject


class Main:
    def __init__(self):
        self.p = None
        self.loop = GObject.MainLoop()
        self.queue = Queue()

    def worker(self):
        self.queue.put(self.loop)
        self.loop.run()

    def start(self):
        self.p = Process(target=self.worker, args=())
        self.p.start()

    def stop(self):
        # receive the loop instance shared by the child process
        loop = self.queue.get()
        loop.quit()
How do I call the quit() method of the MainLoop object, which is only accessible within the child process P2?
Ok, firstly we need to be using threads, not processes. Processes will be in a different address space.
What is the difference between a process and a thread?
Try passing the main loop object to a separate thread that does the actual work. This will turn your main method into nothing but a basic GLib event-processing loop, but that is fine and the normal behavior in many GLib applications.
Lastly, we need to handle the race condition of the worker thread finishing its work before the main loop activates. We do this with the while not loop.is_running() snippet.
from threading import Thread

import gi
from gi.repository import GObject


def worker(loop):
    while not loop.is_running():
        print("waiting for loop to run")
    print("working")
    loop.quit()
    print("quitting")


class Main:
    def __init__(self):
        self.thread = None
        self.loop = GObject.MainLoop()

    def start(self):
        self.thread = Thread(target=worker, args=(self.loop,))
        self.thread.start()
        self.loop.run()


def main():
    GObject.threads_init()
    m = Main()
    m.start()


if __name__ == '__main__':
    main()
I extended multiprocessing.Process: my class Main overrides its run() method to run the GObject.MainLoop instance inside another thread (T1) instead of the process's main thread. I then implemented a wait-notify mechanism that keeps the main thread of process P2 in a wait-notify loop, and used a multiprocessing.Queue to forward messages to it, with P2 notified at the same time. For example, the stop() method sends a 'quit' message to P2, for which a handler is defined in the overridden run() method.
This module can be extended to pass any number of messages to the child process, provided their handlers are also defined.
Following is the code snippet I used.
from multiprocessing import Process, Condition, Queue
from threading import Thread

import gi
from gi.repository import GObject

loop = GObject.MainLoop()


def worker():
    loop.run()


class Main(Process):
    def __init__(self, target=None, args=()):
        self.target = target
        self.args = tuple(args)
        print(self.args)
        self.message_queue = Queue()
        self.cond = Condition()
        self.thread = None
        self.loop = GObject.MainLoop()
        Process.__init__(self)

    def run(self):
        if self.target:
            self.thread = Thread(target=self.target, args=())
            print("running target method")
            self.thread.start()
        # wait-notify loop: handle messages sent from the parent process
        while True:
            with self.cond:
                self.cond.wait()
            msg = self.message_queue.get()
            if msg == 'quit':
                print(loop.is_running())
                loop.quit()
                print(loop.is_running())
                break
            else:
                print('message received', msg)

    def send_message(self, msg):
        self.message_queue.put(msg)
        with self.cond:
            self.cond.notify_all()

    def stop(self):
        self.send_message("quit")
        self.join()

    def func1(self):
        self.send_message("msg 1")  # handler is defined in the overridden run() method

    # a few other functions can send unique messages to the process; their
    # handlers are defined in the overridden run() method above
This method works fine for my scenario, but suggestions are welcome if there is a better way to do the same.
I need to terminate external programs which run from an asyncio Python script with a specific signal, say SIGTERM. My problem is that the programs always receive SIGINT even if I send them SIGTERM.
Here is a test case; the source code for the fakeprg used in the test below can be found here.
import asyncio
import traceback
import os
import os.path
import sys
import time
import signal
import shlex
from functools import partial


class ExtProgramRunner:
    run = True
    processes = []

    def __init__(self):
        pass

    def start(self, loop):
        self.current_loop = loop
        self.current_loop.add_signal_handler(signal.SIGINT, lambda: asyncio.async(self.stop('SIGINT')))
        self.current_loop.add_signal_handler(signal.SIGTERM, lambda: asyncio.async(self.stop('SIGTERM')))
        asyncio.async(self.cancel_monitor())
        asyncio.Task(self.run_external_programs())

    @asyncio.coroutine
    def stop(self, sig):
        print("Got {} signal".format(sig))
        self.run = False
        for process in self.processes:
            print("sending SIGTERM signal to the process with pid {}".format(process.pid))
            process.send_signal(signal.SIGTERM)
        print("Canceling all tasks")
        for task in asyncio.Task.all_tasks():
            task.cancel()

    @asyncio.coroutine
    def cancel_monitor(self):
        while True:
            try:
                yield from asyncio.sleep(0.05)
            except asyncio.CancelledError:
                break
        print("Stopping loop")
        self.current_loop.stop()

    @asyncio.coroutine
    def run_external_programs(self):
        os.makedirs("/tmp/files0", exist_ok=True)
        os.makedirs("/tmp/files1", exist_ok=True)
        # schedule tasks for execution
        asyncio.Task(self.run_cmd_forever("/tmp/fakeprg /tmp/files0 1000"))
        asyncio.Task(self.run_cmd_forever("/tmp/fakeprg /tmp/files1 5000"))

    @asyncio.coroutine
    def run_cmd_forever(self, cmd):
        args = shlex.split(cmd)
        while self.run:
            process = yield from asyncio.create_subprocess_exec(*args)
            self.processes.append(process)
            exit_code = yield from process.wait()
            for idx, p in enumerate(self.processes):
                if process.pid == p.pid:
                    self.processes.pop(idx)
            print("External program '{}' exited with exit code {}, relaunching".format(cmd, exit_code))


def main():
    loop = asyncio.get_event_loop()
    try:
        daemon = ExtProgramRunner()
        loop.call_soon(daemon.start, loop)
        # start main event loop
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    except asyncio.CancelledError as exc:
        print("asyncio.CancelledError")
    except Exception as exc:
        print(exc, file=sys.stderr)
        print("====", file=sys.stderr)
        print(traceback.format_exc(), file=sys.stderr)
    finally:
        print("Stopping daemon...")
        loop.close()


if __name__ == '__main__':
    main()
The reason for this is: when you start your Python program (the parent) and it starts its /tmp/fakeprg processes (the children), each gets its own PID, but they all run in the same foreground process group. Your shell is bound to this group, so when you hit Ctrl-C (SIGINT), Ctrl-Z (SIGTSTP) or Ctrl-\ (SIGQUIT), the signal is sent to all processes in the foreground process group.
In your code this happens before the parent can even send the signal to its children through send_signal, so that line signals an already dead process (and should fail, so IMO that's an issue with asyncio).
To solve this, you can explicitly put your child processes into a separate process group, like this:
asyncio.create_subprocess_exec(*args, preexec_fn=os.setpgrp)
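For context, a sketch of how that plugs into run_cmd_forever() from the question; the keyword argument is forwarded to subprocess.Popen, and start_new_session=True would have a similar isolating effect:

import os
import asyncio

@asyncio.coroutine
def start_detached(args):
    # preexec_fn runs in the child right before exec and moves it into its
    # own process group, so the terminal's SIGINT no longer reaches it
    process = yield from asyncio.create_subprocess_exec(
        *args, preexec_fn=os.setpgrp)
    return process

With the children detached this way, only the parent sees Ctrl-C, and its stop() coroutine can deliver SIGTERM via process.send_signal() as intended.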
I am testing Python threading with the following script:
import threading

class FirstThread (threading.Thread):
    def run (self):
        while True:
            print 'first'

class SecondThread (threading.Thread):
    def run (self):
        while True:
            print 'second'

FirstThread().start()
SecondThread().start()
This is running in Python 2.7 on Kubuntu 11.10. Ctrl+C will not kill it. I also tried adding a handler for system signals, but that did not help:
import signal
import sys

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
To kill the process I end up killing it by PID after sending the program to the background with Ctrl+Z, which isn't being ignored. Why is Ctrl+C being ignored so persistently? How can I resolve this?
Ctrl+C terminates the main thread, but because your threads aren't in daemon mode, they keep running, and that keeps the process alive. We can make them daemons:
f = FirstThread()
f.daemon = True
f.start()
s = SecondThread()
s.daemon = True
s.start()
But then there's another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:
import time
while True:
    time.sleep(1)
Now it will keep printing 'first' and 'second' until you hit Ctrl+C.
Edit: as commenters have pointed out, the daemon threads may not get a chance to clean up things like temporary files. If you need that, then catch the KeyboardInterrupt on the main thread and have it coordinate cleanup and shutdown, as sketched below. But in many cases, letting daemon threads die suddenly is probably good enough.
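A minimal sketch of that coordinated shutdown, assuming a single worker thread; the main thread catches KeyboardInterrupt and signals the worker to stop via a threading.Event before joining it:

import threading
import time

stop = threading.Event()

def worker(name):
    while not stop.is_set():
        print(name)
        time.sleep(1)
    # cleanup (temporary files etc.) would go here

t = threading.Thread(target=worker, args=('first',))
t.daemon = True
t.start()
try:
    while t.is_alive():
        time.sleep(1)
except KeyboardInterrupt:
    stop.set()   # ask the worker to finish its current pass
    t.join()     # wait for it to clean up and exit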
KeyboardInterrupt and signals are only seen by the process (i.e. the main thread). Have a look at Ctrl-c i.e. KeyboardInterrupt to kill threads in python.
I think it's best to call join() on your threads when you expect them to die. I've taken the liberty of changing your loops so they can end (you can add whatever cleanup is required there as well). The variable die is checked on each pass, and when it's True the thread exits.
import threading
import time

class MyThread (threading.Thread):
    die = False

    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run (self):
        while not self.die:
            time.sleep(1)
            print(self.name)

    def join(self):
        self.die = True
        super().join()

if __name__ == '__main__':
    f = MyThread('first')
    f.start()
    s = MyThread('second')
    s.start()
    try:
        while True:
            time.sleep(2)
    except KeyboardInterrupt:
        f.join()
        s.join()
An improved version of @Thomas K's answer:
Define an assistant function is_any_thread_alive() according to this gist, which lets main() terminate automatically.
Example code:
import threading
import time

def job1():
    ...

def job2():
    ...

def is_any_thread_alive(threads):
    return True in [t.is_alive() for t in threads]

if __name__ == "__main__":
    ...
    t1 = threading.Thread(target=job1, daemon=True)
    t2 = threading.Thread(target=job2, daemon=True)
    t1.start()
    t2.start()
    while is_any_thread_alive([t1, t2]):
        time.sleep(0)
One simple 'gotcha' to beware of: are you sure CAPS LOCK isn't on?
I was running a Python script in the Thonny IDE on a Pi 4. With CAPS LOCK on, Ctrl+Shift+C is passed to the keyboard buffer, not Ctrl+C.
I have the following code, which compares user input:
import thread, sys
import socket                    # needed by run()
from datetime import datetime    # needed by run()

if (username.get_text() == 'xyz' and password.get_text() == '123'):
    thread.start_new_thread(run, ())

def run():
    print "running client"
    start = datetime.now().second
    while True:
        try:
            host = 'localhost'
            port = 5010
            time = abs(datetime.now().second - start)
            time = str(time)
            print time
            client = socket.socket()
            client.connect((host, port))
            client.send(time)
        except socket.error:
            pass
If I just call the function run() directly it works, but when I try to create a thread to run it, for some reason the thread is not created and the run() function is not executed. I am unable to find any error.
You really should use the threading module instead of thread.
What else are you doing? If you create a thread like this, the interpreter will exit no matter whether the thread is still running or not.
for example:
import thread
import time

def run():
    time.sleep(2)
    print('ok')

thread.start_new_thread(run, ())
--> this produces:
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
whereas:

import threading
import time

def run():
    time.sleep(2)
    print('ok')

t = threading.Thread(target=run)
t.daemon = True  # set thread to daemon ('ok' won't be printed in this case)
t.start()
works as expected. If you don't want to keep the interpreter waiting for the thread, just set daemon=True on the generated Thread (edit: added that in the example).
thread is a low-level module; you should use threading.
from threading import Thread
t = Thread(target=run, args=())
t.start()
I am working on an XML-RPC server which has to perform certain tasks cyclically. I am using Twisted as the core of the XML-RPC service, but I am running into a little problem:
class cemeteryRPC(xmlrpc.XMLRPC):
    def __init__(self, dic):
        xmlrpc.XMLRPC.__init__(self)

    def xmlrpc_foo(self):
        return 1

    def cycle(self):
        print "Hello"
        time.sleep(3)


class cemeteryM( base ):
    def __init__(self, dic):  # dic is for cemetery
        multiprocessing.Process.__init__(self)
        self.cemRPC = cemeteryRPC()

    def run(self):
        # Start reactor on a second process
        reactor.listenTCP( c.PORT_XMLRPC, server.Site( self.cemRPC ) )
        p = multiprocessing.Process( target=reactor.run )
        p.start()
        while not self.exit.is_set():
            self.cemRPC.cycle()
        #p.join()


if __name__ == "__main__":
    import errno
    test = cemeteryM()
    test.start()
    # trying new method
    notintr = False
    while not notintr:
        try:
            test.join()
            notintr = True
        except OSError, ose:
            if ose.errno != errno.EINTR:
                raise ose
        except KeyboardInterrupt:
            notintr = True
How should I go about joining these two processes so that their respective joins don't block?
(I am pretty confused by "join". Why would it block? I have googled but can't find much helpful explanation of the usage of join. Can someone explain this to me?)
Do you really need to run Twisted in a separate process? That looks pretty unusual to me.
Try to think of Twisted's Reactor as your main loop - and hang everything you need off that - rather than trying to run Twisted as a background task.
The more normal way of performing this sort of operation would be to use Twisted's .callLater or to add a LoopingCall object to the Reactor.
e.g.
from twisted.web import xmlrpc, server
from twisted.internet import task
from twisted.internet import reactor


class Example(xmlrpc.XMLRPC):
    def xmlrpc_add(self, a, b):
        return a + b

    def timer_event(self):
        print "one second"


r = Example()
m = task.LoopingCall(r.timer_event)
m.start(1.0)
reactor.listenTCP(7080, server.Site(r))
reactor.run()
Hey asdvawev - .join() in multiprocessing works just like .join() in threading - it's a blocking call the main thread runs to wait for the worker to shut down. If the worker never shuts down, then .join() will never return. For example:
class myproc(Process):
    def run(self):
        while True:
            time.sleep(1)
Calling run on this means that join() will never, ever return. Typically to prevent this I'll use an Event() object passed into the child process to allow me to signal the child when to exit:
class myproc(Process):
    def __init__(self, event):
        self.event = event
        Process.__init__(self)

    def run(self):
        while not self.event.is_set():
            time.sleep(1)
Alternatively, if your work is encapsulated in a queue, you can simply have the child process work off the queue until it encounters a sentinel (typically a None entry in the queue) and then shut down; a sketch of this pattern follows.
Both of these suggestions mean that, prior to calling .join(), you can set the event or insert the sentinel, and when join() is called the process will finish its current task and then exit properly.
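A minimal sketch of the sentinel pattern, with hypothetical worker and queue names; the child drains the queue and exits when it sees None, so join() returns promptly:

from multiprocessing import Process, Queue

def worker(q):
    while True:
        item = q.get()
        if item is None:  # sentinel: no more work, shut down
            break
        print('processing', item)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    for job in range(3):
        q.put(job)
    q.put(None)  # tell the child to exit
    p.join()     # returns once the child sees the sentinel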