In Python, how do I know when a process is finished? - python
From within a Python GUI (PyGTK) I start a process (using multiprocessing). The process takes a long time (~20 minutes) to finish. When the process is finished I would like to clean it up (extract the results and join the process). How do I know when the process has finished?
My colleague suggested a busy loop within the parent process that checks if the child process has finished. Surely there is a better way.
In Unix, when a child process finishes, the parent process can be notified via a SIGCHLD signal and handle it in a signal handler. But I cannot see anything like that in Python. Am I missing something?
How can the end of a child process be observed from within the parent process? (Of course, I do not want to call Process.join(), as it would freeze up the GUI interface.)
This question is not limited to multi-processing: I have exactly the same problem with multi-threading.
I think that, as part of keeping Python multi-platform, simple things like SIGCHLD handling are left for you to do yourself. Agreed, this is a little more work when all you want is to know when the child is done, but it really isn't THAT painful. Consider the following, which uses a child process to do the work, two multiprocessing.Event instances, and a thread to check whether the child process is done:
import threading
from multiprocessing import Process, Event
from time import sleep

def childsPlay(event):
    print "Child started"
    for i in range(3):
        print "Child is playing..."
        sleep(1)
    print "Child done"
    event.set()

def checkChild(event, killEvent):
    event.wait()
    print "Child checked, and is done playing"
    if raw_input("Do again? y/n:") == "y":
        event.clear()
        t = threading.Thread(target=checkChild, args=(event, killEvent))
        t.start()
        p = Process(target=childsPlay, args=(event,))
        p.start()
    else:
        cleanChild()
        killEvent.set()

def cleanChild():
    print "Cleaning up the child..."

if __name__ == '__main__':
    event = Event()
    killEvent = Event()

    # process to do work
    p = Process(target=childsPlay, args=(event,))
    p.start()

    # thread to check on child process
    t = threading.Thread(target=checkChild, args=(event, killEvent))
    t.start()

    try:
        while not killEvent.is_set():
            print "GUI running..."
            sleep(1)
    except KeyboardInterrupt:
        print "Quitting..."
        exit(0)
    finally:
        print "Main done"
EDIT
Joining all processes and threads you create is good practice because it helps indicate when zombie (never-finishing) processes/threads are being created. I've altered the above code, adding a ChildChecker class that inherits from threading.Thread. Its sole purpose is to start a job in a separate process, wait for that process to finish, and then notify the GUI when everything is complete. Joining the ChildChecker also joins the process it is "checking". If the process doesn't join after 5 seconds, the thread force-terminates the process. Entering "y" starts a child process running "endlessChildsPlay", which demonstrates forced termination.
import threading
from multiprocessing import Process, Event
from time import sleep

def childsPlay(event):
    print "Child started"
    for i in range(3):
        print "Child is playing..."
        sleep(1)
    print "Child done"
    event.set()

def endlessChildsPlay(event):
    print "Endless child started"
    while True:
        print "Endless child is playing..."
        sleep(1)
        event.set()
    print "Endless child done"

class ChildChecker(threading.Thread):
    def __init__(self, killEvent):
        super(ChildChecker, self).__init__()
        self.killEvent = killEvent
        self.event = Event()
        self.process = Process(target=childsPlay, args=(self.event,))

    def run(self):
        self.process.start()
        while not self.killEvent.is_set():
            self.event.wait()
            print "Child checked, and is done playing"
            if raw_input("Do again? y/n:") == "y":
                self.event.clear()
                self.process = Process(target=endlessChildsPlay, args=(self.event,))
                self.process.start()
            else:
                self.cleanChild()
                self.killEvent.set()

    def join(self):
        print "Joining child process"
        # Timeout on 5 seconds
        self.process.join(5)
        if self.process.is_alive():
            print "Child did not join! Killing.."
            self.process.terminate()
        print "Joining ChildChecker thread"
        super(ChildChecker, self).join()

    def cleanChild(self):
        print "Cleaning up the child..."

if __name__ == '__main__':
    killEvent = Event()

    # thread to check on child process
    t = ChildChecker(killEvent)
    t.start()

    try:
        while not killEvent.is_set():
            print "GUI running..."
            sleep(1)
    except KeyboardInterrupt:
        print "Quitting..."
        exit(0)
    finally:
        t.join()
        print "Main done"
This answer is really simple! (It just took me days to work it out.)
Combined with PyGTK's idle_add(), you can create an AutoJoiningThread. The total code is borderline trivial:
class AutoJoiningThread(threading.Thread):
    def run(self):
        threading.Thread.run(self)
        gobject.idle_add(self.join)
If you want to do more than just join (such as collecting results) then you can extend the above class to emit signals on completion, as is done in the following example:
import threading
import time
import sys
import gobject

gobject.threads_init()

class Child:
    def __init__(self):
        self.result = None

    def play(self, count):
        print "Child starting to play."
        for i in range(count):
            print "Child playing."
            time.sleep(1)
        print "Child finished playing."
        self.result = 42

    def get_result(self, obj):
        print "The result was "+str(self.result)

class AutoJoiningThread(threading.Thread, gobject.GObject):
    __gsignals__ = {
        'finished': (gobject.SIGNAL_RUN_LAST,
                     gobject.TYPE_NONE,
                     ())
        }

    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)
        gobject.GObject.__init__(self)

    def run(self):
        threading.Thread.run(self)
        gobject.idle_add(self.join)
        gobject.idle_add(self.emit, 'finished')

    def join(self):
        threading.Thread.join(self)
        print "Called Thread.join()"

if __name__ == '__main__':
    print "Creating child"
    child = Child()

    print "Creating thread"
    thread = AutoJoiningThread(target=child.play,
                               args=(3,))
    thread.connect('finished', child.get_result)

    print "Starting thread"
    thread.start()

    print "Running mainloop (Ctrl+C to exit)"
    mainloop = gobject.MainLoop()
    try:
        mainloop.run()
    except KeyboardInterrupt:
        print "Received KeyboardInterrupt. Quiting."
        sys.exit()

    print "God knows how we got here. Quiting."
    sys.exit()
The output of the above example will depend on the order the threads are executed, but it will be similar to:
Creating child
Creating thread
Starting thread
Child starting to play.
Child playing.
Running mainloop (Ctrl+C to exit)
Child playing.
Child playing.
Child finished playing.
Called Thread.join()
The result was 42
^CReceived KeyboardInterrupt. Quiting.
It's not possible to create an AutoJoiningProcess in the same way (because we cannot call idle_add() across two different processes), however we can use an AutoJoiningThread to get what we want:
class AutoJoiningProcess(multiprocessing.Process):
    def start(self):
        thread = AutoJoiningThread(target=self.start_process)
        thread.start()  # automatically joins

    def start_process(self):
        multiprocessing.Process.start(self)
        self.join()
To demonstrate AutoJoiningProcess here is another example:
import threading
import multiprocessing
import time
import sys
import gobject

gobject.threads_init()

class Child:
    def __init__(self):
        self.result = multiprocessing.Manager().list()

    def play(self, count):
        print "Child starting to play."
        for i in range(count):
            print "Child playing."
            time.sleep(1)
        print "Child finished playing."
        self.result.append(42)

    def get_result(self, obj):
        print "The result was "+str(self.result)

class AutoJoiningThread(threading.Thread, gobject.GObject):
    __gsignals__ = {
        'finished': (gobject.SIGNAL_RUN_LAST,
                     gobject.TYPE_NONE,
                     ())
        }

    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)
        gobject.GObject.__init__(self)

    def run(self):
        threading.Thread.run(self)
        gobject.idle_add(self.join)
        gobject.idle_add(self.emit, 'finished')

    def join(self):
        threading.Thread.join(self)
        print "Called Thread.join()"

class AutoJoiningProcess(multiprocessing.Process, gobject.GObject):
    __gsignals__ = {
        'finished': (gobject.SIGNAL_RUN_LAST,
                     gobject.TYPE_NONE,
                     ())
        }

    def __init__(self, *args, **kwargs):
        multiprocessing.Process.__init__(self, *args, **kwargs)
        gobject.GObject.__init__(self)

    def start(self):
        thread = AutoJoiningThread(target=self.start_process)
        thread.start()

    def start_process(self):
        multiprocessing.Process.start(self)
        self.join()
        gobject.idle_add(self.emit, 'finished')

    def join(self):
        multiprocessing.Process.join(self)
        print "Called Process.join()"

if __name__ == '__main__':
    print "Creating child"
    child = Child()

    print "Creating thread"
    process = AutoJoiningProcess(target=child.play,
                                 args=(3,))
    process.connect('finished', child.get_result)

    print "Starting thread"
    process.start()

    print "Running mainloop (Ctrl+C to exit)"
    mainloop = gobject.MainLoop()
    try:
        mainloop.run()
    except KeyboardInterrupt:
        print "Received KeyboardInterrupt. Quiting."
        sys.exit()

    print "God knows how we got here. Quiting."
    sys.exit()
The resulting output will be very similar to the example above, except this time we have both the process joining and its attendant thread joining too:
Creating child
Creating thread
Starting thread
Running mainloop (Ctrl+C to exit)
Child starting to play.
Child playing.
Child playing.
Child playing.
Child finished playing.
Called Process.join()
The result was [42]
Called Thread.join()
^CReceived KeyboardInterrupt. Quiting.
Unfortunately:
This solution is dependent on gobject, due to the use of idle_add(). gobject is used by PyGTK.
This is not a true parent/child relationship. If one of these threads is started by another thread, then it will nonetheless be joined by the thread running the mainloop, not the parent thread. This problem holds true for AutoJoiningProcess too, except there I imagine an exception would be thrown.
Thus to use this approach, it would be best to only create threads/process from within the mainloop/GUI.
You can use a queue to communicate with child processes. You can stick intermediate results on it, or messages indicating that milestones have been hit (for progress bars), or just a message indicating that the process is ready to be joined. Polling it with empty() is easy and fast.
If you really only want to know if it's done, you can watch the exitcode of your process or poll is_alive().
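For instance, a rough sketch of that idea from a PyGTK main loop might look like the following; the worker function, the 200 ms interval, and the use of gobject.timeout_add as the polling mechanism are my own choices, not part of the original answer:

import multiprocessing
import gobject

def worker(q):
    # stand-in for the real long-running job
    q.put(sum(range(10 ** 6)))

def make_poller(proc, q):
    def poll():
        while not q.empty():            # drain messages without blocking
            print "Message from child:", q.get()
        if not proc.is_alive():         # child has exited
            proc.join()                 # safe now; will not block
            print "Child exit code:", proc.exitcode
            return False                # remove the timeout
        return True                     # keep polling
    return poll

q = multiprocessing.Queue()
proc = multiprocessing.Process(target=worker, args=(q,))
proc.start()
gobject.timeout_add(200, make_poller(proc, q))   # check five times a second

The poller runs inside the GUI main loop, so the GUI never blocks; it only peeks at the queue and at is_alive() a few times a second.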
In my efforts to try to find an answer to my own question, I stumbled across PyGTK's idle_add() function. This gives me the following possibility:
Create a new child process that communicates via a Queue.
Create a listener thread that listens to the Queue. When the child process sends the listener a message saying that it is finished, the listener calls idle_add() to set up a callback.
The next time around the main loop, the parent process will call the callback.
The callback can extract results, join the child process and join the listener-thread.
This seems an overly complex way to re-create Unix's call-callback-when-child-process-is-done.
This must be an uber-common problem with GUIs in Python. Surely there is a standard pattern to solve this problem?
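A minimal sketch of the pattern described above; do_work, listener and on_done are made-up names, and the "finished" string is just a sentinel I chose:

import threading
import multiprocessing
import gobject

def do_work(q):
    # the long-running job lives here
    q.put("finished")                      # tell the listener we are done

def on_done(process):
    # runs in the GUI main loop: extract results and join the child here
    process.join()
    print "Child joined from the main loop"
    return False                           # do not reschedule the idle callback

def listener(q, process):
    q.get()                                # blocks this helper thread, not the GUI
    gobject.idle_add(on_done, process)     # hand control back to the main loop

q = multiprocessing.Queue()
p = multiprocessing.Process(target=do_work, args=(q,))
p.start()
threading.Thread(target=listener, args=(q, p)).start()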
have a look at the subprocess module:
http://docs.python.org/library/subprocess.html
import subprocess

pipe = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
allText = pipe.stdout.read()
pipe.wait()
retVal = pipe.returncode
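If blocking on wait() is not acceptable (for example, inside a GUI), Popen.poll() can be checked periodically instead; it returns None while the child is still running. A small sketch, with the command just a placeholder:

import subprocess

pipe = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)

# ... later, e.g. from a periodic GUI timer callback ...
if pipe.poll() is None:
    pass                         # still running; check again later
else:
    allText = pipe.stdout.read()
    retVal = pipe.returncode     # valid once the child has exited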
Related
What's the proper way to tell a looping thread to stop looping?

I have a fairly simple program that pings a specified host in a separate threading.Thread class. In this class it sleeps 60 seconds, then runs again until the application quits. I'd like to implement a 'Stop' button in my wx.Frame to ask the looping thread to stop. It doesn't need to end the thread right away; it can just stop looping once it wakes up.

Here is my threading class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets):

class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset

    def run(self):
        config = controller.getConfig()
        fmt = config['timefmt']
        start_time = datetime.now().strftime(fmt)
        try:
            if onlinecheck.check_status(self.asset):
                status = "online"
            else:
                status = "offline"
        except socket.gaierror:
            status = "an invalid asset tag."
        msg = ("{}: {} is {}. \n".format(start_time, self.asset, status))
        wx.CallAfter(self.window.Logger, msg)

And in my wxPython Frame I have this function, called from a Start button:

def CheckAsset(self, asset):
    self.count += 1
    thread = PingAssets(self.count, asset, self)
    self.threads.append(thread)
    thread.start()
Threaded stoppable function

Instead of subclassing threading.Thread, one can modify the function to allow stopping by a flag. We need an object, accessible to the running function, on which we set the flag to stop running. We can use the threading.currentThread() object.

import threading
import time

def doit(arg):
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print ("working on %s" % arg)
        time.sleep(1)
    print("Stopping as you wish.")

def main():
    t = threading.Thread(target=doit, args=("task",))
    t.start()
    time.sleep(5)
    t.do_run = False

if __name__ == "__main__":
    main()

The trick is that the running thread can have additional properties attached. The solution builds on these assumptions:

- the thread has a property "do_run" with default value True
- the driving parent process can set the started thread's "do_run" property to False

Running the code, we get the following output:

$ python stopthread.py
working on task
working on task
working on task
working on task
working on task
Stopping as you wish.

Pill to kill - using Event

Another alternative is to use a threading.Event as a function argument. It is False by default, but an external process can "set it" (to True), and the function can learn about it using the wait(timeout) function. We can wait with zero timeout, but we can also use it as the sleeping timer (used below).

def doit(stop_event, arg):
    while not stop_event.wait(1):
        print ("working on %s" % arg)
    print("Stopping as you wish.")

def main():
    pill2kill = threading.Event()
    t = threading.Thread(target=doit, args=(pill2kill, "task"))
    t.start()
    time.sleep(5)
    pill2kill.set()
    t.join()

Edit: I tried this in Python 3.6. stop_event.wait() blocks the event (and so the while loop) until release. It does not return a boolean value. Using stop_event.is_set() works instead.

Stopping multiple threads with one pill

The advantage of the pill to kill is better seen if we have to stop multiple threads at once, as one pill works for all. doit does not change at all; only main handles the threads a bit differently.

def main():
    pill2kill = threading.Event()
    tasks = ["task ONE", "task TWO", "task THREE"]

    def thread_gen(pill2kill, tasks):
        for task in tasks:
            t = threading.Thread(target=doit, args=(pill2kill, task))
            yield t

    threads = list(thread_gen(pill2kill, tasks))
    for thread in threads:
        thread.start()
    time.sleep(5)
    pill2kill.set()
    for thread in threads:
        thread.join()
This has been asked before on Stack. See the following links:

Is there any way to kill a Thread in Python?
Stopping a thread after a certain amount of time

Basically you just need to set up the thread with a stop function that sets a sentinel value that the thread will check. In your case, you'll have something in your loop check the sentinel value to see if it has changed; if it has, the loop can break and the thread can die.
I read the other questions on Stack but I was still a little confused on communicating across classes. Here is how I approached it.

I use a list to hold all my threads in the __init__ method of my wxFrame class:

self.threads = []

As recommended in How to stop a looping thread in Python?, I use a signal in my thread class which is set to True when initializing the threading class:

class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset
        self.signal = True

    def run(self):
        while self.signal:
            do_stuff()
            sleep()

and I can stop these threads by iterating over my threads:

def OnStop(self, e):
    for t in self.threads:
        t.signal = False
I had a different approach. I've sub-classed a Thread class and in the constructor I've created an Event object. Then I've written a custom join() method, which first sets this event and then calls the parent's version of itself. Here is my class, which I'm using for serial port communication in a wxPython app:

import wx, threading, serial, Events, Queue

class PumpThread(threading.Thread):

    def __init__ (self, port, queue, parent):
        super(PumpThread, self).__init__()
        self.port = port
        self.queue = queue
        self.parent = parent

        self.serial = serial.Serial()
        self.serial.port = self.port
        self.serial.timeout = 0.5
        self.serial.baudrate = 9600
        self.serial.parity = 'N'

        self.stopRequest = threading.Event()

    def run (self):
        try:
            self.serial.open()
        except Exception, ex:
            print ("[ERROR]\tUnable to open port {}".format(self.port))
            print ("[ERROR]\t{}\n\n{}".format(ex.message, ex.traceback))
            self.stopRequest.set()
        else:
            print ("[INFO]\tListening port {}".format(self.port))
            self.serial.write("FLOW?\r")

        while not self.stopRequest.isSet():
            msg = ''
            if not self.queue.empty():
                try:
                    command = self.queue.get()
                    self.serial.write(command)
                except Queue.Empty:
                    continue

            while self.serial.inWaiting():
                char = self.serial.read(1)
                if '\r' in char and len(msg) > 1:
                    char = ''
                    #~ print('[DATA]\t{}'.format(msg))
                    event = Events.PumpDataEvent(Events.SERIALRX, wx.ID_ANY, msg)
                    wx.PostEvent(self.parent, event)
                    msg = ''
                    break
                msg += char

        self.serial.close()

    def join (self, timeout=None):
        self.stopRequest.set()
        super(PumpThread, self).join(timeout)

    def SetPort (self, serial):
        self.serial = serial

    def Write (self, msg):
        if self.serial.is_open:
            self.queue.put(msg)
        else:
            print("[ERROR]\tPort {} is not open!".format(self.port))

    def Stop(self):
        if self.isAlive():
            self.join()

The Queue is used for sending messages to the port, and the main loop takes responses back. I've used no serial.readline() method, because of different end-line chars, and I found the usage of io classes to be too much fuss.
Depends on what you run in that thread. If that's your code, then you can implement a stop condition (see other answers). However, if what you want is to run someone else's code, then you should fork and start a process. Like this:

import multiprocessing

proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()

Now, whenever you want to stop that process, send it a SIGTERM like this:

proc.terminate()
proc.join()

And it's not slow: fractions of a second. Enjoy :)
My solution is:

import threading, time

def a():
    t = threading.currentThread()
    while getattr(t, "do_run", True):
        print('Do something')
        time.sleep(1)

def getThreadByName(name):
    threads = threading.enumerate()  # Threads list
    for thread in threads:
        if thread.name == name:
            return thread

threading.Thread(target=a, name='228').start()  # Init thread
t = getThreadByName('228')  # Get thread by name
time.sleep(5)
t.do_run = False  # Signal to stop thread
t.join()
I find it useful to have a class, derived from threading.Thread, to encapsulate my thread functionality. You simply provide your own main loop in an overridden version of run() in this class. Calling start() arranges for the object's run() method to be invoked in a separate thread.

Inside the main loop, periodically check whether a threading.Event has been set. Such an event is thread-safe.

Inside this class, you have your own join() method that sets the stop event object before calling the join() method of the base class. It can optionally take a time value to pass to the base class's join() method to ensure your thread is terminated in a short amount of time.

import threading
import time

class MyThread(threading.Thread):
    def __init__(self, sleep_time=0.1):
        self._stop_event = threading.Event()
        self._sleep_time = sleep_time
        """call base class constructor"""
        super().__init__()

    def run(self):
        """main control loop"""
        while not self._stop_event.isSet():
            # do work
            print("hi")
            self._stop_event.wait(self._sleep_time)

    def join(self, timeout=None):
        """set stop event and join within a given time period"""
        self._stop_event.set()
        super().join(timeout)

if __name__ == "__main__":
    t = MyThread()
    t.start()
    time.sleep(5)
    t.join(1)  # wait 1s max

Having a small sleep inside the main loop before checking the threading.Event is less CPU intensive than looping continuously. You can have a default sleep time (e.g. 0.1 s), but you can also pass the value in the constructor.
Sometimes you don't have control over the running target. In those cases you can use signal.pthread_kill to send a stop signal.

from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)

result:

0
1
2
3
4
[14]+  Stopped
How to Interrupt/Stop/End a hanging multi-threaded python program
I have a python program that implements threads like this:

class Mythread(threading.Thread):
    def __init__(self, name, q):
        threading.Thread.__init__(self)
        self.name = name
        self.q = q

    def run(self):
        print "Starting %s..." % (self.name)
        while True:
            ## Get data from queue
            data = self.q.get()
            ## do_some_processing with data ###
            process_data(data)
            ## Mark Queue item as done
            self.q.task_done()
        print "Exiting %s..." % (self.name)

def call_threaded_program():
    ##Setup the threads. Define threads, queue, locks
    threads = []
    q = Queue.Queue()
    thread_count = n  # some number
    data_list = []    # some data list containing data

    ##Create Threads
    for thread_id in range(1, thread_count+1):
        thread_name = "Thread-" + str(thread_id)
        thread = Mythread(thread_name, q)
        thread.daemon = True
        thread.start()

    ##Fill data in Queue
    for data_item in data_list:
        q.put(data_item)

    try:
        ##Wait for queue to be exhausted and then exit main program
        q.join()
    except (KeyboardInterrupt, SystemExit) as e:
        print "Interrupt Issued. Exiting Program with error state: %s" % (str(e))
        exit(1)

The call_threaded_program() is called from a different program. I have the code working under normal circumstances. However, if an error/exception occurs in one of the threads, then the program is stuck (as the queue join is infinitely blocking). The only way I am able to quit this program is to close the terminal itself.

What is the best way to terminate this program when a thread bails out? Is there a clean (actually I would take any) way of doing this? I know this question has been asked numerous times, but I am still unable to find a convincing answer. I would really appreciate any help.

EDIT: I tried removing the join on the queue and used a global exit flag as suggested in Is there any way to kill a Thread in Python? However, now the behavior is so strange that I can't comprehend what is going on.

import threading
import Queue
import time

exit_flag = False

class Mythread (threading.Thread):
    def __init__(self, name, q):
        threading.Thread.__init__(self)
        self.name = name
        self.q = q

    def run(self):
        try:
            # Start Thread
            print "Starting %s...." % (self.name)
            # Do Some Processing
            while not exit_flag:
                data = self.q.get()
                print "%s processing %s" % (self.name, str(data))
                self.q.task_done()
            # Exit thread
            print "Exiting %s..." % (self.name)
        except Exception as e:
            print "Exiting %s due to Error: %s" % (self.name, str(e))

def main():
    global exit_flag
    ##Setup the threads. Define threads, queue, locks
    threads = []
    q = Queue.Queue()
    thread_count = 20
    data_list = range(1, 50)

    ##Create Threads
    for thread_id in range(1, thread_count+1):
        thread_name = "Thread-" + str(thread_id)
        thread = Mythread(thread_name, q)
        thread.daemon = True
        threads.append(thread)
        thread.start()

    ##Fill data in Queue
    for data_item in data_list:
        q.put(data_item)

    try:
        ##Wait for queue to be exhausted and then exit main program
        while not q.empty():
            pass

        # Stop the threads
        exit_flag = True

        # Wait for threads to finish
        print "Waiting for threads to finish..."
        while threading.activeCount() > 1:
            print "Active Threads:", threading.activeCount()
            time.sleep(1)
            pass

        print "Finished Successfully"
    except (KeyboardInterrupt, SystemExit) as e:
        print "Interrupt Issued. Exiting Program with error state: %s" % (str(e))

if __name__ == '__main__':
    main()

The threads get started correctly and the output is getting processed, but towards the end all I see is:

Active Threads: 16
Active Threads: 16
Active Threads: 16...

The program then just hangs or keeps on printing the active threads.
However, since the exit flag is set to True, the thread's run method is not being exercised. So I have no clue how these threads are kept up or what is happening.

EDIT: I found the problem. In the above code, the threads' get method was blocking and hence they were unable to quit. Using a get method with a timeout instead did the trick. Below is the code for just the run method that I modified:

def run(self):
    try:
        # Start Thread
        print "Starting %s..." % (self.name)
        # Do Some processing
        while not exit_flag:
            try:
                data = self.q.get(True, self.timeout)
                print "%s processing %s" % (self.name, str(data))
                self.q.task_done()
            except:
                print "Queue Empty or Timeout Occurred. Try Again for %s" % (self.name)
        # Exit thread
        print "Exiting %s..." % (self.name)
    except Exception as e:
        print "Exiting %s due to Error: %s" % (self.name, str(e))
If you want to force all the threads to exit when the process exits, you can set the "daemon" flag of the thread to True before the thread is created. http://docs.python.org/2/library/threading.html#threading.Thread.daemon
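For example, a tiny sketch (the worker function here is made up):

import threading
import time

def worker():
    while True:
        time.sleep(1)      # pretend to do background work

t = threading.Thread(target=worker)
t.daemon = True            # must be set before start()
t.start()
# when the main thread exits, daemon threads are killed along with the process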
I did it once in C. Basically I had a main process that started the others and kept track of them, i.e. stored the PIDs and waited for the return codes. If you have an error in a process, the code will indicate so, and then you can stop every other process. Hope this helps.

Edit: Sorry, I forgot in my answer that you were using threads. But I think it still applies. You can either wrap or modify the thread to get a return value, or you can use a thread pool library. See: how to get the return value from a thread in python? and Python thread exit code.
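One way to wrap a thread so the parent can inspect its outcome might look like this (ResultThread is my own name, not a standard class):

import threading

class ResultThread(threading.Thread):
    """Runs func(*args) and stores its return value or the exception it raised."""
    def __init__(self, func, args=()):
        super(ResultThread, self).__init__()
        self.func, self.args = func, args
        self.result = None
        self.error = None

    def run(self):
        try:
            self.result = self.func(*self.args)
        except Exception, e:       # Python 2 syntax, matching the rest of the page
            self.error = e

t = ResultThread(lambda x: x * 2, (21,))
t.start()
t.join()
print t.error if t.error else t.result    # 42, or the exception if one was raised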
How to "listen" to a multiprocessing queue in Python
I will start with the code, I hope it is simple enough:

import Queue
import multiprocessing

class RobotProxy(multiprocessing.Process):
    def __init__(self, commands_q):
        multiprocessing.Process.__init__(self)
        self.commands_q = commands_q

    def run(self):
        self.listen()
        print "robot started"

    def listen(self):
        print "listening"
        while True:
            print "size", self.commands_q.qsize()
            command = self.commands_q.get()
            print command
            if command is "start_experiment":
                self.start_experiment()
            elif command is "end_experiment":
                self.terminate_experiment()
                break
            else:
                raise Exception("Communication command not recognized")
        print "listen finished"

    def start_experiment(self):
        #self.vision = ds.DropletSegmentation( )
        print "start experiment"

    def terminate_experiment(self):
        print "terminate experiment"

if __name__ == "__main__":
    command_q = Queue.Queue()
    robot_proxy = RobotProxy(command_q)
    robot_proxy.start()
    #robot_proxy.listen()

    print "after start"
    print command_q.qsize()
    command_q.put("start_experiment")
    command_q.put("end_experiment")
    print command_q.qsize()

    raise SystemExit

So basically I launch a process, and I want this process to listen to commands put on the Queue. When I execute this code, I get the following:

after start
0
2
listening
size 0

It seems that I am not sharing the Queue properly, or that I am making some other error. The program gets stuck forever on that self.commands_q.get(), even though in theory the queue has 2 elements.
You need to use multiprocessing.Queue instead of Queue.Queue in order to have the Queue object be shared across processes. See here: Multiprocessing Queues
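Applied to the question's __main__ block, the change is roughly as follows (the added join() is mine):

import multiprocessing

if __name__ == "__main__":
    command_q = multiprocessing.Queue()   # shared with the child process
    robot_proxy = RobotProxy(command_q)
    robot_proxy.start()

    command_q.put("start_experiment")
    command_q.put("end_experiment")
    robot_proxy.join()                    # wait for the robot process to exit

Note that the command comparisons in listen() would also need == instead of is to be reliable once the strings have crossed the process boundary.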
Check if parent thread is running
I am wondering how to check whether a parent thread is still alive/stuck. Basically I have a parent thread sending commands to a child. If the parent thread dies or hits a deadlock condition, I do not want the child to continue to live. Below is the basic framework of my implementation thus far:

from Queue import Queue
from threading import Thread

class myClass:
    def __init__(self):
        self.currentCommand = Queue()
        t = Thread(target=self._run)
        t.start()

    def close(self):
        self._sendCommand("close")

    def _run(self):
        while True:
            if self.currentCommand.empty():
                pass  # do some task
            else:
                command = self.currentCommand.get()
                if command == "close":
                    # clean up
                    self.currentCommand.task_done()
                    break
                else:
                    # do command task
                    self.currentCommand.task_done()

    def _sendCommand(self, command):
        self.currentCommand.put(command)
        self.currentCommand.join()

One idea I have is to periodically send the computer time from the parent to the child. If the time is older than a set value, the child will die. Is there an easier or more effective method? Also, within the Python documentation there is an isAlive method in the threading class, but I am unsure how to use it.
You could just pass an Event object down to the child thread, which it can check to see if the parent indicated a quit. Then you just wrap the critical section in the parent thread with a finally that will set the bit no matter what:

import time
from threading import Thread, Event

def child(quit):
    for _ in xrange(10):
        if quit.isSet():
            print "Parent is dead. Leaving child."
            return
        print "Child alive"
        time.sleep(.5)

def parent():
    quitEvent = Event()
    t = Thread(target=child, args=(quitEvent,))
    t.start()
    try:
        time.sleep(2)
        raise Exception("Parent thread raises exception")
    finally:
        quitEvent.set()
        t.join()

if __name__ == "__main__":
    t = Thread(target=parent, args=())
    t.start()
    t.join()

Though the matter of the parent thread deadlocking during its own work would probably require a "heartbeat" approach like you suggested, where it periodically indicates that it is alive. You could do that with either the queue that you pass down to the child, or you can continue to use the Event object. The parent would periodically set the event, and the child would expect it to be set at certain intervals, then clear it right after.

Here is an example of using the Event as a heartbeat, in the case where the parent might be deadlocked and not checking in:

def child(heartbeat):
    for _ in xrange(10):
        if not heartbeat.isSet():
            print "Parent is dead. Leaving child."
            return
        heartbeat.clear()
        print "Child alive"
        time.sleep(1)

def parent():
    heartbeat = Event()
    heartbeat.set()
    t = Thread(target=child, args=(heartbeat,))
    t.start()

    i = 0
    while i < 20:
        print "Parent alive"
        i += 1
        heartbeat.set()
        time.sleep(.1)

    print "Parent done looping...pretending to be deadlocked"
    time.sleep(5)
    t.join()

As the parent is doing its own work, it keeps setting the heartbeat bit. The child checks this bit periodically. If it finds the parent has not set it, it assumes the parent is dead and quits. You would need to establish an appropriate heartbeat interval: the parent needs to check in more often than the child checks it, or the child might check too soon and think the parent is gone.
It's possible to use isAlive if you somehow share the parent thread object with the child:

parent_thread = None

def child():
    while True:
        time.sleep(1)
        if not parent_thread.isAlive():
            break
        print('child alive')

def parent():
    t = threading.Thread(target=child)
    t.start()
    for i in range(10):
        print('parent alive')
        time.sleep(1)

parent_thread = threading.Thread(target=parent)
parent_thread.start()
You can use the following line: threading.main_thread().is_alive()
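A small sketch of a worker that stops once the main thread has exited (Python 3.4+, since that is where threading.main_thread() was added; the worker function is made up):

import threading
import time

def worker():
    while threading.main_thread().is_alive():
        print("main thread still running")
        time.sleep(1)
    print("main thread has exited, stopping worker")

t = threading.Thread(target=worker)
t.start()
time.sleep(3)
# the main thread ends here; the worker notices within a second and returns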
python multiprocessing member variable not set
In the following script, I get the "stop message received" output but the process never ends. Why is that? Is there another way to end a process besides terminate or os.kill that is along these lines?

from multiprocessing import Process
from time import sleep

class Test(Process):
    def __init__(self):
        Process.__init__(self)
        self.stop = False

    def run(self):
        while self.stop == False:
            print "running"
            sleep(1.0)

    def end(self):
        print "stop message received"
        self.stop = True

if __name__ == "__main__":
    test = Test()
    test.start()
    sleep(1.0)
    test.end()
    test.join()
The start method has cloned the object into a separate process, where it executes run. The end method is nothing special, so it runs in the process that calls it -- the changes it performs on that object are not sent to the clone. So, use instead an appropriate means of interprocess communication, such as a multiprocessing.Event instance, e.g.:

from multiprocessing import Process, Event
from time import sleep

class Test(Process):
    def __init__(self):
        Process.__init__(self)
        self.stop = Event()

    def run(self):
        while not self.stop.is_set():
            print "running"
            sleep(1.0)

    def end(self):
        print "stop message received"
        self.stop.set()

if __name__ == "__main__":
    test = Test()
    test.start()
    sleep(1.0)
    test.end()
    test.join()

As you see, the required changes are minimal.