Related: Catch a thread's exception in the caller thread in Python
When I catch an exception in a child thread,
I need to let the calling thread know.
In the related post, the accepted answer checks a queue, which is fine, but it prevents the main caller from doing anything else because it keeps checking the queue.
So it would be ideal if it were event based.
The solution should be: the main caller does its own work (not continuously checking the queue for errors in the child thread), and if something goes wrong inside the child thread, the child thread lets the main caller know through some event and the main caller processes the error.
I've been looking at different articles and solutions, but all the event material is about the main caller communicating with a child thread, not the other way around.
Is there any solution where a child thread communicates with the caller via events?
There is no such thing as a "thread caller". All threads are equal.
In your head the "main thread" is something special because Python gives it to you for free, but as far as Python is concerned, the main thread is just a thread like any other.
So your question boils down to: How can I exchange information between two threads?
With one or more queues.
Usually, the pseudo code looks like this:
1. Create N workers, connected to an input and an output queue.
2. Put work into the input queue.
3. Wait for results on the output queue.
4. Join the threads.
Note that you don't have to block in step #3. You can also ask the queue whether it has any elements. If so, then get() won't block. If it's empty, then you can do other things.
Step #4 is often simpler to implement when you define a "kill" command to which the threads respond with an "I'm done" reply.
You can then send N kill commands and wait for the N replies. Afterwards, you can be sure that all threads are done.
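For instance, a minimal runnable sketch of that pattern might look like this (the sentinel object, the doubling "work", and the queue names are all illustrative choices, not part of any standard API):

import queue
import threading

KILL = object()                      # the "kill" command (step #4)
N = 4

def worker(work_q, result_q):
    while True:
        item = work_q.get()
        if item is KILL:
            result_q.put("I'm done")     # reply so the other side can count us out
            break
        result_q.put(item * 2)           # stand-in for real work

work_q, result_q = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(work_q, result_q)) for _ in range(N)]
for w in workers:
    w.start()                            # step 1

for i in range(10):
    work_q.put(i)                        # step 2

for _ in range(N):
    work_q.put(KILL)                     # step 4: send N kill commands

done = 0
while done < N:                          # step 3: wait for results (or poll with get_nowait())
    reply = result_q.get()
    if reply == "I'm done":
        done += 1
    else:
        print("result:", reply)

for w in workers:
    w.join()                             # step 4: join the threads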
Related
When running my code I start a thread that runs for around 50 seconds and does a lot of background work. If I run the program and then close it soon after, that work carries on in the background for a while because the thread never dies. How can I kill the thread gracefully in my closeEvent method in my MainWindow class? I've tried setting up a method called exit(), creating a signal 'quitOperation' in the thread in question, and then tried to use
myThread.quitOperation.emit()
I expected that this would call my exit() function in my thread because I have this line in my constructor:
self.quitOperation.connect(self.exit)
However, when I use the first line it breaks, saying that 'myThread' has no attribute 'quitOperation'. Why is this? Is there a better way?
I'm not sure about Python, but I assume myThread.quitOperation.emit() emits a signal telling the thread to exit. The point is that while your worker is using the thread and neither returns nor runs QCoreApplication::processEvents(), myThread never gets a chance to actually process your request (this is called thread starvation).
The correct answer may depend on the situation and the nature of the "stuff" your thread is doing. The most common practice is that the main thread sends a signal to the worker thread, where a slot sets a flag. In the blocking process you regularly check this flag. If it is set, you stop whatever "stuff" you are doing, tell your worker thread that it can quit (with a signal, preferably with a queued connection), call deleteLater() on the worker object itself, and return from any functions you are currently in, so that the thread's event handler can run, clean up the worker object and itself, and then finally quit.
In case your "stuff" is a huge cycle of very fast operations like simple mathematics or one-by-one directory navigation that take only a few milliseconds each, this will be enough.
In case your "stuff" contains huge blocking parts that you have no control over (and thus you can't place this flag-checking call in them), you may need to wait in the main thread until the worker thread quits.
In case you use a direct connection to set the flag, or you set it directly, be sure to protect read/write access to the flag with a QMutex to prevent inconsistent reads, or use a queued connection to ensure single-thread access to the flag.
While highly discouraged, you can optionally use QThread's terminate() method to kill the thread instantly. You should never do this, as it may cause memory leaks, heap corruption, resource leaks and other nasty things, since destructors and clean-up code will not run and execution can be halted in an undesired state.
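For reference, here is a minimal PyQt5 sketch of the flag-plus-mutex pattern described above. The names (Worker, request_stop) and the wiring are illustrative assumptions, not the original poster's code:

from PyQt5.QtCore import QMutex, QObject, QThread, pyqtSignal

class Worker(QObject):
    finished = pyqtSignal()

    def __init__(self):
        super().__init__()
        self._mutex = QMutex()
        self._stop = False              # the flag the main thread sets

    def request_stop(self):
        # Called directly from the main thread; the mutex keeps this write
        # consistent with the read in run(), as suggested above.
        self._mutex.lock()
        self._stop = True
        self._mutex.unlock()

    def run(self):
        while True:
            self._mutex.lock()
            stop = self._stop
            self._mutex.unlock()
            if stop:
                break
            # ... do one small, quick unit of "stuff" per iteration ...
        self.finished.emit()            # lets the main thread know the worker is done

# Wiring in the main thread (e.g. in the MainWindow constructor):
#   self.thread = QThread()
#   self.worker = Worker()
#   self.worker.moveToThread(self.thread)
#   self.thread.started.connect(self.worker.run)
#   self.worker.finished.connect(self.thread.quit)
#   self.thread.start()
# and later, in closeEvent():
#   self.worker.request_stop()
#   self.thread.wait()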
I am writing a class that creates threads that time out if they are not used within a certain time. The class allows you to pump data to a specific thread (by keyword), and if the thread doesn't exist it creates it.
Anyway, the problem I have is that the main supervisor class doesn't know when threads have ended. I can't use blocking code like join or polling to see if a thread is alive. What I want is an event handler that is called when a thread ends (or is just about to end) so that I can inform the supervisor that the thread is no longer active.
Is this something that can be done with signal or something similar?
As pseudocode, I'm looking for something like:
def myHandlerFunc():
    # inform supervisor the thread is dead

t1 = ThreadFunc()
t1.eventHandler(condition=thread_dies, handler=myHandlerFunc)
EDIT: Perhaps a better way would be to pass a reference to the parent down to the thread, and have the thread tell the parent class directly. I'm sure someone will tell me off for data flow inversion.
EDIT: Here is some pseudocode:
class supervisor():
    def __init__:
        Setup thread dict with all threads as inactive

    def dispatch(target, message):
        if(target thread inactive):
            create new thread
        send message to thread

    def thread_timeout_handler():
        # Func is called asynchronously when a thread dies
        # Does some stuff over here

def ThreadFunc():
    while( !timeout ):
        wait for message:
            do stuff with message
    (Tell supervisor thread is closing?)
    return
The main point is that you send messages to the threads (referenced by keyword) through the supervisor. The supervisor makes sure the thread is alive (since they timeout after a while), creates a new one if it dies, and sends the data over.
Looking at this again, it's easy to avoid needing an event handler as I can just check if the thread is alive using threadObj.isAlive() instead of dynamically keeping a dict of thread statuses.
But out of curiosity, is it possible to get a handler to be called in the supervisor class by signals sent from the thread? The main App code would call the supervisor.dispatch() function once, then do other stuff. It would later be interrupted by the thread_timeout_handler function, as the thread had closed.
You still don't mention if you are using a message/event loop framework, which would provide a way for you to dispatch a call to the "main" thread and call an event handler.
Assuming you're not, then you can't just interrupt or call into the main thread.
You don't need to, though, as you only need to know if a thread is alive when you decide if you need to create a new one. You can do your checking at this time. This way, you only need a way to communicate the "finished" state between threads. There are a lot of ways to do this (I've never used .isAlive(), but you can pass information back in a Queue, Event, or even a shared variable).
Using Event it would look something like this:
class supervisor():
    def __init__:
        Setup thread dict with all threads as inactive

    def dispatch(target, message):
        if(thread.event.is_set()):
            create new thread
            thread.event = Event()
        send message to thread

def ThreadFunc(event):
    while( !timeout ):
        wait for message:
            do stuff with message
    event.set()
    return
Note that this way there is still a possible race condition: the supervisor thread might check is_set() right before the worker thread calls .set(), which misrepresents the thread's ability to do work. The same problem would exist with isAlive().
Is there a reason you don't just use a threadpool?
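For completeness, a runnable version of the Event-based sketch above could look like this (the Supervisor/thread_func names and the five-second timeout are made up, and the race condition just described is still present):

import queue
import threading

TIMEOUT = 5.0                            # how long a worker waits before giving up

def thread_func(inbox, event):
    while True:
        try:
            message = inbox.get(timeout=TIMEOUT)
        except queue.Empty:
            break                        # timed out: shut down
        print("working on", message)     # do stuff with message
    event.set()                          # tell the supervisor this thread is finished

class Supervisor:
    def __init__(self):
        self.threads = {}                # keyword -> (thread, inbox, event)

    def dispatch(self, target, message):
        entry = self.threads.get(target)
        if entry is None or entry[2].is_set():
            # Dead or never started: create a new thread with a fresh inbox
            # and Event. (The worker may still time out right after this check.)
            inbox, event = queue.Queue(), threading.Event()
            t = threading.Thread(target=thread_func, args=(inbox, event))
            t.start()
            entry = self.threads[target] = (t, inbox, event)
        entry[1].put(message)            # send message to thread

sup = Supervisor()
sup.dispatch("sensor-a", "hello")
sup.dispatch("sensor-a", "world")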
I have an application (actually a plugin for another application) that manages threads to communicate with external sensor devices. The external sensors send events to the application, but the application may also send actions to the sensors. There are several types of devices, and each has unique qualities (temperature, pressure, etc.) that require special coding. All communication with the sensor devices is over IP.
In the application, I create a thread for each instance of a sensor. This is an example of the code:
self.phThreadDict[phDevId] = tempsensor(self, phDevId, phIpAddr, phIpPort, phSerial, self.triggerDict)
self.phThreadDict[phDevId].start()
In each thread I set up callback handlers for events sent by the sensor and then go into a loop at the end:
while not self.shutdown:
    self.plugin.sleep(0.5)
The threads then handle incoming events and make calls into the main thread, or the actual program that spawned the main thread. All of this works quite well.
But, at times I also need to send requests to a specific sensor. Methods are defined in each thread for that purpose and I call those methods from the main thread. For example:
self.phThreadDict[dev.id].writeDigitalOutput(textLine, lcdMessage)
This also works, but I believe the code is actually executed in the main thread rather than in the thread specific to the sensor.
My question is: What options do I have for passing work to the specific target thread and having the thread execute the work and then return success or fail?
Expanding a bit on Thomas Orozco's spot-on comments,
self.phThreadDict[dev.id].writeDigitalOutput(textLine, lcdMessage)
is executed in whichever thread runs it. If you run it from the main thread, then the main thread will do all of it. If from some other thread, then that thread will run it.
In addition to a Queue per thread, for the threads to receive descriptions of work items to process, you also want a single Queue for threads to put results on (you can also use another Queue per thread for this, but that's overkill).
The main thread will pull results off the latter Queue. Note that you can - and it's very common to do so - put tuples on Queues. So, for example, on the talk-back-to-the-main-thread Queue threads will likely put tuples of the form:
(result, my_thread_id, original_work_description)
That's enough to figure out which thread returned what result in response to which work item. Maybe you don't need all of that. Maybe you need more than that. Can't guess ;-)
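As a rough illustration of that convention (the worker, the fake "work", and the shutdown sentinel below are invented for the example):

import queue
import threading

results = queue.Queue()                  # the single talk-back-to-the-main-thread Queue

def worker(my_thread_id, work_q):
    while True:
        work = work_q.get()
        if work is None:                 # shutdown sentinel
            break
        result = len(work)               # stand-in for the real sensor call
        results.put((result, my_thread_id, work))   # (result, my_thread_id, original_work_description)

work_q = queue.Queue()
t = threading.Thread(target=worker, args=("sensor-1", work_q))
t.start()
work_q.put("read temperature")
work_q.put(None)
t.join()

while not results.empty():
    result, thread_id, work = results.get()
    print(thread_id, "finished", repr(work), "->", result)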
Indeed, this is executing code in the main thread.
Use queues, that's what they're meant for (task synchronization and message passing between threads).
Use one queue per sensor manager thread.
Your sensor manager threads should be getting items from the queue instead of sleeping (this is a blocking call).
Your "main" thread should be putting items in the queue instead of running functions (this is generally a non-blocking call).
All you need to do is define a message format that lets the main thread tell the manager threads what functions to execute and what arguments to use.
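For example, one possible message format is a (method name, arguments) tuple that the sensor thread looks up on itself; the TempSensor class below is a simplified stand-in for your tempsensor, not its real interface:

import queue
import threading

class TempSensor(threading.Thread):
    def __init__(self):
        super().__init__()
        self.commands = queue.Queue()        # one queue per sensor manager thread

    def writeDigitalOutput(self, textLine, lcdMessage):
        print("writing", textLine, lcdMessage)   # runs in the sensor's own thread

    def run(self):
        while True:
            name, args = self.commands.get() # blocks here instead of sleeping
            if name == "shutdown":
                break
            getattr(self, name)(*args)       # execute the requested method

# In the main thread, instead of calling the method directly:
sensor = TempSensor()
sensor.start()
sensor.commands.put(("writeDigitalOutput", ("line 1", "hello")))
sensor.commands.put(("shutdown", ()))
sensor.join()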
I have a program using a thread. When my program is closed, my thread is still running, and that's normal. I would like to know how my thread can detect, by itself ONLY, that the main program has terminated. How would I do that?
My thread is in an infinite loop and processes many objects from a Queue. I can't define my thread as a daemon, or I could lose some data at the end of the main program. And I don't want the main program to set a boolean value when it closes.
If you can get a handle to the main thread, you can call is_alive() on it.
Alternatively, you can call threading.enumerate() to get a list of all currently living threads, and check to see if the main thread is in there.
Or if even that is impossible, then you might be able to check to see if the child thread is the only remaining non-daemon thread.
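A small runnable sketch of the first idea, using threading.main_thread() (Python 3.4+); the worker and queue names are invented, and threading.enumerate() could be used for the same check:

import queue
import threading

def worker(jobs):
    main = threading.main_thread()               # handle to the main thread
    while main.is_alive() or not jobs.empty():   # keep draining after main exits
        try:
            item = jobs.get(timeout=0.5)
        except queue.Empty:
            continue
        print("processing", item)                # no queued data is lost at shutdown

jobs = queue.Queue()
threading.Thread(target=worker, args=(jobs,)).start()   # note: not a daemon
jobs.put("last job")
# When the main program falls off the end here, the worker notices and exits.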
Would it work if your manager tracked how many open threads there were, then the children killed themselves when starved of input? So the parent would start pushing data on to the queue, and the workers would consume data from the queue. If a worker found nothing on the queue for a certain timeout period, it would kill itself. The main thread would then track how many workers were operating and periodically start new workers if the number of active workers were under a given threshold.
In my program I have a bunch of threads running and I'm trying
to interrupt the main thread to get it to do something asynchronously.
So I set up a handler and send the main process a SIGUSR1 - see the code
below:
def SigUSR1Handler(signum, frame):
    self._logger.debug('Received SIGUSR1')
    return

signal.signal(signal.SIGUSR1, SigUSR1Handler)
[signal.signal(signal.SIGUSR1, signal.SIG_IGN)]
In the above case, all the threads and the main process stop - from a C point of view this was unexpected - I want the threads to continue as they were before the signal. If I put the SIG_IGN in instead, everything continues fine.
Can somebody tell me how to do this? Maybe I have to do something with the 'frame' manually to get back to where it was... just a guess though.
Thanks in advance,
Thanks for your help on this.
To explain a bit more, I have thread instances writing string information to
a socket which is also output to a file. These threads run their own timers so they
independently write their outputs to the socket. When the program runs I also see
their output on stdout but it all stops as soon as I see the debug line from the signal.
I need the threads to constantly send this info but I need the main program to
take a command so it also starts doing something else (in parallel) for a while.
I thought I'd just be able to send a signal from the command line to trigger this.
Mixing signals and threads is always a little precarious. What you describe should not happen, however. Python only handles signals in the main thread. If the OS delivered the signal to another thread, that thread may be briefly interrupted (when it's performing, say, a system call) but it won't execute the signal handler. The main thread will be asked to execute the signal handler at the next opportunity.
What are your threads (including the main thread) actually doing when you send the signal? How do you notice that they all 'stop'? Is it a brief pause (easily explained by the fact that the main thread will need to acquire the GIL before handling the signal) or does the process break down entirely?
I'll sort-of answer my own question:
In my first attempt at this I was using time.sleep(run_time) in the main
thread to control how long the threads ran until they were stopped. By adding
debug I could see that the sleep loop seemed to be exiting as soon as the
signal handler returned so everything was shutting down normally but early!
I've replaced the sleep with a while loop and that doesn't jump out after
the signal handler returns so my threads keep running. So it solves the
problem but I'm still a bit puzzled about sleep()'s behaviour.
You should probably use a threading.Condition variable instead of sending signals. Have your main thread check it every loop and perform its special operation if it's been set.
If you insist on using signals, you'll want to move to using subprocess instead of threads, as your problem is likely due to the GIL.
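If you stay with threads, a minimal sketch of the check-it-every-loop idea could look like this; it uses a threading.Event rather than a Condition because the check is just "has the flag been set?", and all names are illustrative:

import threading
import time

special_request = threading.Event()

def commander():
    time.sleep(2)                   # stand-in for whatever triggers the request
    special_request.set()           # ask the main thread to do its special operation

threading.Thread(target=commander, daemon=True).start()

for _ in range(50):                 # the main thread's normal loop
    # ... normal work: the other threads keep writing to the socket meanwhile ...
    if special_request.is_set():
        special_request.clear()
        print("main thread: starting the special operation")
    time.sleep(0.1)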
Watch this presentation by David Beazley.
http://blip.tv/file/2232410
It also explains some quirky behavior related to threads and signals (Python specific, not the general quirkiness of the subject :-) ).
http://pyprocessing.berlios.de/ Pyprocessing is a neat library that makes it easier to work with separate processes in Python.