This question is very much related to this one, which doesn't have a solution, but it is not exactly the same.
I would like to ask if there is a way of launching a background task in PyQt, and be able to kill it by pressing a button.
My problem is that I have a user interface and some external (3rd-party) functions that take a while to compute. To avoid freezing the user interface while the tasks are computing, I run them in the background using QThread and synchronize the UI when they finish using signals.
However, I would like to add the option for the user to press a button and cancel the current task (because the task is not needed/desired anymore).
Something that to me looks as simple as a kill -9 *task* in Linux is quite hard/obfuscated to achieve in Qt.
Right now I'm using custom QThreads of the form:
mythread = Mythread()
mythread.finished.connect(mycallback)
mythread.start()
Where Mythread inherits QThread and overrides the run method.
In the user interface, there is one button that tries to kill that thread using one of:
mythread.exit(0)
mythread.quit()
mythread.terminate()
None of them works. I'm aware that the documentation states that the terminate method can behave oddly.
So the question is: am I approaching this problem the wrong way? How can a QThread be killed? If that's not possible, is there any alternative?
Thanks!
It's a very common mistake to try to kill a QThread in the way you're suggesting. This seems to be due to a failure to realise that it's the long-running task that needs to be stopped, rather than the thread itself.
The task was moved to the worker thread because it was blocking the main/GUI thread. But that situation doesn't really change once the task is moved. It will block the worker thread in exactly the same way that it was blocking the main thread. For the thread to finish, the task itself either has to complete normally, or be programmatically halted in some way. That is, something must be done to allow the thread's run() method to exit normally (which often entails breaking out of a blocking loop).
A common way to cancel a long-running task is via a simple stop-flag:
class Thread(QThread):
    def stop(self):
        self._flag = False

    def run(self):
        self._flag = True
        for item in get_items():
            process_item(item)
            if not self._flag:
                break
        self._flag = False
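Cancelling is then just a matter of wiring the button to the stop() method. A minimal sketch (cancel_button is a hypothetical name; mycallback is the slot from the question):

mythread = Thread()
mythread.finished.connect(mycallback)
# No quit()/terminate() needed: clearing the flag makes run() return
# by itself, which in turn emits finished.
cancel_button.clicked.connect(mythread.stop)
mythread.start()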
I am using QThread to do some calculations in a separate thread.
The thread gets started by a button click, which launches the function StartMeasurement().
The thread can finish the process by itself (after finishing the calculations)
and emit the PyQt signal finished. Or the thread can be stopped by the user via the stopBtn click.
The terminate() function is working, but I get a lot of trouble when I try to start the thread again.
Is it recommendable to use the moveToThread() approach here?
Or how could I ensure that the thread is stopped correctly so that it can be restarted properly (i.e. started fresh)?
# Starts the measurement in a thread: StartMeasurement()
def StartMeasurement(self):
    self.thread = measure.CMeasurementThread(self.osziObj, self.genObj, self.measSetup)
    self.thread.newSample.connect(self.plotNewSample)
    self.thread.finished.connect(self.Done)
    self.stopBtn.clicked.connect(self.thread.terminate)
    self.stopBtn.clicked.connect(self.Stop)
    self.thread.start()
It's not a problem. The general practice when working with QThread is to connect its finished() signal to the deleteLater() slot of the objects that have been moved to the separate thread via moveToThread(). This is done to manage memory properly when you destroy your thread, because it's assumed that you will first quit the thread and then destroy its instance. ;) This should tell you that stopping a thread has nothing to do with the destruction of those objects, UNLESS you have established the connection described above.
It is perfectly fine to restart a thread IF you have stopped it properly using quit() and wait() to actually wait until the stopping is completed.
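A minimal sketch of such a restart (assuming Qt >= 5.2 for requestInterruption(), and a run() method that polls isInterruptionRequested() so the request has any effect):

def RestartMeasurement(self):
    if self.thread.isRunning():
        self.thread.requestInterruption()  # run() must poll isInterruptionRequested()
        self.thread.quit()                 # ends the event loop, if run() uses one
        self.thread.wait()                 # block until run() has actually returned
    self.thread.start()                    # now it is safe to start again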
However, my advice is to not do that unless that extra thread has a huge impact on your application for some reason (highly unlikely on modern machines).
Instead of restarting the thread, consider the following options:
implement a pause flag that just makes the thread run without doing anything if it's set to true (I've used this example of mine many times to demonstrate such behaviour - check worker.cpp and the doWork() function in particular; it's in C++ but it can be ported to PyQt in no time)
use QRunnable - it's designed to run something and then (unless autoDelete is set to true) return to the thread pool. It's really nice if you have tasks that occur every once in a while and you don't need a constantly running separate thread. If you want to use signals and slots (to get the result of the calculation done inside QRunnable::run()), you will have to inherit first from QObject and then from QRunnable - see the sketch after this list
use futures (check the Qt Concurrent module)
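A minimal PyQt sketch of that QRunnable-with-signals idea (assuming PyQt5; the inheritance order matters so the signal machinery is set up on the QObject side):

import sys
from PyQt5.QtCore import (QCoreApplication, QObject, QRunnable,
                          QThreadPool, pyqtSignal)

class Task(QObject, QRunnable):
    done = pyqtSignal(object)

    def __init__(self):
        QObject.__init__(self)
        QRunnable.__init__(self)
        self.setAutoDelete(False)  # keep the object alive for signal delivery

    def run(self):
        # Placeholder for the real calculation.
        self.done.emit(sum(range(1000)))

app = QCoreApplication(sys.argv)
task = Task()
task.done.connect(lambda result: (print(result), app.quit()))
QThreadPool.globalInstance().start(task)
sys.exit(app.exec_())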
I suggest that you first read the Example use cases for the various threading technologies Qt provides.
I am creating a custom job scheduler with a web frontend in Python 3.4 on Linux. This program creates a daemon (consumer) thread that waits for jobs to become available in a PriorityQueue. These jobs can manually be added through the web interface, which adds them to the queue. When the consumer thread finds a job, it executes a program using subprocess.run and waits for it to finish.
The basic idea of the worker thread:
import subprocess
import threading

class Worker(threading.Thread):
    def __init__(self, queue):
        super().__init__(daemon=True)
        self.queue = queue
        # more code here

    def run(self):
        while True:
            try:
                job = self.queue.get()
                # do some work
                proc = subprocess.run("myprogram", timeout=my_timeout)
                # do some more things
            except subprocess.TimeoutExpired:
                # do some administration
                self.queue.put(job)
However:
This consumer should be able to receive some kind of signal from the frontend (main thread) that it should stop the current job and instead work on the next job in the queue (saving the state of the current job and adding it to the end of the queue again). This can (and will most likely) happen while blocked on subprocess.run().
The subprocesses can simply be killed (the program that is executed saves some state in a file), but the worker thread needs to do some administration on the killed job to make sure it can be resumed later on.
There can be multiple such worker threads.
Signal handlers are not an option (since they are always handled by the main thread which is a webserver and should not be bothered with this).
Having an event loop in which the process actively polls for events (such as the child exiting, the timeout occurring or the interrupt event) is in this context not really a solution but an ugly hack. The jobs are performance-heavy and constant context switches are unwanted.
What synchronization primitives should I use to interrupt this thread or to make sure it waits for several events at the same time in a blocking fashion?
I think you've accidentally glossed over a simple solution: your second bullet point says that you have the ability to kill the programs that are running in subprocesses. Notice that subprocess.call returns the return code of the subprocess. This means that you can let the main thread kill the subprocess and just check the return code to see if you need to do any cleanup. Even better, you could use subprocess.check_call instead, which will raise an exception for you if the return code isn't 0. I don't know what platform you're working on, but on Linux, killed processes generally don't return 0.
It could look something like this:
import subprocess
import threading

class Worker(threading.Thread):
    def __init__(self, queue):
        super().__init__(daemon=True)
        self.queue = queue
        # more code here

    def run(self):
        while True:
            try:
                job = self.queue.get()
                # do some work
                subprocess.check_call("myprogram", timeout=my_timeout)
                # do some more things
            except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
                # do some administration
                self.queue.put(job)
Note that if you're using Python 3.5, you can use subprocess.run instead, and set the check argument to True.
If you have a strong need to handle the cases where the worker needs to be interrupted when it isn't running the subprocess, then I think you're going to have to use a polling loop, because I don't think the behavior you're looking for is supported for threads in Python. You can use a threading.Event object to pass the "stop working now" pseudo-signal from your main thread to the worker, and have the worker periodically check the state of that event object.
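A minimal sketch of that polling pattern (the jobs queue and the job handling are placeholders):

import queue
import threading

stop_event = threading.Event()

def worker_loop(jobs):
    # The timeout on get() keeps the loop from blocking forever on an
    # empty queue, so the stop event is checked at least once a second.
    while not stop_event.is_set():
        try:
            job = jobs.get(timeout=1.0)
        except queue.Empty:
            continue
        # ... run the job, checking stop_event periodically ...

# From the main thread:
stop_event.set()  # ask the worker to stop at its next check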
If you're willing to consider using multiple processes instead of threads, consider switching over to the multiprocessing module, which would allow you to handle signals. There is more overhead to spawning full-blown subprocesses instead of threads, but you're essentially looking for signal-like asynchronous behavior, and I don't think Python's threading library supports anything like that. One benefit, though, would be that you would be freed from the Global Interpreter Lock (PDF link), so you may actually see some speed benefits if your worker processes (formerly threads) are doing anything CPU-intensive.
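A rough sketch of that multiprocessing idea (the job handling is hypothetical; the point is that each worker process can install its own SIGTERM handler, which a thread cannot):

import signal
import time
import multiprocessing

def worker(jobs):
    def on_term(signum, frame):
        # do the administration for the interrupted job, then exit
        raise SystemExit
    signal.signal(signal.SIGTERM, on_term)
    while True:
        job = jobs.get()
        # ... run the job ...

if __name__ == "__main__":
    jobs = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(jobs,))
    p.start()
    time.sleep(0.5)  # demo only: give the worker time to install its handler
    p.terminate()    # sends SIGTERM, triggering on_term in the worker
    p.join()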
I am writing a class that creates threads that time out if not used within a certain time. The class allows you to pump data to a specific thread (by keyword), and if it doesn't exist it creates the thread.
Anywho, the problem I have is that the main supervisor class doesn't know when threads have ended. I can't put in blocking code like join or poll to see if a thread is alive. What I want is an event handler that is called when a thread ends (or is just about to end), so that I can inform the supervisor that the thread is no longer active.
Is this something that can be done with signal or something similar?
As pseudocode, I'm looking for something like:
def myHandlerFunc():
    # inform supervisor the thread is dead

t1 = ThreadFunc()
t1.eventHandler(condition=thread_dies, handler=myHandlerFunc)
EDIT: Perhaps a better way would be to pass a reference to the parent down to the thread, and have the thread tell the parent class directly. I'm sure someone will tell me off for data flow inversion.
EDIT: Here is some pseudocode:
class supervisor():
    def __init__:
        # Setup thread dict with all threads as inactive

    def dispatch(target, message):
        if (target thread inactive):
            create new thread
        send message to thread

    def thread_timeout_handler():
        # Func is called asynchronously when a thread dies
        # Does some stuff over here

def ThreadFunc():
    while( !timeout ):
        wait for message:
            do stuff with message
    # (Tell supervisor thread is closing?)
    return
The main point is that you send messages to the threads (referenced by keyword) through the supervisor. The supervisor makes sure the thread is alive (since they timeout after a while), creates a new one if it dies, and sends the data over.
Looking at this again, it's easy to avoid needing an event handler as I can just check if the thread is alive using threadObj.isAlive() instead of dynamically keeping a dict of thread statuses.
But out of curiosity, is it possible to get a handler to be called in the supervisor class by signals sent from the thread? The main App code would call the supervisor.dispatch() function once, then do other stuff. It would later be interrupted by the thread_timeout_handler function, as the thread had closed.
You still don't mention if you are using a message/event loop framework, which would provide a way for you to dispatch a call to the "main" thread and call an event handler.
Assuming you're not, then you can't just interrupt or call into the main thread.
You don't need to, though, as you only need to know if a thread is alive when you decide if you need to create a new one. You can do your checking at this time. This way, you only need a way to communicate the "finished" state between threads. There are a lot of ways to do this (I've never used .isAlive(), but you can pass information back in a Queue, Event, or even a shared variable).
Using Event it would look something like this:
class supervisor():
    def __init__:
        # Setup thread dict with all threads as inactive

    def dispatch(target, message):
        if (thread.event.is_set()):
            create new thread
            thread.event = Event()
        send message to thread

def ThreadFunc(event):
    while( !timeout ):
        wait for message:
            do stuff with message
    event.set()
    return
Note that this way there is still a possible race condition. The supervisor thread might check is_set() right before the worker thread calls .set() which will lie about the thread's ability to do work. The same problem would exist with isAlive().
Is there a reason you don't just use a threadpool?
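For what it's worth, a thread pool also answers the original question almost directly: concurrent.futures lets you attach a handler that fires when the work ends. A minimal sketch (note that the callback usually runs in the worker thread, not the main one):

from concurrent.futures import ThreadPoolExecutor

def thread_func(message):
    return "processed " + message

def on_done(future):
    # Called as soon as the task finishes.
    print("worker finished:", future.result())

with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(thread_func, "hello")
    future.add_done_callback(on_done)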
I am building an application with a GUI and several worker threads. I want it to be a multi-threaded application, so I am starting the same worker thread several times in a loop, with each thread grabbing different input parameters defined in a class outside of the thread.
So my mainGui.py file looks something like this (relevant code shown only):
self.workers = [worker.Worker(), worker.Worker(), worker.Worker()]

for i in xrange(threadCount):
    self.currentWorker = self.workers[i]
    self.currentWorker.alterTable.connect(self.alterMainTable)
    self.currentWorker.start()
    time.sleep(0.1)
As you may imagine, I am connecting the Worker's alterTable signal to the alterMainTable() method I have defined in my main GUI thread. This method updates the table in the GUI.
The worker thread looks something like this:
class Worker(QThread):
    alterTable = Signal(dict)

    def __init__(self, parent=None):
        super(Worker, self).__init__(parent)

    def sendToTable(self, param1, param2, param3):
        """This method emits the signal with params as defined above"""
        params = {}
        params["param1"] = param1
        params["param2"] = param2
        params["param3"] = param3
        self.alterTable.emit(params)

    def run(self):
        # Perform a lengthy task; do this every now and then:
        self.sendToTable(param1, param2, param3)
When I am running this application in a single worker thread (so when I'm not calling that loop in the main thread), it works fine - the signals are emitted, and the main table in the GUI is updated.
However, the problems arise when I run several threads at once. The worker threads do their job, but the signal is only emitted sometimes. Or rather, it is emitted as if Qt (or whatever) were waiting for all the threads to finish before updating the table. This is literally what happens - I can see in the Python console that the threads are performing their tasks, and once all of them are done with whatever they were doing, the table is suddenly populated with a bunch of data at once.
As you may imagine, another problem that comes out of this is that since there are no events being processed, after some time, my application appears to be frozen.
I have tried adding the Qt.DirectConnection to the connect() method, but that didn't really help.
Bonus question: I've been reading about this topic on SO and other websites, it seems that people recommend QRunnable() instead of QThread() especially when it comes to subclassing it. Consequently, I'd be using QThreadPool(). But when I tried this, it seems that I cannot emit a signal from a QRunnable - it gives me the AttributeError: 'PySide.QtCore.Signal' object has no attribute 'connect', even though the Signal is defined within the QRunnable class - which is quite odd, I must say.
EDIT: In another answer on SO, someone mentioned that one may be "spamming" the main GUI thread with events to be processed. However, I don't believe that this is the case here as the sendToTable() method from the QThread is only called 5-6 times at most from the thread, and the threadCount is never larger than 20 at most, but I usually keep it at around 5.
And, as usual, I answer my question after 2 days of debugging and minutes after posting on SO.
I had a leftover workerThread.wait() method call after all the threads had been started. So naturally my application did what it was told to do - it waited for the thread to finish.
I removed that method call and also put QCoreApplication.processEvents() inside the loop that starts the threads, and now it all works like a charm.
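For reference, the fixed startup loop looks roughly like this (a sketch assuming the same PySide setup as above):

for i in xrange(threadCount):
    self.currentWorker = self.workers[i]
    self.currentWorker.alterTable.connect(self.alterMainTable)
    self.currentWorker.start()
    # No wait() here any more; let queued signals reach the GUI instead.
    QCoreApplication.processEvents()
    time.sleep(0.1)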
Once again, thank you, the invisible, almighty person of SO!