How to stop a loop running in an executor? - python

I am running a function that takes time to finish. The user has the option to stop this function/event. Is there an easy way to stop the thread or loop?
import asyncio
from concurrent.futures import ThreadPoolExecutor

class ThreadsGenerator:
    MAX_WORKERS = 5

    def __init__(self):
        self._executor = ThreadPoolExecutor(max_workers=self.MAX_WORKERS)
        self.loop = None
        self.future = None

    def execute_function(self, function_to_execute, *args):
        self.loop = asyncio.get_event_loop()
        self.future = self.loop.run_in_executor(self._executor, function_to_execute, *args)
        return self.future
I want to stop the function as quickly as possible when the user clicks the stop button, not wait for it to finish its job.
Thanks in advance!

Is there an easy way to stop the thread or loop?
You cannot forcefully stop a thread. To implement the cancel functionality, your function will need to accept a should_stop argument, for example an instance of threading.Event, and occasionally check if it has been set.
If you really need a forceful stop, and if your function is runnable in a separate process through multiprocessing, you can run it in a separate process and kill the process when it is supposed to stop. See this answer for an elaboration of that approach in the context of asyncio.
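For illustration, a minimal sketch of that process-based approach, where long_running_job is a hypothetical stand-in for the real function:

import multiprocessing
import time

def long_running_job():
    # stand-in for the real, long-running work
    while True:
        time.sleep(1)

if __name__ == "__main__":
    proc = multiprocessing.Process(target=long_running_job)
    proc.start()
    # ... later, when the user clicks "stop":
    proc.terminate()  # forcefully ends the process without its cooperation
    proc.join()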

Related

Stopping a QThread with a QTimer

I am starting a QThread in my GUI to perform an optimization function. I want to include a stopping function that can interrupt the QThread and end the optimization function immediately.
I read that using QThread.terminate() is not recommended; using a stopping flag is not possible because the function is not a loop by nature.
I thought about using a QTimer in the QThread (a watchdog timer) that periodically checks a stopping flag and, if it is triggered, ends the optimization function, but I cannot really imagine how such an idea would be written.
Any ideas?
class Worker(QObject):
    finished = pyqtSignal()

    def run(self):
        # implement a QTimer here that will somehow interrupt the minimize function
        # minimize is an arbitrary function that takes too long to run, and uses child processes to do so
        result = minimize(something)

FROM_Optimization_Process, _ = loadUiType(os.path.join(os.path.dirname(__file__), "ui_files/Optimization_process_window.ui"))

class Optimization_process_window(QDialog, FROM_Optimization_Process):
    def __init__(self, parent=None):
        # first UI
        super(Optimization_process_window, self).__init__(parent)
        self.setupUi(self)

    def start_solving_thread(self):
        self.thread = QThread()
        self.thread.daemon = True
        self.worker = Worker()
        self.worker.moveToThread(self.thread)
        self.thread.started.connect(self.worker.run)
        self.worker.finished.connect(self.thread.quit)
        self.worker.finished.connect(self.worker.deleteLater)
        self.thread.finished.connect(self.thread.deleteLater)
        self.thread.start()

    def stop_solving(self):
        # implement an interrupt function here
        self.thread.quit()
        self.thread.wait()
You misunderstand how timers work. They cannot "interrupt" code running in a thread in any way. Timers can only fire when the event loop in the thread is idle and ready to process the timer events and signals, and that event loop is blocked while some code is running in that thread.
In other words, a QTimer in your thread will not work unless you unblock the event loop from time to time to process the timer signal. But from what I see, you probably do some intensive work in your minimize(something) function, and that blocks the event loop completely.
If you want to be able to interrupt the worker/thread, the only way is to build the interruption into your minimize(something) function by periodic polling. You need to split the work in this function into blocks, and after each block is done, check whether the worker/thread is supposed to stop.
QThread has helper functions for this: QThread.requestInterruption() and QThread.isInterruptionRequested(), and these functions are thread-safe. You can access the thread instance from your worker by calling QObject.thread(). But it is your responsibility to check QThread.isInterruptionRequested() frequently enough, after each block of work is done.
Of course you can develop your own mechanism for aborting the work, possibly guarded by mutexes... Nevertheless, you must check it periodically. There is no way around it.
def minimize(self, something):  # a method of the Worker class
    for i in range(1000000):
        if i % 1000 == 0:  # this is just to represent a block of work
            if self.thread().isInterruptionRequested():
                return
        self.minimize_next_part(something)
    self.finished.emit()
The stopper function should then be:
def stop_solving(self):
    self.thread.requestInterruption()  # this will unblock the event loop in just a moment
    self.thread.quit()  # this ends the event loop and will emit the thread's finished signal
    # self.thread.wait()  # you probably do not need this, it depends...
(I am sorry for potential syntax errors, I am a C++ programmer, not a Pythonista.)
I know this looks stupid, but there really is no other miraculous mechanism for interrupting threads. You simply need periodic polling.
PS: instead of
    self.worker.finished.connect(self.worker.deleteLater)
you should use
    self.thread.finished.connect(self.worker.deleteLater)
otherwise the worker will not be deleted if the thread gets interrupted, because in that case the Worker.finished signal is never emitted.

How to stop a function from finishing its code in an event loop

I have an event loop that runs a function asynchronously. However, that function generates big data, so execution takes a while when the program reaches it. I also implemented a stop button, so the app can exit that function even if the event loop has not finished yet. The problem is how to exit that function immediately, or how to kill a thread in asyncio.
I already tried using a flag in the function, but the function checks the flag before the user has a chance to click the stop button. In short, the thread is already running in the background.
def execute_function(self, function_to_execute, *args):
    self.loop = asyncio.get_event_loop()
    self.future = self.loop.run_in_executor(self._executor, function_to_execute, *args)
    return self.future

def stop_function(self):
    if self._executor:
        self._executor.shutdown(wait=False)
    self.loop.stop()
    self.loop.close()
Is there something wrong or missing in the code I've given? The expected behaviour is that the program should not generate the data if I click the stop button.
You can use a threading.Event, passing it to your blocking function:
event = threading.Event()
future = loop.run_in_executor(executor, blocking, event)
# When done
event.set()
The blocking function just has to check event.is_set(); when you want to stop it, just call event.set() as above.
def blocking(event):
    while True:
        time.sleep(1)
        if event.is_set():
            break
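Putting the pieces together, here is a minimal self-contained sketch of this pattern; generate_data and the other names are placeholders, not from the original code:

import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def generate_data(stop_event):
    # placeholder for the long-running data generation
    for chunk in range(100):
        if stop_event.is_set():
            return None  # abandon the work as soon as the stop button is pressed
        time.sleep(0.1)  # simulate one block of work
    return "data"

async def main():
    loop = asyncio.get_running_loop()
    executor = ThreadPoolExecutor(max_workers=1)
    stop_event = threading.Event()
    future = loop.run_in_executor(executor, generate_data, stop_event)
    # simulate the user clicking "stop" shortly after starting
    await asyncio.sleep(0.3)
    stop_event.set()
    print(await future)  # prints None because the work was cancelled

asyncio.run(main())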

Can you join a Python queue without blocking?

Python's Queue has a join() method that will block until task_done() has been called on all the items that have been taken from the queue.
Is there a way to periodically check for this condition, or receive an event when it happens, so that you can continue to do other things in the meantime? You can, of course, check if the queue is empty, but that doesn't tell you if the count of unfinished tasks is actually zero.
The Python Queue itself does not support this, so you could try the following
from threading import Thread

class QueueChecker(Thread):
    def __init__(self, q):
        Thread.__init__(self)
        self.q = q

    def run(self):
        self.q.join()

q_manager_thread = QueueChecker(my_q)
q_manager_thread.start()

while q_manager_thread.is_alive():
    # do other things
    pass

# When the loop exits, the tasks are done, because the thread will have
# returned from blocking on the q.join() and exited its run method.
q_manager_thread.join()  # to clean up the thread
A while loop on thread.is_alive() might not be exactly what you want, but at least you can see how to check on the status of the q.join() asynchronously now.
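A variant of the same idea (a sketch, not part of the original answer): have the watcher thread set a threading.Event when join() returns, so other code can poll it or wait on it with a timeout instead of checking is_alive():

import queue
import threading

def watch_queue(q, done_event):
    q.join()          # blocks until task_done() has been called for every item
    done_event.set()  # signal that all queued tasks are finished

my_q = queue.Queue()
all_done = threading.Event()
threading.Thread(target=watch_queue, args=(my_q, all_done), daemon=True).start()

while not all_done.wait(timeout=0.1):
    pass  # do other things between checks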

Terminate a thread immediately when some variable is set to true

How do I terminate the thread when some variable is set to true?
import threading

class test(threading.Thread):
    def __init__(self):
        self.stopThread = False
        self.start()

    def start(self):
        while not self.stopThread:
            callOtherFunction()  # which takes almost 30 sec to execute

    def stop(self):
        self.stopThread = True
Now the problem is that once the start function is called and the while loop has started, it only checks the stop condition on the next iteration, after it has completed its internal work. So if the call to callOtherFunction has already been made, it still waits up to 30 seconds to exit. I want to terminate the thread immediately when the variable is set. Is that possible?
This question was answered here:
Is there any way to kill a Thread in Python?
The bottom line is that, if you can help it, you should avoid killing a thread this way. However, if you must, there are some tricks you can try.
Also, to be thread-safe, you should make self.stopThread a threading.Event() and use set() and clear() to signal when the thread should be stopped.
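A minimal sketch of that Event-based suggestion; callOtherFunction is the asker's long-running routine and is assumed to exist, and note that its current call still has to finish before the loop can exit:

import threading

class StoppableWorker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            callOtherFunction()  # the asker's routine; the current ~30 s call still completes

    def stop(self):
        self._stop_event.set()  # the loop exits at the next check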

Python Queue waiting for thread before getting next item

I have a queue that always needs to be ready to process items as they are added to it. The function that runs on each item in the queue creates and starts a thread to execute the operation in the background so the program can go do other things.
However, the function I am calling on each item in the queue simply starts the thread and then completes execution, regardless of whether or not the thread it started has completed. Because of this, the loop moves on to the next item in the queue before the program is done processing the last one.
Here is code to better demonstrate what I am trying to do:
import threading
import Queue

queue = Queue.Queue()

def addTask():
    queue.put(SomeObject())

def worker():
    while True:
        try:
            # If an item is put onto the queue, immediately execute it (unless
            # an item on the queue is still being processed, in which case wait
            # for it to complete before moving on to the next item in the queue)
            item = queue.get()
            runTests(item)
            # I want to wait for 'runTests' to complete before moving past this point
        except Queue.Empty as err:
            # If the queue is empty, just keep running the loop until something
            # is put on top of it.
            pass

def runTests(args):
    op_thread = SomeThread(args)
    op_thread.start()
    # My problem is that once this last line 'op_thread.start()' starts the thread,
    # the 'runTests' function completes, but the operation executed by that thread
    # is not yet done because it is still running in the background. I do not want
    # the 'runTests' function to actually return until the operation in op_thread
    # is done executing.
    """op_thread.join()"""
    # I tried putting this line after 'op_thread.start()', but that did not solve
    # anything. I have commented it out because it is not necessary to demonstrate
    # what I am trying to do, but I just wanted to show that I tried it.

t = threading.Thread(target=worker)
t.start()
Some notes:
This is all running in a PyGTK application. Once the 'SomeThread' operation is complete, it sends a callback to the GUI to display the results of the operation.
I do not know how much this affects the issue I am having, but I thought it might be important.
A fundamental issue with Python threads is that you can't just kill them - they have to agree to die.
What you should do is:
Implement the thread as a class
Add a threading.Event member which the join method clears and the thread's main loop occasionally checks. If it sees it's cleared, it returns. For this, override threading.Thread.join to clear the event and then call Thread.join on itself
To allow (2), make the read from the Queue block with some small timeout. This way your thread's "response time" to the kill request will be the timeout, and on the other hand no CPU is wasted on busy-waiting
Here's some code from a socket client thread I have that has the same issue with blocking on a queue:
class SocketClientThread(threading.Thread):
    """ Implements the threading.Thread interface (start, join, etc.) and
        can be controlled via the cmd_q Queue attribute. Replies are placed in
        the reply_q Queue attribute.
    """
    def __init__(self, cmd_q=Queue.Queue(), reply_q=Queue.Queue()):
        super(SocketClientThread, self).__init__()
        self.cmd_q = cmd_q
        self.reply_q = reply_q
        self.alive = threading.Event()
        self.alive.set()
        self.socket = None

        self.handlers = {
            ClientCommand.CONNECT: self._handle_CONNECT,
            ClientCommand.CLOSE: self._handle_CLOSE,
            ClientCommand.SEND: self._handle_SEND,
            ClientCommand.RECEIVE: self._handle_RECEIVE,
        }

    def run(self):
        while self.alive.isSet():
            try:
                # Queue.get with timeout to allow checking self.alive
                cmd = self.cmd_q.get(True, 0.1)
                self.handlers[cmd.type](cmd)
            except Queue.Empty as e:
                continue

    def join(self, timeout=None):
        self.alive.clear()
        threading.Thread.join(self, timeout)
Note self.alive and the loop in run.
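To see how the shutdown works from the caller's side, here is a small self-contained sketch of the same pattern with the socket-specific parts stripped out; the names are illustrative and not from the original class:

import queue
import threading

class InterruptibleWorker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.cmd_q = queue.Queue()
        self.alive = threading.Event()
        self.alive.set()

    def run(self):
        while self.alive.is_set():
            try:
                cmd = self.cmd_q.get(True, 0.1)  # timeout lets us re-check alive
            except queue.Empty:
                continue
            print("handling", cmd)
            self.cmd_q.task_done()

    def join(self, timeout=None):
        self.alive.clear()  # ask the run loop to stop
        threading.Thread.join(self, timeout)

w = InterruptibleWorker()
w.start()
w.cmd_q.put("do something")
w.join()  # returns within roughly the 0.1 s polling interval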
