Interrupt function execution from another function in Python

I have a function a doing some tasks and another function b that is a callback for some events. Whenever an event happens, function b is called, and I would like it to be able to interrupt the execution of function a. Both functions are declared inside the same class.
Function a is not supposed to call function b. Function b is completely independent; it is a callback for an external event like "user face detected" coming from ROS (Robot Operating System).
What I need is basically something like Ctrl+C that can be triggered from within Python and that only aborts a targeted function, not the whole program.
Can this be done in Python?

It's generally recommended not to use exceptions for flow control. Instead, look at threading.Event from the Python standard library, even if you only plan on using a single thread (even the most basic Python program uses at least one thread).
This answer https://stackoverflow.com/a/46346184/914778 has a good explanation of how calling one function (function b) can interrupt another (function a).
Here are a few important parts, summarized from that other answer.
Set up your threading libraries:
from threading import Event
global exit
exit = Event()
This is a good replacement for time.sleep(60), as it can be interrupted:
exit.wait(60)
This code will keep executing until you change exit to "set":
while not exit.is_set():
    do_a_thing()
This will cause exit.wait(60) to stop waiting, and exit.is_set() will return True:
exit.set()
This will enable execution again, and exit.is_set() will return False:
exit.clear()
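Put together for the scenario in the question, a minimal sketch could look like this (the class, the method bodies, and the simulated callback are illustrative, not taken from the real ROS code):
from threading import Event, Thread
import time

class Worker:
    def __init__(self):
        self.exit = Event()

    def function_a(self):
        # Runs until the event is set; wait() doubles as an interruptible sleep.
        while not self.exit.is_set():
            # ... do one chunk of work here ...
            self.exit.wait(1)
        print("function_a was interrupted, cleaning up")

    def function_b(self, event_data=None):
        # Callback fired by the external event (e.g. "user face detected").
        self.exit.set()   # function_a stops at its next check

w = Worker()
t = Thread(target=w.function_a)
t.start()
time.sleep(3)
w.function_b()            # simulate the callback firing
t.join()
function_a checks the event between chunks of work, so the interruption happens at the next check rather than instantly.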

I would do the following:
define a custom exception
call the callback function within an appropriate try/except block
if the callback function decides to break the execution, it raises the exception, and the caller catches it and handles it as needed.
Here's some pseudo-code:
class InterruptExecution(Exception):
    pass

def function_a():
    while some_condition_is_true():
        do_something()
        if callback_time():
            try:
                function_b()
            except InterruptExecution:
                break
        do_something_else()
    do_final_stuff()

def function_b():
    do_this_and_that()
    if interruption_needed():
        raise InterruptExecution('Stop the damn thing')
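For reference, here is one way the pseudo-code might be filled in as a runnable sketch; the loop body and the random trigger are placeholders, not part of the original answer:
import random

class InterruptExecution(Exception):
    pass

def function_b():
    # Pretend some external condition asks us to stop about 1 time in 5.
    if random.random() < 0.2:
        raise InterruptExecution('Stop the damn thing')

def function_a():
    for step in range(100):
        try:
            function_b()              # poll the "callback" inside the loop
        except InterruptExecution:
            print('interrupted at step', step)
            break
        print('working, step', step)
    print('final cleanup')

function_a()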

I did this using threading.
import threading

class myThread(threading.Thread):
    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter

    def run(self):
        # Get lock to synchronize threads
        # threadLock.acquire()
        if self.name == 'a':
            function_a(self.name, self.counter, 3)
        if self.name == 'b':
            function_b(self.name, self.counter, 3)

def function_a(threadName, delay, counter):
    name = raw_input("Name")
    print name

def function_b(threadName, delay, counter):
    global thread1
    thread1.shutdown = True
    thread1._Thread__stop()

# Create new threads
thread1 = myThread(1, "a", 0)
thread2 = myThread(2, "b", 0)

# Start new Threads
thread1.start()
thread2.start()
function_a stopped executing when thread1 was stopped. (Note that this relies on the private _Thread__stop() method, which only exists in Python 2.)

Related

How to terminate all threads at once [duplicate]

Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading
class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
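A possible usage sketch of the StoppableThread class above (the worker body and the sleep interval are illustrative):
import time

class Worker(StoppableThread):
    def run(self):
        while not self.stopped():
            # do one bounded chunk of work, then re-check the flag
            time.sleep(0.5)

w = Worker()
w.start()
time.sleep(2)
w.stop()    # ask the thread to exit...
w.join()    # ...and wait until it actually has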
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
import ctypes
import inspect
import threading

def _async_raise(tid, exctype):
    '''Raises an exception in the thread with id tid'''
    if not inspect.isclass(exctype):
        raise TypeError("Only types can be raised (not instances)")
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
                                                     ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res != 1:
        # "if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class ThreadWithExc(threading.Thread):
    '''A thread class that supports raising an exception in the thread from
    another thread.
    '''

    def _get_my_tid(self):
        """determines this (self's) thread id

        CAREFUL: this function is executed in the context of the caller
        thread, to get the identity of the thread represented by this
        instance.
        """
        if not self.isAlive():
            raise threading.ThreadError("the thread is not active")

        # do we have it cached?
        if hasattr(self, "_thread_id"):
            return self._thread_id

        # no, look for it in the _active dict
        for tid, tobj in threading._active.items():
            if tobj is self:
                self._thread_id = tid
                return tid

        # TODO: in python 2.6, there's a simpler way to do: self.ident
        raise AssertionError("could not determine the thread's id")

    def raiseExc(self, exctype):
        """Raises the given exception type in the context of this thread.

        If the thread is busy in a system call (time.sleep(),
        socket.accept(), ...), the exception is simply ignored.

        If you are sure that your exception should terminate the thread,
        one way to ensure that it works is:

            t = ThreadWithExc( ... )
            ...
            t.raiseExc( SomeException )
            while t.isAlive():
                time.sleep( 0.1 )
                t.raiseExc( SomeException )

        If the exception is to be caught by the thread, you need a way to
        check that your thread has caught it.

        CAREFUL: this function is executed in the context of the
        caller thread, to raise an exception in the context of the
        thread represented by this instance.
        """
        _async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
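A hedged sketch of that usage pattern, assuming the ThreadWithExc class above (note it uses the old isAlive() name) and a made-up JobCancelled exception:
import time

class JobCancelled(Exception):
    pass

class Worker(ThreadWithExc):
    def run(self):
        try:
            while True:
                time.sleep(0.1)   # the async exception arrives once bytecode resumes
        except JobCancelled:
            print('job cancelled')
        finally:
            print('cleanup runs here (close files, release locks, ...)')

w = Worker()
w.start()
time.sleep(1)
# One call is enough here because the sleeps are short; the recipe above
# suggests re-raising in a loop if the thread may be stuck in a long system call.
w.raiseExc(JobCancelled)
w.join()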
A multiprocessing.Process can p.terminate()
In the cases where I want to kill a thread but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full-blown processes. For code that makes use of just a few threads, the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O.
The conversion is trivial: in the related code, replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue, and add the required calls to p.terminate() in the parent process that wants to kill its child p.
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program, you can set the thread as a "daemon". See
Thread.daemon
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time

def do_work(id, stop):
    print("I am thread", id)
    while True:
        print("I am thread {} doing something".format(id))
        if stop():
            print(" Exiting loop.")
            break
    print("Thread {}, signing off".format(id))

def main():
    stop_threads = False
    workers = []
    for id in range(0, 3):
        tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
        workers.append(tmp)
        tmp.start()
    time.sleep(3)
    print('main: done sleeping; time to stop the threads.')
    stop_threads = True
    for worker in workers:
        worker.join()
    print('Finis.')

if __name__ == '__main__':
    main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package, is to use the multiprocessing package. Here, to kill a process, you can simply call the method:
yourProcess.terminate()  # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, on Windows through the TerminateProcess() call). Be careful when you are using a Queue or a Pipe! (It may corrupt the data in the Queue/Pipe.)
Note that multiprocessing.Event and multiprocessing.Semaphore work in exactly the same way as threading.Event and threading.Semaphore, respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as a daemon:
yourThread.daemon = True  # set the Thread as a "daemon thread"
The main program will exit when no non-daemon threads are left alive. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as a daemon before the start() method is called!
Of course you can, and should, use daemons even with multiprocessing. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not options.
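A minimal sketch of the daemon behaviour described above (the worker function is illustrative):
import threading
import time

def background_work():
    while True:
        print('daemon still working')
        time.sleep(0.5)

t = threading.Thread(target=background_work)
t.daemon = True      # must be set before start()
t.start()

time.sleep(2)
print('main thread done; the interpreter exits and the daemon is stopped abruptly')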
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
    """Terminates a python thread from another thread.

    :param thread: a threading.Thread instance
    """
    if not thread.isAlive():
        return

    exc = ctypes.py_object(SystemExit)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), exc)
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up, so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is when killing a program fast, but never for single threads.
If you are explicitly calling time.sleep() as part of your thread (say, polling some external service), an improvement upon Phillipe's method is to use the timeout in the event's wait() method wherever you sleep().
For example:
import threading
class KillableThread(threading.Thread):
    def __init__(self, sleep_interval=1):
        super().__init__()
        self._kill = threading.Event()
        self._interval = sleep_interval

    def run(self):
        while True:
            print("Do Something")

            # If no kill signal is set, sleep for the interval,
            # If kill signal comes in while sleeping, immediately
            # wake up and handle
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break

        print("Killing Thread")

    def kill(self):
        self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program longer sleep intervals, the thread is stopped almost immediately (when you would otherwise be sleep()ing), and, in my opinion, the code for handling the exit is significantly simpler.
You can kill a thread by installing a trace into the thread that will exit the thread. See the attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
A way could be to introduce a "try" block into the thread's cycle and to throw an exception when you want to stop the thread (for example a break/return/... that stops your for/while/...).
I've used this on my app and it works...
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time

class StopThread(StopIteration):
    pass

threading.SystemExit = SystemExit, StopThread

class Thread2(threading.Thread):

    def stop(self):
        self.__stop = True

    def _bootstrap(self):
        if threading._trace_hook is not None:
            raise ValueError('Cannot run thread with tracing!')
        self.__stop = False
        sys.settrace(self.__trace)
        super()._bootstrap()

    def __trace(self, frame, event, arg):
        if self.__stop:
            raise StopThread()
        return self.__trace

class Thread3(threading.Thread):

    def _bootstrap(self, stop_thread=False):
        def stop():
            nonlocal stop_thread
            stop_thread = True
        self.stop = stop

        def tracer(*_):
            if stop_thread:
                raise StopThread()
            return tracer
        sys.settrace(tracer)
        super()._bootstrap()

###############################################################################

def main():
    test1 = Thread2(target=printer)
    test1.start()
    time.sleep(1)
    test1.stop()
    test1.join()

    test2 = Thread2(target=speed_test)
    test2.start()
    time.sleep(1)
    test2.stop()
    test2.join()

    test3 = Thread3(target=speed_test)
    test3.start()
    time.sleep(1)
    test3.stop()
    test3.join()

def printer():
    while True:
        print(time.time() % 1)
        time.sleep(0.1)

def speed_test(count=0):
    try:
        while True:
            count += 1
    except StopThread:
        print('Count =', count)

if __name__ == '__main__':
    main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question, and the following appears both to resolve the issue perfectly for me and to let me do some basic thread-state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit

def do_work():

    i = 0
    @atexit.register
    def goodbye():
        print ("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
               (i, threading.currentThread().ident))

    while True:
        print i
        i += 1
        time.sleep(1)

t = threading.Thread(target=do_work)
t.daemon = True
t.start()

def after_timeout():
    print "KILL MAIN THREAD: %s" % threading.currentThread().ident
    raise SystemExit

threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the Python source (Modules/threadmodule.c and Python/thread_pthread.h) and you can see that Thread.ident is a pthread_t type, so you can do anything pthread can do in Python by using libpthread.
The following workaround can be used to kill a thread:
kill_threads = False

def doSomething():
    global kill_threads
    while True:
        if kill_threads:
            thread.exit()
        ......
        ......

thread.start_new_thread(doSomething, ())
This can be used even for terminating threads whose code is written in another module, from the main thread. We can declare a global variable in that module and use it to terminate the thread(s) spawned in that module.
I usually use this to terminate all the threads at program exit. This might not be the perfect way to terminate thread(s), but it could help.
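For Python 3, where the old thread module is gone, the same flag idea can be sketched with threading (doSomething's body is a placeholder):
import threading
import time

kill_threads = False

def doSomething():
    while True:
        if kill_threads:
            return          # returning from the target ends the thread
        time.sleep(0.1)     # placeholder for real work

t = threading.Thread(target=doSomething)
t.start()
time.sleep(1)
kill_threads = True
t.join()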
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
    """
    thread: a threading.Thread object
    """
    thread_id = thread.ident
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
    if res > 1:
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
        print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add is that if you read the official documentation of Python's threading library, it recommends avoiding the use of "demonic" threads when you don't want threads to end abruptly, and using the flag that Paolo Rovelli mentioned instead.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends on your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check the process status and terminate() to finish them (you also avoid GIL problems). But you can sometimes find more problems when you execute your code on Windows.
And always remember that if you have "live threads", the Python interpreter will keep running to wait for them. (Because of this, daemonic threads can help if it doesn't matter that they end abruptly.)
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading module's functionality -- allows one thread to raise exceptions in the context of another thread. By raising SystemExit, you can finally kill Python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class Thread(threading.Thread):
    def raise_exc(self, excobj):
        assert self.isAlive(), "thread must be started"
        for tid, tobj in threading._active.items():
            if tobj is self:
                _async_raise(tid, excobj)
                return

        # the thread was alive when we entered the loop, but was not found
        # in the dict, hence it must have been already terminated. should we raise
        # an exception here? silently ignore?

    def terminate(self):
        # must raise the SystemExit type, instead of a SystemExit() instance
        # due to a bug in PyThreadState_SetAsyncExc
        self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing Python bytecode. If your thread calls a native/built-in blocking function, the exception will be raised only when execution returns to the Python code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception. You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted. For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this feature is not likely to be implementation-agnostic, it may be kept unexposed.
Assuming that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread

def doit(id=0):
    doit.stop = 0
    print("start id:%d" % id)
    while 1:
        time.sleep(1)
        print(".")
        if doit.stop == id:
            doit.stop = 0
            break
    print("end thread %d" % id)

t5 = Thread(target=doit, args=(5,))
t6 = Thread(target=doit, args=(6,))

t5.start(); t6.start()
time.sleep(2)
doit.stop = 5  # kill t5
time.sleep(2)
doit.stop = 6  # kill t6
The nice thing here is that you can have multiple threads of the same and of different functions, and stop them all by functionname.stop.
If you want to have only one thread of a function, then you don't need to remember the id. Just stop if doit.stop > 0.
Just to build on @SCB's idea (which was exactly what I needed), here is a KillableThread subclass with a customized function:
from threading import Thread, Event
import time

class KillableThread(Thread):
    def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
        super().__init__(None, target, name, args, kwargs)
        self._kill = Event()
        self._interval = sleep_interval
        print(self._target)

    def run(self):
        while True:
            # Call custom function with arguments
            self._target(*self._args)

            # If no kill signal is set, sleep for the interval,
            # If kill signal comes in while sleeping, immediately
            # wake up and handle
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break

        print("Killing Thread")

    def kill(self):
        self._kill.set()

if __name__ == '__main__':
    def print_msg(msg):
        print(msg)

    t = KillableThread(10, print_msg, args=("hello world",))
    t.start()
    time.sleep(6)
    print("About to kill thread")
    t.kill()
Naturally, as with @SCB's example, the thread doesn't wait for a new loop iteration to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting 4 more seconds for the thread to complete (since we have already slept for 6 seconds).
The second argument in the KillableThread constructor is your custom function (print_msg here). The args argument holds the arguments that will be used when calling the function (("hello world",) here).
Python version: 3.8
Use a daemon thread to execute what we want; if we want the daemon thread to be terminated, all we need is to make its parent thread exit, and then the system will terminate the daemon thread that the parent thread created.
It also supports coroutines and coroutine functions.
def main():
    start_time = time.perf_counter()
    t1 = ExitThread(time.sleep, (10,), debug=False)
    t1.start()
    time.sleep(0.5)
    t1.exit()
    try:
        print(t1.result_future.result())
    except concurrent.futures.CancelledError:
        pass
    end_time = time.perf_counter()
    print(f"time cost {end_time - start_time:0.2f}")
Below is the ExitThread source code:
import concurrent.futures
import threading
import typing
import asyncio

class _WorkItem(object):
    """ concurrent\futures\thread.py
    """
    def __init__(self, future, fn, args, kwargs, *, debug=None):
        self._debug = debug
        self.future = future
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def run(self):
        if self._debug:
            print("ExitThread._WorkItem run")
        if not self.future.set_running_or_notify_cancel():
            return

        try:
            coroutine = None
            if asyncio.iscoroutinefunction(self.fn):
                coroutine = self.fn(*self.args, **self.kwargs)
            elif asyncio.iscoroutine(self.fn):
                coroutine = self.fn
            if coroutine is None:
                result = self.fn(*self.args, **self.kwargs)
            else:
                result = asyncio.run(coroutine)
            if self._debug:
                print("_WorkItem done")
        except BaseException as exc:
            self.future.set_exception(exc)
            # Break a reference cycle with the exception 'exc'
            self = None
        else:
            self.future.set_result(result)

class ExitThread:
    """ Like a stoppable thread

    Using coroutine for target then exit before running may cause RuntimeWarning.
    """

    def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
                 , args=(), kwargs={}, *, daemon=None, debug=None):
        #
        self._debug = debug
        self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
                                               , daemon=daemon)
        self._child_daemon_thread = None
        self.result_future = concurrent.futures.Future()
        self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
        self._parent_thread_exit_lock = threading.Lock()
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock_released = False  # When done it will be True
        self._started = False
        self._exited = False
        self.result_future.add_done_callback(self._release_parent_thread_exit_lock)

    def _parent_thread_run(self):
        self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
                                                     , name="ExitThread_child_daemon_thread"
                                                     , daemon=True)
        self._child_daemon_thread.start()
        # Block manager thread
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock.release()
        if self._debug:
            print("ExitThread._parent_thread_run exit")

    def _release_parent_thread_exit_lock(self, _future):
        if self._debug:
            print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
        if not self._parent_thread_exit_lock_released:
            self._parent_thread_exit_lock_released = True
            self._parent_thread_exit_lock.release()

    def _child_daemon_thread_run(self):
        self._workItem.run()

    def start(self):
        if self._debug:
            print(f"ExitThread.start {self._started}")
        if not self._started:
            self._started = True
            self._parent_thread.start()

    def exit(self):
        if self._debug:
            print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
        if self._parent_thread_exit_lock_released:
            return
        if not self._exited:
            self._exited = True
            if not self.result_future.cancel():
                if self.result_future.running():
                    self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in @Kozyarchuk's answer, installing a trace works. Since that answer contained no code, here is a working, ready-to-use example:
import sys, threading, time

class TraceThread(threading.Thread):
    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        self.killed = False

    def start(self):
        self._run = self.run
        self.run = self.settrace_and_run
        threading.Thread.start(self)

    def settrace_and_run(self):
        sys.settrace(self.globaltrace)
        self._run()

    def globaltrace(self, frame, event, arg):
        return self.localtrace if event == 'call' else None

    def localtrace(self, frame, event, arg):
        if self.killed and event == 'line':
            raise SystemExit()
        return self.localtrace

def f():
    while True:
        print('1')
        time.sleep(2)
        print('2')
        time.sleep(2)
        print('3')
        time.sleep(2)

t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)

pthread_kill(thread.ident, SIGTSTP)
Result:
0
1
2
3
4
[14]+ Stopped
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, events, etc. is the sanest and most secure way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work. This would be done by equipping the thread with a ØMQ socket and polling on that socket for a message saying that it should cancel.
The link also provides an example of multi-threaded Python code with ØMQ.
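A hedged sketch of that pattern with pyzmq, using a PAIR socket over inproc; the endpoint name and the work placeholder are arbitrary choices for this example:
import threading
import zmq

def worker(context):
    control = context.socket(zmq.PAIR)
    control.connect("inproc://worker-control")
    poller = zmq.Poller()
    poller.register(control, zmq.POLLIN)
    while True:
        # Poll the control socket for up to 100 ms, then do a slice of work.
        events = dict(poller.poll(timeout=100))
        if control in events:
            if control.recv_string() == "cancel":
                break
        do_a_slice_of_work()      # placeholder for the thread's real work
    control.close()

def do_a_slice_of_work():
    pass

context = zmq.Context.instance()
control = context.socket(zmq.PAIR)
control.bind("inproc://worker-control")   # bind before the worker connects

t = threading.Thread(target=worker, args=(context,))
t.start()

control.send_string("cancel")             # tell the worker to stop
t.join()
control.close()
context.term()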
This seems to work with pywin32 on Windows 7:
my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
If you really need the ability to kill a sub-task, use an alternate implementation. multiprocessing and gevent both support indiscriminately killing a "thread".
Python's threading does not support cancellation. Do not even try. Your code is very likely to deadlock, corrupt or leak memory, or have other unintended "interesting" hard-to-debug effects which happen rarely and nondeterministically.
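For instance, a small gevent sketch (the sub-task body is a placeholder); kill() raises GreenletExit inside the greenlet, so finally blocks still run:
import gevent

def sub_task():
    try:
        while True:
            gevent.sleep(0.1)    # placeholder work; yields to the gevent hub
    finally:
        print('sub-task cleaned up')

g = gevent.spawn(sub_task)
gevent.sleep(1)
g.kill()      # raises GreenletExit inside sub_task
g.join()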
You can execute your command in a process and then kill it using the process id.
I needed to sync between two threads, one of which does not return by itself.
import os
import signal
import subprocess
from threading import Thread

processIds = []

def executeRecord(command):
    print(command)
    process = subprocess.Popen(command, stdout=subprocess.PIPE)
    processIds.append(process.pid)
    print(processIds[0])
    # Command that doesn't return by itself
    process.stdout.read().decode("utf-8")
    return

def recordThread(command, timeOut):
    thread = Thread(target=executeRecord, args=(command,))
    thread.start()
    thread.join(timeOut)
    os.kill(processIds.pop(), signal.SIGINT)
    return
The simplest way is this:
from threading import Thread
from time import sleep

def do_something():
    global thread_work
    while thread_work:
        print('doing something')
        sleep(5)
    print('Thread stopped')

thread_work = True
Thread(target=do_something).start()
sleep(5)
thread_work = False
This is a bad answer, see the comments
Here's how to do it:
from threading import *

...

for thread in enumerate():
    if thread.isAlive():
        try:
            thread._Thread__stop()
        except:
            print(str(thread.getName()) + ' could not be terminated')
Give it a few seconds and your thread should be stopped. Also check the thread._Thread__delete() method.
I'd recommend a thread.quit() method for convenience. For example, if you have a socket in your thread, I'd recommend creating a quit() method in your socket-handle class that terminates the socket and then runs thread._Thread__stop() inside your quit().
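A sketch of that quit() idea without the private _Thread__stop() call (which no longer exists in Python 3); the class name is made up, the OS picks a free port, and a stop event plus a socket timeout is used so accept() wakes up to re-check the flag:
import socket
import threading

class SocketHandler(threading.Thread):
    def __init__(self):
        super().__init__()
        self._stop_event = threading.Event()
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
        self.server.listen(1)
        self.server.settimeout(0.5)          # so accept() wakes up periodically

    def run(self):
        while not self._stop_event.is_set():
            try:
                conn, addr = self.server.accept()
                conn.close()                 # placeholder: handle the client here
            except socket.timeout:
                continue
        self.server.close()

    def quit(self):
        self._stop_event.set()

h = SocketHandler()
h.start()
h.quit()
h.join()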

Run only one Instance of a Thread

I am pretty new to Python and have a question about threading.
I have one function that is called pretty often. This function starts another function in a new Thread.
def calledOften(id):
    t = threading.Thread(target=doit, args=(id,))
    t.start()

def doit(arg):
    while True:
        # Long running function that is using arg
Every time calledOften is called, a new Thread is created. My goal is to always terminate the last running thread, so that at all times there is only one running doit() function.
What I tried:
How to stop a looping thread in Python?
def calledOften(id):
    t = threading.Thread(target=doit, args=(id,))
    t.start()
    time.sleep(5)
    t.do_run = False
This code (with a modified doit function) worked for me to stop the thread after 5 seconds.
But I can not call t.do_run = False before I start the new thread... That's pretty obvious because it is not defined...
Does somebody know how to stop the last running thread and start a new one?
Thank you ;)
I think you can decide, from inside the thread itself, when to terminate its execution. That should not create any problems for you. You can think of a threading-manager approach - something like below:
import threading

class DoIt(threading.Thread):
    def __init__(self, id, stop_flag):
        super().__init__()
        self.id = id
        self.stop_flag = stop_flag

    def run(self):
        while not self.stop_flag():
            pass  # do something

class CalledOftenManager:
    __stop_run = False
    __instance = None

    @classmethod
    def _stop_flag(cls):
        return cls.__stop_run

    @classmethod
    def calledOften(cls, id):
        if cls.__instance is not None:
            cls.__stop_run = True
            while cls.__instance.is_alive():
                pass  # wait for the thread to terminate
            cls.__stop_run = False
        cls.__instance = DoIt(id, cls._stop_flag)
        cls.__instance.start()

# Call Manager always
CalledOftenManager.calledOften(1)
CalledOftenManager.calledOften(2)
CalledOftenManager.calledOften(3)
Now, what I tried here is to make a controller for calling the DoIt thread. It's one approach to achieve what you need.
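A lighter-weight alternative sketch using threading.Event directly (the worker body is a placeholder): remember the last thread and its stop event, signal it, join it, then start the new one.
import threading
import time

_last_thread = None
_last_stop = None

def doit(arg, stop_event):
    while not stop_event.is_set():
        time.sleep(0.1)          # placeholder for the long-running work using arg

def calledOften(id):
    global _last_thread, _last_stop
    if _last_thread is not None:
        _last_stop.set()         # ask the previous doit() to finish
        _last_thread.join()      # and wait until it has
    _last_stop = threading.Event()
    _last_thread = threading.Thread(target=doit, args=(id, _last_stop))
    _last_thread.start()

calledOften(1)
calledOften(2)
calledOften(3)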

Killing all threads Python [duplicate]

Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self, *args, **kwargs):
super(StoppableThread, self).__init__(*args, **kwargs)
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
def _async_raise(tid, exctype):
'''Raises an exception in the threads with id tid'''
if not inspect.isclass(exctype):
raise TypeError("Only types can be raised (not instances)")
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
ctypes.py_object(exctype))
if res == 0:
raise ValueError("invalid thread id")
elif res != 1:
# "if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"
ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
raise SystemError("PyThreadState_SetAsyncExc failed")
class ThreadWithExc(threading.Thread):
'''A thread class that supports raising an exception in the thread from
another thread.
'''
def _get_my_tid(self):
"""determines this (self's) thread id
CAREFUL: this function is executed in the context of the caller
thread, to get the identity of the thread represented by this
instance.
"""
if not self.isAlive():
raise threading.ThreadError("the thread is not active")
# do we have it cached?
if hasattr(self, "_thread_id"):
return self._thread_id
# no, look for it in the _active dict
for tid, tobj in threading._active.items():
if tobj is self:
self._thread_id = tid
return tid
# TODO: in python 2.6, there's a simpler way to do: self.ident
raise AssertionError("could not determine the thread's id")
def raiseExc(self, exctype):
"""Raises the given exception type in the context of this thread.
If the thread is busy in a system call (time.sleep(),
socket.accept(), ...), the exception is simply ignored.
If you are sure that your exception should terminate the thread,
one way to ensure that it works is:
t = ThreadWithExc( ... )
...
t.raiseExc( SomeException )
while t.isAlive():
time.sleep( 0.1 )
t.raiseExc( SomeException )
If the exception is to be caught by the thread, you need a way to
check that your thread has caught it.
CAREFUL: this function is executed in the context of the
caller thread, to raise an exception in the context of the
thread represented by this instance.
"""
_async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
A multiprocessing.Process can p.terminate()
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O
The conversion is trivial: In related code replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue and add the required calls of p.terminate() to your parent process which wants to kill its child p
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program you can set the thread as a "daemon". see
Thread.daemon
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time
def do_work(id, stop):
print("I am thread", id)
while True:
print("I am thread {} doing something".format(id))
if stop():
print(" Exiting loop.")
break
print("Thread {}, signing off".format(id))
def main():
stop_threads = False
workers = []
for id in range(0,3):
tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
workers.append(tmp)
tmp.start()
time.sleep(3)
print('main: done sleeping; time to stop the threads.')
stop_threads = True
for worker in workers:
worker.join()
print('Finis.')
if __name__ == '__main__':
main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package , is to use the
multiprocessing package . Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Pay attention to use it while using a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe)
Note that the multiprocessing.Event and the multiprocessing.Semaphore work exactly in the same way of the threading.Event and the threading.Semaphore respectively. In fact, the first ones are clones of the latters.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) will finish its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use daemon even with multiprocessing. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please, note that sys.exit() and os.kill() are not choices.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
"""Terminates a python thread from another thread.
:param thread: a threading.Thread instance
"""
if not thread.isAlive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
If you are explicitly calling time.sleep() as part of your thread (say polling some external service), an improvement upon Phillipe's method is to use the timeout in the event's wait() method wherever you sleep()
For example:
import threading
class KillableThread(threading.Thread):
def __init__(self, sleep_interval=1):
super().__init__()
self._kill = threading.Event()
self._interval = sleep_interval
def run(self):
while True:
print("Do Something")
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program in longer intervals of sleep, the thread is stopped almost immediately (when you would otherwise be sleep()ing) and in my opinion, the code for handling exit is significantly simpler.
You can kill a thread by installing trace into the thread that will exit the thread. See attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
A way could be to introduce a "try" block into the thread's cycle and to throw an exception when you want to stop the thread (for example a break/return/... that stops your for/while/...).
I've used this on my app and it works...
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time
class StopThread(StopIteration):
pass
threading.SystemExit = SystemExit, StopThread
class Thread2(threading.Thread):
def stop(self):
self.__stop = True
def _bootstrap(self):
if threading._trace_hook is not None:
raise ValueError('Cannot run thread with tracing!')
self.__stop = False
sys.settrace(self.__trace)
super()._bootstrap()
def __trace(self, frame, event, arg):
if self.__stop:
raise StopThread()
return self.__trace
class Thread3(threading.Thread):
def _bootstrap(self, stop_thread=False):
def stop():
nonlocal stop_thread
stop_thread = True
self.stop = stop
def tracer(*_):
if stop_thread:
raise StopThread()
return tracer
sys.settrace(tracer)
super()._bootstrap()
###############################################################################
def main():
test1 = Thread2(target=printer)
test1.start()
time.sleep(1)
test1.stop()
test1.join()
test2 = Thread2(target=speed_test)
test2.start()
time.sleep(1)
test2.stop()
test2.join()
test3 = Thread3(target=speed_test)
test3.start()
time.sleep(1)
test3.stop()
test3.join()
def printer():
while True:
print(time.time() % 1)
time.sleep(0.1)
def speed_test(count=0):
try:
while True:
count += 1
except StopThread:
print('Count =', count)
if __name__ == '__main__':
main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit
def do_work():
i = 0
#atexit.register
def goodbye():
print ("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
(i, threading.currentThread().ident))
while True:
print i
i += 1
time.sleep(1)
t = threading.Thread(target=do_work)
t.daemon = True
t.start()
def after_timeout():
print "KILL MAIN THREAD: %s" % threading.currentThread().ident
raise SystemExit
threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the python source (Modules/threadmodule.c and Python/thread_pthread.h) you can see the Thread.ident is an pthread_t type, so you can do anything pthread can do in python use libpthread.
Following workaround can be used to kill a thread:
kill_threads = False
def doSomething():
global kill_threads
while True:
if kill_threads:
thread.exit()
......
......
thread.start_new_thread(doSomething, ())
This can be used even for terminating threads, whose code is written in another module, from main thread. We can declare a global variable in that module and use it to terminate thread/s spawned in that module.
I usually use this to terminate all the threads at the program exit. This might not be the perfect way to terminate thread/s but could help.
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
"""
thread: a threading.Thread object
"""
thread_id = thread.ident
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add is that if you read official documentation in threading lib Python, it's recommended to avoid use of "demonic" threads, when you don't want threads end abruptly, with the flag that Paolo Rovelli mentioned.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends of your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check process status and "terminate" for finish them (Also you avoid GIL problems). But you can find more problems, sometimes, when you execute your code in Windows.
And always remember that if you have "live threads", the Python interpreter will be running for wait them. (Because of this daemonic can help you if don't matter abruptly ends).
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading's module functionality --
allows one thread to raise exceptions in the context of another
thread. By raising SystemExit, you can finally kill python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")
class Thread(threading.Thread):
def raise_exc(self, excobj):
assert self.isAlive(), "thread must be started"
for tid, tobj in threading._active.items():
if tobj is self:
_async_raise(tid, excobj)
return
# the thread was alive when we entered the loop, but was not found
# in the dict, hence it must have been already terminated. should we raise
# an exception here? silently ignore?
def terminate(self):
# must raise the SystemExit type, instead of a SystemExit() instance
# due to a bug in PyThreadState_SetAsyncExc
self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing python bytecode. If your thread calls a native/built-in blocking function, the
exception will be raised only when execution returns to the python
code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this
feature is not likely to be implementation-agnostic, it may be kept
unexposed.
Asuming, that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread
def doit(id=0):
doit.stop=0
print("start id:%d"%id)
while 1:
time.sleep(1)
print(".")
if doit.stop==id:
doit.stop=0
break
print("end thread %d"%id)
t5=Thread(target=doit, args=(5,))
t6=Thread(target=doit, args=(6,))
t5.start() ; t6.start()
time.sleep(2)
doit.stop =5 #kill t5
time.sleep(2)
doit.stop =6 #kill t6
The nice thing is here, you can have multiple of same and different functions, and stop them all by functionname.stop
If you want to have only one thread of the function then you don't need to remember the id. Just stop, if doit.stop > 0.
Just to build up on #SCB's idea (which was exactly what I needed) to create a KillableThread subclass with a customized function:
from threading import Thread, Event
class KillableThread(Thread):
def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
super().__init__(None, target, name, args, kwargs)
self._kill = Event()
self._interval = sleep_interval
print(self._target)
def run(self):
while True:
# Call custom function with arguments
self._target(*self._args)
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
if __name__ == '__main__':
def print_msg(msg):
print(msg)
t = KillableThread(10, print_msg, args=("hello world"))
t.start()
time.sleep(6)
print("About to kill thread")
t.kill()
Naturally, like with #SBC, the thread doesn't wait to run a new loop to stop. In this example, you would see the "Killing Thread" message printed right after the "About to kill thread" instead of waiting for 4 more seconds for the thread to complete (since we have slept for 6 seconds already).
Second argument in KillableThread constructor is your custom function (print_msg here). Args argument are the arguments that will be used when calling the function (("hello world")) here.
Python version: 3.8
Using daemon thread to execute what we wanted, if we want to daemon thread be terminated, all we need is making parent thread exit, then system will terminate daemon thread which parent thread created.
Also support coroutine and coroutine function.
def main():
start_time = time.perf_counter()
t1 = ExitThread(time.sleep, (10,), debug=False)
t1.start()
time.sleep(0.5)
t1.exit()
try:
print(t1.result_future.result())
except concurrent.futures.CancelledError:
pass
end_time = time.perf_counter()
print(f"time cost {end_time - start_time:0.2f}")
below is ExitThread source code
import concurrent.futures
import threading
import typing
import asyncio
class _WorkItem(object):
""" concurrent\futures\thread.py
"""
def __init__(self, future, fn, args, kwargs, *, debug=None):
self._debug = debug
self.future = future
self.fn = fn
self.args = args
self.kwargs = kwargs
def run(self):
if self._debug:
print("ExitThread._WorkItem run")
if not self.future.set_running_or_notify_cancel():
return
try:
coroutine = None
if asyncio.iscoroutinefunction(self.fn):
coroutine = self.fn(*self.args, **self.kwargs)
elif asyncio.iscoroutine(self.fn):
coroutine = self.fn
if coroutine is None:
result = self.fn(*self.args, **self.kwargs)
else:
result = asyncio.run(coroutine)
if self._debug:
print("_WorkItem done")
except BaseException as exc:
self.future.set_exception(exc)
# Break a reference cycle with the exception 'exc'
self = None
else:
self.future.set_result(result)
class ExitThread:
""" Like a stoppable thread
Using coroutine for target then exit before running may cause RuntimeWarning.
"""
def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
, args=(), kwargs={}, *, daemon=None, debug=None):
#
self._debug = debug
self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
, daemon=daemon)
self._child_daemon_thread = None
self.result_future = concurrent.futures.Future()
self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
self._parent_thread_exit_lock = threading.Lock()
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock_released = False # When done it will be True
self._started = False
self._exited = False
self.result_future.add_done_callback(self._release_parent_thread_exit_lock)
def _parent_thread_run(self):
self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
, name="ExitThread_child_daemon_thread"
, daemon=True)
self._child_daemon_thread.start()
# Block manager thread
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock.release()
if self._debug:
print("ExitThread._parent_thread_run exit")
def _release_parent_thread_exit_lock(self, _future):
if self._debug:
print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
if not self._parent_thread_exit_lock_released:
self._parent_thread_exit_lock_released = True
self._parent_thread_exit_lock.release()
def _child_daemon_thread_run(self):
self._workItem.run()
def start(self):
if self._debug:
print(f"ExitThread.start {self._started}")
if not self._started:
self._started = True
self._parent_thread.start()
def exit(self):
if self._debug:
print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
if self._parent_thread_exit_lock_released:
return
if not self._exited:
self._exited = True
if not self.result_future.cancel():
if self.result_future.running():
self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in #Kozyarchuk's answer, installing trace works. Since this answer contained no code, here is a working ready-to-use example:
import sys, threading, time
class TraceThread(threading.Thread):
def __init__(self, *args, **keywords):
threading.Thread.__init__(self, *args, **keywords)
self.killed = False
def start(self):
self._run = self.run
self.run = self.settrace_and_run
threading.Thread.start(self)
def settrace_and_run(self):
sys.settrace(self.globaltrace)
self._run()
def globaltrace(self, frame, event, arg):
return self.localtrace if event == 'call' else None
def localtrace(self, frame, event, arg):
if self.killed and event == 'line':
raise SystemExit()
return self.localtrace
def f():
while True:
print('1')
time.sleep(2)
print('2')
time.sleep(2)
print('3')
time.sleep(2)
t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal. Note that SIGTSTP suspends the entire process rather than terminating just the thread (see the [14]+ Stopped output below).
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep
def target():
for num in count():
print(num)
sleep(1)
thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, and events is the sanest and most secure way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work, which is done by equipping the thread with a ØMQ socket and polling that socket for a message saying it should cancel.
The link also provides an example of multi-threaded Python code with ØMQ.
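As a rough illustration of this pattern, here is a minimal sketch using the pyzmq bindings; the inproc endpoint name, the PAIR sockets, and the "cancel" message are illustrative choices, not part of the guide:
import threading
import time
import zmq

context = zmq.Context.instance()

def worker():
    sock = context.socket(zmq.PAIR)
    sock.connect("inproc://control")
    while True:
        # Poll the control socket for up to 100 ms; any message means "cancel".
        if sock.poll(timeout=100):
            print("worker: cancel received, cleaning up and exiting")
            break
        # ... do one small chunk of real work here ...
        time.sleep(0.1)
    sock.close()

control = context.socket(zmq.PAIR)
control.bind("inproc://control")   # bind before the worker connects (inproc requirement)

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
control.send(b"cancel")            # ask the worker to stop
t.join()
control.close()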
This seems to work with pywin32 on Windows 7 (note that _Thread__stop is a private CPython 2 detail and no longer exists in Python 3):
my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
If you really need the ability to kill a sub-task, use an alternate implementation. multiprocessing and gevent both support indiscriminately killing a "thread".
Python's threading does not support cancellation. Do not even try. Your code is very likely to deadlock, corrupt or leak memory, or have other unintended "interesting" hard-to-debug effects which happen rarely and nondeterministically.
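For the gevent route, here is a minimal sketch (assuming gevent is installed; the worker loop is illustrative). Unlike threads, greenlets can be killed directly:
import gevent

def worker():
    while True:
        print("working")
        gevent.sleep(1)   # cooperative sleep, gives kill() a chance to act

g = gevent.spawn(worker)
gevent.sleep(3)
g.kill()                  # raises GreenletExit inside the greenlet
g.join()
print("greenlet dead:", g.dead)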
You can execute your command in a process and then kill it using the process id.
I needed to synchronize two threads, one of which does not return by itself.
import os
import signal
import subprocess
from threading import Thread

processIds = []

def executeRecord(command):
    print(command)
    process = subprocess.Popen(command, stdout=subprocess.PIPE)
    processIds.append(process.pid)
    print(processIds[0])
    # Command that doesn't return by itself
    process.stdout.read().decode("utf-8")

def recordThread(command, timeOut):
    thread = Thread(target=executeRecord, args=(command,))
    thread.start()
    thread.join(timeOut)
    # After the timeout, kill the child process so the blocked thread can finish
    os.kill(processIds.pop(), signal.SIGINT)
The simplest way is this:
from threading import Thread
from time import sleep
def do_something():
global thread_work
while thread_work:
print('doing something')
sleep(5)
print('Thread stopped')
thread_work = True
Thread(target=do_something).start()
sleep(5)
thread_work = False
This is a bad answer, see the comments
Here's how to do it:
from threading import *
...
for thread in enumerate():
if thread.isAlive():
try:
thread._Thread__stop()
except:
print(str(thread.getName()) + ' could not be terminated')
Give it a few seconds then your thread should be stopped. Check also the thread._Thread__delete() method.
I'd recommend a thread.quit() method for convenience. For example, if you have a socket in your thread, I'd recommend creating a quit() method in your socket-handler class that terminates the socket and then runs thread._Thread__stop() inside your quit().

Kill a Python Thread [duplicate]

Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self, *args, **kwargs):
super(StoppableThread, self).__init__(*args, **kwargs)
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
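A minimal usage sketch for the class above (the Worker subclass and its loop body are illustrative, not part of the original answer):
import time

class Worker(StoppableThread):
    def run(self):
        while not self.stopped():       # check the flag at regular intervals
            print("working")
            time.sleep(1)
        print("worker exiting cleanly")

w = Worker()
w.start()
time.sleep(3)
w.stop()    # ask the thread to exit
w.join()    # wait for it to actually finish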
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it.
The following code allows (with some restrictions) raising an exception in a Python thread:
import ctypes
import inspect
import threading

def _async_raise(tid, exctype):
'''Raises an exception in the threads with id tid'''
if not inspect.isclass(exctype):
raise TypeError("Only types can be raised (not instances)")
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
ctypes.py_object(exctype))
if res == 0:
raise ValueError("invalid thread id")
elif res != 1:
# "if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"
ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
raise SystemError("PyThreadState_SetAsyncExc failed")
class ThreadWithExc(threading.Thread):
'''A thread class that supports raising an exception in the thread from
another thread.
'''
def _get_my_tid(self):
"""determines this (self's) thread id
CAREFUL: this function is executed in the context of the caller
thread, to get the identity of the thread represented by this
instance.
"""
if not self.is_alive():  # isAlive() in Python 2
raise threading.ThreadError("the thread is not active")
# do we have it cached?
if hasattr(self, "_thread_id"):
return self._thread_id
# no, look for it in the _active dict
for tid, tobj in threading._active.items():
if tobj is self:
self._thread_id = tid
return tid
# TODO: in python 2.6, there's a simpler way to do: self.ident
raise AssertionError("could not determine the thread's id")
def raiseExc(self, exctype):
"""Raises the given exception type in the context of this thread.
If the thread is busy in a system call (time.sleep(),
socket.accept(), ...), the exception is simply ignored.
If you are sure that your exception should terminate the thread,
one way to ensure that it works is:
t = ThreadWithExc( ... )
...
t.raiseExc( SomeException )
while t.isAlive():
time.sleep( 0.1 )
t.raiseExc( SomeException )
If the exception is to be caught by the thread, you need a way to
check that your thread has caught it.
CAREFUL: this function is executed in the context of the
caller thread, to raise an exception in the context of the
thread represented by this instance.
"""
_async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
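For example, a hedged usage sketch of that pattern with the ThreadWithExc class above (JobInterrupt and the worker loop are illustrative names, not part of the recipe):
import time

class JobInterrupt(Exception):
    pass

class Worker(ThreadWithExc):
    def run(self):
        try:
            while True:
                time.sleep(0.1)          # pretend to work in small steps
        except JobInterrupt:
            # perform cleanup here: close files, release locks, ...
            print("interrupted, cleaned up")

w = Worker()
w.start()
time.sleep(1)
w.raiseExc(JobInterrupt)                 # delivered when the thread next runs bytecode
w.join()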
A multiprocessing.Process can be terminated with p.terminate().
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O
The conversion is trivial: in the related code, replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue, and add the required calls of p.terminate() to the parent process that wants to kill its child p.
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program, you can set the thread as a "daemon". See:
Thread.daemon
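A minimal sketch of this (the sleeping worker is illustrative): because the thread is a daemon, the program -- and the thread with it -- ends as soon as the main thread returns.
import threading
import time

# A daemon thread does not keep the interpreter alive.
t = threading.Thread(target=lambda: time.sleep(1000), daemon=True)
t.start()
print("main is done; the daemon thread is terminated with the program")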
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time
def do_work(id, stop):
print("I am thread", id)
while True:
print("I am thread {} doing something".format(id))
if stop():
print(" Exiting loop.")
break
print("Thread {}, signing off".format(id))
def main():
stop_threads = False
workers = []
for id in range(0,3):
tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
workers.append(tmp)
tmp.start()
time.sleep(3)
print('main: done sleeping; time to stop the threads.')
stop_threads = True
for worker in workers:
worker.join()
print('Finis.')
if __name__ == '__main__':
main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package, is to use the
multiprocessing package. Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Pay attention when using a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe)
Note that multiprocessing.Event and multiprocessing.Semaphore work exactly the same way as threading.Event and threading.Semaphore, respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use the daemon flag even with multiprocessing. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not viable options.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
"""Terminates a python thread from another thread.
:param thread: a threading.Thread instance
"""
if not thread.isAlive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up, so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
If you are explicitly calling time.sleep() as part of your thread (say polling some external service), an improvement upon Phillipe's method is to use the timeout in the event's wait() method wherever you sleep()
For example:
import threading
class KillableThread(threading.Thread):
def __init__(self, sleep_interval=1):
super().__init__()
self._kill = threading.Event()
self._interval = sleep_interval
def run(self):
while True:
print("Do Something")
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program in longer sleep intervals, the thread is stopped almost immediately (when you would otherwise be sleep()ing), and, in my opinion, the code for handling the exit is significantly simpler.
You can kill a thread by installing a trace function into it that exits the thread. See the attached link for one possible implementation:
Kill a thread in Python
It is better if you don't kill a thread.
One way is to introduce a "try" block into the thread's loop and to raise an exception when you want to stop the thread (for example, a break/return/... that stops your for/while/...).
I've used this in my app and it works...
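A minimal sketch of that approach, assuming illustrative names (StopWorker, stop_requested):
import threading
import time

class StopWorker(Exception):
    pass

stop_requested = threading.Event()

def worker():
    try:
        while True:
            if stop_requested.is_set():
                raise StopWorker()   # jump out of the loop from anywhere inside it
            time.sleep(0.5)          # ... one chunk of work ...
    except StopWorker:
        print("worker stopped cleanly")

t = threading.Thread(target=worker)
t.start()
time.sleep(2)
stop_requested.set()
t.join()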
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time
class StopThread(StopIteration):
pass
threading.SystemExit = SystemExit, StopThread
class Thread2(threading.Thread):
def stop(self):
self.__stop = True
def _bootstrap(self):
if threading._trace_hook is not None:
raise ValueError('Cannot run thread with tracing!')
self.__stop = False
sys.settrace(self.__trace)
super()._bootstrap()
def __trace(self, frame, event, arg):
if self.__stop:
raise StopThread()
return self.__trace
class Thread3(threading.Thread):
def _bootstrap(self, stop_thread=False):
def stop():
nonlocal stop_thread
stop_thread = True
self.stop = stop
def tracer(*_):
if stop_thread:
raise StopThread()
return tracer
sys.settrace(tracer)
super()._bootstrap()
###############################################################################
def main():
test1 = Thread2(target=printer)
test1.start()
time.sleep(1)
test1.stop()
test1.join()
test2 = Thread2(target=speed_test)
test2.start()
time.sleep(1)
test2.stop()
test2.join()
test3 = Thread3(target=speed_test)
test3.start()
time.sleep(1)
test3.stop()
test3.join()
def printer():
while True:
print(time.time() % 1)
time.sleep(0.1)
def speed_test(count=0):
try:
while True:
count += 1
except StopThread:
print('Count =', count)
if __name__ == '__main__':
main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit
def do_work():
    i = 0

    @atexit.register
    def goodbye():
        print("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
              (i, threading.currentThread().ident))

    while True:
        print(i)
        i += 1
        time.sleep(1)

t = threading.Thread(target=do_work)
t.daemon = True
t.start()

def after_timeout():
    print("KILL MAIN THREAD: %s" % threading.currentThread().ident)
    raise SystemExit

threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the Python source (Modules/threadmodule.c and Python/thread_pthread.h) and you can see that Thread.ident is a pthread_t type, so you can do anything pthread can do in Python by using libpthread.
Following workaround can be used to kill a thread:
import thread  # in Python 3, use: import _thread as thread

kill_threads = False
def doSomething():
global kill_threads
while True:
if kill_threads:
thread.exit()
......
......
thread.start_new_thread(doSomething, ())
This can even be used to terminate threads whose code is written in another module, from the main thread: declare a global variable in that module and use it to terminate the thread(s) spawned in that module.
I usually use this to terminate all the threads at program exit. This might not be the perfect way to terminate threads, but it could help.
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
"""
thread: a threading.Thread object
"""
thread_id = thread.ident
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add: if you read the official documentation of Python's threading library, it recommends avoiding "daemonic" threads when you don't want threads to end abruptly, and instead using the flag that Paolo Rovelli mentioned.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends on your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check the process status and terminate() to finish them (you also avoid GIL problems). But you can sometimes run into more problems when you execute your code on Windows.
And always remember that if you have "live" non-daemon threads, the Python interpreter keeps running to wait for them. (Because of this, daemonic threads can help if abrupt ends don't matter.)
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading module's functionality -- it allows one thread to raise exceptions in the context of another thread. By raising SystemExit, you can finally kill Python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")
class Thread(threading.Thread):
def raise_exc(self, excobj):
assert self.isAlive(), "thread must be started"
for tid, tobj in threading._active.items():
if tobj is self:
_async_raise(tid, excobj)
return
# the thread was alive when we entered the loop, but was not found
# in the dict, hence it must have been already terminated. should we raise
# an exception here? silently ignore?
def terminate(self):
# must raise the SystemExit type, instead of a SystemExit() instance
# due to a bug in PyThreadState_SetAsyncExc
self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only while Python bytecode is executing. If your thread calls a native/built-in blocking function, the exception will be raised only when execution returns to the Python code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this feature is not likely to be implementation-agnostic, it may be kept unexposed.
Assuming that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread
def doit(id=0):
doit.stop=0
print("start id:%d"%id)
while 1:
time.sleep(1)
print(".")
if doit.stop==id:
doit.stop=0
break
print("end thread %d"%id)
t5=Thread(target=doit, args=(5,))
t6=Thread(target=doit, args=(6,))
t5.start() ; t6.start()
time.sleep(2)
doit.stop =5 #kill t5
time.sleep(2)
doit.stop =6 #kill t6
The nice thing here is that you can have multiple threads of the same and of different functions, and stop them all by functionname.stop.
If you want to have only one thread of the function, then you don't need to remember the id; just stop whenever doit.stop > 0.
Just to build up on #SCB's idea (which was exactly what I needed) to create a KillableThread subclass with a customized function:
import time
from threading import Thread, Event
class KillableThread(Thread):
def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
super().__init__(None, target, name, args, kwargs)
self._kill = Event()
self._interval = sleep_interval
print(self._target)
def run(self):
while True:
# Call custom function with arguments
self._target(*self._args)
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
if __name__ == '__main__':
def print_msg(msg):
print(msg)
t = KillableThread(10, print_msg, args=("hello world",))  # note the trailing comma: args must be a tuple
t.start()
time.sleep(6)
print("About to kill thread")
t.kill()
Naturally, as with #SCB's version, the thread doesn't wait for a new loop iteration to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting 4 more seconds for the thread to complete (since we have already slept for 6 seconds).
The second argument of the KillableThread constructor is your custom function (print_msg here). The args argument is the tuple of arguments that will be passed when calling the function (("hello world",) here).

How can I invoke a thread multiple times in Python?

I'm sorry if this is a stupid question. I am trying to use a number of multi-threading classes to finish different jobs, which involves invoking these threads many times and at different times. But I am not sure which method to use. The code looks like this:
class workers1(Thread):
def __init__(self):
Thread.__init__(self)
def run(self):
do some stuff
class workers2(Thread):
def __init__(self):
Thread.__init__(self)
def run(self):
do some stuff
class workers3(Thread):
def __init__(self):
Thread.__init__(self)
def run(self):
do some stuff
WorkerList1=[workers1(i) for i in range(X)]
WorkerList2=[workers2(i) for i in range(XX)]
WorkerList3=[workers3(i) for i in range(XXX)]
while True:
for thread in WorkerList1:
thread.run (start? join? or?)
for thread in WorkerList2:
thread.run (start? join? or?)
for thread in WorkerList3:
thread.run (start? join? or?)
do sth .
I am trying to have all the threads in all the WorkerLists start functioning at the same time, or at least start around the same time. After some time, once they have all terminated, I would like to invoke all the threads again.
If there were no loop, I could just use .start; but since I can only start a thread once, start apparently does not fit here. If I use run, it seems that all the threads start sequentially, not only the threads in the same list, but also threads from different lists.
Can anyone please help?
There are a lot of misconceptions here:
You can only start a specific instance of a thread once. But in your case, the for loop is looping over different instances of a thread, each instance being assigned to the variable thread in the loop, so there is no problem at all in calling the start() method on each thread. (You can think of the variable thread as an alias for the Thread() object instantiated in your list.)
run() is not the same as join(): calling run() performs as if you were programming sequentially. The run() method does not start a new thread; it simply executes the statements in the method, as for any other function call.
join() does not start executing anything: it only waits for a thread to finish. In order for join() to work properly for a thread, you have to call start() on that thread first.
Additionally, you should note that you cannot restart a thread once it has finished execution: you have to recreate the thread object for it to be started again. One workaround to get this working is to call Thread.__init__() at the end of the run() method. However, I would not recommend doing this since it will disallow the use of the join() method to detect the end of execution of the thread.
If you called thread.start() in the loops, you would actually start every thread only once, because all the entries in your lists are distinct thread objects (it does not matter that they belong to the same class). You should never call the run() method of a thread directly -- it is meant to be called by the start() method. Calling it directly does not run it in a separate thread.
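As a minimal sketch of the pattern recommended above (job1 and job2 are illustrative stand-ins for your worker classes): create fresh Thread objects for every round, start them all, then join them all before starting the next round.
import threading
import time

def job1():
    time.sleep(0.5)

def job2():
    time.sleep(0.3)

for round_no in range(3):
    # Fresh Thread objects every round, since a finished thread cannot be restarted.
    threads = [threading.Thread(target=job1) for _ in range(2)]
    threads += [threading.Thread(target=job2) for _ in range(3)]
    for t in threads:
        t.start()   # all threads start at (roughly) the same time
    for t in threads:
        t.join()    # wait for every thread before the next round
    print("round", round_no, "done")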
The code below creates a class that is just a Thread, but whose start() and run() methods call the Thread class's initialization again, so that the thread doesn't know it has already been started.
from threading import Thread
class MTThread(Thread):
def __init__(self, name = "", target = None):
self.mt_name = name
self.mt_target = target
Thread.__init__(self, name = name, target = target)
def start(self):
super().start()
Thread.__init__(self, name = self.mt_name, target = self.mt_target)
def run(self):
super().run()
Thread.__init__(self, name = self.mt_name, target = self.mt_target)
def code():
    # Some code
    pass
thread = MTThread(name = "SomeThread", target = code)
thread.start()
thread.start()
I had this same dilemma and came up with this solution which has worked perfectly for me. It also allows a thread-killing decorator to be used efficiently.
The key feature is the use of a thread refresher which is instantiated and .started in main. This thread-refreshing thread will run a function that instantiates and starts all other (real, task-performing) threads. Decorating the thread-refreshing function with a thread-killer allows you to kill all threads when a certain condition is met, such as main terminating.
import threading

# @ThreadKiller(arg)  # "what is this": placeholder for the thread-killing decorator mentioned above
def RefreshThreads():
threadTask1 = threading.Thread(name = "Task1", target = Task1, args = (anyArguments))
threadTask2 = threading.Thread(name = "Task2", target = Task2, args = (anyArguments))
threadTask1.start()
threadTask2.start()
#Main
while True:
#do stuff
threadRefreshThreads = threading.Thread(name = "RefreshThreads", target = RefreshThreads, args = ())
threadRefreshThreads.start()
from threading import Thread
from time import sleep
def runA():
while a==1:
print('A\n')
sleep(0.5)
if __name__ == "__main__":
a=1
t1 = Thread(target = runA)
t1.setDaemon(True)
t1.start()
sleep(2)
a=0
print(" now def runA stops")
sleep(3)
print("and now def runA continue")
a=1
t1 = Thread(target = runA)
t1.start()
sleep(2)
