I am preparing a Python multiprocessing tool where I use Process and Queue. The queue puts another script in a process to run in parallel. As a sanity check, in the queue, I want to check if there is any error happening in my other script and return a flag/message if there was an error (status = os.system() will run the process and status is a flag for the error). But I can't output errors from the queue/child in the consumer process to the parent process. Following are the main parts of my code (shortened):
import os
import time
from multiprocessing import Process, Queue, Lock

command_queue = Queue()
lock = Lock()

p = Process(target=producer, args=(command_queue, lock, test_config_list_path))

for i in range(consumer_num):
    c = Process(target=consumer, args=(command_queue, lock))
    consumers.append(c)

p.daemon = True
p.start()

for c in consumers:
    c.daemon = True
    c.start()

p.join()
for c in consumers:
    c.join()

if error_flag:
    Stop_this_process_and_send_a_message!

def producer(queue, lock, ...):
    for config_path in test_config_list_path:
        queue.put((config_path, process_to_be_queued))

def consumer(queue, lock):
    while True:
        elem = queue.get()
        if elem is None:
            return
        status = os.system(elem[1])
        if status:
            error_flag = 1
    time.sleep(3)
Now I want to get that error_flag and use it in the main code to handle things. But it seems I can't pass error_flag from the consumer (child) part to the main part of the code. I'd appreciate it if someone can help with this.
Given your update, I would also pass a multiprocessing.Event instance to your to_do process. This allows you to simply wait on the event in the main process, which will block until set is called on it. Naturally, when to_do or one of its threads detects a script error, it would call set on the event after setting error_flag.value to True. This wakes up the main process, which can then call terminate on the process, which will do what you want. On a normal completion of to_do, it is still necessary to call set on the event, since the main process blocks until the event has been set. But in this case the main process will just call join on the process.
Using a multiprocessing.Value instance alone would have required periodically checking its value in a loop, so I think waiting on a multiprocessing.Event is better. I have also made a couple of other updates to your code with comments, so please review them:
import multiprocessing
from ctypes import c_bool
...

def to_do(event, error_flag):
    # Run the tests
    wrapper_threads.main(event, error_flag)
    # on error or normal process completion:
    event.set()

def git_pull_change(path_to_repo):
    repo = Repo(path_to_repo)  # note: use the path_to_repo parameter here
    current = repo.head.commit
    repo.remotes.origin.pull()
    if current == repo.head.commit:
        print("Repo not changed. Sleep mode activated.")
        # Call to time.sleep(some_number_of_seconds) should go here, right?
        return False
    else:
        print("Repo changed. Start running the tests!")
        return True

def main():
    while True:
        status = git_pull_change(git_path)
        if status:
            # The repo was just pulled, so no point in doing it again:
            #repo = Repo(git_path)
            #repo.remotes.origin.pull()
            event = multiprocessing.Event()
            error_flag = multiprocessing.Value(c_bool, False, lock=False)
            process = multiprocessing.Process(target=to_do, args=(event, error_flag))
            process.start()
            # wait for an error or normal process completion:
            event.wait()
            if error_flag.value:
                print('Error! breaking the process!!!!!!!!!!!!!!!!!!!!!!!')
                process.terminate()  # Kill the process
            else:
                process.join()
            break
You should always tag multiprocessing questions with the platform you are running on. Since I do not see your process-creating code within an if __name__ == '__main__': block, I have to assume you are running on a platform that uses OS fork calls to create new processes, such as Linux.
That means your newly created processes inherit the value of error_flag when they are created but for all intents and purposes, if a process modifies this variable, it is modifying a local copy of this variable that exists in an address space that is unique to that process.
You need to create error_flag in shared memory and pass it as an argument to your process:
from multiprocessing import Value
from ctypes import c_bool
...

error_flag = Value(c_bool, False, lock=False)

for i in range(consumer_num):
    c = Process(target=consumer, args=(command_queue, lock, error_flag))
    consumers.append(c)
...

if error_flag.value:
    ...
    #Stop_this_process_and_send_a_message!

def consumer(queue, lock, error_flag):
    while True:
        elem = queue.get()
        if elem is None:
            return
        status = os.system(elem[1])
        if status:
            error_flag.value = True
    time.sleep(3)
But I have a few questions/comments for you. You have in your original code the following statement:
if error_flag:
    Stop_this_process_and_send_a_message!
But this statement is located after you have already joined all the started processes. So what processes are there to stop, and where are you sending a message to? (You have potentially multiple consumers, any of which might be setting the error_flag; by the way, there is no need to do this under a lock, since setting the value to True is an atomic action.) And since you are joining all your processes, i.e. waiting for them to complete, I am not sure why you are making them daemon processes. You are also passing a Lock instance to your producer and consumers, but it is not being used at all.
Your consumers return when they get a None record from the queue. So if you have N consumers, the last N records put on the queue need to be None sentinels.
I also see no need for having the producer process. The main process could just as well write all the records to the queue either before or even after it starts the consumer processes.
The call to time.sleep(3) you have at the end of function consumer is unreachable.
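To make those last points concrete, here is a minimal sketch (my own example, reusing the names from the code above: consumer, consumer_num, test_config_list_path, process_to_be_queued, which are assumed to exist) in which the main process acts as the producer and puts one None sentinel per consumer:

from multiprocessing import Process, Queue, Value
from ctypes import c_bool

if __name__ == '__main__':
    command_queue = Queue()
    error_flag = Value(c_bool, False, lock=False)

    # The lock is unused, so pass None for it:
    consumers = [Process(target=consumer, args=(command_queue, None, error_flag))
                 for _ in range(consumer_num)]
    for c in consumers:
        c.start()

    # The main process can act as the producer itself:
    for config_path in test_config_list_path:
        command_queue.put((config_path, process_to_be_queued))

    # One None sentinel per consumer so that every consumer eventually returns:
    for _ in range(consumer_num):
        command_queue.put(None)

    for c in consumers:
        c.join()

    if error_flag.value:
        print('At least one test script returned a non-zero status.')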
The above code summary is the inner process that runs some tests in parallel. I removed the def function part from it; just assume that is the wrapper_threads in the following code summary. Here I'll add the parent process, which checks a variable (let's assume a commit in my git repo). The following process is meant to run indefinitely, and when there is a change it will trigger the multiprocessing code in the main question:
def to_do():
    # Run the tests
    wrapper_threads.main()

def git_pull_change(path_to_repo):
    repo = Repo(path)
    current = repo.head.commit
    repo.remotes.origin.pull()
    if current == repo.head.commit:
        print("Repo not changed. Sleep mode activated.")
        return False
    else:
        print("Repo changed. Start running the tests!")
        return True

def main():
    process = None
    while True:
        status = git_pull_change(git_path)
        if status:
            repo = Repo(git_path)
            repo.remotes.origin.pull()
            process = multiprocessing.Process(target=to_do)
            process.start()
            if error_flag.value:
                print('Error! breaking the process!!!!!!!!!!!!!!!!!!!!!!!')
                os.system('pkill -U user XXX')
                break
Now I want to propagate that error_flag from the child process to this process and stop process XXX. The problem is that I don't know how to bring that error_flag to this (grand)parent process.
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
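For example, a minimal usage sketch (my own example, building on the StoppableThread class above) could look like this:

import time

class Worker(StoppableThread):
    def run(self):
        while not self.stopped():
            # ... do one chunk of work ...
            time.sleep(0.5)   # the flag is re-checked after every interval

w = Worker()
w.start()
time.sleep(2)
w.stop()    # request the exit
w.join()    # wait for the thread to finish cleanly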
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
import ctypes
import inspect
import threading

def _async_raise(tid, exctype):
    '''Raises an exception in the threads with id tid'''
    if not inspect.isclass(exctype):
        raise TypeError("Only types can be raised (not instances)")
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
                                                     ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res != 1:
        # "if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class ThreadWithExc(threading.Thread):
    '''A thread class that supports raising an exception in the thread from
    another thread.
    '''
    def _get_my_tid(self):
        """determines this (self's) thread id

        CAREFUL: this function is executed in the context of the caller
        thread, to get the identity of the thread represented by this
        instance.
        """
        if not self.isAlive():
            raise threading.ThreadError("the thread is not active")

        # do we have it cached?
        if hasattr(self, "_thread_id"):
            return self._thread_id

        # no, look for it in the _active dict
        for tid, tobj in threading._active.items():
            if tobj is self:
                self._thread_id = tid
                return tid

        # TODO: in python 2.6, there's a simpler way to do: self.ident
        raise AssertionError("could not determine the thread's id")

    def raiseExc(self, exctype):
        """Raises the given exception type in the context of this thread.

        If the thread is busy in a system call (time.sleep(),
        socket.accept(), ...), the exception is simply ignored.

        If you are sure that your exception should terminate the thread,
        one way to ensure that it works is:

            t = ThreadWithExc( ... )
            ...
            t.raiseExc( SomeException )
            while t.isAlive():
                time.sleep( 0.1 )
                t.raiseExc( SomeException )

        If the exception is to be caught by the thread, you need a way to
        check that your thread has caught it.

        CAREFUL: this function is executed in the context of the
        caller thread, to raise an exception in the context of the
        thread represented by this instance.
        """
        _async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
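A possible usage sketch of that pattern (my own example, building on the ThreadWithExc class above, which uses the old isAlive() spelling and so targets older Python versions; the file name and exception name are made up). If the thread is blocked in a system call, the raiseExc call may need the retry loop shown in the docstring above.

import time

class JobCancelled(Exception):
    pass

def worker():
    resource = open("scratch.txt", "w")   # hypothetical resource needing cleanup
    try:
        while True:
            resource.write("working...\n")
            time.sleep(0.1)
    except JobCancelled:
        pass                   # the interruption is expected
    finally:
        resource.close()       # cleanup still runs

t = ThreadWithExc(target=worker)
t.start()
time.sleep(1)
t.raiseExc(JobCancelled)   # interrupt the task
t.join()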
A multiprocessing.Process can p.terminate()
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O
The conversion is trivial: in the related code, replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue, and add the required calls of p.terminate() to the parent process that wants to kill its child p.
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program you can set the thread as a "daemon". see
Thread.daemon
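A tiny sketch of the daemon approach (my own example): the whole program exits when the main thread returns, taking the daemon thread with it.

import threading
import time

def background_work():
    while True:
        time.sleep(1)   # never returns on its own

t = threading.Thread(target=background_work)
t.daemon = True          # must be set before start()
t.start()

time.sleep(3)
# Main thread ends here; the interpreter exits and the daemon thread dies with it.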
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time

def do_work(id, stop):
    print("I am thread", id)
    while True:
        print("I am thread {} doing something".format(id))
        if stop():
            print("  Exiting loop.")
            break
    print("Thread {}, signing off".format(id))

def main():
    stop_threads = False
    workers = []
    for id in range(0, 3):
        tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
        workers.append(tmp)
        tmp.start()
    time.sleep(3)
    print('main: done sleeping; time to stop the threads.')
    stop_threads = True
    for worker in workers:
        worker.join()
    print('Finis.')

if __name__ == '__main__':
    main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package, is to use the multiprocessing package. Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Be careful when using it together with a Queue or a Pipe, as it may corrupt the data in the Queue/Pipe.
Note that multiprocessing.Event and multiprocessing.Semaphore work in exactly the same way as threading.Event and threading.Semaphore, respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use daemon even with multiprocessing. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please, note that sys.exit() and os.kill() are not choices.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes

def terminate_thread(thread):
    """Terminates a python thread from another thread.

    :param thread: a threading.Thread instance
    """
    if not thread.isAlive():
        return

    exc = ctypes.py_object(SystemExit)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), exc)
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
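To illustrate what is at stake, here is a small sketch (my own example): the finally block is the only thing guaranteeing the lock is released, and a forcibly killed thread can die before reaching it, leaving every other waiter blocked forever.

import threading

shared_lock = threading.Lock()

def worker():
    shared_lock.acquire()
    try:
        pass  # ... work with the shared resource ...
    finally:
        shared_lock.release()  # never runs if the thread is killed mid-work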
If you are explicitly calling time.sleep() as part of your thread (say, polling some external service), an improvement upon Philippe's method is to use the timeout in the event's wait() method wherever you would otherwise sleep().
For example:
import threading

class KillableThread(threading.Thread):
    def __init__(self, sleep_interval=1):
        super().__init__()
        self._kill = threading.Event()
        self._interval = sleep_interval

    def run(self):
        while True:
            print("Do Something")

            # If no kill signal is set, sleep for the interval,
            # If kill signal comes in while sleeping, immediately
            # wake up and handle
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break

        print("Killing Thread")

    def kill(self):
        self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program in longer sleep intervals, the thread is stopped almost immediately (when you would otherwise be sleep()ing), and, in my opinion, the code for handling the exit is significantly simpler.
You can kill a thread by installing a trace into it that will exit the thread. See the attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
A way could be to introduce a "try" block into the thread's cycle and to throw an exception when you want to stop the thread (for example a break/return/... that stops your for/while/...).
I've used this on my app and it works...
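One possible reading of this suggestion (my own sketch; the exception and flag names are made up): raise a dedicated exception from inside the loop when a stop is requested, and let a surrounding try end the thread cleanly.

import threading
import time

class StopRequested(Exception):
    pass

stop_requested = threading.Event()

def worker():
    try:
        for item in range(1000000):
            if stop_requested.is_set():
                raise StopRequested()   # breaks out of arbitrarily nested loops
            time.sleep(0.01)            # ... process item ...
    except StopRequested:
        print("worker stopped cleanly")

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
stop_requested.set()
t.join()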
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time

class StopThread(StopIteration):
    pass

threading.SystemExit = SystemExit, StopThread

class Thread2(threading.Thread):
    def stop(self):
        self.__stop = True

    def _bootstrap(self):
        if threading._trace_hook is not None:
            raise ValueError('Cannot run thread with tracing!')
        self.__stop = False
        sys.settrace(self.__trace)
        super()._bootstrap()

    def __trace(self, frame, event, arg):
        if self.__stop:
            raise StopThread()
        return self.__trace

class Thread3(threading.Thread):
    def _bootstrap(self, stop_thread=False):
        def stop():
            nonlocal stop_thread
            stop_thread = True
        self.stop = stop

        def tracer(*_):
            if stop_thread:
                raise StopThread()
            return tracer
        sys.settrace(tracer)
        super()._bootstrap()

###############################################################################

def main():
    test1 = Thread2(target=printer)
    test1.start()
    time.sleep(1)
    test1.stop()
    test1.join()
    test2 = Thread2(target=speed_test)
    test2.start()
    time.sleep(1)
    test2.stop()
    test2.join()
    test3 = Thread3(target=speed_test)
    test3.start()
    time.sleep(1)
    test3.stop()
    test3.join()

def printer():
    while True:
        print(time.time() % 1)
        time.sleep(0.1)

def speed_test(count=0):
    try:
        while True:
            count += 1
    except StopThread:
        print('Count =', count)

if __name__ == '__main__':
    main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit

def do_work():
    i = 0

    @atexit.register
    def goodbye():
        print("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
              (i, threading.currentThread().ident))

    while True:
        print(i)
        i += 1
        time.sleep(1)

t = threading.Thread(target=do_work)
t.daemon = True
t.start()

def after_timeout():
    print("KILL MAIN THREAD: %s" % threading.currentThread().ident)
    raise SystemExit

threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
If you read the Python source (Modules/threadmodule.c and Python/thread_pthread.h), you can see that Thread.ident is a pthread_t, so you can do anything pthread can do in Python by using libpthread.
The following workaround can be used to kill a thread:

import thread  # named _thread in Python 3

kill_threads = False

def doSomething():
    global kill_threads
    while True:
        if kill_threads:
            thread.exit()
        ......
        ......

thread.start_new_thread(doSomething, ())
This can be used even for terminating threads whose code is written in another module, from the main thread. We can declare a global variable in that module and use it to terminate the thread(s) spawned in that module.
I usually use this to terminate all the threads at program exit. This might not be the perfect way to terminate threads, but it could help.
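A sketch of that cross-module idea (my own example; the module name worker_module is hypothetical):

# worker_module.py
import time

kill_threads = False   # module-level flag visible to every thread spawned from this module

def worker():
    while not kill_threads:
        time.sleep(0.1)  # ... the module's actual work ...

# main.py
# import threading, worker_module
# t = threading.Thread(target=worker_module.worker)
# t.start()
# ...
# worker_module.kill_threads = True   # setting the module attribute stops the thread
# t.join()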
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes

def kill_thread(thread):
    """
    thread: a threading.Thread object
    """
    thread_id = thread.ident
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
    if res > 1:
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
        print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add: if you read the official documentation of Python's threading library, it recommends avoiding "daemonic" threads when you don't want threads to end abruptly, and using the flag that Paolo Rovelli mentioned instead.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends on your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check process status and terminate() to finish them (you also avoid GIL problems). But you can sometimes run into more problems when you execute your code on Windows.
And always remember that if you have live non-daemon threads, the Python interpreter keeps running in order to wait for them. (Because of this, daemonic threads can help if it doesn't matter that they end abruptly.)
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading module's functionality -- allows one thread to raise exceptions in the context of another thread. By raising SystemExit, you can finally kill Python threads.
import threading
import ctypes

def _async_raise(tid, excobj):
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class Thread(threading.Thread):
    def raise_exc(self, excobj):
        assert self.isAlive(), "thread must be started"
        for tid, tobj in threading._active.items():
            if tobj is self:
                _async_raise(tid, excobj)
                return

        # the thread was alive when we entered the loop, but was not found
        # in the dict, hence it must have been already terminated. should we raise
        # an exception here? silently ignore?

    def terminate(self):
        # must raise the SystemExit type, instead of a SystemExit() instance
        # due to a bug in PyThreadState_SetAsyncExc
        self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing Python bytecode. If your thread calls a native/built-in blocking function, the exception will be raised only when execution returns to the Python code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this feature is not likely to be implementation-agnostic, it may be kept unexposed.
Assuming that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread

def doit(id=0):
    doit.stop = 0
    print("start id:%d" % id)
    while 1:
        time.sleep(1)
        print(".")
        if doit.stop == id:
            doit.stop = 0
            break
    print("end thread %d" % id)

t5 = Thread(target=doit, args=(5,))
t6 = Thread(target=doit, args=(6,))
t5.start(); t6.start()
time.sleep(2)
doit.stop = 5  # kill t5
time.sleep(2)
doit.stop = 6  # kill t6
The nice thing here is that you can have multiple instances of the same and of different functions, and stop them all by functionname.stop.
If you want to have only one thread of the function, then you don't need to remember the id. Just stop when doit.stop > 0.
Just to build on @SCB's idea (which was exactly what I needed), here is a KillableThread subclass with a customized function:
import time
from threading import Thread, Event

class KillableThread(Thread):
    def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
        super().__init__(None, target, name, args, kwargs)
        self._kill = Event()
        self._interval = sleep_interval
        print(self._target)

    def run(self):
        while True:
            # Call custom function with arguments
            self._target(*self._args)

            # If no kill signal is set, sleep for the interval,
            # If kill signal comes in while sleeping, immediately
            # wake up and handle
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break

        print("Killing Thread")

    def kill(self):
        self._kill.set()

if __name__ == '__main__':
    def print_msg(msg):
        print(msg)

    t = KillableThread(10, print_msg, args=("hello world",))  # args must be a tuple, hence the trailing comma
    t.start()
    time.sleep(6)
    print("About to kill thread")
    t.kill()
Naturally, as with @SCB's version, the thread doesn't wait for a new loop iteration in order to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting 4 more seconds for the thread to complete (since we have already slept for 6 seconds).
The second argument to the KillableThread constructor is your custom function (print_msg here). The args argument holds the arguments that will be used when calling the function (("hello world",) here).
Python version: 3.8
This solution uses a daemon thread to execute the desired work. If we want the daemon thread to terminate, all we need to do is make the parent thread exit; the system will then terminate the daemon thread that the parent thread created.
It also supports coroutines and coroutine functions.
def main():
    start_time = time.perf_counter()
    t1 = ExitThread(time.sleep, (10,), debug=False)
    t1.start()
    time.sleep(0.5)
    t1.exit()
    try:
        print(t1.result_future.result())
    except concurrent.futures.CancelledError:
        pass
    end_time = time.perf_counter()
    print(f"time cost {end_time - start_time:0.2f}")
Below is the ExitThread source code:
import concurrent.futures
import threading
import typing
import asyncio

class _WorkItem(object):
    """ concurrent\futures\thread.py
    """
    def __init__(self, future, fn, args, kwargs, *, debug=None):
        self._debug = debug
        self.future = future
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def run(self):
        if self._debug:
            print("ExitThread._WorkItem run")
        if not self.future.set_running_or_notify_cancel():
            return
        try:
            coroutine = None
            if asyncio.iscoroutinefunction(self.fn):
                coroutine = self.fn(*self.args, **self.kwargs)
            elif asyncio.iscoroutine(self.fn):
                coroutine = self.fn
            if coroutine is None:
                result = self.fn(*self.args, **self.kwargs)
            else:
                result = asyncio.run(coroutine)
            if self._debug:
                print("_WorkItem done")
        except BaseException as exc:
            self.future.set_exception(exc)
            # Break a reference cycle with the exception 'exc'
            self = None
        else:
            self.future.set_result(result)

class ExitThread:
    """ Like a stoppable thread

    Using coroutine for target then exit before running may cause RuntimeWarning.
    """

    def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
                 , args=(), kwargs={}, *, daemon=None, debug=None):
        #
        self._debug = debug
        self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
                                               , daemon=daemon)
        self._child_daemon_thread = None
        self.result_future = concurrent.futures.Future()
        self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
        self._parent_thread_exit_lock = threading.Lock()
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock_released = False  # When done it will be True
        self._started = False
        self._exited = False
        self.result_future.add_done_callback(self._release_parent_thread_exit_lock)

    def _parent_thread_run(self):
        self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
                                                     , name="ExitThread_child_daemon_thread"
                                                     , daemon=True)
        self._child_daemon_thread.start()
        # Block manager thread
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock.release()
        if self._debug:
            print("ExitThread._parent_thread_run exit")

    def _release_parent_thread_exit_lock(self, _future):
        if self._debug:
            print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
        if not self._parent_thread_exit_lock_released:
            self._parent_thread_exit_lock_released = True
            self._parent_thread_exit_lock.release()

    def _child_daemon_thread_run(self):
        self._workItem.run()

    def start(self):
        if self._debug:
            print(f"ExitThread.start {self._started}")
        if not self._started:
            self._started = True
            self._parent_thread.start()

    def exit(self):
        if self._debug:
            print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
        if self._parent_thread_exit_lock_released:
            return
        if not self._exited:
            self._exited = True
            if not self.result_future.cancel():
                if self.result_future.running():
                    self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in @Kozyarchuk's answer, installing a trace works. Since that answer contained no code, here is a working, ready-to-use example:
import sys, threading, time

class TraceThread(threading.Thread):
    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        self.killed = False

    def start(self):
        self._run = self.run
        self.run = self.settrace_and_run
        threading.Thread.start(self)

    def settrace_and_run(self):
        sys.settrace(self.globaltrace)
        self._run()

    def globaltrace(self, frame, event, arg):
        return self.localtrace if event == 'call' else None

    def localtrace(self, frame, event, arg):
        if self.killed and event == 'line':
            raise SystemExit()
        return self.localtrace

def f():
    while True:
        print('1')
        time.sleep(2)
        print('2')
        time.sleep(2)
        print('3')
        time.sleep(2)

t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)

pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, and events is the sanest and securest way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work. This is done by equipping the thread with a ØMQ socket and polling on that socket for a message saying that it should cancel.
The link also provides an example on multi-threaded python code with ØMQ.
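A minimal sketch of that idea (my own example; assumes the pyzmq package and uses an in-process PAIR socket as the cancel channel):

import threading
import zmq

context = zmq.Context.instance()

def worker():
    sock = context.socket(zmq.PAIR)
    sock.connect("inproc://stop")
    poller = zmq.Poller()
    poller.register(sock, zmq.POLLIN)
    while True:
        # ... do a chunk of work here ...
        events = dict(poller.poll(timeout=100))  # check for a cancel message every 100 ms
        if sock in events:
            sock.recv()          # consume the cancel message
            break
    sock.close()

stop_sock = context.socket(zmq.PAIR)
stop_sock.bind("inproc://stop")   # bind before the worker connects

t = threading.Thread(target=worker)
t.start()
# ... later, tell the worker to cancel:
stop_sock.send(b"stop")
t.join()
stop_sock.close()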
This seems to work with pywin32 on windows 7
my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
If you really need the ability to kill a sub-task, use an alternate implementation. multiprocessing and gevent both support indiscriminately killing a "thread".
Python's threading does not support cancellation. Do not even try. Your code is very likely to deadlock, corrupt or leak memory, or have other unintended "interesting" hard-to-debug effects which happen rarely and nondeterministically.
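For example, a small gevent sketch (my own example; assumes the gevent package is installed). Greenlet.kill() raises GreenletExit inside the greenlet, which ends it at its next cooperative switch point.

import gevent

def work():
    while True:
        gevent.sleep(1)   # cooperative sleep; never returns on its own

g = gevent.spawn(work)
gevent.sleep(3)
g.kill()    # raises GreenletExit in the greenlet
g.join()
print("greenlet dead:", g.dead)   # True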
You can execute your command in a subprocess and then kill it using its process id.
I needed to sync between two threads, one of which doesn't return by itself.
import os
import signal
import subprocess
from threading import Thread

processIds = []

def executeRecord(command):
    print(command)
    process = subprocess.Popen(command, stdout=subprocess.PIPE)
    processIds.append(process.pid)
    print(processIds[0])
    # Command that doesn't return by itself
    process.stdout.read().decode("utf-8")
    return

def recordThread(command, timeOut):
    thread = Thread(target=executeRecord, args=(command,))
    thread.start()
    thread.join(timeOut)
    os.kill(processIds.pop(), signal.SIGINT)
    return
The simplest way is this:
from threading import Thread
from time import sleep

def do_something():
    global thread_work
    while thread_work:
        print('doing something')
        sleep(5)
    print('Thread stopped')

thread_work = True
Thread(target=do_something).start()
sleep(5)
thread_work = False
This is a bad answer, see the comments
Here's how to do it:
from threading import *

...

for thread in enumerate():
    if thread.isAlive():
        try:
            thread._Thread__stop()
        except:
            print(str(thread.getName()) + ' could not be terminated')
Give it a few seconds then your thread should be stopped. Check also the thread._Thread__delete() method.
I'd recommend a thread.quit() method for convenience. For example, if you have a socket in your thread, I'd recommend creating a quit() method in your socket-handle class that terminates the socket and then runs thread._Thread__stop() inside your quit().
Is it possible to get a ThreadPoolExecutor to wait for all its futures and their add_done_callback() functions to complete without having to call .shutdown(wait=True)? The following code snippet illustrates the essence of what I'm trying to accomplish, which is to reuse the thread pool between iterations of the outer loop.
from concurrent.futures import ThreadPoolExecutor, wait
import time

def proc_func(n):
    return n + 1

def create_callback_func(fid, sleep_time):
    def callback(future):
        time.sleep(sleep_time)
        fid.write(str(future.result()))
        return
    return callback

num_workers = 4
num_files_write = 3
num_tasks = 8
sleep_time = 1

pool = ThreadPoolExecutor(max_workers=num_workers)

for n in range(num_files_write):
    fid = open(f'test{n}.txt', 'w')
    futs = []
    callback_func = create_callback_func(fid, sleep_time)
    for t in range(num_tasks):
        fut = pool.submit(proc_func, n)
        fut.add_done_callback(callback_func)
        futs.append(fut)
    wait(futs)
    fid.close()

pool.shutdown(wait=True)
Running this code throws a bunch of ValueError: I/O operation on closed file. errors, and the three files that get written have contents:
test0.txt -> 1111
test1.txt -> 2222
test2.txt -> 3333
Clearly this is wrong and there should be eight of each numeral. If I create and shutdown a separate ThreadPoolExecutor for each file, then the correct result is achieved. So I know that the Executor has the ability to properly wait for all the callbacks to finish, but can I tell it to do so without shutting it down?
I'm afraid that cannot be done and you are "misusing" the callback.
The primary purpose of the callback is to notify that the scheduled work has been done.
The internal future states are PENDING -> RUNNING -> FINISHED (disregarding cancellations for brevity). When the FINISHED state is reached, the callbacks are invoked, but there is no next state when they finish. That's why it is not possible to synchronize with that event.
The core of the execution of a submitted function in one of the available threads is (simplified):
try:
result = self.fn(*self.args, **self.kwargs)
except BaseException as exc:
self.future.set_exception(exc)
else:
self.future.set_result(result)
where both set_exception and set_result look like this (very simplified):
... save the result/exception
self._state = FINISHED
... wakeup all waiters
self._invoke_callbacks() # this is the last statement
The future is in FINISHED, i.e. "done" state when the "done" callback is called. It would not make sense to notify that the work is done before marking it done.
As you noticed already, in your code:
wait(futs)
fid.close()
the wait returns, the file gets closed, but the callback is not finished yet and fails when attempting to write to a closed file.
The second question is why shutdown(wait=True) works. Simply because it waits for all threads:
if wait:
    for t in self._threads:
        t.join()
Those threads also execute the callbacks (see the code snippets above). That's why the callback execution must be finished by the time the threads have finished.
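A small demonstration of that ordering (my own sketch, not part of the original answer): wait() returns as soon as the future is FINISHED, while its done-callback may still be running; joining the worker threads via shutdown() is what guarantees the callback has completed.

from concurrent.futures import ThreadPoolExecutor, wait
import time

def slow_callback(future):
    print("callback sees done() =", future.done())   # already True here
    time.sleep(1)
    print("callback finished")

pool = ThreadPoolExecutor(max_workers=1)
fut = pool.submit(lambda: 42)
fut.add_done_callback(slow_callback)
wait([fut])
print("wait() returned")          # typically printed before "callback finished"
pool.shutdown(wait=True)          # joining the worker thread also waits for the callback
print("shutdown() returned")      # printed after "callback finished"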
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self, *args, **kwargs):
super(StoppableThread, self).__init__(*args, **kwargs)
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
def _async_raise(tid, exctype):
'''Raises an exception in the threads with id tid'''
if not inspect.isclass(exctype):
raise TypeError("Only types can be raised (not instances)")
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
ctypes.py_object(exctype))
if res == 0:
raise ValueError("invalid thread id")
elif res != 1:
# "if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"
ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
raise SystemError("PyThreadState_SetAsyncExc failed")
class ThreadWithExc(threading.Thread):
'''A thread class that supports raising an exception in the thread from
another thread.
'''
def _get_my_tid(self):
"""determines this (self's) thread id
CAREFUL: this function is executed in the context of the caller
thread, to get the identity of the thread represented by this
instance.
"""
if not self.isAlive():
raise threading.ThreadError("the thread is not active")
# do we have it cached?
if hasattr(self, "_thread_id"):
return self._thread_id
# no, look for it in the _active dict
for tid, tobj in threading._active.items():
if tobj is self:
self._thread_id = tid
return tid
# TODO: in python 2.6, there's a simpler way to do: self.ident
raise AssertionError("could not determine the thread's id")
def raiseExc(self, exctype):
"""Raises the given exception type in the context of this thread.
If the thread is busy in a system call (time.sleep(),
socket.accept(), ...), the exception is simply ignored.
If you are sure that your exception should terminate the thread,
one way to ensure that it works is:
t = ThreadWithExc( ... )
...
t.raiseExc( SomeException )
while t.isAlive():
time.sleep( 0.1 )
t.raiseExc( SomeException )
If the exception is to be caught by the thread, you need a way to
check that your thread has caught it.
CAREFUL: this function is executed in the context of the
caller thread, to raise an exception in the context of the
thread represented by this instance.
"""
_async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
A multiprocessing.Process can p.terminate()
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O
The conversion is trivial: In related code replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue and add the required calls of p.terminate() to your parent process which wants to kill its child p
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program you can set the thread as a "daemon". see
Thread.daemon
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time
def do_work(id, stop):
print("I am thread", id)
while True:
print("I am thread {} doing something".format(id))
if stop():
print(" Exiting loop.")
break
print("Thread {}, signing off".format(id))
def main():
stop_threads = False
workers = []
for id in range(0,3):
tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
workers.append(tmp)
tmp.start()
time.sleep(3)
print('main: done sleeping; time to stop the threads.')
stop_threads = True
for worker in workers:
worker.join()
print('Finis.')
if __name__ == '__main__':
main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package , is to use the
multiprocessing package . Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Pay attention to use it while using a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe)
Note that the multiprocessing.Event and the multiprocessing.Semaphore work exactly in the same way of the threading.Event and the threading.Semaphore respectively. In fact, the first ones are clones of the latters.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) will finish its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use daemon even with multiprocessing. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please, note that sys.exit() and os.kill() are not choices.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
"""Terminates a python thread from another thread.
:param thread: a threading.Thread instance
"""
if not thread.isAlive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
If you are explicitly calling time.sleep() as part of your thread (say polling some external service), an improvement upon Phillipe's method is to use the timeout in the event's wait() method wherever you sleep()
For example:
import threading
class KillableThread(threading.Thread):
def __init__(self, sleep_interval=1):
super().__init__()
self._kill = threading.Event()
self._interval = sleep_interval
def run(self):
while True:
print("Do Something")
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program in longer intervals of sleep, the thread is stopped almost immediately (when you would otherwise be sleep()ing) and in my opinion, the code for handling exit is significantly simpler.
You can kill a thread by installing trace into the thread that will exit the thread. See attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
A way could be to introduce a "try" block into the thread's cycle and to throw an exception when you want to stop the thread (for example a break/return/... that stops your for/while/...).
I've used this on my app and it works...
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time
class StopThread(StopIteration):
pass
threading.SystemExit = SystemExit, StopThread
class Thread2(threading.Thread):
def stop(self):
self.__stop = True
def _bootstrap(self):
if threading._trace_hook is not None:
raise ValueError('Cannot run thread with tracing!')
self.__stop = False
sys.settrace(self.__trace)
super()._bootstrap()
def __trace(self, frame, event, arg):
if self.__stop:
raise StopThread()
return self.__trace
class Thread3(threading.Thread):
def _bootstrap(self, stop_thread=False):
def stop():
nonlocal stop_thread
stop_thread = True
self.stop = stop
def tracer(*_):
if stop_thread:
raise StopThread()
return tracer
sys.settrace(tracer)
super()._bootstrap()
###############################################################################
def main():
test1 = Thread2(target=printer)
test1.start()
time.sleep(1)
test1.stop()
test1.join()
test2 = Thread2(target=speed_test)
test2.start()
time.sleep(1)
test2.stop()
test2.join()
test3 = Thread3(target=speed_test)
test3.start()
time.sleep(1)
test3.stop()
test3.join()
def printer():
while True:
print(time.time() % 1)
time.sleep(0.1)
def speed_test(count=0):
try:
while True:
count += 1
except StopThread:
print('Count =', count)
if __name__ == '__main__':
main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit
def do_work():
i = 0
#atexit.register
def goodbye():
print ("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
(i, threading.currentThread().ident))
while True:
print i
i += 1
time.sleep(1)
t = threading.Thread(target=do_work)
t.daemon = True
t.start()
def after_timeout():
print "KILL MAIN THREAD: %s" % threading.currentThread().ident
raise SystemExit
threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the python source (Modules/threadmodule.c and Python/thread_pthread.h) you can see the Thread.ident is an pthread_t type, so you can do anything pthread can do in python use libpthread.
Following workaround can be used to kill a thread:
kill_threads = False
def doSomething():
global kill_threads
while True:
if kill_threads:
thread.exit()
......
......
thread.start_new_thread(doSomething, ())
This can be used even for terminating threads, whose code is written in another module, from main thread. We can declare a global variable in that module and use it to terminate thread/s spawned in that module.
I usually use this to terminate all the threads at the program exit. This might not be the perfect way to terminate thread/s but could help.
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
"""
thread: a threading.Thread object
"""
thread_id = thread.ident
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add is that if you read official documentation in threading lib Python, it's recommended to avoid use of "demonic" threads, when you don't want threads end abruptly, with the flag that Paolo Rovelli mentioned.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends of your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check process status and "terminate" for finish them (Also you avoid GIL problems). But you can find more problems, sometimes, when you execute your code in Windows.
And always remember that if you have "live threads", the Python interpreter will be running for wait them. (Because of this daemonic can help you if don't matter abruptly ends).
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading's module functionality --
allows one thread to raise exceptions in the context of another
thread. By raising SystemExit, you can finally kill python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")
class Thread(threading.Thread):
def raise_exc(self, excobj):
assert self.isAlive(), "thread must be started"
for tid, tobj in threading._active.items():
if tobj is self:
_async_raise(tid, excobj)
return
# the thread was alive when we entered the loop, but was not found
# in the dict, hence it must have been already terminated. should we raise
# an exception here? silently ignore?
def terminate(self):
# must raise the SystemExit type, instead of a SystemExit() instance
# due to a bug in PyThreadState_SetAsyncExc
self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing python bytecode. If your thread calls a native/built-in blocking function, the
exception will be raised only when execution returns to the python
code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this
feature is not likely to be implementation-agnostic, it may be kept
unexposed.
Asuming, that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread
def doit(id=0):
doit.stop=0
print("start id:%d"%id)
while 1:
time.sleep(1)
print(".")
if doit.stop==id:
doit.stop=0
break
print("end thread %d"%id)
t5=Thread(target=doit, args=(5,))
t6=Thread(target=doit, args=(6,))
t5.start() ; t6.start()
time.sleep(2)
doit.stop =5 #kill t5
time.sleep(2)
doit.stop =6 #kill t6
The nice thing is here, you can have multiple of same and different functions, and stop them all by functionname.stop
If you want to have only one thread of the function then you don't need to remember the id. Just stop, if doit.stop > 0.
Just to build up on #SCB's idea (which was exactly what I needed) to create a KillableThread subclass with a customized function:
import time
from threading import Thread, Event

class KillableThread(Thread):
    def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs=None):
        super().__init__(None, target, name, args, kwargs)
        self._kill = Event()
        self._interval = sleep_interval
        print(self._target)

    def run(self):
        while True:
            # Call the custom function with its arguments
            self._target(*self._args)
            # If no kill signal is set, sleep for the interval;
            # if the kill signal comes in while sleeping, immediately
            # wake up and handle it
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break
        print("Killing Thread")

    def kill(self):
        self._kill.set()

if __name__ == '__main__':
    def print_msg(msg):
        print(msg)

    t = KillableThread(10, print_msg, args=("hello world",))  # args must be a tuple, hence the trailing comma
    t.start()
    time.sleep(6)
    print("About to kill thread")
    t.kill()
Naturally, as with #SCB's answer, the thread doesn't have to wait for a whole new loop iteration to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting another 4 seconds for the sleep interval to elapse (since we have already slept for 6 of the 10 seconds).
The second argument to the KillableThread constructor is your custom function (print_msg here). The args argument contains the arguments that will be used when calling the function (("hello world",) here).
Python version: 3.8
This uses a daemon thread to execute the desired work. When we want the daemon thread to terminate, all we need to do is make its parent thread exit; the interpreter then terminates the daemon thread that the parent thread created.
It also supports coroutines and coroutine functions.
import time
import concurrent.futures

def main():
    start_time = time.perf_counter()
    t1 = ExitThread(time.sleep, (10,), debug=False)
    t1.start()
    time.sleep(0.5)
    t1.exit()
    try:
        print(t1.result_future.result())
    except concurrent.futures.CancelledError:
        pass
    end_time = time.perf_counter()
    print(f"time cost {end_time - start_time:0.2f}")
Below is the ExitThread source code:
import concurrent.futures
import threading
import typing
import asyncio

class _WorkItem(object):
    """Adapted from concurrent/futures/thread.py
    """
    def __init__(self, future, fn, args, kwargs, *, debug=None):
        self._debug = debug
        self.future = future
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def run(self):
        if self._debug:
            print("ExitThread._WorkItem run")
        if not self.future.set_running_or_notify_cancel():
            return
        try:
            coroutine = None
            if asyncio.iscoroutinefunction(self.fn):
                coroutine = self.fn(*self.args, **self.kwargs)
            elif asyncio.iscoroutine(self.fn):
                coroutine = self.fn
            if coroutine is None:
                result = self.fn(*self.args, **self.kwargs)
            else:
                result = asyncio.run(coroutine)
            if self._debug:
                print("_WorkItem done")
        except BaseException as exc:
            self.future.set_exception(exc)
            # Break a reference cycle with the exception 'exc'
            self = None
        else:
            self.future.set_result(result)

class ExitThread:
    """Like a stoppable thread.
    Using a coroutine as target and then exiting before it runs may cause a RuntimeWarning.
    """
    def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
                 , args=(), kwargs={}, *, daemon=None, debug=None):
        self._debug = debug
        self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
                                               , daemon=daemon)
        self._child_daemon_thread = None
        self.result_future = concurrent.futures.Future()
        self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
        self._parent_thread_exit_lock = threading.Lock()
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock_released = False  # When done it will be True
        self._started = False
        self._exited = False
        self.result_future.add_done_callback(self._release_parent_thread_exit_lock)

    def _parent_thread_run(self):
        self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
                                                     , name="ExitThread_child_daemon_thread"
                                                     , daemon=True)
        self._child_daemon_thread.start()
        # Block the parent (manager) thread until the result future is done
        self._parent_thread_exit_lock.acquire()
        self._parent_thread_exit_lock.release()
        if self._debug:
            print("ExitThread._parent_thread_run exit")

    def _release_parent_thread_exit_lock(self, _future):
        if self._debug:
            print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
        if not self._parent_thread_exit_lock_released:
            self._parent_thread_exit_lock_released = True
            self._parent_thread_exit_lock.release()

    def _child_daemon_thread_run(self):
        self._workItem.run()

    def start(self):
        if self._debug:
            print(f"ExitThread.start {self._started}")
        if not self._started:
            self._started = True
            self._parent_thread.start()

    def exit(self):
        if self._debug:
            print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
        if self._parent_thread_exit_lock_released:
            return
        if not self._exited:
            self._exited = True
            if not self.result_future.cancel():
                if self.result_future.running():
                    self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in #Kozyarchuk's answer, installing a trace works. Since that answer contained no code, here is a working, ready-to-use example:
import sys, threading, time

class TraceThread(threading.Thread):
    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        self.killed = False

    def start(self):
        self._run = self.run
        self.run = self.settrace_and_run
        threading.Thread.start(self)

    def settrace_and_run(self):
        sys.settrace(self.globaltrace)
        self._run()

    def globaltrace(self, frame, event, arg):
        return self.localtrace if event == 'call' else None

    def localtrace(self, frame, event, arg):
        if self.killed and event == 'line':
            raise SystemExit()
        return self.localtrace

def f():
    while True:
        print('1')
        time.sleep(2)
        print('2')
        time.sleep(2)
        print('3')
        time.sleep(2)

t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
Result:
0
1
2
3
4
[14]+ Stopped
Note that a stop signal like SIGTSTP suspends the whole process (hence the shell's "Stopped" job-control message above), not just the target thread; the process can be resumed with SIGCONT.
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, and events is the sanest and most secure way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work. This would be done by equipping the thread with a ØMQ socket and polling that socket for a message saying that it should cancel.
The link also provides an example on multi-threaded python code with ØMQ.
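A minimal sketch of that idea, assuming the pyzmq package; the inproc endpoint name, the poll timeout, and the worker loop are made up for illustration:
import threading
import zmq

context = zmq.Context.instance()

def worker():
    # PAIR socket used only to receive a cancel message from the parent thread
    control = context.socket(zmq.PAIR)
    control.connect("inproc://cancel")
    poller = zmq.Poller()
    poller.register(control, zmq.POLLIN)
    while True:
        # ... do a small chunk of work here ...
        events = dict(poller.poll(timeout=100))  # milliseconds
        if control in events:
            print("cancel requested, cleaning up")
            break
    control.close()

control = context.socket(zmq.PAIR)
control.bind("inproc://cancel")  # bind before the worker connects
t = threading.Thread(target=worker)
t.start()
# ... later, ask the worker to stop:
control.send(b"cancel")
t.join()
control.close()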
This seems to work with pywin32 on Windows 7:
import threading

my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
(Note that _Thread__stop is a private Python 2 method; it only marks the thread as stopped rather than killing its code, and it no longer exists in Python 3.)
If you really need the ability to kill a sub-task, use an alternate implementation. multiprocessing and gevent both support indiscriminately killing a "thread".
Python's threading does not support cancellation. Do not even try. Your code is very likely to deadlock, corrupt or leak memory, or have other unintended "interesting" hard-to-debug effects which happen rarely and nondeterministically.
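For example, a minimal sketch with multiprocessing, where the worker and the timings are made up for illustration:
import time
from multiprocessing import Process

def worker():
    # stands in for a task that never checks any stop flag
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=worker)
    p.start()
    time.sleep(3)
    p.terminate()  # unlike a thread, a process can be killed from the outside
    p.join()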
You can execute your command in a process and then kill it using the process id.
I needed to synchronize two threads, one of which doesn't return by itself.
import os
import signal
import subprocess
from threading import Thread

processIds = []

def executeRecord(command):
    print(command)
    process = subprocess.Popen(command, stdout=subprocess.PIPE)
    processIds.append(process.pid)
    print(processIds[0])
    # Command that doesn't return by itself
    process.stdout.read().decode("utf-8")

def recordThread(command, timeOut):
    thread = Thread(target=executeRecord, args=(command,))
    thread.start()
    thread.join(timeOut)
    os.kill(processIds.pop(), signal.SIGINT)
The simplest way is this:
from threading import Thread
from time import sleep

def do_something():
    global thread_work
    while thread_work:
        print('doing something')
        sleep(5)
    print('Thread stopped')

thread_work = True
Thread(target=do_something).start()
sleep(5)
thread_work = False
This is a bad answer, see the comments
Here's how to do it:
from threading import *
...
for thread in enumerate():
    if thread.isAlive():
        try:
            thread._Thread__stop()
        except:
            print(str(thread.getName()) + ' could not be terminated')
Give it a few seconds and your thread should be stopped. Also check the thread._Thread__delete() method.
I'd recommend adding a thread.quit() method for convenience. For example, if you have a socket in your thread, create a quit() method in your socket-handler class that terminates the socket and then calls thread._Thread__stop() inside your quit(); a sketch of that idea follows.
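A rough sketch of that quit() idea, assuming a plain TCP-reading thread; here quit() shuts the socket down so the blocking recv() returns and the thread exits on its own, instead of relying on the private _Thread__stop() (which no longer exists in Python 3). The class name, host, and port are made up for illustration.
import socket
import threading

class SocketWorker(threading.Thread):
    def __init__(self, host, port):
        super().__init__()
        self._sock = socket.create_connection((host, port))
        self._quitting = False

    def run(self):
        while not self._quitting:
            try:
                data = self._sock.recv(4096)  # blocks until data arrives or the socket is shut down
            except OSError:
                break  # socket was shut down / closed by quit()
            if not data:
                break  # peer closed, or recv unblocked by shutdown
            print("received %d bytes" % len(data))

    def quit(self):
        self._quitting = True
        try:
            self._sock.shutdown(socket.SHUT_RDWR)  # unblocks a pending recv()
        except OSError:
            pass
        self._sock.close()

# Hypothetical usage:
# w = SocketWorker("example.com", 12345)
# w.start()
# ...
# w.quit()
# w.join()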
In this documentation ( https://pymotw.com/3/concurrent.futures/ ) it says:
"The ProcessPoolExecutor works in the same way as ThreadPoolExecutor, but uses processes instead of threads. This allows CPU-intensive operations to use a separate CPU and not be blocked by the CPython interpreter’s global interpreter lock."
This sounds great! It also says:
"If something happens to one of the worker processes to cause it to exit unexpectedly, the ProcessPoolExecutor is considered “broken” and will no longer schedule tasks."
This sounds bad :( So I guess my question is: what is considered "unexpectedly"? Does that just mean the exit signal is not 1? Can I safely exit the worker and still keep processing the queue? The example is as follows:
from concurrent import futures
import os
import signal

with futures.ProcessPoolExecutor(max_workers=2) as ex:
    print('getting the pid for one worker')
    f1 = ex.submit(os.getpid)
    pid1 = f1.result()

    print('killing process {}'.format(pid1))
    os.kill(pid1, signal.SIGHUP)

    print('submitting another task')
    f2 = ex.submit(os.getpid)
    try:
        pid2 = f2.result()
    except futures.process.BrokenProcessPool as e:
        print('could not start new tasks: {}'.format(e))
I haven't seen it happen in real life, but from the code it looks like, in that case, the set of ready objects returned by wait does not contain the result_queue's reader file descriptor.
from concurrent.futures.process:
reader = result_queue._reader

while True:
    _add_call_item_to_queue(pending_work_items,
                            work_ids_queue,
                            call_queue)

    sentinels = [p.sentinel for p in processes.values()]
    assert sentinels
    ready = wait([reader] + sentinels)
    if reader in ready:  # <===================================== THIS
        result_item = reader.recv()
    else:
        # Mark the process pool broken so that submits fail right now.
        executor = executor_reference()
        if executor is not None:
            executor._broken = True
            executor._shutdown_thread = True
            executor = None
        # All futures in flight must be marked failed
        for work_id, work_item in pending_work_items.items():
            work_item.future.set_exception(
                BrokenProcessPool(
                    "A process in the process pool was "
                    "terminated abruptly while the future was "
                    "running or pending."
                ))
            # Delete references to object. See issue16284
            del work_item
The wait function is platform-dependent; assuming a Linux OS, it looks like this (from multiprocessing.connection, with all timeout-related code removed):
def wait(object_list, timeout=None):
    '''
    Wait till an object in object_list is ready/readable.
    Returns list of those objects in object_list which are ready/readable.
    '''
    with _WaitSelector() as selector:
        for obj in object_list:
            selector.register(obj, selectors.EVENT_READ)

        while True:
            ready = selector.select(timeout)
            if ready:
                return [key.fileobj for (key, events) in ready]
            else:
                # some timeout code