I have a long process that I've scheduled to run in a thread, because otherwise it will freeze the UI in my wxpython application.
I'm using:
threading.Thread(target=myLongProcess).start()
to start the thread, and it works, but I don't know how to pause and resume the thread. I looked in the Python docs for pause and resume methods, but wasn't able to find them.
Could anyone suggest how I could do this?
I did some speed tests as well; the time between setting the flag and the thread acting on it is pleasantly fast: about 0.00002 seconds on a slow two-processor Linux box.
Example of a thread pause test using an Event's set() and clear() methods:
import threading
import time

# This function gets called by our thread.. so it basically becomes the thread init...
def wait_for_event(e):
    while True:
        print('\tTHREAD: This is the thread speaking, we are Waiting for event to start..')
        event_is_set = e.wait()
        print('\tTHREAD: WHOOOOOO HOOOO WE GOT A SIGNAL : %s' % event_is_set)
        # or for Python >= 3.6
        # print(f'\tTHREAD: WHOOOOOO HOOOO WE GOT A SIGNAL : {event_is_set}')
        e.clear()

# Main code
e = threading.Event()
t = threading.Thread(name='pausable_thread',
                     target=wait_for_event,
                     args=(e,))
t.start()

while True:
    print('MAIN LOOP: still in the main loop..')
    time.sleep(4)
    print('MAIN LOOP: I just set the flag..')
    e.set()
    print('MAIN LOOP: now Im gonna do some processing')
    time.sleep(4)
    print('MAIN LOOP: .. some more processing im doing yeahhhh')
    time.sleep(4)
    print('MAIN LOOP: ok ready, soon we will repeat the loop..')
    time.sleep(2)
There is no method for other threads to forcibly pause a thread (any more than there is for other threads to kill that thread) -- the target thread must cooperate by occasionally checking appropriate "flags" (a threading.Condition might be appropriate for the pause/unpause case).
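As a minimal sketch of that cooperative approach (using a threading.Event here rather than a Condition, since a single pause/unpause flag is enough; the class and method names are only illustrative):

import threading
import time

class PausableWorker:
    def __init__(self):
        self._unpaused = threading.Event()
        self._unpaused.set()  # start in the running state

    def run(self):
        while True:
            self._unpaused.wait()  # blocks only while paused
            time.sleep(0.1)        # stand-in for one unit of real work

    def pause(self):
        self._unpaused.clear()

    def resume(self):
        self._unpaused.set()

The worker only notices a pause between units of work, so keep each unit reasonably short.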
If you're on a Unix-y platform (anything but Windows, basically), you could use multiprocessing instead of threading: it is much more powerful and lets you send signals to the "other process". SIGSTOP unconditionally pauses a process and SIGCONT resumes it. If your process needs to do something right before it pauses, consider also the SIGTSTP signal, which the other process can catch to perform such pre-suspension duties. (There may be ways to obtain the same effect on Windows, but I'm not knowledgeable about them, if any.)
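A minimal sketch of that multiprocessing approach on a POSIX system (the worker function is just a stand-in):

import multiprocessing
import os
import signal
import time

def my_long_process():
    while True:
        print('working...')
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=my_long_process)
    p.start()
    time.sleep(3)
    os.kill(p.pid, signal.SIGSTOP)  # unconditionally pause the child
    time.sleep(3)
    os.kill(p.pid, signal.SIGCONT)  # resume it
    p.terminate()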
You can use signals: http://docs.python.org/library/signal.html#signal.pause
To avoid using signals you could use a token passing system. If you want to pause it from the main UI thread you could probably just use a Queue.Queue object to communicate with it.
Just pop a message onto the queue telling the thread to sleep for a certain amount of time.
Alternatively you could simply continuously push tokens onto the queue from the main UI thread. The worker should just check the queue every N seconds (0.2 or something like that). When there are no tokens to dequeue the worker thread will block. When you want it to start again just start pushing tokens on to the queue from the main thread again.
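A rough sketch of that token-passing variant (the names and timings are illustrative; queue.Queue is the Python 3 spelling of Queue.Queue):

import queue
import threading
import time

tokens = queue.Queue()

def worker():
    while True:
        tokens.get()  # blocks as soon as the main thread stops feeding tokens
        print('did one unit of work')

threading.Thread(target=worker, daemon=True).start()

# Main/UI thread: keep feeding tokens while the worker should run...
for _ in range(10):
    tokens.put(None)
    time.sleep(0.2)
# ...then simply stop putting tokens on the queue to pause the worker.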
The multiprocessing module works fine on Windows. See the documentation here (end of first paragraph):
http://docs.python.org/library/multiprocessing.html
On the wxPython IRC channel, we had a couple fellows trying multiprocessing out and they said it worked. Unfortunately, I have yet to see anyone who has written up a good example of multiprocessing and wxPython.
If you (or anyone else on here) come up with something, please add it to the wxPython wiki page on threading here: http://wiki.wxpython.org/LongRunningTasks
You might want to check that page out regardless as it has several interesting examples using threads and queues.
You might take a look at the Windows API for thread suspension.
As far as I'm aware, there is no POSIX/pthreads equivalent. Furthermore, I cannot ascertain whether thread handles/IDs are made available from Python. There are also potential issues with Python: since its scheduling is done using the native scheduler, it's unlikely to be expecting threads to suspend, particularly if a thread is suspended while holding the GIL, among other possibilities.
I had the same issue. It is more effective to use time.sleep(1800) in the thread loop to pause the thread's execution. For example:
MON, TUE, WED, THU, FRI, SAT, SUN = range(7)  # Enumerate days of the week

Thread 1:

# assumes "import time" and "import traceback" at module level
def run(self):
    while not self.exit:
        try:
            localtime = time.localtime(time.time())
            # Evaluate stock
            if localtime.tm_hour > 16 or localtime.tm_wday > FRI:
                # do something
                pass
            else:
                print('Waiting to evaluate stocks...')
                time.sleep(1800)
        except:
            print(traceback.format_exc())
Thread 2:

def run(self):
    while not self.exit:
        try:
            localtime = time.localtime(time.time())
            if localtime.tm_hour >= 9 and localtime.tm_hour <= 16:
                # do something
                pass
            else:
                print('Waiting to update stocks indicators...')
                time.sleep(1800)
        except:
            print(traceback.format_exc())
Related
I am writing a queue processing application that uses threads to wait for and respond to queue messages delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:
while True:
    pass

or

while True:
    time.sleep(1)
Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?
I would imagine time.sleep() will have less overhead on the system. Using pass will cause the loop to immediately re-evaluate and peg the CPU, whereas using time.sleep will allow the execution to be temporarily suspended.
EDIT: just to prove the point, if you launch the python interpreter and run this:
>>> while True:
...     pass
...
You can watch Python start eating up 90-100% CPU instantly, versus:
>>> import time
>>> while True:
...     time.sleep(1)
...
Which barely even registers on the Activity Monitor (using OS X here but it should be the same for every platform).
Why sleep? You don't want to sleep, you want to wait for the threads to finish.
So
# store the threads you start in a your_threads list, then
for a_thread in your_threads:
    a_thread.join()
See: thread.join
If you are looking for a short, zero-cpu way to loop forever until a KeyboardInterrupt, you can use:
from threading import Event
Event().wait()
Note: Due to a bug, this only works on Python 3.2+. In addition, it appears to not work on Windows. For this reason, while True: sleep(1) might be the better option.
For some background, Event objects are normally used for waiting for long running background tasks to complete:
from threading import Event, Thread
from time import sleep

def do_task():
    sleep(10)
    print('Task complete.')
    event.set()

event = Event()
Thread(target=do_task).start()
event.wait()
print('Continuing...')
Which prints:
Task complete.
Continuing...
signal.pause() is another solution, see https://docs.python.org/3/library/signal.html#signal.pause
Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Not on Windows. (See the Unix man page signal(2).)
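A tiny sketch of that approach (POSIX only; the SIGTERM handler here is just illustrative):

import signal
import sys

def handle_term(signum, frame):
    print('got SIGTERM, exiting')
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)
signal.pause()  # sleeps the process until any signal arrives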
I've always seen/heard that using sleep is the better way to do it. Using sleep will keep your Python interpreter's CPU usage from going wild.
You don't give much context to what you are really doing, but maybe Queue could be used instead of an explicit busy-wait loop? If not, I would assume sleep would be preferable, as I believe it will consume less CPU (as others have already noted).
Maybe this is obvious, but anyway: in a case where you are reading information from blocking sockets, you could have one thread read from the socket and post suitably formatted messages onto a Queue, and then have the rest of your "worker" threads read from that queue; the workers will then block on reading from the queue, with no need for either pass or sleep. A sketch follows.
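Here is a small sketch of that layout (the socket handling is elided; the queue and function names are made up for illustration):

import queue
import threading

messages = queue.Queue()

def reader(sock):
    # One thread owns the blocking socket and feeds the queue.
    while True:
        data = sock.recv(4096)
        if not data:
            break
        messages.put(data)

def worker():
    # Workers block on the queue; no pass/sleep busy-waiting needed.
    while True:
        msg = messages.get()
        print('handling %d bytes' % len(msg))

# threading.Thread(target=reader, args=(some_socket,), daemon=True).start()
# threading.Thread(target=worker, daemon=True).start()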
Running a method as a background thread with sleep in Python:
import threading
import time

class ThreadingExample(object):
    """ Threading example class

    The run() method will be started and it will run in the background
    until the application exits.
    """

    def __init__(self, interval=1):
        """ Constructor

        :type interval: int
        :param interval: Check interval, in seconds
        """
        self.interval = interval

        thread = threading.Thread(target=self.run, args=())
        thread.daemon = True  # Daemonize thread
        thread.start()        # Start the execution

    def run(self):
        """ Method that runs forever """
        while True:
            # Do something
            print('Doing something important in the background')
            time.sleep(self.interval)

example = ThreadingExample()
time.sleep(3)
print('Checkpoint')
time.sleep(2)
print('Bye')
I am very new to threading, and the concepts are still somewhat fuzzy.
But as of now I have a requirement in which I spin up an arbitrary number of threads from my Python program, and my Python program should then indicate to the user running it which threads have finished executing. Below is my first try:
import threading
from threading import Thread
from time import sleep

def exec_thread(n):
    name = threading.current_thread().getName()
    filename = name + ".txt"
    with open(filename, "w+") as file:
        file.write(f"My name is {name} and my main thread is {threading.main_thread()}\n")
        sleep(n)
        file.write(f"{name} exiting\n")

t1 = Thread(name="First", target=exec_thread, args=(10,))
t2 = Thread(name="Second", target=exec_thread, args=(2,))
t1.start()
t2.start()

while len(threading.enumerate()) > 1:
    print("Waiting ... !")
    sleep(5)

print("The threads are done")
So this basically tells me when all the threads are done executing.
But I want to know as soon as any one of my threads has completed execution, so that I can tell the user to check the output file for that thread.
I cannot use thread.join(), since that would block my main program and the user would not know anything until everything is complete, which might take hours. The user wants to know as soon as some results are available.
Now I know that we can check whether individual threads are active by calling thread.isAlive(), but I was hoping for a more elegant solution in which the child threads can somehow communicate with the main thread and say "I am done!"
Many thanks for any answers in advance.
The simplest and most straightforward way to indicate a single thread is "done" is to put the required notification in the thread's implementation method, as the very last step. For example, you could print out a notification to the user.
Or, you could use events, see: https://docs.python.org/3/library/threading.html#event-objects
This is one of the simplest mechanisms for communication between threads: one thread signals an event and other threads wait for it. An event object manages an internal flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is true.
So, the "final act" in your thread implementation would be to set an event object, and your main thread can wait until it's set.
Or, for an even fancier and more flexible mechanism, use queues: https://docs.python.org/3/library/queue.html
Each thread writes an "I'm done" object to the queue when done, and the main thread can read those notifications from the queue in sequence as each thread completes.
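A short sketch of that queue-based notification pattern (the thread names and sleep times are only for illustration):

import queue
import threading
import time

done_q = queue.Queue()

def worker(name, seconds):
    time.sleep(seconds)  # stand-in for the real work
    done_q.put(name)     # final act: report completion

threads = [threading.Thread(target=worker, args=('thread-%d' % i, 3 - i))
           for i in range(3)]
for t in threads:
    t.start()

for _ in threads:
    name = done_q.get()  # blocks until the next thread reports in
    print('%s is done; check its output file' % name)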
import _thread
import time

def test1():
    while True:
        time.sleep(1)
        print('TEST1')

def test2():
    while True:
        time.sleep(3)
        print('TEST2')

try:
    _thread.start_new_thread(test1, ())
    _thread.start_new_thread(test2, ())
except:
    print("ERROR")
How can I stop the two threads, for example in the case of a KeyboardInterrupt?
Because with except KeyboardInterrupt, the threads are still running :/
Important:
The question is about closing threads only with the module _thread!
Is it possible?
There's no way to directly interact with another thread, except for the main thread. While some platforms do offer thread cancel or kill semantics, Python doesn't expose them, and for good reason.1
So, the usual solution is to use some kind of signal to tell everyone to exit. One possibility is a done flag with a Lock around it:
import _thread
import time

done = False
donelock = _thread.allocate_lock()

def test1():
    while True:
        try:
            donelock.acquire()
            if done:
                return
        finally:
            donelock.release()
        time.sleep(1)
        print('TEST1')

_thread.start_new_thread(test1, ())

time.sleep(3)
try:
    donelock.acquire()
    done = True
finally:
    donelock.release()
Of course the same thing is a lot cleaner if you use threading (or a different higher-level API like Qt's threads). Plus, you can use a Condition or Event to make the background threads exit as soon as possible, instead of only after their next sleep finishes.
import threading
import time

done = threading.Event()

def test1():
    while True:
        if done.wait(1):
            return
        print('TEST1')

t1 = threading.Thread(target=test1)
t1.start()

time.sleep(3)
done.set()
The _thread module doesn't have an Event or Condition, of course, but you can always build one yourself, or just borrow one from the threading source.
Or, if you wanted the threads to be killed asynchronously (which obviously isn't safe if they're, e.g., writing files, but if they're just doing computation or downloads or the like that you don't care about if you're canceling, that's fine), threading makes it even easier:
t1 = threading.Thread(target=test1, daemon=True)
As a side note, the behavior you're seeing isn't actually reliable across platforms:
Background threads created with _thread may keep running, or shut down semi-cleanly, or terminate hard. So, when you use _thread in a portable application, you have to write code that can handle any of the three.
KeyboardInterrupt may be delivered to an arbitrary thread rather than the main thread. If it is, it will usually kill that thread, unless you've set up a handler. So, if you're using _thread, you usually want to handle KeyboardInterrupt and call _thread.interrupt_main().
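For example, a minimal sketch of that last point (catching a KeyboardInterrupt that landed on a background thread and forwarding it to the main thread; the worker body is illustrative):

import _thread
import time

def worker():
    try:
        while True:
            time.sleep(1)
            print('working')
    except KeyboardInterrupt:
        # the interrupt landed on this thread; re-raise it in the main thread
        _thread.interrupt_main()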
Also, I don't think your except: is doing what you think it is. That try only covers the start_new_thread calls. If the threads start successfully, the main thread exits the try block and reaches the end of the program. If a KeyboardInterrupt or other exception is raised, the except: isn't going to be triggered. (Also, using a bare except: and not even logging which exception got handled is a really bad idea if you want to be able to understand what your code is doing.) Presumably, on your platform, background threads continue running, and the main thread blocks on them (and probably at the OS level, not the Python level, so there's no code you can write that gets involved there).
If you want your main thread to keep running to make sure it can handle a KeyboardInterrupt and do something with it (but see the caveats above!), you have to give it code to keep running:
try:
    while True:
        time.sleep(1 << 31)
except KeyboardInterrupt:
    # background-thread-killing code goes here.
    pass
1. TerminateThread on Windows makes it impossible to do all the cleanup Python needs to do. pthread_cancel on POSIX systems like Linux and macOS makes it possible, but very difficult. And the semantics are different enough between the two that trying to write a cross-platform wrapper would be a nightmare. Not to mention that Python supports systems (mostly older Unixes) that don't have the full pthread API, or even have a completely different threading API.
I am using the new concurrent.futures module (which also has a Python 2 backport) to do some simple multithreaded I/O. I am having trouble understanding how to cleanly kill tasks started using this module.
Check out the following Python 2/3 script, which reproduces the behavior I'm seeing:
#!/usr/bin/env python
from __future__ import print_function
import concurrent.futures
import time

def control_c_this():
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future1 = executor.submit(wait_a_bit, name="Jack")
        future2 = executor.submit(wait_a_bit, name="Jill")

        for future in concurrent.futures.as_completed([future1, future2]):
            future.result()

        print("All done!")

def wait_a_bit(name):
    print("{n} is waiting...".format(n=name))
    time.sleep(100)

if __name__ == "__main__":
    control_c_this()
While this script is running it appears impossible to kill cleanly using the regular Control-C keyboard interrupt. I am running on OS X.
On Python 2.7 I have to resort to kill from the command line to kill the script. Control-C is just ignored.
On Python 3.4, Control-C works if you hit it twice, but then a lot of strange stack traces are dumped.
Most documentation I've found online talks about how to cleanly kill threads with the old threading module. None of it seems to apply here.
And all the methods provided within the concurrent.futures module to stop stuff (like Executor.shutdown() and Future.cancel()) only work when the Futures haven't started yet or are complete, which is pointless in this case. I want to interrupt the Future immediately.
My use case is simple: When the user hits Control-C, the script should exit immediately like any well-behaved script does. That's all I want.
So what's the proper way to get this behavior when using concurrent.futures?
It's kind of painful. Essentially, your worker threads have to be finished before your main thread can exit. You cannot exit unless they do. The typical workaround is to have some global state, that each thread can check to determine if they should do more work or not.
The threading docs explain why: in essence, if threads exited while the interpreter does, bad things could happen.
Here's a working example. Note that Ctrl-C takes at most 1 second to propagate, because of the sleep duration of the child thread.
#!/usr/bin/env python
from __future__ import print_function
import concurrent.futures
import time
import sys

quit = False

def wait_a_bit(name):
    while not quit:
        print("{n} is doing work...".format(n=name))
        time.sleep(1)

def setup():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    future1 = executor.submit(wait_a_bit, "Jack")
    future2 = executor.submit(wait_a_bit, "Jill")

    # main thread must be doing "work" to be able to catch a Ctrl+C
    # http://www.luke.maurits.id.au/blog/post/threads-and-signals-in-python.html
    while (not (future1.done() and future2.done())):
        time.sleep(1)

if __name__ == "__main__":
    try:
        setup()
    except KeyboardInterrupt:
        quit = True
I encountered this, but the issue I had was that many futures (tens of thousands) would be waiting to run, and just pressing Ctrl-C left them waiting, not actually exiting. I was using concurrent.futures.wait to run a progress loop, and needed to add a try ... except KeyboardInterrupt to handle cancelling unfinished Futures.
POLL_INTERVAL = 5
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    futures = [pool.submit(do_work, arg) for arg in large_set_to_do_work_over]
    # next line returns instantly
    done, not_done = concurrent.futures.wait(futures, timeout=0)
    try:
        while not_done:
            # next line 'sleeps' this main thread, letting the thread pool run
            freshly_done, not_done = concurrent.futures.wait(not_done, timeout=POLL_INTERVAL)
            done |= freshly_done
            # more polling stats calculated here and printed every POLL_INTERVAL seconds...
    except KeyboardInterrupt:
        # only futures that are not done will prevent exiting
        for future in not_done:
            # cancel() returns False if it's already done or currently running,
            # and True if it was able to cancel it; we don't need that return value
            _ = future.cancel()
        # wait for running futures that the above for loop couldn't cancel (note timeout)
        _ = concurrent.futures.wait(not_done, timeout=None)
If you're not interested in keeping exact track of what got done and what didn't (i.e. don't want a progress loop), you can replace the first wait call (the one with timeout=0) with not_done = futures and still leave the while not_done: logic.
The for future in not_done: cancel loop can probably behave differently based on that return value (or be written as a comprehension), but waiting for futures that are done or canceled isn't really waiting - it returns instantly. The last wait with timeout=None ensures that pool's running jobs really do finish.
Again, this only works correctly if the do_work that's being called actually, eventually returns within a reasonable amount of time. That was fine for me - in fact, I want to be sure that if do_work gets started, it runs to completion. If do_work is 'endless' then you'll need something like cdosborn's answer that uses a variable visible to all the threads, signaling them to stop themselves.
Late to the party, but I just had the same problem.
I want to kill my program immediately and I don't care what's going on. I don't need a clean shutdown beyond what Linux will do.
I found that replacing geitda's code in the KeyboardInterrupt exception handler with os.kill(os.getpid(), 9) exits immediately after the first ^C.
import os
import subprocess

main = str(os.getpid())

def ossystem(c):
    return subprocess.Popen(c, shell=True, stdout=subprocess.PIPE).stdout.read().decode("utf-8").strip()

def killexecutor():
    print("Killing")
    pids = ossystem('ps -a | grep scriptname.py').split('\n')
    for pid in pids:
        pid = pid.split(' ')[0].strip()
        if str(pid) != main:
            os.kill(int(pid), 9)

...

killexecutor()
I have a Python program that spawns a number of threads. These threads last anywhere between 2 seconds and 30 seconds. In the main thread I want to track whenever each thread completes and print a message. If I just sequentially .join() all threads, and the first thread lasts 30 seconds while the others complete much sooner, I wouldn't be able to print a message sooner; all messages would be printed after 30 seconds.
Basically I want to block until any thread completes. As soon as a thread completes, print a message about it and go back to blocking if any other threads are still alive. If all threads are done then exit program.
One way I could think of is to have a queue that is passed to all the threads and block on queue.get(). Whenever a message is received from the queue, print it, check if any other threads are alive using threading.active_count() and if so, go back to blocking on queue.get(). This would work but here all the threads need to follow the discipline of sending a message to the queue before terminating.
I'm wondering if this is the conventional way of achieving this behavior, or are there any other/better ways?
Here's a variation on @detly's answer that lets you specify the messages from your main thread, instead of printing them from your target functions. This creates a wrapper function which calls your target and then prints a message before terminating. You could modify this to perform any kind of standard cleanup after each thread completes.
#!/usr/bin/python

import threading
import time

def target1():
    time.sleep(0.1)
    print "target1 running"
    time.sleep(4)

def target2():
    time.sleep(0.1)
    print "target2 running"
    time.sleep(2)

def launch_thread_with_message(target, message, args=[], kwargs={}):
    def target_with_msg(*args, **kwargs):
        target(*args, **kwargs)
        print message
    thread = threading.Thread(target=target_with_msg, args=args, kwargs=kwargs)
    thread.start()
    return thread

if __name__ == '__main__':
    thread1 = launch_thread_with_message(target1, "finished target1")
    thread2 = launch_thread_with_message(target2, "finished target2")
    print "main: launched all threads"
    thread1.join()
    thread2.join()
    print "main: finished all threads"
The thread needs to be checked using the Thread.is_alive() call.
Why not just have the threads themselves print a completion message, or call some other completion callback when done?
You can the just join these threads from your main program, so you'll see a bunch of completion messages and your program will terminate when they're all done, as required.
Here's a quick and simple demonstration:
#!/usr/bin/python

import threading
import time

def really_simple_callback(message):
    """
    This is a really simple callback. `sys.stdout` already has a lock built-in,
    so this is fine to do.
    """
    print message

def threaded_target(sleeptime, callback):
    """
    Target for the threads: sleep and call back with completion message.
    """
    time.sleep(sleeptime)
    callback("%s completed!" % threading.current_thread())

if __name__ == '__main__':
    # Keep track of the threads we create
    threads = []

    # callback_when_done is effectively a function
    callback_when_done = really_simple_callback

    for idx in xrange(0, 10):
        threads.append(
            threading.Thread(
                target=threaded_target,
                name="Thread #%d" % idx,
                args=(10 - idx, callback_when_done)
            )
        )

    [t.start() for t in threads]
    [t.join() for t in threads]

    # Note that thread #0 runs for the longest, but we'll see its message first!
What I would suggest is a loop like this:
while len(threadSet) > 0:
    time.sleep(1)
    for thread in threadSet.copy():  # iterate over a copy, since we remove from the set
        if not thread.isAlive():
            print "Thread " + thread.getName() + " terminated"
            threadSet.remove(thread)
There is a 1 second sleep, so there will be a slight delay between the thread termination and the message being printed. If you can live with this delay, then I think this is a simpler solution than the one you proposed in your question.
You can let the threads push their results onto a queue (Queue.Queue in Python 2, queue.Queue in Python 3). Have another thread wait on this queue and print the message as soon as a new item appears.
I'm not sure I see the problem with using:
threading.activeCount()
to track the number of threads that are still active?
Even if you don't know how many threads you're going to launch before starting, it seems pretty easy to track. I usually generate thread collections via list comprehension; then a simple comparison of activeCount to the list size can tell you how many have finished.
See here: http://docs.python.org/library/threading.html
Alternately, once you have your thread objects you can just use the .isAlive method within the thread objects to check.
I just checked by throwing this into a multithread program I have and it looks fine:
for thread in threadlist:
    print(thread.isAlive())
Gives me a list of True/False as the threads turn on and off. So you should be able to do that and check for anything False in order to see if any thread is finished.
I use a slightly different technique because of the nature of the threads I used in my application. To illustrate, this is a fragment of a test-strap program I wrote to scaffold a barrier class for my threading class:
while threads:
    finished = set(threads) - set(threading.enumerate())
    while finished:
        ttt = finished.pop()
        threads.remove(ttt)
    time.sleep(0.5)
Why do I do it this way? In my production code, I have a time limit, so the first line actually reads "while threads and time.time() < cutoff_time". If I reach the cut-off, I then have code to tell the threads to shut down.
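A sketch of that production variant, assuming the shutdown signal is a threading.Event that the worker threads poll (the cutoff value and event name are illustrative):

import threading
import time

stop_requested = threading.Event()  # workers poll this and exit when it is set
cutoff_time = time.time() + 60      # hypothetical one-minute budget

while threads and time.time() < cutoff_time:
    finished = set(threads) - set(threading.enumerate())
    while finished:
        threads.remove(finished.pop())
    time.sleep(0.5)

if threads:               # hit the cutoff with threads still running
    stop_requested.set()  # tell the stragglers to shut down
    for t in threads:
        t.join()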