For a music sampler, I have two main threads (using threading):
Thread #1 reads sounds from disk (or from RAM, if the sound has already been loaded) in real time, when needed. For example, when a C#3 is pressed on the MIDI keyboard, C#3.wav must play as soon as possible.
Thread #2 preloads all the files into RAM, one after another.
Thread #2 should run in the background, only during free time, and must not prevent Thread #1 from doing its job quickly.
In short, Thread #1 should have much higher priority than Thread #2.
How can I do that with threading or any other Python thread management module? Or is it possible with pthread_setschedparam? How?
I'm no expert, but I would handle that problem like this:
#!/usr/bin/python2.7
# coding: utf-8
import threading, time

class Foo:
    def __init__(self):
        self.allow_thread1 = True
        self.allow_thread2 = True
        self.important_task = False
        threading.Thread(target=self.thread1).start()
        threading.Thread(target=self.thread2).start()

    def thread1(self):
        loops = 0
        while self.allow_thread1:
            for i in range(10):
                print ' thread1'
                time.sleep(0.5)
            self.important_task = True
            for i in range(10):
                print 'thread1 important task'
                time.sleep(0.5)
            self.important_task = False
            loops += 1
            if loops >= 2:
                self.exit()
            time.sleep(0.5)

    def thread2(self):
        while self.allow_thread2:
            if not self.important_task:
                print ' thread2'
            time.sleep(0.5)

    def exit(self):
        self.allow_thread2 = False
        self.allow_thread1 = False
        print 'Bye bye'
        exit()

if __name__ == '__main__':
    Foo()
In short, thread1 controls thread2: if thread1 is busy, thread2 pauses.
Note that I added loops only to kill the threads in the example; in a real case the exit function would be called when closing the program (if your threads always run in the background).
Related
I am writing a queue-processing application which uses threads to wait for and respond to queue messages delivered to the app. The main part of the application just needs to stay alive. Given code like:
while True:
    pass
or
while True:
    time.sleep(1)
Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?
I would imagine time.sleep() will have less overhead on the system. Using pass will cause the loop to immediately re-evaluate and peg the CPU, whereas using time.sleep will allow the execution to be temporarily suspended.
EDIT: just to prove the point, if you launch the python interpreter and run this:
>>> while True:
...     pass
...
You can watch Python start eating up 90-100% CPU instantly, versus:
>>> import time
>>> while True:
...     time.sleep(1)
...
Which barely even registers on the Activity Monitor (using OS X here but it should be the same for every platform).
Why sleep? You don't want to sleep, you want to wait for the threads to finish.
So
# store the threads you start in a your_threads list, then
for a_thread in your_threads:
    a_thread.join()
See: threading.Thread.join
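For completeness, here is a minimal runnable sketch of that pattern; the worker function, thread count, and sleep times are just illustrative:

import threading
import time

def handle_messages(worker_id):
    # Stand-in for the real queue-processing work.
    time.sleep(worker_id)
    print("worker %d done" % worker_id)

your_threads = [threading.Thread(target=handle_messages, args=(i,))
                for i in range(1, 4)]
for t in your_threads:
    t.start()

for a_thread in your_threads:
    a_thread.join()   # blocks here instead of busy-waiting or sleeping

print("all workers finished")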
If you are looking for a short, zero-cpu way to loop forever until a KeyboardInterrupt, you can use:
from threading import Event
Event().wait()
Note: Due to a bug, this only works on Python 3.2+. In addition, it appears to not work on Windows. For this reason, while True: sleep(1) might be the better option.
For some background, Event objects are normally used for waiting for long running background tasks to complete:
from threading import Event, Thread
from time import sleep

def do_task():
    sleep(10)
    print('Task complete.')
    event.set()

event = Event()
Thread(target=do_task).start()
event.wait()
print('Continuing...')
Which prints:
Task complete.
Continuing...
signal.pause() is another solution, see https://docs.python.org/3/library/signal.html#signal.pause
Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Not on Windows. (See the Unix man page signal(2).)
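For example, a minimal sketch (Unix only; the handler and choice of signal are just illustrative):

import signal

def handle_sigterm(signum, frame):
    print("got SIGTERM, time to shut down")
    # set a shutdown flag, close resources, etc.

signal.signal(signal.SIGTERM, handle_sigterm)

# Sleep until any signal arrives; Ctrl+C (SIGINT) still raises
# KeyboardInterrupt via the default handler.
signal.pause()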
I've always seen/heard that using sleep is the better way to do it. Using sleep will keep your Python interpreter's CPU usage from going wild.
You don't give much context to what you are really doing, but maybe Queue could be used instead of an explicit busy-wait loop? If not, I would assume sleep would be preferable, as I believe it will consume less CPU (as others have already noted).
[Edited according to additional information in comment below.]
Maybe this is obvious, but anyway: in a case where you are reading information from blocking sockets, you could have one thread read from the socket and post suitably formatted messages into a Queue, and have the rest of your "worker" threads read from that queue; the workers will then block on reads from the queue, with no need for either pass or sleep. A rough sketch of that layout follows.
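This is only a sketch of the idea, written for Python 3, with socket.socketpair() standing in for a real network connection; the names and message format are just illustrative:

import socket
import threading
import queue

msg_queue = queue.Queue()

# socketpair() stands in for a real blocking network socket.
reader_sock, writer_sock = socket.socketpair()

def reader():
    while True:
        data = reader_sock.recv(1024)       # blocks until data arrives
        if not data:
            break
        msg_queue.put(data.decode())        # hand the message to the workers

def worker(n):
    while True:
        msg = msg_queue.get()               # blocks; no pass or sleep needed
        print("worker %d handling %r" % (n, msg))
        msg_queue.task_done()

threading.Thread(target=reader, daemon=True).start()
for n in range(2):
    threading.Thread(target=worker, args=(n,), daemon=True).start()

writer_sock.sendall(b"hello")
msg_queue.join()                            # wait until the message is handled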
Running a method as a background thread with sleep in Python:
import threading
import time

class ThreadingExample(object):
    """ Threading example class

    The run() method will be started and it will run in the background
    until the application exits.
    """

    def __init__(self, interval=1):
        """ Constructor

        :type interval: int
        :param interval: Check interval, in seconds
        """
        self.interval = interval

        thread = threading.Thread(target=self.run, args=())
        thread.daemon = True            # Daemonize thread
        thread.start()                  # Start the execution

    def run(self):
        """ Method that runs forever """
        while True:
            # Do something
            print('Doing something important in the background')
            time.sleep(self.interval)

example = ThreadingExample()
time.sleep(3)
print('Checkpoint')
time.sleep(2)
print('Bye')
I've made a program which has a main thread that spawns many other threads by subclassing the threading.Thread class.
Each such child thread runs an infinite while loop, and inside the loop I check a condition. If the condition is true, I make the thread sleep for 1 second using time.sleep(1) and if it's false, then the thread performs some computation.
The program itself works fine and I've achieved what I wanted to do, my only remaining problem is that I seem unable to stop the threads after my work is done. I want the user to be able to kill all the threads by pressing a button or giving a keyboard interrupt like Ctrl+C.
For this I tried using the signal module and inserted a condition in the threads' loops that breaks the loop when the main thread catches a signal, but it didn't work for some reason. Can anyone please help with this?
EDIT: Here are the relevant code snippets:
def sighandler(signal, frame):
    BaseThreadClass.stop_flag = True

class BaseThreadClass(threading.Thread):
    stop_flag = False

    def __init__(self):
        threading.Thread.__init__(self)

    def run(self, *args):
        while True:
            if condition:
                time.sleep(1)
            else:
                #do computation and stuff
                pass
            if BaseThreadClass.stop_flag:
                #do cleanup
                break
Your basic method does work, but you've still not posted enough code to show the flaw. I added a few lines of code to make it runnable and produced a result like:
$ python3 test.py
thread alive
main alive
thread alive
main alive
^CSignal caught
main alive
thread alive
main alive
main alive
main alive
^CSignal caught
^CSignal caught
main alive
^Z
[2]+ Stopped python3 test.py
$ kill %2
The problem demonstrated above involves the signal handler telling all the threads to exit, except the main thread, which still runs and still catches interrupts. The full source of this variant of the sample snippet is:
import threading, signal, time

def sighandler(signal, frame):
    BaseThreadClass.stop_flag = True
    print("Signal caught")

class BaseThreadClass(threading.Thread):
    stop_flag = False

    def __init__(self):
        threading.Thread.__init__(self)

    def run(self, *args):
        while True:
            if True:
                time.sleep(1)
                print("thread alive")
            else:
                #do computation and stuff
                pass
            if BaseThreadClass.stop_flag:
                #do cleanup
                break

signal.signal(signal.SIGINT, sighandler)

t = BaseThreadClass()
t.start()

while True:
    time.sleep(1)
    print("main alive")
The problem here is that the main thread never checks for the quit condition. But since you never posted what the main thread does, how the signal handler is installed, or whether your threads may go a long time without checking the quit condition, I still don't know what went wrong in your program. The signal example shown in the library documentation raises an exception in order to divert the main thread.
Signals are a rather low level concept for this task, however. I took the liberty of writing a somewhat more naïve version of the main thread:
try:
    t = BaseThreadClass()
    t.start()

    while True:
        time.sleep(1)
        print("main alive")
except KeyboardInterrupt:
    BaseThreadClass.stop_flag = True
    t.join()
This version catches the exception thrown by the default interrupt handler, signals the thread to stop, and waits for it to do so. It might even be appropriate to change the except clause to a finally, since we could want to clean the threads up on other errors too.
If you want to do this kind of "cooperative" polled-shutdown, you can use a threading.Event to signal:
import threading
import time

def proc1():
    while True:
        print("1")  # payload
        time.sleep(1)
        # have we been signalled to stop?
        if not ev1.wait(0): break
    # do any shutdown etc. here
    print("T1 exiting")

ev1 = threading.Event()
ev1.set()

thread1 = threading.Thread(target=proc1)
thread1.start()

time.sleep(3)
# signal thread1 to stop
ev1.clear()
But be aware that if the "payload" does something blocking like network or file IO, that op will not be interrupted. You can do those blocking ops with a timeout, but that obviously will complicate your code.
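One common compromise is to give the blocking call a timeout so the loop gets back to checking the stop event regularly. A sketch of the idea, assuming sock is an already-connected socket (the names are illustrative):

import socket
import threading

stop_event = threading.Event()

def net_proc(sock):
    sock.settimeout(1.0)                # so recv() can block for at most one second
    while not stop_event.is_set():
        try:
            data = sock.recv(4096)      # blocking op, bounded by the timeout
        except socket.timeout:
            continue                    # no data yet; re-check the stop event
        if not data:
            break
        print("got %d bytes" % len(data))
    print("network thread exiting")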
EDIT 9/15/16: In my original code (still posted below) I tried to use .join() with a function, which is a silly mistake because it can only be used with a thread object. I am trying to
(1) continuously run a thread that gets data and saves it to a file
(2) have a second thread, or incorporate a queue, that stops the program once the user enters a flag (e.g. "stop"), without interrupting the data-gathering/saving thread.
I need help with multithreading. I am trying to run two threads, one that handles data and the second checks for a flag to stop the program.
I learned by trial and error that I can't interrupt a while loop without my computer exploding. Additionally, I have abandoned my GUI code because it made my code too complicated with the mulithreading.
What I want to do is run a thread that gathers data from an Arduino, saves it to a file, and repeats this. The second thread will scan for a flag -- which can be a raw_input? I can't think of anything else that a user can do to stop the data acquisition program.
I greatly appreciate any help on this. Here is my code (much of it is pseudocode, as you can see):
#threading
import thread
import time

global flag

def monitorData():
    print "running!"
    time.sleep(5)

def stopdata(flag):
    flag = raw_input("enter stop: ")
    if flag == "stop":
        monitorData.join()

flag = "start"
thread.start_new_thread(monitorData, ())
thread.start_new_thread(stopdata, (flag,))
The error I am getting is this when I try entering "stop" in the IDLE.
Unhandled exception in thread started by
Traceback (most recent call last):
File "c:\users\otangu~1\appdata\local\temp\IDLE_rtmp_h_frd5", line 16, in stopdata
AttributeError: 'function' object has no attribute 'join'
Once again I really appreciate any help, I have taught myself Python so far and this is the first huge wall that I've hit.
The error you see is a result of calling join on the function. You need to call join on a thread object. With the low-level thread module you never get such an object (start_new_thread only returns an identifier), so there is nothing to join anyway. Use threading.Thread instead, keep a reference to it, and join that:
th1 = threading.Thread(target=monitorData)
th1.start()
# later
th1.join()
As for a solution, you can use a Queue to communicate between threads. The queue is used to send a quit message to the worker thread and if the worker does not pick anything up off the queue for a second it runs the code that gathers data from the arduino.
from threading import Thread
from Queue import Queue, Empty

def worker(q):
    while True:
        try:
            item = q.get(block=True, timeout=1)
            q.task_done()
            if item == "quit":
                print("got quit msg in thread")
                break
        except Empty:
            print("empty, do some arduino stuff")

def input_process(q):
    while True:
        x = raw_input("")
        if x == 'q':
            print("will quit")
            q.put("quit")
            break

q = Queue()
t = Thread(target=worker, args=(q,))
t.start()
t2 = Thread(target=input_process, args=(q,))
t2.start()
# waits for the `task_done` function to be called
q.join()
t2.join()
t.join()
It's possibly a bit more code than you hoped for, and having to detect the empty queue with an exception is a little ugly, but this doesn't rely on any global variables and will always exit promptly. That won't be the case with sleep-based solutions, which need to wait for any current call to sleep to finish before resuming execution.
As noted by someone else, you should really be using threading rather than the older thread module, and I would also recommend learning with Python 3 rather than Python 2.
You're looking for something like this:
from threading import Thread
from time import sleep

# "volatile" global shared by threads
active = True

def get_data():
    while active:
        print "working!"
        sleep(3)

def wait_on_user():
    global active
    raw_input("press enter to stop")
    active = False

th1 = Thread(target=get_data)
th1.start()
th2 = Thread(target=wait_on_user)
th2.start()
th1.join()
th2.join()
You made a few obvious and a few less obvious mistakes in your code. First, join is called on a thread object, not a function. Similarly, join doesn't kill a thread, it waits for the thread to finish. A thread finishes when it has no more code to execute. If you want a thread to run until some flag is set, you normally include a loop in your thread that checks the flag every second or so (depending on how precise you need the timing to be).
Also, the threading module is preferred over the lower-level thread module. The latter was renamed to _thread in Python 3.
This is not possible. The thread function has to finish. You can't join it from the outside.
I've a python program that spawns a number of threads. These threads last anywhere between 2 seconds to 30 seconds. In the main thread I want to track whenever each thread completes and print a message. If I just sequentially .join() all threads and the first thread lasts 30 seconds and others complete much sooner, I wouldn't be able to print a message sooner -- all messages will be printed after 30 seconds.
Basically I want to block until any thread completes. As soon as a thread completes, print a message about it and go back to blocking if any other threads are still alive. If all threads are done then exit program.
One way I could think of is to have a queue that is passed to all the threads and block on queue.get(). Whenever a message is received from the queue, print it, check if any other threads are alive using threading.active_count() and if so, go back to blocking on queue.get(). This would work but here all the threads need to follow the discipline of sending a message to the queue before terminating.
I'm wondering if this is the conventional way of achieving this behavior, or whether there are other / better ways?
Here's a variation on #detly's answer that lets you specify the messages from your main thread, instead of printing them from your target functions. This creates a wrapper function which calls your target and then prints a message before terminating. You could modify this to perform any kind of standard cleanup after each thread completes.
#!/usr/bin/python

import threading
import time

def target1():
    time.sleep(0.1)
    print "target1 running"
    time.sleep(4)

def target2():
    time.sleep(0.1)
    print "target2 running"
    time.sleep(2)

def launch_thread_with_message(target, message, args=[], kwargs={}):
    def target_with_msg(*args, **kwargs):
        target(*args, **kwargs)
        print message
    thread = threading.Thread(target=target_with_msg, args=args, kwargs=kwargs)
    thread.start()
    return thread

if __name__ == '__main__':
    thread1 = launch_thread_with_message(target1, "finished target1")
    thread2 = launch_thread_with_message(target2, "finished target2")

    print "main: launched all threads"

    thread1.join()
    thread2.join()

    print "main: finished all threads"
The thread needs to be checked using the Thread.is_alive() call.
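A minimal sketch of polling with is_alive(), assuming threads is a list of already-started threading.Thread objects:

import time

def wait_and_report(threads):
    remaining = list(threads)
    while remaining:
        for t in remaining[:]:          # iterate over a copy so we can remove safely
            if not t.is_alive():
                print("%s finished" % t.name)
                remaining.remove(t)
        time.sleep(0.1)                 # avoid a busy-wait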
Why not just have the threads themselves print a completion message, or call some other completion callback when done?
You can then just join these threads from your main program, so you'll see a bunch of completion messages and your program will terminate when they're all done, as required.
Here's a quick and simple demonstration:
#!/usr/bin/python

import threading
import time

def really_simple_callback(message):
    """
    This is a really simple callback. `sys.stdout` already has a lock built-in,
    so this is fine to do.
    """
    print message

def threaded_target(sleeptime, callback):
    """
    Target for the threads: sleep and call back with completion message.
    """
    time.sleep(sleeptime)
    callback("%s completed!" % threading.current_thread())

if __name__ == '__main__':
    # Keep track of the threads we create
    threads = []

    # callback_when_done is effectively a function
    callback_when_done = really_simple_callback

    for idx in xrange(0, 10):
        threads.append(
            threading.Thread(
                target=threaded_target,
                name="Thread #%d" % idx,
                args=(10 - idx, callback_when_done)
            )
        )

    [t.start() for t in threads]
    [t.join() for t in threads]

    # Note that thread #0 runs for the longest, but we'll see its message first!
What I would suggest is a loop like this:
while len(threadSet) > 0:
    time.sleep(1)
    for thread in threadSet.copy():     # iterate over a copy so we can remove safely
        if not thread.isAlive():
            print "Thread " + thread.getName() + " terminated"
            threadSet.remove(thread)
There is a 1 second sleep, so there will be a slight delay between the thread termination and the message being printed. If you can live with this delay, then I think this is a simpler solution than the one you proposed in your question.
You can let the threads push their results into a Queue.Queue (queue.Queue on Python 3). Have another thread wait on this queue and print the message as soon as a new item appears.
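A sketch of that idea (Python 3 names; the worker and its sleep times are just placeholders):

import threading
import time
import queue   # Queue on Python 2

done_q = queue.Queue()

def worker(name, seconds):
    time.sleep(seconds)                 # stands in for the real work
    done_q.put("%s finished after %d s" % (name, seconds))

threads = [threading.Thread(target=worker, args=("T%d" % n, n))
           for n in (3, 1, 2)]
for t in threads:
    t.start()

for _ in threads:                       # one completion message per thread
    print(done_q.get())                 # blocks until the next thread reports in

for t in threads:
    t.join()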
I'm not sure I see the problem with using:
threading.activeCount()
to track the number of threads that are still active?
Even if you don't know how many threads you're going to launch before starting, it seems pretty easy to track. I usually generate thread collections via list comprehension; a simple comparison of activeCount to the list size then tells you how many have finished.
See here: http://docs.python.org/library/threading.html
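For instance, a small sketch of that bookkeeping (the worker is a placeholder):

import threading
import time

def work(n):
    time.sleep(n)

baseline = threading.active_count()            # threads alive before we start ours
threads = [threading.Thread(target=work, args=(n,)) for n in range(1, 4)]
for t in threads:
    t.start()

while threading.active_count() > baseline:     # some of ours are still running
    print("%d of ours still alive" % (threading.active_count() - baseline))
    time.sleep(0.5)
print("all done")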
Alternately, once you have your thread objects you can just use the .isAlive method within the thread objects to check.
I just checked by throwing this into a multithread program I have and it looks fine:
for thread in threadlist:
    print(thread.isAlive())
Gives me a list of True/False as the threads turn on and off. So you should be able to do that and check for anything False in order to see if any thread is finished.
I use a slightly different technique because of the nature of the threads I used in my application. To illustrate, this is a fragment of a test harness I wrote to scaffold a barrier class for my threading class:
while threads:
    finished = set(threads) - set(threading.enumerate())
    while finished:
        ttt = finished.pop()
        threads.remove(ttt)
    time.sleep(0.5)
Why do I do it this way? In my production code, I have a time limit, so the first line actually reads "while threads and time.time() < cutoff_time". If I reach the cut-off, I then have code to tell the threads to shut down.
I have a long process that I've scheduled to run in a thread, because otherwise it will freeze the UI in my wxpython application.
I'm using:
threading.Thread(target=myLongProcess).start()
to start the thread and it works, but I don't know how to pause and resume the thread. I looked in the Python docs for the above methods, but wasn't able to find them.
Could anyone suggest how I could do this?
I did some speed tests as well; the time to set the flag and for action to be taken is pleasantly fast: 0.00002 seconds on a slow two-processor Linux box.
Example of thread pause test using set() and clear() events:
import threading
import time

# This function gets called by our thread.. so it basically becomes the thread init...
def wait_for_event(e):
    while True:
        print('\tTHREAD: This is the thread speaking, we are Waiting for event to start..')
        event_is_set = e.wait()
        print('\tTHREAD: WHOOOOOO HOOOO WE GOT A SIGNAL : %s' % event_is_set)
        # or for Python >= 3.6
        # print(f'\tTHREAD: WHOOOOOO HOOOO WE GOT A SIGNAL : {event_is_set}')
        e.clear()

# Main code
e = threading.Event()
t = threading.Thread(name='pausable_thread',
                     target=wait_for_event,
                     args=(e,))
t.start()

while True:
    print('MAIN LOOP: still in the main loop..')
    time.sleep(4)
    print('MAIN LOOP: I just set the flag..')
    e.set()
    print('MAIN LOOP: now Im gonna do some processing')
    time.sleep(4)
    print('MAIN LOOP: .. some more processing im doing yeahhhh')
    time.sleep(4)
    print('MAIN LOOP: ok ready, soon we will repeat the loop..')
    time.sleep(2)
There is no method for other threads to forcibly pause a thread (any more than there is for other threads to kill that thread) -- the target thread must cooperate by occasionally checking appropriate "flags" (a threading.Condition might be appropriate for the pause/unpause case).
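For example, a minimal sketch of a pausable worker built on threading.Condition (the function names and timings are just illustrative):

import threading
import time

pause_cond = threading.Condition()
paused = False

def worker():
    while True:
        with pause_cond:
            while paused:              # re-check after every wakeup
                pause_cond.wait()      # releases the lock while waiting
        print("working...")            # one unit of work per iteration
        time.sleep(0.5)

def pause():
    global paused
    with pause_cond:
        paused = True

def resume():
    global paused
    with pause_cond:
        paused = False
        pause_cond.notify()            # wake the worker up

t = threading.Thread(target=worker)
t.daemon = True
t.start()

time.sleep(2)
pause()
time.sleep(2)
resume()
time.sleep(2)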
If you're on a Unix-y platform (anything but Windows, basically), you could use multiprocessing instead of threading -- it is much more powerful and lets you send signals to the "other process": SIGSTOP unconditionally pauses a process and SIGCONT resumes it. (If your process needs to do something right before it pauses, consider also the SIGTSTP signal, which the other process can catch to perform such pre-suspension duties.) There may be ways to obtain the same effect on Windows, but I'm not knowledgeable about them, if any.
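A rough sketch of that multiprocessing approach on a Unix-y platform (the child function and timings are placeholders):

import multiprocessing
import os
import signal
import time

def long_process():
    n = 0
    while True:
        n += 1
        print("step %d" % n)
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=long_process)
    p.start()

    time.sleep(3)
    os.kill(p.pid, signal.SIGSTOP)    # pause the child unconditionally
    time.sleep(3)
    os.kill(p.pid, signal.SIGCONT)    # resume it
    time.sleep(3)
    p.terminate()
    p.join()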
You can use signals: http://docs.python.org/library/signal.html#signal.pause
To avoid using signals you could use a token passing system. If you want to pause it from the main UI thread you could probably just use a Queue.Queue object to communicate with it.
Just push a message onto the queue telling the thread to sleep for a certain amount of time.
Alternatively you could simply continuously push tokens onto the queue from the main UI thread. The worker should just check the queue every N seconds (0.2 or something like that). When there are no tokens to dequeue the worker thread will block. When you want it to start again just start pushing tokens on to the queue from the main thread again.
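A sketch of that token-passing idea using a queue (Python 3 names; the token values and timings are just illustrative):

import threading
import time
import queue   # Queue on Python 2

commands = queue.Queue()

def worker():
    while True:
        try:
            cmd = commands.get(timeout=0.2)       # look for tokens every 0.2 s
        except queue.Empty:
            print("working...")                   # no token: keep doing real work
            continue
        if cmd == "quit":
            break
        if isinstance(cmd, (int, float)):         # a "sleep for N seconds" token
            print("pausing for %s s" % cmd)
            time.sleep(cmd)

t = threading.Thread(target=worker)
t.start()

time.sleep(1)
commands.put(3)        # ask the worker to pause for 3 seconds
time.sleep(5)
commands.put("quit")   # shut it down
t.join()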
The multiprocessing module works fine on Windows. See the documentation here (end of first paragraph):
http://docs.python.org/library/multiprocessing.html
On the wxPython IRC channel, we had a couple fellows trying multiprocessing out and they said it worked. Unfortunately, I have yet to see anyone who has written up a good example of multiprocessing and wxPython.
If you (or anyone else on here) come up with something, please add it to the wxPython wiki page on threading here: http://wiki.wxpython.org/LongRunningTasks
You might want to check that page out regardless as it has several interesting examples using threads and queues.
You might take a look at the Windows API for thread suspension.
As far as I'm aware there is no POSIX/pthread equivalent. Furthermore, I cannot ascertain if thread handles/IDs are made available from Python. There are also potential issues with Python, as its scheduling is done using the native scheduler, it's unlikely that it is expecting threads to suspend, particularly if threads suspended while holding the GIL, amongst other possibilities.
I had the same issue. I found it more effective to use time.sleep(1800) in the thread loop to pause the thread's execution. For example:
MON, TUE, WED, THU, FRI, SAT, SUN = range(7) #Enumerate days of the week
Thread 1:
def run(self):
    while not self.exit:
        try:
            localtime = time.localtime(time.time())
            #Evaluate stock
            if localtime.tm_hour > 16 or localtime.tm_wday > FRI:
                # do something
                pass
            else:
                print('Waiting to evaluate stocks...')
                time.sleep(1800)
        except:
            print(traceback.format_exc())
Thread 2:
def run(self):
    while not self.exit:
        try:
            localtime = time.localtime(time.time())
            if localtime.tm_hour >= 9 and localtime.tm_hour <= 16:
                # do something
                pass
            else:
                print('Waiting to update stocks indicators...')
                time.sleep(1800)
        except:
            print(traceback.format_exc())