I have multiple methods that need to run simultaneously, so I decided to create an individual thread for each of them. There is also a method whose sole purpose is to create another thread. Here is an example of what I have done. My question is: how can I safely close these threads?
from threading import Thread
....
def startOtherThread():
    Thread(target=myMethod).start()
    Thread(target=anotherMethod).start()
....
You do not close threads; a thread simply ends when its target= function returns. It is also not clear why you introduce a separate method just to start a thread: Thread(target=...).start() is simple enough on its own. (If what you really want is to stop a thread early, see the cooperative-shutdown sketch at the end of this answer.)
When you work with threads, you have three basic options:
- wait in the main thread until the child thread finishes, using the join() method
- just let the child thread run by doing nothing
- have the child thread exit when the main thread exits, by calling setDaemon(True) on the thread object (or setting its daemon attribute to True) before starting it.
Also, be aware of the GIL (Global Interpreter Lock) in CPython: threads do not execute Python bytecode in parallel, so they won't speed up CPU-bound work.
Here is some basic test code for threads:
import threading
import time
import sys

def f():
    sys.stderr.write("Enter\n")
    time.sleep(2)
    sys.stderr.write("Leave\n")

if __name__ == '__main__':
    t0 = threading.Thread(target=f)
    t1 = threading.Thread(target=f)
    t0.start()          # t0 is non-daemon: the program waits for it to finish
    time.sleep(1)
    t1.setDaemon(True)  # t1 is a daemon: it is killed when the main thread exits
    t1.start()
    #t1.join()          # uncomment to wait for t1 as well
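If you do need to stop a long-running thread before its work is naturally done, the standard approach is cooperative shutdown: the thread periodically checks a shared threading.Event and returns when it is set. A minimal sketch; the worker function and timings are made up for illustration:

import threading
import time

stop_event = threading.Event()

def worker():
    # Loop until the main thread asks us to stop.
    while not stop_event.is_set():
        time.sleep(0.1)  # stand-in for real work

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
stop_event.set()  # ask the worker to exit
t.join()          # wait until it actually has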
Related
What is the best way to update a GUI from another thread in Python?
I have my main function (the GUI) in thread1, and from it I start another thread (thread2). Is it possible to update the GUI while thread2 is working, without cancelling the work in thread2? If so, how can I do that?
Any suggested reading about thread handling?
Of course you can use threading to run several tasks simultaneously.
One way is to subclass Thread, like this:
from threading import Thread, Lock

class Work(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.lock = Lock()

    def run(self):  # start() invokes this method in the new thread
        pass  # (your code)
If you want to run several threads at the same time:
def foo():
    workers = []
    for i in range(10):
        workers.append(Work())
        workers[i].start()  # start() calls the run() method of the class above
Be careful if you want to use the same variable from several threads: you must guard it with a lock so that two threads cannot modify it at the same time. Like this:
lock = Lock()

lock.acquire()
try:
    yourVariable += 1  # while one thread holds the lock, no other thread can enter this block
finally:
    lock.release()
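The idiomatic way to write the same thing is to use the lock as a context manager, which acquires it on entry and releases it on exit, even if an exception is raised:

with lock:
    yourVariable += 1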
A related pattern is a fixed pool of worker threads consuming tasks from a queue.Queue (see the sketch below). From the main thread, you can call join() on the queue to wait until all pending tasks have been completed.
This approach has the benefit that you are not repeatedly creating and destroying threads, which is expensive. The worker threads run continuously, but sleep when no tasks are in the queue, using zero CPU time.
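A minimal sketch of that pattern; the worker body here is a made-up stand-in for real work:

import threading
import queue

task_queue = queue.Queue()

def worker():
    while True:
        task = task_queue.get()      # sleeps until a task is available
        try:
            print('handling', task)  # stand-in for real work
        finally:
            task_queue.task_done()   # lets task_queue.join() return

# Daemon workers die with the main thread.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

for n in range(10):
    task_queue.put(n)

task_queue.join()  # block until every task has been marked done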
I hope it will help you.
I have a python program with one main thread and let's say 2 other threads (or maybe even more, probably doesn't matter). I would like to let the main thread sleep until ONE of the other threads is finished. It's easy to do with polling (by calling t.join(1) and waiting for one second for every thread t).
Is it possible to do it without polling, just by
SOMETHING_LIKE_JOIN(1, [t1, t2])
where t1 and t2 are threading.Thread objects? The call should sleep for at most 1 second, but wake up as soon as one of t1, t2 finishes. Quite similar to the POSIX select(2) call with two file descriptors.
One solution is to use a multiprocessing.dummy.Pool; multiprocessing.dummy provides an API almost identical to multiprocessing, but backed by threads, so it gets you a thread pool for free.
For example, you can do:
from multiprocessing.dummy import Pool as ThreadPool

pool = ThreadPool(2)  # two worker threads

# some_func and list_of_func_args stand for your task function and its inputs
for res in pool.imap_unordered(some_func, list_of_func_args):
    print(res)  # res is whatever some_func returned
multiprocessing.Pool.imap_unordered yields results as they become available, in completion order rather than submission order.
If you can use Python 3.2 or higher (or install the concurrent.futures PyPI module for older Python) you can generalize to disparate task functions by creating one or more Futures from a ThreadPoolExecutor, then using concurrent.futures.wait with return_when=FIRST_COMPLETED, or using concurrent.futures.as_completed for similar effect.
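For example, a sketch with two made-up task functions, waking as soon as the first one finishes:

import concurrent.futures
import time

def task_a():
    time.sleep(2)
    return 'a done'

def task_b():
    time.sleep(1)
    return 'b done'

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(task_a), executor.submit(task_b)]
    done, not_done = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    for fut in done:
        print(fut.result())  # the faster task, 'b done', arrives first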
Here is an example using a Condition object.
from threading import Thread, Condition, Lock
from time import sleep
from random import random

_lock = Lock()

def run(idx, condition):
    sleep(random() * 3)
    print('thread_%d is waiting for notifying main thread.' % idx)
    _lock.acquire()
    with condition:
        print('thread_%d notifies main thread.' % idx)
        condition.notify()

def is_working(thread_list):
    for t in thread_list:
        if t.is_alive():
            return True
    return False

def main():
    condition = Condition(Lock())
    thread_list = [Thread(target=run, kwargs={'idx': i, 'condition': condition})
                   for i in range(10)]
    with condition:
        with _lock:
            for t in thread_list:
                t.start()
            while is_working(thread_list):
                _lock.release()
                if condition.wait(timeout=1):
                    print('do something')
                    sleep(1)  # <-- Main thread is doing something.
                else:
                    print('timeout')
    for t in thread_list:
        t.join()

if __name__ == '__main__':
    main()
I don't think there is a race condition as you described in the comment. The condition object contains a Lock; while the main thread is working (the sleep(1) in the example), it holds that lock, and no thread can notify it until it finishes its work and releases the lock.
I just realized that there was a race condition in the previous example, so I added a global _lock to ensure that no worker notifies the main thread before the main thread starts waiting. I don't like how it works, but I haven't figured out a better solution...
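For what it's worth, a queue can avoid the lock juggling entirely: each worker puts its index on a queue.Queue when it finishes, and the main thread blocks on get() with a timeout. A minimal sketch of that alternative:

import queue
from threading import Thread
from time import sleep
from random import random

done_queue = queue.Queue()

def run(idx):
    sleep(random() * 3)
    done_queue.put(idx)  # signal the main thread

threads = [Thread(target=run, args=(i,)) for i in range(10)]
for t in threads:
    t.start()

finished = 0
while finished < len(threads):
    try:
        idx = done_queue.get(timeout=1)  # wakes as soon as any thread finishes
        print('thread_%d finished' % idx)
        finished += 1
    except queue.Empty:
        print('timeout')

for t in threads:
    t.join()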
You can create a Thread subclass and keep a reference to each instance in the main thread. The main thread can then check whether a thread has finished and continue accordingly.
If that doesn't help you, I suggest you look at the queue module!
import threading
import time, random

# THREAD CLASS #
class Thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True
        self.state = False
        # START THE THREAD (CALLS THE RUN METHOD) #
        self.start()

    # WHAT THE THREAD ACTUALLY DOES #
    def run(self):
        # THREAD SLEEPS FOR A RANDOM TIME RANGE #
        time.sleep(random.randrange(5, 10))
        # AFTERWARDS IT HAS FINISHED (STORED IN A VARIABLE) #
        self.state = True

    # RETURNS THE STATE #
    def getState(self):
        return self.state

# 10 SEPARATE THREADS #
threads = []
for i in range(10):
    threads.append(Thread())

# MAIN THREAD #
while True:
    # RUN THROUGH ALL THREADS AND CHECK THEIR STATE #
    for i in range(len(threads)):
        if threads[i].getState():
            print("WAITING IS OVER: THREAD", i)
    # SLEEP ONE SECOND #
    time.sleep(1)
import time
from threading import Thread

class MyClass:
    #...
    def method2(self):
        while True:
            try:
                hashes = self.target.bssid.replace(':', '') + '.pixie'
                text = open(hashes).read().splitlines()
            except IOError:
                time.sleep(5)
                continue
            # function goes on ...

    def method1(self):
        new_thread = Thread(target=self.method2())
        new_thread.setDaemon(True)
        new_thread.start()  # Main thread will stop there, wait until method2
        print("Its continues!")  # wont show =(
        # function goes on ...
Is it possible to do it like that?
After new_thread.start(), the main thread waits until it is done. Why is that happening? I didn't call new_thread.join() anywhere.
Making the thread a daemon doesn't solve my problem, because the problem is that the main thread stops right after the new thread starts, not that it keeps running after the main thread's work is done.
As written, the call to the Thread constructor is invoking self.method2 instead of referring to it. Replace target=self.method2() with target=self.method2 and the threads will run in parallel.
Note that, depending on what your threads do, CPU computations might still be serialized due to the GIL.
IIRC, this is because the program doesn't exit until all non-daemon threads have finished execution. If you use a daemon thread instead, it should fix the issue. This answer gives more details on daemon threads:
Daemon Threads Explanation
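In code, inside method1 from the question, that amounts to the following; note it also applies the target fix from the answer above:

new_thread = Thread(target=self.method2)  # reference the method, don't call it
new_thread.daemon = True  # a daemon thread won't keep the program alive
new_thread.start()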
I can't understand why the SIGINT is never caught by the piece of code below.
#!/usr/bin/env python
from threading import Thread
from time import sleep
import signal

class MyThread(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.running = True

    def stop(self):
        self.running = False

    def run(self):
        while self.running:
            for i in range(500):
                col = i**i
                print(col)
            sleep(0.01)

threads = []
for w in range(150):
    threads.append(MyThread())

def stop(s, f):
    for t in threads:
        t.stop()

signal.signal(signal.SIGINT, stop)

for t in threads:
    t.start()
for t in threads:
    t.join()
To clean up this code, I would prefer to wrap the join() in a try/except and stop all threads in case of an exception. Would that work?
One of the problems with multithreading in python is that join() more or less disables signals.
This is because the signal can only be delivered to the main thread, but the main thread is already busy with performing the join() and the join is not interruptible.
You can deduce this from the documentation of the signal module:
Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication. Use locks instead.
You can work around it by busy-looping over the join operation:
for t in threads:
    while t.is_alive():
        t.join(timeout=1)
This is, however, not very efficient:
The workaround of calling join() with a timeout has a drawback:
Python's threading wait routine polls 20 times a second when
given any timeout. All this polling can mean lots of CPU
interrupts/wakeups on an otherwise idle laptop and drain the
battery faster.
Some more details are provided here:
Python program with thread can't catch CTRL+C
Bug reports for this problem with a discussion of the underlying issue can be found here:
https://bugs.python.org/issue1167930
https://bugs.python.org/issue1171023
I have a simple app that listens to a socket connection. Whenever certain chunks of data come in, a callback handler is called with that data. In that callback I want to send my data to another process or thread, since it could take a long time to deal with. I was originally running the code in the callback function, but it blocks!
What's the proper way to spin off a new task?
threading is the library usually used for I/O-bound concurrency; the multiprocessing library is another option, but designed more for CPU-intensive parallel computing tasks. threading is generally the recommended library in your case.
Example
import threading, time

def my_threaded_func(arg, arg2):
    print("Running thread! Args:", (arg, arg2))
    time.sleep(10)
    print("Done!")

thread = threading.Thread(target=my_threaded_func, args=("I'ma", "thread"))
thread.start()
print("Spun off thread")
The multiprocessing module has worker pools. If you don't need a pool of workers, you can use Process to run something in parallel with your main program.
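A minimal sketch of that; handle_data and the chunk passed to it are made-up placeholders:

from multiprocessing import Process

def handle_data(data):
    print('processing', data)  # stand-in for the long-running work

if __name__ == '__main__':
    p = Process(target=handle_data, args=('some chunk',))
    p.start()  # runs in a separate process, so the callback isn't blocked
    # ... main program keeps running ...
    p.join()   # wait for it when you eventually need the work finished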
import threading
from time import sleep
import sys

# assume function defs ...

class myThread(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    def run(self):
        if self.threadID == "run_exe":
            run_exe()

def main():
    itemList = getItems()
    for item in itemList:
        thread = myThread("run_exe")
        thread.start()
        sleep(.1)
        listenToSocket(item)
        while thread.is_alive():
            pass  # busy-wait for the thread to finish before looping

main()
sys.exit(0)
The sleep between thread.start() and listenToSocket(item) ensures that the thread is established before you begin to listen. I implemented this code in a unit-test framework where I had to launch multiple non-blocking processes (len(itemList) times), because my other testing framework (listenToSocket(item)) was dependent on those processes.
run_exe() can trigger a subprocess call that would otherwise be blocking (e.g. invoking pipe.communicate()), so that output data from the execution is still printed in time with the Python script's output. The nature of threading makes this OK.
So this code solves two problems: printing the output of a subprocess without blocking script execution, AND dynamically creating and starting multiple threads sequentially (which makes the script easier to maintain if I ever add more items to itemList later).
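For illustration, run_exe could look something like the following sketch; the tool name and its arguments are hypothetical:

import subprocess

def run_exe():
    # Popen + communicate() blocks this worker thread, not the main thread.
    pipe = subprocess.Popen(['./some_tool'],  # hypothetical executable
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = pipe.communicate()  # waits for the process and collects output
    print(out.decode(), end='')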