I have two thread classes, extract and detect.
Extract extracts frames from a video and stores them in a folder; Detect takes images from the folder where the frames are extracted and detects objects.
But when I run the code below, only extract works:
global q
q = Queue()
class extract(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
print("T1")
cam = cv2.VideoCapture(video_name)
frameNum = 0
# isCaptured = True
frameCount = 0
while True:
isCapture, frame = cam.read()
if not isCapture:
break
if frameCount % 5 == 0:
frameNum = frameNum + 1
fileName = vid + str(frameNum) + '.jpg'
cv2.imwrite('images/extracted/' + fileName, frame)
q.put(fileName)
frameCount += 1
cam.release()
cv2.destroyAllWindows()
class detect(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
print("T2")
#logic to detect objects.
if __name__ == '__main__':
thread1 = extract()
thread1.start()
thread2 = detect()
thread2.start()
This prints only T1 and no T2.
I thought detect probably ran first while the queue was still empty, so nothing happened, so I added dummy entries to the queue and it ran the way I wanted.
But it ran only for the dummy entries; it didn't work for the entries that the extract thread added to the queue.
I looked up other questions and none of them seemed to solve the problem, hence posting this here.
You probably want to keep your detect logic in an infinite loop as well.
class detect(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
while True:
#detect frame
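For example, a minimal sketch of what that loop could look like (illustrative only, not from the original code). It assumes extract puts each written file name on the shared q and that you add q.put(None) at the end of extract's run() as a sentinel so detect knows when to stop; detect_objects is a placeholder for your own detection logic.
class detect(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        print("T2")
        while True:
            fileName = q.get()           # blocks until extract adds a frame
            if fileName is None:         # sentinel: extract has finished
                break
            frame = cv2.imread('images/extracted/' + fileName)
            # detect_objects(frame)      # placeholder for the detection logic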
If it is a single-frame detection, consider waiting in the detect thread instead.
from time import sleep
class detect(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
sleep(120)
# Detect logic
Instead of waiting for a hardcoded time, you can make use of an Event(): the extract thread sets the event when it has added all its tasks, and the detect thread keeps checking for work until the event is set.
If the event is set, that means all tasks have been added. Additionally, you also have to keep an eye on the queue in case some tasks are still unprocessed.
I have written example code to demonstrate how it works; you can modify it according to your needs.
Here extract takes 5 seconds to add a task to the queue and detect checks for a task every second. So even if extract is slower, whatever is available will be processed by detect, and when all tasks are done detect breaks out of the loop.
import threading
import queue
import time
global q
q = queue.Queue()
class extract(threading.Thread):
all_tasks_done = threading.Event()
def __init__(self):
threading.Thread.__init__(self)
def run(self):
counter = 5
while counter:
time.sleep(5)
counter -= 1
q.put(1)
print("added a task to queue")
extract.all_tasks_done.set()
class detect(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
while not extract.all_tasks_done.wait(1) or not q.empty():
print(q.get())
print("detection done")
#logic to detect objects.
if __name__ == '__main__':
thread1 = extract()
thread1.start()
thread2 = detect()
thread2.start()
thread1.join()
thread2.join()
print("work done")
How can I start and stop a thread with my poor thread class?
It is in a loop, and I want to restart it again at the beginning of the code. How can I do start-stop-restart-stop-restart?
My class:
import threading
class Concur(threading.Thread):
def __init__(self):
self.stopped = False
threading.Thread.__init__(self)
def run(self):
i = 0
while not self.stopped:
time.sleep(1)
i = i + 1
In the main code, I want:
inst = Concur()
while conditon:
inst.start()
# After some operation
inst.stop()
# Some other operation
You can't actually stop and then restart a thread, since you can't call its start() method again after its run() method has terminated. However, you can make one pause and then later resume its execution by using a threading.Condition variable to avoid concurrency problems when checking or changing its running state.
threading.Condition objects have an associated threading.Lock object and methods to wait for it to be released and will notify any waiting threads when that occurs. Here's an example derived from the code in your question which shows this being done. In the example code I've made the Condition variable a part of Thread subclass instances to better encapsulate the implementation and avoid needing to introduce additional global variables:
from __future__ import print_function
import threading
import time
class Concur(threading.Thread):
def __init__(self):
super(Concur, self).__init__()
self.iterations = 0
self.daemon = True # Allow main to exit even if still running.
self.paused = True # Start out paused.
self.state = threading.Condition()
def run(self):
self.resume()
while True:
with self.state:
if self.paused:
self.state.wait() # Block execution until notified.
# Do stuff...
time.sleep(.1)
self.iterations += 1
def pause(self):
with self.state:
self.paused = True # Block self.
def resume(self):
with self.state:
self.paused = False
self.state.notify() # Unblock self if waiting.
class Stopwatch(object):
""" Simple class to measure elapsed times. """
def start(self):
""" Establish reference point for elapsed time measurements. """
self.start_time = time.time()
return self
    @property
def elapsed_time(self):
""" Seconds since started. """
try:
return time.time() - self.start_time
except AttributeError: # Wasn't explicitly started.
self.start_time = time.time()
return 0
MAX_RUN_TIME = 5 # Seconds.
concur = Concur()
stopwatch = Stopwatch()
print('Running for {} seconds...'.format(MAX_RUN_TIME))
concur.start()
while stopwatch.elapsed_time < MAX_RUN_TIME:
concur.resume()
# Can also do other concurrent operations here...
concur.pause()
# Do some other stuff...
# Show Concur thread executed.
print('concur.iterations: {}'.format(concur.iterations))
This is David Heffernan's idea fleshed-out. The example below runs for 1 second, then stops for 1 second, then runs for 1 second, and so on.
import time
import threading
import datetime as DT
import logging
logger = logging.getLogger(__name__)
def worker(cond):
i = 0
while True:
with cond:
cond.wait()
logger.info(i)
time.sleep(0.01)
i += 1
logging.basicConfig(level=logging.DEBUG,
format='[%(asctime)s %(threadName)s] %(message)s',
datefmt='%H:%M:%S')
cond = threading.Condition()
t = threading.Thread(target=worker, args=(cond, ))
t.daemon = True
t.start()
start = DT.datetime.now()
while True:
now = DT.datetime.now()
if (now-start).total_seconds() > 60: break
if now.second % 2:
with cond:
cond.notify()
The implementation of stop() would look like this:
def stop(self):
self.stopped = True
If you want to restart, then you can just create a new instance and start that.
while conditon:
inst = Concur()
inst.start()
#after some operation
inst.stop()
#some other operation
The documentation for Thread makes it clear that the start() method can only be called once for each instance of the class.
If you want to pause and resume a thread, then you'll need to use a condition variable.
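A compact sketch of that pause/resume pattern (illustrative only; the class name PausableWorker is made up for this example):
import threading
import time
class PausableWorker(threading.Thread):
    def __init__(self):
        super(PausableWorker, self).__init__()
        self.paused = False
        self.cond = threading.Condition()
    def run(self):
        while True:
            with self.cond:
                while self.paused:
                    self.cond.wait()   # sleep until resume() notifies
            time.sleep(1)              # the actual work goes here
    def pause(self):
        with self.cond:
            self.paused = True
    def resume(self):
        with self.cond:
            self.paused = False
            self.cond.notify()         # wake the worker if it is waiting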
I have a class (MyClass) which contains a queue (self.msg_queue) of actions that need to be run and I have multiple sources of input that can add tasks to the queue.
Right now I have three functions that I want to run concurrently:
MyClass.get_input_from_user()
Creates a window in tkinter that has the user fill out information and when the user presses submit it pushes that message onto the queue.
MyClass.get_input_from_server()
Checks the server for a message, reads the message, and then puts it onto the queue. This method uses functions from MyClass's parent class.
MyClass.execute_next_item_on_the_queue()
Pops a message off of the queue and then acts upon it. It is dependent on what the message is, but each message corresponds to some method in MyClass or its parent which gets run according to a big decision tree.
Process description:
After the class has joined the network, I have it spawn three threads (one for each of the above functions). Each threaded function adds items to the queue with the syntax "self.msg_queue.put(message)" and removes items from the queue with "self.msg_queue.get_nowait()".
Problem description:
The issue I am having is that it seems that each thread is modifying its own queue object (they are not sharing the queue, msg_queue, of the class of which they, the functions, are all members).
I am not familiar enough with multiprocessing to know which error messages are the important ones; however, it states that it cannot pickle a weakref object (it gives no indication of which object is the weakref), and that within the queue.put() call the line "self._sem.acquire(block, timeout)" yields a "[WinError 5] Access is denied" error. Would it be safe to assume that this failure comes from the queue's reference not being copied over properly?
[I am using Python 3.7.2 and the Multiprocessing package's Process and Queue]
[I have seen multiple Q/As about having threads shuttle information between classes--create a master harness that generates a queue and then pass that queue as an argument to each thread. If the functions didn't have to use other functions from MyClass I could see adapting this strategy by having those functions take in a queue and use a local variable rather than class variables.]
[I am fairly confident that this error is not the result of passing my queue to the tkinter object as my unit tests on how my GUI modifies its caller's queue work fine]
Below is a minimal reproducible example for the queue's error:
from multiprocessing import Queue
from multiprocessing import Process
import queue
import time
class MyTest:
def __init__(self):
self.my_q = Queue()
self.counter = 0
def input_function_A(self):
while True:
self.my_q.put(self.counter)
self.counter = self.counter + 1
time.sleep(0.2)
def input_function_B(self):
while True:
self.counter = 0
self.my_q.put(self.counter)
time.sleep(1)
def output_function(self):
while True:
try:
var = self.my_q.get_nowait()
except queue.Empty:
var = -1
except:
break
print(var)
time.sleep(1)
def run(self):
process_A = Process(target=self.input_function_A)
process_B = Process(target=self.input_function_B)
process_C = Process(target=self.output_function)
process_A.start()
process_B.start()
process_C.start()
# without this it generates the WinError:
# with this it still behaves as if the two input functions do not modify the queue
process_C.join()
if __name__ == '__main__':
test = MyTest()
test.run()
Indeed - these are not "threads" - these are "processes". If you were using multithreading rather than multiprocessing, the self.my_q instance would be the same object, placed at the same memory location on the computer.
Multiprocessing starts a new process (by fork or spawn, depending on the platform), and any data in the original process (the one executing the "run" call) is duplicated when it is used - so each subprocess sees its own "Queue" instance, unrelated to the others.
The correct way to have the various processes share a multiprocessing.Queue object is to pass it as a parameter to the target methods. The simplest way to reorganize your code so that it works is thus:
from multiprocessing import Queue
from multiprocessing import Process
import queue
import time
class MyTest:
def __init__(self):
self.my_q = Queue()
self.counter = 0
    # the shared queue is received as a parameter; it is named q here so that
    # it does not shadow the imported queue module (needed for queue.Empty below)
    def input_function_A(self, q):
        while True:
            q.put(self.counter)
            self.counter = self.counter + 1
            time.sleep(0.2)
    def input_function_B(self, q):
        while True:
            self.counter = 0
            q.put(self.counter)
            time.sleep(1)
    def output_function(self, q):
        while True:
            try:
                var = q.get_nowait()
            except queue.Empty:
                var = -1
            except:
                break
            print(var)
            time.sleep(1)
def run(self):
        process_A = Process(target=self.input_function_A, args=(self.my_q,))
        process_B = Process(target=self.input_function_B, args=(self.my_q,))
        process_C = Process(target=self.output_function, args=(self.my_q,))
process_A.start()
process_B.start()
process_C.start()
# without this it generates the WinError:
# with this it still behaves as if the two input functions do not modify the queue
process_C.join()
if __name__ == '__main__':
test = MyTest()
test.run()
As you can see, since your class is not actually sharing any data through the instance's attributes, this "class" design does not make much sense for your application - it only serves to group the different workers in the same code block.
It would be possible to have a magic-multiprocess-class that would have some internal method to actually start the worker-methods and share the Queue instance - so if you have a lot of those in a project, there would be a lot less boilerplate.
Something along these lines:
from multiprocessing import Queue
from multiprocessing import Process
import queue
import time
class MPWorkerBase:
def __init__(self, *args, **kw):
self.queue = None
self.is_parent_process = False
self.is_child_process = False
self.processes = []
        # ensure this can be used as a collaborative mixin
super().__init__(*args, **kw)
def run(self):
if self.is_parent_process or self.is_child_process:
# workers already initialized
return
self.queue = Queue()
processes = []
cls = self.__class__
for name in dir(cls):
method = getattr(cls, name)
if callable(method) and getattr(method, "_MP_worker", False):
process = Process(target=self._start_worker, args=(self.queue, name))
                processes.append(process)  # collect locally; assigned to self.processes after all have started
process.start()
            # Setting these attributes here, after the workers have started, ensures the child processes have the initial values for them.
self.is_parent_process = True
self.processes = processes
def _start_worker(self, queue, method_name):
# this method is called in a new spawned process - attribute
# changes here no longer reflect attributes on the
# object in the initial process
# overwrite queue in this process with the queue object sent over the wire:
self.queue = queue
self.is_child_process = True
# call the worker method
getattr(self, method_name)()
def __del__(self):
for process in self.processes:
process.join()
def worker(func):
"""decorator to mark a method as a worker that should
run in its own subprocess
"""
func._MP_worker = True
return func
class MyTest(MPWorkerBase):
def __init__(self):
super().__init__()
self.counter = 0
    @worker
def input_function_A(self):
while True:
self.queue.put(self.counter)
self.counter = self.counter + 1
time.sleep(0.2)
    @worker
def input_function_B(self):
while True:
self.counter = 0
self.queue.put(self.counter)
time.sleep(1)
    @worker
def output_function(self):
while True:
try:
var = self.queue.get_nowait()
except queue.Empty:
var = -1
except:
break
print(var)
time.sleep(1)
if __name__ == '__main__':
test = MyTest()
test.run()
I have this example code to explain my problem:
import threading
import time
class thread1(threading.Thread):
def __init__(self, lock):
threading.Thread.__init__(self)
self.daemon = True
self.start()
self.lock = lock
def run(self):
while True:
self.lock.acquire(True)
print ('write done by t1')
self.lock.release()
class thread2(threading.Thread):
def __init__(self, lock):
threading.Thread.__init__(self)
self.daemon = True
self.start()
self.lock = lock
def run(self):
while True:
self.lock.acquire(True)
print ('write done by t2')
self.lock.release()
if __name__ == '__main__':
lock = threading.Lock()
t1 = thread1(lock)
t2 = thread2(lock)
lock.acquire(True)
counter = 0
while True:
print("main...")
counter = counter + 1
if(counter==5 or counter==10):
lock.release() # Here I want to unlock both threads to run just one time and then wait until I release again
time.sleep(1)
t1.join()
t2.join()
What I'm having issues with is the following:
I want to have two threads (thread1 and thread2) that are launched at the beginning of the program, but they should wait until the main() counter reaches 5 or 10.
When the main() counter reaches 5 or 10, it should signal/trigger/unlock the threads, and both threads should run just once and then wait until a new unlock.
I was expecting the code to have the following output (Each line is 1 second running):
main...
main...
main...
main...
main...
write done by t1
write done by t2
main...
main...
main...
main...
main...
write done by t1
write done by t2
Instead I have a different behaviour, such as starting with:
write done by t1
write done by t1
write done by t1
write done by t1
(etc)
And after 5 seconds:
write done by t2
repeated a lot of times...
Can someone explain what is wrong and how I can improve this?
In __init__() of thread1 and thread2, start() is invoked before self.lock is assigned.
t1 and t2 are created before the main thread acquires the lock. That makes these two threads start printing before the main thread locks them, which is why your code prints the first several lines of "write done by x".
After the counter reaches 5, the main thread releases the lock, but it never locks it again. That makes t1 and t2 keep running.
It never quits unless you kill it...
I suggest you use a Condition object instead of a Lock.
Here is an example based on your code.
import threading
import time
class Thread1(threading.Thread):
def __init__(self, condition_obj):
super().__init__()
self.daemon = True
self.condition_obj = condition_obj
self.start()
def run(self):
with self.condition_obj:
while True:
self.condition_obj.wait()
print('write done by t1')
class Thread2(threading.Thread):
def __init__(self, condition_obj):
super().__init__()
self.daemon = True
self.condition_obj = condition_obj
self.start()
def run(self):
with self.condition_obj:
while True:
self.condition_obj.wait()
print('write done by t2')
if __name__ == '__main__':
condition = threading.Condition()
t1 = Thread1(condition)
t2 = Thread2(condition)
counter = 0
while True:
print("main...")
counter += 1
if counter == 5 or counter == 10:
with condition:
condition.notify_all()
time.sleep(1)
t1.join()
t2.join()
I use a Queue to provide tasks that threads can work on. After all the work from the Queue is done, I see the threads are still alive, while I expected them to be released. Here is my code. You can see from the console that the number of active threads increases after each batch of tasks. How can I release the threads after a batch of work is done?
import threading
import time
from Queue import Queue
class ThreadWorker(threading.Thread):
def __init__(self, task_queue):
threading.Thread.__init__(self)
self.task_queue = task_queue
def run(self):
while True:
work = self.task_queue.get()
#do some work
# do_work(work)
time.sleep(0.1)
self.task_queue.task_done()
def get_batch_work_done(works):
task_queue = Queue()
for _ in range(5):
t = ThreadWorker(task_queue)
t.setDaemon(True)
t.start()
for work in range(works):
task_queue.put(work)
task_queue.join()
print 'get batch work done'
print 'active threads count is {}'.format(threading.activeCount())
if __name__ == '__main__':
for work_number in range(3):
print 'start with {}'.format(work_number)
get_batch_work_done(work_number)
Do a non-blocking read (a get() with a short timeout) in a loop and use exception handling to terminate:
from Queue import Queue, Empty  # Empty is a module-level exception, not an attribute of the Queue class
def run(self):
    try:
        while True:
            work = self.task_queue.get(True, 0.1)  # wait at most 0.1 s for new work
            #do some work
            # do_work(work)
            self.task_queue.task_done()  # keep task_queue.join() in the caller from blocking forever
    except Empty:
        print "goodbye"
I want to repeat a function at timed intervals. The issue I have is that the function runs another function in a separate process and therefore doesn't seem to be working with my code.
From the example below, I want to repeat function1 every 60 seconds:
from multiprocessing import Process
from threading import Event
def function2(type):
print("Function2")
def function1():
print("Function1")
if __name__ == '__main__':
p = Process(target=function2, args=('type',))
p.daemon = True
p.start()
p.join()
function1()
To repeat the function I attempted to use the following code:
class TimedThread(Thread):
def __init__(self, event, wait_time, tasks):
Thread.__init__(self)
self.stopped = event
self.wait_time = wait_time
self.tasks = tasks
def run(self):
while not self.stopped.wait(0.5):
self.tasks()
stopFlag = Event()
thread = TimedThread(stopFlag, 60, function1)
thread.start()
Both snippets combined print "Function1" in a timed loop but also produce the following error:
AttributeError: Can't get attribute 'function2' on <module '__main__' (built-in)
Any help would be greatly appreciated.
You can wrap your function1, like:
def main():
while True:
time.sleep(60)
function1()
or you can have it run in a separate thread:
def main():
while True:
time.sleep(60)
t = threading.Thread(target=function1)
t.start()
It actually works for me, printing Function1 and Function2 over and over. Are these two snippets in the same file?
If you import function1 from a different module, then the if __name__ == '__main__' check will fail.
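One way to avoid that AttributeError on a spawn-based platform (such as Windows) is to keep the process target in a module the child process can import. A sketch under that assumption (the file name workers.py is made up for this example):
# workers.py  (hypothetical module)
def function2(type):
    print("Function2")
# main script
from multiprocessing import Process
from workers import function2
def function1():
    print("Function1")
if __name__ == '__main__':
    p = Process(target=function2, args=('type',))
    p.daemon = True
    p.start()
    p.join()
    function1()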
I managed to find an alternative, working solution. Instead of using processes, I achieved the desired results using threads. The differences between the two are well explained here.
from threading import Event, Thread
class TimedThread(Thread):
def __init__(self, event, wait_time):
Thread.__init__(self)
self.stopped = event
self.wait_time = wait_time
def run(self):
while not self.stopped.wait(self.wait_time):
self.function1()
def function2(self):
print("Function2 started from thread")
# Do something
def function1(self):
print("Function1 started from thread")
# Do something
temp_thread = Thread(target=self.function2)
temp_thread.start()
stopFlag = Event()
thread = TimedThread(stopFlag, 60)
thread.start()
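To stop the periodic loop later, set the event from the main thread with stopFlag.set(); the wait() call in run() then returns True and the loop exits.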