I am getting a BrokenPipeError when threads that use a multiprocessing.JoinableQueue spawn processes. It seems to happen after the program has finished its work and tries to exit, because it does everything it is supposed to do. What does it mean, is there a way to fix it, and is it safe to ignore?
import requests
import multiprocessing
from multiprocessing import JoinableQueue
from queue import Queue
import threading


class ProcessClass(multiprocessing.Process):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()


class ThreadClass(threading.Thread):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()


def get_urls(host, out_queue):
    r = requests.get(host)
    out_queue.put(r.text)
    print(r.status_code, host)


def get_title(text, out_queue):
    print(text.strip('\r\n ')[:5])


if __name__ == '__main__':
    def test():
        q1 = JoinableQueue()
        q2 = JoinableQueue()

        for i in range(2):
            t = ThreadClass(get_urls, q1, q2)
            t.daemon = True
            t.setDaemon(True)
            t.start()

        for i in range(2):
            t = ProcessClass(get_title, q2, None)
            t.daemon = True
            t.start()

        for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
            q1.put(host)

        q1.join()
        q2.join()

    test()
    print('Finished')
Program output:
200 http://ibm.com
<!DOC
200 http://google.com
<!doc
200 http://yahoo.com
<!DOC
200 http://apple.com
<!DOC
200 http://amazon.com
<!DOC
Finished
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python\33\lib\multiprocessing\connection.py", line 313, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\33\lib\threading.py", line 901, in _bootstrap_inner
self.run()
File "D:\Progs\Uspat\uspat\spider\run\threads_test.py", line 31, in run
arg = self.in_queue.get()
File "C:\Python\33\lib\multiprocessing\queues.py", line 94, in get
res = self._recv()
File "C:\Python\33\lib\multiprocessing\connection.py", line 251, in recv
buf = self._recv_bytes()
File "C:\Python\33\lib\multiprocessing\connection.py", line 322, in _recv_bytes
raise EOFError
EOFError
....
(cut same errors for other threads.)
If I switch the JoinableQueue to a queue.Queue for the multithreading part, everything works, but why?
This is happening because you're leaving the background threads blocking in a multiprocessing.Queue.get call when the main thread exits, but it only happens in certain conditions:
A daemon thread is running and blocking on a multiprocessing.Queue.get when the main thread exits.
A multiprocessing.Process is running.
The multiprocessing context is something other than 'fork'.
The exception is telling you that the other end of the Connection that the multiprocessing.JoinableQueue is listening to from inside its get() call sent an EOF. Generally this means the other side of the Connection has closed. It makes sense that this happens during shutdown - Python is cleaning up all objects prior to exiting the interpreter, and part of that cleanup involves closing all the open Connection objects. What I haven't been able to figure out yet is why it only (and always) happens if a multiprocessing.Process has been spawned (not forked, which is why it doesn't happen on Linux by default) and is still running. I can even reproduce it with a multiprocessing.Process that just sleeps in a while loop and doesn't take any Queue objects at all. For whatever reason, the presence of a running, spawned child process seems to guarantee the exception will be raised. It might simply cause things to be destroyed in just the right order for a race condition to occur, but that's a guess.
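For reference, a minimal sketch of the kind of repro described above (my own illustration, not the original code): a daemon thread left blocking in multiprocessing.Queue.get while an unrelated, spawned child process is still running when the main thread exits.

import multiprocessing
import threading
import time

def sleeper():
    # unrelated spawned child that is still alive at interpreter shutdown
    while True:
        time.sleep(1)

def blocked_getter(q):
    # daemon thread left blocking in a multiprocessing queue's get() at exit
    q.get()

if __name__ == '__main__':
    q = multiprocessing.JoinableQueue()
    threading.Thread(target=blocked_getter, args=(q,), daemon=True).start()
    multiprocessing.Process(target=sleeper, daemon=True).start()
    # The main thread falls off the end here; with the 'spawn' start method
    # (the default on Windows) this typically ends with EOFError /
    # BrokenPipeError raised in the blocked daemon thread during shutdown.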
In any case, using a queue.Queue instead of multiprocessing.JoinableQueue is a good way to fix it, since you don't actually need a multiprocessing.Queue there. You could also make sure that the background threads and/or background processes are shut down before the main thread, by sending sentinels to their queues. So, make both run methods check for the sentinel:
def run(self):
    for arg in iter(self.in_queue.get, None):  # None is the sentinel
        self.func(arg, self.out_queue)
        self.in_queue.task_done()
    self.in_queue.task_done()  # account for the sentinel itself
And then send the sentinels when you're done:
threads = []
for i in range(2):
    t = ThreadClass(get_urls, q1, q2)
    t.daemon = True
    t.start()
    threads.append(t)

procs = []
for i in range(2):
    t = ProcessClass(get_title, q2, None)
    t.daemon = True
    t.start()
    procs.append(t)

for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
    q1.put(host)

q1.join()
# All items have been consumed from the input queue, let's start shutting down.
for t in procs:
    q2.put(None)
    t.join()
for t in threads:
    q1.put(None)
    t.join()
q2.join()
Related
When using a multiprocessing queue to communicate between processes, many articles recommend sending a terminate message to the queue.
However, if a child process is the producer, it may fail unexpectedly, leaving the consumer without any notification that no more messages are coming.
However, the parent process can be notified when a child process dies.
It seems it should be possible for it to notify a worker thread in this process to quit and not expect more messages. But how?
multiprocessing.Queue.close()
...doesn't notify consumers (Really? Wait? what!)
def onProcessQuit():  # Notify worker that we are done.
    messageQ.put("TERMINATE")
... doesn't let me wait for pending work to complete.
def onProcessQuit():  # Notify worker that we are done.
    messageQ.put("TERMINATE")
    # messageQ.close()
    messageQ.join_thread()  # Wait for worker to complete
... fails because the queue is not yet closed.
def onProcessQuit():  # Notify worker that we are done.
    messageQ.put("TERMINATE")
    messageQ.close()
    messageQ.join_thread()  # Wait for worker to complete
... seems like it should work, but fails in the worker with a TypeError exception:
msg = messageQ.get()
File "/usr/lib/python3.7/multiprocessing/queues.py", line 94, in get
res = self._recv_bytes()
File "/usr/lib/python3.7/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 411, in _recv_bytes
return self._recv(size)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
TypeError: an integer is required (got type NoneType)
while not quit:
    try:
        msg = messageQ.get(block=True, timeout=0.5)
    except Empty:
        continue
... is terrible in that it needlessly forces a trade-off between shutdown latency and CPU use from polling.
Full example
import multiprocessing
import threading

def producer(messageQ):
    messageQ.put("1")
    messageQ.put("2")
    messageQ.put("3")

if __name__ == '__main__':
    messageQ = multiprocessing.Queue()

    def worker():
        try:
            while True:
                msg = messageQ.get()
                print(msg)
                if msg == "TERMINATE": return
                # messageQ.task_done()
        finally:
            print("Worker quit")
            # messageQ.close()        # End thread
            # messageQ.join_thread()

    thr = threading.Thread(target=worker,
                           daemon=False)  # The work queue is precious.
    thr.start()

    def onProcessQuit():  # Notify worker that we are done.
        messageQ.put("TERMINATE")  # Notify worker we are done
        messageQ.close()           # No more messages
        messageQ.join_thread()     # Wait for worker to complete

    def runProcess():
        proc = multiprocessing.Process(target=producer, args=(messageQ,))
        proc.start()
        proc.join()
        print("runProcess quitting ...")
        onProcessQuit()
        print("runProcess quitting .. OK")

    runProcess()
If you are concerned about the producer process not completing normally, then I am not sure what your question is because your code as is should work except for a few corrections: (1) it is missing an import statement, (2) there is no call to runProcess and (3) your worker thread is incorrectly a daemon thread (as such it may end up terminating before it has had a chance to process all the messages on the queue).
As a personal preference (and not a correction), I would also use None as the special sentinel message instead of TERMINATE, and remove some extraneous queue calls that you don't really need (I don't see explicitly closing the queue accomplishing anything necessary).
These are the changes:
def producer(messageQ):
    messageQ.put("1")
    messageQ.put("2")
    messageQ.put("3")

if __name__ == '__main__':
    import multiprocessing
    import threading

    SENTINEL = None

    def worker():
        try:
            while True:
                msg = messageQ.get()
                if msg is SENTINEL:
                    return  # No need to print the sentinel
                print(msg)
        finally:
            print("Worker quit")

    def onProcessQuit():  # Notify worker that we are done.
        messageQ.put(SENTINEL)  # Notify worker we are done

    def runProcess():
        proc = multiprocessing.Process(target=producer, args=(messageQ,))
        proc.start()
        proc.join()
        print("runProcess quitting ...")
        onProcessQuit()
        print("runProcess quitting .. OK")
        thr.join()

    messageQ = multiprocessing.Queue()
    thr = threading.Thread(target=worker)  # The work queue is precious.
    thr.start()

    runProcess()
Prints:
1
2
3
runProcess quitting ...
runProcess quitting .. OK
Worker quit
I'm trying to use a cluster of computers to run millions of small simulations. To do this I tried to set up two "servers" on my main computer: one that exposes a queue of input variables to the network, and one that takes care of the results.
This is the code for putting stuff into the simulation variables queue:
"""This script reads start parameters and calls on run_sim to run the
simulations"""
import time
from multiprocessing import Process, freeze_support, Manager, Value, Queue, current_process
from multiprocessing.managers import BaseManager
class QueueManager(BaseManager):
pass
class MultiComputers(Process):
def __init__(self, sim_name, queue):
self.sim_name = sim_name
self.queue = queue
super(MultiComputers, self).__init__()
def get_sim_obj(self, offset, db):
"""returns a list of lists from a database query"""
def handle_queue(self):
self.sim_nr = 0
sims = self.get_sim_obj()
self.total = len(sims)
while len(sims) > 0:
if self.queue.qsize() > 100:
self.queue.put(sims[0])
self.sim_nr += 1
print(self.sim_nr, round(self.sim_nr/self.total * 100, 2), self.queue.qsize())
del sims[0]
def run(self):
self.handle_queue()
if __name__ == '__main__':
freeze_support()
queue = Queue()
w = MultiComputers('seed_1_hundred', queue)
w.start()
QueueManager.register('get_queue', callable=lambda: queue)
m = QueueManager(address=('', 8001), authkey=b'abracadabra')
s = m.get_server()
s.serve_forever()
And then this script is run to take care of the results of the simulations:
__author__ = 'axa'
from multiprocessing import Process, freeze_support, Queue
from multiprocessing.managers import BaseManager
import time


class QueueManager(BaseManager):
    pass


class SaveFromMultiComp(Process):
    def __init__(self, sim_name, queue):
        self.sim_name = sim_name
        self.queue = queue
        super(SaveFromMultiComp, self).__init__()

    def run(self):
        res_got = 0
        with open('sim_type1_' + self.sim_name, 'a') as f_1:
            with open('sim_type2_' + self.sim_name, 'a') as f_2:
                while True:
                    if self.queue.qsize() > 0:
                        while self.queue.qsize() > 0:
                            res = self.queue.get()
                            res_got += 1
                            if res[0] == 1:
                                f_1.write(str(res[1]) + '\n')
                            elif res[0] == 2:
                                f_2.write(str(res[1]) + '\n')
                        print(res_got)
                    time.sleep(0.5)


if __name__ == '__main__':
    queue = Queue()
    w = SaveFromMultiComp('seed_1_hundred', queue)
    w.start()
    m = QueueManager(address=('', 8002), authkey=b'abracadabra')
    s = m.get_server()
    s.serve_forever()
These scripts work as expected for the first ~700-800 simulations; after that I get the following error in the terminal running the result-receiving script:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Python35\lib\multiprocessing\managers.py", line 177, in accepter
t.start()
File "C:\Python35\lib\threading.py", line 844, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Can anyone give some insight into where and how the threads are spawned? Is a new thread spawned every time I call queue.get(), or how does it work?
And I would be very glad if someone knows what I can do to avoid this failure. (I'm running the script with Python 3.5, 32-bit.)
All signs point to your system running out of the resources it needs to launch a thread (probably memory, but you could be leaking threads or other resources). You could use OS monitoring tools (top for Linux, Resource Monitor for Windows) to look at the number of threads and memory usage to track this down, but I would recommend you just use an easier, more efficient programming pattern.
While not a perfect comparison, you are essentially hitting the C10K problem: blocking threads that wait for results do not scale well and are prone to errors like this. The solution is to use async IO patterns (one blocking thread that launches other workers), and this is pretty straightforward to do with web servers.
A framework like Python's aiohttp should be a good fit for what you want. You just need a handler that can receive the ID of the remote job and its result. The framework should hopefully take care of the scaling for you.
So in your case you can keep your launching code, but after it starts the process on the remote machine, kill the thread. Have the remote code then send an HTTP message to your server with (1) its ID and (2) its result. Throw in a little extra code to have it retry if it does not get a 200 OK status code, and you should be in much better shape.
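As a rough sketch of that idea (my illustration, not part of the original setup; the route, port, filename and payload fields are assumptions), an aiohttp endpoint the remote workers could POST their simulation ID and result to:

from aiohttp import web

async def handle_result(request):
    data = await request.json()               # e.g. {"sim_id": 42, "result": "..."}
    with open('sim_results.txt', 'a') as f:   # append the result keyed by its ID
        f.write('{} {}\n'.format(data['sim_id'], data['result']))
    return web.Response(text='OK')            # the worker retries unless it sees 200

app = web.Application()
app.router.add_post('/result', handle_result)

if __name__ == '__main__':
    web.run_app(app, port=8002)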
I think you have too many threads running for your system. I would first check your system resources and then rethink the program.
Try limiting your threads and use as few as possible.
Running the following minimized and reproducible code example, Python (e.g. 3.7.3 and 3.8.3) will emit the message below when Ctrl+C is first pressed, rather than terminate the program.
Traceback (most recent call last):
File "main.py", line 44, in <module>
Main()
File "main.py", line 41, in __init__
self.interaction_manager.join()
File "/home/user/anaconda3/lib/python3.7/threading.py", line 1032, in join
self._wait_for_tstate_lock()
File "/home/user/anaconda3/lib/python3.7/threading.py", line 1048, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
Only on a second Ctrl+C being pressed after that, the program will terminate.
What is the rationale behind this design? What would be an elegant way for avoiding the need for more than a single Ctrl+C or underlying signal?
Here's the code:
from threading import Thread
from queue import Queue, Empty


def get_event(queue: Queue, block=True, timeout=None):
    """ just a convenience wrapper for avoiding try-except clutter in code """
    try:
        element = queue.get(block, timeout)
    except Empty:
        element = Empty
    return element


class InteractionManager(Thread):
    def __init__(self):
        super().__init__()
        self.queue = Queue()

    def run(self):
        while True:
            event = get_event(self.queue, block=True, timeout=0.1)


class Main(object):
    def __init__(self):
        # kick off the user interaction
        self.interaction_manager = InteractionManager()
        self.interaction_manager.start()
        # wait for the interaction manager object shutdown as a signal to shutdown
        self.interaction_manager.join()


if __name__ == "__main__":
    Main()
Prehistoric related question: Interruptible thread join in Python
Python waits for all non-daemon threads before exiting. The first Ctrl+C merely kills the explicit self.interaction_manager.join(), the second Ctrl+C kills the internal join() of threading. Either declare the thread as an expendable daemon thread, or signal it to shut down.
A thread can be declared as expendable by setting daemon=True, either as a keyword or attribute:
class InteractionManager(Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.queue = Queue()

    def run(self):
        while True:
            event = get_event(self.queue, block=True, timeout=0.1)
A daemon thread is killed abruptly, and may fail to cleanly release resources if it holds any.
Graceful shutdown can be coordinated using a shared flag, such as threading.Event or a boolean value:
shutdown = False

class InteractionManager(Thread):
    def __init__(self):
        super().__init__()
        self.queue = Queue()

    def run(self):
        while not shutdown:
            event = get_event(self.queue, block=True, timeout=0.1)

def main():
    interaction_manager = InteractionManager()
    interaction_manager.start()
    try:
        interaction_manager.join()
    finally:
        global shutdown
        shutdown = True
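The same shutdown can also be expressed with threading.Event, which was mentioned above as an alternative to the bare boolean; a minimal sketch reusing the get_event helper from the question:

from threading import Thread, Event
from queue import Queue

shutdown = Event()  # shared flag; set once to request shutdown

class InteractionManager(Thread):
    def __init__(self):
        super().__init__()
        self.queue = Queue()

    def run(self):
        while not shutdown.is_set():
            event = get_event(self.queue, block=True, timeout=0.1)

def main():
    interaction_manager = InteractionManager()
    interaction_manager.start()
    try:
        interaction_manager.join()
    finally:
        shutdown.set()  # unblocks the worker loop on Ctrl+C or normal exit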
An exception is raised in threading._wait_for_tstate_lock when I transfer huge data between a Process and a Thread via a multiprocessing.Queue.
My minimal working example looks a bit complex at first - sorry, I will explain. The original application loads a lot of (not so important) files into RAM. This is done in a separate process to save resources. The main GUI thread shouldn't freeze.
The GUI starts a separate Thread to prevent the GUI event loop from freezing.
This separate Thread then starts one Process which should do the work.
a) This Thread instantiates a multiprocessing.Queue (be aware that this is multiprocessing and not threading!)
b) This is given to the Process for sharing data from the Process back to the Thread.
The Process does some work (3 steps) and .put()s the result into the multiprocessing.Queue.
When the Process ends, the Thread takes over again, collects the data from the Queue, and stores it in its own attribute MyThread.result.
The Thread tells the GUI main loop/thread to call a callback function when it has time for it.
The callback function (MyWindow::callback_thread_finished()) gets the results from MyWindow.thread.result.
The problem is: if the data put into the Queue is too big, something happens that I don't understand - the MyThread never ends. I have to cancel the application via Ctrl+C.
I got some hints from the docs, but my problem is that I did not fully understand the documentation. I have the feeling that the key to my problem can be found there.
Please see the two red boxes in "Pipes and Queues" (Python 3.5 docs).
This is the full output:
MyWindow::do_start()
Running MyThread...
Running MyProcess...
MyProcess stoppd.
^CProcess MyProcess-1:
Exception ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown
t.join()
File "/usr/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 252, in _bootstrap
util._exit_function()
File "/usr/lib/python3.5/multiprocessing/util.py", line 314, in _exit_function
_run_finalizers()
File "/usr/lib/python3.5/multiprocessing/util.py", line 254, in _run_finalizers
finalizer()
File "/usr/lib/python3.5/multiprocessing/util.py", line 186, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.5/multiprocessing/queues.py", line 198, in _finalize_join
thread.join()
File "/usr/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
This is the minimal working example
#!/usr/bin/env python3
import multiprocessing
import threading
import time

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
from gi.repository import GLib


class MyThread (threading.Thread):
    """This thread just starts the process."""

    def __init__(self, callback):
        threading.Thread.__init__(self)
        self._callback = callback

    def run(self):
        print('Running MyThread...')
        self.result = []

        queue = multiprocessing.Queue()
        process = MyProcess(queue)
        process.start()
        process.join()

        while not queue.empty():
            process_result = queue.get()
            self.result.append(process_result)

        print('MyThread stoppd.')
        GLib.idle_add(self._callback)


class MyProcess (multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        print('Running MyProcess...')
        for i in range(3):
            self.queue.put((i, 'x'*102048))
        print('MyProcess stoppd.')


class MyWindow (Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self)
        self.connect('destroy', Gtk.main_quit)
        GLib.timeout_add(2000, self.do_start)

    def do_start(self):
        print('MyWindow::do_start()')
        # The process need to be started from a separate thread
        # to prevent the main thread (which is the gui main loop)
        # from freezing while waiting for the process result.
        self.thread = MyThread(self.callback_thread_finished)
        self.thread.start()

    def callback_thread_finished(self):
        result = self.thread.result
        for r in result:
            print('{} {}...'.format(r[0], r[1][:10]))


if __name__ == '__main__':
    win = MyWindow()
    win.show_all()
    Gtk.main()
Possible duplicate but quite different and IMO without an answer for my situation: Thread._wait_for_tstate_lock() never returns.
Workaround
Using a Manager by modifying line 22 to queue = multiprocessing.Manager().Queue() solves the problem. But I don't know why. My intention with this question is to understand the things behind it, not only to make my code work. I don't really know what a Manager() is and whether it has other (problem-causing) implications.
According to the second warning box in the documentation you are linking to, you can get a deadlock when you join a process before processing all items in the queue. So starting the process, immediately joining it, and only then processing the items in the queue is the wrong order of steps. You have to start the process, then receive the items, and only when all items are received can you call the join method. Define some sentinel value to signal that the process is finished sending data through the queue - None, for example, if that can't be a regular value you expect from the process.
class MyThread(threading.Thread):
    """This thread just starts the process."""

    def __init__(self, callback):
        threading.Thread.__init__(self)
        self._callback = callback
        self.result = []

    def run(self):
        print('Running MyThread...')
        queue = multiprocessing.Queue()
        process = MyProcess(queue)
        process.start()
        while True:
            process_result = queue.get()
            if process_result is None:
                break
            self.result.append(process_result)
        process.join()
        print('MyThread stoppd.')
        GLib.idle_add(self._callback)


class MyProcess(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        print('Running MyProcess...')
        for i in range(3):
            self.queue.put((i, 'x' * 102048))
        self.queue.put(None)
        print('MyProcess stoppd.')
The documentation in question reads:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
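For completeness, a minimal sketch (my illustration, not the original code) of the Manager-based workaround from the question: a managed queue is only a proxy to a queue living in the manager's own process, so the producer process has no background feeder thread that its shutdown would have to flush and join.

import multiprocessing

def produce(q):
    for i in range(3):
        q.put((i, 'x' * 102048))  # put() is a call into the manager process; no local feeder thread
    q.put(None)                   # a sentinel is still good practice

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    queue = manager.Queue()       # proxy object; the data lives in the manager process

    process = multiprocessing.Process(target=produce, args=(queue,))
    process.start()
    process.join()                # safe even before draining the queue

    while True:
        item = queue.get()
        if item is None:
            break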
This is supplementary to the accepted answer, but the edit queue is full.
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
import threading
import time

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    threads.append(t)
Start this up and then ^C as it is running. It will hang and never exit.
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
You can use the signal module to set a flag that tells the main thread to stop processing:
import threading
import time
import signal
import sys

sigint = False

def sighandler(num, frame):
    global sigint
    sigint = True

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    if sigint:
        sys.exit()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    t.join()
    threads.append(t)
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
t = ...
t.setDaemon(True)
t.start()
In this case, it looks like you might just want to use a thread pool to control the starting and stopping of your threads. You could use Chris Arndt's threadpool library in a manner something like this:
pool = ThreadPool(5)

try:
    # enqueue 100 worker threads
    pool.wait()
except KeyboardInterrupt, k:
    pool.dismiss(5)
    # the program will exit after all running threads are complete
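On Python 3 a similar pattern is available in the standard library; a rough equivalent sketch (my own, not part of the original answer) using concurrent.futures:

import time
from concurrent.futures import ThreadPoolExecutor

def worker(i):
    time.sleep(2)
    print(i, "finished")

pool = ThreadPoolExecutor(max_workers=5)   # at most 5 worker threads at once
try:
    futures = [pool.submit(worker, x) for x in range(100)]
    for f in futures:
        f.result()                         # wait for all work to complete
except KeyboardInterrupt:
    # cancel_futures (Python 3.9+) drops the queued tasks; already running
    # workers are still allowed to finish before the program exits
    pool.shutdown(wait=True, cancel_futures=True)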
This is bug #11714, and has been patched in newer versions of Python.
If you are using an older Python, you could copy the version of Semaphore found in that patch into your project and use it instead of relying on the buggy version in threading.
# importing modules
import threading
import time

# defining our worker; it gets a counter and the semaphore
def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    # releasing the semaphore increments its value
    sema.release()

# creating the semaphore object
sema = threading.BoundedSemaphore(value=5)
# a list to store the created threads
threads = []

for x in xrange(100):
    try:
        sema.acquire()
        t = threading.Thread(target=worker, args=(x, sema))
        t.start()
        threads.append(t)
    # exit once the user hits CTRL+C
    # or you can make the thread a daemon with t.setDaemon(True)
    except KeyboardInterrupt:
        exit()