I would like my program to exit as soon as I press Ctrl+C:
import multiprocessing
import os
import time
def sqr(a):
    time.sleep(0.2)
    print 'local {}'.format(os.getpid())
    #raise Exception()
    return a * a
pool = multiprocessing.Pool(processes=4)
try:
r = [pool.apply_async(sqr, (x,)) for x in range(100)]
pool.close()
pool.join()
except:
print 121312313
pool.terminate()
pool.join()
print 'main {}'.format(os.getpid())
This code doesn't work as intended: the program does not quit when I press Ctrl+C. Instead, it prints a few KeyboardInterrupt tracebacks each time and just gets stuck forever.
Also, I would like it to exit ASAP if I uncomment #raise ... in sqr. The solutions in Exception thrown in multiprocessing Pool not detected do not seem to be helpful.
Update
I think I finally ended up with this: (let me know if it is wrong)
def sqr(a):
    time.sleep(0.2)
    print 'local {}'.format(os.getpid())
    if a == 20:
        raise Exception('fff')
    return a * a

pool = Pool(processes=4)
r = [pool.apply_async(sqr, (x,)) for x in range(100)]
pool.close()

# Without timeout, cannot respond to KeyboardInterrupt.
# Also need get to raise the exceptions workers may throw.
for item in r:
    item.get(timeout=999999)

# I don't think I need join since I already get everything.
pool.join()

print 'main {}'.format(os.getpid())
This is because of a Python 2.x bug that makes the call to pool.join() uninterruptible. It works fine in Python 3.x. Normally the workaround is to pass a really large timeout to join, but multiprocessing.Pool.join doesn't take a timeout parameter, so that approach isn't available here. Instead, you'll need to wait for each individual task in the pool to complete, and pass the timeout to the wait() method:
import multiprocessing
import time
import os
Pool = multiprocessing.Pool
def sqr(a):
    time.sleep(0.2)
    print('local {}'.format(os.getpid()))
    #raise Exception()
    return a * a

pool = Pool(processes=4)
try:
    r = [pool.apply_async(sqr, (x,)) for x in range(100)]
    pool.close()
    for item in r:
        item.wait(timeout=9999999)  # Without a timeout, you can't interrupt this.
except KeyboardInterrupt:
    pool.terminate()
finally:
    pool.join()

print('main {}'.format(os.getpid()))
This can be interrupted on both Python 2 and 3.
Related
I have a script that is essentially an API scraper; it runs perpetually. I strapped a map_async pool to it and it's glorious, but the pool was hiding some errors, which I learned is pretty common. So I incorporated this wrapped helper function.
helper.py
import functools
import traceback

def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
    return wrapped_func
My main script looks like
scraper.py
import multiprocessing as mp
from helper import trace_unhandled_exceptions
start_block = 100
end_block = 50000
@trace_unhandled_exceptions
def main(block_num):
    block = blah_blah(block_num)
    return block

if __name__ == "__main__":
    cpus = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(cpus)
    pool.map_async(main, range(start_block - 20, end_block), chunksize=cpus)
    pool.close()
    pool.join()
This works great; I'm now seeing the exception reported:
Exception in main
Traceback (most recent call last):
.....
How can I get the script to end on an exception? I've tried incorporating os._exit or sys.exit into the helper function like this:
def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
            os._exit(1)
    return wrapped_func
But I believe it's only terminating the child process and not the entire script. Any advice?
I don't think you need that trace_unhandled_exceptions decorator to do what you want, at least not if you use pool.apply_async() instead of pool.map_async(), because then you can use the error_callback= option it supports to be notified whenever the target function fails. Note that map_async() also supports something similar, but its callback isn't called until the entire iterable has been consumed, so it would not be suitable for what you're wanting to do.
I got the idea for this approach from @Tim Peters' answer to a similar question titled Multiprocessing Pool - how to cancel all running processes if one returns the desired result?
import multiprocessing as mp
import random
import time
START_BLOCK = 100
END_BLOCK = 1000
def blah_blah(block_num):
    if block_num % 10 == 0:
        print(f'Processing block {block_num}')
    time.sleep(random.uniform(.01, .1))
    return block_num

def main(block_num):
    if random.randint(0, 100) == 42:
        print('Raising random exception')
        raise RuntimeError('RANDOM TEST EXCEPTION')
    block = blah_blah(block_num)
    return block

def error_handler(exception):
    print(f'{exception} occurred, terminating pool.')
    pool.terminate()

if __name__ == "__main__":
    processes = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(processes)
    for i in range(START_BLOCK - 20, END_BLOCK):
        pool.apply_async(main, (i,), error_callback=error_handler)
    pool.close()
    pool.join()
    print('-fini-')
I am not sure what you mean by the pool hiding errors. My experience is that when a worker function (i.e. the target of a Pool method) raises an uncaught exception, it doesn't go unnoticed. Anyway,...
I would suggest that:
You do not use your trace_unhandled_exceptions decorator and instead allow your worker function, main, to raise an exception, and
Instead of using method map_async (why that instead of map?), you use method imap, which allows you to iterate over the individual return values, and any exception main may have raised, as they become available. As soon as you detect an exception you can call multiprocessing.Pool.terminate() to (1) cancel any tasks that have been submitted but not yet started and (2) stop any tasks that are running but not yet completed. As an aside, even if you don't call terminate, the processing pool flushes the input task queue once an uncaught exception occurs in a submitted task.
Once the main process detects the exception, it can, of course, call sys.exit() after cleaning up the pool.
import multiprocessing as mp
start_block = 100
end_block = 50000
def main(block_num):
    if block_num == 1000:
        raise ValueError("I don't like 1000.")
    return block_num * block_num

if __name__ == "__main__":
    cpus = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(cpus)
    it = pool.imap(main, range(start_block - 20, end_block), chunksize=cpus)
    results = []
    while True:
        try:
            result = next(it)
        except StopIteration:
            break
        except Exception as e:
            print(e)
            # Kill remaining tasks
            pool.terminate()
            break
        else:
            results.append(result)
    pool.close()
    pool.join()
Prints:
I don't like 1000.
Alternatively, you can keep your decorator function, but modify it to return the Exception instance it caught (currently, it implicitly returns None). Then you can modify the while True loop as follows:
while True:
    try:
        result = next(it)
    except StopIteration:
        break
    else:
        if isinstance(result, Exception):
            pool.terminate()
            break
        results.append(result)
Since no actual exception has been raised, the call to terminate becomes absolutely essential if you want to continue execution without allowing the remaining submitted tasks to run. Even if you just want to exit immediately, it is still a good idea to terminate and clean up the pool to ensure that nothing hangs when you do call exit.
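For reference, a minimal sketch of that modified decorator (it is the decorator from the question with one change: the caught exception instance is returned instead of being swallowed):

import functools
import traceback

def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
            return e  # returned to the caller via imap instead of implicitly returning None
    return wrapped_func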
I was trying to do some multiprocessing.
import multiprocessing
def func(foo, q):
    for i in foo:
        # do something
        q.put(something)
    q.close()

q = multiprocessing.Queue()
p = multiprocessing.Process(target=func, args=(foo, q))
p.start()
while True:
    try:
        q.get()
    except ValueError:
        break
This code then goes into an infinite loop.
I know that there are workarounds; in fact, I have already implemented one. I just want to know why the queue doesn't raise ValueError like it's supposed to, according to the docs.
Just to clarify, my understanding is that the queue will raise the error as long as it is closed and you call .get on it.
I've seen people suggest setting timeout to 1 and breaking when the queue is empty, but:
def func(q):
    for i in range(10):
        time.sleep(10)
        q.put(i)
A worker like this will cause the code to exit prematurely if we break as soon as the queue is empty, since the worker is still sleeping between puts.
First of all, the underlying exception here comes from the standard queue module, so the exception to handle is queue.Empty; second, use break to stop the while loop:
q = multiprocessing.Queue()
p = multiprocessing.Process(target=func, args=(range(5), q))
p.start()
while True:
    try:
        q.get(timeout=1)
    except queue.Empty:
        print("Empty")
        break
    except ValueError:
        break
As the docs say, if block is true and timeout is None, this operation goes into an uninterruptible wait on an underlying lock. From the docs:
If timeout is a positive number, it blocks at most timeout seconds and raises the Queue.Empty exception if no item was available within that time.
And while True is an infinite loop if you just give queue.Empty a pass instead of breaking.
import multiprocessing
import queue
def func(a, b):
    print('func acquired {}\t{}'.format(a, b))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=func, args=(range(5), q))
p.start()
while True:
    try:
        q.get(timeout=1)
    except queue.Empty:
        print("Empty")
        break
    except ValueError:
        break
print('termination')
Output:
func acquired range(0, 5) <multiprocessing.queues.Queue object at 0x7f42d0989710>
Empty
termination
I figured it out!
Closing a queue only affects the process that calls close(): it flushes whatever that process has already put on the queue and indicates that it will put no more data. Other processes are not affected; they can still put items into the queue even though it has been closed in the current process.
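A small sketch that illustrates this (a hypothetical example; the point is that close() in the child does not close the parent's copy of the queue):

import multiprocessing

def child(q):
    q.put('from child')
    q.close()  # only marks the queue as closed in the child process

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child, args=(q,))
    p.start()
    # The parent's copy of the queue is unaffected by the child's close(),
    # so this get() returns the item instead of raising ValueError.
    print(q.get())
    p.join()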
I am trying to run, pause and terminate the child processes in Python from the parent process. I have tried to use multiprocessing.Value but for some reason the parent process never finishes completely although I terminate and join all the processes. My use case is something like:
def child(flow_flag):
    while True:
        with flow_flag.get_lock():
            flag_value = flow_flag.value
        if flag_value == 0:
            print("This is doing some work")
        elif flag_value == 1:
            print("This is waiting for some time to check back later")
            time.sleep(5)
        else:
            print("Time to exit")
            break
def main():
    flow_flag = Value('i', 0)
    processes = [Process(target=child, args=(flow_flag,)) for i in range(10)]
    [p.start() for p in processes]

    print("Waiting for some work")
    with flow_flag.get_lock():
        flow_flag.value = 1

    print("Do something else")
    with flow_flag.get_lock():
        flow_flag.value = 0

    print("Waiting for more work")
    with flow_flag.get_lock():
        flow_flag.value = 2

    print("Exiting")
    for p in processes:
        p.terminate()
        p.join()
This never finishes properly and I have to Ctrl+C eventually. Then I see this message:
Traceback (most recent call last):
  File "/home/abcde/anaconda3/lib/python3.7/threading.py", line 1308, in _shutdown
    lock.acquire()
KeyboardInterrupt
What is a better way? FYI, while waiting for something else, I am spawning some other processes. I also had them not terminating properly, and I was using Value with them too. It got fixed when I switched to using Queue for them. However, Queue does not seem to be appropriate for the case above.
P.S. : I am ssh'ing into Ubuntu 18.04.
EDIT: After a lot of debugging, not exiting turned out to be because of a library I am using that I did not suspect to cause this. My apologies for false alarm. Thanks for the suggestions on the better way of controlling the child processes.
Your program works for me, but let me chime in on "is there another way". Instead of polling at 5 second intervals you could create a shared event object that lets the child processes know when they can do their work. Instead of polling for Value 1, wait for the event.
from multiprocessing import *
import time
import os
def child(event, times_up):
    while True:
        event.wait()
        if times_up.value:
            print(os.getpid(), "time to exit")
            return
        print(os.getpid(), "doing work")
        time.sleep(.5)

def main():
    manager = Manager()
    event = manager.Event()
    times_up = manager.Value(bool, False)
    processes = [Process(target=child, args=(event, times_up)) for i in range(10)]
    [p.start() for p in processes]

    print("Let processes work")
    event.set()
    time.sleep(2)

    print("Make them stop")
    event.clear()
    time.sleep(4)

    print("Make them go away")
    times_up.value = True
    event.set()

    print("Exiting")
    for p in processes:
        p.join()

if __name__ == "__main__":
    main()
With Python 3.7.7 running on FreeBSD 12.1 (64-bit) I cannot reproduce your problem.
After fixing the indentation and adding the necessary imports, the changed program runs fine AFAICT.
BTW, you might want to import sys and add
sys.stdout.reconfigure(line_buffering=True)
to the beginning of your main().
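For example, a trivial placement sketch (main() here just stands in for your actual main function; reconfigure() requires Python 3.7+):

import sys

def main():
    # Line-buffer stdout so each print is flushed promptly, which helps when
    # watching output over ssh.
    sys.stdout.reconfigure(line_buffering=True)
    print("output is now line-buffered")

if __name__ == "__main__":
    main()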
I've read a lot of questions on SO and elsewhere on this topic but can't get it working. Perhaps it's because I'm using Windows, I don't know.
What I'm trying to do is download a bunch of files (whose URLs are read from a CSV file) in parallel. I've tried using multiprocessing and concurrent.futures for this with no success.
The main problem is that I can't stop the program on Ctrl-C - it just keeps running. This is especially bad in the case of processes instead of threads (I used multiprocessing for that) because I have to kill each process manually every time.
Here is my current code:
import concurrent.futures
import signal
import sys
import urllib.request
class Download(object):
    def __init__(self, url, filename):
        self.url = url
        self.filename = filename

def perform_download(download):
    print('Downloading {} to {}'.format(download.url, download.filename))
    return urllib.request.urlretrieve(download.url, filename=download.filename)

def main(argv):
    args = parse_args(argv)

    queue = []
    with open(args.results_file, 'r', encoding='utf8') as results_file:
        # Irrelevant CSV parsing...
        queue.append(Download(url, filename))

    def handle_interrupt():
        print('CAUGHT SIGINT!!!!!!!!!!!!!!!!!!!11111111')
        sys.exit(1)

    signal.signal(signal.SIGINT, handle_interrupt)

    with concurrent.futures.ThreadPoolExecutor(max_workers=args.num_jobs) as executor:
        futures = {executor.submit(perform_download, d): d for d in queue}
        try:
            concurrent.futures.wait(futures)
        except KeyboardInterrupt:
            print('Interrupted')
            sys.exit(1)
I'm trying to catch Ctrl-C in two different ways here but none of them works. The latter one (except KeyboardInterrupt) actually gets run but the process won't exit after calling sys.exit.
Before this I used the multiprocessing module like this:
try:
    pool = multiprocessing.Pool(processes=args.num_jobs)
    pool.map_async(perform_download, queue).get(1000000)
except Exception as e:
    pool.close()
    pool.terminate()
    sys.exit(0)
So what is the proper way to add ability to terminate all worker threads or processes once you hit Ctrl-C in the terminal?
System information:
Python version: 3.6.1 32-bit
OS: Windows 10
You are catching the SIGINT signal in a signal handler and re-routing it as a SystemExit exception. This prevents the KeyboardInterrupt exception from ever reaching your main loop.
Moreover, if the SystemExit is not raised in the main thread, it will just kill the child thread where it is raised.
Jesse Noller, the author of the multiprocessing library, explains how to deal with CTRL+C in an old blog post.
import signal
from multiprocessing import Pool
def initializer():
    """Ignore CTRL+C in the worker process."""
    signal.signal(SIGINT, SIG_IGN)

pool = Pool(initializer=initializer)

try:
    pool.map(perform_download, downloads)
except KeyboardInterrupt:
    pool.terminate()
    pool.join()
I don't believe the accepted answer works under Windows, certainly not under current versions of Python (I am running 3.8.5). In fact, it won't run at all since SIGINT and SIG_IGN will be undefined (what is needed is signal.SIGINT and signal.SIG_IGN).
This is a known problem under Windows. The solution I have come up with is essentially the reverse of the accepted solution: the main process ignores keyboard interrupts, and the pool initializer sets a global flag ctrl_c_entered to False and installs a SIGINT handler that sets the flag to True when Ctrl-C is entered. Any multiprocessing worker function (or method) is then decorated with a special decorator, handle_ctrl_c, that first tests the ctrl_c_entered flag; only if it is False does it run the worker function, after re-enabling keyboard interrupts and establishing a try/except handler for KeyboardInterrupt. If the ctrl_c_entered flag was True, or if a keyboard interrupt occurs during the execution of the worker function, the value returned is an instance of KeyboardInterrupt, which the main process can check to determine whether a Ctrl-C was entered.
Thus all submitted tasks will be allowed to start but will immediately terminate with a return value of a KeyboardInterrupt exception, and the actual worker function will never be called by the decorator once a Ctrl-C has been entered.
import signal
from multiprocessing import Pool
from functools import wraps
import time
def handle_ctrl_c(func):
    """
    Decorator function.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        global ctrl_c_entered
        if not ctrl_c_entered:
            # re-enable keyboard interrupts:
            signal.signal(signal.SIGINT, default_sigint_handler)
            try:
                return func(*args, **kwargs)
            except KeyboardInterrupt:
                ctrl_c_entered = True
                return KeyboardInterrupt()
            finally:
                signal.signal(signal.SIGINT, pool_ctrl_c_handler)
        else:
            return KeyboardInterrupt()
    return wrapper

def pool_ctrl_c_handler(*args, **kwargs):
    global ctrl_c_entered
    ctrl_c_entered = True

def init_pool():
    # set global variables for each process in the pool:
    global ctrl_c_entered
    global default_sigint_handler
    ctrl_c_entered = False
    default_sigint_handler = signal.signal(signal.SIGINT, pool_ctrl_c_handler)

@handle_ctrl_c
def perform_download(download):
    print('begin')
    time.sleep(2)
    print('end')
    return True

if __name__ == '__main__':
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    pool = Pool(initializer=init_pool)
    results = pool.map(perform_download, range(20))
    if any(map(lambda x: isinstance(x, KeyboardInterrupt), results)):
        print('Ctrl-C was entered.')
    print(results)
I have a script that takes a text file as input and performs the testing. What I want to do is create two threads, divide the input text file into two parts, and run them so as to minimize the execution time. Is there a way I can do this?
Thanks
import subprocess
import threading
import time

THREAD_COUNT = 2   # split the input file into two parts, per the question
input_list = []

class myThread(threading.Thread):
    def __init__(self, ip_list):
        threading.Thread.__init__(self)
        self.input_list = ip_list

    def run(self):
        # Get lock to synchronize threads
        threadLock.acquire()
        print "python Audit.py " + (",".join(x for x in self.input_list))
        p = subprocess.Popen("python Audit.py " + (",".join(x for x in self.input_list)), shell=True)
        # Free lock to release next thread
        threadLock.release()
        while p.poll() is None:
            print('Test Execution in Progress ....')
            time.sleep(60)
        print('Not sleeping any longer. Exited with returncode %d' % p.returncode)

def split_list(input_list, split_count):
    for i in range(0, len(input_list), split_count):
        yield input_list[i:i + split_count]

if __name__ == '__main__':
    threadLock = threading.Lock()
    threads = []

    with open("inputList.txt", "r") as Ptr:
        for i in Ptr:
            try:
                id = str(i).rstrip('\n').rstrip('\r')
                input_list.append(id)
            except Exception as err:
                print err
                print "Exception occured..."

    try:
        test = split_list(input_list, len(input_list)/THREAD_COUNT)
        list_of_lists = list(test)
    except Exception as err:
        print err
        print "Exception caught in splitting list"

    try:
        # Create Threads & Start
        for i in range(0, len(list_of_lists)-1):
            # Create new threads
            threads.append(myThread(list_of_lists[i]))
            threads[i].start()
            time.sleep(1)

        # Wait for all threads to complete
        for thread in threads:
            thread.join()
        print "Exiting Main Thread..!"
    except Exception as err:
        print err
        print "Exception caught during THREADING..."
You are trying to do two things at the same time, which is the definition of parallelism. The problem is that if you are using CPython, threads won't give you true parallelism because of the GIL (Global Interpreter Lock). The GIL ensures that only one thread executes Python bytecode at a time, because the interpreter itself is not thread-safe.
If you really want to run two operations in parallel, use the multiprocessing module (import multiprocessing).
Read this: Multiprocessing vs Threading Python
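For the question's use case, a minimal sketch of what that could look like (this assumes, as in the original code, that Audit.py accepts a comma-separated list of ids read from inputList.txt):

import multiprocessing
import subprocess

def run_audit(id_chunk):
    # Each process runs Audit.py on its half of the input list.
    subprocess.call(["python", "Audit.py", ",".join(id_chunk)])

if __name__ == '__main__':
    with open("inputList.txt") as f:
        ids = [line.strip() for line in f if line.strip()]
    half = len(ids) // 2
    chunks = [ids[:half], ids[half:]]
    procs = [multiprocessing.Process(target=run_audit, args=(c,)) for c in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()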
Some notes, in random order:
In Python, multithreading is not a good solution for computationally intensive tasks. A better approach is multiprocessing:
Python: what are the differences between the threading and multiprocessing modules?
For resources that are not shared (in your case, each line will be used exclusively by a single process) you do not need locks. A better approach would be the map function.
import multiprocessing
import subprocess

def processing_function(chunk):
    # Each worker gets one half of the lines and runs Audit.py on it,
    # joining the ids with commas as in the original script.
    subprocess.call(["python", "Audit.py", ",".join(line.strip() for line in chunk)])

if __name__ == '__main__':
    with open('file.txt', 'r') as f:
        lines = f.readlines()
    to_process = [lines[:len(lines)//2], lines[len(lines)//2:]]
    p = multiprocessing.Pool(2)
    results = p.map(processing_function, to_process)
If the computation requires a variable amount of time depending on the line, using Queues to move data between processes instead of mapping could help to balance the load.
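A rough sketch of that queue-based pattern (again assuming the same hypothetical Audit.py invocation as above): each worker pulls lines from a shared queue until it sees a sentinel, so faster workers naturally end up processing more lines.

import multiprocessing
import subprocess

SENTINEL = None

def worker(task_queue):
    # Pull lines until the sentinel is seen.
    while True:
        line = task_queue.get()
        if line is SENTINEL:
            break
        subprocess.call(["python", "Audit.py", line])

if __name__ == '__main__':
    task_queue = multiprocessing.Queue()
    with open('file.txt', 'r') as f:
        for line in f:
            task_queue.put(line.strip())

    num_workers = 2
    for _ in range(num_workers):
        task_queue.put(SENTINEL)  # one sentinel per worker

    workers = [multiprocessing.Process(target=worker, args=(task_queue,))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()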