Python subprocess, communicate, and multiprocessing/multithreading - python

I have a script that executes a compiled Fortran module. Input then has to be passed to this process in the form of a filename, and Enter must be pressed to initiate processing. I have no real control over the nature of the Fortran executable; it is what it is.
I am using subprocess and communicate to handle this from Python and it works well. The problem is that I need to process hundreds to thousands of files, and doing them sequentially is slow. While I expect I will eventually run into an I/O bottleneck at the HDD, current execution times are nowhere near that limit.
I attempted to simply wrap the method spawning the subprocess in a ThreadPoolExecutor, but found that only a small subset of the files actually get processed (roughly every 20th, but it varies) and the rest of the files are created but are empty (each is 0 kB and has no contents, as though the subprocess that spawned them was killed prematurely just after creating the handle).
I have instead tried subprocess.run with an input argument, custom os.pipe objects, a TemporaryFile as a pipe, spawning all the subprocesses first and then multithreading the calls to communicate, and manual delays after spawning the process before communicating, all to no avail.
If I spawn the subprocesses first, I can confirm by inspection that the stdout, stdin, and stderr pipes for each have unique identifiers.
This is the code that calls the Fortran module:
def run_CEA2(fName_prefix):
    print(fName_prefix)
    CEA_call = subprocess.run('FCEA2.exe', input='{}\n'.format(fName_prefix), encoding='ascii',
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              shell=True, cwd=None, check=False)
    if 'DOES NOT EXIST' in CEA_call.stdout:
        raise RuntimeError('\nERROR: Stdout returned by run_CEA()\n'
                           + '\t'.join([line+'\n' for line in CEA_call.stdout.split('\n')]))
    else:
        return True
This is the code that calls the above method asynchronously
import concurrent.futures

def threadedRun(fName):
    print('\tExecuting file {}'.format(fName))
    run_CEA(fName)

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    executor.map(threadedRun, fNames)
print('\tDone.')
Here is a version of run_CEA using Popen and communicate
def run_CEA(fName_prefix):
    print(fName_prefix)
    p = subprocess.Popen(['FCEA2.exe'], stdout=subprocess.PIPE, stdin=subprocess.PIPE,
                         stderr=subprocess.PIPE, shell=True)
    return_str = p.communicate(input=('{}\n'.format(fName_prefix)).encode())[0].decode()
    if 'DOES NOT EXIST' in return_str:
        raise RuntimeError('\nERROR: Stdout returned by run_CEA()\n'
                           + '\t'.join([line+'\n' for line in return_str.split('\n')]))
    else:
        return True
I do not understand what is causing the premature closure of the spawned processes. As stated above, I can pre-spawn all the subprocesses and then iterate through a list of them, processing each in turn.
When adding concurrent.futures to the mix, it seems signals get crossed and multiple spawned processes are killed at a time.
Interestingly, when I used concurrent.futures only to process the pre-populated list of subprocesses, the behaviour was the same. Regardless of all processes already being present (not being spawned on the fly while the communicate-and-close processing was occurring), output was produced for roughly every 20th process in the list.

Embarrassingly, the issue was a Fortran issue, and it became obvious when I stopped piping stderr and allowed it to pass to the console, where I was greeted by:
forrtl: severe (30): / process cannot access file because it is being
used by another process.
The Fortran executable being used was not just reading from a binary but also locking it with write permissions, meaning that it could not be called concurrently by more than one instance of the executable.
To get around this, at runtime I spawn n temporary folders, each with a complete copy of the Fortran executable and its dependencies, then use the 'cwd' argument in the call to subprocess.run so that a bunch of threads can crunch through the files.
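As a minimal sketch of the same idea (this is not the full code below; it assumes fNames holds the input-file prefixes as absolute paths without the '.inp' extension, and that the dependency list is complete for your CEA installation):
import os
import shutil
import subprocess
import queue
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 8  # illustrative; tune for your machine

# One private copy of the executable and its dependencies per folder,
# so no two instances ever lock the same .lib files.
free_dirs = queue.Queue()
for i in range(N_WORKERS):
    d = os.path.abspath('worker_{}'.format(i))
    os.makedirs(d, exist_ok=True)
    for dep in ('FCEA2.exe', 'thermo.lib', 'trans.lib'):  # plus anything else FCEA2 needs
        shutil.copy(dep, d)
    free_dirs.put(d)

def run_one(fname_prefix):
    cwd = free_dirs.get()              # borrow a private working folder...
    try:
        return subprocess.run('FCEA2.exe', input='{}\n'.format(fname_prefix),
                              encoding='ascii', stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, shell=True,
                              cwd=cwd, check=False)
    finally:
        free_dirs.put(cwd)             # ...and hand it back when done

with ThreadPoolExecutor(max_workers=N_WORKERS) as executor:
    results = list(executor.map(run_one, fNames))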
If you are familiar with the NASA CEA code, that is what is being called. For completeness, below is the full code for anyone who might benefit.
import os
import shutil
import subprocess
from threading import Thread, Lock, current_thread
import queue
import functools
import threading
def run_CEA(fName_prefix, working_folder=None):
    CEA_str = os.path.abspath(os.path.join(working_folder, 'FCEA2.exe'))
    CEA_call = subprocess.run(CEA_str, input='{}\n'.format(fName_prefix),
                              encoding='ascii', stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              shell=False, cwd=working_folder, check=False)
    if 'DOES NOT EXIST' in CEA_call.stdout:
        raise RuntimeError('FCEA2.exe could not find specified input file\n'
                           + '\t'.join([line+'\n' for line in CEA_call.stdout.split('\n')]))
    elif CEA_call.stderr:
        raise RuntimeError('Error occurred in call to FCEA2.exe\n'
                           + '\t'.join([line+'\n' for line in CEA_call.stderr.split('\n')]))
    else:
        return 1
def synchronized(lock):
    """ Synchronization decorator """
    def wrap(f):
        @functools.wraps(f)
        def newFunction(*args, **kw):
            with lock:
                return f(*args, **kw)
        return newFunction
    return wrap
class CEA_Queue(queue.Queue):
    """Based on the template provided by Shashwat Kumar at
    https://medium.com/@shashwat_ds/a-tiny-multi-threaded-job-queue-in-30-lines-of-python-a344c3f3f7f0"""
    inp_folder = os.path.abspath('.//inp_files')
    out_folder = os.path.abspath('.//out_files')
    run_folder = os.path.abspath('.//workers')
    exe_folder = os.path.abspath('.//cea_files')
    req_cea_files = ["FCEA2.exe",
                     "b1b2b3.exe",
                     "syntax.exe",
                     "thermo.lib",
                     "trans.lib"]
    lock = Lock()

    @classmethod
    def test_dirs_cls(cls):
        print('test_dirs_cls:')
        for dirname in ('inp_folder', 'out_folder', 'run_folder', 'exe_folder'):
            print(dirname, ':', getattr(cls, dirname))

    def test_dirs_self(self):
        print('test_dirs_self:')
        for dirname in ('inp_folder', 'out_folder', 'run_folder', 'exe_folder'):
            print(dirname, ':', getattr(self, dirname))

    @staticmethod
    def clean_folder(target, ignore_list=[]):
        if os.path.isdir(target):
            for fName in os.listdir(target):
                fPath = os.path.join(target, fName)
                if os.path.isfile(fPath) and not fName in ignore_list:
                    os.remove(fPath)
                elif os.path.isdir(fPath) and not fName in ignore_list:
                    shutil.rmtree(fPath)

    @classmethod
    def setup_folders(cls):
        for folder in (cls.out_folder, cls.inp_folder, cls.run_folder):
            if not os.path.isdir(folder):
                os.mkdir(folder)
            else:
                cls.clean_folder(folder)
        if not os.path.isdir(cls.exe_folder):
            raise ValueError("Cannot find exe folder at:\n\t{}".format(cls.exe_folder))
        else:
            cls.clean_folder(cls.exe_folder, ignore_list=cls.req_cea_files)

    @classmethod
    def cleanup(cls):
        cls.clean_folder(cls.run_folder)
        out_files = []
        for fName in os.listdir(cls.inp_folder):
            if '.out' == fName[-4:]:
                try:
                    shutil.move(os.path.join(cls.inp_folder, fName),
                                os.path.join(cls.out_folder, fName))
                    out_files.append(os.path.join(cls.out_folder, fName))
                except Exception as exc:
                    print('WARNING: Could not move *.out file\n{}\n{}'.format(fName, exc))
        return out_files

    @classmethod
    def gather_inputs(cls):
        inp_files = []
        for fName in os.listdir(cls.inp_folder):
            if '.inp' in fName[-4:]:
                inp_files.append(os.path.join(cls.inp_folder, fName))
        return inp_files

    @classmethod
    def set_dirs(cls, inp_folder=None, out_folder=None,
                 run_folder=None, exe_folder=None):
        if not inp_folder is None:
            cls.inp_folder = os.path.abspath(inp_folder)
        if not out_folder is None:
            cls.out_folder = os.path.abspath(out_folder)
        if not run_folder is None:
            cls.run_folder = os.path.abspath(run_folder)
        if not exe_folder is None:
            cls.exe_folder = os.path.abspath(exe_folder)
    def __init__(self, num_workers=1, inp_folder=None, out_folder=None,
                 run_folder=None, exe_folder=None):
        queue.Queue.__init__(self)
        self.set_dirs(inp_folder, out_folder, run_folder, exe_folder)
        self.setup_folders()
        self.num_workers = num_workers
        self.n_task = 0
        self.n_complete = 0
        self.update_every = 10.
        self.last_update = 0

    def add_task(self, fName):
        self.put(fName)

    def schedule_tasks(self):
        inp_files = self.gather_inputs()
        for fName in inp_files:
            self.add_task(fName.split('.inp')[0])
        self.n_task = len(inp_files)
        self.n_complete = 0
        self.last_update = 0
        return inp_files

    def progress(self):
        return (self.n_complete/self.n_task)*100

    def start_workers(self):
        self.worker_threads = []
        for i in range(self.num_workers):
            k = str(i)
            worker_folder = os.path.join(self.run_folder, k)
            try:
                os.mkdir(worker_folder)
                for fNameExe in os.listdir(self.exe_folder):
                    shutil.copy(os.path.join(self.exe_folder, fNameExe),
                                os.path.join(worker_folder, fNameExe))
            except Exception as exc:
                raise exc
            t = Thread(target=self.worker)
            t.daemon = True
            t.worker_folder = worker_folder
            t.start()
            self.worker_threads.append(t)

    def worker(self):
        while True:
            try:
                worker_folder = current_thread().worker_folder
                fName = self.get()
                rel_path = os.path.relpath(fName, worker_folder)
                run_CEA(rel_path, worker_folder)
            except Exception as exc:
                print('ERROR: Worker failed on task\n\tFolder:{}\n\tFile:{}\n\t{}'.format(worker_folder, fName, exc))
            finally:
                self.task_done()
                with self.lock:
                    self.n_complete += 1
                    current_progress = self.progress()
                    if (self.last_update == 0 or current_progress == 100. or
                            current_progress - self.last_update >= self.update_every):
                        print('\tCurrent progress: {:>6.2f}%'.format(current_progress))
                        self.last_update = current_progress

    def run(self):
        inp_files = self.schedule_tasks()
        self.start_workers()
        self.join()
        out_files = self.cleanup()
        return out_files
    def tests(self, n):
        inp_str = """! EXAMPLE 1
! (a) Assigned-temperature-and-pressure problem (tp).
! (b) Reactants are H2 and Air. Since "exploded" formulas are not given,
! these formulas will be taken from the thermodynamic data library,
! thermo.lib.
! (c) Calculations are for two equivalence ratios (r,eq.ratio =1,1.5).
! (d) Assigned pressures are 1, 0.1, and 0.01 atm (p(atm)=1, .1, .01).
! (e) Assigned temperatures are 3000 and 2000 K (t(k)=3000,2000).
! (f) 'only' dataset is used to restrict possible products.
! (g) Energy units in the final tables are in calories (calories).
problem case=Example-1 tp p(atm)=1,.1,.01, t(k)=3000,2000,
r,eq.ratio=1,1.5
reac
fuel= H2 moles = 1.
oxid= Air moles = 1.
only Ar C CO CO2 H H2 H2O HNO HO2 HNO2 HNO3 N NH
NO N2 N2O3 O O2 OH O3
output calories
end
"""
        self.setup_folders()
        for i in range(n):
            fName = 'test{:0>4}'.format(i)
            fName = os.path.abspath(os.path.join(self.inp_folder, fName+'.inp'))
            f = open(fName, 'w')
            f.write(inp_str)
            f.close()
        return self.run()
if __name__ == "__main__":
    if True:
        import time
        start_time = time.time()
        Q = CEA_Queue(12)
        out_files = Q.tests(10_000)
        end_time = time.time()
        print('Processing took {:5.2f}'.format(end_time-start_time))
On my 8 core machine the sweet spot is at about 12 threads. Below is an example curve comparing runtime to number of threads handling the workload for a problem.
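If you want to produce such a curve for your own machine, a timing sweep along these lines will do it (a sketch reusing the CEA_Queue class above; the thread counts and the number of test files are illustrative):
import time

timings = {}
for n_threads in (1, 2, 4, 8, 12, 16, 24):
    start = time.time()
    CEA_Queue(n_threads).tests(500)
    timings[n_threads] = time.time() - start
    print('{:>3} threads: {:6.2f} s'.format(n_threads, timings[n_threads]))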

Related

Process pool results without waiting for all tasks to finish

from multiprocessing import Pool
from functools import partial
from time import sleep
import random
import string
import uuid
import os
import glob


def task_a(param1, param2, mydata):
    thread_id = str(uuid.uuid4().hex)  # this may not be robust enough to guarantee no collisions, address
    output_filename = ''.join([str(thread_id), '.txt'])
    # part 1 - create output file for task_b to use
    with open(output_filename, 'w') as outfile:
        for line in mydata:
            outfile.write(line)
    # part 2 - do some extra stuff (whilst task_b is running)
    sleep(5)
    print('Task A finished')
    return output_filename  # not interested in return val


def task_b(expected_num_files):
    processed_files = 0
    while processed_files < expected_num_files:
        print('I am task_b, waiting for {} files ({} so far)'.format(expected_num_files, processed_files))
        path_to_search = ''
        for filename in glob.iglob(path_to_search + '*.txt', recursive=True):
            print('Got file : {}'.format(filename))
            # would do something complicated here
            os.rename(filename, filename+'.done')
            processed_files += 1
        sleep(10)


if __name__ == '__main__':
    param1 = ''  # dummy variable, need to support in solution
    param2 = ''  # dummy variable, need to support in solution
    num_workers = 2
    full_data = [[random.choice(string.ascii_lowercase) for _ in range(5)] for _ in range(100)]
    print(full_data)
    for i in range(0, len(full_data), num_workers):
        print('Going to process {}'.format(full_data[i:i+num_workers]))
        p = Pool(num_workers)
        task_a_func = partial(task_a, param1, param2)
        results = p.map(task_a_func, full_data[i:i+num_workers])
        p.close()
        p.join()
        task_b(expected_num_files=num_workers)  # want this running sooner
        print('Iteration {} complete'.format(i))
    # want to wait for task_a's and task_b to finish
I'm having trouble scheduling these tasks to run concurrently.
task_a is a multiprocessing pool that produces an output file part way through its execution.
task_b MUST process the output files sequentially, but they can be processed in any order (as soon as they are available), WHILST task_a continues to run (it will no longer change the output file).
The next iteration must only start when both all task_a's have completed AND task_b has completed.
The toy code I have posted obviously waits for task_a's to fully complete before task_b is started (which is not what I want)
I have looked at multiprocessing / subprocess etc. but cannot find a way to launch both the pool and the single task_b process concurrently AND wait for BOTH to finish.
task_b is written as if it could be changed to an external script, but I am still stuck on how to manage the execution.
Should I effectively merge the code from task_b into task_a and somehow pass a flag to ensure one worker per pool 'runs the task_b code' via an if/else? At least then I would just be waiting on the pool to complete.
You can use an interprocess queue to communicate the filenames between task a and task b.
Also, initializing the pool repeatedly inside the loop is harmful and unnecessarily slow.
It's better to initialize the pool once at the beginning.
from multiprocessing import Pool, Manager, Event
from functools import partial
from time import sleep
import random
import string
import uuid
import os
import glob


def task_a(param1, param2, queue, mydata):
    thread_id = str(uuid.uuid4().hex)
    output_filename = ''.join([str(thread_id), '.txt'])
    output_filename = 'data/' + output_filename
    with open(output_filename, 'w') as outfile:
        for line in mydata:
            outfile.write(line)
    print(f'{thread_id}: Task A file write complete for data {mydata}')
    queue.put(output_filename)
    print('Task A finished')


def task_b(queue, num_workers, data_size, event_task_b_done):
    print('Task b started!')
    processed_files = 0
    while True:
        filename = queue.get()
        if filename == 'QUIT':
            # Whenever you want task_b to quit, just push 'QUIT' to the queue
            print('Task b quitting')
            break
        print('Got file : {}'.format(filename))
        os.rename(filename, filename+'.done')
        processed_files += 1
        print(f'Have processed {processed_files} so far!')
        if (processed_files % num_workers == 0) or (processed_files == data_size):
            event_task_b_done.set()


if __name__ == '__main__':
    param1 = ''  # dummy variable, need to support in solution
    param2 = ''  # dummy variable, need to support in solution
    num_workers = 2
    data_size = 100
    full_data = [[random.choice(string.ascii_lowercase) for _ in range(5)] for _ in range(data_size)]
    mgr = Manager()
    queue = mgr.Queue()
    event_task_b_done = mgr.Event()
    # One extra worker for task b
    p = Pool(num_workers + 1)
    p.apply_async(task_b, args=(queue, num_workers, data_size, event_task_b_done))
    task_a_func = partial(task_a, param1, param2, queue)
    for i in range(0, len(full_data), num_workers):
        data = full_data[i:i+num_workers]
        print('Going to process {}'.format(data))
        p.map_async(task_a_func, full_data[i:i+num_workers])
        print(f'Waiting for task b to process all {num_workers} files...')
        event_task_b_done.wait()
        event_task_b_done.clear()
        print('Iteration {} complete'.format(i))
    queue.put('QUIT')
    p.close()
    p.join()
    exit(0)

How can I terminate running jobs without closing connection to the core? (currently using execnet)

I have a cluster of computers which uses a master node to communicate with the slave nodes in the cluster.
The main problem I'm facing with execnet is being able to kill certain running jobs and then having new jobs requeue on the same core that the terminated job was running on (as I want to utilize all cores of the slave nodes at any given time).
As of now there is no way to terminate running jobs using execnet, so I figured if I could just kill the jobs manually through a bash script, say sudo kill 12345 where 12345 is the PID of the job (obtaining the PID of each job is another thing not supported by execnet, but that's another topic), then it would terminate the job and then requeue another on the same core that was just terminated on. It does kill the job correctly, however it closes the connection to that channel (the core; the master node communicates to each core individually) and then does not utilize that core anymore, until all jobs are done. Is there a way to terminate a running job, without killing the connection to the core?
Here is the script to submit jobs
import execnet, os, sys
import re
import socket
import numpy as np
import pickle, cPickle
from copy import deepcopy
import time
import job


def main():
    print 'execnet source files are located at:\n {}/\n'.format(
        os.path.join(os.path.dirname(execnet.__file__))
    )
    # Generate a group of gateways.
    work_dir = '/home/mpiuser/pn2/'
    f = 'cluster_core_info.txt'
    n_start, n_end = 250000, 250008
    ci = get_cluster_info(f)
    group, g_labels = make_gateway_group(ci, work_dir)
    mch = group.remote_exec(job)
    # Receive queue with endmarker, as expected by manage_jobs() below.
    queue = mch.make_receive_queue(endmarker='terminate_channel')
    args = range(n_start, n_end+1)  # List of parameters to compute factorial.
    manage_jobs(group, mch, queue, g_labels, args)
    # Close the group of gateways.
    group.terminate()
def get_cluster_info(f):
    nodes, ncores = [], []
    with open(f, 'r') as fid:
        while True:
            line = fid.readline()
            if not line:
                fid.close()
                break
            line = line.strip('\n').split()
            nodes.append(line[0])
            ncores.append(int(line[1]))
    return dict(zip(nodes, ncores))


def make_gateway_group(cluster_info, work_dir):
    ''' Generate gateways on all cores in remote nodes. '''
    print 'Gateways generated:\n'
    group = execnet.Group()
    g_labels = []
    nodes = list(cluster_info.keys())
    for node in nodes:
        for i in range(cluster_info[node]):
            group.makegateway(
                "ssh={0}//id={0}_{1}//chdir={2}".format(
                    node, i, work_dir
                ))
            sys.stdout.write(' ')
            sys.stdout.flush()
            print list(group)[-1]
            # Generate a string 'node-id_core-id'.
            g_labels.append('{}_{}'.format(re.findall(r'\d+', node)[0], i))
    print ''
    return group, g_labels


def get_mch_id(g_labels, string):
    ids = [x for x in re.findall(r'\d+', string)]
    ids = '{}_{}'.format(*ids)
    return g_labels.index(ids)


def manage_jobs(group, mch, queue, g_labels, args):
    args_ref = deepcopy(args)
    terminated_channels = 0
    active_jobs, active_args = [], []
    while True:
        channel, item = queue.get()
        if item == 'terminate_channel':
            terminated_channels += 1
            print " Gateway closed: {}".format(channel.gateway.id)
            if terminated_channels == len(mch):
                print "\nAll jobs done.\n"
                break
            continue
        if item != "ready":
            mch_id_completed = get_mch_id(g_labels, channel.gateway.id)
            depopulate_list(active_jobs, mch_id_completed, active_args)
            print " Gateway {} channel id {} returned:".format(
                channel.gateway.id, mch_id_completed)
            print " {}".format(item)
        if not args:
            print "\nNo more jobs to submit, sending termination request...\n"
            mch.send_each(None)
            args = 'terminate_channel'
        if args and \
           args != 'terminate_channel':
            arg = args.pop(0)
            idx = args_ref.index(arg)
            channel.send(arg)  # arg is copied by value to the remote side of
                               # channel to be executed. Maybe blocked if the
                               # sender queue is full.
            # Get the id of current channel used to submit a job,
            # this id can be used to refer mch[id] to terminate a job later.
            mch_id_active = get_mch_id(g_labels, channel.gateway.id)
            print "Job {}: {}! submitted to gateway {}, channel id {}".format(
                idx, arg, channel.gateway.id, mch_id_active)
            populate_list(active_jobs, mch_id_active,
                          active_args, arg)


def populate_list(jobs, job_active, args, arg_active):
    jobs.append(job_active)
    args.append(arg_active)


def depopulate_list(jobs, job_completed, args):
    i = jobs.index(job_completed)
    jobs.pop(i)
    args.pop(i)


if __name__ == '__main__':
    main()
and here is my job.py script:
#!/usr/bin/env python
import os, sys
import socket
import time
import numpy as np
import pickle, cPickle
import random
import job


def hostname():
    return socket.gethostname()


def working_dir():
    return os.getcwd()


def listdir(path):
    return os.listdir(path)


def fac(arg):
    return np.math.factorial(arg)


def dump(arg):
    path = working_dir() + '/out'
    if not os.path.exists(path):
        os.mkdir(path)
    f_path = path + '/fac_{}.txt'.format(arg)
    t_0 = time.time()
    num = fac(arg)  # Main operation
    t_1 = time.time()
    cPickle.dump(num, open(f_path, "w"), protocol=2)  # Main operation
    t_2 = time.time()
    duration_0 = "{:.4f}".format(t_1 - t_0)
    duration_1 = "{:.4f}".format(t_2 - t_1)
    #num2 = cPickle.load(open(f_path, "rb"))
    return '--Calculation: {} s, dumping: {} s'.format(
        duration_0, duration_1)


if __name__ == '__channelexec__':
    channel.send("ready")
    for arg in channel:
        if arg is None:
            break
        elif str(arg).isdigit():
            channel.send((
                str(arg)+'!',
                job.hostname(),
                job.dump(arg)
            ))
        else:
            print 'Warning! arg sent should be number | None'
Yes, you are on the right track. Use the psutil library to manage the processes, find their PIDs, etc.,
and kill them. There is no need for involving bash anywhere; Python covers it all.
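For instance, something along these lines with psutil (a sketch; how you obtain the PID of the remote job is left open, since execnet does not hand it to you):
import psutil

def kill_job(pid, timeout=5):
    # Terminate one running job process without touching the SSH
    # connection that execnet keeps open to the node.
    try:
        proc = psutil.Process(pid)
    except psutil.NoSuchProcess:
        return
    proc.terminate()                 # polite SIGTERM first
    try:
        proc.wait(timeout=timeout)
    except psutil.TimeoutExpired:
        proc.kill()                  # escalate to SIGKILL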
Or, even better, program your script to terminate when the master says so.
It is usually done that way.
You can even make it start another script before terminating itself if you want/need.
Or, if it is the same that you would be doing in another process, just stop the current work and start a new one in the script without terminating it at all.
And, if I may make a suggestion: don't read your file line by line; read the whole file and then use *.splitlines(). For small files, reading them in chunks just tortures the I/O. You won't need *.strip() either. And you should remove unused imports too.
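For example, get_cluster_info from the submit script could become (a sketch of that suggestion):
def get_cluster_info(f):
    # Read the whole file at once instead of looping over readline().
    with open(f, 'r') as fid:
        lines = fid.read().splitlines()
    info = {}
    for line in lines:
        node, ncores = line.split()
        info[node] = int(ncores)
    return info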

strace a python function

Is it possible to strace a python function for opened files, and differentiate if they were opened by python or a subprocess?
read_python, read_external = [], []

@strace_read(read_python, read_external)
def test():
    file = open("foo.txt", "r")
    subprocess.call(["cat", "bar.txt"])

for file in read_python:
    print("python: ", file)
for file in read_external:
    print("external: ", file)
So the output would be:
>>> python: foo.txt
>>> external: bar.txt
I'm most interested in using a decorator. Differentiating isn't a priority.
Conceptually, my best guess is to replace instances of load_function(open) with wrappers ... actually, I have no idea, there are too many ways to access open.
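As a conceptual sketch of that 'wrap open' idea, a hypothetical strace_read decorator could monkey-patch builtins.open for the duration of the call; note it only catches files opened from Python code via builtins.open, not files opened by subprocesses or by C extensions that bypass it:
import builtins
import functools

def strace_read(read_python, read_external):
    # Hypothetical decorator: record files opened via builtins.open while
    # the decorated function runs; read_external is left untouched here.
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            real_open = builtins.open
            def recording_open(file, *a, **kw):
                read_python.append(file)
                return real_open(file, *a, **kw)
            builtins.open = recording_open
            try:
                return func(*args, **kwargs)
            finally:
                builtins.open = real_open
        return wrapper
    return decorate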
I'd solve it in a much simpler way but with similar result. Instead of figuring out how to enable strace on a single function:
Create a decorator like this (untested):
def strace_mark(f):
    def wrapper(*args, **kwargs):
        try:
            open('function-%s-start' % f.__name__, 'r')
        except:
            pass
        ret = f(*args, **kwargs)
        try:
            open('function-%s-end' % f.__name__, 'r')
        except:
            pass
        return ret
    return wrapper
Run the whole app under strace -e file.
Get only the parts between calls open(function-something-start) and open(function-something-end).
If you do strace -f, you get the python/external separation for free. Just look at what pid calls the function.
This is the solution I used:
#!/usr/bin/env python3
import multiprocessing
import selectors
import os
import array
import fcntl
import termios
import subprocess
import decorator
import locale
import io
import codecs
import re
import collections
import signal  # needed for os.kill(..., signal.SIGKILL) below
def strace(function):
    StraceReturn = collections.namedtuple("StraceReturn", ["return_data", "pid", "strace_data"])

    def strace_filter(stracefile, pid, exclude_system=False):
        system = ( "/bin"
                 , "/boot"
                 , "/dev"
                 , "/etc"
                 , "/lib"
                 , "/proc"
                 , "/root"
                 , "/run"
                 , "/sbin"
                 , "/srv"
                 , "/sys"
                 , "/tmp"
                 , "/usr"
                 , "/var"
                 )
        encoding = locale.getpreferredencoding(False)
        for line in stracefile:
            match = re.search(r'^(?:\[pid\s+(\d+)\]\s+)?open\(\"((?:\\x[0-9a-f]{2})+)\",', line, re.IGNORECASE)
            if match:
                p, f = match.groups(pid)
                f = codecs.escape_decode(f.encode("ascii"))[0].decode(encoding)
                if exclude_system and f.startswith(system):
                    continue
                yield (p, f)

    def strace_reader(conn_parent, conn_child, barrier, pid):
        conn_parent.close()
        encoding = locale.getpreferredencoding(False)
        strace_args = ["strace", "-e", "open", "-f", "-s", "512", "-xx", "-p", str(pid)]
        process_data = io.StringIO()
        process = subprocess.Popen\
            ( strace_args
            , stdout = subprocess.DEVNULL
            , stderr = subprocess.PIPE
            , universal_newlines = True
            )
        selector = selectors.DefaultSelector()
        selector.register(process.stderr, selectors.EVENT_READ)
        selector.select()
        barrier.wait()
        selector.register(conn_child, selectors.EVENT_READ)
        while len(selector.get_map()):
            events = selector.select()
            for key, mask in events:
                if key.fd == conn_child.fileno():
                    conn_child.recv()
                    selector.unregister(key.fd)
                    process.terminate()
                    try:
                        process.wait(5)
                    except TimeoutError:
                        process.kill()
                        process.wait()
                else:
                    ioctl_buffer = array.array("i", [0])
                    try:
                        fcntl.ioctl(key.fd, termios.FIONREAD, ioctl_buffer)
                    except OSError:
                        read_bytes = 1024
                    else:
                        read_bytes = max(1024, ioctl_buffer[0])
                    data = os.read(key.fd, read_bytes)
                    if data:
                        # store all data, simpler but not as memory-efficient
                        # as:
                        #     result, leftover_line = strace_filter\
                        #         ( leftover_line + data.decode(encoding)
                        #         , pid
                        #         )
                        #     process_data.append(result)
                        # with, after this loop, a final:
                        #     result = strace_filter(leftover_line + "\n", pid)
                        #     process_data.append(result)
                        process_data.write(data.decode(encoding))
                    else:
                        selector.unregister(key.fd)
        selector.close()
        process_data.seek(0, io.SEEK_SET)
        for pidfile in strace_filter(process_data, pid):
            conn_child.send(pidfile)
        conn_child.close()

    def strace_wrapper(function, *args, **kw):
        strace_data = list()
        barrier = multiprocessing.Barrier(2)
        conn_parent, conn_child = multiprocessing.Pipe(duplex = True)
        process = multiprocessing.Process\
            ( target=strace_reader
            , args=(conn_parent, conn_child, barrier, os.getpid())
            )
        process.start()
        conn_child.close()
        barrier.wait()
        function_return = function()
        conn_parent.send(None)
        while True:
            try:
                strace_data.append(conn_parent.recv())
            except EOFError:
                break
        process.join(5)
        if process.is_alive():
            process.terminate()
            process.join(5)
            if process.is_alive():
                os.kill(process.pid, signal.SIGKILL)
                process.join()
        conn_parent.close()
        return StraceReturn(function_return, os.getpid(), strace_data)

    return decorator.decorator(strace_wrapper, function)
@strace
def test():
    print("Entering test()")
    process = subprocess.Popen("cat +μυρτιὲς.txt", shell=True)
    f = open("test\"test", "r")
    f.close()
    process.wait()
    print("Exiting test()")
    return 5

print(test())
Note that any information strace generates after the termination event will be collected. To avoid that, use a while not signaled loop, and terminate the subprocess after the loop (the FIONREAD ioctl is a holdover from this case; I didn't see any reason to remove it).
In hindsight, the decorator could have been greatly simplified had I used a temporary file, rather than multiprocessing/pipe.
A child process is forked to then fork strace - in other words, strace is tracing its grandparent. Some Linux distributions only allow strace to trace its children. I'm not sure how to work around this restriction - having the main program continue executing in the child fork (while the parent execs strace) is probably a bad idea - the program will trade PIDs like a hot potato if the decorated functions are used too often.
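On many distributions that restriction comes from the Yama ptrace scope; whether it applies to your kernel configuration is an assumption you would need to verify, but a quick check could look like this:
def ptrace_restricted():
    # Yama value >= 1 restricts ptrace (and hence strace -p) to direct
    # descendants; the file is absent on kernels built without Yama.
    try:
        with open("/proc/sys/kernel/yama/ptrace_scope") as f:
            return int(f.read().strip()) >= 1
    except FileNotFoundError:
        return False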

multiprocessing - execute external command and wait before proceeding

I am using Linux. I have an external executable called "combine" and a loop of 20 iterations.
For each iteration, "combine" needs to be called with an argument that depends on the i-th iteration. Example:
arguments = " "
for i in range(1,20):
    arguments += str(i) + "_image.jpg "
    # begin of pseudo-code
    execute: "./combine" + arguments  # in parallel using all cores
# pseudo-code continues
wait_for_all_previous_process_to_terminate
execute: "./merge_resized_images"  # use all cores - possible for one single command?
How do I achieve this using the multiprocessing module in Python?
You can use subprocess.Popen to launch the external commands asynchronously, and store each Popen object returned in a list. Once you've launched all the processes, just iterate over them and wait for each to finish using popen_object.wait.
import shlex
import subprocess
from subprocess import Popen

processes = []
arguments = " "
for i in range(1, 20):
    arguments += str(i) + "_image.jpg "
    processes.append(Popen(shlex.split("./combine" + arguments)))
for p in processes:
    p.wait()
subprocess.call("./merge_resized_images")
However, this will launch twenty concurrent processes, which is probably going to hurt performance.
To avoid that, you can use a ThreadPool to limit yourself to some lower number of concurrent processes (multiprocessing.cpu_count is a good number), and then use pool.join to wait for them all to finish.
import multiprocessing
import subprocess
import shlex
from multiprocessing.pool import ThreadPool


def call_proc(cmd):
    """ This runs in a separate thread. """
    #subprocess.call(shlex.split(cmd))  # This will block until cmd finishes
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return (out, err)


pool = ThreadPool(multiprocessing.cpu_count())
results = []
arguments = " "
for i in range(1, 20):
    arguments += str(i) + "_image.jpg "
    results.append(pool.apply_async(call_proc, ("./combine" + arguments,)))

# Close the pool and wait for each running task to complete
pool.close()
pool.join()
for result in results:
    out, err = result.get()
    print("out: {} err: {}".format(out, err))
subprocess.call("./merge_resized_images")
Each thread will release the GIL while waiting for the subprocess to complete, so they'll all run in parallel.
My solution to this problem is to create and manage a list of subprocesses. Pay special attention to startencoder and manageprocs. That is where the actual work is being started and managed.
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
#
# Author: R.F. Smith
# $Date: 2014-02-15 14:44:31 +0100 $
#
# To the extent possible under law, Roland Smith has waived all copyright and
# related or neighboring rights to vid2mkv.py. This work is published from the
# Netherlands. See http://creativecommons.org/publicdomain/zero/1.0/

"""Convert all video files given on the command line to Theora/Vorbis streams
in a Matroska container."""

from __future__ import print_function, division

__version__ = '$Revision: a42ef58 $'[11:-2]

import os
import sys
import subprocess
from multiprocessing import cpu_count
from time import sleep


def warn(s):
    """Print a warning message.

    :param s: Message string
    """
    s = ' '.join(['Warning:', s])
    print(s, file=sys.stderr)


def checkfor(args, rv=0):
    """Make sure that a program necessary for using this script is
    available.

    :param args: String or list of strings of commands. A single string may
    not contain spaces.
    :param rv: Expected return value from evoking the command.
    """
    if isinstance(args, str):
        if ' ' in args:
            raise ValueError('no spaces in single command allowed')
        args = [args]
    try:
        with open(os.devnull, 'w') as bb:
            rc = subprocess.call(args, stdout=bb, stderr=bb)
            if rc != rv:
                raise OSError
    except OSError as oops:
        outs = "Required program '{}' not found: {}."
        print(outs.format(args[0], oops.strerror))
        sys.exit(1)


def startencoder(fname):
    """Use ffmpeg to convert a video file to Theora/Vorbis
    streams in a Matroska container.

    :param fname: Name of the file to convert.
    :returns: a 3-tuple of a Process, input path and output path
    """
    basename, ext = os.path.splitext(fname)
    known = ['.mp4', '.avi', '.wmv', '.flv', '.mpg', '.mpeg', '.mov', '.ogv']
    if ext.lower() not in known:
        warn("File {} has unknown extension, ignoring it.".format(fname))
        return (None, fname, None)
    ofn = basename + '.mkv'
    args = ['ffmpeg', '-i', fname, '-c:v', 'libtheora', '-q:v', '6', '-c:a',
            'libvorbis', '-q:a', '3', '-sn', ofn]
    with open(os.devnull, 'w') as bitbucket:
        try:
            p = subprocess.Popen(args, stdout=bitbucket, stderr=bitbucket)
            print("Conversion of {} to {} started.".format(fname, ofn))
        except:
            warn("Starting conversion of {} failed.".format(fname))
    return (p, fname, ofn)


def manageprocs(proclist):
    """Check a list of subprocesses tuples for processes that have ended and
    remove them from the list.

    :param proclist: a list of (process, input filename, output filename)
    tuples.
    """
    print('# of conversions running: {}\r'.format(len(proclist)), end='')
    sys.stdout.flush()
    for p in proclist:
        pr, ifn, ofn = p
        if pr is None:
            proclist.remove(p)
        elif pr.poll() is not None:
            print('Conversion of {} to {} finished.'.format(ifn, ofn))
            proclist.remove(p)
    sleep(0.5)


def main(argv):
    """Main program.

    :param argv: command line arguments
    """
    if len(argv) == 1:
        binary = os.path.basename(argv[0])
        print("{} version {}".format(binary, __version__), file=sys.stderr)
        print("Usage: {} [file ...]".format(binary), file=sys.stderr)
        sys.exit(0)
    checkfor(['ffmpeg', '-version'])
    avis = argv[1:]
    procs = []
    maxprocs = cpu_count()
    for ifile in avis:
        while len(procs) == maxprocs:
            manageprocs(procs)
        procs.append(startencoder(ifile))
    while len(procs) > 0:
        manageprocs(procs)


if __name__ == '__main__':
    main(sys.argv)

Multiprocessing, writing to file, and deadlock on large loops

I have a very weird problem with the code below. When numrows = 10, the Process loop completes and the script proceeds to finish. If the growing list becomes larger, it goes into a deadlock. Why is this, and how can I solve it?
import multiprocessing, time, sys


# ----------------- Calculation Engine -------------------
def feed(queue, parlist):
    for par in parlist:
        queue.put(par)


def calc(queueIn, queueOut):
    while True:
        try:
            par = queueIn.get(block=False)
            print "Project ID: %s started. " % par
            res = doCalculation(par)
            queueOut.put(res)
        except:
            break


def write(queue, fname):
    print 'Started to write to file'
    fhandle = open(fname, "w")
    while True:
        try:
            res = queue.get(block=False)
            for m in res:
                print >>fhandle, m
        except:
            break
    fhandle.close()
    print 'Complete writing to the file'


def doCalculation(project_ID):
    numrows = 100
    toFileRowList = []
    for i in range(numrows):
        toFileRowList.append([project_ID]*100)
        print "%s %s" % (multiprocessing.current_process().name, i)
    return toFileRowList


def main():
    parlist = [276, 266]
    nthreads = multiprocessing.cpu_count()
    workerQueue = multiprocessing.Queue()
    writerQueue = multiprocessing.Queue()
    feedProc = multiprocessing.Process(target=feed, args=(workerQueue, parlist))
    calcProc = [multiprocessing.Process(target=calc, args=(workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = multiprocessing.Process(target=write, args=(writerQueue, 'somefile.csv'))

    feedProc.start()
    feedProc.join()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()


if __name__ == '__main__':
    sys.exit(main())
I think the problem is the Queue buffer getting filled, so you need to read from the queue before you can put additional stuff in it.
For example, in your feed thread you have:
queue.put(par)
If you keep putting stuff in without reading it, the put will block until the buffer is freed, but the problem is that you only free the buffer in your calc processes, which in turn don't get started before you join your blocking feed process.
So, in order for your feed process to finish, the buffer would have to be freed, but the buffer won't be freed before the process finishes :)
Try reorganizing your queue access.
The feedProc and the writProc are not actually running in parallel with the rest of your program. When you have
proc.start()
proc.join()
you start the process and then immediately wait for it to finish on the join(). In this case there's no gain from multiprocessing, only overhead. Try starting ALL processes at once before you join them. This will also have the effect that your queues get emptied regularly and you won't deadlock.
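A sketch of the code restructured along those lines (doCalculation and the imports are as in the question; the None sentinels are my addition, since the original get(block=False)/except-break termination is race-prone once producers and consumers really do run in parallel):
def feed(queue, parlist, nworkers):
    for par in parlist:
        queue.put(par)
    for _ in range(nworkers):              # one sentinel per calc process
        queue.put(None)

def calc(queueIn, queueOut):
    while True:
        par = queueIn.get()
        if par is None:                    # no more work coming
            queueOut.put(None)
            break
        queueOut.put(doCalculation(par))

def write(queue, fname, nworkers):
    finished = 0
    with open(fname, "w") as fhandle:
        while finished < nworkers:
            res = queue.get()
            if res is None:
                finished += 1
                continue
            for m in res:
                fhandle.write("%s\n" % m)

def main():
    parlist = [276, 266]
    nthreads = multiprocessing.cpu_count()
    workerQueue = multiprocessing.Queue()
    writerQueue = multiprocessing.Queue()
    feedProc = multiprocessing.Process(target=feed, args=(workerQueue, parlist, nthreads))
    calcProc = [multiprocessing.Process(target=calc, args=(workerQueue, writerQueue))
                for _ in range(nthreads)]
    writProc = multiprocessing.Process(target=write, args=(writerQueue, 'somefile.csv', nthreads))

    # Start ALL processes before joining any of them, so the queues are
    # drained while they are being filled.
    feedProc.start()
    for p in calcProc:
        p.start()
    writProc.start()

    feedProc.join()
    for p in calcProc:
        p.join()
    writProc.join()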
