Terminate two processes that have a pipe connection - Multiprocessing Python

I've made two processes, and each process holds one end of a pipe. When I terminate my processes, I get a "BrokenPipeError". How do I correctly kill the processes? I'm using multiprocessing.Process, not subprocess. I suspect it's because process1's pipe is still trying to send information. How do I end the processes without getting this error?
I also have a GUI that sends the kill command to my processes when it closes.
I've tried:
process.terminate() and process.join()
but I still get the broken pipe error.
import multiprocessing as mp

... # other code

my_pipe = mp.Pipe()
other_pipe1 = mp.Pipe()
other_pipe2 = mp.Pipe()

# worker1/worker2 stand in for the functions each process runs (not shown)
process1 = mp.Process(target=worker1, args=(my_pipe[0], other_pipe1[0]))
process2 = mp.Process(target=worker2, args=(my_pipe[1], other_pipe2[0]))

... # doing things in my processes; sending data from process1 to process2

# When I close my GUI
other_pipe1[1].send("kill")  # process1 closes some files
while True:
    if other_pipe1[1].poll():
        if other_pipe1[1].recv() == "done":  # process1 sends a message back once files are closed
            process1.join()
            process2.join()
            break
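For context, here is a minimal sketch of a shutdown protocol that avoids BrokenPipeError: the sender stops writing once it receives the stop request, sends a sentinel to its peer, acknowledges, and only then does the parent join. The worker bodies (worker1/worker2), the sentinel, and the message strings are my own assumptions, not the asker's actual code.

import multiprocessing as mp
import time

def worker1(data_conn, ctrl_conn):
    # Hypothetical worker: produces data until told to stop, then acknowledges.
    while True:
        if ctrl_conn.poll():
            if ctrl_conn.recv() == "kill":
                data_conn.send(None)    # sentinel: no more data will follow
                ctrl_conn.send("done")  # acknowledge after cleanup
                break
        data_conn.send("payload")
        time.sleep(0.1)
    data_conn.close()
    ctrl_conn.close()

def worker2(data_conn, ctrl_conn):
    # Hypothetical worker: consumes data until it sees the sentinel.
    while True:
        msg = data_conn.recv()
        if msg is None:
            break
    data_conn.close()
    ctrl_conn.close()

if __name__ == "__main__":
    data_pipe = mp.Pipe()
    ctrl_pipe1 = mp.Pipe()
    ctrl_pipe2 = mp.Pipe()
    p1 = mp.Process(target=worker1, args=(data_pipe[0], ctrl_pipe1[0]))
    p2 = mp.Process(target=worker2, args=(data_pipe[1], ctrl_pipe2[0]))
    p1.start()
    p2.start()
    # Shutdown (e.g. when the GUI closes): ask worker1 to stop, wait for the
    # acknowledgement, then join both processes instead of terminate()-ing them.
    time.sleep(1)
    ctrl_pipe1[1].send("kill")
    assert ctrl_pipe1[1].recv() == "done"
    p1.join()
    p2.join()

The key point is that neither end is torn down with terminate() while the other is still mid-send; both sides agree the conversation is over before anything is closed or joined.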

Related

How to run an EXE program in the background and get the output in Python

I want to run an exe program in the background
Let's say the program is httpd.exe
I can run it, but when I want to get the output it gets stuck, because there is no output if it starts successfully. If there is an error, though, it works fine.
Here is the code I'm using:
import asyncio
import os

async def run(cmd):
    proc = await asyncio.create_subprocess_exec(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await proc.communicate()
    return (proc, stdout, stderr)

os.chdir('c:\\apache\\bin')
process, stdout, stderr = asyncio.run(run('httpd.exe'))
print(stdout, stderr)
I tried to make the following code as general as possible:
I make no assumptions as to whether the program being run writes its output to stdout alone or to stderr alone. So I capture both outputs by starting two threads, one for each stream, and have them write to a common queue that can be read in real time. When end-of-stream is encountered on stdout or stderr, the corresponding thread writes a special None record to the queue. The reader of the queue therefore knows that after seeing two such end-of-stream indicators there will be no more lines written to the queue and that the process has effectively ended.
The call to subprocess.Popen can be made with shell=True so that it can also run built-in shell commands, and to make specifying the command easier (it can then be a single string rather than a list of strings).
The function run_cmd returns the created process and the queue. You just have to loop reading lines from the queue until two None records are seen. Once that occurs, you can simply wait for the process to complete, which should be immediate.
If you know that the process you are starting only writes its output to stdout or stderr (or if you only want to catch one of these outputs), then you can modify the program to start only one thread, pass subprocess.PIPE for only one of these outputs, and have the loop reading lines from the queue look for only one None end-of-stream indicator (a single-stream sketch follows the full example below).
The threads are daemon threads, so that if you wish to terminate based on output that has been read before all the end-of-stream records have been detected, the threads will automatically be terminated along with the main process.
run_apache, which runs Apache as a subprocess, is itself run in a daemon thread. If it detects any output from Apache, it sets an event that has been passed to it. The main thread that starts run_apache can periodically test this event, wait on it, wait for the run_apache thread to end (which will only occur when Apache ends), or terminate Apache via the global variable proc.
import subprocess
import sys
import threading
import queue

def read_stream(f, q):
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None)  # signal no more data from stdout or stderr

def run_cmd(command, run_in_shell=True):
    """
    Run command as a subprocess. If run_in_shell is True, then
    command is a string, else it is a list of strings.
    """
    proc = subprocess.Popen(command, shell=run_in_shell, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    q = queue.Queue()
    threading.Thread(target=read_stream, args=(proc.stdout, q), daemon=True).start()
    threading.Thread(target=read_stream, args=(proc.stderr, q), daemon=True).start()
    return proc, q
import os

def run_apache(event):
    global proc

    os.chdir('c:\\apache\\bin')
    proc, q = run_cmd(['httpd.exe'], False)
    seen_None_count = 0
    while seen_None_count < 2:
        line = q.get()
        if line is None:
            # end of stream from either stdout or stderr
            seen_None_count += 1
        else:
            event.set()  # seen an output line
            print(line, end='')
    # wait for process to terminate, which should be immediate:
    proc.wait()

# This event will be set if Apache writes any output:
event = threading.Event()
t = threading.Thread(target=run_apache, args=(event,), daemon=True)
t.start()

# Main thread runs and can test the event at any time to see if Apache has produced output:
if event.is_set():
    ...

# The main thread can wait for the run_apache thread to terminate normally,
# which will occur when Apache terminates:
t.join()
# or the main thread can kill Apache via the global variable proc:
proc.terminate()  # No need to do t.join() since run_apache is a daemon thread
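As noted above, if the process only writes to one of the streams (or you only care about one), a single reader thread and a single end-of-stream sentinel suffice. Here is a minimal sketch of that variant; the name run_cmd_stdout_only is mine, and stderr is simply left un-piped so it goes to the parent's stderr:

import subprocess
import threading
import queue

def read_stdout(f, q):
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None)  # single end-of-stream sentinel

def run_cmd_stdout_only(command, run_in_shell=True):
    # Only stdout is piped; stderr is not captured at all.
    proc = subprocess.Popen(command, shell=run_in_shell,
                            stdout=subprocess.PIPE, text=True)
    q = queue.Queue()
    threading.Thread(target=read_stdout, args=(proc.stdout, q), daemon=True).start()
    return proc, q

proc, q = run_cmd_stdout_only('httpd.exe')
while True:
    line = q.get()
    if line is None:  # only one sentinel to wait for now
        break
    print(line, end='')
proc.wait()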

How to terminate Python's `ProcessPoolExecutor` when parent process dies?

Is there a way to make the processes in concurrent.futures.ProcessPoolExecutor terminate if the parent process terminates for any reason?
Some details: I'm using ProcessPoolExecutor in a job that processes a lot of data. Sometimes I need to terminate the parent process with a kill command, but when I do that the processes from ProcessPoolExecutor keep running and I have to manually kill them too. My primary work loop looks like this:
with concurrent.futures.ProcessPoolExecutor(n_workers) as executor:
    result_list = [executor.submit(_do_work, data) for data in data_list]
    for id, future in enumerate(
            concurrent.futures.as_completed(result_list)):
        print(f'{id}: {future.result()}')
Is there anything I can add here or do differently to make the child processes in executor terminate if the parent dies?
You can start a thread in each worker process that terminates the worker when the parent process dies:
import os
import signal
import threading
import time

def start_thread_to_terminate_when_parent_process_dies(ppid):
    pid = os.getpid()

    def f():
        while True:
            try:
                os.kill(ppid, 0)  # signal 0 only checks whether ppid still exists
            except OSError:
                os.kill(pid, signal.SIGTERM)
            time.sleep(1)

    thread = threading.Thread(target=f, daemon=True)
    thread.start()
Usage: pass initializer and initargs to ProcessPoolExecutor
with concurrent.futures.ProcessPoolExecutor(
        n_workers,
        initializer=start_thread_to_terminate_when_parent_process_dies,  # +
        initargs=(os.getpid(),),  # +
) as executor:
This works even if the parent process is SIGKILL/kill -9'ed.
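Putting the two pieces together, here is a minimal self-contained sketch of this approach; the _do_work body, the worker count, and the data are placeholders I've made up, not part of the original answer:

import concurrent.futures
import os
import signal
import threading
import time

def start_thread_to_terminate_when_parent_process_dies(ppid):
    pid = os.getpid()

    def f():
        while True:
            try:
                os.kill(ppid, 0)  # does the parent still exist?
            except OSError:
                os.kill(pid, signal.SIGTERM)
            time.sleep(1)

    threading.Thread(target=f, daemon=True).start()

def _do_work(data):
    time.sleep(2)  # placeholder for real work
    return data * 2

if __name__ == '__main__':
    data_list = range(8)
    with concurrent.futures.ProcessPoolExecutor(
            4,
            initializer=start_thread_to_terminate_when_parent_process_dies,
            initargs=(os.getpid(),),
    ) as executor:
        futures = [executor.submit(_do_work, d) for d in data_list]
        for i, future in enumerate(concurrent.futures.as_completed(futures)):
            print(f'{i}: {future.result()}')

Killing the parent (even with kill -9) then leaves each worker's watchdog thread unable to signal the parent's PID, so each worker SIGTERMs itself within about a second.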
I would suggest two changes:
Use a kill -15 command, which can be handled by the Python program as a SIGTERM signal, rather than a kill -9 command.
Use a multiprocessing pool created with the multiprocessing.pool.Pool class, whose terminate method works quite differently from that of concurrent.futures.ProcessPoolExecutor: it kills all the processes in the pool, so any tasks that have been submitted and are currently running will also be terminated immediately.
Your equivalent program using the new pool and handling a SIGTERM interrupt would be:
from multiprocessing import Pool
import signal
import sys
import os

...

def handle_sigterm(*args):
    #print('Terminating...', file=sys.stderr, flush=True)
    pool.terminate()
    sys.exit(1)

# The process to be "killed", if necessary:
print(os.getpid(), file=sys.stderr)

pool = Pool(n_workers)
signal.signal(signal.SIGTERM, handle_sigterm)
results = pool.imap_unordered(_do_work, data_list)
for id, result in enumerate(results):
    print(f'{id}: {result}')
You could run the script inside a cgroup. When you need to kill the whole thing, you can do so by using the cgroup's kill switch (in cgroup v2, writing to the cgroup.kill file). Even a cpu cgroup will do the trick, since you can read the group's PIDs from it.
Check this article on how to use cgexec.
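The answer doesn't include code, but here is a minimal sketch of the PID-based variant: read every PID from the cgroup's cgroup.procs file and signal it. The cgroup path is a made-up example; you would have to create the group and move the parent process into it beforehand (e.g. with cgcreate/cgexec).

import os
import signal

# Hypothetical cgroup path used for illustration only.
CGROUP = '/sys/fs/cgroup/cpu/myjob'

def kill_cgroup(cgroup_path):
    # Read every PID currently in the cgroup and SIGKILL it.
    with open(os.path.join(cgroup_path, 'cgroup.procs')) as f:
        pids = [int(line) for line in f if line.strip()]
    for pid in pids:
        try:
            os.kill(pid, signal.SIGKILL)
        except ProcessLookupError:
            pass  # process already exited

kill_cgroup(CGROUP)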

os._exit(1) does not kill non-daemonic sibling processes

I am writing a python script which has 2 child processes. The main logic occurs in one process and another process waits for some time and then kills the main process even if the logic is not done.
I read that calling os._exit(1) stops the interpreter, so the entire script is killed automatically. I've used it as shown below:
import os
import time
from multiprocessing import Process, Lock
from multiprocessing.sharedctypes import Array

# Main process
def main_process(shared_variable):
    shared_variable.value = b"mainprc"
    time.sleep(20)
    print("Task finished normally.")
    os._exit(1)

# Timer process
def timer_process(shared_variable):
    threshold_time_secs = 5
    time.sleep(threshold_time_secs)
    print("Timeout reached")
    print("Shared variable ", shared_variable.value)
    print("Task is shutdown.")
    os._exit(1)

if __name__ == "__main__":
    lock = Lock()
    shared_variable = Array('c', b"initial", lock=lock)
    process_main = Process(target=main_process, args=(shared_variable,))
    process_timer = Process(target=timer_process, args=(shared_variable,))
    process_main.start()
    process_timer.start()
    process_timer.join()
The timer process calls os._exit, but the script still waits for the main process to print "Task finished normally." before exiting.
How do I make it so that if the timer process exits, the entire program is shut down (including the main process)?
Thanks.
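No answer is included in this excerpt, but the underlying issue is that os._exit() in a child process only ends that child, not its siblings or the parent. One common fix (my sketch, not from the original thread) is to let the parent join the timer process and then terminate the main process itself:

import time
from multiprocessing import Process
from multiprocessing.sharedctypes import Array

def main_process(shared_variable):
    shared_variable.value = b"mainprc"
    time.sleep(20)  # stands in for the real work
    print("Task finished normally.")

def timer_process(shared_variable):
    time.sleep(5)
    print("Timeout reached, shared variable:", shared_variable.value)

if __name__ == "__main__":
    shared_variable = Array('c', b"initial")
    process_main = Process(target=main_process, args=(shared_variable,))
    process_timer = Process(target=timer_process, args=(shared_variable,))
    process_main.start()
    process_timer.start()
    process_timer.join()            # returns once the timeout has elapsed
    if process_main.is_alive():     # main still running: cut it short
        process_main.terminate()
        process_main.join()

If you want the program to end as soon as either process finishes, you can instead call process_main.join(timeout=5) and skip the separate timer process entirely.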

Kill a multiprocessing pool with SIGKILL instead of SIGTERM (I think)

So, I have this program that utilizes multiprocessing with multiple selenium browser windows.
Here's what the program looks like:
from multiprocessing import Pool
import time

pool = Pool(5)
results = pool.map_async(worker, range(10))  # worker opens a selenium browser window (not shown)
time.sleep(10)
pool.terminate()
However, this waits for the tasks currently running in the pool to complete. I want instant termination of all the workers.
multiprocessing.Pool stores its list of worker processes in the Pool._pool attribute, so sending a signal to them is straightforward:
import multiprocessing
import os
import signal

def kill(pool):
    # stop repopulating new children
    pool._state = multiprocessing.pool.TERMINATE
    pool._worker_handler._state = multiprocessing.pool.TERMINATE
    for p in pool._pool:
        os.kill(p.pid, signal.SIGKILL)
    # .is_alive() will reap each dead process
    while any(p.is_alive() for p in pool._pool):
        pass
    pool.terminate()
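Applied to the question's snippet, usage would look roughly like this (assuming the same worker function from the question):

from multiprocessing import Pool
import time

pool = Pool(5)
results = pool.map_async(worker, range(10))
time.sleep(10)
kill(pool)  # instead of pool.terminate(): SIGKILLs every worker immediately

Note that this relies on private Pool attributes (_pool, _state, _worker_handler), so it may break between Python versions.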

PYTHON subprocess cmd.exe closes after first command

I am working on a Python program which wraps the cmd window.
I am using subprocess with PIPE.
If, for example, I write "dir" (via stdin), I use communicate() in order to get the response from cmd, and it does work.
The problem is that in a while True loop this doesn't work more than once; it seems like the subprocess closes itself.
Help me please.
import subprocess

process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
x = ""
while x != "x":
    x = raw_input("insert a command \n")
    process.stdin.write(x + "\n")
    o, e = process.communicate()
    print o
process.stdin.close()
The main problem is that trying to read from subprocess.PIPE deadlocks when the program is still running but there is nothing to read from stdout. communicate() avoids this by closing stdin and waiting for the process to exit, which is why cmd.exe quits after the first command.
A solution would be to put the piece of code that reads stdout in another thread, and then access it via a Queue, which allows for reliable sharing of data between threads by timing out instead of deadlocking.
The new thread will read standard output continuously, stopping when there is no more data.
Each line will be grabbed from the queue until a timeout is reached (no more data in the queue), then the list of lines will be printed to the screen.
This approach will work for non-interactive programs.
import subprocess
import threading
import Queue

def read_stdout(stdout, queue):
    while True:
        queue.put(stdout.readline())  # this hangs when there is no IO

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

q = Queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q))
t.daemon = True  # t stops when the main thread stops
t.start()

while True:
    x = raw_input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    o = []
    try:
        while True:
            o.append(q.get(timeout=.1))
    except Queue.Empty:
        print ''.join(o)
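The answer above is written for Python 2 (Queue, raw_input, the print statement). For reference, a rough Python 3 sketch of the same thread-plus-queue idea would be:

import queue
import subprocess
import threading

def read_stdout(stdout, q):
    while True:
        q.put(stdout.readline())  # blocks when there is no more output

process = subprocess.Popen('cmd.exe', shell=False, text=True,
                           stdout=subprocess.PIPE, stdin=subprocess.PIPE)

q = queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q), daemon=True)
t.start()

while True:
    x = input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    process.stdin.flush()  # needed in Python 3: the text-mode pipe is buffered
    lines = []
    try:
        while True:
            lines.append(q.get(timeout=.1))
    except queue.Empty:
        print(''.join(lines))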
