Python popen command: wait until the command is finished

I have a script where I launch a shell command with popen.
The problem is that the script doesn't wait until that popen command is finished and continues right away.
om_points = os.popen(command, "w")
.....
How can I tell to my Python script to wait until the shell command has finished?

Depending on how you want your script to work, you have two options. If you want the command to block and not do anything while it is executing, you can just use subprocess.call.
# start and block until done; ">" is shell syntax and is not interpreted
# in an argument list, so redirect stdout to the file explicitly instead
with open(diz['d'] + "/points.xml", "w") as outfile:
    subprocess.call([data["om_points"]], stdout=outfile)
If you want to do things while it is executing or feed things into stdin, you can use communicate after the popen call.
# start and process things, then wait
# (same fix: redirect stdout via a file handle rather than ">")
with open(diz['d'] + "/points.xml", "w") as outfile:
    p = subprocess.Popen([data["om_points"]], stdout=outfile)
    print("Happens while running")
    p.communicate()  # now wait; you can also send input to the process's stdin
As stated in the documentation, wait can deadlock when used with pipes, so communicate is advisable.
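For illustration, here is a minimal sketch of that point (some_command is a stand-in for your own program):
import subprocess

# Hypothetical command that produces a lot of output
p = subprocess.Popen(["some_command"], stdout=subprocess.PIPE)
# p.wait() could deadlock here once the OS pipe buffer fills up,
# because nobody is reading stdout; communicate() reads while waiting.
out, _ = p.communicate()
print(out.decode())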

You can use subprocess to achieve this.
import subprocess

# This command string can contain multiple commands separated by newlines (\n)
some_command = "export PATH=$PATH://server.sample.mo/app/bin \n customupload abc.txt"
p = subprocess.Popen(some_command, stdout=subprocess.PIPE, shell=True)
# communicate() waits for the process to finish and collects its output
(output, err) = p.communicate()
p_status = p.returncode  # communicate() has already waited, so wait() is redundant
# This will give you the output of the command being executed
print("Command output: " + output.decode())

Force popen not to continue until all output is read by doing:
os.popen(command).read()

Let the command you are trying to run be
os.system('x')
Then convert it to a statement:
t = os.system('x')
Python will now wait for the command to finish so that its result can be assigned to the variable t (note that os.system returns the command's exit status, not its output).
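A quick sketch of that behavior (ls stands in for your command):
import os

# os.system blocks until the command exits and returns its exit status
# (not its output); 0 conventionally means success
t = os.system("ls -l")
print("exit status:", t)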

What you are looking for is the wait method.

wait() works fine for me. The subprocesses p1, p2 and p3 are executed at the same time. Therefore, all processes are done after 3 seconds.
import subprocess
processes = []
p1 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p2 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p3 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
processes.append(p1)
processes.append(p2)
processes.append(p3)
for p in processes:
    if p.wait() != 0:
        print("There was an error")
print("all processes finished")

I think process.communicate() is suitable when the output is small. For larger output it would not be the best approach.
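If the output can be large, one alternative is to stream it line by line instead; a minimal sketch (some_command is a stand-in):
import subprocess

p = subprocess.Popen(["some_command"], stdout=subprocess.PIPE, text=True)
for line in p.stdout:      # read incrementally instead of buffering everything
    print(line, end="")
p.stdout.close()
p.wait()                   # safe now: stdout has been fully drained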

Related


kill subprocess.Popen().write after the process is finished

I am trying to update a router from a Python script with only one ssh call.
However, the kill() function is executed before the update starts.
import subprocess

process_1 = f' opkg update'
process_2 = f' echo process 2'
cmds = [
    f'{process_1}\n',
    f'{process_2}'
]
proc = subprocess.Popen(["ssh", "root@192.168.1.1"], stdin=subprocess.PIPE)
for cmd in cmds:
    proc.stdin.write(f'{cmd}'.encode())
    proc.stdin.flush()
proc.stdin.close()
proc.kill()
Solution
.wait() is the method I was looking for
process_1 = f' opkg update'
process_2 = f' echo process 2'
cmds = [
    f'{process_1}\n',
    f'{process_2}'
]
proc = subprocess.Popen(["ssh", "root@192.168.1.1"], stdin=subprocess.PIPE)
for cmd in cmds:
    proc.stdin.write(f'{cmd}'.encode())
    proc.stdin.flush()
proc.stdin.close()
proc.wait()
Passing commands to standard input of ssh is somewhat finicky. A much better solution is to switch to Paramiko, but if your needs are simple, just refactor to pass the commands as arguments to ssh.
result = subprocess.run(
    ["ssh", "root@192.168.1.1", "\n".join(cmds)],
    check=True)
Like the documentation already tells you, you should generally prefer subprocess.run (or its legacy siblings check_call, check_output, etc.) over Popen whenever you can. The problem you were experiencing is one of the symptoms, and of course, the fixed code is also much shorter and easier to understand.
As a further aside, f'{thing}' is just a really clumsy way to write thing (or str(thing) if thing isn't already a string).

How to kill a parallel process (started by subprocess.Popen) and its subprocesses? [duplicate]

I'm launching a subprocess with the following command:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
However, when I try to kill using:
p.terminate()
or
p.kill()
The command keeps running in the background, so I was wondering how I can actually terminate the process.
Note that when I run the command with:
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
It does terminate successfully when issuing the p.terminate().
Use a process group so as to enable sending a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of the processes. So now, when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.
Here's the code:
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # Send the signal to all the processes in the group
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.kill()
p.kill() ends up killing the shell process and cmd is still running.
I found a convenient fix for this:
p = subprocess.Popen("exec " + cmd, stdout=subprocess.PIPE, shell=True)
This will cause cmd to replace the shell process (exec swaps out the shell's process image), instead of having the shell launch a child process, which does not get killed. p.pid will then be the id of your cmd process.
p.kill() should work.
I don't know what effect this will have on your pipe though.
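For a quick check of the exec trick, sleep can stand in for cmd:
import subprocess

# "exec" makes the command replace the shell, so p.pid is the command's pid
p = subprocess.Popen("exec sleep 100", stdout=subprocess.PIPE, shell=True)
p.kill()   # kills sleep itself, not an intermediate shell
p.wait()   # reap the process; returncode is -9 (SIGKILL) on POSIX
print(p.returncode)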
If you can use psutil, then this works perfectly:
import subprocess
import psutil
def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

# note: shell=True combined with an argument list would drop "param",
# so the list is passed without shell=True here
proc = subprocess.Popen(["infinite_app", "param"])
try:
    proc.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(proc.pid)
I could do it using
from subprocess import Popen
process = Popen(command, shell=True)
Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
It killed the cmd.exe and the program that I gave the command for.
(On Windows)
When shell=True the shell is the child process, and the commands are its children. So any SIGTERM or SIGKILL will kill the shell but not its child processes, and I don't remember a good way to do it.
The best way I can think of is to use shell=False, otherwise when you kill the parent shell process, it will leave a defunct shell process.
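A minimal sketch of the shell=False approach (sleep stands in for the real command):
import subprocess

# With an argument list and the default shell=False, p is the command itself,
# so terminate() acts on it directly and no shell process is left behind
p = subprocess.Popen(["sleep", "100"], stdout=subprocess.PIPE)
p.terminate()
p.wait()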
None of these answers worked for me, so I'm leaving the code that did work. In my case, even after killing the process with .kill() and getting a .poll() return code, the process didn't terminate.
Following the subprocess.Popen documentation:
"...in order to cleanup properly a well-behaved application should kill the child process and finish communication..."
proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
In my case I was missing the proc.communicate() after calling proc.kill(). This cleans up the process's stdin, stdout, etc., and does terminate the process.
As Sai said, the shell is the child, so signals are intercepted by it -- best way I've found is to use shell=False and use shlex to split the command line:
import shlex

# Python 3: str is already Unicode, so the old Python 2 encode step is unnecessary
args = shlex.split(command)
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Then p.kill() and p.terminate() should work how you expect.
Send the signal to all the processes in the group:
self.proc = Popen(commands,
                  stdout=PIPE,
                  stderr=STDOUT,
                  universal_newlines=True,
                  preexec_fn=os.setsid)

os.killpg(os.getpgid(self.proc.pid), signal.SIGHUP)
os.killpg(os.getpgid(self.proc.pid), signal.SIGTERM)
There is a very simple way for Python 3.5+ (actually tested on Python 3.8):
import subprocess, signal, time

p = subprocess.Popen(['cmd'], shell=True)
time.sleep(5)  # Wait 5 secs before killing
p.send_signal(signal.CTRL_C_EVENT)  # CTRL_C_EVENT is available on Windows only
Then, your code may crash at some point if you have keyboard input detection, or something like this. In this case, on the line of code/function where the error is raised, just use:
try:
    FailingCode  # here goes the code which is raising KeyboardInterrupt
except KeyboardInterrupt:
    pass
What this code is doing is just sending a "CTRL+C" signal to the running process, which will cause the process to get killed.
Solution that worked for me
if os.name == 'nt':  # Windows
    subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
else:
    os.kill(process.pid, signal.SIGTERM)
Full-blown solution that will kill the running process (including its subtree) on timeout, or when specific conditions are met via a callback function.
Works on both Windows and Linux, from Python 2.7 up to 3.10 as of this writing.
Install with pip install command_runner
Example for timeout:
from command_runner import command_runner
# Kills ping after 2 seconds
exit_code, output = command_runner('ping 127.0.0.1', shell=True, timeout=2)
Example for specific condition:
Here we'll stop ping if the current system time's seconds digit is > 5:
from time import time
from command_runner import command_runner
def my_condition():
    # Arbitrary condition for demo
    return int(str(int(time()))[-1]) > 5
# Calls my_condition() every second (check_interval) and kills ping if my_condition() returns True
exit_code, output = command_runner('ping 127.0.0.1', shell=True, stop_on=my_condition, check_interval=1)

subprocess.popen detached from master (Linux)

I am trying to open a subprocess but have it be detached from the parent script that called it. Right now, if I call subprocess.Popen and the parent script crashes, the subprocess dies as well.
I know there are a couple of options for Windows, but I have not found anything for *nix.
I also don't need to call this using subprocess. All I need is to be able to call another process detached and get the pid.
With Linux, it's no issue at all. Just Popen(). For example, here is a little dying_demon.py
#!/usr/bin/python -u
from time import sleep
from subprocess import Popen

print(Popen(["python", "-u", "child.py"]).pid)
i = 0
while True:
    i += 1
    print("demon: %d" % i)
    sleep(1)
    if i == 3:
        i = hurz  # deliberate NameError: crashes the parent with an exception
spinning off a child.py
#!/usr/bin/python -u
from time import sleep

i = 0
while True:
    i += 1
    print("child: %d" % i)
    sleep(1)
    if i == 20:
        break
The child continues to count (to the console), while the demon dies from the exception.
I think this should do the trick: https://www.python.org/dev/peps/pep-3143/#reference-implementation
You can create a daemon which will call your subprocess, passing detach_process=True.
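A minimal sketch of that idea, assuming the python-daemon package (the reference implementation of that PEP) and a hypothetical launch function:
import daemon  # pip install python-daemon

# detach_process=True detaches from the controlling terminal and session
with daemon.DaemonContext(detach_process=True):
    launch_my_subprocess()  # hypothetical function that starts your child process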
This might do what you want:
import os
import subprocess

def cmd_detach(*command, **kwargs) -> subprocess.CompletedProcess:
    # https://stackoverflow.com/questions/62521658/python-subprocess-detach-a-process
    # if using with ffmpeg remember to run it with `-nostdin`
    stdout = os.open(os.devnull, os.O_WRONLY)
    stderr = os.open(os.devnull, os.O_WRONLY)
    stdin = os.open(os.devnull, os.O_RDONLY)
    command = conform(command)  # conform() is the answer author's own helper, not shown here
    if command[0] in ["fish", "bash"]:
        import shlex
        command = command[0:2] + [shlex.join(command[2:])]
    subprocess.Popen(command, stdin=stdin, stdout=stdout, stderr=stderr,
                     close_fds=True, start_new_session=True, **kwargs)
    return subprocess.CompletedProcess(command, 0, "Detached command is async")
On Windows you might need
CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008
creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP
instead of start_new_session=True
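Put together, that might look like the following untested sketch for Windows (child.py is a stand-in script):
import subprocess

CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008

# The child gets its own detached process group, so it survives the parent
p = subprocess.Popen(["python", "-u", "child.py"],
                     creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
print(p.pid)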
I managed to get it working by doing the following using python-daemon:
import subprocess
import time

process = subprocess.Popen(["python", "-u", "Child.py"])
time.sleep(2)
process.kill()
Then in Child.py:
import daemon
import time

with daemon.DaemonContext():
    print("Child Started")
    time.sleep(30)
    print("Done")
    exit()
I do process.kill() because otherwise it creates a defunct Python process. The main problem I have now is that the PID that Popen returns does not match the final pid of the process. I can get around this by adding a function in Child.py to update a database with the pid.
Let me know if there is something that I am missing or if this is an ok method of doing this.
Fork the subprocesses using nohup.
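A minimal sketch of that suggestion (child.py as in the example above):
import subprocess

# nohup makes the child ignore SIGHUP when the parent or its terminal dies;
# detaching stdio keeps it from blocking on a closed terminal
p = subprocess.Popen(["nohup", "python", "-u", "child.py"],
                     stdout=open("nohup.out", "w"),
                     stderr=subprocess.STDOUT,
                     stdin=subprocess.DEVNULL)
print(p.pid)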

how to poll and exit a subprocess

I'd like to start a process, wait 2 seconds, print out whatever is in the stderr and stdout pipes so far, and then exit. Here is the code I have so far, and it doesn't seem to work as hoped. What am I doing wrong?
There are 3 issues:
1. The program as it stands prints out "done", then waits for the subprocess to complete before printing out the first line.
2. As it stands, the script reads one line. How do I read to the end of the current buffer?
3. Will the subprocess exit if the calling script exits? If so, how should I modify the function call so that the subprocess runs to completion even if the calling script exits?
import subprocess
import time

cmdStr = "./stepper.py"
proc = subprocess.Popen(cmdStr, shell=True, bufsize=-1, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
print("polling")
time.sleep(2)
print("done")
print(proc.stdout.readline())
Here is what stepper.py looks like:
import sys
import time

out = open("stepper.log", 'w')
for idx in range(3):
    time.sleep(2)
    print("Idx", idx)
    sys.stdout.flush()
    out.write("%d\n" % (idx))
print("finished")
out.write("closing\n")
out.close()
