Disable console output from subprocess.Popen in Python - python

I run Python 2.5 on Windows, and somewhere in the code I have
subprocess.Popen("taskkill /PID " + str(p.pid))
to kill an IE window by PID. The problem is that even without setting up piping in Popen I still get output on the console: SUCCESS: The process with PID 2068 has been terminated. I traced it down to CreateProcess in subprocess.py, but couldn't get any further.
Does anyone know how to disable this?

from subprocess import check_call, DEVNULL, STDOUT
check_call(
    ("taskkill", "/PID", str(p.pid)),
    stdout=DEVNULL,
    stderr=STDOUT,
)
I always pass in tuples (or lists) to subprocess as it saves me worrying about escaping. check_call ensures (a) the subprocess has finished before the pipe closes, and (b) a failure in the called process is not ignored.
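For instance, if a failure should be handled rather than allowed to propagate, a small sketch like this would catch it (assuming p is the Popen object for the IE window, as in the question):
from subprocess import check_call, CalledProcessError, DEVNULL, STDOUT

try:
    check_call(("taskkill", "/PID", str(p.pid)),
               stdout=DEVNULL, stderr=STDOUT)
except CalledProcessError as e:
    # taskkill exited with a non-zero status, e.g. the PID no longer exists
    print("taskkill failed with exit code", e.returncode)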
If you're stuck on Python 2, subprocess doesn't provide DEVNULL. However, you can replicate it by opening os.devnull (the standard, cross-platform way of spelling NUL, available since Python 2.4):
import os
from subprocess import check_call, STDOUT
DEVNULL = open(os.devnull, 'wb')
try:
    check_call(
        ("taskkill", "/PID", str(p.pid)),
        stdout=DEVNULL,
        stderr=STDOUT,
    )
finally:
    DEVNULL.close()

import subprocess

fh = open("NUL", "w")  # Windows null device; os.devnull gives a portable name
subprocess.Popen("taskkill /PID " + str(p.pid), stdout=fh, stderr=fh)
fh.close()

Related

run a shell command in the background and pipe stdout to a logfile with python

I have a .war file I'd like to launch via Python. I want it to run in the background so no log messages appear in my terminal. I would also love to have the actual log output written to a logfile. This is the Python code I'm using to try to solve this.
I've had no luck so far. The process is not detached, because I cannot run other shell commands after executing the script. The logfile is created, but no log output is appended to it.
EDIT: To make things clearer: in the end I want to enhance this script to run multiple Java processes. This Python script should spawn those Java processes and then exit. How do I achieve exactly that, including redirecting stdout to a file?
#!/usr/bin/env python3
import subprocess
import re
platformDir = "./platform/"
fe = "frontend-webapp-0.5.0.war"
logfile = open("frontend-log", 'w')
process = subprocess.Popen(['java', '-jar', platformDir + fe],
                           stdout=subprocess.PIPE)
for line in process.stdout:
    logfile.write(line)
Here is how you can try it using Python 3:
import sys
from subprocess import Popen, PIPE, STDOUT
platformDir = "./platform/"
fe = "frontend-webapp-0.5.0.war"
logfile = open("frontend-log", 'ab')
p = Popen(['java', '-jar', platformDir + fe], stdout=PIPE, stderr=STDOUT, bufsize=1)
for line in p.stdout:
    sys.stdout.buffer.write(line)
    logfile.write(line)
I actually just had to give stdout the file handle, as in stdout=logfile. The solution with the for loop over stdout kept me stuck inside the Python script the whole time.
import sys
from subprocess import Popen, PIPE, STDOUT
platformDir = "./platform/"
fe = "frontend-webapp-0.5.0.war"
logfile = open("frontend-log", 'ab')
p = Popen(['java', '-jar', platformDir + fe], stdout=logfile, stderr=STDOUT, bufsize=1)
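For the EDIT about spawning multiple Java processes and letting the script exit, a minimal sketch along these lines should work (the second .war name is made up purely for illustration):
#!/usr/bin/env python3
from subprocess import Popen, STDOUT

platformDir = "./platform/"
# The second entry is hypothetical; substitute your real .war files.
wars = ["frontend-webapp-0.5.0.war", "backend-webapp-0.5.0.war"]

for war in wars:
    logfile = open(war + ".log", "ab")           # one log file per process
    Popen(["java", "-jar", platformDir + war],
          stdout=logfile, stderr=STDOUT)         # child inherits the file handle
# No wait() calls: the script exits right away and the java processes keep
# running, each appending output to its own log file.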

closing python command subprocesses

I want to continue with other commands after closing the subprocess. I have the following code, but fsutil is not executed. How can I do this?
import os
from subprocess import Popen, PIPE, STDOUT
os.system('mkdir c:\\temp\\vhd')
p = Popen( ["diskpart"], stdin=PIPE, stdout=PIPE )
p.stdin.write("create vdisk file=c:\\temp\\vhd\\test.vhd maximum=2000 type=expandable\n")
p.stdin.write("attach vdisk\n")
p.stdin.write("create partition primary size=10\n")
p.stdin.write("format fs=ntfs quick\n")
p.stdin.write("assign letter=r\n")
p.stdin.write("exit\n")
p.stdout.close
os.system('fsutil file createnew r:\\dummy.txt 6553600')  # this doesn't get executed
At the least, I think you need to change your code to look like this:
import os
from subprocess import Popen, PIPE
os.system('mkdir c:\\temp\\vhd')
p = Popen(["diskpart"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
p.stdin.write("create vdisk file=c:\\temp\\vhd\\test.vhd maximum=2000 type=expandable\n")
p.stdin.write("attach vdisk\n")
p.stdin.write("create partition primary size=10\n")
p.stdin.write("format fs=ntfs quick\n")
p.stdin.write("assign letter=r\n")
p.stdin.write("exit\n")
results, errors = p.communicate()
os.system('fsutil file createnew r:\\dummy.txt 6553600')
From the documentation for Popen.communicate():
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
You could replace the p.communicate() with p.wait(), but there is this warning in the documentation for Popen.wait():
Warning: This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
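That input argument can also replace the separate stdin.write() calls entirely; a minimal sketch along those lines (same diskpart script, text mode so plain strings work on Python 3 as well):
import os
from subprocess import Popen, PIPE

os.system('mkdir c:\\temp\\vhd')

# Feed the whole diskpart script through communicate()'s input argument
# instead of individual stdin.write() calls.
script = (
    "create vdisk file=c:\\temp\\vhd\\test.vhd maximum=2000 type=expandable\n"
    "attach vdisk\n"
    "create partition primary size=10\n"
    "format fs=ntfs quick\n"
    "assign letter=r\n"
    "exit\n"
)
p = Popen(["diskpart"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)
results, errors = p.communicate(input=script)
os.system('fsutil file createnew r:\\dummy.txt 6553600')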

subprocess.Popen handling stdout and stderr as they come

I'm trying to process both stdout and stderr from a subprocess.Popen call that captures both via subprocess.PIPE, but I would like to handle the output (for example, printing it on the terminal) as it comes.
All the current solutions that I've seen will wait for the completion of the Popen call, to ensure that all of the stdout and stderr is captured so that it can then be processed.
This is an example Python script with mixed output whose ordering I can't seem to reproduce when processing it in real time (or as close to real time as I can get):
$ cat mix_out.py
import sys
sys.stdout.write('this is an stdout line\n')
sys.stdout.write('this is an stdout line\n')
sys.stderr.write('this is an stderr line\n')
sys.stderr.write('this is an stderr line\n')
sys.stderr.write('this is an stderr line\n')
sys.stdout.write('this is an stdout line\n')
sys.stderr.write('this is an stderr line\n')
sys.stdout.write('this is an stdout line\n')
The one approach that seems like it might work would be using threads, because then the reads would be asynchronous and could be processed as the subprocess yields output.
The current implementation just processes all of stdout first and all of stderr last, which can be deceiving if the output originally alternated between the two:
cmd = ['python', 'mix_out.py']

process = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    close_fds=True,
    **kw
)

if process.stdout:
    while True:
        out = process.stdout.readline()
        if out == '' and process.poll() is not None:
            break
        if out != '':
            print 'stdout: %s' % out
            sys.stdout.flush()

if process.stderr:
    while True:
        err = process.stderr.readline()
        if err == '' and process.poll() is not None:
            break
        if err != '':
            print 'stderr: %s' % err
            sys.stderr.flush()
If I run the above (saved as out.py) against the mix_out.py example script, the streams are (as expected with this implementation) handled one after the other instead of interleaved:
$ python out.py
stdout: this is an stdout line
stdout: this is an stdout line
stdout: this is an stdout line
stdout: this is an stdout line
stderr: this is an stderr line
stderr: this is an stderr line
stderr: this is an stderr line
stderr: this is an stderr line
I understand that some system calls might buffer, and I am OK with that; the one thing I am looking to solve is respecting the order of the streams as the output happened.
Is there a way to process both stdout and stderr as they come from the subprocess without having to use threads? (The code gets executed on restricted remote systems where threading is not possible.)
The need to differentiate stdout from stderr is a must (as shown in the example output).
Ideally, no extra libraries would be best (e.g. I know pexpect solves this).
A lot of examples out there mention the use of select, but I have failed to come up with something that preserves the order of the output with it.
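(For reference, the thread-based approach mentioned above would look roughly like the sketch below. It is ruled out in my environment, and the interleaving it produces is still only approximate, but it shows what the two readers need to do.)
import subprocess
import sys
from threading import Thread

def pump(pipe, label, out):
    # Read one line at a time and tag it so stdout and stderr stay distinguishable.
    for line in iter(pipe.readline, b''):
        out.write('%s: %s' % (label, line.decode()))
        out.flush()
    pipe.close()

process = subprocess.Popen(['python', 'mix_out.py'],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
threads = [Thread(target=pump, args=(process.stdout, 'stdout', sys.stdout)),
           Thread(target=pump, args=(process.stderr, 'stderr', sys.stderr))]
for t in threads:
    t.start()
for t in threads:
    t.join()
process.wait()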
If you are looking for a way to have the subprocess.Popen output go to stdout/stderr in real time, you should be able to achieve that with:
import sys, subprocess

p = subprocess.Popen(cmdline,
                     stdout=sys.stdout,
                     stderr=sys.stderr)
Using stderr=subprocess.STDOUT may also simplify your filtering, IMO.
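For example, a minimal sketch of that merged variant (assuming cmdline is defined as above):
import sys, subprocess

# Merging stderr into stdout means only one stream has to be handled,
# at the cost of losing the stdout/stderr distinction the question needs.
p = subprocess.Popen(cmdline, stdout=sys.stdout, stderr=subprocess.STDOUT)
p.wait()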
I found a working example here (see the listing of capture_together.py). Compiled C++ code that mixes cerr and cout was executed as a subprocess on both Windows and UNIX; the results were identical.
I was able to solve this by using select.select()
import subprocess
import sys
from select import select

process = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    close_fds=True,
    **kw
)

while True:
    reads, _, _ = select(
        [process.stdout.fileno(), process.stderr.fileno()],
        [], []
    )

    for descriptor in reads:
        if descriptor == process.stdout.fileno():
            read = process.stdout.readline()
            if read:
                print 'stdout: %s' % read

        if descriptor == process.stderr.fileno():
            read = process.stderr.readline()
            if read:
                print 'stderr: %s' % read
    sys.stdout.flush()

    if process.poll() is not None:
        break
This passes the file descriptors to select() as its first argument (the read list) and loops over whichever descriptors are ready, for as long as process.poll() indicates that the process is still alive.
No need for threads. The code was adapted from this stackoverflow answer.

Calling ffmpeg kills script in background only

I've got a Python script that calls ffmpeg via subprocess to do some mp3 manipulations. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its config into stderr. At this point everything stops and the parent task is reported as stopped, without an exception being raised anywhere. I've tried a few other simple commands in place of ffmpeg; they execute normally in the foreground or background.
This is the minimal example of the problem:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg",
        "-y",
        "-i", inf,
        "-ss", "0",
        "-t", "20",
        outf
        ]
print "About to do"
result = subprocess.call(args)
print "Done"
I really can't work out why or how a wrapped process can cause the parent to terminate without at least raising an error, and how it only happens in so niche a circumstance. What is going on?
Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has ffmpeg compiled into it, so using it again seems sensible.
It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:
from subprocess import check_call
check_call(["python", "-c", "import sys; sys.stdin.readline()"])
should reproduce the issue: "parent.py script shown as stopped" if you run it in bash as a background job:
$ python parent.py &
[1] 28052
$ jobs
[1]+ Stopped python parent.py
If a background process tries to read from the terminal, it is sent the SIGTTIN signal (a signal to stop), which suspends its whole process group, hence the Stopped status above.
The solution is to redirect the input:
import os
from subprocess import check_call
try:
    from subprocess import DEVNULL
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)
If you don't need to see ffmpeg's stdout/stderr, you could also redirect them to /dev/null (importing STDOUT from subprocess as well):
check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
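Applied to the ffmpeg call from the question, that might look roughly like this sketch (reusing the inf/outf values from above):
import os
from subprocess import check_call, STDOUT
try:
    from subprocess import DEVNULL  # Python 3.3+
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg", "-y", "-i", inf, "-ss", "0", "-t", "20", outf]
# stdin=DEVNULL keeps a background job from ever reading the terminal
# (avoiding SIGTTIN); stdout/stderr are silenced entirely.
check_call(args, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)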
I like to use the commands module (Python 2 only; it was removed in Python 3). It's simpler to use, in my opinion.
import commands
cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
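On Python 3, where the commands module is gone, subprocess.getstatusoutput() is the closest equivalent; a sketch assuming the same inf/outf variables:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
# getstatusoutput runs the command through the shell and returns the exit
# status together with the combined output, like commands.getstatusoutput did.
status, output = subprocess.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)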
As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.
matt#goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
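If PATH does turn out to be the problem, one option is to hard-code the absolute path found above; a sketch:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
# Use the absolute path reported by `which ffmpeg` so PATH differences
# between the foreground and background environments don't matter.
args = ["/opt/local/bin/ffmpeg", "-y", "-i", inf, "-ss", "0", "-t", "20", outf]
result = subprocess.call(args)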
From the python/subprocess/call documentation:
Wait for command to complete, then return the returncode attribute.
So as long as the process you called does not exit, your program does not go on.
You should set up a Popen object, put its standard output and error into different streams, and terminate the process when there is an error.
Maybe something like this works:
import subprocess

proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # puts stderr into a new stream
while proc.poll() is None:
    try:
        err = proc.stderr.read()
    except:
        continue
    else:
        if err:
            proc.terminate()
            break

How to write to stdout AND to log file simultaneously with Popen?

I am using Popen to call a shell script that is continuously writing its stdout and stderr to a log file. Is there any way to simultaneously output the log file continuously (to the screen), or alternatively, make the shell script write to both the log file and stdout at the same time?
I basically want to do something like this in Python:
cat file 2>&1 | tee -a logfile #"cat file" will be replaced with some script
Again, this pipes stderr/stdout together to tee, which writes it both to stdout and my logfile.
I know how to write stdout and stderr to a logfile in Python. Where I'm stuck is how to duplicate these back to the screen:
subprocess.Popen("cat file", shell=True, stdout=logfile, stderr=logfile)
Of course, I could just do something like this, but is there any way to do it without tee and shell file descriptor redirection?
subprocess.Popen("cat file 2>&1 | tee -a logfile", shell=True)
You can use a pipe to read the data from the program's stdout and write it to all the places you want:
import sys
import subprocess
logfile = open('logfile', 'w')
proc = subprocess.Popen(['cat', 'file'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in proc.stdout:
    sys.stdout.write(line)
    logfile.write(line)
proc.wait()
UPDATE
In Python 3, the universal_newlines parameter controls how pipes are used. If False, pipe reads return bytes objects that may need to be decoded (e.g., line.decode('utf-8')) to get a string. If True, Python does the decoding for you:
Changed in version 3.3: When universal_newlines is True, the class uses the encoding locale.getpreferredencoding(False) instead of locale.getpreferredencoding(). See the io.TextIOWrapper class for more information on this change.
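For instance, a minimal text-mode sketch of the same loop (assuming the same 'cat file' command):
import sys
import subprocess

logfile = open('logfile', 'w')
# universal_newlines=True makes proc.stdout yield str lines, so they can be
# written to a text-mode file and to sys.stdout without manual decoding.
proc = subprocess.Popen(['cat', 'file'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
for line in proc.stdout:
    sys.stdout.write(line)
    logfile.write(line)
proc.wait()
logfile.close()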
To emulate: subprocess.call("command 2>&1 | tee -a logfile", shell=True) without invoking the tee command:
#!/usr/bin/env python2
from subprocess import Popen, PIPE, STDOUT
p = Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1)
with p.stdout, open('logfile', 'ab') as file:
    for line in iter(p.stdout.readline, b''):
        print line,  # NOTE: the comma prevents duplicate newlines (softspace hack)
        file.write(line)
p.wait()
To fix possible buffering issues (if the output is delayed), see links in Python: read streaming input from subprocess.communicate().
Here's the Python 3 version:
#!/usr/bin/env python3
import sys
from subprocess import Popen, PIPE, STDOUT
with Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1) as p, \
        open('logfile', 'ab') as file:
    for line in p.stdout:  # b'\n'-separated lines
        sys.stdout.buffer.write(line)  # pass bytes as is
        file.write(line)
Write to terminal byte by byte for interactive applications
This method writes any bytes it gets to stdout immediately, which more closely simulates the behavior of tee, especially for interactive applications.
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
with subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as proc, \
        open('logfile.txt', 'bw') as logfile:
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            logfile.write(byte)
            # logfile.flush()
        else:
            break
exit_status = proc.returncode
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(10):
    print(i)
    sys.stdout.flush()
    time.sleep(1)
First we can do a non-interactive sanity check:
./main.py ./sleep.py
And we see it counting to stdout in real time.
Next, for an interactive test, you can run:
./main.py bash
Then the characters you type appear immediately on the terminal as you type them, which is very important for interactive applications. This is what happens when you run:
bash | tee logfile.txt
Also, if you want the output to show up in the output file immediately, then you can also add a:
logfile.flush()
but tee does not do this, and I'm afraid it would kill performance. You can test this out easily with:
tail -f logfile.txt
Related question: live output from subprocess command
Tested on Ubuntu 18.04, Python 3.6.7.
