Do I have to close stdout after using subprocess.run()? - python

I am trying to create a subprocess to remove a Python package and return its stdout and stderr. I do the following, but I wonder: is it safe to use? Today I got a /bin/bash: resource temporarily unavailable error while using it, and when I ran ps ux I saw lots of /bin/bash processes.
I think this function leaves lots of bash processes behind in the background. How should I safely close the subprocess after I get stdout and stderr? The documentation says the run method is the recommended way.
def run_subprocess_command(process_command):
    response = {"stdout": "", "stderr": "", "exception": ""}
    try:
        plugin_install_feedback.send(
            sender="", message="Package install starting..")
        p = subprocess.run(process_command,
                           universal_newlines=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           timeout=180)
        response["stdout"] = p.stdout
        response["stderr"] = p.stderr
        return response
    except Exception as err:
        response["exception"] = err
        return response
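Note that subprocess.run() itself waits for the child and closes the pipes it created, so nothing needs to be closed manually; leftover /bin/bash processes usually mean the launched command spawns children that outlive it (for example after a timeout kill). A minimal sketch of one common workaround, assuming POSIX, which starts the command in its own session and kills the whole process group on timeout:

```python
import os
import signal
import subprocess

def run_with_group_kill(cmd, timeout=180):
    # start_new_session=True puts the child in its own process group,
    # so the whole group (including grandchildren) can be killed later.
    p = subprocess.Popen(cmd,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         universal_newlines=True,
                         start_new_session=True)
    try:
        out, err = p.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Kill every process in the child's group, then reap the child.
        os.killpg(os.getpgid(p.pid), signal.SIGKILL)
        out, err = p.communicate()
    return out, err
```

This is a sketch, not the exact fix for the error above; if the "resource temporarily unavailable" message comes from hitting a process limit, the accumulated children are the thing to track down first.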

Related

Python's subprocess module hangs in PyCharm on self.stdout.read() only when using Chainpoint cli tool commands

Python's subprocess module hangs when calling chp (Chainpoint cli tool) commands. But only when I do this inside PyCharm. Doing the same in a Python shell directly in the terminal works perfectly. Also using other processes works fine in PyCharm. It seems to be the combination between chp and PyCharm that creates this failure.
This is what I try:
outputs_raw = subprocess.check_output(['chp', 'version'])
it eventually hangs at:
stdout = self.stdout.read() in subprocess.py
I looked around for a solution, but all the other "subprocess hangs pycharm" results don't help.
I also tried using readingproc as an alternative, advised here. This gave me an interesting result. It keeps looping in readingproc/core.py:
while self._proc.poll() is None:
    with _unblock_read(self._proc):
        result = self._yield_ready_read()
        self._check_timeouts(chunk_timeout, total_timeout)
        if result is not None:
            self._update_chunk_time()
            yield result
here result is always None, as self._yield_ready_read() keeps returning None so the if statement never passes.
This is what the _yield_ready_read function looks like (in core.py):
def _yield_ready_read(self):
    stdout = b''
    stderr = b''
    if self._poll_stdout.poll(0):
        stdout = self._read_while(self._proc.stdout)
    if self._poll_stderr.poll(0):
        stderr = self._read_while(self._proc.stderr)
    if len(stdout) > 0 or len(stderr) > 0:
        return ProcessData(stdout, stderr)
    else:
        return None
I am using Python 3.7.3.
PATH is the same in the working environment and the failing one.
Can someone help me fix this issue? Thanks!
This fixed it:
from subprocess import Popen, PIPE, STDOUT
proc = Popen(command, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
outputs_raw, errs = proc.communicate()
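A plausible reason this works: communicate() keeps reading the child's output while waiting for it to exit, so the child can never block on a full pipe buffer, and merging stderr into stdout leaves only one stream to drain. The same pattern as a small self-contained sketch (with a placeholder command in place of chp):

```python
from subprocess import Popen, PIPE, STDOUT
import sys

# Merge stderr into stdout and drain everything with communicate(),
# so the child can never deadlock writing to an unread pipe.
proc = Popen([sys.executable, "-c", "print('hello')"],
             stdin=PIPE, stdout=PIPE, stderr=STDOUT)
outputs_raw, errs = proc.communicate()
# errs is None here, because stderr was merged into stdout
```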

Is it possible to catch ffmpeg errors with Python?

Hi, I'm trying to make a video converter for Django with Python. I forked the django-ffmpeg module, which does almost everything I want, except that it doesn't catch the error if the conversion fails.
Basically the module passes to the command line interface the ffmpeg command to make the conversion like this:
/usr/bin/ffmpeg -hide_banner -nostats -i %(input_file)s -target
film-dvd %(output_file)s
Module uses this method to pass the ffmpeg command to cli and get the output:
def _cli(self, cmd, without_output=False):
    print 'cli'
    if os.name == 'posix':
        import commands
        return commands.getoutput(cmd)
    else:
        import subprocess
        if without_output:
            DEVNULL = open(os.devnull, 'wb')
            subprocess.Popen(cmd, stdout=DEVNULL, stderr=DEVNULL)
        else:
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
            return p.stdout.read()
But if, for example, you upload a corrupted video file, it only returns the ffmpeg message printed on the cli; nothing is triggered to signal that something failed.
This is an ffmpeg sample output when conversion failed:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] Format mov,mp4,m4a,3gp,3g2,mj2
detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] moov atom not found
/home/user/PycharmProjects/videotest/media/videos/orig/270f412927f3405aba041265725cdf6b.mp4:
Invalid data found when processing input
I was wondering if there's any way to make that an exception and how, so I can handle it easy.
The only option that comes to mind is to search for "Invalid data found when processing input" in the cli output string, but I'm not sure this is the best approach. Can anyone help and guide me with this, please?
You need to check the returncode of the Popen object that you're creating.
Check the docs: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
Your code should wait for the subprocess to finish (with wait) and then check the returncode. If the returncode is != 0, you can raise any exception you want.
This is how I implemented it in case it's useful to someone else:
def _cli(self, cmd):
    errors = False
    import subprocess
    try:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdoutdata, stderrdata = p.communicate()
        if p.wait() != 0:
            # Handle error / raise exception
            errors = True
            print "There were some errors"
            return stderrdata, errors
        print 'conversion success '
        return stderrdata, errors
    except OSError as e:
        errors = True
        return e.strerror, errors
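On Python 3, the same idea can be sketched more compactly with subprocess.run(check=True), which raises CalledProcessError on any non-zero exit code (the command below is a stand-in for the real ffmpeg invocation, which isn't assumed to be installed):

```python
import subprocess
import sys

def _cli(cmd):
    """Return (stderr_text, errors), mirroring the snippet above."""
    try:
        p = subprocess.run(cmd, stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           universal_newlines=True, check=True)
        return p.stderr, False
    except subprocess.CalledProcessError as exc:
        # Non-zero exit: exc.stderr holds whatever the tool printed.
        return exc.stderr, True
    except OSError as e:
        return e.strerror, True

# Stand-in for a failing ffmpeg call: exits 1 and writes "boom" to stderr.
stderr_text, errors = _cli([sys.executable, "-c", "import sys; sys.exit('boom')"])
```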

Get only stdout in a variable in python using subprocess

I use the following command in the cli, as shown below:
[mbelagali#mbelagali-vm naggappan]$ aws ec2 create-vpc --cidr-block 172.35.0.0/24 --no-verify-ssl --endpoint-url https://10.34.172.145:8787
/usr/local/aws/lib/python2.6/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py:769:
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
"Vpc": {
"InstanceTenancy": "default",
"State": "pending",
"VpcId": "vpc-ebb1608e",
"CidrBlock": "172.35.0.0/24",
"DhcpOptionsId": "dopt-a24e51c0"
}
And now I redirect the warnings using "2>/dev/null" so that I get only the JSON response.
Now I need to implement this using the Python subprocess module, so I tried the following:
cmd = "aws ec2 create-vpc --cidr-block " + cidr_block + " --no-verify-ssl --endpoint-url " + endpoint_url
cmd_arg = shlex.split(cmd.encode('utf-8'))
p1 = subprocess.Popen(
    cmd_arg,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)
output, error = p1.communicate()
Now the output variable contains the complete output, including the warning messages. How can I ignore the warnings as I do in the shell script?
If you don't want the stderr messages you should not have the flag stderr=subprocess.STDOUT which does the equivalent of 2>&1. If you just remove that I suspect you'll get what you want. If you want to redirect stderr to /dev/null you can follow this answer: How to hide output of subprocess in Python 2.7
To separate stderr and stdout simply create two independent pipes.
p1 = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
To ignore stderr completely, open devnull for writing and redirect stderr there.
with open(os.devnull, 'w') as devnull:
    p1 = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=devnull)
os.devnull: The file path of the null device, e.g. '/dev/null'
for POSIX, 'nul' for Windows. Also available via os.path.
To get json data that the subprocess prints to its stdout while ignoring warnings on its stderr:
from subprocess import check_output, DEVNULL
json_data = check_output(cmd, stderr=DEVNULL)
where DEVNULL is subprocess.DEVNULL on Python 3.3+; on older versions an open handle to os.devnull serves the same purpose.
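Putting it together, a small self-contained sketch (using a stand-in child process that prints JSON to stdout and a warning to stderr, since the aws CLI isn't assumed here):

```python
import json
import subprocess
import sys

# Stand-in for the aws call: JSON on stdout, a warning on stderr.
child = ("import sys, json; sys.stderr.write('InsecureRequestWarning\\n'); "
         "print(json.dumps({'Vpc': {'State': 'pending'}}))")

out = subprocess.check_output([sys.executable, "-c", child],
                              stderr=subprocess.DEVNULL,
                              universal_newlines=True)
data = json.loads(out)  # only stdout reaches us; the warning is discarded
```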

Calling ffmpeg kills script in background only

I've got a python script that calls ffmpeg via subprocess to do some mp3 manipulations. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its config into stderr. At this point, everything stops and the parent task is reported as stopped, without raising an exception anywhere. I've tried a few other simple commands in the place of ffmpeg, they execute normally in foreground or background.
This is the minimal example of the problem:
import subprocess
inf = "3HTOSD.mp3"
outf = "out.mp3"
args = [ "ffmpeg",
"-y",
"-i", inf,
"-ss", "0",
"-t", "20",
outf
]
print "About to do"
result = subprocess.call(args)
print "Done"
I really can't work out why or how a wrapped process can cause the parent to terminate without at least raising an error, and how it only happens in so niche a circumstance. What is going on?
Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has using ffmpeg compiled into it, so using it again seems sensible.
It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:
from subprocess import check_call
check_call(["python", "-c", "import sys; sys.stdin.readline()"])
should reproduce the issue: "parent.py script shown as stopped" if you run it in bash as a background job:
$ python parent.py &
[1] 28052
$ jobs
[1]+ Stopped python parent.py
If the parent process is in an orphaned process group then it is killed on receiving SIGTTIN signal (a signal to stop).
The solution is to redirect the input:
import os
from subprocess import check_call

try:
    from subprocess import DEVNULL
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)
If you don't need to see ffmpeg's stdout/stderr, you could also redirect them to /dev/null (with STDOUT imported from subprocess):
check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
I like to use the commands module. It's simpler to use in my opinion.
import commands
cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.
matt@goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
From the python/subprocess/call documentation:
Wait for command to complete, then return the returncode attribute.
So as long as the process you called does not exit, your program does not go on.
You should set up a Popen process object, put its standard output and error in different buffers/streams and when there is an error, you terminate the process.
Maybe something like this works:
proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # puts stderr into a new stream
while proc.poll() is None:
    try:
        err = proc.stderr.read()  # note: read() blocks until EOF
    except:
        continue
    else:
        if err:
            proc.terminate()
            break

Catching runtime error for process created by python subprocess

I am writing a script which takes a file name as input, compiles it and runs it.
I take the name of a file as input (input_file_name). I first compile the file from within Python:
self.process = subprocess.Popen(['gcc', input_file_name, '-o', 'auto_gen'], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
Next, I execute the resulting binary using the same (Popen) call:
subprocess.Popen('./auto_gen', stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
In both cases, I catch the stdout (and stderr) contents using
(output, _) = self.process.communicate()
Now, if there is an error during compilation, I am able to catch the error because the returncode is 1 and I can get the details of the error because gcc sends them on stderr.
However, the program itself can return a random value even on executing successfully(because there might not be a "return 0" at the end). So I can't catch runtime errors using the returncode. Moreover, the executable does not send the error details on stderr. So I can't use the trick I used for catching compile-time errors.
What is the best way to catch a runtime error OR to print the details of the error? That is, if ./auto_gen throws a segmentation fault, I should be able to print either one of:
'Runtime error'
'Segmentation Fault'
'Program threw a SIGSEGV'
Try this. The code runs a subprocess which fails and prints to stderr. The except block captures the specific error exit code and stdout/stderr, and displays it.
#!/usr/bin/env python
import subprocess
try:
    out = subprocess.check_output(
        "ls non_existent_file",
        stderr=subprocess.STDOUT,
        shell=True)
    print 'okay:', out
except subprocess.CalledProcessError as exc:
    print 'error: code={}, out="{}"'.format(
        exc.returncode, exc.output,
    )
Example output:
$ python ./subproc.py
error: code=2, out="ls: cannot access non_existent_file: No such file or directory
"
If ./auto_gen is killed by a signal, then self.process.returncode (after .wait() or .communicate()) is less than zero and its absolute value identifies the signal, e.g. returncode == -11 for SIGSEGV.
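A small sketch of this check (assuming POSIX; the child here deliberately raises SIGSEGV in place of the real ./auto_gen binary):

```python
import signal
import subprocess
import sys

# Stand-in for ./auto_gen: a child that dies from SIGSEGV.
proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

if proc.returncode < 0:
    # signal.Signals maps the negated return code back to a readable name
    print("Program threw a", signal.Signals(-proc.returncode).name)
```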
Please check the following link for getting real-time output from a subprocess:
https://www.endpoint.com/blog/2015/01/28/getting-realtime-output-using-python
def run_command(command):
    process = subprocess.Popen(shlex.split(command),
                               stdout=subprocess.PIPE)
    while True:
        output = process.stdout.readline()
        if output == '' and process.poll() is not None:
            break
        if output:
            print output.strip()
    rc = process.poll()
    return rc
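The snippet above is Python 2; a Python 3 sketch of the same loop, assuming text-mode pipes (where readline() returns '' only at EOF), can simply iterate the stream:

```python
import shlex
import subprocess

def run_command(command):
    process = subprocess.Popen(shlex.split(command),
                               stdout=subprocess.PIPE,
                               universal_newlines=True)
    # Iterating the pipe yields lines as they arrive, until EOF.
    for line in process.stdout:
        print(line.rstrip())
    return process.wait()
```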
