I have a requirement to execute a jar and wrap it in a timeout mechanism, such that when the timeout occurs, the execution stops.
In my research I found that we can use the timeout parameter of subprocess.call().
To replicate my use case, I have a jar which prints a message every second for 15 seconds.
The Python application is coded as below.
import subprocess

X1 = subprocess.run(['java', '-jar', 'JarsForPython2.jar'], capture_output=True, timeout=15)
print('------------checks for stdout-----------------------')
print(X1.stdout.decode())
print('------------checks for stderr-----------------------')
print(X1.stderr.decode())
Everything works smoothly. Now, to check the timeout, I set timeout=7, modify the code, and handle the exception as below:
try:
    X1 = subprocess.run(['java', '-jar', 'JarsForPython2.jar'], capture_output=True, timeout=7.0)
except subprocess.TimeoutExpired as e:
    print('The execution has timedout')
else:
    print('------------checks for stdout-----------------------')
    print(X1.stdout.decode())
    print('------------checks for stderr-----------------------')
    print(X1.stderr.decode())
Now here, I get:
The execution has timedout
Process finished with exit code 0
I just want the log output from the first six seconds to appear on my console.
Any suggestions?
else: is executed only when there was no exception.
I found that in except: you can get e.stdout and e.stderr, but you have to check that they are not None.
import subprocess

try:
    X1 = subprocess.run(['java', '-jar', 'JarsForPython2.jar'], capture_output=True, timeout=7)
except subprocess.TimeoutExpired as e:
    print('The execution has timedout')

    print('------------checks for stdout-----------------------')
    if e.stdout:  # check if not `None`
        print(e.stdout.decode())

    print('------------checks for stderr-----------------------')
    if e.stderr:  # check if not `None`
        print(e.stderr.decode())
else:
    print('------------checks for stdout-----------------------')
    print(X1.stdout.decode())

    print('------------checks for stderr-----------------------')
    print(X1.stderr.decode())
Or with finally:
import subprocess

try:
    X1 = subprocess.run(['java', '-jar', 'JarsForPython2.jar'], capture_output=True, timeout=7)
    stdout = X1.stdout
    stderr = X1.stderr
except subprocess.TimeoutExpired as e:
    print('The execution has timedout')
    stdout = e.stdout
    stderr = e.stderr
finally:
    print('------------checks for stdout-----------------------')
    if stdout:  # check if not `None`
        print(stdout.decode())

    print('------------checks for stderr-----------------------')
    if stderr:  # check if not `None`
        print(stderr.decode())
Hi, I'm trying to make a video converter for Django with Python. I forked the django-ffmpeg module, which does almost everything I want, except that it doesn't catch the error if the conversion fails.
Basically, the module passes the ffmpeg conversion command to the command-line interface like this:
/usr/bin/ffmpeg -hide_banner -nostats -i %(input_file)s -target film-dvd %(output_file)s
The module uses this method to pass the ffmpeg command to the CLI and get the output:
def _cli(self, cmd, without_output=False):
    print 'cli'
    if os.name == 'posix':
        import commands
        return commands.getoutput(cmd)
    else:
        import subprocess
        if without_output:
            DEVNULL = open(os.devnull, 'wb')
            subprocess.Popen(cmd, stdout=DEVNULL, stderr=DEVNULL)
        else:
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
            return p.stdout.read()
But, for example, if you upload a corrupted video file, it only returns the ffmpeg message printed on the CLI; nothing is triggered to indicate that something failed.
This is a sample ffmpeg output when the conversion fails:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] moov atom not found
/home/user/PycharmProjects/videotest/media/videos/orig/270f412927f3405aba041265725cdf6b.mp4: Invalid data found when processing input
I was wondering if there's any way to turn that into an exception, and how, so I can handle it easily.
The only option that came to my mind is to search for "Invalid data found when processing input" in the CLI output string, but I'm not sure this is the best approach. Can anyone help and guide me with this, please?
You need to check the returncode of the Popen object that you're creating.
Check the docs: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
Your code should wait for the subprocess to finish (with wait or communicate) and then check the returncode. If the returncode is != 0, you can raise any exception you want.
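A minimal sketch of that advice (Python 3; the helper name and the RuntimeError are my own choices, not part of django-ffmpeg):

import subprocess

def _cli_checked(cmd):
    # Run the ffmpeg command string, capturing both streams
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()  # waits for the process to finish
    if p.returncode != 0:
        # Non-zero exit code: ffmpeg reported a failure such as
        # "Invalid data found when processing input"
        raise RuntimeError("ffmpeg failed with code %d: %s"
                           % (p.returncode, err.decode(errors="replace")))
    return out.decode(errors="replace")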
This is how I implemented it in case it's useful to someone else:
def _cli(self, cmd):
    errors = False
    import subprocess
    try:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdoutdata, stderrdata = p.communicate()
        if p.wait() != 0:
            # Handle error / raise exception
            errors = True
            print "There were some errors"
            return stderrdata, errors
        print 'conversion success '
        return stderrdata, errors
    except OSError as e:
        errors = True
        return e.strerror, errors
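For Python 3 (where the commands module is gone), a roughly equivalent sketch using subprocess.run; the helper name is illustrative:

import subprocess

def _cli3(cmd):
    # shell=True keeps the string command built by the module working unchanged
    result = subprocess.run(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    errors = result.returncode != 0
    # Return the stderr text plus an error flag, mirroring the method above
    return result.stderr.decode(errors="replace"), errors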
I am converting bash code to Python.
I call mkdir through subprocess.run() in Python.
In the following example, subprocess.run() raises an exception.
However, I could not check what the error was, because I could not get the result object returned by subprocess.run().
Are there any smart ways to know what the error was?
Or should I not use try/except here?
import sys
import subprocess

directory = '/tmp/test_dir'
options = ''

try:
    result = subprocess.run(['mkdir', options, directory], check=True)
except subprocess.CalledProcessError as ex:
    print("In this example, subprocess.run() above raises the exception CalledProcessError.")
    # print("I would like to check result.returncode = {0}. But it fails because the object 'result' is not defined.".format(result.returncode))
except Exception as ex:
    sys.stderr.write("This must not happen.")
    sys.exit(1)
Thank you very much.
You can always do:
import sys
import subprocess

# make the subprocess
pr = subprocess.Popen(['your', 'command', 'here'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# blocks until the process finishes
out, err = pr.communicate()

# check the return code
if pr.returncode != 0:
    sys.stderr.write("oh no")
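Alternatively, if you want to keep check=True as in your original code, the CalledProcessError exception itself carries the return code, the command and (if captured) the output, so you never need the undefined result object. A sketch with the same mkdir call:

import subprocess
import sys

try:
    subprocess.run(['mkdir', '/tmp/test_dir'], check=True,
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except subprocess.CalledProcessError as ex:
    # ex.returncode, ex.cmd and ex.stderr describe the failed command
    sys.stderr.write("mkdir failed with return code {0}: {1}\n".format(
        ex.returncode, ex.stderr.decode()))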
I'm trying to do a Bitcoin payment from within Python. In bash I would normally do this:
bitcoin sendtoaddress <bitcoin address> <amount>
So for example:
bitcoin sendtoaddress 1HoCUcbK9RbVnuaGQwiyaJGGAG6xrTPC9y 1.4214
If it is successful I get a transaction id as output, but if I try to transfer an amount larger than my bitcoin balance, I get the following output:
error: {"code":-4,"message":"Insufficient funds"}
In my Python program I now try to do the payment as follows:
import sys
import subprocess

try:
    output = subprocess.check_output(['bitcoin', 'sendtoaddress', address, str(amount)])
except:
    print "Unexpected error:", sys.exc_info()
If there's enough balance it works fine, but if there's not enough balance sys.exc_info() prints out this:
(<class 'subprocess.CalledProcessError'>, CalledProcessError(), <traceback object at 0x7f339599ac68>)
It doesn't include the error which I get on the command line, though. So my question is: how can I get the error output ({"code":-4,"message":"Insufficient funds"}) from within Python?
According to the subprocess.check_output() docs, the exception raised on error has an output attribute that you can use to access the error details:
try:
    subprocess.check_output(...)
except subprocess.CalledProcessError as e:
    print(e.output)
You should then be able to analyse this string and parse the error details with the json module:
if e.output.startswith('error: {'):
    error = json.loads(e.output[7:])  # Skip "error: "
    print(error['code'])
    print(error['message'])
I don't think the accepted solution handles the case where the error text is reported on stderr. In my testing, the exception's output attribute did not contain the results from stderr, and the docs warn against using stderr=PIPE with check_output(). Instead, I would suggest one small improvement to J.F. Sebastian's solution by adding stderr support. We are, after all, trying to handle errors, and stderr is where they are often reported.
from subprocess import Popen, PIPE

p = Popen(['bitcoin', 'sendtoaddress', ..], stdout=PIPE, stderr=PIPE)
output, error = p.communicate()
if p.returncode != 0:
    print("bitcoin failed %d %s %s" % (p.returncode, output, error))
As mentioned by @Sebastian, the default solution should aim to use run():
https://docs.python.org/3/library/subprocess.html#subprocess.run
Here is a convenient implementation (feel free to replace the log calls with print statements or whatever other logging functionality you are using):
import subprocess

def _run_command(command):
    log.debug("Command: {}".format(command))
    result = subprocess.run(command, shell=True, capture_output=True)
    if result.stderr:
        raise subprocess.CalledProcessError(
            returncode=result.returncode,
            cmd=result.args,
            stderr=result.stderr
        )
    if result.stdout:
        log.debug("Command Result: {}".format(result.stdout.decode('utf-8')))
    return result
And sample usage (the code is unrelated, but I think it serves as an example of how readable and easy it is to work with errors using this simple implementation):
try:
    # Unlock PIN card
    _run_command(
        "sudo qmicli --device=/dev/cdc-wdm0 -p --uim-verify-pin=PIN1,{}"
        .format(pin)
    )
except subprocess.CalledProcessError as error:
    if "couldn't verify PIN" in error.stderr.decode("utf-8"):
        log.error(
            "SIM card could not be unlocked. "
            "Either the PIN is wrong or the card is not properly connected. "
            "Resetting module..."
        )
        _reset_4g_hat()
        return
Trying to "transfer an amount larger than my bitcoin balance" is not an unexpected error. You could use Popen.communicate() directly instead of check_output() to avoid raising an exception unnecessarily:
from subprocess import Popen, PIPE

p = Popen(['bitcoin', 'sendtoaddress', ..], stdout=PIPE)
output = p.communicate()[0]
if p.returncode != 0:
    print("bitcoin failed %d %s" % (p.returncode, output))
Since Python 3.5, subprocess.run() supports the check argument:
If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.
A simple example that will raise and print out CalledProcessError:
import subprocess

try:
    subprocess.run("exit 1", shell=True, check=True, timeout=15, capture_output=True)
except subprocess.CalledProcessError as e:
    print(e)  # Output: Command 'exit 1' returned non-zero exit status 1.
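Because capture_output=True was passed, the same exception object also exposes the exit code and the captured streams; a small extension of the example above:

import subprocess

try:
    subprocess.run("exit 1", shell=True, check=True, timeout=15, capture_output=True)
except subprocess.CalledProcessError as e:
    print(e.returncode)   # 1
    print(e.cmd)          # exit 1
    print(e.stdout)       # b'' (captured because capture_output=True was set)
    print(e.stderr)       # b''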
There are good answers here, but none of them shows the text from the stack-trace output, which is the default behaviour of an exception.
If you wish to use that formatted traceback information, you might wish to:
import traceback
from subprocess import check_call, CalledProcessError

try:
    check_call(args)
except CalledProcessError:
    tb = traceback.format_exc()
    tb = tb.replace(passwd, "******")  # mask the password contained in args
    print(tb)
    exit(1)
As you might be able to tell, the above is useful in case you have a password in the check_call(args) call that you wish to prevent from being displayed.
This did the trick for me. It captures all the stdout output from the subprocess (for Python 3.8):
from subprocess import check_output, CalledProcessError, STDOUT

cmd = "Your Command goes here"
try:
    cmd_stdout = check_output(cmd, stderr=STDOUT, shell=True).decode()
except CalledProcessError as e:
    print(e.output.decode())  # print out the stdout messages up to the exception
    print(e)  # to print out the exception message
Based on the answer of @macetw, I print the exception directly to stderr in a decorator.
Python 3
from functools import wraps
from sys import stderr
from traceback import format_exc
from typing import Callable, Collection, Any, Mapping

def force_error_output(func: Callable):
    @wraps(func)
    def forced_error_output(*args: Collection[Any], **kwargs: Mapping[str, Any]):
        nonlocal func
        try:
            func(*args, **kwargs)
        except Exception as exception:
            stderr.write(format_exc())
            stderr.write("\n")
            stderr.flush()
            raise exception
    return forced_error_output
Python 2
from functools import wraps
from sys import stderr
from traceback import format_exc

def force_error_output(func):
    @wraps(func)
    def forced_error_output(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception as exception:
            stderr.write(format_exc())
            stderr.write("\n")
            stderr.flush()
            raise exception
    return forced_error_output
Then in your worker just use the decorator:
@force_error_output
def da_worker(arg1: int, arg2: str):
    pass
I think most of the previous answers are correct. In my case I needed to do this on a Windows server and the command was a PowerShell one; this worked really nicely for me:
import subprocess

try:
    print("in progress")
    cmd_exec = "Get-Date"
    print(cmd_exec)
    subprocess.run(['powershell', '-Command', cmd_exec], shell=False, check=True, capture_output=True, text=True, encoding="utf-8")
except Exception as e:
    print(e)
    print("ERROR: something went wrong executing the powershell command")
    raise e
The invoked subprocess needs to be told to capture the output of the invoked program and to raise the exception. It's simple to do.
Firstly, use subprocess.run() instead of subprocess.call().
Let's assume you want to run a Python script called "vijay.py".
For raising the exception, use the following:
subprocess.run("py vijay.py", check=True, capture_output=True, shell=True)
The above call can then be put in a try/except block to handle the raised error, or you can use sys.exit(1) (any non-zero exit is fine):
import subprocess

try:
    subprocess.run("py vijay.py", check=True, capture_output=True, shell=True)
except Exception as e:
    print("Exception raised: ", e)
and the body of vijay.py can be as follows:
vijay.py
import sys

try:
    # Your code is here...
    ...
except Exception as e:
    sys.exit(1)  # or you can even raise your own exception
I have a Python script 'b.py' which prints out the time every 5 seconds.
import time

while (1):
    print "Start : %s" % time.ctime()
    time.sleep(5)
    print "End : %s" % time.ctime()
    time.sleep(5)
And in my a.py, I call b.py by:
import sys
import subprocess

def run_b():
    print "Calling run b"
    try:
        cmd = ["./b.py"]
        p = subprocess.Popen(cmd,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        for line in iter(p.stdout.readline, b''):
            print(">>>" + line.rstrip())
    except OSError as e:
        print >>sys.stderr, "fcs Execution failed:", e
    return None
and later on, I kill 'b.py' by:
import os
import shlex
import signal
import time

PS_PATH = "/usr/bin/ps -efW"

def kill_b(program):
    try:
        cmd = shlex.split(PS_PATH)
        retval = subprocess.check_output(cmd).rstrip()
        for line in retval.splitlines():
            if program in line:
                print "line =" + line
                pid = line.split(None)[1]
                os.kill(int(pid), signal.SIGKILL)
    except OSError as e:
        print >>sys.stderr, "kill_all Execution failed:", e
    except subprocess.CalledProcessError as e:
        print >>sys.stderr, "kill_all Execution failed:", e

run_b()
time.sleep(600)
kill_b("b.py")
I have 2 questions.
1. Why don't I see any output from 'b.py', and why don't I see a process named 'b.py' when I do 'ps -efW'?
2. Why do I see 'permission declined' when I kill a process like above?
I am running the above script on Cygwin under Windows.
Thank you.
Why don't I see any output from 'b.py', and why don't I see a process named 'b.py' when I do 'ps -efW'?
Change the run_b() lines to:
p = subprocess.Popen(cmd,
                     stdout=sys.stdout,
                     stderr=sys.stderr)
You will not see a process named "b.py" but something like "python b.py", which is a little different. You should use the PID instead of the name to find it (in your code, "p.pid" holds the PID).
Why do I see 'permission declined' when I kill a process like above?
os.kill is supported under Windows only in Python 2.7+ and acts a little differently from the POSIX version. However, you can use "p.pid". The best way to kill a process in a cross-platform way is:
if platform.system() == "Windows":
    subprocess.Popen("taskkill /F /T /PID %i" % p.pid, shell=True)
else:
    os.killpg(p.pid, signal.SIGKILL)
killpg also works on OS X and other Unix-like operating systems.
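One caveat (POSIX only, so it will not help under plain Windows): os.killpg targets a process group, so it only reaches the whole tree if the child was started as its own group leader, for example:

import os
import signal
import subprocess

# Start b.py in its own process group so killpg can signal it and any children
p = subprocess.Popen(["./b.py"], stdout=subprocess.PIPE,
                     preexec_fn=os.setsid)
# ... later ...
os.killpg(os.getpgid(p.pid), signal.SIGKILL)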