How to implement a retry mechanism if the shell script execution fails? - python

I am trying to execute a shell script from Python code, and so far everything is looking good.
Below is my Python script, which executes a shell script. For the sake of example, it is a simple Hello World shell script.
jsonStr = '{"script":"#!/bin/bash\\necho Hello world 1\\n"}'
j = json.loads(jsonStr)
shell_script = j['script']
print "start"
proc = subprocess.Popen(shell_script, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
if stderr:
print "Shell script gave some error"
print stderr
else:
print stdout
print "end" # Shell script ran fine.
Now, what I am looking for is this: suppose that, for whatever reason, the shell script fails when I execute it from my Python code. That means stderr won't be empty. In that case I want to retry executing the shell script, say after sleeping for a couple of milliseconds.
In other words, is there any way of implementing a retry mechanism in case the shell script execution fails? Can I retry 5 or 6 times, and is it possible to make this number configurable?

from time import sleep

MAX_TRIES = 6
# ... your other code ...
for i in xrange(MAX_TRIES):
    proc = subprocess.Popen(shell_script, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (stdout, stderr) = proc.communicate()
    if stderr:
        print "Shell script gave some error..."
        print stderr
        sleep(0.05)  # delay for 50 ms
    else:
        print stdout
        print "end"  # Shell script ran fine.
        break
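One caveat with checking stderr: some programs write warnings or progress messages to stderr even when they succeed, so keying the retry on the exit code can be more robust. A minimal sketch of that variant (reusing shell_script from the question):

from time import sleep
import subprocess

MAX_TRIES = 6  # configurable retry count

for attempt in xrange(MAX_TRIES):
    proc = subprocess.Popen(shell_script, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (stdout, stderr) = proc.communicate()
    if proc.returncode == 0:
        print stdout
        break  # success, stop retrying
    print "Attempt %d failed with exit code %d" % (attempt + 1, proc.returncode)
    print stderr
    sleep(0.05)  # back off for 50 ms before the next attempt
else:
    print "Giving up after %d attempts" % MAX_TRIES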

Something like this maybe:
maxRetries = 6
retries = 0
while retries < maxRetries:
    doSomething()
    if errorCondition:
        retries += 1
        continue
    break

How about using a decorator? It seems like a very clean way.
You can read about them here: https://wiki.python.org/moin/PythonDecoratorLibrary (see the Retry decorator).
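For instance, a minimal sketch of such a retry decorator (the retry, tries and delay names are illustrative, not from the linked library):

import functools
import subprocess
import time

def retry(tries=6, delay=0.05):
    # Retry the wrapped function up to `tries` times, sleeping `delay`
    # seconds between attempts and re-raising the last failure.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except subprocess.CalledProcessError:
                    if attempt == tries - 1:
                        raise  # out of retries, propagate the error
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(tries=6, delay=0.05)
def run_script(shell_script):
    # check_call raises CalledProcessError on a non-zero exit status
    subprocess.check_call(shell_script, shell=True)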

Subprocess Python Error

When I run this subprocess command from Python, it seems like Python stalls and never outputs anything:
msg = subprocess.call(['/Users/admirmonteiro/bin/Praat', '/Users/admirmonteiro/tmp/tmp.praat'])
But when I run the command itself from the terminal, it runs and closes as it should:
Praat /tmp/tmp.praat
Is anyone able to tell me why Python is stalling, never finishing the code, and not outputting anything?
Thanks!
You could try making sure that stdin and stdout (or other file descriptors) are not causing the problem:
p = subprocess.Popen(
    ['/Users/admirmonteiro/bin/Praat', '/Users/admirmonteiro/tmp/tmp.praat'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    close_fds=True,
)
print p.communicate()
print p.wait()
It seems you have exchanged the arguments.
msg = subprocess.call(['/Users/admirmonteiro/bin/Praat', '/Users/admirmonteiro/tmp/tmp.praat'])
should be
msg = subprocess.call(['/Users/admirmonteiro/tmp/tmp.praat', '/Users/admirmonteiro/bin/Praat'])

How to check the status of a shell script using subprocess module in Python?

I have a simple Python script which executes a shell script using the subprocess module in Python.
Below is my Python script, which calls the testing.sh shell script, and it works fine.
import os
import json
import subprocess

jsonData = '{"pp": [0,3,5,7,9], "sp": [1,2,4,6,8]}'
jj = json.loads(jsonData)
print jj['pp']
print jj['sp']

os.putenv('jj1', 'Hello World 1')
os.putenv('jj2', 'Hello World 2')
os.putenv('jj3', ' '.join(str(v) for v in jj['pp']))
os.putenv('jj4', ' '.join(str(v) for v in jj['sp']))

print "start"
subprocess.call(['./testing.sh'])
print "end"
And below is my shell script:
#!/bin/bash
for el1 in $jj3
do
    echo "$el1"
done
for el2 in $jj4
do
    echo "$el2"
done
for i in $( david ); do
    echo item: $i
done
Now the question I have is this:
If you look at my Python script, I print start, then execute the shell script, and then print end. So suppose the shell script I am executing has some problem; in that case I don't want to print out end.
In the above example the shell script will not run properly, since david is not a Linux command, so it will throw an error. How should I check the status of the entire bash shell script and then decide whether I need to print end or not?
I have just added a for loop as an example; it can be any shell script.
Is it possible to do?
You can check the stderr of the bash script rather than the return code.
proc = subprocess.Popen('./testing.sh', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
if stderr:
    print "Shell script gave some error"
else:
    print "end"  # Shell script ran fine.
Just use the returned value from call():
import subprocess

rc = subprocess.call("true")
assert rc == 0  # zero exit status means success

rc = subprocess.call("false")
assert rc != 0  # non-zero means failure
You could use check_call() to raise an exception automatically if the command fails, instead of checking the returned code manually:
rc = subprocess.check_call("true")  # <-- no exception
assert rc == 0
try:
    subprocess.check_call("false")  # raises an exception
except subprocess.CalledProcessError as e:
    assert e.returncode == 1
else:
    assert 0, "never happens"
Well, according to the docs, .call will return the exit code back to you. You may want to check that you actually get an error return code, though. (I think the for loop will still return a 0 code, since it more or less finished.)
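One way around that, if you control the script, is to make the script itself exit non-zero when a step fails (e.g. command || exit 1, or set -e at the top, keeping in mind its corner cases), and then key the Python side on the value returned by call(). A sketch, assuming the testing.sh from the question:

# In testing.sh, propagate failures explicitly, e.g.:
#   david || exit 1
# Then the exit status seen from Python reflects the failure:
rc = subprocess.call(['./testing.sh'])
if rc == 0:
    print "end"  # shell script ran fine
else:
    print "shell script failed with exit code %d" % rc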

python subprocess.call output is not interleaved

I have a Python (v3.3) script that runs other shell scripts. My Python script also prints messages like "About to run script X" and "Done running script X".
When I run my script, I get all the output of the shell scripts separately from my print statements. I see something like this:
All of script X's output
All of script Y's output
All of script Z's output
About to run script X
Done running script X
About to run script Y
Done running script Y
About to run script Z
Done running script Z
My code that runs the shell scripts looks like this:
print( "running command: " + cmnd )
ret_code = subprocess.call( cmnd, shell=True )
print( "done running command")
I wrote a basic test script and do *not* see this behaviour. This code does what I would expect:
print("calling")
ret_code = subprocess.call("/bin/ls -la", shell=True )
print("back")
Any idea on why the output is not interleaved?
Thanks. This works but has one limitation: you can't see any output until after the command completes. I found an answer to another question (here) that uses Popen and also lets me see the output in real time. Here's what I ended up with:
import subprocess
import sys

cmd = ['/media/sf_git/test-automation/src/SalesVision/mswm/shell_test.sh', '4', '2']
print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, ''):
    line = line.replace('\n', '')
    print(line)
    sys.stdout.flush()

process.wait()  # Wait for the underlying process to complete.
errcode = process.returncode  # Harvest its returncode, if needed.
print('Script ended with return code of: ' + str(errcode))
This uses Popen and allows me to see the progress of the called script.
It has to do with STDOUT and STDERR buffering. You should be using subprocess.Popen to redirect STDOUT and STDERR from your child process into your application, and then output them as needed. Example:
import subprocess

cmd = ['ls', '-la']
print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = process.communicate()  # capture the output; this also waits for the process to finish
errcode = process.returncode  # harvest its returncode, if needed
print(out)
print('done running command')
Additionally, I wouldn't use shell=True unless it's really required, since it forces subprocess to fire up a whole shell environment just to run a command. It's usually better to pass the command as a list and hand any variables to the child directly via the env parameter of Popen.
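For example, instead of exporting variables through a shell, you can pass a command list and an explicit environment (a sketch; MY_VAR is just an illustrative name):

import os
import subprocess

env = os.environ.copy()
env['MY_VAR'] = 'hello'  # illustrative variable name

# No shell=True: the command is a list and the child inherits `env` directly.
process = subprocess.Popen(['printenv', 'MY_VAR'],
                           stdout=subprocess.PIPE, env=env)
out, _ = process.communicate()
print(out)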

Catching runtime error for process created by python subprocess

I am writing a script which takes a file name as input, compiles the file, and runs it.
I take the name of a file as input (input_file_name). First, I compile the file from within Python:
self.process = subprocess.Popen(['gcc', input_file_name, '-o', 'auto_gen'], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
Next, I'm executing the resulting executable using the same (Popen) call:
subprocess.Popen('./auto_gen', stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
In both cases, I'm catching the stdout (and stderr) contents using
(output, _) = self.process.communicate()
Now, if there is an error during compilation, I am able to catch it because the returncode is 1, and I can get the details of the error because gcc sends them to stderr.
However, the program itself can return a random value even when it executes successfully (because there might not be a "return 0" at the end), so I can't catch runtime errors using the returncode. Moreover, the executable does not send error details to stderr, so I can't use the trick I used for catching compile-time errors.
What is the best way to catch a runtime error, or to print the details of the error? That is, if ./auto_gen throws a segmentation fault, I should be able to print one of:
'Runtime error'
'Segmentation Fault'
'Program threw a SIGSEGV'
Try this. The code runs a subprocess which fails and prints to stderr. The except block captures the specific error exit code and stdout/stderr, and displays it.
#!/usr/bin/env python
import subprocess

try:
    out = subprocess.check_output(
        "ls non_existent_file",
        stderr=subprocess.STDOUT,
        shell=True)
    print 'okay:', out
except subprocess.CalledProcessError as exc:
    print 'error: code={}, out="{}"'.format(
        exc.returncode, exc.output,
    )
Example output:
$ python ./subproc.py
error: code=2, out="ls: cannot access non_existent_file: No such file or directory
"
If ./auto_gen is killed by a signal then self.process.returncode (after .wait() or .communicate()) is less than zero, and its absolute value reports the signal, e.g., returncode == -11 for SIGSEGV.
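A small sketch of that check, using the standard signal module to turn the negative return code into a readable name:

import signal
import subprocess

proc = subprocess.Popen('./auto_gen', stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
out, _ = proc.communicate()
if proc.returncode < 0:
    signum = -proc.returncode
    # Build a number -> name map from the signal module's SIG* constants.
    names = dict((getattr(signal, n), n) for n in dir(signal)
                 if n.startswith('SIG') and not n.startswith('SIG_'))
    print 'Program was killed by %s' % names.get(signum, 'signal %d' % signum)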
Please check the following link for capturing runtime errors or real-time output from a subprocess:
https://www.endpoint.com/blog/2015/01/28/getting-realtime-output-using-python
import shlex
import subprocess

def run_command(command):
    process = subprocess.Popen(shlex.split(command),
                               stdout=subprocess.PIPE)
    while True:
        output = process.stdout.readline()
        if output == '' and process.poll() is not None:
            break
        if output:
            print output.strip()
    rc = process.poll()
    return rc
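Called like this, for example (the command is just illustrative):

rc = run_command('ping -c 4 localhost')  # streams the output line by line
print 'exit code:', rc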

Communicate with python subprocess while it is running

I am running a subprocess that runs a piece of software in "command" mode. (This software is Nuke by The Foundry, in case you know that software.)
When in command mode, this software waits for user input. This mode allows creating compositing scripts without any UI.
I have written this bit of code that starts the process and detects when the application is done starting; then I try to send the process some commands, but stdin doesn't seem to be sending the commands properly.
Here is the sample code I wrote to test this process.
import subprocess

appPath = '/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3'
readyForCommand = False

commandAndArgs = [appPath, '-V', '-t']
commandAndArgs = ' '.join(commandAndArgs)
process = subprocess.Popen(commandAndArgs,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           shell=True)
while True:
    if readyForCommand:
        print 'trying to send command to nuke...'
        process.stdin.write('import nuke')
        process.stdin.write('print nuke')
        process.stdin.write('quit()')
        print 'done sending commands'
        readyForCommand = False
    else:
        print 'Reading stdout ...'
        outLine = process.stdout.readline().rstrip()
        if outLine:
            print 'stdout:', outLine
            if outLine.endswith('getenv.tcl'):
                print 'setting ready for command'
                readyForCommand = True
        if outLine == '' and process.poll() is not None:
            print 'in break!'
            break
print('return code: %d' % process.returncode)
When I run nuke in a shell and send the same commands, here is what I get:
sylvain.berger core/$ nuke -V -t
[...]
Loading /Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/getenv.tcl
>>> import nuke
>>> print nuke
<module 'nuke' from '/Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/nuke/__init__.pyc'>
>>> quit()
sylvain.berger core/$
Any idea why the stdin is not sending the commands properly?
Thanks
Your code will send the text
import nukeprint nukequit()
with no newlines, so the Python instance will not try to execute anything; everything is just sitting in a buffer waiting for a newline.
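A minimal fix is to terminate each command with a newline and flush the pipe:

process.stdin.write('import nuke\n')
process.stdin.write('print nuke\n')
process.stdin.write('quit()\n')
process.stdin.flush()  # make sure the commands actually reach the child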
The subprocess module is not intended for interactive communication with a process. At best, you can give it a single pre-computed standard input string and then read its stdout and stderr:
p = Popen(..., stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate(predefined_stdin)
If you actually need interaction, consider using pexpect.
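A rough sketch with pexpect (the getenv.tcl marker and the >>> prompt are taken from the session shown above):

import pexpect

child = pexpect.spawn('/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3 -V -t')
child.expect('getenv.tcl')  # wait until the application is done starting
child.sendline('import nuke')
child.expect('>>> ')
child.sendline('print nuke')
child.expect('>>> ')
child.sendline('quit()')
child.close()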
