Popen Subprocess Python

I am using Popen to call a script on a remote host:
s = Popen(['ssh', ssh_argument0, ssh_argument1, '/tmp/remote/dude_rc_admsvr.sh %s %s' % (DomainHome, ACTIVITY)])
stdout = s.communicate()
print stdout
The script is not exiting with the status codes set in the shell script; instead it only prints the success or failure message. I want to get the exit status codes defined in the shell script.
Here is the shell script:
tail -Fn0 ${ADM_DOMAIN_LOG} | \
while read LOG_LINE
do
    echo ${LOG_LINE} | grep -q "${PASS_MSG}"
    if [ $? = 0 ]
    then
        echo "${STATUS_SUCCESS}"
        exit 0
    elif echo ${LOG_LINE} | grep -q "${FAIL_MSG}"
    then
        echo "${STATUS_FAILURE}"
        exit 1
    elif echo ${LOG_LINE} | grep -q "${FAIL_MSG2}"
    then
        echo "${STATUS_FAILURE}"
        exit 1
    fi
done
exit
How can I get the status code returned by the script?

From subprocess.check_call's documentation:
If the return code was zero then return, otherwise raise
CalledProcessError. The CalledProcessError object will have the return
code in the returncode attribute.
Therefore, to get the status code, you need to catch the exception and retrieve the returncode attribute.
import subprocess

try:
    subprocess.check_call(['ssh', ssh_argument0, ssh_argument1,
                           '/tmp/remote/dude_rc_admsvr.sh %s %s' % (DomainHome, ACTIVITY)])
except subprocess.CalledProcessError as e:
    print('status code = %s' % e.returncode)

Adding a new answer since you changed the code in your question from using check_call to using Popen and communicate.
From Popen.communicate's documentation:
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
In other words, you need to add stdout=PIPE to your call to Popen's constructor in order for communicate to return the output.
from subprocess import Popen, PIPE

s = Popen(['ssh', ssh_argument0, ssh_argument1,
           '/tmp/remote/dude_rc_admsvr.sh %s %s' % (DomainHome, ACTIVITY)],
          stdout=PIPE)
stdout, stderr = s.communicate()
returncode = s.returncode
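Putting it together, a minimal sketch (same ssh arguments as in the question) that prints the remote output and branches on the exit status:
from subprocess import Popen, PIPE

s = Popen(['ssh', ssh_argument0, ssh_argument1,
           '/tmp/remote/dude_rc_admsvr.sh %s %s' % (DomainHome, ACTIVITY)],
          stdout=PIPE)
out, err = s.communicate()  # err is None because stderr is not piped
print out
if s.returncode == 0:
    print "remote script reported success"
else:
    print "remote script failed with status %s" % s.returncode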

Found the issue:
The tail statement in the shell script was holding the SSH session open, which is why Popen could not see the subshell exit. Killing the leftover tail process (kill -9 on the PID found via ps -eaf | grep tail) resolved the issue.

Related

Returning status of grep command executed remotely via ssh embedded in expect command inside python

I have the Python script below, in which I execute a remote SSH command using expect. Whether or not the target file contains the string "error", the exit code is always returned as success, because only the SSH connectivity is checked. How can I get the status of the grep command?
#!/usr/bin/python
import subprocess

def execute(cmd):
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    _output, _error = proc.communicate()
    _code = proc.returncode
    return _output, _error, _code

host = "localhost"
passwd = "kube"
cmd = "/usr/bin/expect -c 'spawn ssh "+host+" \"cat /home/kube/f1 | grep -qi error\"; expect \"password:\" { send \""+passwd+"\r\"} ;interact' "
_output, _error, return_code = execute(cmd)
print cmd + "\n" + _output + "\n" + _error
if (return_code == 0):
    print "no error"
else:
    print "contains error"
Option 1:
Let the remote command output something which indicates success/failure for you. E.g.:
ssh user@host "/some/command | grep -q some-string && echo MAGIC-SUCCESS || echo MAGIC-FAILURE"
And in Python you can get the output and parse it.
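For example, a minimal parsing sketch, assuming cmd has been updated to run the command above through the question's execute() helper:
_output, _error, return_code = execute(cmd)
if 'MAGIC-SUCCESS' in _output:
    print "no error"
else:
    print "contains error"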
Option 2:
According to man expect:
wait [args]
[...] wait normally returns a list of four integers. The first integer is the pid of the process that was waited upon. The second integer is the corresponding spawn id. The third integer is -1 if an operating system error occurred, or 0 otherwise. If the third integer was 0, the fourth integer is the status returned by the spawned process. If the third integer was -1, the fourth integer is the value of errno set by the operating system. [...]
So your Expect code can check the wait result and then exit with different values and Python code can get the exit status.
For the Expect part, it looks like this:
spawn ...
...
expect eof
set result [wait]
exit [lindex $result 3]
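On the Python side, expect's exit status (now carrying grep's status) comes back as the return code. A minimal sketch, assuming the Expect code above is saved as check_error.exp (a hypothetical filename):
import subprocess

proc = subprocess.Popen(['/usr/bin/expect', 'check_error.exp'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode == 0:
    print "no error"
else:
    print "contains error"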

output from subprocess.check_output in Python is empty for openstack CLI

I tried executing an OpenStack CLI command, openstack volume list | grep -w my_vm1, using subprocess in Python:
output = subprocess.check_output(cmd, shell=True)
In this case
cmd = 'openstack volume list | grep -w my_vm1'.
I observed that the output is null. When I tried:
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
And if I print the output var, it shows me "type 'exceptions.OSError'". Am I missing something?
It seems like you are running into an OSError exception.
I usually run subprocess commands within try / except to catch issues, and use a pipe and communicate() to grab the output from commands. I find this flow more logical.
Something like this:
import subprocess
import sys

try:
    cmd = 'openstack volume list | grep -w my_vm1'
    # shell=True is needed here because the command contains a pipe
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    print(out)
except OSError as e:
    print(e)
    sys.exit(1)
Provided you can run the command as the same user running the script, it should work. If not, this should show you an error message, as long as the command plays by the rules.
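Alternatively, a sketch that keeps check_output: shell=True is what makes the pipe work, and a failed grep (no match) surfaces as CalledProcessError:
import subprocess

try:
    out = subprocess.check_output('openstack volume list | grep -w my_vm1',
                                  shell=True)
    print(out)
except subprocess.CalledProcessError as e:
    # grep exits non-zero when nothing matches, which raises here
    print('exit status %d' % e.returncode)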
I hope this helps!

Subprocess return code different than "echo $?"

I'm using subprocess to call a bash command in Python, and I'm getting a different return code than what the shell shows me.
import subprocess

def check_code(cmd):
    print "received command '%s'" % (cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p.wait()
    print "p.returncode is '%d'" % (p.returncode)
    exit()
    if p.returncode == 0:
        return True
    else:
        return False
    #End if there was a return code at all
#End get_code()
When sent "ls /dev/dsk &> /dev/null", check_code returns 0, but "echo $?" produces "2" in the terminal:
Welcome to Dana version 0.7
Now there is Dana AND ZOL
received command 'ls /dev/dsk &> /dev/null'
p.returncode is '0'
root@Ubuntu-14:~# ls /dev/dsk &> /dev/null
root@Ubuntu-14:~# echo $?
2
root@Ubuntu-14:~#
Does anyone know what's going on here?
According to the subprocess.Popen documentation, the shell used with shell=True is sh. This shell is the POSIX standard, as opposed to Bash, which has several nonstandard features such as the shorthand redirection &> /dev/null. sh, the Bourne shell, interprets this sequence as "run me in the background, and redirect stdout to /dev/null".
Since your subprocess.Popen opens an sh which runs ls in its own background, the return value you see is sh's, not ls's, which in this case is 0.
If you want Bash behavior, you can pass executable='/bin/bash' to Popen alongside shell=True. It's simpler, though, to just use the sh syntax: ls /dev/dsk 2> /dev/null.
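A minimal sketch of that override, assuming Bash lives at /bin/bash:
import subprocess

p = subprocess.Popen('ls /dev/dsk &> /dev/null', shell=True,
                     executable='/bin/bash')
p.wait()
print p.returncode  # now reflects ls itself, e.g. 2 if /dev/dsk does not exist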
Following the suggestion by xi_, I split the command up into space-delimited fields, and it failed to run with "&>" and "/dev/null". I removed them, and it worked.
Then I put the command back together to test it without "&> /dev/null", and that worked too. It appears that the addition of "&> /dev/null" throws subprocess off somehow.
Welcome to Dana version 0.7
Now there is Dana AND ZOL
received command 'cat /etc/fstab'
p.wait() is 0
p.returncode is '0'
received command 'cat /etc/fstabb'
p.wait() is 1
p.returncode is '1'
received command 'cat /etc/fstab &> /dev/null'
p.wait() is 0
p.returncode is '0'
received command 'cat /etc/fstabb &> /dev/null'
p.wait() is 0
p.returncode is '0'
root@Ubuntu-14:~# cat /etc/fstab &> /dev/null
root@Ubuntu-14:~# echo $?
0
root@Ubuntu-14:~# cat /etc/fstabb &> /dev/null
root@Ubuntu-14:~# echo $?
1
root@Ubuntu-14:~#
I originally added the "&> /dev/null" to the call because I was seeing output on the screen from STDERR. Once I added stderr=PIPE to the subprocess call, that went away. I was just trying to silently check the code on the output behind the scenes.
If someone can explain why adding "&> /dev/null" to a subprocess call in Python causes it to behave unexpectedly, I'd be happy to select that as the answer!
You are calling subprocess.Popen(cmd, shell=True) with cmd as a string.
That means subprocess will, under the hood, invoke /bin/sh with your command as an argument, so what you get back is the exit code of that shell.
If you actually need the exit code of your command, split it into a list and use shell=False:
subprocess.Popen(['cmd', 'arg1'], shell=False)
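For example, a minimal sketch using shlex.split, which handles quoted arguments more safely than a plain str.split:
import shlex
import subprocess

p = subprocess.Popen(shlex.split('ls /dev/dsk'))
p.wait()
print(p.returncode)  # the exit status of ls itself, not of a wrapping shell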

How to avoid passing shell constructs to executable using Popen

I am trying to call an executable called foo and pass it some command line arguments. An external script calls the executable with the following command:
./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log
The script uses Popen to execute this command as follows:
from subprocess import Popen
from subprocess import PIPE

def run_command(command, returnObject=False):
    cmd = command.split(' ')
    print('%s' % cmd)
    p = None
    print('command : %s' % command)
    if returnObject:
        p = Popen(cmd)
    else:
        p = Popen(cmd)
        p.communicate()
        print('returncode: %s' % p.returncode)
        return p.returncode
    return p

command = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log"
run_command(command)
However, this passes extra arguments ['2>&1', '|', '/usr/bin/tee', 'temp.log'] to the foo executable.
How can I get rid of these extra arguments getting passed to foo while maintaining the functionality?
I have tried shell=True but have read that it should be avoided for security reasons (shell injection attacks). I'm looking for a neat solution.
Thanks
UPDATE:
- Updated the file following the tee command
The string
./main/foo --config config_file 2>&1 | /usr/bin/tee >temp.log
...is full of shell constructs. These have no meaning to anything without a shell in play. Thus, you have two options:
Set shell=True
Replace them with native Python code.
For instance, 2>&1 is the same thing as passing stderr=subprocess.STDOUT to Popen, and your tee -- since its output is redirected and it's passed no arguments -- could just be replaced with stdout=open('temp.log', 'w').
Thus:
p = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                     stderr=subprocess.STDOUT,
                     stdout=open('temp.log', 'w'))
...or, if you really did want the tee command, but were just using it incorrectly (that is, if you wanted tee temp.log, not tee >temp.log):
p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                      stderr=subprocess.STDOUT,
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
stdout, _ = p2.communicate()
Wrapping this in a function, and checking success for both ends might look like:
def run():
    p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                          stderr=subprocess.STDOUT,
                          stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
    p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
    # True if both processes were successful, False otherwise
    return (p2.wait() == 0 and p1.wait() == 0)
By the way -- if you want to use shell=True and return the exit status of foo, rather than tee, things get a bit more interesting. Consider the following:
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
...the pipefail bash extension will force the shell to exit with the status of the first pipeline component to fail (and 0 if no components fail), rather than using only the exit status of the final component.
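A minimal sketch of the pipefail approach, assuming command_str holds the original pipeline:
import subprocess

command_str = './main/foo --config config_file 2>&1 | /usr/bin/tee temp.log'
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
p.wait()
print(p.returncode)  # non-zero if any component of the pipeline failed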
Here are a couple of "neat" code examples in addition to the explanation from @Charles Duffy's answer.
To run the shell command in Python:
#!/usr/bin/env python
from subprocess import check_call
check_call("./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log",
           shell=True)
without the shell:
#!/usr/bin/env python
from subprocess import Popen, PIPE, STDOUT
tee = Popen(["/usr/bin/tee", "temp.log"], stdin=PIPE)
foo = Popen("./main/foo --config config_file".split(),
            stdout=tee.stdin, stderr=STDOUT)
tee.stdin.close()  # close our copy of the write end so tee sees EOF when foo exits
pipestatus = [foo.wait(), tee.wait()]
Note: don't use "command arg".split() with non-literal strings.
See How do I use subprocess.Popen to connect multiple processes by pipes?
You may combine the answers to two StackOverflow questions:
1. piping together several subprocesses (the x | y problem)
2. Merging a Python script's subprocess' stdout and stderr while keeping them distinguishable (the 2>&1 problem)

How to check the status of a shell script using subprocess module in Python?

I have a simple Python script which executes a shell script using the subprocess module.
Below is my Python script, which calls the testing.sh shell script, and it works fine.
import os
import json
import subprocess
jsonData = '{"pp": [0,3,5,7,9], "sp": [1,2,4,6,8]}'
jj = json.loads(jsonData)
print jj['pp']
print jj['sp']
os.putenv( 'jj1', 'Hello World 1')
os.putenv( 'jj2', 'Hello World 2')
os.putenv( 'jj3', ' '.join( str(v) for v in jj['pp'] ) )
os.putenv( 'jj4', ' '.join( str(v) for v in jj['sp'] ) )
print "start"
subprocess.call(['./testing.sh'])
print "end"
And below is my shell script -
#!/bin/bash
for el1 in $jj3
do
echo "$el1"
done
for el2 in $jj4
do
echo "$el2"
done
for i in $( david ); do
echo item: $i
done
Now the question I have is this: in my Python script I print start, then execute the shell script, and then print end. If the shell script has a problem for whatever reason, I don't want to print end.
In the above example, the shell script will not run properly because david is not a Linux command, so it will throw an error. How can I check the status of the entire bash script and then decide whether to print end or not?
I have just added a for loop as an example; it could be any shell script.
Is it possible to do?
You can check the stderr of the bash script rather than the return code.
proc = subprocess.Popen('./testing.sh', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
if stderr:
    print "Shell script gave some error"
else:
    print "end"  # Shell script ran fine.
Just use the returned value from call():
import subprocess
rc = subprocess.call("true")
assert rc == 0 # zero exit status means success
rc = subprocess.call("false")
assert rc != 0 # non-zero means failure
You could use check_call() to raise an exception automatically if the command fails instead of checking the returned code manually:
rc = subprocess.check_call("true")  # <-- no exception
assert rc == 0
try:
    subprocess.check_call("false")  # raises an exception
except subprocess.CalledProcessError as e:
    assert e.returncode == 1
else:
    assert 0, "never happens"
Well, according to the docs, .call will return the exit code back to you. You may want to check that you actually get an error return code, though. (I think the for loop will still return a 0 code since it more-or-less finished.)
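Tying that back to the original script, a minimal sketch that only prints end when the shell script succeeds:
import subprocess

print "start"
rc = subprocess.call(['./testing.sh'])
if rc == 0:
    print "end"
else:
    print "shell script failed with status %d" % rc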
