If I run echo a; echo b in bash, both commands are run. However, if I use subprocess, only the first command (echo) runs, and it prints out the whole of the rest of the line as its argument.
The code below echoes "a; echo b" instead of "a" and "b" on separate lines. How do I get it to run both commands?
import subprocess, shlex

def subprocess_cmd(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)

subprocess_cmd("echo a; echo b")
You have to pass shell=True to subprocess and skip shlex.split:
import subprocess
command = "echo a; echo b"
ret = subprocess.run(command, capture_output=True, shell=True)
# before Python 3.7:
# ret = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
print(ret.stdout.decode())
This prints:
a
b
I just stumbled on a situation where I needed to run a bunch of lines of bash code (not separated by semicolons) from within Python. In this scenario the proposed solutions do not help. One approach would be to save the code to a file and then run it with Popen, but that wasn't possible in my situation.
What I ended up doing is something like:
commands = '''
echo "a"
echo "b"
echo "c"
echo "d"
'''
import subprocess

# text=True (universal_newlines=True before Python 3.7) lets communicate() accept a str
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, err = process.communicate(commands)
print(out)
So I first create the child bash process and afterwards tell it what to execute. This approach removes the limitations of passing the command directly to the Popen constructor.
Join commands with "&&". Note that && runs the second command only if the first one succeeds; use ";" if it should run unconditionally.
os.system('echo a > outputa.txt && echo b > outputb.txt')
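If you prefer subprocess over os.system, the same one-liner works there too (a minimal sketch; shell=True is needed so the shell interprets the "&&"):
import subprocess

# "echo b" runs only if "echo a" exited with status 0, because && short-circuits
subprocess.run('echo a > outputa.txt && echo b > outputb.txt', shell=True)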
If you're only running the commands in one shot, then you can just use the subprocess.check_output convenience function:
import subprocess

def subprocess_cmd(command):
    output = subprocess.check_output(command, shell=True)
    print(output)
>>> command = "echo a; echo b"
>>> shlex.split(command)
['echo', 'a;', 'echo', 'b']
So the problem is that the shlex module does not treat ";" as a command separator.
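If you'd rather avoid shell=True altogether, one workaround (a sketch, assuming none of the commands contains a quoted ";") is to split on the semicolons yourself and run each piece through shlex.split:
import shlex
import subprocess

# run each ;-separated command as its own shell-free process
for part in "echo a; echo b".split(";"):
    result = subprocess.run(shlex.split(part), stdout=subprocess.PIPE)
    print(result.stdout.decode(), end="")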
I got errors like the following when I used capture_output=True:
TypeError: __init__() got an unexpected keyword argument 'capture_output'
(capture_output was only added in Python 3.7.) After making the changes below, it works fine:
import subprocess
command = '''ls'''
result = subprocess.run(command, stdout=subprocess.PIPE,shell=True)
print(result.stdout.splitlines())
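If the script has to run on both old and new interpreters, a version check (a small sketch) can pick the right keyword at runtime:
import subprocess
import sys

command = "ls"
if sys.version_info >= (3, 7):
    # capture_output=True is shorthand for stdout=PIPE, stderr=PIPE (3.7+)
    result = subprocess.run(command, capture_output=True, shell=True)
else:
    result = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
print(result.stdout.splitlines())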
import subprocess

cmd = "vsish -e ls /vmkModules/lsom/disks/ | cut -d '/' -f 1 | while read diskID ; do echo $diskID; vsish -e cat /vmkModules/lsom/disks/$diskID/virstoStats | grep -iE 'Delete pending |trims currently queued' ; echo '====================' ;done ;"

def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    for line in proc_stdout.decode().split('\n'):
        print(line)

subprocess_cmd(cmd)
Related
I have the following code to get a list of processes with sudo:
sudoPass = 'mypass'
command = "launchctl list | grep -v com.apple"
x = os.system('echo %s|sudo -S %s' % (sudoPass, command))
But I receive the answer as an int, and I need it as a str. Is it possible to get the output as a string without losing data?
os.system returns (in most cases; see https://docs.python.org/3/library/os.html#os.system) the exit status of the process, meaning that 0 usually indicates everything went fine.
What you are looking for is the subprocess module (https://docs.python.org/3/library/subprocess.html), which allows you to capture output, like so:
import subprocess
sudoPass = 'mypass\n' #Note the new line
command = "launchctl list | grep -v com.apple"
x = subprocess.Popen('echo %s|sudo -S %s' % (sudoPass, command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = x.communicate()
print(stdout)
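One caution: echoing the password puts it on a command line, where it is visible in the process list. A hedged alternative sketch feeds it to sudo -S via communicate() instead, wrapping the pipeline in sh -c since it still needs a shell:
import subprocess

sudoPass = 'mypass\n'  # sudo -S reads the password from stdin; note the newline
x = subprocess.Popen(['sudo', '-S', 'sh', '-c', 'launchctl list | grep -v com.apple'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, text=True)
stdout, stderr = x.communicate(sudoPass)
print(stdout)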
I'm using subprocess to call a bash command in Python, and I'm getting a different return code than what the shell shows me.
import subprocess

def check_code(cmd):
    print("received command '%s'" % (cmd))
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p.wait()
    print("p.returncode is '%d'" % (p.returncode))
    exit()  # debugging: stop here so the return code can be inspected
    if p.returncode == 0:
        return True
    else:
        return False
    #End if there was a return code at all
#End get_code()
When sent "ls /dev/dsk &> /dev/null", check_code returns 0, but "echo $?" produces "2" in the terminal:
Welcome to Dana version 0.7
Now there is Dana AND ZOL
received command 'ls /dev/dsk &> /dev/null'
p.returncode is '0'
root@Ubuntu-14:~# ls /dev/dsk &> /dev/null
root@Ubuntu-14:~# echo $?
2
root@Ubuntu-14:~#
Does anyone know what's going on here?
According to subprocess.Popen, the shell used in your Python script is sh. This shell is the POSIX standard, as opposed to Bash, which has several nonstandard features such as the shorthand redirection &> /dev/null. sh, the Bourne shell, interprets this symbol as "run me in the background, and redirect stdout to /dev/null".
Since your subprocess.Popen opens an sh which runs ls in its own background, the return value of sh itself, rather than ls's, is used, and in this case it is 0.
If you want Bash behavior from your Python, you don't have to rebuild anything: you can pass executable='/bin/bash' to Popen to replace the default /bin/sh. Even simpler is to stick to sh-compatible syntax, which is ls /dev/dsk > /dev/null 2>&1.
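For instance, a minimal sketch of the executable override:
import subprocess

# executable replaces the default /bin/sh used by shell=True, so bash-only
# syntax such as "&>" behaves the same as in the terminal
p = subprocess.Popen('ls /dev/dsk &> /dev/null', shell=True, executable='/bin/bash')
p.wait()
print(p.returncode)  # 2 if /dev/dsk does not exist, matching echo $?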
Following the suggestion by xi_, I split the command up into space-delimited fields, and it failed to run with "&>" and "/dev/null". I removed them, and it worked.
Then I put the command back together to test it without "&> /dev/null", and that worked too. It appears that the addition of "&> /dev/null" somehow throws subprocess off.
Welcome to Dana version 0.7
Now there is Dana AND ZOL
received command 'cat /etc/fstab'
p.wait() is 0
p.returncode is '0'
received command 'cat /etc/fstabb'
p.wait() is 1
p.returncode is '1'
received command 'cat /etc/fstab &> /dev/null'
p.wait() is 0
p.returncode is '0'
received command 'cat /etc/fstabb &> /dev/null'
p.wait() is 0
p.returncode is '0'
root@Ubuntu-14:~# cat /etc/fstab &> /dev/null
root@Ubuntu-14:~# echo $?
0
root@Ubuntu-14:~# cat /etc/fstabb &> /dev/null
root@Ubuntu-14:~# echo $?
1
root@Ubuntu-14:~#
I originally added the "&> /dev/null" to the call because I was seeing output on the screen from STDERR. Once I added stderr=PIPE to the subprocess call, that went away. I was just trying to check the return code silently behind the scenes.
If someone can explain why adding "&> /dev/null" to a subprocess call in Python causes it to behave unexpectedly, I'd be happy to select that as the answer!
You are using it as subprocess.Popen(cmd, shell=True), with cmd as a string.
That means that subprocess will, under the hood, call /bin/sh with your command as its argument, so you are getting back the exit code of that shell.
If you actually need the exit code of your command, split it into a list and use shell=False:
subprocess.Popen(['cmd', 'arg1'], shell=False)
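For commands with quoted arguments, shlex.split can build that list for you (a sketch):
import shlex
import subprocess

# no shell is involved, so p.returncode is the command's own exit status
p = subprocess.Popen(shlex.split("ls /dev/dsk"),
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
print(p.returncode)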
I am trying to call an executable called foo, and pass it some command line arguments. An external script calls into the executable and uses the following command:
./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log
The script uses Popen to execute this command as follows:
from subprocess import Popen
from subprocess import PIPE
def run_command(command, returnObject=False):
    cmd = command.split(' ')
    print('%s' % cmd)
    p = None
    print('command : %s' % command)
    if returnObject:
        p = Popen(cmd)
    else:
        p = Popen(cmd)
        p.communicate()
        print('returncode: %s' % p.returncode)
        return p.returncode
    return p

command = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log"
run_command(command)
However, this passes extra arguments ['2>&1', '|', '/usr/bin/tee', 'temp.log'] to the foo executable.
How can I get rid of these extra arguments getting passed to foo while maintaining the functionality?
I have tried shell=True, but read about avoiding it for security purposes (shell injection attacks). I'm looking for a neat solution.
Thanks
UPDATE:
- Updated the file following the tee command
The string
./main/foo --config config_file 2>&1 | /usr/bin/tee >temp.log
...is full of shell constructs. These have no meaning to anything without a shell in play. Thus, you have two options:
Set shell=True
Replace them with native Python code.
For instance, 2>&1 is the same thing as passing stderr=subprocess.STDOUT to Popen, and your tee -- since its output is redirected and it's passed no arguments -- could just be replaced with stdout=open('temp.log', 'w').
Thus:
p = subprocess.Popen(['./main/foo', '--config', 'config_file'],
stderr=subprocess.STDOUT,
stdout=open('temp.log', 'w'))
...or, if you really did want the tee command, but were just using it incorrectly (that is, if you wanted tee temp.log, not tee >temp.log):
p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
p1.stdout.close() # drop our own handle so p2's stdin is the only handle on p1.stdout
stdout, _ = p2.communicate()
Wrapping this in a function, and checking success for both ends might look like:
def run():
    p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                          stderr=subprocess.STDOUT,
                          stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
    p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
    # True if both processes were successful, False otherwise
    return p2.wait() == 0 and p1.wait() == 0
By the way -- if you want to use shell=True and return the exit status of foo, rather than tee, things get a bit more interesting. Consider the following:
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
...the pipefail bash extension will force the shell to exit with the status of the first pipeline component to fail (and 0 if no components fail), rather than using only the exit status of the final component.
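A usage sketch of that idea, reusing the questioner's pipeline as command_str:
import subprocess

command_str = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log"
# pipefail makes the shell's exit status reflect the first failing component,
# so a failure in foo is not masked by tee succeeding
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
print(p.wait())  # nonzero if any component of the pipeline failed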
Here are a couple of "neat" code examples in addition to the explanation from @Charles Duffy's answer.
To run the shell command in Python:
#!/usr/bin/env python
from subprocess import check_call
check_call("./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log",
shell=True)
without the shell:
#!/usr/bin/env python
from subprocess import Popen, PIPE, STDOUT
tee = Popen(["/usr/bin/tee", "temp.log"], stdin=PIPE)
foo = Popen("./main/foo --config config_file".split(),
stdout=tee.stdin, stderr=STDOUT)
pipestatus = [foo.wait()]
tee.stdin.close()  # close our copy of the pipe so tee sees EOF and can exit
pipestatus.append(tee.wait())
Note: don't use "command arg".split() with non-literal strings.
See How do I use subprocess.Popen to connect multiple processes by pipes?
You may combine the answers to two Stack Overflow questions:
1. piping together several subprocesses (the x | y problem)
2. Merging a Python script's subprocess' stdout and stderr, while keeping them distinguishable (the 2>&1 problem)
I want to run a bash command from the Python shell.
My bash command is:
grep -Po "(?<=<cite>).*?(?=</cite>)" /tmp/file1.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt
What I tried is:
#!/usr/bin/python
import commands
commands.getoutput('grep ' + '-Po ' + '\"\(?<=<dev>\).*?\(?=</dev>\)\" ' + '/tmp/file.txt ' + '| ' + 'awk \'!x[$0]++\' ' + '> ' + '/tmp/file2.txt')
But I don't get any result.
Thank you
If you want to avoid splitting your arguments and worrying about pipes, you can use the shell=True option:
cmd = "grep -Po \"(?<=<dev>).*?(?=</dev>)\" /tmp/file.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > file2.txt"
out = subprocess.check_output(cmd, shell=True)
This will run a subshell which understands all your directives, including "|" for piping and ">" for redirection. If you do not do this, these symbols, which are normally parsed by the shell, will just be passed to the grep program as arguments.
Otherwise, you have to create the pipes yourself. For example (untested code below):
grep_p = subprocess.Popen(["grep", "-Po", "(?<=<dev>).*?(?=</dev>)", "/tmp/file.txt"], stdout=subprocess.PIPE)
awk_p = subprocess.Popen(["awk", "-F/", "{print $1}"], stdin=grep_p.stdout, stdout=subprocess.PIPE)
file2_fh = open("file2.txt", "w")
awk_p_2 = subprocess.Popen(["awk", "!x[$0]++"], stdout=file2_fh, stdin=awk_p.stdout)
awk_p_2.communicate()
However, you're missing the point of Python if you are doing this. You should instead look into the re module (re.match, re.sub, re.search), though I'm not familiar enough with awk to translate your commands.
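As a rough illustration (a hedged sketch, not a tested translation), the whole pipeline could become pure Python with re:
import re

# everything between <cite> and </cite>, like grep -Po
with open('/tmp/file1.txt') as f:
    matches = re.findall(r'(?<=<cite>).*?(?=</cite>)', f.read())

seen = set()
with open('/tmp/file2.txt', 'w') as out:
    for m in matches:
        field = m.split('/')[0]    # awk -F/ '{print $1}'
        if field not in seen:      # awk '!x[$0]++' drops duplicates, keeps order
            seen.add(field)
            out.write(field + '\n')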
The recommended way to run system commands in Python is to use the subprocess module.
import subprocess

a = ['grep', '-Po', '(?<=<dev>).*?(?=</dev>)', '/tmp/file.txt']
b = ['awk', '-F/', '{print $1}']
c = ['awk', '!x[$0]++']

p1 = subprocess.Popen(a, stdout=subprocess.PIPE)
p2 = subprocess.Popen(b, stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(c, stdin=p2.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
p2.stdout.close()
out, err = p3.communicate()
print(out)
The point of creating pipes between each subprocess is for security and debugging reasons. It also makes the code much clearer about where each process gets its input and sends its output.
Let us write a simple function to easily deal with these messy pipes for us:
def subprocess_pipes(pipes, last_pipe_out=None):
    import subprocess
    from subprocess import PIPE
    last_p = None
    for cmd in pipes:
        out_pipe = PIPE if not (cmd == pipes[-1] and last_pipe_out) else open(last_pipe_out, "w")
        cmd = cmd if isinstance(cmd, list) else cmd.split(" ")
        in_pipe = last_p.stdout if last_p else None
        p = subprocess.Popen(cmd, stdout=out_pipe, stdin=in_pipe)
        last_p = p
    comm = last_p.communicate()
    return comm
Then we run,
subprocess_pipes(("ps ax", "grep python"), last_pipe_out = "test.out.2")
The result is a "test.out.2" file with the contents of piping "ps ax" into "grep python".
In your case,
a = ["grep", "-Po", "(?<=<cite>).*?(?=</cite>)", "/tmp/file1.txt"]
b = ["awk", "-F/", "{print $1}"]
c = ["awk", "!x[$0]++"]
subprocess_pipes((a, b, c), last_pipe_out = "/tmp/file2.txt")
The commands module is obsolete now (it was removed entirely in Python 3).
If you don't actually need the output of your command you can use
import os
exit_status = os.system("your-command")
Otherwise you can use
import subprocess
out, err = subprocess.Popen("your | commands", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
Note: your command sends stdout to file2.txt, so I wouldn't expect to see anything in out. You will, however, still see error messages on stderr, which will end up in err.
You must use
import os
os.system(command)
I think what you are looking for is something like:
subprocess.check_output(same arguments as Popen, **kwargs) -- use it the same way you would use a Popen command; it should show you the output of the program that's being called.
For more details, here is a link: http://freefilesdl.com/how-to-call-a-shell-command-from-python/
I have a script I'd like to execute from Python via subprocess (yes, it has to be sh).
Now I call sh like so:
subprocess.check_call( ['sh' + command] )
where command is:
echo 'someformat : : '${ENV_VAR}'/efc ;' > targetfile
Sadly this gives me:
sh: 0: Can't open echo 'someformat : : '${ENV_VAR}'/efc ;' > targetfile
Could someone please walk me through the steps to get the command working in sh, and explain why?
You have to run sh with -c param:
subprocess.check_call( ['sh', '-c', command] )
Try this:
command = "echo 'someformat : : '${ENV_VAR}'/efc ;' > targetfile"
subprocess.check_call(["sh", "-c", command])
The -c parameter tells sh to read commands from the string in the next argument, rather than treating that argument as the name of a script file (which is what produced your error).
And the arguments must be passed as a list.
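Since the command references ${ENV_VAR}, sh will expand it from the child's environment; you can set it explicitly through the env argument (a sketch, with a made-up value for ENV_VAR):
import os
import subprocess

command = "echo 'someformat : : '${ENV_VAR}'/efc ;' > targetfile"
# the value of ENV_VAR here is hypothetical; sh substitutes it before running echo
env = dict(os.environ, ENV_VAR='/opt/example')
subprocess.check_call(['sh', '-c', command], env=env)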
python 2 subprocess doc
python 3 subprocess doc