I want to run a bash command from the Python shell.
My bash command is:
grep -Po "(?<=<cite>).*?(?=</cite>)" /tmp/file1.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt
what I tried is:
#!/usr/bin/python
import commands
commands.getoutput('grep ' + '-Po ' + '\"\(?<=<dev>\).*?\(?=</dev>\)\" ' + '/tmp/file.txt ' + '| ' + 'awk \'!x[$0]++\' ' + '> ' + '/tmp/file2.txt')
But I don't get any result.
Thank you
If you want to avoid splitting your arguments and worrying about pipes, you can use the shell=True option:
cmd = "grep -Po \"(?<=<dev>).*?(?=</dev>)\" /tmp/file.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > file2.txt"
out = subprocess.check_output(cmd, shell=True)
This will run a subshell which understands all your directives, including "|" for piping and ">" for redirection. If you do not do this, these symbols, which are normally parsed by the shell, will just be passed on to the grep program.
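Applied to the exact command from the question (a sketch; the <cite> pattern and /tmp paths are taken from the question):

import subprocess

cmd = ("grep -Po '(?<=<cite>).*?(?=</cite>)' /tmp/file1.txt"
       " | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt")
subprocess.check_output(cmd, shell=True)  # the result lands in /tmp/file2.txt via the shell redirection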
Otherwise, you have to create the pipes yourself. For example (untested code below):
grep_p = subprocess.Popen(["grep", "-Po", "(?<=<dev>).*?(?=</dev>)", "/tmp/file.txt"], stdout=subprocess.PIPE)
awk_p = subprocess.Popen(["awk", "-F/", "{print $1}"], stdin=grep_p.stdout, stdout=subprocess.PIPE)
file2_fh = open("file2.txt", "w")
awk_p_2 = subprocess.Popen(["awk", "!x[$0]++"], stdout=file2_fh, stdin=awk_p.stdout)
awk_p_2.communicate()
However, you're missing the point of Python if you are doing this. You should instead look into the re module (re.match, re.sub, re.search), though I'm not familiar enough with awk to translate your commands exactly.
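For example, a minimal pure-Python sketch of the same pipeline (assuming the <cite>...</cite> format and the /tmp paths from the question; untested against your real data):

import re

with open("/tmp/file1.txt") as f:
    text = f.read()

seen = set()
with open("/tmp/file2.txt", "w") as out:
    # grep -Po "(?<=<cite>).*?(?=</cite>)"  ->  re.findall with a capture group
    for match in re.findall(r"<cite>(.*?)</cite>", text):
        first_field = match.split("/")[0]   # awk -F/ '{print $1}'
        if first_field not in seen:         # awk '!x[$0]++' (keep first occurrence only)
            seen.add(first_field)
            out.write(first_field + "\n")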
The recommended way to run system commands in Python is to use the subprocess module.
import subprocess
a = ['grep', '-Po', '(?<=<dev>).*?(?=</dev>)', '/tmp/file.txt']
b = ['awk', '-F/', '{print $1}']
c = ['awk', '!x[$0]++']
p1 = subprocess.Popen(a,stdout=subprocess.PIPE)
p2 = subprocess.Popen(b,stdin=p1.stdout,stdout=subprocess.PIPE)
p3 = subprocess.Popen(c,stdin=p2.stdout,stdout=subprocess.PIPE)
p1.stdout.close()
p2.stdout.close()
out,err=p3.communicate()
print out
Creating pipes between each subprocess avoids going through the shell, which is better for security and for debugging. It also makes the code much clearer about which process gets its input from where and sends its output where.
Let us write a simple function to easily deal with these messy pipes for us:
def subprocess_pipes(pipes, last_pipe_out=None):
    import subprocess
    from subprocess import PIPE
    last_p = None
    for cmd in pipes:
        out_pipe = PIPE if not (cmd == pipes[-1] and last_pipe_out) else open(last_pipe_out, "w")
        cmd = cmd if isinstance(cmd, list) else cmd.split(" ")
        in_pipe = last_p.stdout if last_p else None
        p = subprocess.Popen(cmd, stdout=out_pipe, stdin=in_pipe)
        last_p = p
    comm = last_p.communicate()
    return comm
Then we run,
subprocess_pipes(("ps ax", "grep python"), last_pipe_out = "test.out.2")
The result is a "test.out.2" file with the contents of piping "ps ax" into "grep python".
In your case,
a = ["grep", "-Po", "(?<=<cite>).*?(?=</cite>)", "/tmp/file1.txt"]
b = ["awk", "-F/", "{print $1}"]
c = ["awk", "!x[$0]++"]
subprocess_pipes((a, b, c), last_pipe_out = "/tmp/file2.txt")
The commands module is obsolete now.
If you don't actually need the output of your command you can use
import os
exit_status = os.system("your-command")
Otherwise you can use
import subprocess
out, err = subprocess.Popen("your | commands", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell = True).communicate()
Note: for your command you send stdout to file2.txt, so I wouldn't expect to see anything in out. You will, however, still see error messages on stderr, which will go into err.
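For example (a sketch using just the grep stage of the question's pipeline, with the paths from the question):

import subprocess

out, err = subprocess.Popen("grep -Po '(?<=<cite>).*?(?=</cite>)' /tmp/file1.txt > /tmp/file2.txt",
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
print(out)   # empty: stdout was redirected to /tmp/file2.txt by the shell
print(err)   # any error messages from grep appear here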
You must use
import os
os.system(command)
I think what you are looking for is something like:
subprocess.check_output(same as Popen arguments, **kwargs). Use it the same way you would use a Popen command; it should show you the output of the program that's being called.
For more details here is a link: http://freefilesdl.com/how-to-call-a-shell-command-from-python/
Related
If I run echo a; echo b in bash, the result is that both commands are run. However, if I use subprocess, only the first command is run, and it prints out the whole of the rest of the line as its arguments.
The code below echoes a; echo b instead of a then b. How do I get it to run both commands?
import subprocess, shlex
def subprocess_cmd(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    proc_stdout = process.communicate()[0].strip()
    print proc_stdout
subprocess_cmd("echo a; echo b")
You have to use shell=True in subprocess and not use shlex.split:
import subprocess
command = "echo a; echo b"
ret = subprocess.run(command, capture_output=True, shell=True)
# before Python 3.7:
# ret = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
print(ret.stdout.decode())
returns:
a
b
I just stumbled on a situation where I needed to run a bunch of lines of bash code (not separated by semicolons) from within Python. In this scenario the proposed solutions do not help. One approach would be to save the code to a file and then run it with Popen, but that wasn't possible in my situation.
What I ended up doing is something like:
commands = '''
echo "a"
echo "b"
echo "c"
echo "d"
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = process.communicate(commands)
print out
So I first create the child bash process and then tell it what to execute. This approach removes the limitations of passing the command directly to the Popen constructor.
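In Python 3 the same approach needs text mode (or bytes) for communicate(); a minimal sketch of the adjustment:

import subprocess

commands = '''
echo "a"
echo "b"
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                           universal_newlines=True)  # text mode, so communicate() accepts a str
out, err = process.communicate(commands)
print(out)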
Join commands with "&&".
os.system('echo a > outputa.txt && echo b > outputb.txt')
If you're only running the commands in one shot then you can just use the subprocess.check_output convenience function:
def subprocess_cmd(command):
    output = subprocess.check_output(command, shell=True)
    print output
>>> command = "echo a; echo b"
>>> shlex.split(command);
['echo', 'a; echo', 'b']
So the problem is that the shlex module does not handle ";".
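If you want to stay with shell=False, one option is to split on ";" yourself and run each command separately; a rough sketch (it assumes the individual commands contain no further shell syntax):

import shlex
import subprocess

command = "echo a; echo b"
for part in command.split(";"):
    # each piece is a plain command, so shlex.split works on it
    out = subprocess.check_output(shlex.split(part.strip()))
    print(out.decode().strip())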
I got an error like this when I used capture_output=True:
TypeError: __init__() got an unexpected keyword argument 'capture_output'
After making the changes below, it works fine:
import subprocess
command = '''ls'''
result = subprocess.run(command, stdout=subprocess.PIPE,shell=True)
print(result.stdout.splitlines())
import subprocess
cmd = "vsish -e ls /vmkModules/lsom/disks/ | cut -d '/' -f 1 | while read diskID ; do echo $diskID; vsish -e cat /vmkModules/lsom/disks/$diskID/virstoStats | grep -iE 'Delete pending |trims currently queued' ; echo '====================' ;done ;"
def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    for line in proc_stdout.decode().split('\n'):
        print(line)
subprocess_cmd(cmd)
I am trying to run the following awk command inside Python, but I get a syntax error related to the quotes:
import subprocess
COMMAND = "df /dev/sda1 | awk /'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}"
subprocess.call(COMMAND, shell=True)
I tried to escape the quotes but I am still getting the same error.
You may want to put ''' or """ around the string since you have both ' and ".
import subprocess
COMMAND = '''df /dev/sda1 | awk 'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}' '''
subprocess.call(COMMAND, shell=True)
There also seems to be a relevant answer for this already: awk commands within python script
Try this:
import subprocess
COMMAND="df /dev/sda1 | awk 'NR==2 {sub(\"%\",\"\",$5); if ($5 >= 80) {printf \"Warning! Space usage is %d%%\", $5}}'"
subprocess.Popen(COMMAND,stdin=subprocess.PIPE,stdout=subprocess.PIPE, shell=True).stdout.read()
I was writing a Python script for deployment purposes, and one part of the script was to explicitly kill the process if it was not stopped successfully.
Below is the Python code, which actually performs:
Find the processId of the process named myApplication
ps -ef | grep myApplication | grep -v grep | awk {'print $2'}
and then perform
kill -9 PID //where PID is output of earlier command
import os
import signal
import subprocess

def killApplicationProcessIfStillRunning(app_name):
    p1 = subprocess.Popen(['ps', '-ef'], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['grep', app_name], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(['grep', '-v', 'grep'], stdin=p2.stdout, stdout=subprocess.PIPE)
    p4 = subprocess.Popen(['awk', '{print $2}'], stdin=p3.stdout, stdout=subprocess.PIPE)
    out, err = p4.communicate()
    if out:
        print 'Attempting to kill ' + app_name + ' process with PID ' + out.splitlines()[0]
        os.kill(int(out.splitlines()[0]), signal.SIGKILL)
Now invoke the above method as
killApplicationProcessIfStillRunning('myApplication')
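For reference, a shorter sketch of the same idea using pgrep instead of the ps | grep | grep -v grep | awk chain (an alternative, not part of the original script; it assumes pgrep is available and uses the same hypothetical app name):

import os
import signal
import subprocess

def kill_if_running(app_name):
    try:
        # pgrep -f matches against the full command line
        pids = subprocess.check_output(['pgrep', '-f', app_name]).split()
    except subprocess.CalledProcessError:
        return  # pgrep exits non-zero when nothing matches
    for pid in pids:
        os.kill(int(pid), signal.SIGKILL)

kill_if_running('myApplication')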
Hope it helps someone.
I am trying to call an executable called foo, and pass it some command line arguments. An external script calls into the executable and uses the following command:
./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log
The script uses Popen to execute this command as follows:
from subprocess import Popen
from subprocess import PIPE
def run_command(command, returnObject=False):
    cmd = command.split(' ')
    print('%s' % cmd)
    p = None
    print('command : %s' % command)
    if returnObject:
        p = Popen(cmd)
    else:
        p = Popen(cmd)
        p.communicate()
        print('returncode: %s' % p.returncode)
        return p.returncode
    return p
command = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log
"
run_command(command)
However, this passes extra arguments ['2>&1', '|', '/usr/bin/tee', 'temp.log'] to the foo executable.
How can I get rid of these extra arguments getting passed to foo while maintaining the functionality?
I have tried shell=True but read about avoiding it for security purposes (shell injection attack). Looking for a neat solution.
Thanks
UPDATE:
- Updated the file following the tee command
The string
./main/foo --config config_file 2>&1 | /usr/bin/tee >temp.log
...is full of shell constructs. These have no meaning to anything without a shell in play. Thus, you have two options:
Set shell=True
Replace them with native Python code.
For instance, 2>&1 is the same thing as passing stderr=subprocess.STDOUT to Popen, and your tee -- since its output is redirected and it's passed no arguments -- could just be replaced with stdout=open('temp.log', 'w').
Thus:
p = subprocess.Popen(['./main/foo', '--config', 'config_file'],
stderr=subprocess.STDOUT,
stdout=open('temp.log', 'w'))
...or, if you really did want the tee command, but were just using it incorrectly (that is, if you wanted tee temp.log, not tee >temp.log):
p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
stderr=subprocess.STDOUT,
stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
p1.stdout.close() # drop our own handle so p2's stdin is the only handle on p1.stdout
stdout, _ = p2.communicate()
Wrapping this in a function, and checking success for both ends might look like:
def run():
    p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                          stderr=subprocess.STDOUT,
                          stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
    p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
    # True if both processes were successful, False otherwise
    return (p2.wait() == 0 and p1.wait() == 0)
By the way -- if you want to use shell=True and return the exit status of foo, rather than tee, things get a bit more interesting. Consider the following:
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
...the pipefail bash extension will force the shell to exit with the status of the first pipeline component to fail (and 0 if no components fail), rather than using only the exit status of the final component.
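A usage sketch (command_str here stands for the pipeline string from the question):

import subprocess

command_str = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log"
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
p.wait()
# with pipefail, tee succeeding no longer masks a failure in foo
print(p.returncode)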
Here are a couple of "neat" code examples in addition to the explanation from @Charles Duffy's answer.
To run the shell command in Python:
#!/usr/bin/env python
from subprocess import check_call
check_call("./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log",
shell=True)
without the shell:
#!/usr/bin/env python
from subprocess import Popen, PIPE, STDOUT
tee = Popen(["/usr/bin/tee", "temp.log"], stdin=PIPE)
foo = Popen("./main/foo --config config_file".split(),
stdout=tee.stdin, stderr=STDOUT)
pipestatus = [foo.wait(), tee.wait()]
Note: don't use "command arg".split() with non-literal strings.
See How do I use subprocess.Popen to connect multiple processes by pipes?
You may combine answers to two StackOverflow questions:
1. piping together several subprocesses
x | y problem
2. Merging a Python script's subprocess' stdout and stderr (while keeping them distinguishable)
2>&1 problem
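Putting the two together for your command, a rough sketch (untested):

from subprocess import Popen, PIPE, STDOUT

foo = Popen(['./main/foo', '--config', 'config_file'],
            stdout=PIPE, stderr=STDOUT)                      # 2>&1: merge stderr into stdout
tee = Popen(['/usr/bin/tee', 'temp.log'], stdin=foo.stdout)  # | /usr/bin/tee temp.log
foo.stdout.close()  # let tee see EOF when foo exits
statuses = [foo.wait(), tee.wait()]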
How could I run this code using the subprocess module?
commands.getoutput('sudo blkid | grep 'uuid' | cut -d " " -f 1 | tr -d ":"')
I've tried this but it doesn't work at all
out_1 = subprocess.Popen(('sudo', 'blkid'), stdout=subprocess.PIPE)
out_2 = subprocess.Popen(('grep', 'uuid'), stdin=out_1.stdout, stdout=subprocess.PIPE)
out_3 = subprocess.Popen(('cut', '-d', '" "', '-f', '1'), stdin=out_2.stdout, stdout=subprocess.PIPE)
main_command = subprocess.check_output(('tr', '-d', '":"'), stdin=out_3.stdout)
main_command
Error: cut: the delimiter must be a single character
from subprocess import check_output, STDOUT
shell_command = '''sudo blkid | grep 'uuid' | cut -d " " -f 1 | tr -d ":"'''
output = check_output(shell_command, shell=True, stderr=STDOUT,
universal_newlines=True).rstrip('\n')
By the way, it returns nothing on my system unless grep -i is used. In the latter case it returns devices. If that is your intent then you could use a different command:
from subprocess import check_output
devices = check_output(['sudo', 'blkid', '-odevice']).split()
I'm trying not to use shell=True
It is ok to use shell=True if you control the command, i.e., if you don't use user input to construct the command. Consider the shell command a special language that allows you to express your intent concisely (like regex for string processing). It is more readable than several lines of code that do not use the shell:
from subprocess import Popen, PIPE
blkid = Popen(['sudo', 'blkid'], stdout=PIPE)
grep = Popen(['grep', 'uuid'], stdin=blkid.stdout, stdout=PIPE)
blkid.stdout.close() # allow blkid to receive SIGPIPE if grep exits
cut = Popen(['cut', '-d', ' ', '-f', '1'], stdin=grep.stdout, stdout=PIPE)
grep.stdout.close()
tr = Popen(['tr', '-d', ':'], stdin=cut.stdout, stdout=PIPE,
universal_newlines=True)
cut.stdout.close()
output = tr.communicate()[0].rstrip('\n')
pipestatus = [cmd.wait() for cmd in [blkid, grep, cut, tr]]
Note: there are no quotes inside quotes here (no '" "', '":"'). Also unlike the previous command and commands.getoutput(), it doesn't capture stderr.
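If you do want to capture stderr as well in the shell-free version, one option (an assumption about what you need) is to merge it into stdout on the first process; a trimmed sketch with just the first two stages:

from subprocess import Popen, PIPE, STDOUT

blkid = Popen(['sudo', 'blkid'], stdout=PIPE, stderr=STDOUT)  # merge stderr into stdout, like 2>&1
grep = Popen(['grep', 'uuid'], stdin=blkid.stdout, stdout=PIPE)
blkid.stdout.close()  # allow blkid to receive SIGPIPE if grep exits
output = grep.communicate()[0]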
plumbum provides some syntax sugar:
from plumbum.cmd import sudo, grep, cut, tr
pipeline = sudo['blkid'] | grep['uuid'] | cut['-d', ' ', '-f', '1'] | tr['-d', ':']
output = pipeline().rstrip('\n') # execute
See How do I use subprocess.Popen to connect multiple processes by pipes?
Pass your command as one string (note that this requires shell=True), like this:
main_command = subprocess.check_output('tr -d ":"', stdin=out_3.stdout, shell=True)
If you have multiple commands and you want to execute them one by one, pass them as a list:
main_command = subprocess.check_output([command1, command2, etc..], shell=True)
I read every thread I found on StackOverflow on invoking shell commands from Python using subprocess, but I couldn't find an answer that applies to my situation below:
I would like to do the following from Python:
Run shell command command_1. Collect the output in variable result_1
Shell pipe result_1 into command_2 and collect the output on result_2. In other words, run command_1 | command_2 using the result that I obtained when running command_1 in the step before
Do the same piping result_1 into a third command command_3 and collecting the result in result_3.
So far I have tried:
p = subprocess.Popen(command_1, stdout=subprocess.PIPE, shell=True)
result_1 = p.stdout.read();
p = subprocess.Popen("echo " + result_1 + ' | ' +
command_2, stdout=subprocess.PIPE, shell=True)
result_2 = p.stdout.read();
This does not work; the reason seems to be that "echo " + result_1 does not simulate the process of obtaining the output of a command for piping.
Is this at all possible using subprocess? If so, how?
You can do:
pipe = Popen(command_2, shell=True, stdin=PIPE, stdout=PIPE)
pipe.stdin.write(result_1)
pipe.communicate()
instead of the line with the pipe.
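A fuller sketch of the whole flow (command_1, command_2 and command_3 are the placeholders from the question):

from subprocess import Popen, PIPE

# step 1: run command_1 and collect its output
result_1 = Popen(command_1, shell=True, stdout=PIPE).communicate()[0]

# step 2: feed result_1 to command_2 through its stdin, as if it had been piped
pipe = Popen(command_2, shell=True, stdin=PIPE, stdout=PIPE)
result_2 = pipe.communicate(result_1)[0]

# step 3: same thing for command_3
pipe = Popen(command_3, shell=True, stdin=PIPE, stdout=PIPE)
result_3 = pipe.communicate(result_1)[0]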