I was trying to run the following command from my Python code:
iostat -d 7 7 -p sda | awk '!/^Device/' | awk '!/^Linux/'
The way I tried it so far was this:
command = ["iostat", "-d", "7", "7", "-p", "sda", "|", "awk", "'!/^Device/'", "|", "awk", "'!/^Linux/'"]
device = subprocess.Popen(command, stdout=subprocess.PIPE)
devicetr = device.stdout.read()
It seems the code doesn't handle the quote marks in "'!/^Device/'" and "'!/^Linux/'" the way the shell would; they are passed literally instead of being interpreted as '!/^Device/' and '!/^Linux/' as intended.
When there is no USB device connected, instead of blank output I get this:
Linux 4.14.98-v7+ 01/28/2020 _armv7l_ (4 CPU)
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
I already tried adding '\' escapes, and also this question:
Passing double quote shell commands in python to subprocess.Popen()?
You don't need to add extra quotes: subprocess passes each list element to the program verbatim, so there is no shell involved that would strip them.
command = ["iostat", "-d", "7", "7", "-p", "sda", "|", "awk", "!/^Device/", "|", "awk", "!/^Linux/"]
device = subprocess.Popen(command, stdout=subprocess.PIPE)
devicetr = device.stdout.read() # also here was a typo red -> read
As noufal-ibrahim pointed out, | doesn't work this way. You need to pipe your commands yourself:
ps1 = subprocess.Popen(["iostat"], stdout=subprocess.PIPE)
ps2 = subprocess.Popen(["awk", "!/^ *KB/"], stdin=ps1.stdout, stdout=subprocess.PIPE) # drop lines starting with optional spaces followed by "KB"
print(ps2.stdout.read().decode())
^ This code works on my machine, you can adjust it to your use case.
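Adapted to the original iostat pipeline, the same approach might look like this (a minimal sketch, assuming iostat and awk are on PATH; note that the two awk filters can be merged into one pattern):
import subprocess

ps1 = subprocess.Popen(["iostat", "-d", "7", "7", "-p", "sda"],
                       stdout=subprocess.PIPE)
# one awk process can apply both filters at once
ps2 = subprocess.Popen(["awk", "!/^Device/ && !/^Linux/"],
                       stdin=ps1.stdout, stdout=subprocess.PIPE)
ps1.stdout.close()  # let iostat receive SIGPIPE if awk exits early
devicetr = ps2.communicate()[0].decode()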
Appendix: instead of building the command list by hand, you can use the shlex module (from the standard library):
>>> import shlex
>>> shlex.split("python foo.py 'multi word arg'")
['python', 'foo.py', 'multi word arg']
Related
I want to use the "raster2pgsql" utility in my Python code. When I use it in a Linux terminal, it works fine. This is the command:
$ raster2pgsql -a "/mnt/c/Users/Jan/path/to/raster/dem.tiff" test_schema.raster2 | psql -h localhost -d pisl -U pisl
Then I use subprocess.run (I have also tried subprocess.call) to use this same tool in my Python code. This is my code:
from subprocess import run
command = ["raster2pgsql", "-a", '"' + file_name + '"', self.schema_name + "." + identifier, "|", "psql", "-h", "localhost", "-p", "5432", "-d", self.dbname]
run(command)
I get this error:
ERROR: Unable to read raster file: "/mnt/c/Users/Jan/path/to/raster/dem.tiff"
Printing command gives this, which I think is correct (equivalent to what worked in the terminal):
['raster2pgsql', '-a', '"/mnt/c/Users/Jan/path/to/raster/dem.tiff"', 'test_schema.raster2', '|', 'psql', '-h', 'localhost', '-p', '5432', '-d', 'pisl']
I have double-checked that the path to the raster file is correct and tried single quotes and double quotes, but nothing helps. I have looked at a number of similar questions (here, here or here) but did not find anything helpful.
I use Python 3.5 and Linux Bash Shell in Windows 10.
Question: What is wrong with the way I use subprocess?
There are 2 issues here:
First, there's no need to extra-quote the filename: each list element is passed to the program literally, so raster2pgsql looks for a file whose name literally includes the quote characters, and since no such file exists the command fails.
Second, to be able to pass a pipe, you need shell=True.
So, a quick fix:
command = ["raster2pgsql", "-a", file_name, self.schema_name + "." + identifier, "|", "psql", "-h", "localhost", "-p", "5432", "-d", self.dbname]
run(command, shell=True)
or using a command string (because shell=True is picky with argument lists):
command = "raster2pgsql -a " + file_name + " " + self.schema_name + "." + identifier + " | psql -h localhost -p 5432 -d " + self.dbname
run(command, shell=True)
(ugly, isn't it?)
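If you do go the string route, shlex.quote (Python 3.3+) can protect the filename against spaces and shell metacharacters; a small sketch:
import shlex  # shlex.quote exists since Python 3.3

command = ("raster2pgsql -a " + shlex.quote(file_name) + " "
           + self.schema_name + "." + identifier
           + " | psql -h localhost -p 5432 -d " + shlex.quote(self.dbname))
run(command, shell=True)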
It's much better to run the 2 processes without shell=True and pipe them together in Python; it is more portable and secure (on Linux, shell=True with an argument list runs only the first element as the shell command):
from subprocess import Popen, PIPE, run
command1 = ["raster2pgsql", "-a", file_name, self.schema_name + "." + identifier]
p = Popen(command1,stdout=PIPE)
command2 = ["psql", "-h", "localhost", "-p", "5432", "-d", self.dbname]
run(command2,stdin=p.stdout)
The first Popen object created writes its output to a pipe (thanks to the stdout=PIPE argument). The run function can take a pipe as input too (thanks to stdin=p.stdout). It consumes the output of the first command, creating a native piped chain of commands without needing the shell (and without the caveats of quoting, spaces, special-character interpretation and such).
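One refinement worth adding, mirroring standard pipe handling (my own sketch, reusing command1 and command2 from above): close the parent's copy of the pipe and reap the first process, so raster2pgsql can receive SIGPIPE if it is still writing after psql has exited:
from subprocess import Popen, PIPE, run

p = Popen(command1, stdout=PIPE)
result = run(command2, stdin=p.stdout)
p.stdout.close()  # drop our copy so raster2pgsql gets SIGPIPE once no readers remain
p.wait()          # reap the first process and collect its exit status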
I am new to Python and am working on a script that checks whether a specified host, for example sensu-client, exists. I use deployment software called NSO; running nso status shows me this information:
nagios-client host nagios-client down
test host test down
Is there any way to write a script that checks whether, for example, nagios-client exists?
In shell I do it by:
nso status | awk '{ print $1 }'
In this case I would suggest using subprocess' check_output function. The documentation is here. check_output returns the output of a shell command as a string. So you would have something like this:
import subprocess
foo = subprocess.check_output("nso status | awk '{ print $1 }'", shell=True)
# Thanks bereal for shell=True; with shell=True the command must be a single string
print(foo)
Of course, if you're only targeting Linux, you could use the much easier sh module. It allows you to import programs as if they were libraries.
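For illustration, a minimal sketch with sh (a third-party package, pip install sh; it assumes an nso executable is on PATH, and piping is done by nesting calls):
import sh

# the inner command's output becomes awk's input
out = sh.awk(sh.nso("status"), "{ print $1 }")
print(out)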
you can use subprocess to run this command and parse the output
import subprocess
command = "nso status | awk '{ print $1 }'"
p1 = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)  # shell=True so the pipe works
output = p1.communicate()[0]
You don't have to run awk, since you're already in Python:
import subprocess
proc = subprocess.Popen(['nso', 'status'], stdout=subprocess.PIPE)
# get stdout as an EOL-separated string, ignore stderr for now
out, _ = proc.communicate()
# parse the output; line.split()[0] is awk's $1 (skipping blank lines)
items = [line.split()[0] for line in out.decode().split('\n') if line.strip()]
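From there, checking for the host the question asks about is a plain membership test (a hypothetical follow-up to the code above):
# does a host named nagios-client appear in the first column?
if "nagios-client" in items:
    print("host exists")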
I want to run a bash command from a Python script.
my bash is:
grep -Po "(?<=<cite>).*?(?=</cite>)" /tmp/file1.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt
what I tried is:
#!/usr/bin/python
import commands
commands.getoutput('grep ' + '-Po ' + '\"\(?<=<dev>\).*?\(?=</dev>\)\" ' + '/tmp/file.txt ' + '| ' + 'awk \'!x[$0]++\' ' + '> ' + '/tmp/file2.txt')
But I don't get any result.
Thank you
If you want to avoid splitting your arguments and worrying about pipes, you can use the shell=True option:
cmd = "grep -Po \"(?<=<dev>).*?(?=</dev>)\" /tmp/file.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > file2.txt"
out = subprocess.check_output(cmd, shell=True)
This will run a subshell which understands all your directives, including "|" for piping and ">" for redirection. If you do not do this, these symbols, normally parsed by the shell, will just be passed as arguments to the grep program.
Otherwise, you have to create the pipes yourself. For example (untested code below):
grep_p = subprocess.Popen(["grep", "-Po", "(?<=<dev>).*?(?=</dev>)", "/tmp/file.txt"], stdout=subprocess.PIPE)
awk_p = subprocess.Popen(["awk", "-F/", "{print $1}"], stdin=grep_p.stdout, stdout=subprocess.PIPE)
file2_fh = open("file2.txt", "w")
awk_p_2 = subprocess.Popen(["awk", "!x[$0]++"], stdout=file2_fh, stdin=awk_p.stdout)
awk_p_2.communicate()
However, you're missing the point of Python if you are doing this. You should instead look into the re module: re.match, re.sub, re.search (though I'm not familiar enough with awk to translate all of your commands).
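For instance, a rough pure-Python equivalent of the pipeline from the question might look like this (an illustrative sketch, not a drop-in replacement):
import re

# read the whole file, then apply the same lookaround pattern grep -Po used
with open("/tmp/file1.txt") as f:
    text = f.read()

seen = set()
with open("/tmp/file2.txt", "w") as out:
    for match in re.findall(r"(?<=<cite>).*?(?=</cite>)", text):
        first = match.split("/")[0]   # awk -F/ '{print $1}'
        if first not in seen:         # awk '!x[$0]++' keeps first occurrences
            seen.add(first)
            out.write(first + "\n")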
The recommended way to run system commands in Python is to use the subprocess module.
import subprocess
a = ['grep', '-Po', '(?<=<dev>).*?(?=</dev>)', '/tmp/file.txt']
b = ['awk', '-F/', '{print $1}']
c = ['awk', '!x[$0]++']
p1 = subprocess.Popen(a,stdout=subprocess.PIPE)
p2 = subprocess.Popen(b,stdin=p1.stdout,stdout=subprocess.PIPE)
p3 = subprocess.Popen(c,stdin=p2.stdout,stdout=subprocess.PIPE)
p1.stdout.close()
p2.stdout.close()
out,err=p3.communicate()
print(out)
The point of creating pipes between the subprocesses is security and easier debugging. It also makes the code much clearer about where each process gets its input and sends its output.
Let us write a simple function to easily deal with these messy pipes for us:
def subprocess_pipes(pipes, last_pipe_out=None):
    import subprocess
    from subprocess import PIPE
    last_p = None
    for cmd in pipes:
        out_pipe = PIPE if not (cmd == pipes[-1] and last_pipe_out) else open(last_pipe_out, "w")
        cmd = cmd if isinstance(cmd, list) else cmd.split(" ")
        in_pipe = last_p.stdout if last_p else None
        p = subprocess.Popen(cmd, stdout=out_pipe, stdin=in_pipe)
        last_p = p
    comm = last_p.communicate()
    return comm
Then we run,
subprocess_pipes(("ps ax", "grep python"), last_pipe_out = "test.out.2")
The result is a "test.out.2" file with the contents of piping "ps ax" into "grep python".
In your case,
a = ["grep", "-Po", "(?<=<cite>).*?(?=</cite>)", "/tmp/file1.txt"]
b = ["awk", "-F/", "{print $1}"]
c = ["awk", "!x[$0]++"]
subprocess_pipes((a, b, c), last_pipe_out = "/tmp/file2.txt")
The commands module is obsolete now (it was removed in Python 3 in favor of subprocess).
If you don't actually need the output of your command you can use
import os
exit_status = os.system("your-command")
Otherwise you can use
import subprocess
out, err = subprocess.Popen("your | commands", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
Note: your command sends stdout to file2.txt, so I wouldn't expect to see anything in out. You will, however, still see error messages on stderr, which will go into err.
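A small sketch of checking err afterwards (continuing from the Popen call above):
# anything the pipeline wrote to stderr ends up in err
if err:
    print("pipeline errors:", err.decode())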
you must use
import os
os.system(command)
I think what you are looking for is something like:
subprocess.check_output(same arguments as Popen, **kwargs); use it the same way you would use a Popen command, and it returns the output of the program being called.
For more details here is a link: http://freefilesdl.com/how-to-call-a-shell-command-from-python/
How could I run this code using subprocess module?
commands.getoutput('sudo blkid | grep 'uuid' | cut -d " " -f 1 | tr -d ":"')
I've tried this but it doesn't work at all
out_1 = subprocess.Popen(('sudo', 'blkid'), stdout=subprocess.PIPE)
out_2 = subprocess.Popen(('grep', 'uuid'), stdin=out_1.stdout, stdout=subprocess.PIPE)
out_3 = subprocess.Popen(('cut', '-d', '" "', '-f', '1'), stdin=out_2.stdout, stdout=subprocess.PIPE)
main_command = subprocess.check_output(('tr', '-d', '":"'), stdin=out_3.stdout)
main_command
Error: cut: the delimiter must be a single character
from subprocess import check_output, STDOUT
shell_command = '''sudo blkid | grep 'uuid' | cut -d " " -f 1 | tr -d ":"'''
output = check_output(shell_command, shell=True, stderr=STDOUT,
universal_newlines=True).rstrip('\n')
By the way, it returns nothing on my system unless grep -i is used; in the latter case it returns devices. If that is your intent, you could use a different command:
from subprocess import check_output
devices = check_output(['sudo', 'blkid', '-odevice']).split()
I'm trying not to use shell=True
It is ok to use shell=True if you control the command, i.e., if you don't use user input to construct the command. Consider the shell command a special language that allows you to express your intent concisely (like a regex for string processing). It is more readable than several lines of code that do not use the shell:
from subprocess import Popen, PIPE
blkid = Popen(['sudo', 'blkid'], stdout=PIPE)
grep = Popen(['grep', 'uuid'], stdin=blkid.stdout, stdout=PIPE)
blkid.stdout.close() # allow blkid to receive SIGPIPE if grep exits
cut = Popen(['cut', '-d', ' ', '-f', '1'], stdin=grep.stdout, stdout=PIPE)
grep.stdout.close()
tr = Popen(['tr', '-d', ':'], stdin=cut.stdout, stdout=PIPE,
universal_newlines=True)
cut.stdout.close()
output = tr.communicate()[0].rstrip('\n')
pipestatus = [cmd.wait() for cmd in [blkid, grep, cut, tr]]
Note: there are no quotes inside quotes here (no '" "', '":"'). Also unlike the previous command and commands.getoutput(), it doesn't capture stderr.
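If desired, a quick sanity check on the collected statuses (a hypothetical addition, using the pipestatus list from above):
# fail loudly if any stage of the pipeline exited non-zero
if any(rc != 0 for rc in pipestatus):
    raise RuntimeError("pipeline failed with statuses %s" % pipestatus)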
plumbum provides some syntactic sugar:
from plumbum.cmd import sudo, grep, cut, tr
pipeline = sudo['blkid'] | grep['uuid'] | cut['-d', ' ', '-f', '1'] | tr['-d', ':']
output = pipeline().rstrip('\n') # execute
See How do I use subprocess.Popen to connect multiple processes by pipes?
pass your command as one string together with shell=True, like this:
main_command = subprocess.check_output('tr -d ":"', stdin=out_3.stdout, shell=True)
if you have multiple commands and want to execute them one by one, join them into a single string with ';' (with shell=True the command should be one string, not a list):
main_command = subprocess.check_output(command1 + "; " + command2, shell=True)
I have this command in bash:
ACTIVE_MGMT_1=$(ssh -n ${MGMT_IP_1} ". .bash_profile; xms sho proc TRAF.*" 2>/dev/null | egrep " A " | awk '/TRAF/{print $1}' | cut -d "." -f2);
I was trying to do it in Python like this:
active_mgmgt_1 = os.popen("""ssh -n MGMT_IP_1 ". .bash_profile; xms sho proc TRAF.*" 2>/dev/null |egrep " A " |awk '/TRAF/{print $1}' |cut -d "." -f2""") ACTIVE_MGMT_1 = active_mgmgt_1.read().replace('\n', '')
It doesn't work; any advice please?
Your Popen call needs to be set up to communicate via a pipe.
Also, stop trying to put everything on one line; Python doesn't require it and places a lot of emphasis on readable code.
I would strongly suggest doing the string processing in Python rather than in egrep (use str.find or re in Python), awk (find or re) and cut (str.split).
It is also recommended to use subprocess.Popen rather than the os.popen functions. There is a suggestion to use shlex.split to clear up this sort of issue.
untested code
import subprocess
import re
import os
MGMT_IP_1 = os.getenv('MGMT_IP_1')
sp = subprocess.Popen(
    ['ssh', '-n', MGMT_IP_1, '. .bash_profile; xms sho proc TRAF.*'],
    stdout=subprocess.PIPE, stderr=None)
(outtext, _) = sp.communicate()  # communicate() returns (stdout, stderr)
# Proceed to process outtext from here using re, find and split
# to the equivalent of egrep " A " |awk '/TRAF/{print $1}' |cut -d "." -f2;
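A sketch of that post-processing (my own illustration; it assumes the output lines look the way the egrep/awk/cut chain expects, i.e. the first field contains a dot):
# egrep " A " | awk '/TRAF/{print $1}' | cut -d "." -f2
active = [line.split()[0].split(".")[1]
          for line in outtext.decode().splitlines()
          if " A " in line and "TRAF" in line]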