How to use awk command in python

I have to write a Python script to find the available space in the /var/tmp folder on Linux, so I am using an awk command to filter out only the available space.
import subprocess
subprocess.call("$(awk 'NR==2 {print $4}' file1.txt)")
The output should be 4.7G, but instead I get
/bin/sh: 1: 4.7G: not found
and the return value is 127 rather than 0.

Apart from the fact that Python alone is sufficient to do what awk does here, what you are doing with
$(awk 'NR==2 {print $4}' file1.txt)
is running the awk command and expanding its output into the command line via $(...). Since that expansion is the entire command line, the shell then tries to execute awk's output (4.7G) as a command, which is exactly the error you see.
If you really want to run awk from your Python script, remove the $( and ) from your command.
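A minimal sketch of both approaches, assuming file1.txt holds df-style output with the available space in the fourth column of the second line:
import subprocess

# Option 1: run awk itself; shell=True hands the quoting to the shell.
avail = subprocess.check_output("awk 'NR==2 {print $4}' file1.txt", shell=True)
print(avail.decode().strip())

# Option 2: skip awk entirely and parse the file in Python.
with open("file1.txt") as f:
    second_line = f.readlines()[1]
print(second_line.split()[3])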

Related

bash wrap a piped command with a python script

Is there a way to create a Python script which wraps an entire bash command, including the pipes?
For example, if I have the following simple script
import sys
print sys.argv
and call it like so (from bash or ipython), I get the expected outcome:
[pkerp@pendari trell]$ python test.py ls
['test.py', 'ls']
If I add a pipe, however, the output of the script gets redirected to the pipe sink:
[pkerp@pendari trell]$ python test.py ls > out.txt
And the > out.txt portion is not in sys.argv. I understand that the shell processes that part itself, but I'm curious if there's a way to force it to ignore the redirection and pass it along to the process being called.
The point of this is to create something like a wrapper for the shell. I'd like to run commands as usual, but keep track of the strace output for each command (including the pipes). Ideally I'd like to keep all of bash's features, such as tab completion, the up and down arrows, and history search, and then just pass the completed command through a Python script which invokes a subprocess to handle it.
Is this possible, or would I have to write my own shell to do this?
Edit
It appears I'm asking the exact same thing as this question.
The only thing you can do is pass the entire shell command as a string, then let Python pass it back to a shell for execution.
$ python test.py "ls > out.txt"
Inside test.py, something like
subprocess.call("strace " + sys.argv[1], shell=True, executable="/bin/bash")
to ensure the entire string is passed to the shell (and to bash specifically).
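A slightly fuller sketch of test.py, using strace's -o flag to write the trace to a file (strace_out.txt is a hypothetical name):
import subprocess
import sys

# Wrap the given shell command in strace, logging syscalls to a file.
command = sys.argv[1]
subprocess.call("strace -o strace_out.txt " + command,
                shell=True, executable="/bin/bash")
Called as: python test.py "ls > out.txt"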
Well, I don't quite see what you are trying to do. The general approach would be to pass the desired output destination to the script using command-line options: python test.py ls --output=out.txt. Incidentally, strace writes to stderr. You could capture everything using strace python test.py > out 2> err if you want to save it all.
Edit: If your script writes to stderr as well, you could use strace -o strace_out python test.py > script_out 2> script_err
Edit2: Okay, I understand better what you want. My suggestion is this: write a bash helper:
function process_and_evaluate()
{
    strace -o /tmp/output/strace_output "$@"
    /path/to/script.py /tmp/output/strace_output
}
Put this in a file like ~/helper.sh. Then open a bash shell and source it using . ~/helper.sh.
Now you can run it like this: process_and_evaluate ls -lA.
Edit3:
To capture output and errors you could extend the helper like this:
function process_and_evaluate()
{
    out=$1
    err=$2
    shift 2
    strace -o /tmp/output/strace_output "$@" > "$out" 2> "$err"
    /path/to/script.py /tmp/output/strace_output
}
You would then have to use the (less obvious) process_and_evaluate out.txt err.txt ls -lA.
This is the best that I can come up with...
At least in your simple example, you could just run the python script as an argument to echo, e.g.
$ echo $(python test.py ls) > test.txt
$ more test.txt
['test.py', 'ls']
Enclosing a command in parentheses preceded by a dollar sign, $(...), first executes the command and then passes its output as an argument to echo.

Problems running terminal command via Python

I'm working on a small project where I need to control a console player via python. This example command works perfectly on the Linux terminal:
mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort)
In Python I'm doing the following:
command = """mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort)"""
os.system(command)
The problem is that when I run it from Python, it gives me an error:
sh: 1: Syntax error: "(" unexpected
I'm really confused here because it is the exact same string. Why doesn't the second method work?
Thanks.
Your default user shell is probably bash, which understands the <(...) process substitution, but Python's os.system runs commands with /bin/sh on Linux, which does not. Passing shell=True to subprocess alone does not change this, since it also uses /bin/sh; you additionally need the executable argument to point subprocess at bash:
import subprocess
command = """mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort)"""
subprocess.check_call(command, shell=True, executable="/bin/bash")
Your Python call os.system is probably just using a different shell than the one you're using in the terminal:
os.system() execute command under which linux shell?
The shell spawned by os.system may not support the <(...) syntax for process substitution.
import subprocess

# Run the command through the shell and capture both output streams;
# 'command' is a placeholder for the actual command line.
proc = subprocess.Popen('command', shell=True, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # avoids the deadlocks sequential read() calls can cause
print(out + err)

How to run SVN commands from a python script?

Basically, I want to write a Python script that does several things, and one of them will be to run a checkout on a repository using Subversion (SVN) and perhaps perform a couple more svn commands. What's the best way to do this? It will be running as a cron job.
Would this work?
import subprocess
p = subprocess.Popen("svn info svn://xx.xx.xx.xx/project/trunk | grep \"Revision\" | awk '{print $2}'", stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
print "Revision is", output
Try pysvn.
It gives you great access, as far as I've tested it.
Here are some examples: http://pysvn.tigris.org/docs/pysvn_prog_guide.html
The reason I say "as far as I've tested it" is that I've since moved over to Git, but if I recall correctly, pysvn is (the only and) the best library for svn.
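A minimal sketch of a checkout with pysvn (pysvn is a third-party package; the URL and target path below are placeholders):
import pysvn

client = pysvn.Client()
# Check out the trunk into a local working copy.
client.checkout('svn://xx.xx.xx.xx/project/trunk', './trunk')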
Take a look at the Python xonsh module: http://xon.sh/tutorial.html
It can call shell commands, with piping and output redirection, while staying close to native (even nested) Python code, and with no need to fight with communicate() and escape characters.
Examples:
env | uniq | sort | grep PATH
COMMAND1 e>o < input.txt | COMMAND2 > output.txt e>> errors.txt
echo "my home is $HOME"
echo @(7+3)

python 2.3 how to run a piped command

I want to run a command like this
grep -w 1 pattern <(tail -f mylogfile.log)
Basically, from a Python script I want to monitor a log file for a specific string and continue with the script as soon as I find it.
I am using os.system(), but it hangs. The same command works fine in bash.
I have a very old version of Python (2.3), so I don't have the subprocess module.
Is there a way to achieve this?
In Python 2.3, you need to use the subprocess module backported from the Python SVN repository:
import subprocess
import shlex
subprocess.call(shlex.split("/bin/bash -c 'grep -w 1 pattern <(tail -f mylogfile.log)'"))
To be explicit, you need to install it from the SVN link above.
You need to call this with /bin/bash -c because of the <(...) process substitution you're using.
EDIT
If you want to solve this with os.system(), just wrap the command in /bin/bash -c, since you're using process substitution...
os.system("/bin/bash -c 'grep -w 1 pattern <(tail -f mylogfile.log)'")
First of all, the command I think you should be using is grep -w -m 1 'pattern' <(tail -f in)
For executing commands in Python, use the Popen constructor from the subprocess module. Read more at
http://docs.python.org/library/subprocess.html
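If backporting subprocess is not an option, os.popen (available in Python 2.3) can run the same bash pipeline; a sketch under that assumption:
import os

# grep -m 1 exits after the first match, so readline() returns as soon as
# the pattern shows up in the growing log file.
pipe = os.popen("/bin/bash -c 'grep -w -m 1 pattern <(tail -f mylogfile.log)'")
match = pipe.readline()
pipe.close()
# ...continue with the rest of the script here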
If I understand correctly, you want to send the output to python like this -
tail -f mylogfile.log | grep -w 1 pattern | python yourscript.py
i.e., read all updates to the log file, and send matching lines to your script.
To read from standard input, you can use the file-like object sys.stdin.
So your script would look like
import sys
for line in sys.stdin:
    # process each line here
Iterate sys.stdin directly rather than calling readlines(): readlines() blocks until end-of-file, which never arrives while tail -f is running.

Python Popen grep

I'd like Popen to execute:
grep -i --line-buffered "grave" data/*.txt
When run from the shell, this gives me the wanted result. If I start a Python REPL in the very same directory where I tested grep and follow the instructions from the docs, I obtain what should be the proper argument list to feed to Popen:
['grep', '-i', '--line-buffered', 'grave', 'data/*.txt']
The result of p = subprocess.Popen(args) is
grep: data/*.txt: No such file or directory
and if I try p = subprocess.Popen(args, shell=True), I get:
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Any help on how to get the wanted result? I'm on Mac OS X Lion.
If you type * in bash, the shell expands it to the matching file names before executing the command. Python's Popen does no such thing, so when you call Popen like that you are telling grep to look for a file literally named *.txt in the data directory, instead of all the .txt files there. That file doesn't exist, and you get the error above.
To solve this you can tell python to run the command through the shell by passing shell=True to Popen:
subprocess.Popen('grep -i --line-buffered grave data/*.txt', shell=True)
Which gets translated to:
subprocess.Popen(['/bin/sh', '-c', 'grep -i --line-buffered "grave" data/*.txt'])
As explained in the documentation of Popen.
You have to use a string instead of a list here, because you want to execute /bin/sh -c "grep -i --line-buffered "grave" data/*.txt" (note the quotes around the command, making it a single argument to sh). If you use a list, this command is run instead: /bin/sh -c grep -i --line-buffered "grave" data/*.txt, which simply runs a bare grep (everything after it becomes positional parameters for sh), giving you grep's usage message.
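A small sketch contrasting the two forms, with the paths from the question:
import subprocess

# String + shell=True: the whole line goes to /bin/sh -c,
# so the shell expands data/*.txt.
subprocess.Popen('grep -i --line-buffered grave data/*.txt', shell=True)

# List + shell=True: only the first element becomes the -c argument;
# the rest are positional parameters for sh, so a bare 'grep' runs
# and prints its usage message.
subprocess.Popen(['grep', '-i', '--line-buffered', 'grave', 'data/*.txt'], shell=True)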
The problem is that the shell does the file globbing for you: data/*.txt.
You will need to do it yourself, for example by using the glob module:
import glob
import subprocess

cmd_line = ['grep', '-i', '--line-buffered', 'grave'] + glob.glob('data/*.txt')
subprocess.Popen(cmd_line)
