How to execute an awk command inside Python

I am trying to run the following awk command inside Python, but I get a syntax error related to the quotes:
import subprocess
COMMAND = "df /dev/sda1 | awk /'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}"
subprocess.call(COMMAND, shell=True)
I tried to escape the quotes but I am still getting the same error.

You may want to put ''' or """ around the string since you have both ' and ".
import subprocess
COMMAND = '''df /dev/sda1 | awk 'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}' '''
subprocess.call(COMMAND, shell=True)
There also seems to be a relevant existing answer for this: awk commands within python script

Try this:
import subprocess
COMMAND="df /dev/sda1 | awk 'NR==2 {sub(\"%\",\"\",$5); if ($5 >= 80) {printf \"Warning! Space usage is %d%%\", $5}}'"
subprocess.Popen(COMMAND, stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True).stdout.read()
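If you'd rather sidestep the nested quoting entirely, a minimal sketch (assuming the typical single-line df output layout) does the same percentage check in plain Python:
import subprocess
# Ask df about the one filesystem and read its output as text.
out = subprocess.check_output(['df', '/dev/sda1'], universal_newlines=True)
# The second line of df output holds the numbers; the fifth field is "Use%", e.g. "42%".
usage = int(out.splitlines()[1].split()[4].rstrip('%'))
if usage >= 80:
    print("Warning! Space usage is %d%%" % usage)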

I was writing a Python script for deployment purposes, and one part of the script was to explicitly kill the process if it was not stopped successfully.
Below is the Python code. It first finds the process ID of the process named myApplication:
ps -ef | grep myApplication | grep -v grep | awk '{print $2}'
and then performs:
kill -9 PID   # where PID is the output of the earlier command
import os
import signal
import subprocess

def killApplicationProcessIfStillRunning(app_name):
    p1 = subprocess.Popen(['ps', '-ef'], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['grep', app_name], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(['grep', '-v', 'grep'], stdin=p2.stdout, stdout=subprocess.PIPE)
    p4 = subprocess.Popen(['awk', '{print $2}'], stdin=p3.stdout, stdout=subprocess.PIPE)
    out, err = p4.communicate()
    if out:
        print 'Attempting to kill ' + app_name + ' process with PID ' + out.splitlines()[0]
        os.kill(int(out.splitlines()[0]), signal.SIGKILL)
Now invoke the above method as:
killApplicationProcessIfStillRunning('myApplication')
Hope it helps someone.
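If pgrep is available on the target machines, a shorter sketch of the same idea (the function name here is just illustrative) would be:
import os
import signal
import subprocess
def kill_if_still_running(app_name):
    # pgrep -f matches against the full command line, much like ps -ef | grep.
    try:
        out = subprocess.check_output(['pgrep', '-f', app_name], universal_newlines=True)
    except subprocess.CalledProcessError:
        return  # pgrep exits non-zero when nothing matches
    for pid in out.split():
        os.kill(int(pid), signal.SIGKILL)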

How to kill a java process by name in python?

I am trying to kill a Java process named "MyClass" using the Python script below:
import os
os.system("kill $(ps aux | grep 'MyClass' | grep -v 'grep' | awk '{print $2}')")
But this gives me the output below and the process is still running:
sh: 1: kill: Usage: kill [-s sigspec | -signum | -sigspec] [pid | job]... or
kill -l [exitstatus]
512
I know that the $ sign is the problem here, but I do not know how to make this work.
Any help/hint is appreciated.
Thanks.
import os
import subprocess

def terminate_java_process(process_name):
    # jps lists running JVM processes as "<pid> <main class>"
    proc = subprocess.Popen(["jps"], stdout=subprocess.PIPE, shell=True)
    (out, err) = proc.communicate()
    processes = {}
    for line in out.split(b"\n"):
        try:
            name = str(line, 'utf-8').split(' ')[1]
        except IndexError:
            continue
        pid = str(line, 'utf-8').split(' ')[0]
        processes[name] = pid
    if process_name in processes:
        os.system("kill -s TERM " + processes[process_name])
Here is another way:
I fetch all the processes, loop over them to pick out the required one, and if it is found, kill it.
I use the same idea to find out how many processes are running for the same client and the same category.
import os
import signal
from subprocess import PIPE, Popen

# this will fetch the processes into the stdout variable (text mode so the substring test works)
processes = Popen(['ps', '-ef'], stdout=PIPE, stderr=PIPE, universal_newlines=True)
stdout, error = processes.communicate()
for line in stdout.splitlines():
    if "Process_name to check" in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
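A small variation of the same loop (the substring is still just a placeholder) only counts the matching processes instead of killing them:
# Count how many running processes contain the given substring.
count = sum(1 for line in stdout.splitlines() if "Process_name to check" in line)
print(count)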

Execute a bash command with parameter in Python

This is a bash command that I run in python and get the expected result:
count = subprocess.Popen("ps -ef | grep app | wc -l", stdout=subprocess.PIPE, shell=True)
but when I'd like to pass an argument (count in this case) I cannot figure out how to do it.
I tried:
pid = subprocess.call("ps -ef | grep app | awk -v n=' + str(count), 'NR==n | awk \'{print $2}\'", shell=True)
and
args = shlex.split('ps -ef | grep app | awk -v n=' + str(count), 'NR==n | awk \'{print $2}\'')
pid = subprocess.Popen(args, stdout=subprocess.PIPE, shell=True)
among other attempts from various posts here, but still cannot make it work.
You're mixing opening and closing quotation marks, and you pass a comma by mistake in your other attempts, among other things.
Try this for a fix:
pid = subprocess.call("ps -ef | grep app | awk -v n=" + str(count) + " NR==n | awk '{print $2}'", shell=True)
You opened the command parameter with ", and therefore you need to close it before the + str() with a " and not a '. Furthermore, I swapped the , 'NR= with + "NR= since you want to append more to your command, not pass an extra argument to subprocess.call().
As pointed out in the comments, there's no point in splitting the command with shlex since piping isn't handled by subprocess itself. I would however like to point out that using shell=True is usually not recommended, for instance because of the shell-injection risks discussed in the subprocess documentation.
Another way is using format:
pid = subprocess.call("ps -ef | grep app | awk -v n={} NR==n | awk '{{print $2}}'".format(str(count)), shell=True)
Your Awk pipeline could be simplified a great deal - if the goal is to print the last match, ps -ef | awk '/app/ { p=$2 } END { print p }' does that. But many times, running Awk from Python is just silly, and performing the filtering in Python is convenient and easy, as well as obviously more efficient (you save not only the Awk process, but also the pesky shell=True).
for p in subprocess.check_output(['ps', '-ef']).split('\n'):
    if 'app' in p:
        pid = p.split()[1]
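If count is meant to select the n-th match, as the awk NR==n in the question suggests, that selection stays easy in Python too (a sketch; count is assumed to be 1-based):
import subprocess
# Collect the lines mentioning 'app' (text mode), then pick the count-th one.
out = subprocess.check_output(['ps', '-ef'], universal_newlines=True)
matches = [p for p in out.splitlines() if 'app' in p]
if len(matches) >= count:
    pid = matches[count - 1].split()[1]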

How to split up the command here for using subprocess.Popen()

ip = subprocess.Popen(["/sbin/ifconfig $(/sbin/route | awk '/default/ {print $8}') | grep \"inet addr\" | awk -F: '{print $2}' | awk \'{print $1}\'"], stdout=subprocess.PIPE)
I am not sure where to put the commas to split this command up for use with subprocess.Popen. Does anyone know?
You are using shell features (the pipe), so instead of splitting the command, you should pass it as a single string (not a list) with shell=True:
ip = subprocess.Popen("/sbin/ifconfig $(/sbin/route | awk '/default/ {print $8}') | grep \"inet addr\" | awk -F: '{print $2}' | awk \'{print $1}\'",
shell=True,
stdout=subprocess.PIPE)
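You can then read the captured address back from the pipe, for example:
# communicate() returns (stdout, stderr); stderr is None here since it wasn't piped.
address, _ = ip.communicate()
address = address.strip()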
Here's what I would recommend.
Create a file with these contents, call it 'route-info', and make it executable:
#!/bin/sh
/sbin/ifconfig $(/sbin/route | awk '/default/ {print $8}') |
grep "inet addr" |
awk -F: '{print $2}' |
awk '{print $1}'
In your python program, use:
ip = subprocess.Popen(["/path/to/route-info"], stdout=subprocess.PIPE)
Then you don't have to worry about quoting characters and you can independently test the route-info script to make sure it is working correctly.
The script route-info doesn't take any command line arguments, but if it did this is how you would pass them:
ip = subprocess.Popen(["/path/to/route-info", arg1, arg2, ...], stdout=subprocess.PIPE)
Quoting the official documentation of subprocess.Popen here
It may not be obvious how to break a shell command into a sequence of
arguments, especially in complex cases. shlex.split() can illustrate
how to determine the correct tokenization for args:
import shlex, subprocess
command_line = input()
args = shlex.split(command_line)
print(args)
p = subprocess.Popen(args) # Success!
shlex is included in the standard library, so you do not need to install it.
Used inline, much like str.split(), it looks like this:
import shlex
import subprocess

command = "ls -l"
proc = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, errors = proc.communicate()
print(output, errors)

Running shell command in python and reading output

I have the following:
cmd = "ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'".split(' ')
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print out
When I run the command in the console (outside of python), I get the desired output. Running this above code in python prints a blank line. I am assuming there is something up with the cmd (specifically the | operator) but I can't be sure.
I need to achieve this with the standard Python 2.6.6 install (no additional modules)
You need a separate call to Popen() for each piece of the original command, connected by pipes, as in
import subprocess
p1 = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "java -jar"], stdin=p1.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p3 = subprocess.Popen(["grep", "-v", "grep"], stdin=p2.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p4 = subprocess.Popen(["awk", "//{print $2}"], stdin=p3.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p4.communicate()
print out
The subprocess documentation has an in-depth discussion.
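One detail the documentation also recommends: close each intermediate stdout in the parent once the next stage is wired up, so the earlier processes can receive SIGPIPE if a later stage exits early. Roughly:
# Close the parent's copies of the read ends so SIGPIPE can propagate up the pipeline.
p1.stdout.close()
p2.stdout.close()
p3.stdout.close()
out, err = p4.communicate()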
Popen by default only executes executables, not shell command lines.
When you pass a list of arguments to Popen, it should describe one executable and its arguments:
import subprocess
proc = subprocess.Popen(['ps', 'aux'])
Also note that you should not use str.split to split a command, because:
>>> "ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'".split(' ')
['ps', 'aux', '|', 'grep', "'java", "-jar'", '', '', '', '', '|', 'grep', '-v', 'grep', '|', 'awk', "'//{print", "$2}'"]
Note how:
The arguments that were quoted (e.g. 'java -jar') are split.
If there is more than one consecutive space you get some empty arguments.
Python already provides a module that knows how to split a command line in a reasonable manner: shlex.
>>> shlex.split("ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'")
['ps', 'aux', '|', 'grep', 'java -jar', '|', 'grep', '-v', 'grep', '|', 'awk', '//{print $2}']
Note how the quoted arguments were preserved and multiple spaces are handled gracefully. Still, you cannot pass the result to Popen, because Popen will not interpret the | as a pipe by default.
If you want to run a shell command line (i.e. use any shell feature such as pipes, path expansion, redirection, etc.) you must pass shell=True. In this case you should not pass a list of strings as the argument to Popen, but a single string that is the complete command line:
proc = subprocess.Popen("ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'", shell=True)
If you pass a list of strings with shell=True, its meaning is different: the first element should be the complete command line, while the other elements are passed as options to the shell used. For example, on my machine the default shell (sh) has an -x option that will display on stderr all the commands that get executed:
>>> from subprocess import Popen
>>> proc = Popen(['ps aux | grep python3', '-x'], shell=True)
>>>
username 7301 0.1 0.1 39440 7408 pts/9 S+ 12:57 0:00 python3
username 7302 0.0 0.0 4444 640 pts/9 S+ 12:58 0:00 /bin/sh -c ps aux | grep python3 -x
username 7304 0.0 0.0 15968 904 pts/9 S+ 12:58 0:00 grep python3
Here you can see that a /bin/sh was started, and it executed the command ps aux | grep python3 with the -x option.
(This is all documented in the documentation for Popen).
This said, one way to achieve what you want is to use subprocess.check_output:
subprocess.check_output("ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'", shell=True)
However, this isn't available in Python < 2.7, so there you have to use Popen and communicate():
proc = subprocess.Popen("ps aux | grep 'java -jar' | grep -v grep | awk '//{print $2}'", shell=True, stdout=subprocess.PIPE)
out, err = proc.communicate()
The alternative is to avoid using shell=True (which is generally a very good thing, since shell=True introduces some security risks) and manually write the pipe using multiple processes:
from subprocess import Popen, PIPE
ps = Popen(['ps', 'aux'], stdout=PIPE)
grep_java = Popen(['grep', 'java -jar'], stdin=ps.stdout, stdout=PIPE)
grep_grep = Popen(['grep', '-v', 'grep'], stdin=grep_java.stdout, stdout=PIPE)
awk = Popen(['awk', '//{print $2}'], stdin=grep_grep.stdout, stdout=PIPE)
out, err = awk.communicate()
grep_grep.wait()
grep_java.wait()
ps.wait()
Note that if you don't care about the standard error you can avoid specifying it; it will then be inherited from the current process.
There is just one command in the shell pipeline which can't easily be replaced by Python code: ps aux itself. So while you could start several external processes and connect their inputs and outputs, I would just start ps aux and add some Python code to filter and extract the desired data:
from subprocess import PIPE, Popen

def main():
    process = Popen(['ps', 'aux'], stdout=PIPE)
    pids = [
        line.split(None, 2)[1] for line in process.stdout if 'java -jar' in line
    ]
    process.wait()
    print '\n'.join(pids)

if __name__ == '__main__':
    main()

running bash command from python shell

I want to run a bash command from the Python shell.
My bash command is:
grep -Po "(?<=<cite>).*?(?=</cite>)" /tmp/file1.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt
what I tried is:
#!/usr/bin/python
import commands
commands.getoutput('grep ' + '-Po ' + '\"\(?<=<dev>\).*?\(?=</dev>\)\" ' + '/tmp/file.txt ' + '| ' + 'awk \'!x[$0]++\' ' + '> ' + '/tmp/file2.txt')
But I don't get any result.
Thank you
If you want to avoid splitting your arguments and worrying about pipes, you can use the shell=True option:
cmd = "grep -Po \"(?<=<dev>).*?(?=</dev>)\" /tmp/file.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > file2.txt"
out = subprocess.check_output(cmd, shell=True)
This will run a subshell which understands all your directives, including "|" for piping and ">" for redirection. If you do not do this, these symbols, normally parsed by the shell, will just be passed on to the grep program.
Otherwise, you have to create the pipes yourself. For example (untested code below):
import subprocess

grep_p = subprocess.Popen(["grep", "-Po", "(?<=<dev>).*?(?=</dev>)", "/tmp/file.txt"], stdout=subprocess.PIPE)
awk_p = subprocess.Popen(["awk", "-F/", "{print $1}"], stdin=grep_p.stdout, stdout=subprocess.PIPE)
file2_fh = open("file2.txt", "w")
awk_p_2 = subprocess.Popen(["awk", "!x[$0]++"], stdout=file2_fh, stdin=awk_p.stdout)
awk_p_2.communicate()
file2_fh.close()
However, you're missing the point of python if you are doing this. You should instead look into the re module: re.match, re.sub, re.search, though I'm not familiar enough with awk to translate your commands.
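For what it's worth, a rough pure-Python sketch of the question's original pipeline using re (extract the text between the tags, keep the part before the first /, and drop duplicates while preserving order; the tag name and paths come from the question) could look like:
import re
seen = set()
with open('/tmp/file1.txt') as src, open('/tmp/file2.txt', 'w') as dst:
    for match in re.findall(r'(?<=<cite>).*?(?=</cite>)', src.read()):
        first = match.split('/')[0]   # like awk -F/ '{print $1}'
        if first not in seen:         # like awk '!x[$0]++'
            seen.add(first)
            dst.write(first + '\n')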
The recommended way to run system commands in Python is to use the module subprocess.
import subprocess
a = ['grep', '-Po', '(?<=<dev>).*?(?=</dev>)', '/tmp/file.txt']
b = ['awk', '-F/', '{print $1}']
c = ['awk', '!x[$0]++']
p1 = subprocess.Popen(a,stdout=subprocess.PIPE)
p2 = subprocess.Popen(b,stdin=p1.stdout,stdout=subprocess.PIPE)
p3 = subprocess.Popen(c,stdin=p2.stdout,stdout=subprocess.PIPE)
p1.stdout.close()
p2.stdout.close()
out,err=p3.communicate()
print out
The point of creating pipes between the subprocesses is security and ease of debugging. It also makes the code much clearer about which process gets input from which and sends output where.
Let us write a simple function to easily deal with these messy pipes for us:
def subprocess_pipes(pipes, last_pipe_out=None):
    import subprocess
    from subprocess import PIPE
    last_p = None
    for cmd in pipes:
        out_pipe = PIPE if not (cmd == pipes[-1] and last_pipe_out) else open(last_pipe_out, "w")
        cmd = cmd if isinstance(cmd, list) else cmd.split(" ")
        in_pipe = last_p.stdout if last_p else None
        p = subprocess.Popen(cmd, stdout=out_pipe, stdin=in_pipe)
        last_p = p
    comm = last_p.communicate()
    return comm
Then we run,
subprocess_pipes(("ps ax", "grep python"), last_pipe_out = "test.out.2")
The result is a "test.out.2" file with the contents of piping "ps ax" into "grep python".
In your case,
a = ["grep", "-Po", "(?<=<cite>).*?(?=</cite>)", "/tmp/file1.txt"]
b = ["awk", "-F/", "{print $1}"]
c = ["awk", "!x[$0]++"]
subprocess_pipes((a, b, c), last_pipe_out = "/tmp/file2.txt")
The commands module is obsolete now.
If you don't actually need the output of your command you can use
import os
exit_status = os.system("your-command")
Otherwise you can use
import subprocess
out, err = subprocess.Popen("your | commands", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
Note: your command sends stdout to file2.txt, so I wouldn't expect to see anything in out. You will, however, still see error messages on stderr, which will go into err.
You can also use
import os
os.system(command)
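With the pipeline from the question that would be, roughly:
import os
# The whole string is handed to the shell, so the pipes and the redirection work as usual.
os.system("grep -Po '(?<=<cite>).*?(?=</cite>)' /tmp/file1.txt "
          "| awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt")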
I think what you are looking for is something like:
subprocess.check_output(same arguments as Popen, **kwargs): use it the same way you would use a Popen command; it returns the output of the program that's being called.
For more details here is a link: http://freefilesdl.com/how-to-call-a-shell-command-from-python/
