How to kill a Java process by name in Python?

I am trying to kill a Java process named "MyClass" using the Python script below:
import os
os.system("kill $(ps aux | grep 'MyClass' | grep -v 'grep' | awk '{print $2}')")
But this gives the output below, and the process is still running:
sh: 1: kill: Usage: kill [-s sigspec | -signum | -sigspec] [pid | job]... or
kill -l [exitstatus]
512
I know that the $ sign is the problem here, but I don't know how to make this work.
Any help or hint is appreciated.
Thanks.

import os
import signal
import subprocess

def terminate_java_process(process_name):
    # jps prints one JVM per line: "<pid> <main class name>"
    proc = subprocess.Popen(["jps"], stdout=subprocess.PIPE)
    out, err = proc.communicate()
    processes = {}
    for line in out.split(b"\n"):
        parts = line.decode("utf-8").split(" ")
        if len(parts) < 2:
            continue
        # map main class name -> pid
        processes[parts[1]] = parts[0]
    if process_name in processes:
        os.kill(int(processes[process_name]), signal.SIGTERM)
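Another way to sidestep the $(...) quoting problem from the question is to do the PID lookup and the kill in separate steps from Python. A minimal sketch, assuming pgrep is available; kill_by_name is a hypothetical helper name:

```python
import os
import signal
import subprocess

def kill_by_name(pattern):
    # pgrep -f matches against the full command line, so "MyClass"
    # will match a process started as "java MyClass"; pgrep prints
    # one PID per line (and exits nonzero if nothing matches).
    result = subprocess.run(["pgrep", "-f", pattern],
                            stdout=subprocess.PIPE, text=True)
    for pid in result.stdout.split():
        os.kill(int(pid), signal.SIGTERM)
```

This avoids the shell entirely, so there is nothing for sh to misparse.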

Here is another way:
I fetch all the processes, loop over each one looking for the required one, and kill it if it is found.
I use this approach to find out how many processes are running for the same client in the same category.
import os
import signal
from subprocess import Popen, PIPE

# fetch all processes; ps -ef columns are: UID PID PPID C STIME TTY TIME CMD
processes = Popen(['ps', '-ef'], stdout=PIPE, stderr=PIPE)
stdout, error = processes.communicate()
for line in stdout.splitlines():
    if b"Process_name to check" in line:
        # PID is the second column of ps -ef output
        pid = int(line.split(None, 2)[1])
        os.kill(pid, signal.SIGKILL)

Related

Python subprocess: Issues capturing error where part of command fail when piped together

Take a simple command in bash
cmd='ls -l | wc -l'
I understand that we can run this command several ways using subprocess call/check_output/communicate. The problem arises for me if the initial command does not work or fails for some reason, e.g. if we replace ls with lsx:
cmd='lsx -l | wc -l'
In this case, how can we capture the error, or do we just have to process the output to figure it out? Here is what I tried.
>>> import subprocess
>>> subprocess.call('lsx -l | wc -l', shell=True)
/bin/sh: lsx: command not found
0
0
>>> subprocess.check_output('lsx -l | wc -l', shell=True)
/bin/sh: lsx: command not found
b' 0\n'
It seems that the error code is still 0 in the above two commands.
I also tried https://docs.python.org/3.5/library/subprocess.html#replacing-shell-pipeline but cannot figure out how to get the error code for the first process.
If you specify bash instead of sh, you can set the pipefail option to return a nonzero exit status if any part of a pipeline fails:
subprocess.check_output(['bash', '-c', 'set -o pipefail; lsx -l | wc -l'])
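To illustrate (a sketch, assuming bash is on PATH): with pipefail set, the failure of lsx propagates to the pipeline's exit status, so check_output raises CalledProcessError instead of silently returning b' 0\n':

```python
import subprocess

try:
    subprocess.check_output(
        ['bash', '-c', 'set -o pipefail; lsx -l | wc -l'],
        stderr=subprocess.DEVNULL)  # hide the "command not found" noise
except subprocess.CalledProcessError as exc:
    # with pipefail, the pipeline's status is the rightmost nonzero
    # status, here the one from the failed lsx
    print('pipeline failed with status', exc.returncode)
```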
That said, you can certainly avoid using shell=True altogether:
import subprocess
try:
    p1 = subprocess.Popen(['ls', '--invalid-argument'], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['wc', '-l'], stdin=p1.stdout, stdout=subprocess.PIPE)
    wc_stdout = p2.communicate()[0]
    if p1.wait() != 0 or p2.wait() != 0:
        raise RuntimeError("Something failed!")
except FileNotFoundError as ex:
    raise RuntimeError("Something failed, because we couldn't find an executable!") from ex
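One detail worth adding to a manual pipeline like this, per the subprocess documentation: close p1.stdout in the parent after wiring it to p2, so p1 receives SIGPIPE if p2 exits before reading everything. A sketch with a pipeline that succeeds:

```python
import subprocess

p1 = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-l'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out = p2.communicate()[0]
p1.wait()
print(int(out))  # number of lines ls -l printed
```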
You can assign stderr to PIPE. Consider this example:
>>> from subprocess import PIPE, Popen
>>> sub = Popen('lsx -l | wc -l', shell=True, stderr=PIPE, stdout=PIPE)
>>> output, error_output = sub.communicate()
>>> error_output
b'/bin/sh: 1: lsx: not found\n'
>>> output
b'0\n'
>>> sub = Popen('ls -l | wc -l', shell=True, stderr=PIPE, stdout=PIPE)
>>> output, error_output = sub.communicate()
>>> error_output
b''

How to execute awk command inside python

I am trying to run the following awk command inside python but I get a syntax error related to the quotes:
import subprocess
COMMAND = "df /dev/sda1 | awk /'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}"
subprocess.call(COMMAND, shell=True)
I tried to escape the quotes but I am still getting the same error.
You may want to put ''' or """ around the string, since you have both ' and " in it. Because the awk program itself ends with a single quote, triple double quotes work here (note that the stray leading " and the / before 'NR==2 in your version also need to go):
import subprocess
COMMAND = """df /dev/sda1 | awk 'NR==2 {sub("%","",$5); if ($5 >= 80) {printf "Warning! Space usage is %d%%", $5}}'"""
subprocess.call(COMMAND, shell=True)
There also seems to be a relevant answer already for this as well: awk commands within python script
Try this:
import subprocess
COMMAND="df /dev/sda1 | awk 'NR==2 {sub(\"%\",\"\",$5); if ($5 >= 80) {printf \"Warning! Space usage is %d%%\", $5}}'"
subprocess.Popen(COMMAND,stdin=subprocess.PIPE,stdout=subprocess.PIPE, shell=True).stdout.read()
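Alternatively, the awk part can be done in Python itself, which removes the quoting headache entirely. A sketch (using / instead of /dev/sda1 so it runs on any machine; standard df output puts the use% in the fifth column of the second line):

```python
import subprocess

out = subprocess.check_output(['df', '/']).decode()
fields = out.splitlines()[1].split()  # second line: the filesystem row
usage = int(fields[4].rstrip('%'))    # fifth column, e.g. "42%" -> 42
if usage >= 80:
    print('Warning! Space usage is %d%%' % usage)
```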
I was writing a Python script for deployment purposes, and one part of the script was to explicitly kill the process if it is not stopped successfully.
Below is the Python code, which actually performs
Find the process ID of the process named myApplication:
ps -ef | grep myApplication | grep -v grep | awk '{print $2}'
and then performs
kill -9 PID  # where PID is the output of the earlier command
import os
import signal
import subprocess

def killApplicationProcessIfStillRunning(app_name):
    p1 = subprocess.Popen(['ps', '-ef'], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['grep', app_name], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(['grep', '-v', 'grep'], stdin=p2.stdout, stdout=subprocess.PIPE)
    p4 = subprocess.Popen(['awk', '{print $2}'], stdin=p3.stdout, stdout=subprocess.PIPE)
    out, err = p4.communicate()
    if out:
        pid = out.splitlines()[0].decode()
        print('Attempting to kill ' + app_name + ' process with PID ' + pid)
        os.kill(int(pid), signal.SIGKILL)
Now invoke the above method as
killApplicationProcessIfStillRunning('myApplication')
Hope it helps someone.

Setting Variable to Gateway IP

I have a code so far that filters out everything except the gateway IP (route -n | awk '{if($4=="UG")print $2}'), but I'm trying to figure out how to pipe this to a variable in Python. Here's what I got:
import shlex;
from subprocess import Popen, PIPE;
cmd = "route -n | grep 'UG[ \t]' | awk '{print $2}'";
gateway = Popen(shlex.split(cmd), stdout=PIPE);
gateway.communicate();
exit_code = gateway.wait();
Any ideas?
NOTE: I'm new at this.
For better or worse, your cmd uses a shell pipeline. To use shell features in subprocess, one must set shell=True:
from subprocess import Popen, PIPE
cmd = "/sbin/route -n | grep 'UG[ \t]' | awk '{print $2}'"
gateway = Popen(cmd, shell=True, stdout=PIPE)
stdout, stderr = gateway.communicate()
exit_code = gateway.wait()
Alternatively, one could keep shell=False, eliminate the pipeline, and do all the string processing in python:
from subprocess import Popen, PIPE
cmd = "/sbin/route -n"
gateway = Popen(cmd.split(), stdout=PIPE)
stdout, stderr = gateway.communicate()
exit_code = gateway.wait()
gw = [line.split()[1] for line in stdout.decode().split('\n') if 'UG' in line][0]
Because of the vagaries of shell processing, and unless there is a specific need, it is probably best to avoid shell=True.

subprocess pid different from ps output

Why is it that the subprocess pid (Popen.pid) has a different value from what the ps command returns?
I've noticed this when ps is called both from inside Python (with subprocess.call()) and from another terminal.
Here's a simple python file to test:
#!/usr/bin/python3
'''
Test subprocess termination
'''
import subprocess
command = 'cat'
#keep pipes so that cat doesn't complain
proc = subprocess.Popen(command,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE,
                        shell=True)
print('pid = %d' % proc.pid)
subprocess.call("ps -A | grep -w %s" % command,
                shell=True)
proc.terminate()
proc.wait()  # make sure it's dead before exiting python
Usually the pid reported by ps is 1 or 2 more than that reported by Popen.pid.
Because the command is run with shell=True, the pid returned by subprocess is that of the shell process used to run the command.
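If matching PIDs matters, a common workaround (assuming a POSIX shell that supports exec) is to prefix the command with exec, so the shell replaces itself with the command instead of forking a child:

```python
import subprocess

# The shell execs into sleep, so proc.pid is sleep's own PID,
# matching what ps reports.
proc = subprocess.Popen('exec sleep 30', shell=True)
print('pid = %d' % proc.pid)
proc.terminate()  # SIGTERM now goes straight to sleep
proc.wait()
```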

Processing output from cmdline via a Python script

I'm trying to use the subprocess module with Python 2.6 in order to run a command and get its output. The command is typically run like this:
/usr/local/sbin/kamctl fifo profile_get_size myprofile | awk -F ':: ' '{print $2}'
What's the best way to use the subprocess module in my script to execute that command with those arguments and get the return value from the command?
Do you want the output, the return value (AKA status code), or both?
If the amount of data emitted by the pipeline on stdout and/or stderr is not too large, it's pretty simple to get "all of the above":
import subprocess
s = """/usr/local/sbin/kamctl fifo profile_get_size myprofile | awk -F ':: ' '{print $2}'"""
p = subprocess.Popen(s, shell=True, stdout=subprocess.PIPE)
out, err = p.communicate()
print 'out: %r' % out
print 'err: %r' % err
print 'status: %r' % p.returncode
If you have to deal with potentially huge amounts of output, it takes a bit more code -- doesn't look like you should have that problem, judging from the pipeline in question.
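For the potentially-huge-output case mentioned above, the usual pattern is to iterate over stdout line by line instead of buffering everything with communicate(). A sketch using seq (a coreutils stand-in for the real pipeline):

```python
import subprocess

p = subprocess.Popen(['seq', '1', '5'], stdout=subprocess.PIPE)
total = 0
for line in p.stdout:  # reads incrementally; memory stays bounded
    total += int(line)
p.stdout.close()
p.wait()
print(total)  # 1+2+3+4+5 = 15
```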
For example, you can get stdout like this:
>>> import subprocess
>>> process = subprocess.Popen("echo 'test'", shell=True, stdout=subprocess.PIPE)
>>> process.wait()
0
>>> process.stdout.read()
'test\n'
