Python script to kill Java processes

I am new to Python scripting. We are trying to kill multiple Java processes using a Python script. Below is the script:
#!/usr/bin/env python3
import os, signal

def process():
    name = ['test1.jar', 'test2.jar', 'test3.jar']
    try:
        for line in os.popen("ps ax | grep " + name + " | grep -v grep"):
            fields = line.split()
            pid = fields[0]
            os.kill(int(pid), signal.SIGKILL)
            print("Process Successfully terminated")
    except:
        print("Error Encountered while running script")

process()
We are not able to kill the processes, and end up with the "Error Encountered while running script" message.
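The immediate bug: name is a list, so "ps ax | grep " + name raises a TypeError, and the bare except hides it behind the generic message. A minimal sketch of one possible fix, looping over each jar name (one approach among several):

#!/usr/bin/env python3
import os
import signal

def kill_jars(names):
    for name in names:
        # one ps/grep pass per jar; "grep -v grep" filters out the grep itself
        for line in os.popen("ps ax | grep " + name + " | grep -v grep"):
            pid = int(line.split()[0])
            os.kill(pid, signal.SIGKILL)
            print("Killed %s (pid %d)" % (name, pid))

kill_jars(['test1.jar', 'test2.jar', 'test3.jar'])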

Related

Lighttpd CGI Python fails to run system processes

I'm trying to run terminal commands from a web-facing Python script.
I have tried many things, but none seem to work, such as: adding 'www-data' to sudoers, using the full path to the binary, prefixing the command with sudo, and using three different system calls (os.spawnl and subprocess); none of that works.
Read-only commands like "ps aux" that only output information work, but a simple echo to a file doesn't. It seems I need permissions to do so. What more can I try?
Example from the output: Unexpected error: (<class 'subprocess.CalledProcessError'>, CalledProcessError(2, '/bin/echo hello > /var/www/html/cgi-bin/test2.htm'), <traceback object>)
In that example, the /var/www/html/cgi-bin/ folder is owned by "www-data", the same user the server runs as.
#!/usr/bin/python3
# coding=utf-8
import os
import sys
import subprocess
import cgi

SCRIPT_PATH = "/var/www/html/scripts/aqi3.py"
DATA_FILE = "/var/www/html/assets/aqi.json"
KILL_PROCESS = "ps aux | grep " + SCRIPT_PATH + " | grep -v \"grep\" | awk '{print $2}' | xargs kill -9"
START_PROCESS = "/usr/bin/python3 " + SCRIPT_PATH + " start > /dev/null 2>&1 &"
STOP_PROCESS = "/usr/bin/python3 " + SCRIPT_PATH + " stop > /dev/null 2>&1 &"

# Don't edit
def killProcess():
    os.spawnl(os.P_NOWAIT, KILL_PROCESS)
    try:
        os.spawnl(os.P_NOWAIT, "/bin/echo hello > /var/www/html/cgi-bin/test2.htm")
        proc = subprocess.Popen(['sudo', 'echo', 'hello > /var/www/html/cgi-bin/test3.htm'])
        print(subprocess.check_output("/bin/echo hello > /var/www/html/cgi-bin/test2.htm", shell=True, timeout=10))
    except:
        print("Unexpected error:", sys.exc_info())
    print(KILL_PROCESS)

def stopSensor():
    killProcess()
    os.spawnl(os.P_NOWAIT, STOP_PROCESS)

def restartProcess():
    killProcess()
    print(START_PROCESS)
    print(os.spawnl(os.P_NOWAIT, START_PROCESS))

def main():
    arguments = cgi.FieldStorage()
    for key in arguments.keys():
        value = arguments[key].value
        if key == 'action':
            if value == 'stop':
                stopSensor()
                print("ok")
                return
            elif value == 'start' or value == 'restart':
                restartProcess()
                print("ok")
                return
            elif value == 'resetdata':
                try:
                    with open(DATA_FILE, 'w') as outfile:
                        outfile.write('[]')
                except:
                    print("Unexpected error:", sys.exc_info())
                print("ok")
                return
    print("?")

main()
I was able to solve my problem with: http://alexanderhoughton.co.uk/blog/lighttpd-changing-default-user-raspberry-pi/
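Worth noting separately from the permissions fix: os.spawnl executes its path argument directly, without a shell, so a string containing pipes or redirection (like KILL_PROCESS above) is treated as a single, nonexistent file name. A hedged sketch of shell-free equivalents using subprocess (paths taken from the question):

import subprocess

# Run the pipeline through an explicit shell so |, > and 2>&1 are interpreted.
subprocess.call("/bin/echo hello > /var/www/html/cgi-bin/test2.htm", shell=True)

# Or skip the shell entirely and do the redirection in Python.
with open("/var/www/html/cgi-bin/test2.htm", "w") as f:
    subprocess.call(["/bin/echo", "hello"], stdout=f)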

How to kill a java process by name in python?

I am trying to kill a Java process named "MyClass" using the Python script below:
import os
os.system("kill $(ps aux | grep 'MyClass' | grep -v 'grep' | awk '{print $2}')")
But this gives me the output below, and the process is still running:
sh: 1: kill: Usage: kill [-s sigspec | -signum | -sigspec] [pid | job]... or
kill -l [exitstatus]
512
I know that the $ sign is the problem here but do not know how to make this work.
Any help/hint is appreciated.
Thanks.
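A note on the error itself: the usage message usually means the $(...) substitution expanded to nothing (no process matched), so kill ran with no arguments; the 512 is just os.system's encoded exit status (2 << 8). A hedged variant of the same one-liner that tolerates an empty match by handing the pids to xargs (GNU xargs' -r flag skips running kill when there is no input):

import os
os.system("ps aux | grep 'MyClass' | grep -v 'grep' | awk '{print $2}' | xargs -r kill")

One answer below instead uses jps to find the pid: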
import os
import subprocess

def terminate_java_process(process_name):
    # jps lists running JVMs, one per line, as "<pid> <main class>"
    proc = subprocess.Popen(["jps"], stdout=subprocess.PIPE)
    (out, err) = proc.communicate()
    processes = {}
    for line in out.split(b"\n"):
        try:
            name = str(line, 'utf-8').split(' ')[1]
        except IndexError:
            continue
        pid = str(line, 'utf-8').split(' ')[0]
        processes[name] = pid
    if process_name in processes:
        os.system("kill -s TERM " + processes[process_name])
Here is another way:
I fetch all the processes, and by looping over each one I pick out the required one and, if it is found, kill it.
I use this approach to find out how many processes are running for the same client, in the same category.
import os
import signal
from subprocess import Popen, PIPE

# this will fetch the processes into the stdout variable
processes = Popen(['ps', '-ef'], stdout=PIPE, stderr=PIPE)
stdout, error = processes.communicate()
for line in stdout.splitlines():
    if b"Process_name to check" in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
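A shorter alternative not in the original answers: if pkill is available, its -f flag matches the pattern against the full command line, which avoids parsing ps output entirely (a sketch, assuming "MyClass" appears only in the target's command line):

import subprocess

# pkill -f matches against the whole command line, e.g. "java MyClass"
subprocess.call(["pkill", "-f", "MyClass"])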

How can I start a process and put it to background in python?

I am currently writing my first Python program (in Python 2.6.6). The program facilitates starting and stopping different applications running on a server, providing the user with common commands (like starting and stopping system services on a Linux server).
I start the applications' startup scripts with:
p = subprocess.Popen(startCommand, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate()
print(output)
The problem is that the startup script of one application stays in the foreground, so p.communicate() waits forever. I have already tried putting "nohup startCommand &" in front of the startCommand, but that did not work as expected.
As a workaround I now use the following bash script to call the application's start script:
#!/bin/bash
LOGFILE="/opt/scripts/bin/logs/SomeServerApplicationStart.log"
nohup /opt/someDir/startSomeServerApplication.sh >${LOGFILE} 2>&1 &
STARTUPOK=$(tail -1 ${LOGFILE} | grep "Server started in RUNNING mode" | wc -l)
COUNTER=0
while [ $STARTUPOK -ne 1 ] && [ $COUNTER -lt 100 ]; do
    STARTUPOK=$(tail -1 ${LOGFILE} | grep "Server started in RUNNING mode" | wc -l)
    if (( STARTUPOK )); then
        echo "STARTUP OK"
        exit 0
    fi
    sleep 1
    COUNTER=$(( $COUNTER + 1 ))
done
echo "STARTUP FAILED"
The bash script is called from my Python code. This workaround works perfectly, but I would prefer to do it all in Python...
Is subprocess.Popen the wrong way? How could I accomplish my task in Python only?
First, it is easy not to block the Python script in communicate... by not calling communicate! Just read from the command's output or error output until you find the correct message, then just forget about the command.
# to avoid waiting for an EOF on a pipe ...
def getlines(fd):
    line = bytearray()
    c = None
    while True:
        c = fd.read(1)
        if not c:          # read(1) returns an empty string at EOF, not None
            return
        line += c
        if c == '\n':
            yield str(line)
            del line[:]

p = subprocess.Popen(startCommand, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)   # send stderr to stdout, same as 2>&1 for bash
for line in getlines(p.stdout):
    if "Server started in RUNNING mode" in line:
        print("STARTUP OK")
        break
else:   # end of input without getting startup message
    print("STARTUP FAILED")
p.poll()   # get status from child to avoid a zombie
# other error processing
The problem with the above is that the server is still a child of the Python process and could get unwanted signals such as SIGHUP. If you want to make it a daemon, you must first start a subprocess that in turn starts your server. That way, when the first child ends, it can be waited on by the caller, and the server will get a PPID of 1 (adopted by the init process). You can use the multiprocessing module to ease that part.
Code could be like:
import multiprocessing
import subprocess

# to avoid waiting for an EOF on a pipe ...
def getlines(fd):
    line = bytearray()
    c = None
    while True:
        c = fd.read(1)
        if not c:          # read(1) returns an empty string at EOF, not None
            return
        line += c
        if c == '\n':
            yield str(line)
            del line[:]

def start_child(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         shell=True)
    for line in getlines(p.stdout):
        print line
        if "Server started in RUNNING mode" in line:
            print "STARTUP OK"
            break
    else:
        print "STARTUP FAILED"

def main():
    # other stuff in program
    p = multiprocessing.Process(target=start_child, args=(server_program,))
    p.start()
    p.join()
    print "DONE"
    # other stuff in program

# protect program startup for multiprocessing module
if __name__ == '__main__':
    main()
One could wonder why the getlines generator is needed when a file object is itself an iterator that returns one line at a time. The problem is that the iterator internally calls read, which reads ahead until EOF when the file is not connected to a terminal. As stdout is connected to a pipe here, you would not get anything until the server ends... which is not what is expected.
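For reference (not part of the original answer), the same line-by-line reading can be done with the two-argument form of iter, which keeps calling readline until it returns the sentinel; on a pipe, readline returns as soon as a full line is available and returns '' only at EOF (in Python 3 the sentinel would be b''):

# reads one line at a time without the iterator's read-ahead buffering
for line in iter(p.stdout.readline, ''):
    if "Server started in RUNNING mode" in line:
        print("STARTUP OK")
        break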

Python: check if a program is up and running

I have a Python script that parses some files, but sometimes unknown errors appear and the script fails.
So I tried to make a watchdog program that checks a file containing a pid and a timestamp, which the main program updates every 30 seconds.
import subprocess
import time

def start_server():
    subprocess.Popen("C:\Server.py", shell=True)

while True:
    f = open('C:\server.conf', 'r+')
    text = f.read().split(' ')
    f.close()
    pid = int(text[0])
    lastTime = text[1]
    if float(time.time()) - float(lastTime) > 90:
        temp = subprocess.Popen("taskkill /F /T /PID %i" % pid, stdout=subprocess.PIPE, shell=True)
        out, err = temp.communicate()
        print ' [INFO] Server.py was killed, and started again.'
        start_server()
    time.sleep(30)
But this doesn't start a new Server.py if the last instance of the program fails.
Any idea how I can make this work?
Thanks!
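No answer is shown here, but one plausible failure mode, judging from the code: if Server.py dies, C:\server.conf may be missing or malformed, so open() or the parsing raises and kills the watchdog itself before start_server() is ever reached. A hedged sketch that restarts regardless (names and the 90-second threshold taken from the question):

import subprocess
import time

def start_server():
    subprocess.Popen("C:\\Server.py", shell=True)

while True:
    try:
        with open('C:\\server.conf', 'r') as f:
            pid_text, last_time = f.read().split(' ')
        if time.time() - float(last_time) > 90:
            # taskkill may fail if the process is already gone; restart either way
            subprocess.call("taskkill /F /T /PID %s" % pid_text, shell=True)
            print ' [INFO] Server.py was killed, and started again.'
            start_server()
    except (IOError, ValueError):
        # conf file missing or unreadable: assume the server is down
        start_server()
    time.sleep(30)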

script will not save locally over ssh

I am having some issues getting a script to run.
This works perfectly from command line:
ssh root@ip.add.re.ss /usr/sbin/tcpdump -i eth0 -w - | /usr/sbin/tcpdump -r - -w /home/cuckoo/cuckoo/storage/analyses/1/saveit.pcap
However when I use this script:
#!/usr/bin/env python
import sys
import os
import subprocess

cmd = []
remote_cmd = []
local_cmd = []
connect_cmd = []
outfile = None

try:
    connect_cmd = str.split(os.environ["RTCPDUMP_CMD"], " ")
except:
    connect_cmd = str.split("ssh root@fw", " ")

remote_cmd.extend(str.split("/usr/sbin/tcpdump -w -", " "))
local_cmd.extend(str.split("/usr/sbin/tcpdump -r -", " "))

for argument in xrange(1, len(sys.argv)):
    if sys.argv[argument] == "-w":
        outfile = sys.argv[argument+1]
        sys.argv[argument] = None
        sys.argv[argument+1] = None
    if sys.argv[argument] == "-i":
        remote_cmd.append(sys.argv[argument])
        remote_cmd.append(sys.argv[argument+1])
        sys.argv[argument] = None
        sys.argv[argument+1] = None
    if not sys.argv[argument] == None:
        if " " in sys.argv[argument]:
            local_cmd.append("'" + sys.argv[argument] + "'")
            remote_cmd.append("'" + sys.argv[argument] + "'")
        else:
            local_cmd.append(sys.argv[argument])
            remote_cmd.append(sys.argv[argument])

if not outfile == None:
    local_cmd.insert(1, "-w")
    local_cmd.insert(2, outfile)

cmd.extend(connect_cmd)
cmd.extend(remote_cmd)
cmd.append("|")
cmd.extend(local_cmd)

try:
    subprocess.call(cmd)
except KeyboardInterrupt:
    exit(0)
It spawns both tcpdump processes on the remote host, and the second tcpdump fails to save due to a non-working path. I added a print cmd at the end, and the ssh command being passed to the prompt is exactly the same (when running the script itself, Cuckoo passes a ton of options when it calls the script; it also puts the -w - before the -i eth0, but I tested that and it works from the command line as well).
So I am thoroughly stumped: why is the pipe to local not working in the script, but it works from the prompt?
Oh, and credit for the script belongs to Michael Boman
http://blog.michaelboman.org/2013/02/making-cuckoo-sniff-remotely.html
So I am thoroughly stumped, why is the pipe to local not working in the script but it works from prompt?
Because pipes are handled by the shell, and you're not running a shell.
If you look at the docs, under Replacing Older Functions with the subprocess Module, it explains how to do the same thing shell pipelines do. Here's the example:
output=`dmesg | grep hda`
# becomes
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
So, in your terms:
cmd.extend(connect_cmd)
cmd.extend(remote_cmd)
try:
    remote = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    local = subprocess.Popen(local_cmd, stdin=remote.stdout)
    remote.stdout.close()  # Allow remote to receive a SIGPIPE if local exits.
    local.communicate()
except KeyboardInterrupt:
    exit(0)
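An alternative the answer doesn't show: keep the pipe but hand the whole command string to a shell, which is closest to what worked at the prompt (a sketch; this glosses over quoting of individual arguments):

import subprocess

# cmd already contains the "|" element, so the joined string is a shell pipeline
subprocess.call(" ".join(cmd), shell=True)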
