python create subprocess (newbie)

I'm new to Python, so here's what I'm looking to get done.
I would like to use Python to manage some of my game servers and start/stop them. For this I would like to run every game server in its own process.
What's the best way to create processes in Python so that these processes keep running even after the main application is stopped?
To start a server I only need to execute shell code.
After stopping my main application and restarting it, how can I get access to these processes again?

I'm not sure if I understand the question completely, but maybe something like this?
Run process:
import subprocess
subprocess.Popen(['/path/gameserver']) #keeps running
And in another script you can use 'ps -A' to find the pid and kill (or restart) it:
import os
import signal
import subprocess

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if b'gameserver' in line:  # out is bytes in Python 3
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)

Check the subprocess module. There is a function called call.
You may need to set the process to not be a daemon process.
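To make that concrete, here is a minimal sketch of a server that outlives the managing script and can be found again later. The server path and pidfile location are hypothetical placeholders; the key pieces are start_new_session=True (POSIX), which detaches the child so it survives the parent's exit, and a pidfile so a later run of the script can signal the server.

```python
import os
import signal
import subprocess

PIDFILE = "/tmp/gameserver.pid"  # hypothetical pidfile location

def start_server(cmd=("/path/gameserver",)):  # server path is a placeholder
    # start_new_session=True puts the child in its own session (POSIX only),
    # so it keeps running after this management script exits.
    proc = subprocess.Popen(list(cmd), start_new_session=True)
    with open(PIDFILE, "w") as f:
        f.write(str(proc.pid))  # remember the PID for a later run
    return proc.pid

def stop_server():
    # A later invocation of the script reads the pidfile and signals the server.
    with open(PIDFILE) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)
```

The pidfile replaces the `ps -A` grep from the answer above: instead of searching the process table, the stop step reads back the PID the start step recorded.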

Related

Popen not responding to kill

Environment: Raspberry Pi Wheezy
I have a python program that uses Popen to call another python program
from subprocess import *
oJob = Popen('sudo python mypgm.py',shell=True)
Another menu option is supposed to end the job immediately
oJob.kill()
but the job is still running??
When you add the option shell=True, Python launches a shell, and the shell in turn launches the process python mypgm.py. You are killing the shell process here, which doesn't kill its own child that runs mypgm.py.
To ensure the child process gets killed on oJob.kill(), you need to group them all under one process group and make the shell process the group leader.
The code is:
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)
os.killpg(pro.pid, signal.SIGTERM)  # send the signal to the whole process group
When you send SIGTERM signal to the shell process, it will kill all its child process as well.
You need to add a creationflags argument (note that CREATE_NEW_PROCESS_GROUP is Windows-only):
oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
source
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess.
EDIT: I agree with the comment on how to import things and why you were getting "something is undefined". The other answer also seems to be on the right track in getting the PID.
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags = sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
Warning Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input:
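To illustrate the warning, here is a minimal sketch of the difference, using a deliberately hostile-looking filename (the input value is made up for the example). Passing an argument list means no shell ever parses the string, so the metacharacters are inert:

```python
import subprocess

# Unsafe pattern: with shell=True, input like "notes; echo injected" would
# be parsed by the shell and the part after ";" would run as a command.
# subprocess.run("cat " + filename, shell=True)   # DON'T do this

# Safer: pass an argument list so no shell is involved at all.
filename = "notes; echo injected"   # hostile-looking external input
result = subprocess.run(["echo", filename], capture_output=True, text=True)
print(result.stdout.strip())        # → notes; echo injected (one literal argument)
```

The whole string reaches `echo` as a single argument; nothing after the semicolon is executed.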

Kill a chain of sub processes on KeyboardInterrupt

I'm having a strange problem I've encountered as I wrote a script to start my local JBoss instance.
My code looks something like this:
with open("/var/run/jboss/jboss.pid", "wb") as f:
    process = subprocess.Popen(["/opt/jboss/bin/standalone.sh", "-b=0.0.0.0"])
    f.write(str(process.pid))
try:
    process.wait()
except KeyboardInterrupt:
    process.kill()
Should be fairly simple to understand: write the PID to a file while it's running, and once I get a KeyboardInterrupt, kill the child process.
The problem is that JBoss keeps running in the background after I send the kill signal, as it seems that the signal doesn't propagate down to the Java process started by standalone.sh.
I like the idea of using Python to write system management scripts, but there are a lot of weird edge cases like this where if I would have written it in Bash, everything would have just worked™.
How can I kill the entire subprocess tree when I get a KeyboardInterrupt?
You can do this using the psutil library:
import psutil
# ...
proc = psutil.Process(process.pid)
for child in proc.children(recursive=True):
    child.kill()
proc.kill()
As far as I know the subprocess module does not offer any API function to retrieve the children spawned by subprocesses, nor does the os module.
A better way of killing the processes would probably be the following:
proc = psutil.Process(process.pid)
procs = proc.children(recursive=True)
procs.append(proc)
for proc in procs:
    proc.terminate()
gone, alive = psutil.wait_procs(procs, timeout=1)
for p in alive:
    p.kill()
This would give a chance to the processes to terminate correctly and when the timeout ends the remaining processes will be killed.
Note that psutil also provides a Popen class that has the same interface as subprocess.Popen plus all the extra functionality of psutil.Process. You may want to simply use that instead of subprocess.Popen. It is also safer, because psutil checks that PIDs don't get reused after a process terminates, while subprocess doesn't.
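A minimal sketch of that psutil.Popen combination, using a throwaway `sleep` child in place of a real server process (psutil is a third-party package, installed with `pip install psutil`):

```python
import psutil  # third-party: pip install psutil

# psutil.Popen wraps subprocess.Popen but also exposes the psutil.Process
# API, and guards against acting on a recycled PID after the child exits.
proc = psutil.Popen(["sleep", "60"])
print(proc.name(), proc.pid)        # Process methods work directly

# Terminate politely, then force-kill anything still alive after 3 seconds.
proc.terminate()
gone, alive = psutil.wait_procs([proc], timeout=3)
for p in alive:
    p.kill()
```

This is the same terminate-then-kill pattern from the answer above, applied to a single process through the combined interface.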

How to kill a process been created by subprocess in python?

Under the Linux Ubuntu operating system, I run the test.py script, which contains a GObject loop, using subprocess:
subprocess.call(["test.py"])
Now, this test.py will create a process. Is there a way to kill this process in Python?
Note: I don't know the process ID.
I am sorry if I didn't explain my problem very clearly; I am new to these forums and new to Python in general.
I would suggest not using subprocess.call, but instead constructing a Popen object and using its API: http://docs.python.org/2/library/subprocess.html#popen-objects
In particular:
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
HTH!
subprocess.call() is just subprocess.Popen().wait():
from subprocess import Popen
from threading import Timer
p = Popen(["command", "arg1"])
print(p.pid) # you can save pid to a file to use it outside Python
# do something else..
# now ask the command to exit
p.terminate()
terminator = Timer(5, p.kill) # give it 5 seconds to exit; then kill it
terminator.start()
p.wait()
terminator.cancel() # the child process exited, cancel the hit
subprocess.call waits for the process to complete and returns its exit code (an integer), so there is no way of knowing the process ID of the child process. You should consider using subprocess.Popen, which fork()s the child process and gives you a handle to it.
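That difference can be shown in a few lines, using the standard `true` and `sleep` commands as stand-ins for a real program:

```python
import subprocess

# subprocess.call blocks until the command exits and returns only the
# exit status; the child's PID is never exposed to the caller.
rc = subprocess.call(["true"])
print(rc)                   # → 0

# subprocess.Popen returns immediately with a handle, so the PID is
# available while the child is still running.
proc = subprocess.Popen(["sleep", "1"])
print(proc.pid)             # usable with os.kill(), pidfiles, etc.
proc.wait()
```

With the Popen handle you can call terminate(), kill(), or record the PID, none of which is possible through call().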

kill subprocess spawned by spawnProcess in twisted doesn't work

Hi all,
I start a process using spawnProcess and want to kill it when a certain Factory stops.
I wrote something like this:
p = SomeProtocol(ProcessProtocol)
reactor.spawnProcess(p, 'twistd', ['twistd', '-y', 'anotherMain.py'], {})

class Factory(ServerFactory):
    ...
    def StopFactory(self):
        # p is the protocol above
        p.transport.signalProcess("KILL")
I thought the subprocess would be killed, but it isn't.
I tried using p.transport.signalProcess("KILL") in some other place, and there it works.
What's wrong with my code? Thanks!
This can be because twistd daemonizes anotherMain.py. After anotherMain.py becomes a daemon, the twistd process exits, so anotherMain.py isn't really a subprocess of your main process anymore.
Try to add -n option:
reactor.spawnProcess(p, 'twistd', ['twistd', '-ny', 'anotherMain.py'], {})

python parallel programming

Hi, I need some guidance on writing a program that executes other Python programs, but at most 6 at a time, and that always tries to keep 6 processes running, starting a new one whenever one ends.
I would also like to know the current state of those processes without waiting for any of them to finish: what is the PID of a just-created process? Is it still running? Did it fail with an error, or finish successfully?
Some kind of job manager ...
import subprocess

def start():
    proc = {}
    for i in range(6):
        # note: shell=True was removed; combined with a list of arguments
        # it would discard everything after 'python' on POSIX systems
        proc[i] = subprocess.Popen(
            ['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )

if __name__ == '__main__':
    start()
Use celery.
Have a look at supervisord. It sounds like it will do what you want.
Maybe you can try the multiprocessing module; it can handle pools of worker processes, which seems similar to what you are trying to achieve.
You can use the poll() method to check whether a process is still running. You'd have to loop through each process, check whether it is still running, and start a new process if it is not.
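A minimal sketch of that poll() loop, written as a generic pool function. The command is a parameter here; in the question it would be `['python', 'someprogramm.py', ...]`. The function names and the polling interval are made up for the example:

```python
import subprocess
import time

def run_pool(cmd, total_jobs, max_procs=6):
    """Keep up to max_procs copies of cmd alive until total_jobs have run."""
    running, started, finished = [], 0, 0
    while started < total_jobs or running:
        # poll() returns None while a process runs, else its exit code.
        still_running = []
        for p in running:
            if p.poll() is None:
                still_running.append(p)
            else:
                finished += 1
        running = still_running
        # Top the pool back up whenever a slot frees.
        while started < total_jobs and len(running) < max_procs:
            running.append(subprocess.Popen(cmd))
            started += 1
        time.sleep(0.05)
    return finished
```

For example, `run_pool(['python', 'someprogramm.py'], total_jobs=20)` would keep 6 copies running until 20 have been started and all have finished. Each Popen handle also carries `.pid` and, once finished, `.returncode`, which answers the "what is the PID / did it fail?" part of the question.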
