I start a process using spawnProcess and want to kill it when a certain Factory of mine stops.
Here is roughly what I wrote:
class SomeProtocol(ProcessProtocol):
    ...

p = SomeProtocol()
reactor.spawnProcess(p, 'twistd', ['twistd', '-y', 'anotherMain.py'], {})

class Factory(ServerFactory):
    ...
    def stopFactory(self):
        # p is the ProcessProtocol instance above
        p.transport.signalProcess("KILL")
I thought the subprocess would be killed, but it isn't.
I tried calling p.transport.signalProcess("KILL") somewhere else, and there it works.
What's wrong with my code? Thanks!
This is probably because twistd daemonizes anotherMain.py: once anotherMain.py becomes a daemon, the twistd process exits, so anotherMain.py isn't really a subprocess of your main process anymore.
Try adding the -n option:
reactor.spawnProcess(p, 'twistd', ['twistd', '-ny', 'anotherMain.py'], {})
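For context, here is a minimal sketch of the corrected pattern (the processEnded hook and the exact imports are my additions, not from the original code):

import sys
from twisted.internet import reactor
from twisted.internet.protocol import ProcessProtocol, ServerFactory

class SomeProtocol(ProcessProtocol):
    def processEnded(self, reason):
        print("child exited:", reason)

p = SomeProtocol()
# -n keeps twistd in the foreground, so anotherMain.py stays our
# direct child and the signal can actually reach it
reactor.spawnProcess(p, 'twistd', ['twistd', '-ny', 'anotherMain.py'], {})

class Factory(ServerFactory):
    def stopFactory(self):
        p.transport.signalProcess("KILL")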
I'm new to Python, so here's what I'm looking to get done.
I would like to use Python to manage some of my game servers and start/stop them. For this I would like to run every game server in its own process.
What's the best way to create processes in Python so that they can keep running even if the main application is stopped?
To start a server I only need to execute some shell code.
After stopping my main application and restarting it, how can I get access to those processes again?
I'm not sure if I understand the question completely, but maybe something like this?
Run the process:

import subprocess
subprocess.Popen(['/path/gameserver'])  # keeps running
And in another script you can use ps -A to find the PID and kill (or restart) it:

import os
import signal
import subprocess

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if 'gameserver' in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
Check the subprocess module; it has a function called call.
You may need to set the process to not be a daemon process.
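If that means multiprocessing's daemon flag (an assumption on my part; the answer doesn't say), a minimal sketch would be:

import multiprocessing
import subprocess

def serve():
    subprocess.call(['/path/gameserver'])

if __name__ == '__main__':
    p = multiprocessing.Process(target=serve)
    p.daemon = False  # daemonic children are terminated when the parent
                      # exits; non-daemonic ones may keep running
    p.start()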
I've run into a strange problem while writing a script to start my local JBoss instance.
My code looks something like this:
import subprocess

with open("/var/run/jboss/jboss.pid", "w") as f:
    process = subprocess.Popen(["/opt/jboss/bin/standalone.sh", "-b=0.0.0.0"])
    f.write(str(process.pid))

try:
    process.wait()
except KeyboardInterrupt:
    process.kill()
It should be fairly simple to understand: write the PID to a file while the server is running, and once I get a KeyboardInterrupt, kill the child process.
The problem is that JBoss keeps running in the background after I send the kill signal; the signal doesn't seem to propagate down to the Java process started by standalone.sh.
I like the idea of using Python to write system management scripts, but there are a lot of weird edge cases like this where if I would have written it in Bash, everything would have just worked™.
How can I kill the entire subprocess tree when I get a KeyboardInterrupt?
You can do this using the psutil library:
import psutil
# ...
proc = psutil.Process(process.pid)
for child in proc.children(recursive=True):
    child.kill()
proc.kill()
As far as I know, neither the subprocess module nor the os module offers an API to retrieve the children spawned by a subprocess.
A better way of killing the processes would probably be the following:
proc = psutil.Process(process.pid)
procs = proc.children(recursive=True)
procs.append(proc)
for p in procs:
    p.terminate()
gone, alive = psutil.wait_procs(procs, timeout=1)
for p in alive:
    p.kill()
This gives the processes a chance to terminate cleanly; when the timeout expires, any processes still alive are killed.
Note that psutil also provides a Popen class that has the same interface as subprocess.Popen plus all the extra functionality of psutil.Process. You may want to simply use that instead of subprocess.Popen. It is also safer, because psutil checks that PIDs don't get reused when a process terminates, while subprocess doesn't.
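For instance, a minimal sketch of that approach, reusing the JBoss command from the question:

import psutil

proc = psutil.Popen(["/opt/jboss/bin/standalone.sh", "-b=0.0.0.0"])
try:
    proc.wait()
except KeyboardInterrupt:
    # psutil.Popen objects expose psutil.Process methods directly
    for child in proc.children(recursive=True):
        child.kill()
    proc.kill()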
Under the Linux Ubuntu operating system, I run the test.py script, which contains a GObject loop, using subprocess:

subprocess.call(["test.py"])

Now, this test.py will create a process. Is there a way to kill that process from Python?
Note: I don't know the process ID.
I am sorry if I didn't explain my problem very clearly; I am new to these forums and to Python in general.
I would suggest not using subprocess.call but constructing a Popen object and using its API: http://docs.python.org/2/library/subprocess.html#popen-objects
In particular:
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
HTH!
subprocess.call() is just subprocess.Popen().wait():
from subprocess import Popen
from threading import Timer
p = Popen(["command", "arg1"])
print(p.pid) # you can save pid to a file to use it outside Python
# do something else..
# now ask the command to exit
p.terminate()
terminator = Timer(5, p.kill) # give it 5 seconds to exit; then kill it
terminator.start()
p.wait()
terminator.cancel() # the child process exited, cancel the hit
subprocess.call waits for the process to complete and returns the exit code (an integer), hence there is no way of knowing the process ID of the child. You should consider using subprocess.Popen, which forks the child process and returns immediately.
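A quick illustration of the difference, using sleep as a stand-in command:

import subprocess

# subprocess.call blocks and only returns the exit status:
rc = subprocess.call(['sleep', '1'])  # no handle on the child afterwards

# subprocess.Popen returns immediately and exposes the child's PID:
p = subprocess.Popen(['sleep', '1'])
print(p.pid)   # available while the child is running
rc = p.wait()  # the same exit status call() would have returned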
Hi, I need some guidelines on how to write a program that executes other Python programs, at most 6 at once, and that always tries to keep 6 processes running, starting a new one whenever one finishes.
I would also like to know what is happening to those processes at any moment, without waiting for any of them to finish:
What is the PID of the process just created? Is it still running? Did it hit an error? Did it finish successfully?
In short, some kind of job manager...
import subprocess

def start():
    proc = {}
    for i in range(6):
        # shell=True combined with an argument list would effectively run
        # only 'python' on POSIX, so it is omitted here
        proc[i] = subprocess.Popen(
            ['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
    return proc

if __name__ == '__main__':
    start()
Use celery.
Have a look at supervisord. It sounds like it will do what you want.
Maybe you can try the multiprocessing module; it can handle a pool of worker processes, which seems close to what you are trying to achieve.
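A minimal sketch of that idea, assuming the same someprogramm.py as in the question:

import multiprocessing
import subprocess

def run_job(i):
    # each worker launches one copy of the program and waits for it
    return subprocess.call(['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'])

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=6)   # at most 6 jobs at once
    exit_codes = pool.map(run_job, range(12))  # run 12 jobs, 6 at a time
    print(exit_codes)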
You can use the poll() method to check whether a process is still running. You'd have to loop over the processes, check each one, and start a new process whenever one has exited.
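A minimal sketch of that loop, again assuming someprogramm.py:

import subprocess
import time

CMD = ['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l']
procs = [subprocess.Popen(CMD) for _ in range(6)]

while True:  # runs forever, keeping 6 processes alive
    for i, p in enumerate(procs):
        if p.poll() is not None:  # the process has exited
            print('pid %d exited with code %d' % (p.pid, p.returncode))
            procs[i] = subprocess.Popen(CMD)  # start a replacement
    time.sleep(1)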
I'm trying to use Python to launch a command in multiple separate terminal instances simultaneously. What is the best way to do this? Right now I am trying to use the subprocess module with Popen, which works for one command but not for multiple.
Thanks in advance.
Edit:
Here is what I am doing:
from subprocess import Popen

Popen('ant -Dport=' + str(5555) + ' -Dhost=' + GetIP() +
      ' -DhubURL=http://192.168.1.113:4444'
      ' -Denvironment=*firefox launch-remote-control'
      ' $HOME/selenium-grid-1.0.8', shell=True)
The problem for me is that this launches a Java process in the terminal which I want to keep running indefinitely. Secondly, I want to run a similar command multiple times in multiple different processes.
This should stay open as long as the process is running. If you want to launch multiple commands simultaneously, just wrap each one in a thread.
untested code, but you should get the general idea:
import threading
from subprocess import Popen

class PopenThread(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self)
        self.port = port

    def run(self):
        Popen('ant -Dport=' + str(self.port) + ' -Dhost=' + GetIP() +
              ' -DhubURL=http://192.168.1.113:4444'
              ' -Denvironment=*firefox launch-remote-control'
              ' $HOME/selenium-grid-1.0.8', shell=True)

if '__main__' == __name__:
    PopenThread(5555).start()
    PopenThread(5556).start()
    PopenThread(5557).start()
EDIT: The double-fork method described in https://stackoverflow.com/a/3765162/450517 by Mike would be the proper way to launch a daemon, i.e. a long-running process which won't communicate over stdio.
The simple answer I can come up with is to have Python use Popen to launch a shell script similar to:
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP1 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP2 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
# etc. ...
There's a fully-Python way to do this, but it's ugly, only works on Unix-like OSes, and I don't have time to write the code out. Basically, subprocess.Popen doesn't support it because it assumes you want to either wait for the subprocess to finish, interact with the subprocess, or monitor the subprocess. It doesn't support the "just launch it and don't bother me with it ever again" case.
The way that's done in Unix-like OSes is to:
Use fork to spawn a subprocess
Have that subprocess fork a subprocess of its own
Have the grandchild process redirect I/O to /dev/null and then use one of the exec functions to launch the process you really want to start (might be able to use Popen for this part)
The child process exits.
Now there's no link between the grandparent and grandchild, so if the grandchild terminates you don't get a SIGCHLD signal, and if the grandparent terminates it doesn't kill all the grandchildren.
I might be off in the details, but that's the gist; see the sketch below. Backgrounding (&) and disowning in bash are supposed to accomplish the same thing.
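A minimal, Unix-only sketch of that dance (a sketch only; real daemonizers also handle umask, working directory and more):

import os

def spawn_detached(argv):
    """Launch argv with no remaining link to the calling process."""
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)  # reap the first child so it doesn't linger
        return
    os.setsid()             # first child: detach from the controlling terminal
    if os.fork() > 0:
        os._exit(0)         # first child exits; the grandchild is orphaned
    # grandchild: point stdio at /dev/null, then exec the real program
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.execvp(argv[0], argv)

spawn_detached(['sleep', '60'])  # example: a detached 60-second sleep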
Here is a poor man's version of a blocking job queue. You can fancify it with collections.deque or the like, or go even fancier with Twisted deferreds, or whatnot. Crummy parts include:
blocking
kill signals might not propagate down
season to taste!
import logging
import subprocess
import time

basicConfig = dict(level=logging.INFO, format='%(process)s %(asctime)s %(lineno)s %(levelname)s %(name)s %(message)s')
logging.basicConfig(**basicConfig)
logger = logging.getLogger({"__main__": None}.get(__name__, __name__))

def wait_all(list_of_Popens, sleep_time):
    """Blocking wait for all jobs to return.

    Args:
        list_of_Popens: list of possibly running jobs.
        sleep_time: seconds to sleep between polls.
    Returns:
        the same list of jobs, now all complete.
    Side effect:
        blocks until all jobs complete.
    """
    jobs = list_of_Popens
    while None in [j.returncode for j in jobs]:
        for j in jobs:
            j.poll()
        logger.info("not all jobs complete, sleeping for %i", sleep_time)
        time.sleep(sleep_time)
    return jobs

jobs = [subprocess.Popen('sleep 1'.split()) for x in range(10)]
jobs = wait_all(jobs, 1)