Hi, I need some guidelines on how to write a program that executes another Python program, but at most, for example, 6 copies at once, and that always tries to keep 6 processes running even if one of them finishes.
I would also like to know what is happening to those processes at any moment, without waiting for any process to finish:
What is the pid of a just-created process? Is it still running? Has there been an error? Did it finish successfully?
some job manager ...
import subprocess

def start():
    procs = {}
    for i in range(6):
        # NB: the original passed shell=True together with an argument list,
        # which on POSIX hands only 'python' to the shell; pass the list directly
        procs[i] = subprocess.Popen(
            ['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
    return procs

if __name__ == '__main__':
    start()
Use celery.
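For example, a minimal sketch (module and task names are made up; assumes Celery with a running Redis broker):

# jobs.py  -- hypothetical module name
from celery import Celery
import subprocess

app = Celery('jobs', broker='redis://localhost')  # assumes a local Redis broker

@app.task
def run_program():
    # one task = one run of the script; returns its exit code
    return subprocess.call(['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'])

Start a worker with a concurrency of 6 (e.g. celery -A jobs worker -c 6; the exact CLI spelling varies between Celery versions) and enqueue jobs with run_program.delay(). The worker keeps at most 6 tasks running and picks up the next as soon as a slot frees, which is the "always 6" behaviour you describe.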
Have a look at supervisord. It sounds like it will do what you want.
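A config sketch of that idea (not from the original answer; the program name is a placeholder), using supervisord's numprocs and autorestart options to keep six copies running:

[program:someprogramm]
command=python someprogramm.py --env DEVELOPMENT -l
numprocs=6
process_name=%(program_name)s_%(process_num)02d
autorestart=true

With numprocs=6 supervisord starts six instances, and autorestart=true restarts any instance that exits.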
Maybe you can try the multiprocessing module; it can handle a pool of worker processes, which seems similar to what you're trying to achieve.
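A minimal sketch of that idea, reusing the command line from the question (the total job count of 100 is arbitrary):

import subprocess
from multiprocessing import Pool

def run_one(i):
    # each worker blocks on one copy of the script and returns its exit code
    return subprocess.call(['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l'])

if __name__ == '__main__':
    pool = Pool(processes=6)                  # never more than 6 workers at once
    results = pool.map(run_one, range(100))   # 100 jobs total, 6 at a time
    print(results)                            # list of exit codes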
You can use the poll() method to check whether a process is still running. You'd have to loop over your processes, poll() each one, and start a replacement whenever one has finished.
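Something like this sketch, which keeps six copies alive and reports each one's pid and state without ever blocking (the command line is taken from the question):

import subprocess
import time

CMD = ['python', 'someprogramm.py', '--env', 'DEVELOPMENT', '-l']

procs = [subprocess.Popen(CMD) for _ in range(6)]

while True:
    for i, p in enumerate(procs):
        ret = p.poll()          # None while running, exit code once finished
        if ret is None:
            print('pid %d still running' % p.pid)
        else:
            print('pid %d exited with code %d' % (p.pid, ret))
            procs[i] = subprocess.Popen(CMD)   # keep 6 running
    time.sleep(1)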
I'm new to Python, so here's what I'm looking to get done.
I would like to use Python to manage some of my game servers and start/stop them. For this I would like to run every game server in its own process.
What's the best way to create processes with Python, so that these processes can continue running even if the main application is stopped?
To start a server I only need to execute shell code.
After stopping my main application and restarting it, how can I get access to these processes again?
I'm not sure if I understand the question completely, but maybe something like this?
Run process:
import subprocess
subprocess.Popen(['/path/gameserver']) #keeps running
And in another script you can use 'ps -A' to find the pid and kill (or restart) it:
import subprocess, signal, os  # os was missing from the original imports

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if 'gameserver' in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
Check the subprocess module; it has a function called call (see the subprocess docs).
You may need to set the process to not be a daemon process.
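For example, a sketch of launching the server detached from the parent (my own illustration, assuming a POSIX system; preexec_fn=os.setsid puts the child in its own session so it survives the launcher exiting):

import os
import subprocess

# start the server in its own session so it is not killed
# when the launching script exits (POSIX only)
p = subprocess.Popen(
    ['/path/gameserver'],
    preexec_fn=os.setsid,
    stdout=open(os.devnull, 'w'),
    stderr=subprocess.STDOUT,
)
print(p.pid)  # record this somewhere (e.g. a pid file) to find the process later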
Under the Ubuntu Linux operating system, I run the test.py script, which contains a GObject loop, using subprocess:
subprocess.call(["test.py"])
Now, this test.py will create a process. Is there a way to kill this process in Python?
Note: I don't know the process ID.
I am sorry if I didn't explain my problem very clearly, as I am new to these forums and new to Python in general.
I would suggest not using subprocess.call but constructing a Popen object and using its API: http://docs.python.org/2/library/subprocess.html#popen-objects
In particular:
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
HTH!
subprocess.call() is just subprocess.Popen().wait():
from subprocess import Popen
from threading import Timer
p = Popen(["command", "arg1"])
print(p.pid) # you can save pid to a file to use it outside Python
# do something else..
# now ask the command to exit
p.terminate()
terminator = Timer(5, p.kill) # give it 5 seconds to exit; then kill it
terminator.start()
p.wait()
terminator.cancel() # the child process exited, cancel the hit
subprocess.call waits for the process to complete and returns its integer exit code, hence there is no way of knowing the process id of the child process. You should consider using subprocess.Popen, which forks the child process and gives you a handle to it.
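A short sketch of the difference, reusing the call from the question:

import subprocess

rc = subprocess.call(['test.py'])    # blocks until test.py exits
print(rc)                            # all you get back is the exit code

p = subprocess.Popen(['test.py'])    # returns immediately
print(p.pid)                         # the child's pid is available at once
p.terminate()                        # and you can signal it later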
I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write out the results in different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. In my Python script I use the join method to wait until all the processes finish their tasks. This causes the whole thing to wait for the erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so that option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in Python would be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing

def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh " + mname + " " + script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    log_file = open('run.log', 'w')  # log_file was undefined in the original
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']  # original wrote to dict, not dict1
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if mod == dict1[key][-1] or len(arr) % 4 == 0:
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your worker processes could instead create their own subprocess.Popen instances and check them with the poll() method instead of wait() (in a loop with a suitable delay). This leaves them free to stay in communication with the main process, so the main process can, for example, tell a child process to terminate its Popen instance with the terminate() or kill() methods and then exit itself.
So, the question is how the child process tells whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming it always uses some string you can look for. Alternatively, if the subprocess is expected to generate output continually, you could simply look for any output, and if a configured amount of time passes without any, declare the process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh " + mname + " " + script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate; you can only really address that with an overall timeout, but you didn't seem keen on that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt your process will give, your job is rather harder. Effectively what you're asking is to monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.
Hi all,
I start a process using spawnProcess and want to kill it when a certain Factory of mine stops.
Here is roughly what I wrote:
p = SomeProtocol(ProcessProtocol)
reactor.spawnProcess(p, 'twistd', ['twistd', '-y', 'anotherMain.py'], {})

class Factory(ServerFactory):
    ...
    def StopFactory(self):
        # which is the p above
        p.transport.signalProcess("KILL")
I thought the subprocess would be killed, but it is not.
I tried using p.transport.signalProcess("KILL") somewhere else, and it works there.
What's wrong with my code? Thanks!
This can be because twistd daemonizes anotherMain.py. After anotherMain.py becomes a daemon, the twistd process exits, so anotherMain.py isn't really a subprocess of your main process.
Try to add -n option:
reactor.spawnProcess(p, 'twistd', ['twistd', '-ny', 'anotherMain.py'], {})
Since input() and raw_input() block the program from doing anything else, I want to use a subprocess to run this program...
while True: print raw_input()
and get its output.
This is what I have as my reading program:
import subprocess
import sys  # sys was missing from the original imports

# pass the command as a list; a bare string fails on POSIX unless shell=True
process = subprocess.Popen(['python', 'subinput.py'],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    output = process.stdout.read(12)
    if output == '' and process.poll() is not None:
        break
    if output != '':
        sys.stdout.write(output)
        sys.stdout.flush()
When I run this, the subprocess exits almost as fast as it started. How can I fix this?
I'm afraid it won't work this way.
You assume that the subprocess will attach to your console (your special case of stdin). This does not work; the module only has two options for specifying that: PIPE and STDOUT.
When nothing is specified, the subprocess won't be able to use the corresponding stream: its output goes nowhere, or it receives no input. The raw_input() ends because of EOF.
The way to go is to have your input in the "main" program, and the work done in a subprocess.
EDIT:
Here's an example using multiprocessing:
from multiprocessing import Process, Pipe
import time

def child(conn):
    while True:
        print "Processing..."
        time.sleep(1)
        if conn.poll(0):
            output = conn.recv()
            print output
        else:
            print "I got nothing this time"

def parent():
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    while True:
        data = raw_input()
        parent_conn.send(data)
    # p.join() - you have to find some way to stop all this...
    # like a specific message to quit etc.

if __name__ == '__main__':
    parent()
You of course need to make it more robust by finding a way to stop this cooperation. In my example both processes are in the same file, but you may organize it differently.
This example works on Linux; you may have some problems with pipes on Windows, but it should be altogether solvable.
The "Processing" part is where you want to do something else, not just wait for the data from the parent.
I think the problem is that subprocesses are not directly hooked up to stdout and stdin, and therefore cannot receive keyboard input. Presumably raw_input() is throwing an exception.
If this is a practical issue and not an experiment, I recommend you use a library such as curses or pygame to handle your input. If you're experimenting and want to do it yourself, then I suppose you'll have to look at threads instead of subprocesses, though this is a fairly complex thing to try to do so you're certain to run into other issues.
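A minimal sketch of the threads approach mentioned above (my own illustration, written Python 2 style to match the rest of this thread; the Queue module is named queue on Python 3): a background thread blocks on raw_input() while the main loop polls a queue and never blocks.

import threading
import Queue  # 'queue' on Python 3

q = Queue.Queue()

def reader():
    while True:
        q.put(raw_input())  # blocks, but only this thread

t = threading.Thread(target=reader)
t.daemon = True  # don't keep the program alive just for input
t.start()

while True:
    try:
        line = q.get_nowait()   # never blocks the main loop
        print "got:", line
    except Queue.Empty:
        pass
    # ... do other work here ...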
Well, try a different architecture. You can use zeromq.
The producer produces all the items (here, the output to be sent via stdout) and broadcasts them via zmq.
The consumer listens on the port the producer broadcasts on and processes the items accordingly.
Here is an example: http://code.saghul.net/implementing-a-pubsub-based-application-with
Note
Use gevent or multiprocessing to spawn these processes.
You will have a master program which takes care of spawning the producer and consumer.
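A minimal PUB/SUB sketch of that idea (assumes the pyzmq package; the port number is arbitrary):

import time
import zmq

# producer: broadcasts items instead of printing them
def producer():
    sock = zmq.Context().socket(zmq.PUB)
    sock.bind('tcp://127.0.0.1:5556')
    i = 0
    while True:
        sock.send('item %d' % i)   # on Python 3 use send_string()
        i += 1
        time.sleep(1)

# consumer: subscribes and processes every item as it arrives
def consumer():
    sock = zmq.Context().socket(zmq.SUB)
    sock.connect('tcp://127.0.0.1:5556')
    sock.setsockopt(zmq.SUBSCRIBE, '')  # empty prefix = receive everything
    while True:
        print sock.recv()

Run producer() and consumer() in separate processes (e.g. via multiprocessing, as the answer suggests); the consumer never blocks the producer, and either side can be restarted independently.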