I'm trying to use Python to launch a command in multiple separate terminal instances simultaneously. What is the best way to do this? Right now I am using the subprocess module with Popen, which works for one command but not for multiple.
Thanks in advance.
Edit:
Here is what I am doing:
from subprocess import *
Popen('ant -Dport='+str(5555)+ ' -Dhost='+GetIP()+ ' -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8', shell=True)
The problem for me is that this launches a Java process in the terminal which I want to keep running indefinitely. Secondly, I want to run a similar command multiple times in multiple different processes.
This should stay open as long as the process is running. If you want to launch multiple instances simultaneously, just wrap it in a thread.
Untested code, but you should get the general idea:
import threading
from subprocess import Popen

class PopenThread(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self)
        self.port = port

    def run(self):
        # GetIP() is the helper function from the question
        Popen('ant -Dport=' + str(self.port) + ' -Dhost=' + GetIP() +
              ' -DhubURL=http://192.168.1.113:4444'
              ' -Denvironment=*firefox launch-remote-control'
              ' $HOME/selenium-grid-1.0.8', shell=True)

if '__main__' == __name__:
    PopenThread(5555).start()
    PopenThread(5556).start()
    PopenThread(5557).start()
EDIT: The double-fork method described here: https://stackoverflow.com/a/3765162/450517 by Mike would be the proper way to launch a daemon, i.e. a long-running process which doesn't communicate over stdio.
The simple answer I can come up with is to have Python use Popen to launch a shell script similar to:
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP1 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP2 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
# etc. ...
There's a fully-Python way to do this, but it's ugly, only works on Unix-like OSes, and I don't have time to write the code out. Basically, subprocess.Popen doesn't support it because it assumes you want to either wait for the subprocess to finish, interact with the subprocess, or monitor the subprocess. It doesn't support the "just launch it and don't bother me with it ever again" case.
The way that's done in Unix-like OSes is to:
Use fork to spawn a subprocess
Have that subprocess fork a subprocess of its own
Have the grandchild process redirect I/O to /dev/null and then use one of the exec functions to launch the process you really want to start (might be able to use Popen for this part)
The child process exits.
Now there's no link between the grandparent and grandchild, so if the grandchild terminates you don't get a SIGCHLD signal, and if the grandparent terminates it doesn't kill all the grandchildren.
I might be off in the details, but that's the gist. Backgrounding (&) and disowning in bash are supposed to accomplish the same thing.
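For the curious, here is a rough, untested sketch of that double-fork recipe in Python. It is not the original poster's code; the helper name and the example command are made up, and it only works on Unix-like OSes:

import os

def spawn_detached(args):
    """Launch args as a fully detached process (Unix only), following the steps above."""
    pid = os.fork()                      # fork a child
    if pid > 0:
        os.waitpid(pid, 0)               # parent reaps the short-lived child and returns
        return
    os.setsid()                          # child becomes a session leader, detaching from the terminal
    if os.fork() > 0:                    # child forks a grandchild...
        os._exit(0)                      # ...and exits, orphaning the grandchild
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):                 # grandchild points stdin/stdout/stderr at /dev/null
        os.dup2(devnull, fd)
    os.execvp(args[0], args)             # and execs the real program

spawn_detached(['sleep', '60'])          # placeholder command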
Here is a poor man's version of blocking on a batch of jobs. You can fancy it up with collections.deque or the like, or go even fancier with Twisted deferreds, or whatnot. Crummy parts include:
blocking
kill signals might not propagate down
season to taste!
import logging
basicConfig = dict(level=logging.INFO, format='%(process)s %(asctime)s %(lineno)s %(levelname)s %(name)s %(message)s')
logging.basicConfig(**basicConfig)
logger = logging.getLogger({"__main__":None}.get(__name__, __name__))
import subprocess
import time

def wait_all(list_of_Popens, sleep_time):
    """Blocking wait for all jobs to return.

    Args:
        list_of_Popens: list of possibly still-running jobs
    Returns:
        list_of_Popens: list of completed jobs
    Side effect:
        blocks until all jobs complete.
    """
    jobs = list_of_Popens
    while None in [j.returncode for j in jobs]:
        for j in jobs:
            j.poll()
        logger.info("not all jobs complete, sleeping for %i", sleep_time)
        time.sleep(sleep_time)
    return jobs

jobs = [subprocess.Popen('sleep 1'.split()) for x in range(10)]
jobs = wait_all(jobs, 1)
I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the os.P_NOWAIT flag).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
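A minimal sketch of that suggestion (the command name here is just a placeholder):

import subprocess

# close_fds=True keeps the child from inheriting the parent's file descriptors;
# 'some_long_running_command' is a placeholder, not a real program.
subprocess.Popen(['some_long_running_command'], close_fds=True)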
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Capture the output and run in the background at the same time, with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout to a log file as well as echo it to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to eventually contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking off new processes (open an interactive session and issue help(os)). The relevant functions are fork and any of the exec ones. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
import os
import sys

try:
    pid = os.fork()
except OSError as e:
    ## some debug output goes here
    sys.exit(1)
if pid == 0:
    ## optionally use os.putenv(..) to set environment variables
    ## os.execv expects args[0] to be the path to the program and
    ## args to be the full argument list (args[0] included)
    os.execv(args[0], args)
You can use
import os

pid = os.fork()
if pid == 0:
    # child process: continue with your other code here
    ...
This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory a script saved that way should not show a window and should behave like a background process.
I'm new to Python, so here's what I'm looking to get done.
I would like to use Python to manage some of my game servers and start/stop them. For this I would like to run every game server in its own process.
What's the best way to create processes using python, so these processes can continue even if the main application is stopped?
To start a server I only need to execute shell code.
After stopping my main application and restarting it, how can I get access to these processes again?
I'm not sure if I understand the question completely, but maybe something like this?
Run process:
import subprocess
subprocess.Popen(['/path/gameserver']) #keeps running
And in another script you can use 'ps -A' to find the pid and kill (or restart) it:
import os, signal, subprocess

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.decode().splitlines():
    if 'gameserver' in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
Check the subprocess module. There is a function called call. See here.
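For example (untested; the path is the placeholder from the question):

import subprocess

# subprocess.call() blocks until the command finishes and returns its exit code;
# use subprocess.Popen() instead if you don't want to wait for it.
ret = subprocess.call(['/path/gameserver'])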
You may need to set the process to not be a daemon process.
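If that refers to the multiprocessing module, a minimal sketch would look like this (the target function is a made-up placeholder):

import multiprocessing

def start_server():
    pass  # placeholder: launch the game server here

p = multiprocessing.Process(target=start_server)
p.daemon = False  # daemonic children are terminated when the parent exits; non-daemonic ones are not
p.start()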
Environment: Raspberry Pi Wheezy
I have a python program that uses Popen to call another python program
from subprocess import *
oJob = Popen('sudo python mypgm.py', shell=True)
Another menu option is supposed to end the job immediately
oJob.kill()
but the job is still running??
When you add the option shell=True, Python launches a shell and the shell in turn launches the process python mypgm.py. You are killing the shell process here, which doesn't kill its own child that runs mypgm.py.
To ensure that the child process gets killed on oJob.kill(), you need to group them all under one process group and make the shell process the group leader.
The code is:
import os
import signal
import subprocess

cmd = 'sudo python mypgm.py'  # the command from the question

# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(pro.pid, signal.SIGTERM)  # Send the signal to every process in the group
When you send the SIGTERM signal to the process group, it kills the shell process and all of its child processes as well.
You need to add a creation flags argument:
oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
source
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess.
EDIT: I agree with the comment about how to import things and why you are getting "something is undefined". Also, the other answer seems to be on the right track by getting the pid.
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags=sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
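As a hedged illustration of that warning, the command from the question can be passed as an argument list instead, so no shell ever interprets the string:

import subprocess

# argument-list form: no shell is involved, so external input cannot inject extra commands
oJob = subprocess.Popen(['sudo', 'python', 'mypgm.py'])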
I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write out the results in different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. In my Python script I use the join method to wait until all the processes finish their tasks, and this causes the whole thing to wait on the erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so that option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions about relevant Python modules would be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing
def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling the external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh " + mname + " " + script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file = open('run.log', 'w')  # placeholder; not defined in the original snippet
    log_file.write(str(dict1.keys()))
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if ((mod == dict1[key][-1]) | (len(arr) % 4 == 0)):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
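If you launch the command from Python rather than from a wrapper script, the equivalent (assuming Python 3.3+ for subprocess.DEVNULL; the command and arguments are the placeholders from the shell snippet above) would be:

import subprocess

# with stdin redirected to /dev/null, a read on stdin returns EOF instead of blocking
subprocess.call(['my_other_command', '-a', '-b', 'arg1', 'arg2'],
                stdin=subprocess.DEVNULL)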
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your spawned processes could instead create their own subprocess.Popen instances and poll them with the poll() method on the object instead of wait() (in a loop with a suitable delay). This leaves them free to remain in communication with the main process so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then exit itself.
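A rough, untested sketch of that polling idea, using a multiprocessing.Event as the stop signal (the event and the one-second delay are my own choices, not from the question):

import subprocess
import time

def run_use(mname, script, stop_event):
    proc = subprocess.Popen(['ssh', mname, script])
    while proc.poll() is None:      # None means the subprocess is still running
        if stop_event.is_set():     # the main process asked us to give up
            proc.terminate()
            break
        time.sleep(1)

# In the main process: create stop_event = multiprocessing.Event(), pass it in args=(...),
# and call stop_event.set() to make every worker terminate its subprocess.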
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh " + mname + " " + script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate - you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt that will be given by your process, then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.
Hi all,
I start a process using spawnProcess and want to kill it when a certain Factory of mine stops.
Here is something like what I wrote:
p = SomeProtocol(ProcessProtocol)
reactor.spawnProcess(p, 'twistd', ['twistd', '-y', 'anotherMain.py'], {})

class Factory(ServerFactory):
    ...
    def StopFactory(self):
        # p is the ProcessProtocol above
        p.transport.signalProcess("KILL")
I thought the subprocess would be killed, but it is not.
I tried using p.transport.signalProcess("KILL") somewhere else, and it works.
What's wrong with my code? Thanks!
This can be because twistd daemonizes anotherMain.py. After anotherMain.py becomes a daemon, the twistd process exits. So anotherMain.py isn't really a subprocess of your main process.
Try adding the -n option:
reactor.spawnProcess(p, 'twistd', ['twistd', '-ny', 'anotherMain.py'], {})