I'm trying to create a Python script that can do two things:
Run normally if no args are present.
If install is passed as an argument, install itself to a specific directory (/tmp) and run the installed copy in the background, detached from the current process.
I've tried multiple combinations of subprocess.run and subprocess.Popen with the shell, close_fds, and other options (I even tried nohup), but since I don't have a good understanding of how process spawning works, I don't seem to be using the right one.
What I'm looking for when I use the install argument is to see "Installing..." and that's it: the new process should be running in the background, detached, with my shell ready. But what I see is the child process still attached and my terminal busy printing "Running..." right after the installing message.
How should this be done?
import subprocess
import sys
import time
import os

def installAndRun():
    print('Installing...')
    scriptPath = os.path.realpath(__file__)
    scriptName = (__file__.split('/')[-1] if '/' in __file__ else __file__)
    # Copy script to new location (installation)
    subprocess.run(['cp', scriptPath, '/tmp'])
    # Now run the installed script
    subprocess.run(['python3', f'/tmp/{scriptName}'])

def run():
    for _ in range(5):
        print('Running...')
        time.sleep(1)

if __name__ == "__main__":
    if 'install' in sys.argv:
        installAndRun()
    else:
        run()
Edit: I've just realised that the process does not end when called like that.
Do not use "cp" to copy the script, but shutil.copy() instead.
Instead of "python3", use sys.executable to start the script with the same interpreter the original is started with.
subprocess.Popen() without anything else will work as long as the child process isn't writing anything to stdout or stderr and isn't requesting any input. In general, a child given subprocess.PIPE can stall unless communicate() is called or the pipes are read/written. You have to use os.fork() to detach from the parent (research how daemons are made), then use:
p = subprocess.Popen([sys.executable, new_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.close() # If you do not need it
p.communicate()
or do not use subprocess.PIPE for stdin, stderr, and stdout, and make sure that the terminal is bound to the child when forking. After os.fork() you can do what you want with the parent and what you want with the child. You can bind the child to whatever terminal you want, or start a new shell, e.g.:
pid = os.fork()
if pid == 0:  # Code in this if block is the child
    <code to change the terminal and appropriately point sys.stdout, sys.stderr and sys.stdin>
    subprocess.Popen([os.getenv("SHELL"), "-c", f"{sys.executable} {new_path}"]).communicate()
Note that you can point the PIPEs to file-like objects using the stdin, stderr and stdout arguments if you need to.
To detach on Windows you can use os.startfile(), or use subprocess.Popen(...).communicate() in a thread. If you then sys.exit() the parent, the child should stay open. (That is how it worked on Windows XP with Python 2.x; I didn't try with Py3 or on newer Windows versions.)
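A rough sketch of the communicate()-in-a-thread variant just described (my illustration; untested on modern Windows, and "child.py" is a placeholder name):

import subprocess
import sys
import threading

def fire_and_forget(cmd):
    # communicate() blocks, so run it in a daemon thread; if the parent
    # exits via sys.exit(), the thread dies but the child keeps running.
    threading.Thread(target=lambda: subprocess.Popen(cmd).communicate(),
                     daemon=True).start()

fire_and_forget([sys.executable, "child.py"])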
It seems like the correct combination was to use Popen + subprocess.PIPE for both stdout and stderr. The code now looks like this:
import subprocess
import sys
import time
import os

def installAndRun(scriptPath, scriptName):
    print('Installing...')
    # Copy script to new location (installation)
    subprocess.run(['cp', scriptPath, '/tmp'])
    # Now run the installed script
    subprocess.Popen(['python3', f'/tmp/{scriptName}'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def run(scriptPath):
    for _ in range(5):
        print(f'Running... {scriptPath}')
        time.sleep(1)

if __name__ == "__main__":
    scriptPath = os.path.realpath(__file__)
    scriptName = (__file__.split('/')[-1] if '/' in __file__ else __file__)
    if 'install' in sys.argv:
        installAndRun(scriptPath, scriptName)
    else:
        run(scriptPath)
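One caveat with this: pipes that nobody reads can fill up and block the child if it prints a lot. On POSIX systems, a sketch that detaches more cleanly without a manual double fork (my addition, not from the thread; /tmp/myscript.py is a placeholder path):

import subprocess
import sys

# start_new_session=True makes the child call setsid(), so it is detached
# from the parent's session and terminal; DEVNULL avoids both inherited
# descriptors and pipes that nobody reads.
subprocess.Popen([sys.executable, '/tmp/myscript.py'],
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL,
                 start_new_session=True)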
Related
I have a python script that does this:
p = subprocess.Popen(["pythonscript.py"], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
theStdin=request.input.encode('utf-8')
(outputhere,errorshere) = p.communicate(input=theStdin)
It works as expected; it waits for the subprocess to finish via p.communicate(). However, within pythonscript.py I want to "fire and forget" a "grandchild" process. I'm currently doing this by overriding the join method:
class EverLastingProcess(Process):
    def join(self, *args, **kwargs):
        pass  # Overrides join so that it doesn't block. Otherwise the parent waits.

    def __del__(self):
        pass
And starting it like this:
p = EverLastingProcess(target=nameOfMyFunction, args=(arg1, etc,), daemon=False)
p.start()
This also works fine when I just run pythonscript.py in a bash terminal or bash script: control returns and a response comes back while the child process started by EverLastingProcess keeps going. However, when I run pythonscript.py with Popen as shown above, the timings suggest that Popen is waiting on the grandchild to finish.
How can I make it so that the Popen only waits on the child process, and not any grandchild processes?
The solution above (using the join method with the shell=True addition) stopped working when we upgraded our Python recently.
There are many references on the internet about the pieces and parts of this, but it took me some doing to come up with a useful solution to the entire problem.
The following solution has been tested in Python 3.9.5 and 3.9.7.
Problem Synopsis
The names of the scripts match those in the code example below.
A top-level program (grandparent.py):
Uses subprocess.run or subprocess.Popen to call a program (parent.py)
Checks return value from parent.py for sanity.
Collects stdout and stderr from the main process 'parent.py'.
Does not want to wait around for the grandchild to complete.
The called program (parent.py)
Might do some stuff first.
Spawns a very long process (the grandchild - "longProcess" in the code below).
Might do a little more work.
Returns its results and exits while the grandchild (longProcess) continues doing what it does.
Solution Synopsis
The important part isn't so much what happens with subprocess. Instead, the method for creating the grandchild/longProcess is the critical part. It is necessary to ensure that the grandchild is truly emancipated from parent.py.
Subprocess only needs to be used in a way that captures output.
The longProcess (grandchild) needs the following to happen:
It should be started using multiprocessing.
It needs multiprocessing's 'daemon' set to False.
It should also be invoked using the double-fork procedure.
In the double-fork, extra work needs to be done to ensure that the process is truly separate from parent.py. Specifically:
Move the execution away from the environment of parent.py.
Use file handling to ensure that the grandchild no longer uses the file handles (stdin, stdout, stderr) inherited from parent.py.
Example Code
grandparent.py - calls parent.py using subprocess.run()
#!/usr/bin/env python3
import subprocess
p = subprocess.run(["/usr/bin/python3", "/path/to/parent.py"], capture_output=True)
## Comment the following if you don't need reassurance
print("The return code is: " + str(p.returncode))
print("The standard out is: ")
print(p.stdout)
print("The standard error is: ")
print(p.stderr)
parent.py - starts the longProcess/grandchild and exits, leaving the grandchild running. After 10 seconds, the grandchild will write timing info to /tmp/timelog.
#!/usr/bin/env python3
import time

def longProcess():
    time.sleep(10)
    fo = open("/tmp/timelog", "w")
    fo.write("I slept! The time now is: " + time.asctime(time.localtime()) + "\n")
    fo.close()

import os, sys

def spawnDaemon(func):
    # Do the UNIX double-fork magic; see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177).
    try:
        pid = os.fork()
        if pid > 0:  # parent process
            return
    except OSError as e:
        print("fork #1 failed. See next.")
        print(e)
        sys.exit(1)

    # Decouple from the parent environment.
    os.chdir("/")
    os.setsid()
    os.umask(0)

    # Do the second fork.
    try:
        pid = os.fork()
        if pid > 0:
            # Exit from the second parent.
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed. See next.")
        print(e)
        sys.exit(1)

    # Redirect standard file descriptors.
    # Here, they are reassigned to /dev/null, but they could go elsewhere.
    sys.stdout.flush()
    sys.stderr.flush()
    si = open('/dev/null', 'r')
    so = open('/dev/null', 'a+')
    se = open('/dev/null', 'a+')
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # Run your daemon.
    func()

    # Ensure that the daemon exits when complete.
    os._exit(os.EX_OK)

import multiprocessing

daemonicGrandchild = multiprocessing.Process(target=spawnDaemon, args=(longProcess,))
daemonicGrandchild.daemon = False
daemonicGrandchild.start()
print("have started the daemon")  # This will get captured as stdout by grandparent.py
References
The code above was mainly inspired by the following two resources.
This reference is succinct about the use of the double-fork but does not include the file handling we need in this situation.
This reference contains the needed file handling, but does many other things that we do not need.
Edit: the below stopped working after a Python upgrade, see the accepted answer from Lachele.
Working answer from a colleague, change to shell=True like this:
p = subprocess.Popen("pythonscript.py", stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
I've tested, and the grandchild subprocesses stay alive after the child process returns, without waiting for them to finish.
I'm trying to port a shell script to a much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I'm sure it's related to the concept of a daemon somehow, but I couldn't find an easy way to do this.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output=subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the less portable os.P_NOWAIT flag).
See the documentation here.
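For example, a sketch using the more portable P_NOWAIT mode, which returns the child's PID immediately instead of waiting (my illustration; it assumes a POSIX system with /bin/sleep):

import os

# P_NOWAIT returns immediately with the new process id instead of waiting
# for the child to finish.
pid = os.spawnl(os.P_NOWAIT, '/bin/sleep', 'sleep', '30')
print(pid)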
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter; this keeps the spawned subprocess from inheriting the Python process's file descriptors, so it can continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Both capture output and run on background with threading
As mentioned on this answer, if you capture the output with stdout= and then try to read(), then the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout to a log file while also echoing it to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (open an interactive session and issue help(os)). The relevant functions are fork and any of the exec family. To give you an idea how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument containing the program's name and its parameters; you may also want to define stdin, stdout, and stderr for the new process):
try:
    pid = os.fork()
except OSError, e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## os.execv strips off args[0] for the arguments
    os.execv(args[0], args)
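The snippet above uses Python 2 syntax (except OSError, e). A Python 3 sketch of the same fork/exec idea (my rendering, not the answerer's; /bin/sleep is a placeholder target):

import os
import sys

def spawn_detached(args):
    # args is the program path followed by its arguments, as in the answer.
    try:
        pid = os.fork()
    except OSError as e:
        print(e, file=sys.stderr)
        sys.exit(1)
    if pid == 0:  # child
        os.execv(args[0], args)  # replace the child image with the target program
    # the parent simply continues

spawn_detached(['/bin/sleep', '30'])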
You can use
import os

pid = os.fork()
if pid == 0:
    # continue with the other code in the child ...

This will make the forked Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the script should not show a window and should work like a background process.
Environment: Raspberry Pi Wheezy
I have a python program that uses Popen to call another python program
from subprocess import *
oJob = Popen('sudo python mypgm.py', shell=True)
Another menu option is supposed to end the job immediately
oJob.kill()
but the job is still running??
When you add the option shell=True, Python launches a shell and the shell in turn launches the process python mypgm.py. You are killing the shell process here, which doesn't kill its own child that runs mypgm.py.
To ensure that the child process is killed by oJob.kill(), you need to group them all under one process group and make the shell process the group leader.
The code is:
import os
import signal
import subprocess

# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(pro.pid, signal.SIGTERM)  # Send the signal to all processes in the group
When you send SIGTERM to the process group, the shell and all of its child processes receive it.
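A slightly more defensive variant (my sketch, continuing the snippet above) resolves the group id at kill time instead of assuming the shell's pid equals its group id:

# os.getpgid() fetches the child's current process-group id, which is
# safer if the group leader is not exactly pro.pid.
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)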
You need to add a creation flag arg
oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
source
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess.
EDIT: I agree with the comment about how to import things and why you are getting "something is undefined". Also, the other answer seems to be on the right track in getting the pid:
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags=sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
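For what it's worth, a Windows-only sketch of why this flag matters: CTRL_BREAK_EVENT can only be delivered to a process group, which is the documented use case for CREATE_NEW_PROCESS_GROUP ('mypgm.py' stands in for the question's script):

import signal
import subprocess
import sys

p = subprocess.Popen([sys.executable, 'mypgm.py'],
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# CTRL_BREAK_EVENT is delivered to the whole new process group.
p.send_signal(signal.CTRL_BREAK_EVENT)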
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
I have some Python code that occasionally needs to spawn a new process to run a shell script in a "fire and forget" manner, i.e. without blocking. The shell script will not communicate with the original Python code and will in fact probably terminate the calling Python process, so the launched shell script cannot be a child process of the calling Python process. I need it to be launched as an independent process.
In other words, let's say I have mycode.py and it launches script.sh. Then mycode.py will continue processing without blocking. The script script.sh will do some things independently and will then actually stop and restart mycode.py. So the process that runs script.sh must be completely independent of mycode.py. How exactly can I do this? I think subprocess.Popen will not block, but will still create a child process that terminates as soon as mycode.py stops, which is not what I want.
Try prepending "nohup" to script.sh. You'll probably need to decide what to do with stdout and stderr; I just drop it in the example.
import os
from subprocess import Popen
devnull = open(os.devnull, 'wb') # Use this in Python < 3.3
# Python >= 3.3 has subprocess.DEVNULL
Popen(['nohup', 'script.sh'], stdout=devnull, stderr=devnull)
Just use subprocess.Popen. The following works OK for me on Windows XP / Windows 7 with Python 2.5.4, 2.6.6, and 2.7.4, and after being converted with py2exe (I haven't tried 3.3). It comes from the need to delete expired test software on the client's machine.
import os
import subprocess
import sys
from tempfile import gettempdir

def ExitAndDestroy(ProgPath):
    """ Exit and destroy """
    absp = os.path.abspath(ProgPath)
    fn = os.path.join(gettempdir(), 'SelfDestruct.bat')
    script_lines = [
        '@rem Self Destruct Script',
        '@echo ERROR - Attempting to run expired test only software',
        '@pause',
        '@del /F /Q %s' % (absp,),
        '@echo Deleted Offending File!',
        '@del /F /Q %s\n' % (fn,),
        # '@exit\n',
    ]
    bf = open(fn, 'wt')
    bf.write('\n'.join(script_lines))
    bf.flush()
    bf.close()
    p = subprocess.Popen([fn], shell=False)
    sys.exit(-1)

if __name__ == "__main__":
    ExitAndDestroy(sys.argv[0])
I want to initiate a process from my python script main.py. Specifically, I want to run the below command:
`nohup python ./myfile.py &`
and the file myfile.py should continue running, even after the main.py script exits.
I also wish to get the pid of the new process.
I tried:
os.spawnl*
os.exec*
subprocess.Popen
and all are terminating the myfile.py when the main.py script exits.
Update: Can I use os.startfile with xdg-open? Is it the right approach?
Example
a = subprocess.Popen([sys.executable, "nohup /usr/bin/python25 /long_process.py &"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
print a.pid
If I check ps aux | grep long_process, there is no process running.
long_process.py just keeps printing some text; it never exits.
Am I doing anything wrong here?
You open your long-running process and keep a pipe to it, so you expect to talk to it. When your launcher script exits, you can no longer talk to it. The long-running process receives a SIGPIPE and exits.
The following just worked for me (Linux, Python 2.7).
Create a long-running executable:
$ echo "sleep 100" > ~/tmp/sleeper.sh
Run Python REPL:
$ python
>>> import subprocess
>>> import os
>>> p = subprocess.Popen(['/bin/sh', os.path.expanduser('~/tmp/sleeper.sh')])
>>> # look ma, no pipes!
>>> print p.pid
29893
Exit the REPL and see the process still running:
>>> ^D
$ ps ax | grep sleeper
29893 pts/0 S 0:00 /bin/sh .../tmp/sleeper.sh
29917 pts/0 S+ 0:00 grep --color=auto sleeper
If you want to first communicate with the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process; do not die on it. Live without stdin after the launcher process exits.
Pass whatever you wanted using arguments, environment variables, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server (see the sketch after this list).
Make the long-running process fork after the initial bidirectional communication phase is done.
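A minimal named-pipe sketch, as referenced in the list above (my illustration, POSIX only; /tmp/ctl is a made-up path):

import os

FIFO = '/tmp/ctl'  # made-up path for illustration
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)  # create the named pipe once

# Long-running process: block until a writer connects, then read commands.
with open(FIFO) as f:
    for line in f:
        print('got command:', line.strip())

Any other process (or a shell, via echo hi > /tmp/ctl) can then write commands into the pipe.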
You can use os.fork().
import os

pid = os.fork()
if pid == 0:  # new process
    os.system("nohup python ./myfile.py &")
    exit()

# parent process continues
I could not see any process running.
You don't see any process running because the child python process exits immediately. The Popen arguments are incorrect as user4815162342 says in the comment.
To launch a completely independent process, you could use python-daemon package or use systemd/supervisord/etc:
#!/usr/bin/python25
import daemon
from long_process import main

with daemon.DaemonContext():
    main()
Though it might be enough in your case, to start the child with correct Popen arguments:
with open(os.devnull, 'r+b', 0) as DEVNULL:
    p = Popen(['/usr/bin/python25', '/path/to/long_process.py'],
              stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT, close_fds=True)
    time.sleep(1)  # give it a second to launch
    if p.poll():   # the process already finished and it has a nonzero exit code
        sys.exit(p.returncode)
If the child process doesn't require python2.5 then you could use sys.executable instead (to use the same Python version as the parent).
Note: the code closes DEVNULL in the parent without waiting for the child process to finish (it has no effect on the child).