I have a Python interactive console program that calls a subprocess using Popen
What happens to subprocess if I exit the interactive program that submitted the subprocess?
The subprocess will terminate along with the parent unless the subprocess has become a daemon and detached from its parent.
See How Linux Process Life Cycle Works – Parent, Child, and Init Process, which is a pretty good explanation of the life cycle of processes (at least on Linux/UNIX).
Related
What I want: a script that starts and kills a communication protocol.
What I have:
I have a Python script that launches a shell script, and this shell script initializes the protocol. When I kill the parent process everything goes fine (though in the final project the parent process will have to stay alive), but when I kill the subprocess it becomes a zombie process and my protocol keeps running.
What I believe the problem may be: I'm "killing" the shell script, not the protocol (which is what I actually want to kill).
The line where I start the shell script:
`protocolProcess = subprocess.Popen(["sh", arquivo], cwd = localDoArquivo)  # starts the protocol`
`protocolProcessPID = protocolProcess.pid  # stores the PID of protocolProcess`
The line where I kill the shell script:
`os.kill(protocolPID, signal.SIGTERM)`
Well, that's it! If anyone can help me, I'll be very grateful
Zombie processes are processes that have exited but have not yet been reaped by the parent process.
The parent process will hold onto those process table entries until the end of time, or until it reads the child's exit status, or is itself killed.
It sounds like the parent process needs to have a better handle on how it spawns and reaps its children. Simply killing a child process is not enough to clean up a zombie.
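A minimal sketch of what that reaping looks like with the subprocess module (the sleep command is just a stand-in for the real child process):
import signal
import subprocess

proc = subprocess.Popen(["sleep", "60"])   # stand-in for the shell script / protocol
# ... later, when the child should be stopped ...
proc.send_signal(signal.SIGTERM)           # same effect as os.kill(proc.pid, signal.SIGTERM)
proc.wait()                                # read the exit status so no zombie is left behind
print("child exited with code", proc.returncode)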
I'm wondering if this is the correct way to execute a system process and detach from parent, though allowing the parent to exit without creating a zombie and/or killing the child process. I'm currently using the subprocess module and doing this...
os.setsid()
os.umask(0)
p = subprocess.Popen(['nc', '-l', '8888'],
cwd=self.home,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
os.setsid() changes the process group, which I believe is what lets the process continue running when its parent exits, as it no longer belongs to the same process group.
Is this correct and also is this a reliable way of performing this?
Basically, I have a remote control utility that communicates through sockets and allows processes to be started remotely, but I have to ensure that if the remote control dies, the processes it started continue running unaffected.
I was reading about double-forks and I'm not sure whether this is necessary, and/or whether subprocess.Popen's close_fds somehow takes care of that and all that's needed is to change the process group?
Thanks.
Ilya
For Python 3.8.x, the process is a bit different. Use the start_new_session parameter available since Python 3.2:
import shlex
import subprocess
cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)
This will allow the parent process to exit while the child process continues to run. Not sure about zombies.
The start_new_session parameter is supported on all POSIX systems, e.g. Linux, macOS, etc.
Tested on Python 3.8.1 on macOS 10.15.5
Popen on Unix is implemented using fork. That means you'll be safe if:
you run Popen in your parent process
you then immediately exit the parent process
When the parent process exits, the child process is inherited by the init process (launchd on macOS) and will still run in the background.
The first two lines of your Python program are not needed; this works perfectly:
import subprocess
p = subprocess.Popen(['nc', '-l', '8888'],
cwd="/",
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
I was reading about double-forks and not sure if this is necessary
This would be needed if your parent process keeps running and you need to protect your children from dying with the parent. This answer shows how this can be done.
How the double-fork works (see the sketch after this list):
create a child via os.fork()
in this child, call Popen(), which launches the long-running process
exit the child: the Popen process is inherited by init and runs in the background
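A minimal sketch of this pattern, assuming the same nc -l 8888 example as above (error handling omitted):
import os
import subprocess

pid = os.fork()
if pid == 0:
    # intermediate child: launch the long-running process ...
    subprocess.Popen(["nc", "-l", "8888"], cwd="/")
    # ... and exit immediately, so the nc process is re-parented to init
    os._exit(0)
# original parent: reap the intermediate child so it does not become a zombie
os.waitpid(pid, 0)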
Why does the parent have to exit immediately? What happens if it doesn't?
If you leave the parent running and the user stops the process, e.g. via Ctrl-C (SIGINT) or Ctrl-\ (SIGQUIT), the signal goes to the whole foreground process group and kills both the parent process and the Popen process.
What if it exits one second after forking?
Then, during that one-second window your Popen process is vulnerable to Ctrl-C etc. If you need to be 100% sure, use double forking.
On a Windows 7 machine:
I have a main (Python) program that I start at a command prompt [main process].
This program spawns a child (Python) program [child process].
I close the command prompt.
Result:
The child process ends immediately.
On the other hand, if I end the main program from Task Manager, I observe that the child process keeps running.
I was wondering why the two approaches do not have the same result. Is a different signal being sent in the two cases?
Comments on the question pointed me to the answer.
I was using subprocess.Popen(args) to spawn the child process. This spawned the child successfully, but the child process was launched in the same command window as its parent.
Going through the subprocess.Popen documentation, I found additional arguments that can be passed to launch the child process in another console window.
Launching the child with the following arguments solved my problem:
subprocess.Popen(args, shell=True, creationflags=subprocess.CREATE_NEW_CONSOLE)
The last argument, subprocess.CREATE_NEW_CONSOLE, is Windows-only.
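A minimal self-contained sketch of the same call (notepad.exe is just a placeholder for the real child program, and shell=True is dropped since it is not needed when passing an argument list):
import subprocess

# Windows-only: start the child program in its own console window
child = subprocess.Popen(["notepad.exe"],
                         creationflags=subprocess.CREATE_NEW_CONSOLE)
print("started child with PID", child.pid)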
I have been searching for a way to start and terminate long-running "batch jobs" in Python. Right now I'm using os.system() to launch a long-running batch job inside each child process. As you might have guessed, os.system() spawns a new process inside that child process (a grandchild process?), so I cannot kill the batch job from the grandparent process. To provide some visualization of what I have just described:
Main (grandparent) process, with PID = AAAA
|
|------> child process, with PID = BBBB
         |
         |------> os.system("some long-running batch file")
                  [grandchild process, with PID = CCCC]
So, my problem is I cannot kill the grandchild process from the grandparent...
My question is: is there a way to start a long-running batch job inside a child process and be able to kill that batch job just by terminating the child process?
What are the alternatives to os.system() that I can use so that I can kill the batch job from the main process?
Thanks !!
The subprocess module is the proper way to spawn and control processes in Python.
From the docs:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as: os.system, os.spawn*, os.popen*, popen2.*, commands.*
So... if you are on Python 2.4+, subprocess is the replacement for os.system.
For stopping processes, check out the terminate() and communicate() methods of Popen objects.
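A minimal sketch of that idea applied to the question above, replacing os.system with Popen so the launching process keeps a handle it can kill later (long_batch.sh is just a placeholder name):
import subprocess

# Launch the batch job directly with Popen instead of os.system,
# so we keep a handle on the process we actually want to kill.
job = subprocess.Popen(["sh", "long_batch.sh"])   # placeholder script name
# ... later, from the same process ...
job.terminate()   # send SIGTERM to the batch job
job.wait()        # reap it so it does not linger as a zombie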
If you are on a POSIX-compatible system (e.g., Linux or OS X) and no Python code has to run after the child process, use os.execv. In general, avoid os.system and use the subprocess module instead.
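A minimal sketch of the os.execv route (the /bin/ls command is just an illustrative example): the current Python process is replaced by the new program, so no Python code runs afterwards.
import os

# Replace the current Python process with /bin/ls; this call does not return.
os.execv("/bin/ls", ["ls", "-l", "/tmp"])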
If you want control over starting and stopping child processes, you have to use threading. In that case, look no further than Python's threading module.
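A minimal sketch of how that could look, combining a worker thread with subprocess.Popen (the JobRunner class and the sleep command are illustrative assumptions, not from the original answer):
import subprocess
import threading

class JobRunner(threading.Thread):
    # Illustrative helper: starts a command and reaps it from a background thread.
    def __init__(self, cmd):
        super().__init__()
        self.proc = subprocess.Popen(cmd)   # start the job immediately

    def run(self):
        self.proc.wait()                    # reap the job when it exits

    def stop(self):
        self.proc.terminate()               # ask the job to exit

runner = JobRunner(["sleep", "60"])
runner.start()
# ... later ...
runner.stop()
runner.join()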