Python: kill defunct child process from the parent

I'm using Python's multiprocessing library. I create a new Process to run my job, and after the job finishes I just call exit(0), but the process stays in the 'defunct' state.
I have tried kill -9 PID, kill -18 PID, and so on, but I can't get rid of the defunct child while keeping the parent process alive.
What should I do? This is my Python code, 'process.py':
def process(data, id):
    while isWorking:
        ...  # do the work, then set isWorking to False

process = Process(target=process, args=(data, id))
process.start()
Here is my process list (ps -al):
0 S 1001 19295 19291 0 80 0 - 45817 poll_s pts/3 00:00:09 python3
1 Z 1001 19339 19295 3 80 0 - 0 - pts/3 00:00:58 pyth <defunct>
I want to keep the parent process (19295) alive and get rid of the defunct child process (19339). How can I do this?

Try these commands:
pkill -9 python
ps -ef|grep python
kill -9 <pid>
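Note, though, that a 'defunct' (zombie) entry is not a running process: it has already exited, and no signal, including kill -9, can remove it. It disappears only once the parent reaps it. With multiprocessing that means calling join() on the Process object in the parent (or multiprocessing.active_children(), which reaps finished children as a side effect). A minimal sketch, with a stand-in worker in place of the question's job:

from multiprocessing import Process, active_children
from time import sleep

def job(data, id):          # stand-in for the question's worker
    sleep(1)                # work, then return (the child exits here)

if __name__ == '__main__':
    child = Process(target=job, args=(None, 1))
    child.start()
    sleep(2)                # the child has exited by now and shows up as <defunct>
    child.join()            # reaps the child; the <defunct> entry disappears
    # alternatively, reap any already-finished children without blocking:
    active_children()       # side effect: joins (reaps) every finished child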

Related

Impossible to send SIGKILL to supervisord

Supervisord is run in a docker container. It spawns several processes. Its configuration is below:
[supervisord]
nodaemon = true
[group:maria-as]
programs=p1,p2
[program:p1]
priority = 1
command = bash ./launcher.sh
stopasgroup=true
killasgroup=true
autorestart=false
[program:p2]
priority = 1
command = bash ./launcher2.sh
stopasgroup=true
killasgroup=true
autorestart=false
[eventlistener:subprocess_stop]
events=PROCESS_STATE_EXITED,PROCESS_STATE_FATAL,PROCESS_STATE_STOPPED,PROCESS_STATE_BACKOFF
command=/kill.py
My goal is to kill supervisord together with all of its subprocesses once one of the subprocesses goes down, so an event listener was implemented which sends SIGKILL to supervisord.
The kill.py code is below:
#!/usr/bin/env python
import os
import signal
import sys
import time

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    while 1:
        write_stdout('READY\n')
        try:
            print time.time()
            os.kill(1, signal.SIGKILL)
            print time.time()
        except Exception as e:
            write_stderr('Could not kill supervisor: ' + e.strerror + '\n')
        write_stdout('RESULT 2\nOK')

if __name__ == '__main__':
    main()
But nothing happens once one of the subprocesses goes down.
What is more interesting is that running kill -9 1 inside the container has no effect either.
kill -9 -1 causes the container to go down, but it takes several seconds.
How can I send SIGKILL to supervisord so that the container stops immediately, and what is the reason for this behavior?
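Two details are worth knowing here. First, PID 1 is special: inside a PID namespace the kernel only delivers a signal to the init process if init has installed a handler for it, and SIGKILL/SIGSTOP sent from within the same namespace are ignored entirely, which is why kill -9 1 does nothing from inside the container. Second, a supervisord event listener is expected to follow the event protocol: after writing READY it must read the event header from stdin before acting, then acknowledge with a RESULT line. The sketch below is one protocol-respecting variant, assuming the standard 'key:value' header format and using SIGTERM, which supervisord does handle, instead of SIGKILL:

#!/usr/bin/env python
import os
import signal
import sys

def main():
    while True:
        sys.stdout.write('READY\n')
        sys.stdout.flush()
        # block until supervisord announces an event, e.g.
        # "ver:3.0 server:supervisor ... eventname:PROCESS_STATE_EXITED len:84"
        header = sys.stdin.readline()
        tokens = dict(t.split(':', 1) for t in header.split())
        sys.stdin.read(int(tokens.get('len', 0)))   # consume the event payload
        os.kill(1, signal.SIGTERM)                  # supervisord catches SIGTERM and shuts down
        sys.stdout.write('RESULT 2\nOK')
        sys.stdout.flush()

if __name__ == '__main__':
    main()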

python subprocess.Popen kill process with child processes

How do I send a Ctrl-C to process or kill process with child processes?
Example of my code (python 2.7):
# -*- coding: utf-8 -*-
import subprocess
import os
import signal
proc = subprocess.Popen(['ping localhost'],shell=True,stdout=subprocess.PIPE)
print proc.pid
a = raw_input()
os.killpg(proc.pid, signal.SIGTERM)
I see next processes when I run program:
user 16078 0.0 0.0 4476 916 pts/6 S+ 14:41 0:00 /bin/sh -c ping localhost
user 16079 0.0 0.0 8628 1908 pts/6 S+ 14:41 0:00 ping localhost
Program output:
16078
After raw_input:
Traceback (most recent call last):
File "subproc2.py", line 10, in <module>
os.killpg(proc.pid, signal.SIGTERM)
OSError: [Errno 3] No such process
I want to kill both processes, pid 16078 and pid 16079.
How would I do this, and what is the bug in the program? I appreciate the help.
How would I do this, and what is the bug in the program?
If you want to kill all processes in the process group, then you should use the parent's process id, like this:
os.killpg(os.getpid(), signal.SIGTERM)
If you want to kill only one child process, then use this:
os.kill(proc.pid, signal.SIGTERM)
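The 'No such process' error in the question happens because, with shell=True, the child is not made a process group leader, so proc.pid is not a valid process group id. A common fix, sketched below under the question's Python 2.7 setup, is to start the child in its own session (which also makes it a group leader) and then signal that group:

import os
import signal
import subprocess

# os.setsid makes the shell a session/group leader, so proc.pid is a valid group id
proc = subprocess.Popen('ping localhost', shell=True,
                        stdout=subprocess.PIPE,
                        preexec_fn=os.setsid)

raw_input()  # press Enter, then kill the shell and ping together
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)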

Terminate subprocess running in thread on program exit

Based on the accepted answer to this question: python-subprocess-callback-when-cmd-exits, I am running a subprocess in a separate thread, and after the subprocess completes a callable is executed. All good, but the problem is that even when the thread runs as a daemon, the subprocess continues to run after the program exits normally or is killed by kill -9, Ctrl + C, etc.
Below is a very simplified example (runs on 2.7):
import threading
import subprocess
import time
import sys

def on_exit(pid):
    print 'Process with pid %s ended' % pid

def popen_with_callback(cmd):
    def run_in_thread(command):
        proc = subprocess.Popen(
            command,
            shell=False
        )
        proc.wait()
        on_exit(proc.pid)
        return
    thread = threading.Thread(target=run_in_thread, args=([cmd]))
    thread.daemon = True
    thread.start()
    return thread

if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ])
    time.sleep(5)
    print 'program ended'
If the main thread lasts longer than the subprocess everything is fine:
(venv)~/Desktop|➤➤ python testing_threads.py 3
> 0
> 1
> 2
> Process with pid 26303 ended
> program ended
If the main thread lasts less than the subprocess, the subprocess continues to run until it eventually hangs:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
(venv)~/Desktop|➤➤ 5
> 6
> 7
# hanging from now on
How do I terminate the subprocess when the main program finishes or is killed? I tried to use atexit.register(os.kill(proc.pid, signal.SIGTERM)) just before proc.wait, but it actually executes when the thread running the subprocess exits, not when the main thread exits.
I was also thinking of polling for the parent pid, but I am not sure how to implement it because of the proc.wait situation.
Ideal outcome would be:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
> Process with pid 1234 ended
Use the Thread.join method, which blocks the main thread until the worker thread exits:
if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ]).join()
    print 'program ended'
Here is an ugly but effective method: keep a global reference to the proc returned by subprocess.Popen(), and then you can kill the proc whenever you like:
my_proc = None

def popen_with_callback(cmd):
    def run_in_thread(command):
        global my_proc
        proc = subprocess.Popen(command, shell=False)
        my_proc = proc
        proc.wait()
        on_exit(proc.pid)
        return
    ...
Then you can kill the proc wherever you want in the program. Just do:
my_proc.kill()
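One pitfall from the question is worth spelling out: atexit.register(os.kill(proc.pid, signal.SIGTERM)) calls os.kill immediately and registers its return value; the callable and its arguments must be passed separately. Combined with the global handle above, a sketch (atexit still will not run if the program dies from SIGKILL):

import atexit

def _cleanup():
    # kill the child only if it is still running
    if my_proc is not None and my_proc.poll() is None:
        my_proc.kill()

atexit.register(_cleanup)
# or equivalently, once the pid is known:
# atexit.register(os.kill, my_proc.pid, signal.SIGTERM)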

python - terminate child process when script invoked from bash

I have a python script: zombie.py
from multiprocessing import Process
from time import sleep
import atexit

def foo():
    while True:
        sleep(10)

@atexit.register
def stop_foo():
    p.terminate()
    p.join()

if __name__ == '__main__':
    p = Process(target=foo)
    p.start()
    while True:
        sleep(10)
When I run this with python zombie.py & and kill the parent process with kill -2, stop_foo() is correctly called and both processes terminate.
Now, suppose I have a bash script zombie.sh:
#!/bin/sh
python zombie.py &
echo "done"
And I run ./zombie.sh from the command line.
Now, stop_foo() never gets called when the parent gets killed. If I run kill -2 on the parent process, nothing happens. kill -15 or kill -9 both just kill the parent process, but not the child:
[foo@bar ~]$ ./zombie.sh
done
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill -2 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27221 1 0 17:57 pts/3 00:00:00 python zombie.py
What is going on here? How can I make sure the child process dies with the parent?
Neither atexit nor p.daemon = True will truly ensure that the child process dies with its parent. Receiving a SIGTERM will not trigger the atexit routines.
To make sure the child gets killed upon its parent's death, you will have to install a signal handler in the parent. This way you can react to most signals (SIGQUIT, SIGINT, SIGHUP, SIGTERM, ...) but not to SIGKILL; there is simply no way to react to that signal from within the process that receives it.
Install a signal handler for all useful signals, and in that handler kill the child process.
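A minimal sketch of that approach for zombie.py, with an illustrative choice of signals:

import signal
import sys
from multiprocessing import Process
from time import sleep

def foo():
    while True:
        sleep(10)

if __name__ == '__main__':
    p = Process(target=foo)
    p.start()

    def handler(signum, frame):
        p.terminate()   # kill the child ...
        p.join()        # ... reap it ...
        sys.exit(1)     # ... then exit the parent

    for sig in (signal.SIGINT, signal.SIGHUP, signal.SIGQUIT, signal.SIGTERM):
        signal.signal(sig, handler)

    while True:
        sleep(10)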
Update: This solution doesn't work for processes killed by a signal.
Your child process is not a zombie. It is alive.
If you want the child process to be killed when its parent exits normally then set p.daemon = True before p.start(). From the docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
Looking at the source code, it is clear that multiprocessing uses an atexit callback to kill its daemonic children, i.e., it won't work if the parent is killed by a signal. For example:
#!/usr/bin/env python
import logging
import os
import signal
import sys
from multiprocessing import Process, log_to_stderr
from threading import Timer
from time import sleep

def foo():
    while True:
        sleep(1)

if __name__ == '__main__':
    log_to_stderr().setLevel(logging.DEBUG)
    p = Process(target=foo)
    p.daemon = True
    p.start()
    # either kill itself or exit normally in 5 seconds
    if '--kill' in sys.argv:
        Timer(5, os.kill, [os.getpid(), signal.SIGTERM]).start()
    else:  # exit normally
        sleep(5)
Output
$ python kill-orphan.py
[INFO/Process-1] child process calling self.run()
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[INFO/MainProcess] calling terminate() for daemon Process-1
[INFO/MainProcess] calling join() for process Process-1
[DEBUG/MainProcess] running the remaining "atexit" finalizers
Notice "calling terminate() for daemon" line.
Output (with --kill)
$ python kill-orphan.py --kill
[INFO/Process-1] child process calling self.run()
The log shows that if the parent is killed by a signal then the "atexit" callback is not called (and ps shows that the child is alive in this case). See also Multiprocess Daemon Not Terminating on Parent Exit.
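For the killed-by-a-signal case, one Linux-specific workaround (a sketch, not portable) is to have the child ask the kernel to signal it when its parent dies, via prctl(PR_SET_PDEATHSIG):

import ctypes
import signal
from multiprocessing import Process
from time import sleep

PR_SET_PDEATHSIG = 1  # constant from <sys/prctl.h>

def foo():
    # ask the kernel to send this child SIGTERM when its parent dies,
    # even if the parent was killed by a signal
    libc = ctypes.CDLL('libc.so.6', use_errno=True)
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGTERM)
    while True:
        sleep(1)

if __name__ == '__main__':
    p = Process(target=foo)
    p.start()
    p.join()   # kill the parent (even with SIGKILL) and the child gets SIGTERM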

Starting a separate process

I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running.
But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
From the Python docs:
When a process exits, it attempts to
terminate all of its daemonic child
processes.
This is the expected behavior.
If you are on a unix system, you could use os.fork:
import os
import time

pid = os.fork()
if pid:
    # parent
    while True:
        print("I'm the parent")
        time.sleep(0.5)
else:
    # child
    while True:
        print("I'm just a child")
        time.sleep(0.5)
Running this creates two processes. You can kill the parent without killing the child.
For example, when you run script you'll see something like:
% script.py
I'm the parent
I'm just a child
I'm the parent
I'm just a child
...
Stop the script with ctrl-Z:
^Z
[1]+ Stopped script.py
Find the process ID number for the parent. It will be the smaller of the two process ID numbers since the parent came first:
% ps axuw | grep script.py
unutbu 6826 0.1 0.1 33792 6388 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6827 0.0 0.1 33792 4352 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6832 0.0 0.0 17472 952 pts/24 S+ 15:09 0:00 grep --color=auto script.py
Kill the parent process:
% kill 6826
Restore script.py to the foreground:
% fg
script.py
Terminated
You'll see the child process is still running:
% I'm just a child
I'm just a child
I'm just a child
...
Kill the child (in a new terminal) with
% kill 6827
Simply use the subprocess module:
import subprocess
subprocess.Popen(["sleep", "60"])
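By default the Popen child survives the parent's exit (it gets reparented), but it still shares the parent's session and controlling terminal, so a Ctrl-C or a closing terminal can still take it down. On Python 3 one way to detach it fully is the start_new_session flag, which runs setsid() in the child:

import subprocess

# run the child in its own session so terminal signals no longer reach it
subprocess.Popen(["sleep", "60"], start_new_session=True)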
Here is a related question on SO, where one of the answers gives a nice solution to this problem:
"spawning process from python"
If you are on a unix system (following the docs):
#!/usr/bin/env python3
import os
import sys
import subprocess
import multiprocessing

def to_use_in_separate_process(*args):
    print(args)
    # check args before using them:
    if len(args) > 1:
        subprocess.call((args[0], args[1]))
    print('subprocess called')

def main(apathtofile):
    print('checking os')
    if os.name == 'posix':
        print('os is posix')
        ctx = multiprocessing.get_context('fork')
        p = ctx.Process(target=to_use_in_separate_process,
                        args=('xdg-open', apathtofile))
        p.start()
    print('exiting def main')

if __name__ == '__main__':
    # parameter [1] must be some file that can be opened by xdg-open, which
    # this program uses.
    if len(sys.argv) > 1:
        main(sys.argv[1])
        print('we can exit now.')
    else:
        print('no parameters...')
    print('mother program will end now!')
    sys.exit(0)
In Ubuntu, the following shell command keeps running even after the Python app exits.
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
