.kill() in Python subprocess kills parent rather than child process

I'm attempting to launch a second Python script as a child process in the function below. It seems to open the process fine, but when I attempt to end the process, the parent process terminates and the child process persists. Any recommendations on why this may be happening?
def thermostat(input):
    global ThermostatRunning
    global proc
    print("Thermostat Function input: %s" % input)
    if input == 'stop' and ThermostatRunning == 1:
        print("test")
        proc.kill()
        print(proc.poll())
        dev2(0)  # ensure heater is off
        return('Thermostat turned off')
    elif input == 'stop' and ThermostatRunning == 0:
        status = "Thermostat already stopped"
        print(status)
        return(status)
    if not input.isdigit():
        return("Thermostat input is not a number or -stop-")
    if ThermostatRunning == 1:
        print("test2")
        proc.kill()
        print("test3")
    proc = subprocess.Popen('python thermostat.py -t %s' % input, shell=True)  # , preexec_fn=os.setsid)
    ThermostatRunning = 1
    #for line in proc.stdout.readlines():
    #    print(line)
    status = "Thermostat started with set temperature: %s" % input
    print(status)
    return(status)
The only other issue that may be pertinent is that this is a Flask script. I'm not sure if that changes anything.

When you create the subprocess with shell=True, you actually spawn a shell that in turn spawns your script, so when you call proc.kill(), you're only killing the shell while the grandchild Python process keeps running. You want to make your subprocess a process group leader so you can kill them all at once.
Uncomment the preexec_fn=os.setsid in the Popen call and kill the whole process group like so:
os.killpg(proc.pid, signal.SIGTERM)
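A minimal sketch of that fix, reusing the Popen call from the question (input is its set-temperature argument):
import os
import signal
import subprocess

# start the shell (and the python child it spawns) in a new process group
proc = subprocess.Popen('python thermostat.py -t %s' % input,
                        shell=True, preexec_fn=os.setsid)

# later: signal the whole group, so both the shell and thermostat.py die
os.killpg(proc.pid, signal.SIGTERM)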

How to run & stop python script from another python script?

I want code like this:
if True:
    run('ABC.PY')
else:
    if ScriptRunning('ABC.PY'):
        stop('ABC.PY')
    run('ABC.PY')
Basically, I want to run a file, say abc.py, and based on some condition stop it and run it again from another Python script. Is that possible?
I am using Windows.
You can use Python Popen objects for running scripts as child processes.
So run('ABC.PY') would be p = Popen(["python", "ABC.PY"])
if ScriptRunning('ABC.PY') would be if p.poll() is None
stop('ABC.PY') would be p.kill()
This is a very basic example of what you are trying to achieve.
Please check out the subprocess.Popen docs to fine-tune the logic for running your script:
import subprocess
import shlex
import time

def run(script):
    scriptArgs = shlex.split(script)
    commandArgs = ["python"]
    commandArgs.extend(scriptArgs)
    procHandle = subprocess.Popen(commandArgs, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return procHandle

def isScriptRunning(procHandle):
    return procHandle.poll() is None

def stopScript(procHandle):
    procHandle.terminate()
    time.sleep(5)
    # Forcefully terminate the script if it is still alive
    if isScriptRunning(procHandle):
        procHandle.kill()

def getOutput(procHandle):
    # stderr is redirected to stdout due to the stderr=subprocess.STDOUT argument in the Popen call
    stdout, _ = procHandle.communicate()
    returncode = procHandle.returncode
    return returncode, stdout

def main():
    procHandle = run("main.py --arg 123")
    time.sleep(5)
    isScriptRunning(procHandle)
    stopScript(procHandle)
    print(getOutput(procHandle))

if __name__ == "__main__":
    main()
One thing you should be aware of is stdout=subprocess.PIPE.
If your python script produces a lot of output, the pipe buffer can fill up and block the child until .communicate() is called on the handle.
To avoid this, pass a file handle as stdout, like this:
fileHandle = open("main_output.txt", "w")
subprocess.Popen(..., stdout=fileHandle)
This way, the output of the python process is dumped into the file. (You will have to modify the getOutput() function for this too; see the sketch below.)
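For example, getOutput() might become something like this (a sketch, assuming the main_output.txt name from above and that the caller lets the process finish):
def getOutput(procHandle, outputPath="main_output.txt"):
    # wait for the process to exit so the output file is complete
    returncode = procHandle.wait()
    with open(outputPath) as f:
        stdout = f.read()
    return returncode, stdout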
import subprocess

process = None

def run_or_rerun(flag):
    global process
    if flag:
        assert process is None
        process = subprocess.Popen(['python', 'ABC.PY'])
        process.wait()  # must wait or caller will hang
    else:
        if process.poll() is None:  # it is still running
            process.terminate()  # terminate process
        process = subprocess.Popen(['python', 'ABC.PY'])  # rerun
        process.wait()  # must wait or caller will hang

Create python loop as a "detached" child process

I have a potentially infinite python 'while' loop that I would like to keep running even after the main script/process execution has completed. Furthermore, I would like to be able to later kill this loop from a unix CLI if needed (i.e. kill -SIGTERM PID), so I will need the pid of the loop as well. How would I accomplish this? Thanks!
The loop:
import subprocess
import time

args = 'ping -c 1 1.2.3.4'
while True:
    time.sleep(60)
    proc = subprocess.Popen(args, shell=True, stdout=subprocess.PIPE)
    proc.communicate()  # wait for ping to finish and drain its output
    if proc.returncode == 0:
        break
In Python, parent processes attempt to terminate their daemonic child processes (e.g. multiprocessing daemons) when they exit. However, you can use os.fork() to create a completely new process:
import os

pid = os.fork()
if pid:
    # parent
    print("Parent!")
else:
    # child
    print("Child!")
Popen returns an object which has the pid. According to the docs:
Popen.pid
The process ID of the child process.
Note that if you set the shell argument to True, this is the process ID of the spawned shell.
You would need to turn off shell=True to get the pid of the actual process; otherwise it gives you the pid of the shell.
import shlex
import subprocess
import time

args = shlex.split('ping -c 1 1.2.3.4')  # with shell=False the command must be a list
while True:
    time.sleep(60)
    with subprocess.Popen(args, shell=False, stdout=subprocess.PIPE) as proc:
        print('PID: {}'.format(proc.pid))
        ...

os.fork: exit the script if the child fails to run a command

I am a novice in Python trying to use multiprocessing with fork. What I want to do is run a command on a few hosts. I am able to do this with the code below, but I also want to stop execution if any of the children fails to run the command or the command itself fails.
import os
import sys
from time import sleep

def runCommand(host, comp):
    if os.system("ssh " + host + " 'somecommand'") != 0:
        print "somecommand failed on " + host + " for " + comp
        sys.exit(-1)

def runMulti():
    children = []
    # conHosts is a {component: host} mapping defined elsewhere in the script
    for comp, host in conHosts.iteritems():
        pid = os.fork()
        if pid:
            children.append(pid)
        else:
            sleep(5)
            runCommand(host, comp)
            os._exit(0)
    for i, child in enumerate(children):
        os.waitpid(child, 0)
os.fork() returns 0 in the child process. So you can do:
if not os.fork():
    # we now know we're the child process
    execute_the_work()
    if failed:
        sys.exit()
sys.exit() is the Pythonic way to exit a Python program. Don't forget to import sys.
Since you seem to be a beginner: replace failed with the condition that decides whether the task failed; a concrete sketch follows below.
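A concrete version of the sketch, assuming the ssh command from the question (host comes from the surrounding loop):
import os
import sys

if not os.fork():
    # child process: run the command and report failure via the exit status
    failed = os.system("ssh " + host + " 'somecommand'") != 0
    sys.exit(1 if failed else 0)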
You can just check the return value of waitpid and see if the child process exited with a status different from 0:
had_error = any(os.waitpid(child, 0)[1] for child in children)
if had_error:
    sys.exit(1)
Note: since you are checking the return value of os.fork, the list children will be empty in the child processes, so any will always return False, i.e. only the master process will eventually call sys.exit.
I have achieved this by using ThreadPool.
import os
import commands
from multiprocessing.pool import ThreadPool

pool = ThreadPool(len(hosts))
try:
    # run the ssh command once per (component, host) pair, one thread each
    pool.map(lambda item: runCommand(item[1], item[0]), conHosts.items())
    pool.close()
    pool.join()
except:
    os.system('touch /tmp/failed')
    commands.getoutput("killall -q ssh")
    os.kill(os.getpid(), 9)
I create a temp file when a thread in the pool exits with a nonzero status. Thank you all :)

subprocess.popen detached from master (Linux)

I am trying to open a subprocess but have it be detached from the parent script that called it. Right now if I call subprocess.Popen and the parent script crashes, the subprocess dies as well.
I know there are a couple of options for Windows, but I have not found anything for *nix.
I also don't need to call this using subprocess. All I need is to be able to call another process detached and get its pid.
With Linux, it's no issue at all. Just Popen(). For example, here is a little dying_demon.py:
#!/usr/bin/python -u
from time import sleep
from subprocess import Popen

print Popen(["python", "-u", "child.py"]).pid

i = 0
while True:
    i += 1
    print "demon: %d" % i
    sleep(1)
    if i == 3:
        i = hurz  # exception
spinning off a child.py
#!/usr/bin/python -u
from time import sleep

i = 0
while True:
    i += 1
    print "child: %d" % i
    sleep(1)
    if i == 20:
        break
The child continues to count (to the console) while the demon dies from its exception.
I think this should do the trick: https://www.python.org/dev/peps/pep-3143/#reference-implementation
You can create a daemon which will call your subprocess, passing detach_process=True.
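A minimal sketch using the python-daemon package (the PEP 3143 reference implementation); worker.py is a placeholder for your own script:
import subprocess

import daemon

# detach_process=True forces the daemonizing double fork even when
# the script is started from a terminal
with daemon.DaemonContext(detach_process=True):
    subprocess.call(["python", "worker.py"])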
This might do what you want:
import os
import shlex
import subprocess

def cmd_detach(*command, **kwargs) -> subprocess.CompletedProcess:
    # https://stackoverflow.com/questions/62521658/python-subprocess-detach-a-process
    # if using with ffmpeg remember to run it with `-nostdin`
    stdout = os.open(os.devnull, os.O_WRONLY)
    stderr = os.open(os.devnull, os.O_WRONLY)
    stdin = os.open(os.devnull, os.O_RDONLY)
    command = conform(command)  # conform() is the author's helper that flattens the argument list
    if command[0] in ["fish", "bash"]:
        command = command[0:2] + [shlex.join(command[2:])]
    subprocess.Popen(command, stdin=stdin, stdout=stdout, stderr=stderr,
                     close_fds=True, start_new_session=True, **kwargs)
    return subprocess.CompletedProcess(command, 0, "Detached command is async")
On Windows you might need
CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008
creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP
instead of start_new_session=True
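Roughly like this (a sketch; command stands for the argument list built above, and recent Pythons also expose these constants as subprocess.DETACHED_PROCESS and subprocess.CREATE_NEW_PROCESS_GROUP):
import subprocess

CREATE_NEW_PROCESS_GROUP = 0x00000200
DETACHED_PROCESS = 0x00000008

# detach the child from our console and process group on Windows
subprocess.Popen(command,
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL,
                 close_fds=True,
                 creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)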
I managed to get it working by doing the following using python-daemon:
import subprocess
import time

process = subprocess.Popen(["python", "-u", "Child.py"])
time.sleep(2)
process.kill()
Then in Child.py:
import time

import daemon

with daemon.DaemonContext():
    print("Child Started")
    time.sleep(30)
    print("Done")
    exit()
I do process.kill() because otherwise it leaves a defunct python process behind. The main problem I have now is that the PID that Popen returns does not match the final PID of the process, because daemonizing forks again. I can get around this by adding a function in Child.py that updates a database with its own PID.
Let me know if there is something that I am missing or if this is an ok method of doing this.
Fork the subprocess under nohup so it ignores SIGHUP and survives the parent.
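A sketch of that approach; child.py is a placeholder. nohup makes the child immune to the hangup signal, so it keeps running after the parent (or its terminal) goes away:
import subprocess

with open('nohup.out', 'w') as log:
    proc = subprocess.Popen(['nohup', 'python', 'child.py'],
                            stdout=log, stderr=subprocess.STDOUT)
print(proc.pid)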

How to start a child process and use it as a server in Python?

I need to start a Python script in Python and keep it up.
For argument's sake, say that there is a program called slave.py:
if __name__ == '__main__':
    done = False
    while not done:
        line = raw_input()
        print line
        if line.lower() == 'quit' or line.lower() == 'q':
            done = True
            break
        stringLen = len(line)
        print "len: %d " % stringLen
The program "slave.py" receives a string, calculates the input length of the string
and outputs the length to stdout with a print statement.
It should run until I give it a "quit" or "q" as an input.
Meanwhile, in another program called "master.py", I will invoke "slave.py"
# Master.py
import subprocess

if __name__ == '__main__':
    # Start a subprocess of "slave.py"
    slave = subprocess.Popen('python slave.py', shell=True,
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    x = "Hello world!"
    (stdout, stderr) = slave.communicate(x)
    # This works - returns 12
    print "stdout: ", stdout

    x = "name is"
    # The code bombs here with a 'ValueError: I/O operation on closed file'
    (stdout, stderr) = slave.communicate(x)
    print "stdout: ", stdout
However, the slave.py process opened with Popen() accepts only one communicate() call, since communicate() closes the pipes and waits for the process to exit. It ends after that one call.
For this example, I would like to have slave.py keep running, as a server in a client-server model, until it receives a "quit" or "q" string via communicate. How would I do that with the subprocess.Popen() call?
If each input line produces a known number of output lines then you could:
import sys
from subprocess import Popen, PIPE

p = Popen([sys.executable, '-u', 'slave.py'], stdin=PIPE, stdout=PIPE)

def send(input):
    print >>p.stdin, input
    print p.stdout.readline(),  # print the echoed input
    response = p.stdout.readline()
    if response:
        print response,  # or just return it
    else:  # EOF
        p.stdout.close()

send("hello world")
# ...
send("name is")
send("q")
p.stdin.close()  # nothing more to send
print 'waiting'
p.wait()
print 'done'
Otherwise you might need threads to read the output asynchronously.
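A sketch of that threaded variant, in the same Python 2 style as above: a background thread prints slave.py's output as it arrives, so the writer never blocks on readline().
import sys
import threading
from subprocess import Popen, PIPE

p = Popen([sys.executable, '-u', 'slave.py'], stdin=PIPE, stdout=PIPE)

def reader():
    # echo every output line until slave.py closes its stdout
    for line in iter(p.stdout.readline, ''):
        print line,

t = threading.Thread(target=reader)
t.daemon = True
t.start()

print >>p.stdin, "hello world"
print >>p.stdin, "q"
p.stdin.close()
p.wait()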
If you intend to keep the slave alive over the parent's life cycle, you can daemonize it:
http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/
Alternatively you could look at the multiprocessing API:
http://docs.python.org/library/multiprocessing.html
... which allows thread-like processing across different child processes.
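For comparison, a minimal multiprocessing sketch of the same client-server idea (same Python 2 style; Queue objects replace the stdin/stdout pipes):
from multiprocessing import Process, Queue

def slave(q_in, q_out):
    # serve length requests until a quit message arrives
    while True:
        line = q_in.get()
        if line.lower() in ('q', 'quit'):
            break
        q_out.put(len(line))

if __name__ == '__main__':
    q_in, q_out = Queue(), Queue()
    worker = Process(target=slave, args=(q_in, q_out))
    worker.start()
    q_in.put("Hello world!")
    print q_out.get()  # 12
    q_in.put("q")
    worker.join()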
