Run subprocess in thread and stop it - python

I am trying to run one Python file from another Python file, in a thread, using subprocess. I am able to start the file, but I am not able to stop it.
What I want is to run test.py from my main.py in a thread and stop it by entering stop in the console.
test.py
import time

while True:
    print("Hello from test.py")
    time.sleep(5)
main.py
import subprocess
import _thread

processes = []

def add_process(file_name):
    p = subprocess.Popen(["python", file_name], shell=True)
    processes.append(p)
    print("Waiting the process...")
    p.wait()

while True:
    try:
        # user input
        ui = input("> ")
        if ui == "start":
            _thread.start_new_thread(add_process, ("test.py",))
            print("Process started.")
        elif ui == "stop":
            if len(processes) > 0:
                processes[-1].kill()
                print("Process killed.")
            else:
                pass
    except KeyboardInterrupt:
        print("Exiting Program!")
        break
Output
C:\users\me>python main2.py
> start
Process started.
> Waiting the process...
Hello from test.py, after 0 seconds
Hello from test.py, after 4 seconds
> stop
Process killed.
> Hello from test.py, after 8 seconds
Hello from test.py, after 12 seconds
> stopHello from test.py, after 16 seconds
Process killed.
> Hello from test.py, after 20 seconds
>
The program is still running even after I stop it with the kill function; I have tried terminate as well. How can I stop it? Or is there an alternative to the subprocess module that can do this?

I suspect you have just started multiple processes. What happens if you replace your stop code with this:
for proc in processes:
    proc.kill()
This should kill them all.
Note that you are on Windows, and on Windows terminate() is an alias for kill(). It is not a good idea to rely on killing processes gracelessly anyway; if you need to stop your test.py, you would be better off having some means of communicating with it so it can exit gracefully.
Incidentally, you don't want shell=True, and you're better off with threading than with _thread.
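For illustration, here is one way those suggestions might be combined. This is only a sketch, not code from the question: it drops shell=True so kill() reaches the Python process directly, uses threading instead of _thread, and kills every tracked process on stop. It assumes test.py is in the current directory.

import subprocess
import threading

processes = []

def add_process(file_name):
    # No shell=True: Popen starts python directly, so kill() hits the
    # actual Python process rather than an intermediate shell.
    p = subprocess.Popen(["python", file_name])
    processes.append(p)
    print("Waiting the process...")
    p.wait()

while True:
    try:
        ui = input("> ")
        if ui == "start":
            threading.Thread(target=add_process, args=("test.py",), daemon=True).start()
            print("Process started.")
        elif ui == "stop":
            # Kill every process started so far, as suggested above.
            for proc in processes:
                proc.kill()
            processes.clear()
            print("Processes killed.")
    except (KeyboardInterrupt, EOFError):
        print("Exiting Program!")
        break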

Related

python subprocess send stdout on the fly when running

Is there a way for child.py to send stdout "on the fly" while it is running?
Or does main.py need to wait for child.py to terminate?
With these scripts, main.py has to wait 5 seconds before it starts printing the lines.
I want process.stdout.readline() to get the latest print from child.py while child.py is still running.
main.py
import subprocess
import time

process = subprocess.Popen(["./child.py"], stdout=subprocess.PIPE)
i = 1
while i < 5:
    print(process.stdout.readline())  # to print, child.py needs to terminate before
    time.sleep(1)
    i += 1
child.py
#!/usr/bin/env python3
# coding=utf-8
import sys
import time

def run():
    i = 1
    while i < 5:
        time.sleep(1)
        print(f'ok {i}')
        i += 1

if __name__ == "__main__":
    run()
In child.py you wrote this:
print(f'ok {i}')
Replace it with this:
print(f'ok {i}', flush=True)
When testing interactively, isatty() returns True, so child.py defaults to unbuffered behavior and each line of output appears immediately. When running as a subprocess connected to a pipe, stdout defaults to buffered behavior. Use a flush to defeat this.
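As an alternative that does not touch child.py at all (not part of the original answer), the parent can force the child interpreter into unbuffered mode with the -u flag, which has the same effect for every print in the child. A small sketch:

import subprocess

# -u forces unbuffered stdout/stderr in the child interpreter, so
# readline()/iteration sees each line as soon as child.py prints it.
process = subprocess.Popen(["python3", "-u", "child.py"],
                           stdout=subprocess.PIPE, text=True)
for line in process.stdout:
    print(line, end="")
process.wait()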

how to get rid of warning `RuntimeWarning: A loop is being detached from a child watcher with pending handlers`

If I use asyncio to spawn a subprocess which runs another Python script, there is a warning at the end: RuntimeWarning: A loop is being detached from a child watcher with pending handlers, if the subprocess is terminated with terminate().
For example, a very simple dummy.py:
import datetime
import time
import os

if __name__ == '__main__':
    for i in range(3):
        msg = 'pid({}) {}: continue'.format(os.getpid(), datetime.datetime.now())
        print(msg)
        time.sleep(1.0)
Then I spawn the dummy:
import asyncio
from asyncio import subprocess

T = 3  # if T=5, which allows the subprocess to finish, then there is no such warning.

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec(
            'python3', 'dummy.py',
            # 'dummy.sh'
        )
        await asyncio.sleep(T)
    finally:
        if p and p.returncode is None:
            p.terminate()
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())
The warning appears if T is short enough that p.terminate() actually gets called.
If I spawn a bash script instead, there is no warning:
#!/usr/bin/env bash
set -e

N=10
T=1
for i in $(seq 1 $N)
do
    echo "loop=$i/$N, sleep $T"
    >&2 echo "msg in stderror: ($i/$N,$T)"
    sleep $T
done
What did I do wrong here?
To get rid of this warning, you need to await p.wait() after p.terminate().
The reason you need to do this is that p.terminate() only sends signal.SIGTERM to the process. The process might still do some cleanup work after receiving the signal. If you don't wait for the process to finish terminating, your script finishes before the process and the process is cut off prematurely.
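Applied to the snippet above, the fix might look roughly like this (a sketch reusing the question's dummy.py, not code taken verbatim from the answer):

import asyncio
from asyncio import subprocess

T = 3

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec('python3', 'dummy.py')
        await asyncio.sleep(T)
    finally:
        if p and p.returncode is None:
            p.terminate()
            # Wait for the child to actually exit so the loop can reap it
            # before shutting down; this is what removes the warning.
            await p.wait()
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())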

nested subprocess not stopping

I have 3 files:
sleeper.py
import subprocess
print('start sleeper')
subprocess.run(['sleep', '10'])
print('end sleeper')
waker.py
import subprocess

print('The waker begins')
try:
    subprocess.run(['python3', 'sleeper.py'], timeout=5)
except subprocess.TimeoutExpired:
    pass
print('The waker ends')
awake.py
import subprocess

print('Awake begin')
try:
    subprocess.run(['python3', 'waker.py'], timeout=2.5)
except subprocess.TimeoutExpired:
    pass
print('Awake end')
and I run python3 awake.py, but get the following output:
Awake begin
The waker begins
start sleeper
Awake end
end sleeper
To be more accurate, I get the first 3 lines printed immediately, then 2.5 seconds later the 4th line prints and I get my bash prompt back, then 7.5 seconds later end sleeper appears at my bash prompt.
How do I make it so that killing a subprocess via timeout also kills the subprocesses started by that subprocess?
run should terminate the child process when the timeout expires. Does it terminate children as well? It doesn't seem so in your case. A workaround would be to use Popen, poll for the timeout, and kill the process and its children yourself.
It seems you cannot have it both ways, i.e. use run and be sure that all subprocesses are terminated: when you get a TimeoutExpired the process is already killed, so you have lost track of its children.
import time
import subprocess
import psutil  # third-party package: pip install psutil

# args, errFile, outFile are whatever you pass in your own call
proc = subprocess.Popen(args, stderr=errFile, stdout=outFile, universal_newlines=False)
wait_remaining_sec = 2.5
while proc.poll() is None and wait_remaining_sec > 0:
    time.sleep(0.5)
    wait_remaining_sec -= 0.5
if proc.poll() is None:
    # process is still there: terminate it and its subprocesses
    parent = psutil.Process(proc.pid)
    for child in parent.children(recursive=True):
        child.kill()
    parent.kill()
The poll loop is better than a bare time.sleep(2.5) call because if the process ends before the timeout, you don't want to wait the full 2.5 seconds; this way there is at most a 0.5 s delay after the process ends.
References:
Using module 'subprocess' with timeout
how to kill process and child processes from python?
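Another option, not mentioned in the answer above, is to put the child into its own process group and kill the whole group. This avoids the psutil dependency, but it is POSIX-only and assumes the grandchildren do not change their process group themselves. A sketch under those assumptions:

import os
import signal
import subprocess

# start_new_session=True makes the child the leader of a new process group,
# so killpg() also reaches sleeper.py and the 'sleep 10' it spawned.
proc = subprocess.Popen(['python3', 'waker.py'], start_new_session=True)
try:
    proc.wait(timeout=2.5)
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()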

Terminate subprocess running in thread on program exit

Based on the accepted answer to this question: python-subprocess-callback-when-cmd-exits, I am running a subprocess in a separate thread and after the completion of the subprocess a callable is executed. All good, but the problem is that even though the thread runs as a daemon, the subprocess continues to run after the program exits normally or is killed by kill -9, Ctrl + C, etc.
Below is a very simplified example (runs on 2.7):
import threading
import subprocess
import time
import sys

def on_exit(pid):
    print 'Process with pid %s ended' % pid

def popen_with_callback(cmd):
    def run_in_thread(command):
        proc = subprocess.Popen(
            command,
            shell=False
        )
        proc.wait()
        on_exit(proc.pid)
        return
    thread = threading.Thread(target=run_in_thread, args=([cmd]))
    thread.daemon = True
    thread.start()
    return thread

if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ])
    time.sleep(5)
    print 'program ended'
If the main thread lasts longer than the subprocess everything is fine:
(venv)~/Desktop|➤➤ python testing_threads.py 3
> 0
> 1
> 2
> Process with pid 26303 ended
> program ended
If the main thread lasts less than the subprocess, the subprocess continues to run until it eventually hangs:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
(venv)~/Desktop|➤➤ 5
> 6
> 7
# hanging from now on
How to terminate the subprocess if the main program is finished or killed? I tried to use atexit.register(os.kill(proc.pid, signal.SIGTERM)) just before proc.wait but it actually executes when the thread running the subprocess exits, not when the main thread exits.
I was also thinking of polling for the parent pid, but I am not sure how to implement it because of the proc.wait situation.
Ideal outcome would be:
(venv)~/Desktop|➤➤ python testing_threads.py 8
> 0
> 1
> 2
> 3
> 4
> program ended
> Process with pid 1234 ended
Use the Thread.join method, which blocks the main thread until that thread exits:
if __name__ == '__main__':
    popen_with_callback(
        [
            "bash",
            "-c",
            "for ((i=0;i<%s;i=i+1)); do echo $i; sleep 1; done" % sys.argv[1]
        ]).join()
    print 'program ended'
Here is an ugly but effective method: store the handle from proc = subprocess.Popen() in a global variable, and you can kill the proc whenever you like:
my_proc = None

def popen_with_callback(cmd):
    def run_in_thread(command):
        global my_proc
        proc = subprocess.Popen(command, shell=False)
        my_proc = proc
        proc.wait()
        on_exit(proc.pid)
        return
    ...
Then you can kill the proc wherever you want in the program; just do:
my_proc.kill()
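For the original question (killing the subprocess when the main program exits), one approach the asker was close to is registering the cleanup as a callable with atexit rather than calling os.kill immediately. This is a sketch, not from either answer; it covers a normal exit (including Ctrl + C), but not kill -9, which bypasses atexit entirely:

import atexit
import subprocess
import threading

def popen_with_callback(cmd):
    def run_in_thread(command):
        proc = subprocess.Popen(command, shell=False)
        # Register a callable; atexit runs it at interpreter shutdown,
        # killing the child only if it is still alive.
        atexit.register(lambda p=proc: p.poll() is None and p.kill())
        proc.wait()
    thread = threading.Thread(target=run_in_thread, args=(cmd,))
    thread.daemon = True
    thread.start()
    return thread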

.kill() in python subprocess kills parent rather than child process

I'm attempting to start a second Python script as a child process in the function below. It seems to start the process fine, but when I attempt to end the process, the parent process terminates and the child process persists. Any recommendations on why this may be happening?
def thermostat(input):
    global ThermostatRunning
    global proc
    print("Thermostat Function input: %s" % input)
    if input == 'stop' and ThermostatRunning == 1:
        print("test")
        proc.kill()
        print proc.poll()
        dev2(0)  # ensure heater is off
        return('Thermostat turned off')
    elif input == 'stop' and ThermostatRunning == 0:
        status = "Thermostat already stopped"
        print(status)
        return(status)
    if input.isdigit() == 0:
        return("Thermostat input is not a number or -stop-")
    if ThermostatRunning == 1:
        print("test2")
        proc.kill()
        print("test3")
    proc = subprocess.Popen('python thermostat.py -t %s' % input, shell=True)  # , preexec_fn=os.setsid)
    ThermostatRunning = 1
    # for line in proc.stdout.readlines():
    #     print(line)
    status = "Thermostat started with set temperature: %s" % input
    print(status)
    return(status)
The only other issue that may be pertinent is that this is a flask script. I'm not sure if that changes anything.
When you create the subprocess with shell=True, you actually spawn a process that spawns another subprocess, so when you call proc.kill(), you're only killing the parent process. You want to make your subprocess the process group leader so you can kill them all at once.
Uncomment the setsid in the Popen call and kill the whole process group like so:
os.killpg(proc.pid, signal.SIGTERM)
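Put together, the start and stop parts might look roughly like this. This is a sketch based on the answer, assuming a POSIX system (preexec_fn is not available on Windows) and a hypothetical setpoint value in place of the function's input:

import os
import signal
import subprocess

setpoint = 70  # hypothetical temperature input

# Start: preexec_fn=os.setsid makes the spawned shell the leader of a new
# process group, so thermostat.py ends up in the same group as the shell.
proc = subprocess.Popen('python thermostat.py -t %s' % setpoint,
                        shell=True, preexec_fn=os.setsid)

# Stop: signal the whole process group instead of just the shell.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)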
