start a service with popen : command not stopping - python

I'm trying to write a backup script in Python that starts and stops a service with Popen...
Stopping the service works, but unfortunately starting it, while it does work, blocks the rest of the execution: the script just stays there. Why?
It seems to be somehow linked to the httpd service... :-(
The "program" config element looks like "service;httpd;start" or "/etc/init.d/myprog;start".
class execute(actions):
    def __init__(self, config, section, logger):
        self.name = "execute"
        actions.__init__(self, config, section, logger)

    def process(self):
        try:
            program = self.config.get(self.section, "program").split(";")
            self.logger.debug("program=%s" % program)
            p = subprocess.Popen(program, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            stdout, stderr = p.communicate()
            if stdout:
                self.logger.info(stdout)
            if stderr:
                self.logger.error(stderr)
            return p.returncode
        except Exception:
            self.logger.exception(Exception)

You have to open stdin as a pipe as well, and then close it (if you use read() and write() instead of communicate()).
p = subprocess.Popen(..., stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.stdin.close()
print "Stdout:", p.stdout.read()
print "Stderr:", p.stderr.read()
If that doesn't work, and you don't really need any checks, just close all the pipes right after the call to Popen, which lets the program run detached from the pipes.
Warning: this will make the program run like a daemon if it doesn't terminate on its own.
After doing this you may call wait() to see whether it blocks as well, and use the exit code to check for errors.
There aren't many of them: just whether the service started or not. Sometimes it even reports that the service is running although the service has crashed.
To check whether the service script is still running, without blocking, use:
if p.poll() is None: print "Still running"
Otherwise, poll() returns the exit code.
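Putting those two ideas together, here is a minimal sketch of the close-the-pipes-then-poll approach; the /etc/init.d/myprog path is only a placeholder for your init script:
import subprocess
import time

# Start the service; the path is just a placeholder.
p = subprocess.Popen(["/etc/init.d/myprog", "start"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)

# Detach from the pipes so a daemonized child that inherited them
# cannot keep us waiting for EOF.
p.stdin.close()
p.stdout.close()
p.stderr.close()

# Poll instead of wait() so nothing blocks.
while p.poll() is None:
    time.sleep(0.5)

print("exit code: %s" % p.returncode)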
This works neatly for starting and stopping a service:
from subprocess import Popen, PIPE
service = "brltty"
p = Popen(["service", service, "start"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
# Note: passing a sequence implies shell=False
stdout, stderr = p.communicate()
print "Stdout:", stdout
print "Stderr:", stderr
Don't forget to change start to stop :D :D :D

The call to p.communicate() waits for the process to terminate.
Refer to: subprocess documentation
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
The optional input argument should be a string to be sent to the child
process, or None, if no data should be sent to the child.
You can try to use p.poll() instead. This method doesn't wait for a process to terminate.
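For example, a minimal non-blocking sketch along those lines, assuming the same "service httpd start" command from the question:
import subprocess
import time

# Sketch only: poll() returns None while the child is still running and
# its exit code once it has finished. Assumes the init script prints
# little enough output that the pipes don't fill up.
p = subprocess.Popen(["service", "httpd", "start"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

while p.poll() is None:
    time.sleep(0.5)          # do other work here instead of blocking

print("service script exited with %s" % p.returncode)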

Related

Interact with python using subprocess

I'm trying to interact with the Python interpreter using the subprocess module, like this:
import subprocess

def start(executable_file):
    return subprocess.Popen(
        executable_file,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)

def read(process):
    return process.stdout.readline().decode("utf-8").strip()

def write(process, message):
    process.stdin.write(f"{message.strip()}\n".encode("utf-8"))
    process.stdin.flush()

def terminate(process):
    process.stdin.close()
    process.terminate()
    process.wait(timeout=0.2)

process = start("python")
while True:
    write(process, input())
    print(read(process))
terminate(process)
But it seems to end up in a deadlock.
Does anyone know how to interact with the Python interpreter from Python code and recover stdout and stderr in a streaming fashion?
You need to use communicate() rather than read() and write() with subprocesses, otherwise it can lead to a deadlock.
See the red warning box towards the middle of the subprocess documentation page.
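For instance, a minimal sketch of the single-round-trip approach with communicate(): all input is sent up front, so it is not truly interactive, but it cannot deadlock.
import subprocess

# Feed a small script to the interpreter on stdin and collect all of its
# output once it exits. communicate() services both pipes, so it cannot
# deadlock the way interleaved read()/write() calls can.
proc = subprocess.Popen(
    ["python"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)

out, err = proc.communicate(b"print(1 + 1)\nprint('hello')\n")
print(out.decode("utf-8"))   # "2" and "hello"
print(err.decode("utf-8"))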

Terminating a running script with subprocess

I am using subprocess to execute a python script. I can successfully run the script.
import subprocess
import time
proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
proc.communicate()
time.sleep(10)
proc.terminate()
test.py
import time

while True:
    print("I am inside test.py")
    time.sleep(1)
I can see the message I am inside test.py printed every second. However, I am not able to terminate the script while it is still running by proc.terminate().
Could someone kindly help me out?
proc.communicate waits for the process to finish unless you include a timeout. Your process doesn't exit and you don't have a timeout, so communicate never returns. You could verify that with a print after the call. Instead, add a timeout and catch its exception:
import subprocess
import time

proc = subprocess.Popen(['python', 'test.py', ''], shell=True)
try:
    proc.communicate(timeout=10)
except subprocess.TimeoutExpired:
    proc.terminate()
First things first: don't pass the arguments to subprocess.Popen() as a list if you're using shell=True! Change the command to a string, "python test.py".
Popen.communicate(input=None, timeout=None) is a blocking method: it interacts with the process, waits for the process to terminate, and sets the returncode attribute.
Since your test.py runs an infinite while loop, it will never return!
You have two options to time out the process proc that you have spawned:
1. Pass the timeout keyword argument to communicate(), e.g. communicate(timeout=5) to give the process 5 seconds. If proc does not terminate after timeout seconds, a TimeoutExpired exception is raised. Catching this exception and retrying communication will not lose any output (in your case you don't need the child's output, but it is shown in the example below). Attention: the child process is not killed when the timeout expires, so in order to clean up properly a well-behaved application should kill the child process (proc) and finish communication.
2. Use the poll() method and do the timing yourself in the calling code.
communicate() with timeout:
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
poll() with time.sleep:
proc = subprocess.Popen('python test.py', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still sleeping')
    time.sleep(1)
    t -= 1
proc.kill()

Hanging parent process after running a timed-out subprocess and piping results

I wrote some code to run a script (via a subprocess) and kill the child process after a certain timeout. I'm running a script called "runtime_hang_script.sh" that just contains "./runtime_hang," which runs an infinite loop. I'm also redirecting stdout to a pipe -- I plan to write it to both sys.stdout and to a file (aka I'm trying to implement tee). However, my code hangs after the subprocess times out. Note that this ONLY hangs when running "sh runtime_hang_script.sh" and not "./runtime_hang." Also, this doesn't hang when I try piping directly to a file or when I don't read from the pipe.
I've tried other implementations of creating a timed subprocess, but I keep getting the same issue. I've even tried raising a signal at the end of the program -- for some reason, the signal is raised earlier than anticipated, so this doesn't work either. Any help would be appreciated. Thanks in advance!
process = None

def run():
    global process
    timeout_secs = 5
    args = ['sh', 'runtime_hang_script.sh']
    sys.stdout.flush()
    process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
    with process.stdout:
        for line in iter(process.stdout.readline, b''):
            sys.stdout.write(line.decode('utf-8'))
            sys.stdout.flush()
    process.wait()

proc_thread = threading.Thread(target=run)
proc_thread.start()
proc_thread.join(5)
print(proc_thread.is_alive())
if proc_thread.is_alive():
    process.kill()
Assuming you are using Python 3.3 or newer, you can use the timeout argument of the subprocess.communicate() method to implement your 5-second timeout:
import subprocess
import sys

timeout_secs = 5
args = ['sh', 'runtime_hang_script.sh']

process = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1)
try:
    print("Waiting for data from child process...")
    (stdoutData, stderrData) = process.communicate(None, timeout_secs)
    print("From child process: stdoutData=[%s] stderrData=[%s]" % (stdoutData, stderrData))
except subprocess.TimeoutExpired:
    print("Oops, child process took too long! Now it has to die")
    process.kill()
    print("Waiting for child process to exit...")
    process.wait()
    print("Child process exited.")
Note that spawning a child thread isn't necessary with this approach, since the timeout can work directly from the main thread.

How to kill a parallel process (started by subprocess.Popen) and its subprocess? [duplicate]

I'm launching a subprocess with the following command:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
However, when I try to kill using:
p.terminate()
or
p.kill()
The command keeps running in the background, so I was wondering how I can actually terminate the process.
Note that when I run the command with:
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
It does terminate successfully when issuing the p.terminate().
Use a process group so you can send a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of those processes, so when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.
Here's the code:
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # Send the signal to all the processes in the group
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.kill()
p.kill() ends up killing the shell process, while cmd keeps running.
I found a convenient fix for this:
p = subprocess.Popen("exec " + cmd, stdout=subprocess.PIPE, shell=True)
This causes cmd to replace the shell process (via exec), instead of the shell launching a child process that does not get killed. p.pid will then be the pid of your cmd process.
p.kill() should work.
I don't know what effect this will have on your pipe though.
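For example, a small sketch of that pattern (sleep 1000 is just a stand-in for a long-running cmd):
import subprocess
import time

# "exec" makes the shell replace itself with the command, so p.pid is the
# command's pid and p.kill() reaches the right process.
p = subprocess.Popen("exec sleep 1000", stdout=subprocess.PIPE, shell=True)

time.sleep(2)
p.kill()
p.wait()                     # reap the child
print(p.returncode)          # -9 (killed by SIGKILL)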
If you can use psutil, then this works perfectly:
import subprocess
import psutil

def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

proc = subprocess.Popen(["infinite_app", "param"], shell=True)
try:
    proc.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(proc.pid)
I could do it using
from subprocess import Popen
process = Popen(command, shell=True)
Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
It killed cmd.exe and the program that I gave the command for.
(On Windows)
When shell=True the shell is the child process, and the commands are its children. So any SIGTERM or SIGKILL will kill the shell but not its child processes, and I don't remember a good way to do it.
The best way I can think of is to use shell=False, otherwise when you kill the parent shell process, it will leave a defunct shell process.
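For example, a minimal sketch with shell=False, passing the argument list directly (ping 127.0.0.1 is just an example command):
import subprocess
import time

# No intermediate shell: the Popen object is the command itself, so
# terminate()/kill() signal the right process.
p = subprocess.Popen(["ping", "127.0.0.1"], stdout=subprocess.PIPE)

time.sleep(2)
p.terminate()
p.wait()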
None of these answers worked for me, so I'm leaving the code that did work. In my case, even after killing the process with .kill() and getting a .poll() return code, the process didn't terminate.
Following the subprocess.Popen documentation:
"...in order to cleanup properly a well-behaved application should kill the child process and finish communication..."
proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
In my case I was missing the proc.communicate() call after proc.kill(). That cleans up the process stdin, stdout ... and does terminate the process.
As Sai said, the shell is the child, so signals are intercepted by it -- the best way I've found is to use shell=False and use shlex to split the command line:
if isinstance(command, unicode):
    cmd = command.encode('utf8')
args = shlex.split(cmd)

p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Then p.kill() and p.terminate() should work how you expect.
Send the signal to all the processes in the group:
self.proc = Popen(commands,
                  stdout=PIPE,
                  stderr=STDOUT,
                  universal_newlines=True,
                  preexec_fn=os.setsid)

os.killpg(os.getpgid(self.proc.pid), signal.SIGHUP)
os.killpg(os.getpgid(self.proc.pid), signal.SIGTERM)
There is a very simple way for Python 3.5 or later (actually tested on Python 3.8):
import subprocess, signal, time

p = subprocess.Popen(['cmd'], shell=True)
time.sleep(5)  # Wait 5 secs before killing
p.send_signal(signal.CTRL_C_EVENT)  # CTRL_C_EVENT is Windows-only
Your code may then crash at some point if you have keyboard-input detection or something like that. In that case, on the line of code or in the function where the error is raised, just use:
try:
    FailingCode  # here goes the code which raises KeyboardInterrupt
except KeyboardInterrupt:
    pass
What this code does is simply send a "CTRL+C" signal to the running process, which will cause the process to be killed.
Solution that worked for me:
if os.name == 'nt':  # Windows
    subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
else:
    os.kill(process.pid, signal.SIGTERM)
Full-blown solution that will kill the running process (including its subtree) when a timeout is reached, or on specific conditions via a callback function.
Works on both Windows and Linux, from Python 2.7 up to 3.10 as of this writing.
Install with pip install command_runner
Example for timeout:
from command_runner import command_runner
# Kills ping after 2 seconds
exit_code, output = command_runner('ping 127.0.0.1', shell=True, timeout=2)
Example for specific condition:
Here we'll stop ping if the seconds digit of the current system time is > 5:
from time import time
from command_runner import command_runner

def my_condition():
    # Arbitrary condition for demo
    return int(str(int(time()))[-1]) > 5

# Calls my_condition() every second (check_interval) and kills ping if my_condition() returns True
exit_code, output = command_runner('ping 127.0.0.1', shell=True, stop_on=my_condition, check_interval=1)

Popen communicate is not working

I have a script that has been working properly for the past 3 months. The Server went down last Monday and since then my script stopped working. The script hangs at coords = p.communicate()[0].split().
Here's a part of the script:
class SelectByLatLon(GridSelector):
    def __init__(self, from_lat, to_lat, from_lon, to_lon):
        self.from_lat = from_lat
        self.to_lat = to_lat
        self.from_lon = from_lon
        self.to_lon = to_lon

    def get_selection(self, file):
        p = subprocess.Popen(
            [
                os.path.join(module_root, 'bin/points_from_latlon.tcl'),
                file,
                str(self.from_lat), str(self.to_lat), str(self.from_lon), str(self.to_lon)
            ],
            stdout=subprocess.PIPE
        )
        coords = p.communicate()[0].split()
        return ZGridSelection(int(coords[0]), int(coords[1]), int(coords[2]), int(coords[3]))
When I run the script on another server everything works just fine.
Can I use something else instead of p.communicate()[0].split() ?
You might have previously run your server without daemonization, i.e., you had functional stdin, stdout, and stderr streams. To fix it, you could redirect those streams to DEVNULL for the subprocess:
import os
from subprocess import Popen, PIPE
DEVNULL = os.open(os.devnull, os.O_RDWR)
p = Popen(tcl_cmd, stdin=DEVNULL, stdout=PIPE, stderr=DEVNULL, close_fds=True)
os.close(DEVNULL)
.communicate() may wait for EOF on stdout even if tcl_cmd already exited: the tcl script might have spawned a child process that inherited the standard streams and outlived its parent.
If you know that you don't need any stdout after the tcl_cmd exits then you could kill the whole process tree when you detect that tcl_cmd is done.
You might need start_new_session=True analog to be able to kill the whole process tree:
import os
import signal
from threading import Timer

def kill_tree_on_exit(p):
    p.wait()  # wait for tcl_cmd to exit
    os.killpg(p.pid, signal.SIGTERM)

t = Timer(0, kill_tree_on_exit, [p])
t.start()
coords = p.communicate()[0].split()
t.cancel()
See How to terminate a python subprocess launched with shell=True
