I want to process output from a subprocess and terminate that subprocess once I have received sufficient output. My program logic decides when we have enough.
Example: Wait for a Udev event
try:
    for event in sh.udevadm('monitor', _iter=True):
        if event matches usb stick added:
            print("Ok, I will work with that USB stick you just plugged in!")
            break
except:
    pass
print("I copy stuff on your USB stick now!")
The break terminates the process, but I cannot catch the exception it raises.
What is the correct way to terminate a subprocess or how can I handle the exception?
Maybe you're better off using subprocess.Popen directly instead of the sh library.
Something similar to:
$ cat udevtest.py
import subprocess

try:
    proc = subprocess.Popen(["udevadm", "monitor"], stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    while True:
        line = proc.stdout.readline().decode()
        if not line:
            break  # EOF
        if line.find("pci0000") != -1:  # your trigger
            print("TRIGGER %s" % line.strip())
            break
    proc.terminate()
    proc.wait()
except Exception as e:
    print(e)
Found a way using interactive callbacks.
http://amoffat.github.io/sh/#interactive-callbacks
Pretty neat, I think.
import re
import sh

# these are methods on a larger class (hence self)
def wait_for_plugged_in_drive(self):
    print("Plugin an external HDD or USB stick.")
    sh.udevadm.monitor(_out=self._scan_and_terminate).wait()

def _scan_and_terminate(self, line, stdin, process):
    match = re.search(r'add\s+/.*/block/sd./(sd..)', line)
    if match is not None:
        self.device = '/dev/' + match.group(1)
        print("USB stick recognized {0}".format(self.device))
        process.terminate()
        return True  # tells sh to stop calling this callback
Related
I have the following snippet of code:
def terminal_command(command, timeout=5*60):
    """Executes a terminal command."""
    cmd = command.split(" ")
    timer = time.strftime('%Hh %Mm %Ss', time.gmtime(timeout))
    proc = None
    try:
        proc = subprocess.run(cmd, timeout=timeout, capture_output=True)
    except subprocess.TimeoutExpired:
        print("Timeout")
        proc.terminate()
        reason = "timeout"
        stdout = b'error'
        stderr = b'error'
    if proc != None:
        # Finished!
        stdout = proc.stdout
        stderr = proc.stderr
        reason = "finished"
    return stdout.decode('utf-8').strip(), stderr.decode('utf-8').strip(), reason
I ran a command which takes significantly longer than 5 minutes. In this instance, subprocess.run raises an exception, but proc is now None so I cannot use proc.terminate(). When the code terminates, as has been well documented elsewhere, the child process continues to run. I would like to terminate it.
Is there any way to terminate a subprocess on a TimeoutExpired, whilst redirecting output? I am on a Linux system so am open to requiring Popen but ideally I would like this to be cross-platform.
Four months later: I got it.
The core issue is that killing only the direct child (e.g. with os.kill and signal.SIGKILL) doesn't stop any processes it has spawned; the fix is to start the child in its own process group (preexec_fn=os.setsid) and kill the whole group with os.killpg.
Modifying my code to the following works.
import os
import signal
import subprocess

def custom_terminal_command(self, command, timeout=5*60, cwd=None):
    with subprocess.Popen(command.split(" "), preexec_fn=os.setsid) as process:
        wd = os.getcwd()
        try:
            if cwd is not None:
                # change into the requested directory one component at a time
                for d in cwd.split("/"):
                    os.chdir(d)
            stdout, stderr = process.communicate(None, timeout=timeout)
        except subprocess.TimeoutExpired as exc:
            os.killpg(os.getpgid(process.pid), signal.SIGTERM)
            try:
                import msvcrt
            except ModuleNotFoundError:
                _mswindows = False
            else:
                _mswindows = True
            if _mswindows:
                # Windows accumulates the output in a single blocking
                # read() call run on child threads, with the timeout
                # being done in a join() on those threads. communicate()
                # _after_ kill() is required to collect that and add it
                # to the exception.
                exc.stdout, exc.stderr = process.communicate()
            else:
                # POSIX _communicate already populated the output so
                # far into the TimeoutExpired exception.
                process.wait()
            reason = 'timeout'
            stdout, stderr = process.communicate()
        except:  # Including KeyboardInterrupt; communicate handled that.
            process.kill()
            # We don't call process.wait() as .__exit__ does that for us.
            reason = 'other'
            stdout, stderr = process.communicate()
            raise
        else:
            reason = 'finished'
        finally:
            os.chdir(wd)

    try:
        return stdout.decode('utf-8').strip(), stderr.decode('utf-8').strip(), reason
    except AttributeError:
        try:
            return stdout.strip(), stderr.strip(), reason
        except AttributeError:
            return stdout, stderr, reason
See the following SO post for a short discussion: How to terminate a python subprocess launched with shell=True
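For completeness: if the command doesn't spawn children of its own, the simpler cross-platform pattern from the subprocess docs may be all you need. A minimal sketch (run_with_timeout is an illustrative name):

import subprocess

def run_with_timeout(cmd, timeout=5*60):
    # kill() only reaches the direct child, so this is not a substitute
    # for the process-group approach above when cmd spawns children
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        stdout, stderr = proc.communicate(timeout=timeout)
        reason = 'finished'
    except subprocess.TimeoutExpired:
        proc.kill()
        stdout, stderr = proc.communicate()  # collect whatever was produced
        reason = 'timeout'
    return stdout.decode('utf-8').strip(), stderr.decode('utf-8').strip(), reason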
I need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do.
I read this post:
Calling an external command in Python
I then went off and did some testing, and it looks like os.system() will do the job provided that I use & at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried subprocess.call(), but it will not work for me because it blocks on the external command.
Please let me know if using os.system() for this is advisable or if I should try some other route.
subprocess.Popen does exactly what you want.
from subprocess import Popen
p = Popen(['watch', 'ls']) # something long running
# ... do other stuff while subprocess is running
p.terminate()
(Edit to complete the answer from comments)
The Popen instance can do various other things like you can poll() it to see if it is still running, and you can communicate() with it to send it data on stdin, and wait for it to terminate.
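For instance, a quick sketch of poll(), communicate(), and the exit code (the command is only an illustration):

from subprocess import Popen, PIPE

p = Popen(['grep', 'foo'], stdin=PIPE, stdout=PIPE)
print(p.poll())                       # None -> still running
out, _ = p.communicate(b'foo bar\n')  # send stdin, read stdout, wait for exit
print(out, p.returncode)              # b'foo bar\n' 0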
If you want to run many processes in parallel and then handle them when they yield results, you can use polling like in the following:
from subprocess import Popen, PIPE
import time

running_procs = [
    Popen(['/usr/bin/my_cmd', '-i %s' % path], stdout=PIPE, stderr=PIPE)
    for path in '/tmp/file0 /tmp/file1 /tmp/file2'.split()]

while running_procs:
    for proc in running_procs:
        retcode = proc.poll()
        if retcode is not None:  # Process finished.
            running_procs.remove(proc)
            break
    else:  # No process is done, wait a bit and check again.
        time.sleep(.1)
        continue

    # Here, `proc` has finished with return code `retcode`
    if retcode != 0:
        """Error handling."""
    handle_results(proc.stdout)
The control flow there is a little bit convoluted because I'm trying to make it small -- you can refactor to your taste. :-)
This has the advantage of servicing the early-finishing requests first. If you call communicate on the first running process and that turns out to run the longest, the other running processes will have been sitting there idle when you could have been handling their results.
This is covered by Python 3 Subprocess Examples under "Wait for command to terminate asynchronously". Run this code using IPython or python -m asyncio:
import asyncio

proc = await asyncio.create_subprocess_exec(
    'ls', '-lha',
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.PIPE)

# do something else while ls is working
# if proc takes very long to complete, the CPUs are free to use cycles for
# other processes
stdout, stderr = await proc.communicate()
The process will start running as soon as await asyncio.create_subprocess_exec(...) completes. If it hasn't finished by the time you call await proc.communicate(), that call will wait in order to give you the output. If it has finished, proc.communicate() returns immediately.
The gist here is similar to Terrel's answer, which I think overcomplicates things.
See asyncio.create_subprocess_exec for more information.
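If you are writing an ordinary script rather than using IPython or python -m asyncio, wrap the snippet in a coroutine and hand it to asyncio.run; a minimal sketch:

import asyncio

async def main():
    proc = await asyncio.create_subprocess_exec(
        'ls', '-lha',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    # do something else while ls is working
    stdout, stderr = await proc.communicate()
    print(stdout.decode())

asyncio.run(main())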
What I am wondering is if this [os.system()] is the proper way to accomplish such a thing?
No. os.system() is not the proper way. That's why everyone says to use subprocess.
For more information, read http://docs.python.org/library/os.html#os.system
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
The accepted answer is very old.
I found a better modern answer here:
https://kevinmccarthy.org/2016/07/25/streaming-subprocess-stdin-and-stdout-with-asyncio-in-python/
and made some changes:
make it work on windows
make it work with multiple commands
import sys
import asyncio

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())


async def _read_stream(stream, cb):
    while True:
        line = await stream.readline()
        if line:
            cb(line)
        else:
            break


async def _stream_subprocess(cmd, stdout_cb, stderr_cb):
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        # gather (rather than wait) also works on Python 3.11+, where
        # asyncio.wait() no longer accepts bare coroutines
        await asyncio.gather(
            _read_stream(process.stdout, stdout_cb),
            _read_stream(process.stderr, stderr_cb),
        )
        rc = await process.wait()
        return process.pid, rc
    except OSError as e:
        # the program will hang if we let any exception propagate
        return e


def execute(*aws):
    """run the given coroutines in an asyncio loop
    returns a list containing the values returned from each coroutine.
    """
    loop = asyncio.get_event_loop()
    rc = loop.run_until_complete(asyncio.gather(*aws))
    loop.close()
    return rc


def printer(label):
    def pr(*args, **kw):
        print(label, *args, **kw)
    return pr


def name_it(start=0, template="s{}"):
    """a simple generator for task names"""
    while True:
        yield template.format(start)
        start += 1


def runners(cmds):
    """
    cmds is a list of commands to execute as subprocesses;
    each item is a list appropriate for use by subprocess.call
    """
    next_name = name_it().__next__
    for cmd in cmds:
        name = next_name()
        out = printer(f"{name}.stdout")
        err = printer(f"{name}.stderr")
        yield _stream_subprocess(cmd, out, err)


if __name__ == "__main__":
    cmds = (
        [
            "sh",
            "-c",
            """echo "$SHELL"-stdout && sleep 1 && echo stderr 1>&2 && sleep 1 && echo done""",
        ],
        [
            "bash",
            "-c",
            "echo 'hello, Dave.' && sleep 1 && echo dave_err 1>&2 && sleep 1 && echo done",
        ],
        [sys.executable, "-c", 'print("hello from python");import sys;sys.exit(2)'],
    )

    print(execute(*runners(cmds)))
It is unlikely that the example commands will work perfectly on your system, and it doesn't handle weird errors, but this code does demonstrate one way to run multiple subprocesses using asyncio and stream the output.
I've had good success with the asyncproc module, which deals nicely with the output from the processes. For example:
import os
from asyncproc import Process

myProc = Process("myprogram.app")

while True:
    # check to see if process has ended
    poll = myProc.wait(os.WNOHANG)
    if poll is not None:
        break
    # print any new output
    out = myProc.read()
    if out != "":
        print(out)
Using pexpect with non-blocking readlines is another way to do this. Pexpect solves the deadlock problems, allows you to easily run the processes in the background, and gives easy ways to have callbacks when your process spits out predefined strings, and generally makes interacting with the process much easier.
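A minimal sketch of that pattern with pexpect (the udevadm command and the regex are just illustrations):

import pexpect

child = pexpect.spawn('udevadm monitor')
# expect() blocks until output matching the pattern arrives
child.expect(r'add\s+.*block')
print('matched:', child.after.decode())
child.terminate()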
Considering "I don't have to wait for it to return", one of the easiest solutions will be this:
subprocess.Popen(
    [path_to_executable, arg1, arg2, ... argN],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
).pid
But... from what I read, this is not "the proper way to accomplish such a thing" because of security risks created by the subprocess.CREATE_NEW_CONSOLE flag.
The key things that happen here are the use of subprocess.CREATE_NEW_CONSOLE to create a new console, and of .pid (it returns the process ID so that you can check on the program later if you want to), so that you don't have to wait for the program to finish its job.
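Note that subprocess.CREATE_NEW_CONSOLE exists only on Windows. Also, if you keep the Popen object instead of only its pid, checking on the program later is straightforward; a sketch:

import subprocess

proc = subprocess.Popen(
    [path_to_executable, arg1],  # as above
    creationflags=subprocess.CREATE_NEW_CONSOLE,  # Windows-only flag
)
# ... later, without waiting in between ...
if proc.poll() is None:
    print('still running, pid', proc.pid)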
I had the same problem trying to connect to a 3270 terminal using the s3270 scripting software in Python. Now I'm solving the problem with a subclass of Popen that I found here:
http://code.activestate.com/recipes/440554/
And here is the sample taken from the file (Python 2 code; it relies on the recipe's Popen subclass, which provides recv(), recv_err(), and send(), and defines message):
def recv_some(p, t=.1, e=1, tr=5, stderr=0):
    if tr < 1:
        tr = 1
    x = time.time()+t
    y = []
    r = ''
    pr = p.recv
    if stderr:
        pr = p.recv_err
    while time.time() < x or r:
        r = pr()
        if r is None:
            if e:
                raise Exception(message)
            else:
                break
        elif r:
            y.append(r)
        else:
            time.sleep(max((x-time.time())/tr, 0))
    return ''.join(y)

def send_all(p, data):
    while len(data):
        sent = p.send(data)
        if sent is None:
            raise Exception(message)
        data = buffer(data, sent)

if __name__ == '__main__':
    if sys.platform == 'win32':
        shell, commands, tail = ('cmd', ('dir /w', 'echo HELLO WORLD'), '\r\n')
    else:
        shell, commands, tail = ('sh', ('ls', 'echo HELLO WORLD'), '\n')

    a = Popen(shell, stdin=PIPE, stdout=PIPE)
    print recv_some(a),
    for cmd in commands:
        send_all(a, cmd + tail)
        print recv_some(a),
    send_all(a, 'exit' + tail)
    print recv_some(a, e=0)
    a.wait()
There are several answers here but none of them satisfied my below requirements:
I don't want to wait for command to finish or pollute my terminal with subprocess outputs.
I want to run bash script with redirects.
I want to support piping within my bash script (for example find ... | tar ...).
The only combination that satisfies the above requirements is:

subprocess.Popen('./my_script.sh "arg1" > "redirect/path/to"',
                 stdout=subprocess.PIPE,
                 stderr=subprocess.PIPE,
                 shell=True)
Thank you for taking a look at my post.
First, the following is my code:
import os

print("You can create your own message for alarm.")
user_message = input(">> ")
print("\n<< Sample alarm sound >>")

for time in range(0, 3):
    os.system('say ' + user_message)  # this code makes sound.

print("\nOkay, The alarm has been set.")

"""
##### My problem is here #####
##### THIS IS NOT STOPPED #####
while True:
    try:
        os.system('say ' + user_message)
    except KeyboardInterrupt:
        print("Alarm stopped")
        exit(0)
"""
My problem is that Ctrl+C does not work!
I tried changing the position of the try block and writing a signal (SIGINT) handler, but those don't work either.
I have seen https://stackoverflow.com/a/8335212/5247212, https://stackoverflow.com/a/32923070/5247212, and several other answers about this problem.
I am using macOS (10.12.3) and Python 3.5.2.
This is expected behaviour, as os.system() is a thin wrapper around the C function system(). As noted in the man page, the parent process ignores SIGINT during the execution of the command. In order to exit the loop, you have to manually check the exit code of the child process (this is also mentioned in the man page):
import os
import signal

while True:
    code = os.system('sleep 1000')
    if code == signal.SIGINT:
        print('Awakened')
        break
However, the preferred (and more pythonic) way to achieve the same result is to use the subprocess module:
import subprocess

while True:
    try:
        subprocess.run(('sleep', '1000'))
    except KeyboardInterrupt:
        print('Awakened')
        break
Your code would then look like something like this:
import subprocess

print("You can create your own message for alarm.")
user_message = input(">> ")
print("\n<< Sample alarm sound >>")

for time in range(0, 3):
    subprocess.run(['say', user_message])  # this code makes sound.

print("\nOkay, The alarm has been set.")

while True:
    try:
        subprocess.run(['say', user_message])
    except KeyboardInterrupt:
        print("Alarm terminated")
        exit(0)
As an added note, subprocess.run() is only available in Python 3.5+. You can use subprocess.call() to achieve the same effect in older versions of Python.
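The substitution is mechanical; a sketch of the same loop with call():

import subprocess

while True:
    try:
        subprocess.call(['say', user_message])  # user_message as above
    except KeyboardInterrupt:
        print("Alarm terminated")
        exit(0)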
Also catch SystemExit:

except (KeyboardInterrupt, SystemExit):
    print("Alarm stopped")
The problem seems to be that Ctrl+C is captured by the subprocess you call via os.system(). The subprocess reacts accordingly, probably by terminating whatever it is doing. If so, the return value of os.system() will be non-zero, and you can use that to break the while loop.
Here's an example that works for me (substituting sleep for say):
import os
import sys

while True:
    try:
        if os.system('sleep 1'):
            raise KeyboardInterrupt
    except KeyboardInterrupt:
        print("Alarm stopped")
        sys.exit(0)
If Ctrl-C is captured by the subprocess, which is the case here, the simplest solution is to check the return value of os.system(). For example, in my case it returns a value of 2 if Ctrl-C stops it, which is the SIGINT signal number.
import os

while True:
    r = os.system(my_job)
    if r == 2:
        print('Stopped')
        break
    elif r != 0:
        print('Some other error', r)
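On Python 3.9+, os.waitstatus_to_exitcode() decodes the raw status portably instead of matching magic numbers; a sketch:

import os
import signal

r = os.system(my_job)                # my_job as above
code = os.waitstatus_to_exitcode(r)  # a negative value -N means "killed by signal N"
if code == -signal.SIGINT:
    print('Stopped')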
The task I'm trying to accomplish is to stream a ruby file and print out the output. (NOTE: I don't want to print out everything at once.)
main.py
from subprocess import Popen, PIPE, STDOUT
import pty
import os

file_path = '/Users/luciano/Desktop/ruby_sleep.rb'
command = ' '.join(["ruby", file_path])

master, slave = pty.openpty()
proc = Popen(command, bufsize=0, shell=True, stdout=slave, stderr=slave, close_fds=True)

stdout = os.fdopen(master, 'r', 0)
while proc.poll() is None:
    data = stdout.readline()
    if data != "":
        print(data)
    else:
        break

print("This is never reached!")
ruby_sleep.rb
puts "hello"
sleep 2
puts "goodbye!"
Problem
Streaming the file works fine: the hello/goodbye output is printed with the 2-second delay, exactly as the script should behave. The problem is that readline() hangs at the end and never returns, so I never reach the last print.
I know there are a lot of questions like this on Stack Overflow, but none of them solved my problem. I'm not that into the whole subprocess thing, so please give me a hands-on/concrete answer.
Regards
Edit: fixed unintended code (nothing to do with the actual error).
I assume you use pty due to reasons outlined in Q: Why not just use a pipe (popen())? (all other answers so far ignore your "NOTE: I don't want to print out everything at once").
pty is Linux-only, as said in the docs:
Because pseudo-terminal handling is highly platform dependent, there is code to do it only for Linux. (The Linux code is supposed to work on other platforms, but hasn't been tested yet.)
It is unclear how well it works on other OSes.
You could try pexpect:
import sys
import pexpect
pexpect.run("ruby ruby_sleep.rb", logfile=sys.stdout)
Or stdbuf to enable line-buffering in non-interactive mode:
from subprocess import Popen, PIPE, STDOUT

proc = Popen(['stdbuf', '-oL', 'ruby', 'ruby_sleep.rb'],
             bufsize=1, stdout=PIPE, stderr=STDOUT, close_fds=True)
for line in iter(proc.stdout.readline, b''):
    print(line.decode(), end='')
proc.stdout.close()
proc.wait()
Or using pty from stdlib based on #Antti Haapala's answer:
#!/usr/bin/env python
import errno
import os
import pty
from subprocess import Popen, STDOUT

# provide tty to enable line-buffering on ruby's side
master_fd, slave_fd = pty.openpty()
proc = Popen(['ruby', 'ruby_sleep.rb'],
             stdin=slave_fd, stdout=slave_fd, stderr=STDOUT, close_fds=True)
os.close(slave_fd)
try:
    while 1:
        try:
            data = os.read(master_fd, 512)
        except OSError as e:
            if e.errno != errno.EIO:
                raise
            break  # EIO means EOF on some systems
        else:
            if not data:  # EOF
                break
            print('got ' + repr(data))
finally:
    os.close(master_fd)
    if proc.poll() is None:
        proc.kill()
    proc.wait()

print("This is reached!")
All three code examples print 'hello' immediately (as soon as the first EOL is seen).
I'll leave the old, more complicated code example here because it may be referenced and discussed in other posts on SO.
Or using pty based on #Antti Haapala's answer:
import os
import pty
import select
from subprocess import Popen, STDOUT

# provide tty to enable line-buffering on ruby's side
master_fd, slave_fd = pty.openpty()
proc = Popen(['ruby', 'ruby_sleep.rb'],
             stdout=slave_fd, stderr=STDOUT, close_fds=True)
timeout = .04  # seconds
while 1:
    ready, _, _ = select.select([master_fd], [], [], timeout)
    if ready:
        data = os.read(master_fd, 512)
        if not data:
            break
        print("got " + repr(data))
    elif proc.poll() is not None:  # select timeout
        assert not select.select([master_fd], [], [], 0)[0]  # detect race condition
        break  # proc exited
os.close(slave_fd)  # can't do it sooner: it leads to errno.EIO error
os.close(master_fd)
proc.wait()

print("This is reached!")
Not sure what is wrong with your code, but the following seems to work for me:
#!/usr/bin/python
from subprocess import Popen, PIPE
import threading

p = Popen('ls', stdout=PIPE)

class ReaderThread(threading.Thread):
    def __init__(self, stream):
        threading.Thread.__init__(self)
        self.stream = stream

    def run(self):
        while True:
            line = self.stream.readline()
            if len(line) == 0:
                break
            print(line.decode(), end='')

reader = ReaderThread(p.stdout)
reader.start()

# Wait until subprocess is done
p.wait()

# Wait until we've processed all output
reader.join()

print("Done!")
Note that I don't have Ruby installed and hence cannot check with your actual problem. Works fine with ls, though.
Basically what you are looking at here is a race condition between your proc.poll() and your readline(). Since the input on the master filehandle is never closed, if the process attempts to do a readline() on it after the ruby process has finished outputting, there will never be anything to read, but the pipe will never close. The code will only work if the shell process closes before your code tries another readline().
Here is the timeline:
readline()
print-output
poll()
readline()
print-output (last line of real output)
poll() (returns None since process is not done)
readline() (waits for more output)
(process is done, but output pipe still open and no poll ever happens for it).
The easy fix is to just use the subprocess module as the docs suggest, not in conjunction with openpty:
http://docs.python.org/library/subprocess.html
Here is a very similar problem for further study:
Using subprocess with select and pty hangs when capturing output
Try this:
proc = Popen(command, bufsize=0, shell=True, stdout=PIPE, close_fds=True)
for line in proc.stdout:
    print(line.decode(), end='')

print("This is most certainly reached!")
As others have noted, readline() will block when reading data. It will even do so when your child process has died. I am not sure why this does not happen when executing ls as in the other answer, but maybe the ruby interpreter detects that it is writing to a PIPE and therefore it will not close automatically.
I am using pty to do non-blocking reads of a process's stdout, like this:
import os
import pty
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(cmd, stdout=slave)
stdout = os.fdopen(master)

while True:
    if p.poll() is not None:
        break
    print(stdout.readline())
stdout.close()
Everything works fine except that the while loop occasionally blocks. This is because the print(stdout.readline()) line waits for something to be read from stdout. But if the program has already terminated, my little script up there will hang forever.
My question is: is there a way to peek into the stdout object and check whether there is data available to be read? If there isn't, the loop should continue, discover that the process has already terminated, and break.
Yes, use the select module's poll:

import select

q = select.poll()
q.register(stdout, select.POLLIN)

and in the while loop use:

l = q.poll(0)
if not l:
    pass  # no input
else:
    pass  # there is some input
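Putting it together with the loop from the question (keeping the pty setup and cmd from there), a sketch:

import os
import pty
import select
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(cmd, stdout=slave)  # cmd as in the question
stdout = os.fdopen(master)

q = select.poll()
q.register(stdout, select.POLLIN)

while True:
    if q.poll(0):    # data is waiting, so readline() won't block
        print(stdout.readline())
    elif p.poll() is not None:
        break        # nothing to read and the process has exited
stdout.close()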
The select.poll() answer is very neat, but doesn't work on Windows. The following solution is an alternative. It doesn't allow you to peek at stdout, but it provides a non-blocking alternative to readline() and is based on this answer:
from subprocess import Popen, PIPE
from threading import Thread

def process_output(myprocess):  # output-consuming thread
    nextline = None
    buf = ''
    while True:
        # --- extract line using read(1)
        out = myprocess.stdout.read(1)
        if out == '' and myprocess.poll() is not None:
            break
        if out != '':
            buf += out
            if out == '\n':
                nextline = buf
                buf = ''
        if not nextline:
            continue
        line = nextline
        nextline = None

        # --- do whatever you want with line here
        print('Line is:', line)
    myprocess.stdout.close()

# universal_newlines=True makes stdout a text stream, so the
# '' and '\n' comparisons above work on Python 3
myprocess = Popen('myprogram.exe', stdout=PIPE, universal_newlines=True)  # output-producing process
p1 = Thread(target=process_output, args=(myprocess,))  # output-consuming thread
p1.daemon = True
p1.start()

# --- do whatever here and then kill process and thread if needed
if myprocess.poll() is None:  # kill process; will automatically stop thread
    myprocess.kill()
    myprocess.wait()
if p1 and p1.is_alive():  # wait for thread to finish
    p1.join()
Other solutions for non-blocking read have been proposed here, but did not work for me:
Solutions that require readline (including the Queue based ones) always block. It is difficult (impossible?) to kill the thread that executes readline. It only gets killed when the process that created it finishes, but not when the output-producing process is killed.
Mixing low-level fcntl with high-level readline calls may not work properly as anonnn has pointed out.
Using select.poll() is neat, but doesn't work on Windows according to the Python docs.
Using third-party libraries seems overkill for this task and adds additional dependencies.