wxPython, capturing output from a subprocess in real-time

I'm working on an application in wxPython which is a GUI for a command-line utility. In the GUI there is a text control which should display the output from the application. I'm launching the shell command using subprocess, but I don't get any output from it until it has completed.
I have tried several solutions but none of them seems to work. Below is the code I'm using at the moment (updated):
def onOk(self, event):
    self.getControl('infotxt').Clear()
    try:
        thread = threading.Thread(target=self.run)
        thread.setDaemon(True)
        thread.start()
    except Exception:
        print 'Error starting thread'

def run(self):
    args = dict()
    # creating a command to execute...
    cmd = ["aplcorr", "-vvfile", args['vvfile'], "-navfile", args['navfile'],
           "-lev1file", args['lev1file'], "-dem", args['dem'], "-igmfile", args['outfile']]
    proc = subprocess.Popen(' '.join(cmd), shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()
class RedirectInfoText:
    """ Class to redirect stdout text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl

    def write(self, string):
        self.out.WriteText(string)

class RedirectErrorText:
    """ Class to redirect stderr text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl
        self.out.SetDefaultStyle(wx.TextAttr())  # self.out must be assigned before it is used

    def write(self, string):
        self.out.SetDefaultStyle(wx.TextAttr(wx.RED))
        self.out.WriteText(string)
In particular, I'm going to need the output in real time to drive a progress bar.
Edit: I changed my code, based on Mike Driscoll's suggestion. It seems to work sometimes, but most of the time I'm getting one of the following errors:
(python:7698): Gtk-CRITICAL **: gtk_text_layout_real_invalidate:
assertion `layout->wrap_loop_count == 0' failed
or
(python:7893): Gtk-WARNING **: Invalid text buffer iterator: either
the iterator is uninitialized, or the characters/pixbufs/widgets in
the buffer have been modified since the iterator was created. You must
use marks, character numbers, or line numbers to preserve a position
across buffer modifications. You can apply tags and insert marks
without invalidating your iterators, but any mutation that affects
'indexable' buffer contents (contents that can be referred to by
character offset) will invalidate all outstanding iterators
Segmentation fault (core dumped)
Any clues?

The problem is that you are calling wx.Yield and updating the output widgets from the context of the thread running the process, instead of doing the update from the GUI thread.
Since you are running the process from a thread there should be no need to call wx.Yield, because you are not blocking the GUI thread, and so any pending UI events should be processed normally anyway.
Take a look at the wx.PyOnDemandOutputWindow class for an example of how to handle prints or other output that originate from a non-GUI thread.
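For instance, a minimal sketch of that idea using wx.CallAfter to marshal each line of output back to the GUI thread (the frame and control names here are made up):
import subprocess
import threading
import wx

class OutputFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Live output")
        self.infotxt = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)

    def start(self, cmd):
        # read the pipe on a worker thread so the GUI thread is never blocked
        threading.Thread(target=self.reader, args=(cmd,), daemon=True).start()

    def reader(self, cmd):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        for line in iter(proc.stdout.readline, b''):
            # wx.CallAfter queues the update to run on the GUI thread,
            # so the TextCtrl is only ever touched from one thread
            wx.CallAfter(self.infotxt.AppendText, line.decode(errors='replace'))
        proc.wait()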

This can be a little tricky, but I figured out one way to do it which I wrote about here: http://www.blog.pythonlibrary.org/2010/06/05/python-running-ping-traceroute-and-more/
After you have set up the redirection of the text, you just need to do something like this:
def pingIP(self, ip):
    proc = subprocess.Popen("ping %s" % ip, shell=True,
                            stdout=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()
The article shows how to redirect the text too. Hopefully that will help!
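For completeness, the redirection the article sets up amounts to pointing sys.stdout at any object with a write method, e.g. the RedirectInfoText class from the question (a sketch; infotxt stands for the GUI's output TextCtrl):
import sys

sys.stdout = RedirectInfoText(infotxt)   # every print now lands in the TextCtrl
sys.stderr = RedirectErrorText(infotxt)  # stderr shows up in red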

Related

output closes prematurely when a pexpect subprocess returns variable outputs

The Problem
I want to interact with interactive terminal programs from Python scripts; these programs might not always be written in Python. I already managed to do it with pexpect and the class in the code snippet below, but I struggle to find a way to capture the whole output after each instruction.
The Context
I cannot capture the whole output of the command (all the lines) and keep the program alive for future inputs.
Let's say I want to do this:
terminal.start("/path/to/executable/repl/file")  # on start, returns 3 lines of output
terminal.run_command("let a = fn(a) { a + 1 }")  # this command returns 1 line of output
terminal.run_command("var")  # this command returns 2 lines of output
terminal.run_command("invalid = invalid")  # this command returns 1 line of output
Note that the number of lines in each output might vary, because I want to be able to run multiple interactive terminal programs.
What I have tried
Attempt 1
I tried using readlines, but as the documentation states:
Remember, because this reads until EOF that means the child process should have closed its stdout.
It means that once I run it, it will close my process to future instructions, which is not my expected behaviour. Anyway, when I try it I get the following.
def read(self):
    return list(self.process.readlines())
For a reason unknown to me the program just does nothing, prints nothing, raises no error, just stays paused with no output whatsoever.
Attempt 2
Read each line until finding an empty line, like this:
def read(self):
    val = self.process.readline()
    result = ""
    while val != "":
        result += val
        val = self.process.readline()
    return result
Once again the same problem: the program pauses, prints nothing, does nothing for a few seconds, then raises the error pexpect.exceptions.TIMEOUT: Timeout exceeded.
Attempt 3
Using the read_nonblocking method causes my program to read only a few characters, so I pass the first parameter, size, as follows.
def read(self):
    return self.process.read_nonblocking(999999999)
Only then do I get the expected behavior, but only for a few commands; after that it reads nothing. Besides, if I pass an even bigger number, a memory overflow error is raised.
The Code
This is the implementation of the Terminal class.
import pexpect
import signal

class Terminal:
    process: pexpect.spawn

    def __init__(self):
        self.process = None

    def start(self, executable_file: str):
        '''
        run an executable TUI program; returns the output (if present)
        of the initialization of the program
        '''
        self.process = pexpect.spawn(executable_file, encoding="utf-8", maxread=1)
        return self.read()

    def read(self):
        '''return entire output of last executed command'''
        return self.process.readline()  # when executed more times than there are lines of output, the program breaks

    def write(self, message):
        '''send value to program through keyboard input'''
        self.process.sendline(message)

    def terminate(self):
        '''kill process/program and reset property value to None'''
        self.process.kill(signal.SIGKILL)  # pexpect's kill() expects a signal number
        self.process.wait()
        self.process = None

    def run_command(self, command: str):
        '''
        run an instruction for the executed program
        and get the returned result as a string
        '''
        self.write(command)
        return self.read()
How I consume the class — this is what I run to test each attempt mentioned above:
from terminal import Terminal
term = Terminal()
print(term.start("/path/to/executable/repl/file"), end="")
print(term.run_command("let a = fn(a) { a + 1 }"), end="")
print(term.run_command("a(1)"), end="")
print(term.run_command("let b = [1,2,4]"), end="")
print(term.run_command("b[0]"), end="")
print(term.run_command("b[1]"), end="")
print(term.run_command("a(2)"), end="")
If you want to know what kind of specific programs I want to run, it's just these two, 1 and 2, at the moment, but I expect to add more in the future.
The crux of the problem comes from detecting when the command you sent to the console program has finished writing output.
I started by creating a very simple console program with input and output: echo. It just writes back what you wrote.
Here it is:
echo.py
import sys

print("Welcome to PythonEcho Ultimate Edition 2023", flush=True)
while True:
    new_line = sys.stdin.readline()
    # the line already has an \n at the end; flush, because pipes are
    # block-buffered and the parent's readline() would otherwise not see it
    print(new_line, file=sys.stdout, end="", flush=True)
    # an empty line, because that's how the webshell detects that the output for this command has ended
    print("", flush=True)
It is a slightly modified version that prints a line at the start (because that's what your startProgram() expects when doing terminal.read()) and an empty line after each echo (because that's how your runCommand detected that the output finished).
I did it without Flask, because it is not needed to make the communication with the console program work, and leaving it out helps with debugging (see Minimal Reproducible Example). So here is the code I used:
main.py
import subprocess

class Terminal:
    def __init__(self):
        self.process = None

    def start(self, executable_file):
        self.process = subprocess.Popen(
            executable_file,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )

    def read(self):
        return self.process.stdout.readline().decode("utf-8")
        # no `.strip()` there ^^^^^^^^

    def write(self, message):
        self.process.stdin.write(f"{message.strip()}\n".encode("utf-8"))
        self.process.stdin.flush()

    def terminate(self):
        self.process.stdin.close()
        self.process.terminate()
        self.process.wait(timeout=0.2)

terminal = Terminal()

def startProgram():
    terminal.start(["python3", "echo.py"])  # using my echo program
    return terminal.read()

def runCommand(command: str):
    terminal.write(command)
    result = ""
    line = "initialized to something else than \\n"
    while line != "\n":  # an empty line is considered a separator ?
        line = terminal.read()
        result += line
    return result

def stopProgram():
    terminal.terminate()
    return "connection terminated"

if __name__ == "__main__":  # for simple testing
    result = startProgram()
    print(result)
    result = runCommand("cmd1")
    print(result, end="")
    result = runCommand("cmd2")
    print(result, end="")
    result = stopProgram()
    print(result)
which gives me
Welcome to PythonEcho Ultimate Edition 2023
cmd1
cmd2
connection terminated
I removed the strip() at the end of the Terminal.read, otherwise the lines would not be correctly concatenated later in runCommand (or was it intentional?).
I changed the runCommand function to stop reading when it encounters an "\n" (which is an empty line, as opposed to an empty string, which would indicate the end of the stream). You did not explain how you detect that the output from your programs has ended; you should take care with this.
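If you stay with pexpect instead, the usual way to detect the end of a command's output is to wait for the REPL's prompt rather than counting lines. A sketch, assuming the target program prints a known prompt string such as ">> " (PROMPT is a placeholder; adjust it to whatever your REPLs actually print):
import pexpect

PROMPT = ">> "  # hypothetical prompt; adjust to your REPL

child = pexpect.spawn("/path/to/executable/repl/file", encoding="utf-8")
child.expect_exact(PROMPT)  # consume the startup banner

def run_command(command: str) -> str:
    child.sendline(command)
    # everything printed before the next prompt belongs to this command
    child.expect_exact(PROMPT)
    output = child.before
    # drop the first line (the echo of the command itself)
    return output.split("\n", 1)[1] if "\n" in output else ""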

Python3: Catch SIGTTOU from subprocess and print warning

Hello, I have a problem with a small debug GUI I have written (using PySimpleGUI). Part of the GUI is the capability to call a local Linux shell command.
One of the shell commands/programs I want to execute raises a SIGTTOU signal if I start the GUI in the background (with &), which freezes the GUI until I bring it to the foreground with 'fg'.
Since it's kind of normal to start the GUI in the background, I just want to catch the SIGTTOU signal, print a warning, and continue.
The following code snippet kind of works, but leaves the shell commands as zombies (and I get no return value from the commands). What works even better is signal.signal(signal.SIGTTOU, signal.SIG_IGN), but I really want to print a warning. Is that possible? And what does signal.SIG_IGN do that removes the zombies?
cmd_output = collections.deque()
...
def _handle_sigttou(signum, frame):
    sys.__stdout__.write('WARNING: <SIGTTOU> received\n')

def _run_shell_command(cmd, timeout=None):
    # run command
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # process output
    for line in p.stdout:
        line = line.decode(errors='backslashreplace').rstrip()
        cmd_output.appendleft(f'{line}')
    # wait for return code
    retval = p.wait(timeout)
    # print result
    cmd_output.appendleft(f'✘ ({retval})' if retval else '✅')

# send command
signal.signal(signal.SIGTTOU, _handle_sigttou)
#signal.signal(signal.SIGTTOU, signal.SIG_IGN)  # this works without zombies, but I want to print a warning
threading.Thread(target=_run_shell_command, args=(_line, 60), daemon=True).start()
Use window.write_event_value(event, value) to send an event to your window, then call sg.popup to show the value when the event arrives in your event loop. Do not try to update the GUI from your thread.
...
def _handle_sigttou(signum, frame):
    window.write_event_value("<SIGTTOU>", ("WARNING", "<SIGTTOU> received"))
...
while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
    elif event == "<SIGTTOU>":
        title, message = values[event]
        sg.popup(message, title=title, auto_close=True, auto_close_duration=2)

window.close()
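The same mechanism also works for streaming the command's output into the GUI instead of a module-level deque. A sketch, with "<LINE>" and "<DONE>" as made-up event keys:
import subprocess

def _run_shell_command(window, cmd, timeout=None):
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in p.stdout:
        # hand each line to the GUI thread; never touch widgets from here
        window.write_event_value("<LINE>", line.decode(errors='backslashreplace').rstrip())
    window.write_event_value("<DONE>", p.wait(timeout))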

Update bat file output from Qthread to Gui

I am a system administrator and this is the first time I am trying to achieve something using Python. I am working on a small Python tool that will run a bat file in a QThread. On the GUI I have a textedit box where I want to show the output/errors from the bat file.
Here is the code I have so far,
QThread -
class runbat(QtCore.QThread):
    line_printed = QtCore.pyqtSignal(str)

    def __init__(self):
        super(runbat, self).__init__()

    def run(self):
        popen = subprocess.Popen("install.bat", stdout=subprocess.PIPE, shell=True)
        lines_iterator = iter(popen.stdout.readline, b"")
        for line in lines_iterator:
            self.line_printed.emit(line)
From main -
self.batfile.line_printed.connect(self.batout)

def batout(self, line):
    cursor = self.ui.textEdit.textCursor()
    cursor.movePosition(cursor.End)
    cursor.insertText(line)
    self.ui.textEdit.ensureCursorVisible()
but I am getting TypeError: runbat.line_printed[str].emit(): argument 1 has unexpected type 'bytes'. Another question: does stdout catch errors or just output? What do I need to do to catch the errors as well?
OK, I was able to get it to work by changing the code to the following.
In the QThread:
line_printed = QtCore.pyqtSignal(bytes)
In main:
def batout(self, line):
    output = str(line, encoding='utf-8')
    cursor = self.ui.textEdit.textCursor()
    cursor.movePosition(cursor.End)
    cursor.insertText(output)
    self.ui.textEdit.ensureCursorVisible()
Basically the output was in bytes and I had to convert it to a string. It's working as expected with these changes; however, if anyone has a better solution I would love to try it. Thank you all.
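One arguably cleaner variant is to decode inside the thread and keep the signal typed as str, so the slot never sees bytes; merging stderr into stdout with stderr=subprocess.STDOUT also answers the error-capturing question. A sketch, assuming PyQt5:
import subprocess
from PyQt5 import QtCore

class RunBat(QtCore.QThread):
    line_printed = QtCore.pyqtSignal(str)

    def run(self):
        popen = subprocess.Popen(
            "install.bat", shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT)  # error lines arrive on the same pipe
        for line in iter(popen.stdout.readline, b""):
            # decode here so the GUI slot receives ready-to-insert text
            self.line_printed.emit(line.decode("utf-8", errors="replace"))
        popen.wait()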

Get realtime output from python subprocess

I'm trying to invoke a command line utility from Python. The code is as follows
import subprocess
import sys

class Executor:
    def executeEXE(self, executable):
        CREATE_NO_WINDOW = 0x08000000
        process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                                   creationflags=CREATE_NO_WINDOW)
        while True:
            line = process.stdout.readline()
            if line == '' and process.poll() is not None:
                break
            print line
The problem with the above code is that I want the real-time output of the process, which I'm not getting. What am I doing wrong here?
There are two problems in your code:
First of all, readline() will block until a new line is printed and flushed.
That means you should execute the code
while True:
...
in a new Thread and call a callback function when the output is ready.
Since readline is waiting for a new line, you must use
print 'Hello World'
sys.stdout.flush()
every time in your executable.
You can see some code and example on my git:
pyCommunicator
Instead, if your external tool is buffered, the only thing you can try is to use stderr as PIPE:
https://stackoverflow.com/a/11902799/2054758
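A minimal Python 3 sketch of the thread-plus-callback approach described above (stream_output and the command are placeholders):
import subprocess
import threading

def stream_output(cmd, on_line):
    """Run cmd and invoke on_line(text) for every line as soon as it appears."""
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    def reader():
        for line in iter(process.stdout.readline, b''):
            on_line(line.decode(errors='replace').rstrip())
        process.wait()
    # read on a worker thread so the caller is never blocked by readline()
    threading.Thread(target=reader, daemon=True).start()
    return process

# usage: print lines as they arrive
stream_output(["some_utility.exe"], print)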

Python: using threads to call subprocess.Popen multiple times

I have a service that is running (Twisted jsonrpc server). When I make a call to "run_procs" the service will look at a bunch of objects and inspect their timestamp property to see if they should run. If they should, they get added to a thread_pool (list) and then every item in the thread_pool gets the start() method called.
I have used this setup for several other applications where I wanted to run a function within my class with threading. However, when I am using a subprocess.Popen call in the function called by each thread, the calls run one at a time instead of running concurrently like I would expect.
Here is some sample code:
class ProcService(jsonrpc.JSONRPC):
    def __init__(self):
        self.thread_pool = []
        self.running_threads = []
        self.lock = threading.Lock()

    def clean_pool(self, thread_pool, join=False):
        for th in [x for x in thread_pool if not x.isAlive()]:
            if join: th.join()
            thread_pool.remove(th)
            del th
        return thread_pool

    def run_threads(self, parallel=10):
        while len(self.running_threads) + len(self.thread_pool) > 0:
            self.clean_pool(self.running_threads, join=True)
            n = min(max(parallel - len(self.running_threads), 0), len(self.thread_pool))
            if n > 0:
                for th in self.thread_pool[0:n]: th.start()
                self.running_threads.extend(self.thread_pool[0:n])
                del self.thread_pool[0:n]
            time.sleep(.01)
        for th in self.running_threads + self.thread_pool: th.join()

    def jsonrpc_run_procs(self):
        for i, item in enumerate(self.items):
            if item.should_run():
                self.thread_pool.append(threading.Thread(target=self.run_proc, args=(item,)))
        self.run_threads(5)

    def run_proc(self, proc):
        self.lock.acquire()
        print "\nSubprocess started"
        p = subprocess.Popen('%s/program_to_run.py %s' % (os.getcwd(), proc.data), shell=True,
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE,)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.lock.release()
Any help/suggestions are appreciated.
* EDIT *
OK. So now I want to read back the output from the stdout pipe. This works some of the time, but also fails with select.error: (4, 'Interrupted system call'). I assume this is because sometimes the process has already terminated before I try to run the communicate method. The code in the run_proc method has been changed to:
def run_proc(self, proc):
    self.lock.acquire()
    p = subprocess.Popen( # etc
    self.running_procs.append([p, proc.data.id])
    self.lock.release()
After I call self.run_threads(5) I call self.check_procs().
The check_procs method iterates over the list of running_procs to check whether poll() is not None. How can I get the output from the pipe? I have tried both of the following:
calling check_procs once:
def check_procs(self):
    for proc_details in self.running_procs:
        proc = proc_details[0]
        while (proc.poll() == None):
            time.sleep(0.1)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.running_procs.remove(proc_details)
        print proc_details[1], stdout_value
        del proc_details
calling check_procs in a while loop, like:
while len(self.running_procs) > 0:
    self.check_procs()

def check_procs(self):
    for proc_details in self.running_procs:
        proc = proc_details[0]
        if (proc.poll() is not None):
            stdout_value = proc.communicate('through stdin to stdout')[0]
            self.running_procs.remove(proc_details)
            print proc_details[1], stdout_value
            del proc_details
I think the key code is:
self.lock.acquire()
print "\nSubprocess started"
p = subprocess.Popen( # etc
stdout_value = proc.communicate('through stdin to stdout')[0]
self.lock.release()
The explicit calls to acquire and release should guarantee serialization -- don't you observe serialization just as invariably if you do other things in this block instead of the subprocess use?
Edit: all silence here, so I'll add the suggestion to remove the locking and instead put each stdout_value on a Queue.Queue() instance -- Queue is intrinsically thread-safe (it deals with its own locking), so you can get (or get_nowait, etc.) results from it once they're ready and have been put there. In general, Queue is the best way to arrange thread communication (and often synchronization too) in Python, any time it can feasibly be arranged to do things that way.
Specifically: add import Queue at the start; give up making, acquiring and releasing self.lock (just delete those three lines); add self.q = Queue.Queue() to the __init__; right after the call stdout_value = proc.communicate(... add one statement, self.q.put(stdout_value); now, e.g., finish the jsonrpc_run_procs method with
while not self.q.empty():
    result = self.q.get()
    print 'One result is %r' % result
to confirm that all the results are there. (Normally the empty method of queues is not reliable, but in this case all threads putting to the queue are already finished, so you should be fine).
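Put together, the Queue-based run_proc would look roughly like this (a sketch in the question's Python 2 style; the rest of the class is assumed unchanged):
import Queue

# in __init__:
#     self.q = Queue.Queue()

def run_proc(self, proc):
    # no lock needed: Queue does its own synchronization
    p = subprocess.Popen('%s/program_to_run.py %s' % (os.getcwd(), proc.data),
                         shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    stdout_value = p.communicate('through stdin to stdout')[0]
    self.q.put(stdout_value)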
Your specific problem is probably caused by the line stdout_value = proc.communicate('through stdin to stdout')[0]. communicate will "Wait for process to terminate", which, when called while holding a lock, makes the processes run one at a time.
What you can do is simply add the p variable to a list, then use the subprocess API to wait for the subprocesses to finish, periodically polling each subprocess in your main thread.
On second look, it looks like you may have an issue on this line as well: for th in self.running_threads+self.thread_pool: th.join(). Thread.join() is another method that will wait for the thread to finish.
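A sketch of that polling approach (names follow the question's code; the output is collected only once poll() reports that a process has exited):
import time

def check_procs(self):
    # iterate over a copy so entries can be removed while looping
    for proc_details in list(self.running_procs):
        p, data_id = proc_details
        if p.poll() is None:
            continue  # still running; check again on the next pass
        stdout_value = p.communicate('through stdin to stdout')[0]
        self.running_procs.remove(proc_details)
        print data_id, stdout_value

# in the main thread, after starting the workers:
while self.running_procs:
    self.check_procs()
    time.sleep(0.1)  # avoid a busy loop between polls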
