I am a system administrator and this is the first time I am trying to achieve something using Python. I am working on a small Python tool that will run a .bat file in a QThread. On the GUI I have a text edit box where I want to show the output/errors from the .bat file.
Here is the code I have so far:
QThread -
class runbat(QtCore.QThread):
    line_printed = QtCore.pyqtSignal(str)

    def __init__(self):
        super(runbat, self).__init__()

    def run(self):
        popen = subprocess.Popen("install.bat", stdout=subprocess.PIPE, shell=True)
        lines_iterator = iter(popen.stdout.readline, b"")
        for line in lines_iterator:
            self.line_printed.emit(line)
From main -
self.batfile.line_printed.connect(self.batout)

def batout(self, line):
    cursor = self.ui.textEdit.textCursor()
    cursor.movePosition(cursor.End)
    cursor.insertText(line)
    self.ui.textEdit.ensureCursorVisible()
but I am getting - TypeError: runbat.line_printed[str].emit(): argument 1 has unexpected type 'bytes'. Another question: does stdout catch only the program's output, or the errors as well? What do I need to do to capture the errors too?
OK, I was able to get it to work by changing the code to the following.
In the QThread -
line_printed = QtCore.pyqtSignal(bytes)
In main -

def batout(self, line):
    output = str(line, encoding='utf-8')
    cursor = self.ui.textEdit.textCursor()
    cursor.movePosition(cursor.End)
    cursor.insertText(output)
    self.ui.textEdit.ensureCursorVisible()
Basically the output was in bytes and I had to convert it to a string. It's working as expected with these changes; however, if anyone has a better solution I would love to try it. Thank you all.
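One possible refinement (a sketch, assuming PyQt5 and the same install.bat): decode inside the thread so the signal can stay a str, and merge stderr into stdout so error lines reach the widget too.

import subprocess
from PyQt5 import QtCore

class runbat(QtCore.QThread):
    line_printed = QtCore.pyqtSignal(str)

    def run(self):
        # stderr=subprocess.STDOUT folds error output into the same pipe
        popen = subprocess.Popen("install.bat",
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT,
                                 shell=True)
        for line in iter(popen.stdout.readline, b""):
            # decode here so the connected slot receives a plain str
            self.line_printed.emit(line.decode("utf-8", errors="replace"))
        popen.wait()

With this, the batout slot can insert the line directly without converting from bytes.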
The Problem
I want to interact with interactive terminal programs from Python scripts; these programs might not always be written in Python. I already managed to do it with pexpect and the class in the code snippet below, but I struggle to find a way to capture the whole output after each instruction.
The Context
I cannot capture the whole output of the command (all the lines) and keep the program alive for future inputs.
Let's say I want to do this:
terminal.start("/path/to/executable/repl/file") # on start returns 3 lines of output
terminal.run_command("let a = fn(a) { a + 1 }") # this command return 1 line of output
terminal.run_command("var") # this command will return 2 lines of output
terminal.run_command("invalid = invalid") # this command returns 1 line of output
Note that the number of lines in each output might vary, because I want to be able to run multiple interactive terminal programs.
What I have tried
Attempt 1
I tried using readlines but as the documentation states
Remember, because this reads until EOF that means the child process should have closed its stdout.
This means that once I run it, the process is closed for future instructions, which is not my expected behaviour. Anyway, when I try it I get the following.
def read(self):
    return list(self.process.readlines())
For a reason unknown to me the program just does nothing, prints nothing, raises no error, just stays paused with no output whatsoever.
Attempt 2
Read each line until an empty line is found, like this:
def read(self):
    val = self.process.readline()
    result = ""
    while val != "":
        result += val
        val = self.process.readline()
    return result
Once again the same problem: the program pauses, prints no output, does nothing for a few seconds, and then it raises the error pexpect.exceptions.TIMEOUT: Timeout exceeded.
Attempt 3
Using the read_nonblocking method causes my program to read only a few characters, so I use the first parameter, size, as follows.
def read(self):
    return self.process.read_nonblocking(999999999)
Only then do I get the expected behaviour, but only for a few commands; after that it reads nothing. Besides, if I pass an even bigger number, a memory overflow error is raised.
The Code
This is the implementation of the Terminal class.
import pexpect


class Terminal:
    process: pexpect.spawn

    def __init__(self):
        self.process = None

    def start(self, executable_file: str):
        '''
        run a command that starts an executable TUI program; returns the output
        (if present) of the initialization of the program
        '''
        self.process = pexpect.spawn(executable_file, encoding="utf-8", maxread=1)
        return self.read()

    def read(self):
        '''return entire output of last executed command'''
        return self.process.readline()  # breaks when called more times than there are lines of output

    def write(self, message):
        '''send value to program through keyboard input'''
        self.process.sendline(message)

    def terminate(self):
        '''kill process/program and reset property value to None'''
        self.process.kill()
        self.process.wait()
        self.process = None

    def run_command(self, command: str):
        '''
        run an instruction for the executed program
        and get the returned result as a string
        '''
        self.write(command)
        return self.read()
How I consume the class. This is what I run to test each of the attempts mentioned above:
from terminal import Terminal
term = Terminal()
print(term.start("/path/to/executable/repl/file"), end="")
print(term.run_command("let a = fn(a) { a + 1 }"), end="")
print(term.run_command("a(1)"), end="")
print(term.run_command("let b = [1,2,4]"), end="")
print(term.run_command("b[0]"), end="")
print(term.run_command("b[1]"), end="")
print(term.run_command("a(2)"), end="")
If you want to know which specific programs I want to run, it's just these two, 1 and 2, at the moment, but I expect to add more in the future.
The crux of the problem comes from detecting when the command you sent to the console program has finished writing output.
I started by creating a very simple console program with input and output: echo. It just writes back what you wrote.
Here it is :
echo.py
import sys

print("Welcome to PythonEcho Ultimate Edition 2023")
while True:
    new_line = sys.stdin.readline()
    print(new_line, file=sys.stdout, end="")  # because the line already has a \n at the end
    print("")  # an empty line, because that's how the webshell detects the output for this command has ended
It is a slightly modified version that prints a line at the start (because that's what your startProgram() expects when doing terminal.read()) and an empty line after each echo (because that's how your runCommand detected that the output finished).
I did it without Flask, because it is not needed to make the communication with the console program work, and leaving it out helps with debugging (see Minimal Reproducible Example). So here is the code I used:
main.py
import subprocess


class Terminal:
    def __init__(self):
        self.process = None

    def start(self, executable_file):
        self.process = subprocess.Popen(
            executable_file,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )

    def read(self):
        return self.process.stdout.readline().decode("utf-8")
        # no `.strip()` there ^^^^^^^^

    def write(self, message):
        self.process.stdin.write(f"{message.strip()}\n".encode("utf-8"))
        self.process.stdin.flush()

    def terminate(self):
        self.process.stdin.close()
        self.process.terminate()
        self.process.wait(timeout=0.2)


terminal = Terminal()


def startProgram():
    terminal.start(["python3", "echo.py"])  # using my echo program
    return terminal.read()


def runCommand(command: str):
    terminal.write(command)
    result = ""
    line = "initialized to something else than \\n"
    while line != "\n":  # an empty line is considered a separator?
        line = terminal.read()
        result += line
    return result


def stopProgram():
    terminal.terminate()
    return "connection terminated"


if __name__ == "__main__":  # for simple testing
    result = startProgram()
    print(result)
    result = runCommand("cmd1")
    print(result, end="")
    result = runCommand("cmd2")
    print(result, end="")
    result = stopProgram()
    print(result)
which gives me
Welcome to PythonEcho Ultimate Edition 2023
cmd1
cmd2
connection terminated
I removed the strip() at the end of Terminal.read, otherwise the lines would not be correctly concatenated later in runCommand (or was it voluntary?).
I changed the runCommand function to stop reading when it encounters a "\n" (which is an empty line, as opposed to an empty string, which would indicate the end of the stream). You did not explain how you detect that the output from your programs has ended; you should take care of this.
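If you would rather stay with pexpect than switch to subprocess, another common way to detect the end of a command's output is to wait for the REPL's prompt instead of an empty line. A minimal sketch, assuming the programs print a prompt such as ">> " after every command (PROMPT is a placeholder you would adjust per program):

import pexpect

PROMPT = ">> "  # hypothetical prompt; set this per program

class PromptTerminal:
    def start(self, executable_file: str):
        self.process = pexpect.spawn(executable_file, encoding="utf-8")
        self.process.expect_exact(PROMPT)   # wait for the banner and the first prompt
        return self.process.before          # everything printed before the prompt

    def run_command(self, command: str):
        self.process.sendline(command)
        self.process.expect_exact(PROMPT)   # block until the next prompt appears
        output = self.process.before
        # pexpect echoes the line we sent; drop that first echoed line
        return output.split("\r\n", 1)[1] if "\r\n" in output else ""

The trade-off is that you need to know each program's prompt, but it avoids guessing how many lines a command will print.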
I want to remote control a python application which uses urwid for the user interface.
My idea was to create a file, pass its name as a command line argument to the application, and have the application read from that file whenever I write to it.
Urwid's event loop has a method watch_file(fd, callback).
This method is described as "Call callback() when fd has some data to read."
This sounds exactly like what I want to have, but it causes an infinite loop.
callback is executed as often as possible, despite the fact that the file is empty.
Even if I delete the file, callback is still called.
#!/usr/bin/env python3

import urwid
import atexit


def onkeypress(key, size=None):
    if key == 'q':
        raise urwid.ExitMainLoop()
    text.set_text(key)


def onfilechange():
    text.set_text(cmdfile.read())
    # clear file so that I don't read already executed commands again
    # and don't run into an infinite loop - but I am doing that anyway
    with open(cmdfile.name, 'w') as f:
        pass


cmdfile = open('/tmp/cmd', 'rt')
atexit.register(cmdfile.close)

text = urwid.Text("hello world")
filler = urwid.Filler(text)
loop = urwid.MainLoop(filler, unhandled_input=onkeypress)
loop.watch_file(cmdfile, onfilechange)

if __name__ == '__main__':
    loop.run()
(My initial idea was to open the file only for reading instead of keeping it open all the time but fd has to be a file object, not a path.)
Urwid offers several different event loops.
By default, SelectEventLoop is used.
GLibEventLoop has the same behaviour: it runs into an infinite loop.
AsyncioEventLoop instead throws an "operation not permitted" exception.
TwistedEventLoop and TornadoEventLoop would need additional software to be installed.
I have considered using the independent watchdog library, but it seems accessing the user interface from another thread would require writing a new loop; see this Stack Overflow question.
The answer to that question recommends polling instead which I would prefer to avoid.
If urwid specifically provides a method to watch a file I cannot believe that it does not work in any implementation.
So what am I doing wrong?
How do I react to a file change in a python/urwid application?
EDIT:
I have tried using named pipes (and removed the code to clear the file) but visually it has the same behaviour: the app does not start.
Audibly, however, there is a great difference: It does not go into the infinite loop until I write to the file.
Before I write to the file callback is not called but the app is not started either, it just does nothing.
After I write to the file, it behaves as described above for regular files.
I have found the following workaround: read the named pipe in another thread, save each line in a queue, and poll in the UI thread to see if something is in the queue.
Create the named pipe with mkfifo /tmp/mypipe.
Then write to it with echo >>/tmp/mypipe "some text".
#!/usr/bin/env python3

import os
import threading
import queue

import urwid


class App:

    POLL_TIME_S = .5

    def __init__(self):
        self.text = urwid.Text("hello world")
        self.filler = urwid.Filler(self.text)
        self.loop = urwid.MainLoop(self.filler, unhandled_input=self.onkeypress)

    def watch_pipe(self, path):
        self._cmd_pipe = path
        self.queue = queue.Queue()
        threading.Thread(target=self._read_pipe_thread, args=(path,)).start()
        self.loop.set_alarm_in(0, self._poll_queue)

    def _read_pipe_thread(self, path):
        while self._cmd_pipe:
            with open(path, 'rt') as pipe:
                for ln in pipe:
                    self.queue.put(ln)
        self.queue.put("!! EOF !!")

    def _poll_queue(self, loop, args):
        while not self.queue.empty():
            ln = self.queue.get()
            self.text.set_text(ln)
        self.loop.set_alarm_in(self.POLL_TIME_S, self._poll_queue)

    def close(self):
        path = self._cmd_pipe
        # stop reading
        self._cmd_pipe = None
        with open(path, 'wt') as pipe:
            pipe.write("")
        os.remove(path)

    def run(self):
        self.loop.run()

    def onkeypress(self, key, size=None):
        if key == 'q':
            raise urwid.ExitMainLoop()
        self.text.set_text(key)


if __name__ == '__main__':
    a = App()
    a.watch_pipe('/tmp/mypipe')
    a.run()
    a.close()
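An alternative sketch that avoids the polling alarm, assuming urwid's MainLoop.watch_pipe: it returns a file descriptor, and anything written to that descriptor is handed to the callback inside the event loop, so the widget is only ever touched from the UI thread.

#!/usr/bin/env python3

import os
import threading

import urwid

def onkeypress(key):
    if key == 'q':
        raise urwid.ExitMainLoop()

text = urwid.Text("hello world")
loop = urwid.MainLoop(urwid.Filler(text), unhandled_input=onkeypress)

def on_pipe_data(data):
    # called inside the urwid event loop with the bytes written to write_fd
    text.set_text(data.decode('utf-8', 'replace').rstrip('\n'))

write_fd = loop.watch_pipe(on_pipe_data)

def read_fifo(path):
    while True:
        with open(path, 'rt') as fifo:  # blocks until a writer opens the FIFO
            for line in fifo:
                os.write(write_fd, line.encode('utf-8'))

threading.Thread(target=read_fifo, args=('/tmp/mypipe',), daemon=True).start()
loop.run()

Writing to /tmp/mypipe with echo then updates the text widget without the UI thread sleeping on a poll interval.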
I'm trying to have what is written to stdout in a subprocess displayed in a Qt widget.
I'm starting the process this way:
import subprocess
subprocess.call(["program_name", "arguments"])
I think I have to make a class that acts like stdout and point to it when I call subprocess. I tried this:
class Log:
    def __init__(self, qtWidget):
        self.qtWidget = qtWidget

    def write(self, data):
        self.qtWidget.append(data)

# (...)
log = Log(theWidget)
subprocess.call(["program_name", "arguments"], stdout=log)
but I am getting an error saying: AttributeError: 'Log' object has no attribute 'fileno'
Can't really think of any other ideas, apart from redirecting to a file and then having a timer which reads from it from time to time..
[edit]
Ended up with this:
process = QProcess()
process.setProcessChannelMode( QProcess.MergedChannels )
process.start( "program_name", [ "arguments" ] )
process.readyReadStandardOutput.connect( aFunction )
# then in the function...
outputBytes = process.readAll().data()
outputUnicode = outputBytes.decode( 'utf-8' )
messageWidget.append( outputUnicode )
thanks for the help!
Redirecting of input/output streams in the subprocess module works at a lower level; that's why you can only pass file-like objects which have a fileno.
If you want to read the data from python, you can either use subprocess.check_output instead of subprocess.call, which will return the data written to the process's stdout as bytes:
log = subprocess.check_output(["program_name", "arguments"])
... # use log
Or you can use subprocess.Popen and pass stdout=subprocess.PIPE. Then you can read the data from the returned object's stdout:
p = subprocess.Popen(["program_name", "arguments"], stdout=subprocess.PIPE)
log = p.stdout.read()
... # use log
There should be a lot of examples in the subprocess documentation.
Edit:
If you need tighter integration with Qt, you could also use QProcess to start your program instead of the subprocess module. That would allow you to use its read channel's readyRead signal to be notified when data is ready to be read from the process.
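For reference, a minimal QProcess sketch along those lines, assuming PyQt5; "program_name" and "arguments" are the placeholders from the question, and the output is appended to a QTextEdit:

import sys

from PyQt5.QtCore import QProcess
from PyQt5.QtWidgets import QApplication, QTextEdit

app = QApplication(sys.argv)
log_widget = QTextEdit()

process = QProcess()
process.setProcessChannelMode(QProcess.MergedChannels)  # stderr merged into stdout

def on_ready_read():
    data = process.readAllStandardOutput().data()  # QByteArray -> bytes
    log_widget.append(data.decode('utf-8', 'replace'))

process.readyReadStandardOutput.connect(on_ready_read)
process.start("program_name", ["arguments"])

log_widget.show()
sys.exit(app.exec_())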
I've spent ages looking for a way to do this, and I've so far come up with nothing. :(
I'm trying to make a GUI for a little CLI program that I've made - so I thought using Ubuntu's "Quickly" would be the easiest way. Basically it appears to use Glade for making the GUI. I know that I need to run my CLI backend in a subprocess and then send the stdout and stderr to a textview. But I can't figure out how to do this.
This is the code that Glade/Quickly created for the Dialog box that I want the output to appear into:
from gi.repository import Gtk  # pylint: disable=E0611
from onice_lib.helpers import get_builder

import gettext
from gettext import gettext as _
gettext.textdomain('onice')


class BackupDialog(Gtk.Dialog):
    __gtype_name__ = "BackupDialog"

    def __new__(cls):
        """Special static method that's automatically called by Python when
        constructing a new instance of this class.

        Returns a fully instantiated BackupDialog object.
        """
        builder = get_builder('BackupDialog')
        new_object = builder.get_object('backup_dialog')
        new_object.finish_initializing(builder)
        return new_object

    def finish_initializing(self, builder):
        """Called when we're finished initializing.

        finish_initalizing should be called after parsing the ui definition
        and creating a BackupDialog object with it in order to
        finish initializing the start of the new BackupDialog
        instance.
        """
        # Get a reference to the builder and set up the signals.
        self.builder = builder
        self.ui = builder.get_ui(self)
        self.test = False

    def on_btn_cancel_now_clicked(self, widget, data=None):
        # TODO: Send SIGTERM to the subprocess
        self.destroy()


if __name__ == "__main__":
    dialog = BackupDialog()
    dialog.show()
    Gtk.main()
If I put this in the finish_initializing function
backend_process = subprocess.Popen(["python", <path to backend>], stdout=subprocess.PIPE, shell=False)
then the process starts and runs as another PID, which is what I want, but now how do I send backend_process.stdout to the TextView? I can write to the textview with:
BackupDialog.ui.backup_output.get_buffer().insert_at_cursor("TEXT")
But I just need to know how to have this be called each time there is a new line of stdout.
But I just need to know how to have this be called each time there is a new line of stdout.
You could use GObject.io_add_watch to monitor the subprocess output or create a separate thread to read from the subprocess.
# read from subprocess
def read_data(source, condition):
    line = source.readline()  # might block
    if not line:
        source.close()
        return False  # stop reading
    # update text
    label.set_text('Subprocess output: %r' % (line.strip(),))
    return True  # continue reading

io_id = GObject.io_add_watch(proc.stdout, GObject.IO_IN, read_data)
Or using a thread:
# read from subprocess in a separate thread
def reader_thread(proc, update_text):
    with closing(proc.stdout) as file:
        for line in iter(file.readline, b''):
            # execute update_text() in GUI thread
            GObject.idle_add(update_text, 'Subprocess output: %r' % (line.strip(),))

t = Thread(target=reader_thread, args=[proc, label.set_text])
t.daemon = True  # exit with the program
t.start()
Complete code examples.
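Adapting the thread variant to the question's TextView might look roughly like this (a sketch; GObject.idle_add marshals the append back onto the GTK main loop, and backend_process and self.ui.backup_output are the names used in the question):

from contextlib import closing
from threading import Thread

from gi.repository import GObject

def append_line(textview, text):
    textview.get_buffer().insert_at_cursor(text)
    return False  # run this idle callback only once

def reader_thread(proc, textview):
    with closing(proc.stdout) as out:
        for line in iter(out.readline, b''):
            # never touch GTK widgets from this thread; queue the call instead
            GObject.idle_add(append_line, textview, line.decode('utf-8', 'replace'))

# inside finish_initializing, after starting backend_process:
t = Thread(target=reader_thread, args=(backend_process, self.ui.backup_output))
t.daemon = True
t.start()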
I'm working on an application in wxPython which is a GUI for a command-line utility. In the GUI there is a text control which should display the output from the application. I'm launching the shell command using subprocess, but I don't get any output from it until it has completed.
I have tried several solutions but none of them seems to work. Below is the code I'm using at the moment (updated):
def onOk(self, event):
    self.getControl('infotxt').Clear()
    try:
        thread = threading.Thread(target=self.run)
        thread.setDaemon(True)
        thread.start()
    except Exception:
        print 'Error starting thread'

def run(self):
    args = dict()
    # creating a command to execute...
    cmd = ["aplcorr", "-vvfile", args['vvfile'], "-navfile", args['navfile'],
           "-lev1file", args['lev1file'], "-dem", args['dem'], "-igmfile", args['outfile']]
    proc = subprocess.Popen(' '.join(cmd), shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()
class RedirectInfoText:
    """ Class to redirect stdout text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl

    def write(self, string):
        self.out.WriteText(string)

class RedirectErrorText:
    """ Class to redirect stderr text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl
        self.out.SetDefaultStyle(wx.TextAttr())

    def write(self, string):
        self.out.SetDefaultStyle(wx.TextAttr(wx.RED))
        self.out.WriteText(string)
In particular I'm going to need the output in real-time to create a progress-bar.
Edit: I changed my code, based on Mike Driscoll's suggestion. It seems to work sometimes, but most of the time I'm getting one of the following errors:
(python:7698): Gtk-CRITICAL **: gtk_text_layout_real_invalidate:
assertion `layout->wrap_loop_count == 0' failed
or
(python:7893): Gtk-WARNING **: Invalid text buffer iterator: either
the iterator is uninitialized, or the characters/pixbufs/widgets in
the buffer have been modified since the iterator was created. You must
use marks, character numbers, or line numbers to preserve a position
across buffer modifications. You can apply tags and insert marks
without invalidating your iterators, but any mutation that affects
'indexable' buffer contents (contents that can be referred to by
character offset) will invalidate all outstanding iterators
Segmentation fault (core dumped)
Any clues?
The problem is that you are calling wx.Yield and updating the output widgets from the context of the thread running the process, instead of doing the update from the GUI thread.
Since you are running the process from a thread there should be no need to call wx.Yield, because you are not blocking the GUI thread, and so any pending UI events should be processed normally anyway.
Take a look at the wx.PyOnDemandOutputWindow class for an example of how to handle prints or other output that originate from a non-GUI thread.
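In practice that usually means replacing the wx.Yield/print dance with wx.CallAfter, which queues the widget update onto the GUI thread. A rough sketch against the question's run method (getControl('infotxt') and the aplcorr command construction are taken from the question):

import subprocess
import wx

def run(self):
    args = dict()  # populated elsewhere, as in the question
    cmd = ["aplcorr", "-vvfile", args['vvfile'], "-navfile", args['navfile'],
           "-lev1file", args['lev1file'], "-dem", args['dem'], "-igmfile", args['outfile']]
    proc = subprocess.Popen(' '.join(cmd), shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(proc.stdout.readline, ''):
        # hand the update to the GUI thread instead of touching the widget here
        wx.CallAfter(self.getControl('infotxt').AppendText, line)
    proc.wait()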
This can be a little tricky, but I figured out one way to do it which I wrote about here: http://www.blog.pythonlibrary.org/2010/06/05/python-running-ping-traceroute-and-more/
After you have set up the redirection of the text, you just need to do something like this:
def pingIP(self, ip):
    proc = subprocess.Popen("ping %s" % ip, shell=True,
                            stdout=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()
The article shows how to redirect the text too. Hopefully that will help!
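The redirection step it describes essentially boils down to pointing sys.stdout and sys.stderr at redirect classes like the ones already in the question before running the loop, for example (a sketch reusing RedirectInfoText/RedirectErrorText and getControl('infotxt') from the question):

import sys

# e.g. in onOk, before kicking off the work:
sys.stdout = RedirectInfoText(self.getControl('infotxt'))
sys.stderr = RedirectErrorText(self.getControl('infotxt'))

After that, the print calls in the loop land in the text control instead of the console.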