I am making an application in PySide and I want to add a console/terminal-like screen, where you have a prompt and you can type commands. How would I be able to accomplish this? I am guessing some combination of QPlainTextEdit/QTextEdit for the output and QLineEdit for the actual prompt. Is there a better way to do this?
You could check out Spyder. They use PyQt (which is similar) and have a terminal. I think you can import their terminal widget, but I haven't played with it.
https://code.google.com/p/spyderlib/
Also, it is my favorite python editor by far!
I've spent a fair amount of time trying to find something like this, but to no avail. Good luck!
I did this with a custom QPlainTextEdit and a custom QLineEdit. I also had an indicator label that displayed ">>>" on the terminal to mark user input. It still needs more work. The best approach would be to create your own custom widget based on a QTextEdit and your own IO handler. Below is an example of my execute method, with self.input being the QLineEdit and self.view being the QTextEdit; it should give you the general idea, and a minimal wiring sketch follows after the code.
import io, re, sys, subprocess, shlex, code, rlcompleter, platform
def execute(self, currentText=None):
"""Execute runs the command given based on the console type.
If the console type is "both" then execute will run python commands
unless the user gives the input ::os::command (or "::(platform.system())::command").
Otherwise, the console type determines what the input will be executed with.
Args:
currentText(str): Text to execute. None will run the text from the QLineEdit self.input.
"""
# Check for given text
if currentText is None:
currentText = self.input.text()
self.input.clear()
self.view.display(currentText, "input")
else:
cmd = re.search("^>>>", currentText) # search for start only
if cmd is None:
currentText = ">>>" + currentText
else:
self.view.display(currentText, "input")
# end
# Variables
self.completer.clear()
cmdString = re.sub("^>>>", "", currentText)
result = None
displayType = "output"
run, cmdString = self.getRunType(cmdString)
try:
# Check where the output is going
sys.stdout = self.out = io.StringIO()
sys.stderr = sys.stdout
if run == "python": # Run python command
result = self._runInterpreter(cmdString)
displayType = "python"
elif run == "os": # Run os command
result = self._runSubprocess(cmdString)
displayType = "os"
except Exception as err:
result = str(err)
displayType = "Error"
notFoundPython = "NameError:" in result and "is not defined" in result
notFoundWindows = "returned non-zero exit status" in result
if notFoundPython or notFoundWindows:
result = "Command not found"
finally:
sys.stdout = self.old_stdout
sys.stderr = self.old_stdout
self.display(result, displayType)
# end execute
def getRunType(self, cmdString):
run = self._consoleType
# Check the run type
if self._consoleType == "both":
if re.search("^::python::", cmdString) is not None:
cmdString = re.sub("^::[a-z]*::", "", cmdString)
run = "python"
elif re.search("^(::os::|::"+platform.system()+"::)", cmdString) is not None:
cmdString = re.sub("^::[a-zA-Z]*::", "", cmdString)
run = "os"
else:
run = "python"
# end
return run, cmdString
# end getRunType
def _runInterpreter(self, cmdString, outIO=None, run=None):
# Check for a proper console type
if(self._consoleType != "both" and self._consoleType != "python"):
return
# Get the IO
if outIO is None:
outIO = sys.stdout
# Run python command
self.interpreter.push(cmdString)
# Check outIO
result = "Unreadable buffer: Check python's sys.stdout"
if isinstance(outIO, io.StringIO):
result = outIO.getvalue()
else:
if outIO.readable():
result = str(outIO.readlines())
# Check for error
if re.search("^Traceback", result) or re.search("[a-zA-z]*Error:", result):
raise ValueError(result)
return result
# end _runInterpreter
def _runSubprocess(self, cmdString, run=None):
# Check for a proper console type
if(self._consoleType != "both" and self._consoleType != "os"):
return
# Run OS command
cmd = shlex.split(cmdString)
result = subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode("utf-8")
# Check for error
if re.search("^Traceback", result) or re.search("[a-zA-z]*Error:", result):
raise ValueError(result)
return result
# end _runSubprocess
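To show how the pieces fit together, here is a minimal, self-contained sketch of the QPlainTextEdit + QLineEdit wiring; MiniConsole is a made-up name, and a plain echo stands in for the interpreter/subprocess handling shown above:

import sys
from PySide.QtGui import (QApplication, QWidget, QVBoxLayout,
                          QPlainTextEdit, QLineEdit)

class MiniConsole(QWidget):
    """Bare-bones console: read-only output view on top, one-line prompt below."""
    def __init__(self, parent=None):
        super(MiniConsole, self).__init__(parent)
        self.view = QPlainTextEdit(self)
        self.view.setReadOnly(True)
        self.input = QLineEdit(self)
        layout = QVBoxLayout(self)
        layout.addWidget(self.view)
        layout.addWidget(self.input)
        self.input.returnPressed.connect(self.execute)

    def execute(self):
        # Echo the prompt plus the command; a real widget would dispatch to
        # execute()/getRunType() style handling as above.
        text = self.input.text()
        self.input.clear()
        self.view.appendPlainText(">>> " + text)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    console = MiniConsole()
    console.show()
    sys.exit(app.exec_())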
I'm trying to make a program which takes an executable name as an argument, runs the executable, and reports the inputs and outputs for that run. For example, consider a child program named "circle". The following would be a desired run of my program:
$ python3 capture_io.py ./circle
Enter radius of circle: 10
Area: 314.158997
[('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', 'Area: 314.158997\n')]
I decided to use the pexpect module for this job. It has a method called interact which lets the user interact with the child program as seen above. It also takes two optional parameters: output_filter and input_filter. From the documentation:
The output_filter will be passed all the output from the child process. The input_filter will be passed all the keyboard input from the user.
So this is the code I wrote:
capture_io.py
import sys
import pexpect
_stdios = []
def read(data):
_stdios.append(("output", data.decode("utf8")))
return data
def write(data):
_stdios.append(("input", data.decode("utf8")))
return data
def capture_io(argv):
_stdios.clear()
child = pexpect.spawn(argv)
child.interact(input_filter=write, output_filter=read)
child.wait()
return _stdios
if __name__ == '__main__':
stdios_of_child = capture_io(sys.argv[1:])
print(stdios_of_child)
circle.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
float radius, area;
printf("Enter radius of circle: ");
scanf("%f", &radius);
if (radius < 0) {
fprintf(stderr, "Negative radius values are not allowed.\n");
exit(1);
}
area = 3.14159 * radius * radius;
printf("Area: %f\n", area);
return 0;
}
Which produces the following output:
$ python3 capture_io.py ./circle
Enter radius of circle: 10
Area: 314.158997
[('output', 'Enter radius of circle: '), ('input', '1'), ('output', '1'), ('input', '0'), ('output', '0'), ('input', '\r'), ('output', '\r\n'), ('output', 'Area: 314.158997\r\n')]
As you can observe from the output, the input is processed character by character and also echoed back as output, which creates quite a mess. Is it possible to change this behaviour so that my input_filter will run only when Enter is pressed?
Or more generally, what would be the best way to achieve my goal (with or without pexpect)?
When I started to write a helper, I realized that the main problem is that the input should be logged line-buffered, so that backspaces and other editing are applied before the input reaches the program, while the output should be unbuffered in order to log the prompt, which is not terminated by a newline.
To capture the output for logging, a pipe is needed, but a pipe automatically switches the program's stdout to block buffering. It is known that a pseudoterminal solves the problem (the pexpect module is built around a pseudoterminal), but a terminal covers both the input and the output, and we want to unbuffer only the output.
Fortunately there is the stdbuf utility. On Linux it alters the buffering of the C library's stdio in dynamically linked executables, so it is not universally usable.
I have modified a Python bidirectional copy program to log the data it copies. Combined with stdbuf, it produces the desired output.
import select
import os
STDIN = 0
STDOUT = 1
BUFSIZE = 4096
def main(cmd):
ipipe_r, ipipe_w = os.pipe()
opipe_r, opipe_w = os.pipe()
if os.fork():
# parent
os.close(ipipe_r)
os.close(opipe_w)
fdlist_r = [STDIN, opipe_r]
while True:
ready_r, _, _ = select.select(fdlist_r, [], [])
if STDIN in ready_r:
# STDIN -> program
data = os.read(STDIN, BUFSIZE)
if data:
yield('in', data) # optional: convert to str
os.write(ipipe_w, data)
else:
# send EOF
fdlist_r.remove(STDIN)
os.close(ipipe_w)
if opipe_r in ready_r:
# program -> STDOUT
data = os.read(opipe_r, BUFSIZE)
if not data:
# got EOF
break
yield('out', data)
os.write(STDOUT, data)
os.wait()
else:
# child
os.close(ipipe_w)
os.close(opipe_r)
os.dup2(ipipe_r, STDIN)
os.dup2(opipe_w, STDOUT)
os.execlp(*cmd)
# not reached
os._exit(127)
if __name__ == '__main__':
log = list(main(['stdbuf', 'stdbuf', '-o0', './circle']))
print(log)
It prints:
[('out', b'Enter radius of circle: '), ('in', b'12\n'), ('out', b'Area: 452.388947\n')]
Is it possible to change this behaviour so that my input_filter will run only when Enter is pressed?
Yes, you can do it by inheriting from pexpect.spawn and overriding the interact method. I will come to that soon.
As VPfB pointed out in their answer, you can't use a pipe, and I think it's worth mentioning that this issue is also addressed in pexpect's documentation.
You said that:
... input is processed character by character and also echoed back as output ...
If you examine the source code of interact, you can see this line:
tty.setraw(self.STDIN_FILENO)
This will set your terminal to raw mode:
input is available character by character, ..., and all special processing of terminal input and output characters is disabled.
That is why your input_filter function runs for every key press and sees backspaces and other special characters. If you could comment out this line, you would see something like this when you run your program:
$ python3 test.py ./circle
Enter radius of circle: 10
10
Area: 314.158997
[('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', '10\r\n'), ('output', 'Area: 314.158997\r\n')]
And this would also let you edit the input (i.e. 12[Backspace]0 would give the same result). But as you can see, it still echoes the input. This can be disabled by clearing a simple flag on the child's terminal:
mode = tty.tcgetattr(self.child_fd)
mode[3] &= ~termios.ECHO
tty.tcsetattr(self.child_fd, termios.TCSANOW, mode)
Running with the latest changes:
$ python3 test.py ./circle
Enter radius of circle: 10
Area: 314.158997
[('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', 'Area: 314.158997\r\n')]
Bingo! Now you can inherit from pexpect.spawn and override the interact method with these changes, or implement the same thing using Python's builtin pty module:
with pty:
import os
import pty
import sys
import termios
import tty
_stdios = []
def _read(fd):
data = os.read(fd, 1024)
_stdios.append(("output", data.decode("utf8")))
return data
def _stdin_read(fd):
data = os.read(fd, 1024)
_stdios.append(("input", data.decode("utf8")))
return data
def _spawn(argv):
pid, master_fd = pty.fork()
if pid == pty.CHILD:
os.execlp(argv[0], *argv)
mode = tty.tcgetattr(master_fd)
mode[3] &= ~termios.ECHO
tty.tcsetattr(master_fd, termios.TCSANOW, mode)
try:
pty._copy(master_fd, _read, _stdin_read)
except OSError:
pass
os.close(master_fd)
return os.waitpid(pid, 0)[1]
def capture_io_and_return_code(argv):
_stdios.clear()
return_code = _spawn(argv)
return _stdios, return_code >> 8
if __name__ == '__main__':
stdios, ret = capture_io_and_return_code(sys.argv[1:])
print(stdios)
with pexpect:
import sys
import termios
import tty
import pexpect
_stdios = []
def read(data):
_stdios.append(("output", data.decode("utf8")))
return data
def write(data):
_stdios.append(("input", data.decode("utf8")))
return data
class CustomSpawn(pexpect.spawn):
def interact(self, escape_character=chr(29),
input_filter=None, output_filter=None):
self.write_to_stdout(self.buffer)
self.stdout.flush()
self._buffer = self.buffer_type()
mode = tty.tcgetattr(self.child_fd)
mode[3] &= ~termios.ECHO
tty.tcsetattr(self.child_fd, termios.TCSANOW, mode)
if escape_character is not None and pexpect.PY3:
escape_character = escape_character.encode('latin-1')
self._spawn__interact_copy(escape_character, input_filter, output_filter)
def capture_io_and_return_code(argv):
_stdios.clear()
child = CustomSpawn(argv)
child.interact(input_filter=write, output_filter=read)
child.wait()
return _stdios, child.status >> 8
if __name__ == '__main__':
stdios, ret = capture_io_and_return_code(sys.argv[1:])
print(stdios)
I don't think you'll be able to do that easily, however, I think this should work for you:
output_buffer = b''
def read(data):
    # The filters receive raw bytes, so buffer bytes and decode only when logging.
    global output_buffer
    output_buffer += data
    if data == b'\r':
        _stdios.append(("output", output_buffer.decode("utf8")))
        output_buffer = b''
    return data
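The same buffering idea can be applied to the input side, which is what the question actually asks about. A sketch under the same assumptions (_stdios and the write input_filter are the ones from capture_io.py above; pexpect passes the filters raw bytes):

input_buffer = b''
def write(data):
    # Accumulate raw keystrokes and only log the line once Enter ('\r') arrives.
    global input_buffer
    input_buffer += data
    if b'\r' in data:
        _stdios.append(("input", input_buffer.decode("utf8")))
        input_buffer = b''
    return data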
I am working on my own project. In which these steps have to be performed:
Connect to remote server.
Get the pid, process name, CPU usage, and swap memory usage of each running process on the remote server daily at some specific time (say at 4 o'clock).
I have to compare each day's result with the previous day's result (e.g. day 1 pid with day 2 pid, day 1 process name with day 2 process name, etc.).
So far I have done up to step 2. Now I want to know how to extract the pid, process name, CPU usage, and swap memory usage from the remote server and store them in some iterable variable, so that I can compare them to check for memory spikes.
Any other approach apart from my idea would also be appreciated.
My code sample is like this:
import paramiko
import re
import psutil
class ShellHandler:
def __init__(self, host, user, psw):
self.ssh = paramiko.SSHClient()
self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.ssh.connect(host, username=user, password=psw, port=22)
channel = self.ssh.invoke_shell()
self.stdin = channel.makefile('wb')
self.stdout = channel.makefile('r')
def __del__(self):
self.ssh.close()
@staticmethod
def _print_exec_out(cmd, out_buf, err_buf, exit_status):
print('command executed: {}'.format(cmd))
print('STDOUT:')
for line in out_buf:
print(line, end="")
print('end of STDOUT')
print('STDERR:')
for line in err_buf:
print(line, end="")
print('end of STDERR')
print('finished with exit status: {}'.format(exit_status))
print('------------------------------------')
#print(psutil.pids())
pass
def execute(self, cmd):
"""
:param cmd: the command to be executed on the remote computer
:examples: execute('ls')
execute('finger')
execute('cd folder_name')
"""
cmd = cmd.strip('\n')
self.stdin.write(cmd + '\n')
finish = 'end of stdOUT buffer. finished with exit status'
echo_cmd = 'echo {} $?'.format(finish)
self.stdin.write(echo_cmd + '\n')
shin = self.stdin
self.stdin.flush()
shout = []
sherr = []
exit_status = 0
for line in self.stdout:
if str(line).startswith(cmd) or str(line).startswith(echo_cmd):
# up for now filled with shell junk from stdin
shout = []
elif str(line).startswith(finish):
# our finish command ends with the exit status
exit_status = int(str(line).rsplit(maxsplit=1)[1])
if exit_status:
# stderr is combined with stdout.
# thus, swap sherr with shout in a case of failure.
sherr = shout
shout = []
break
else:
# get rid of 'coloring and formatting' special characters
shout.append(re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[#-~]').sub('', line).replace('\b', '').replace('\r', ''))
# first and last lines of shout/sherr contain a prompt
if shout and echo_cmd in shout[-1]:
shout.pop()
if shout and cmd in shout[0]:
shout.pop(0)
if sherr and echo_cmd in sherr[-1]:
sherr.pop()
if sherr and cmd in sherr[0]:
sherr.pop(0)
self._print_exec_out(cmd=cmd, out_buf=shout, err_buf=sherr, exit_status=exit_status)
return shin, shout, sherr
obj=ShellHandler('Servername','username','password')
pID=[]
## I want this (pid, cmd, swap memory) to be stored in a variable which would be iterable.
pID=ShellHandler.execute(obj,"ps -eo pid,cmd,lstart,%mem,%cpu|awk '{print $1}'")
print(pID[0])  ## Problem: this is not giving any output.
Your ShellHandler's execute method returns three items, the first of which is the input you sent to it.
You should probably call it directly like this, anyway:
obj = ShellHandler('Servername', 'username', 'password')
stdin, out, err = obj.execute("ps -eo pid,lstart,%mem,%cpu,cmd")  # "in" is a reserved word, so unpack into stdin
for line in out:  # execute() returns the output as a list of lines
    pid, lstartwd, lstartmo, lstartdd, lstartm, lstartyy, mem, cpu, cmd = line.split(None, 8)
I moved cmd last because it might contain spaces. The lstart value also contains multiple space-separated fields. Here's what the output looks like in Debian:
19626 Tue Jan 15 15:03:57 2019 0.0 0.0 less filename
There are many questions about how to parse ps output in more detail; I'll refer you to them for figuring out how to handle the results from split exactly, and the sketch after the links below shows one way to keep the parsed rows for a day-to-day comparison.
Splitting out the output of ps using Python
Is there any way to get ps output programmatically?
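To keep the results in an iterable structure for the day-to-day comparison mentioned in the question, one option is to turn each day's output into a dict keyed by pid. This is only a sketch: snapshot and the 1.0 threshold are made-up names, and the field positions assume the ps format string used above.

def snapshot(out_lines):
    """Parse "ps -eo pid,lstart,%mem,%cpu,cmd" lines into {pid: (mem, cpu, cmd)}."""
    procs = {}
    for line in out_lines:
        fields = line.split(None, 8)
        if len(fields) < 9 or not fields[0].isdigit():
            continue  # skip the header line or anything malformed
        pid, mem, cpu, cmd = fields[0], fields[6], fields[7], fields[8]
        procs[int(pid)] = (float(mem), float(cpu), cmd.strip())
    return procs

# today = snapshot(out)
# ... the next day ...
# for pid in set(today) & set(yesterday):
#     if today[pid][0] - yesterday[pid][0] > 1.0:  # arbitrary %mem threshold
#         print("possible memory spike in pid", pid, today[pid][2])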
The ps aux command should have all the info you need (pid, process name, CPU, memory).
I know only the very basics of Python. I have this project for my INFORMATION STORAGE AND MANAGEMENT subject. I have to give an explanation of the following code.
I searched every command used in this script but could not find most of them. The code can be found here:
import glob
import json
import os
import re
import string
import sys
from oslo.config import cfg
from nova import context
from nova.db.sqlalchemy import api as db_api
from nova.db.sqlalchemy import models
from nova import utils
CONF = cfg.CONF
def usage():
print("""
Usage:
python %s --config-file /etc/nova/nova.conf
Note: This script intends to clean up the iSCSI multipath faulty devices
hosted by VNX Block Storage.""" % sys.argv[0])
class FaultyDevicesCleaner(object):
def __init__(self):
# Get host name of Nova computer node.
self.host_name = self._get_host_name()
def _get_host_name(self):
(out, err) = utils.execute('hostname')
return out
def _get_ncpu_emc_target_info_list(self):
target_info_list = []
# Find the targets used by VM on the compute node
bdms = db_api.model_query(context.get_admin_context(),
models.BlockDeviceMapping,
session = db_api.get_session())
bdms = bdms.filter(models.BlockDeviceMapping.connection_info != None)
bdms = bdms.join(models.BlockDeviceMapping.instance).filter_by(
host=string.strip(self.host_name))
for bdm in bdms:
conn_info = json.loads(bdm.connection_info)
if 'data' in conn_info:
if 'target_iqns' in conn_info['data']:
target_iqns = conn_info['data']['target_iqns']
target_luns = conn_info['data']['target_luns']
elif 'target_iqn' in conn_info['data']:
target_iqns = [conn_info['data']['target_iqn']]
target_luns = [conn_info['data']['target_lun']]
else:
target_iqns = []
target_luns = []
for target_iqn, target_lun in zip(target_iqns, target_luns):
if 'com.emc' in target_iqn:
target_info = {
'target_iqn': target_iqn,
'target_lun': target_lun,
}
target_info_list.append(target_info)
return target_info_list
def _get_ncpu_emc_target_info_set(self):
target_info_set = set()
for target_info in self._get_ncpu_emc_target_info_list():
target_iqn = target_info['target_iqn']
target_lun = target_info['target_lun']
target_info_key = "%s-%s" % (target_iqn.rsplit('.', 1)[0],
target_lun)
# target_iqn=iqn.1992-04.com.emc:cx.fnm00130200235.a7
# target_lun=203
# target_info_key=iqn.1992-04.com.emc:cx.fnm00130200235-203
target_info_set.add(target_info_key)
return target_info_set
def _get_target_info_key(self, path):
temp_tuple = path.split('-lun-', 1)
target_lun = temp_tuple[1]
target_iqn = temp_tuple[0].split('-iscsi-')[1]
target_info_key = "%s-%s" % (target_iqn.rsplit('.', 1)[0], target_lun)
# path=/dev/disk/by-path/ip-192.168.3.52:3260-iscsi-iqn.1992-
# 04.com.emc:cx.fnm00130200235.a7-lun-203
# target_info_key=iqn.1992-04.com.emc:cx.fnm00130200235-203
return target_info_key
def _get_non_ncpu_target_info_map(self):
# Group the paths by target_info_key
ncpu_target_info_set = self._get_ncpu_emc_target_info_set()
device_paths = self._get_emc_device_paths()
target_info_map = {}
for path in device_paths:
target_info_key = self._get_target_info_key(path)
if target_info_key in ncpu_target_info_set:
continue
if target_info_key not in target_info_map:
target_info_map[target_info_key] = []
target_info_map[target_info_key].append(path)
return target_info_map
def _all_related_paths_faulty(self, paths):
for path in paths:
real_path = os.path.realpath(path)
out, err = self._run_multipath(['-ll', real_path],
run_as_root=True,
check_exit_code=False)
if 'active ready' in out:
# At least one path is still working
return False
return True
def _delete_all_related_paths(self, paths):
for path in paths:
real_path = os.path.realpath(path)
device_name = os.path.basename(real_path)
device_delete = '/sys/block/%s/device/delete' % device_name
if os.path.exists(device_delete):
# Copy '1' from stdin to the device delete control file
utils.execute('cp', '/dev/stdin', device_delete,
process_input='1', run_as_root=True)
else:
print "Unable to delete %s" % real_path
def _cleanup_faulty_paths(self):
non_ncpu_target_info_map = self._get_non_ncpu_target_info_map()
for paths in non_ncpu_target_info_map.itervalues():
if self._all_related_paths_faulty(paths):
self._delete_all_related_paths(paths)
def _cleanup_faulty_dm_devices(self):
out_ll, err_ll = self._run_multipath(['-ll'],
run_as_root=True,
check_exit_code=False)
# Pattern to split the dm device contents as follows
# Each section starts with a WWN and ends with a line with
# " `-" as the prefix
#
# 3600601601bd032007c097518e96ae411 dm-2 ,
# size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
# `-+- policy='round-robin 0' prio=0 status=active
# `- #:#:#:# - #:# active faulty running
# 36006016020d03200bb93e048f733e411 dm-0 DGC,VRAID
# size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
# |-+- policy='round-robin 0' prio=130 status=active
# | |- 3:0:0:2 sdd 8:48 active ready running
# | `- 5:0:0:2 sdj 8:144 active ready running
# `-+- policy='round-robin 0' prio=10 status=enabled
# |- 4:0:0:2 sdg 8:96 active ready running
# `- 6:0:0:2 sdm 8:192 active ready running
dm_pat = r'([0-9a-fA-F]{30,})[^\n]+,[^\n]*\n[^,]* `-[^\n]*'
dm_m = re.compile(dm_pat)
path_pat = r'- \d+:\d+:\d+:\d+ '
path_m = re.compile(path_pat)
for m in dm_m.finditer(out_ll):
if not path_m.search(m.group(0)):
# Only #:#:#:# remain in the output, all the paths of the dm
# device should have been deleted. No need to keep the device
out_f, err_f = self._run_multipath(['-f', m.group(1)],
run_as_root=True,
check_exit_code=False)
def cleanup(self):
self._cleanup_faulty_paths()
# Make sure the following configuration is in /etc/multipath.conf
# Otherwise, there may be "map in use" failure when deleting
# dm device
#
# defaults {
# flush_on_last_del yes
# }
#
self._cleanup_faulty_dm_devices()
def _get_emc_device_paths(self):
# Find all the EMC iSCSI devices under /dev/disk/by-path
# except LUNZ and partition reference
pattern = '/dev/disk/by-path/ip-*-iscsi-iqn*com.emc*-lun-*'
device_paths = [path for path in glob.glob(pattern)
if ('lun-0' not in path and '-part' not in path)]
return device_paths
def _run_multipath(self, multipath_command, **kwargs):
check_exit_code = kwargs.pop('check_exit_code', 0)
(out, err) = utils.execute('multipath',
*multipath_command,
run_as_root=True,
check_exit_code=check_exit_code)
print ("multipath %(command)s: stdout=%(out)s stderr=%(err)s"
% {'command': multipath_command, 'out': out, 'err': err})
return out, err
if __name__ == "__main__":
if len(sys.argv) != 3 or sys.argv[1] != '--config-file':
usage()
exit(1)
out, err = utils.execute('which', 'multipath', check_exit_code=False)
if 'multipath' not in out:
print('Info: Multipath tools not installed. No cleanup need be done.')
exit(0)
multipath_flush_on_last_del = False
multipath_conf_path = "/etc/multipath.conf"
if os.path.exists(multipath_conf_path):
flush_on_last_del_yes = re.compile(r'\s*flush_on_last_del.*yes')
for line in open(multipath_conf_path, "r"):
if flush_on_last_del_yes.match(line):
multipath_flush_on_last_del = True
break
if not multipath_flush_on_last_del:
print("Warning: 'flush_on_last_del yes' is not seen in"
" /etc/multipath.conf."
" 'map in use' failure may show up during cleanup.")
CONF(sys.argv[1:])
# connect_volume and disconnect_volume in nova/virt/libvirt/volume.py
# need be adjusted to take the same 'external=True' lock for
# synchronization
#utils.synchronized('connect_volume', external=True)
def do_cleanup():
cleaner = FaultyDevicesCleaner()
cleaner.cleanup()
do_cleanup()
https://wiki.python.org/moin/BeginnersGuide/Programmers
http://www.astro.ufl.edu/~warner/prog/python.html
This looks like Python 2 code (note the print statements and dict.itervalues()), so go for the Python 2 tutorials.
Try downloading an IDE; eric5 is good, by the way.
Try executing this file once.
Learn about indentation and dynamic variable declaration.
Do not jump into the ocean first; try the swimming pool :)
Also try to learn method declaration. Python is a bit different from Java.
I will give you a hint: it looks like system calls are also made to execute OS commands, so look at subprocess and how its output is directed to an output stream and an error stream; a small sketch follows below.
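For instance, here is a standalone sketch of the general pattern using the standard subprocess module; the script itself goes through nova's utils.execute wrapper, which adds things like run_as_root and check_exit_code on top, so this is only an illustration:

import subprocess

# Roughly the same idea as utils.execute('hostname') in the script: run a
# command and capture its output stream and error stream separately.
proc = subprocess.Popen(['hostname'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print('exit status: %d' % proc.returncode)
print('stdout: %r' % out)
print('stderr: %r' % err)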
I'm using Esky with my frozen app. The following properties and methods are available on the Esky class:
app.version: the current best available version.
app.active_version: the currently-executing version, or None
if the esky isn't for the current app.
app.find_update(): find the best available update, or None
if no updates are available.
app.fetch_version(v): fetch the specified version into local storage.
app.install_version(v): install and activate the specified version.
Now, that's nice and all, but I want to show the progress of the download task in my GUI.
How can I achieve that?
wxPython has wrapped Esky in its own SoftwareUpdate class:
https://github.com/wxWidgets/wxPython/blob/master/wx/lib/softwareupdate.py
In their implementation, the application checks for new versions, and asks the user if they'd like to update (using wx GUI for interaction). If the user chooses to update, the code simply calls esky's auto_update() method to handle the rest, but they provide it with an _updateProgress method which updates a progress bar and provides messages indicating Esky's progress:
self._esky.auto_update(self._updateProgress)
...
def _updateProgress(self, status):
# Show progress of the download and install. This function is passed to Esky
# functions to use as a callback.
if self._pd is None and status.get('status') != 'done':
self._pd = wx.ProgressDialog('Software Update', ' '*40,
style=wx.PD_CAN_ABORT|wx.PD_APP_MODAL,
parent=self._parentWindow)
self._pd.Update(0, '')
if self._parentWindow:
self._pd.CenterOnParent()
simpleMsgMap = { 'searching' : 'Searching...',
'retrying' : 'Retrying...',
'ready' : 'Download complete...',
'installing' : 'Installing...',
'cleaning up' : 'Cleaning up...',}
if status.get('status') in simpleMsgMap:
self._doUpdateProgress(True, simpleMsgMap[status.get('status')])
elif status.get('status') == 'found':
self._doUpdateProgress(True, 'Found version %s...' % status.get('new_version'))
elif status.get('status') == 'downloading':
received = status.get('received')
size = status.get('size')
currentPercentage = 1.0 * received / size * 100
if currentPercentage > 99.5:
self._doUpdateProgress(False, "Unzipping...", int(currentPercentage))
else:
self._doUpdateProgress(False, "Downloading...", int(currentPercentage))
elif status.get('status') == 'done':
if self._pd:
self._pd.Destroy()
self._pd = None
wx.Yield()
def _doUpdateProgress(self, pulse, message, value=0):
if pulse:
keepGoing, skip = self._pd.Pulse(message)
else:
keepGoing, skip = self._pd.Update(value, message)
if not keepGoing: # user pressed the cancel button
self._pd.Destroy()
self._pd = None
raise UpdateAbortedError()
The code above was taken directly from https://github.com/wxWidgets/wxPython/blob/master/wx/lib/softwareupdate.py.
This feature is documented in the Esky source, file: __init__.py, line: 689.
The code itself shows what your callback can expect to see throughout an update. Here are some excerpts where your callback would be called:
callback({"status":"searching"})
callback({"status":"found", "new_version":version})
callback({"status":"installing", "new_version":version})
for status in self.sudo_proxy.fetch_version_iter(version):
if callback is not None:
callback(status)
callback({"status":"cleaning up"})
callback({"status":"cleaning up"})
callback({"status":"error","exception":e})
callback({"status":"done"})
Not properly documented, there is also the fetch_version_iter generator function:
fetch_version_iter: like fetch_version but yielding progress updates
during its execution
It yields the following values:
yield {"status": "downloading",
"size": infile_size,
"received": partfile.tell(),
}
yield {"status":"retrying","size":None}
yield {"status":"ready","path":name}
Also, you can get the filename like this:
app.version_finder.version_graph.get_best_path(app.version,v)
I am a Python re-newbie. I would like advice on handling program parameters which are in a file in json format. Currently, I am doing something like what is shown below, however, it seems too wordy, and the idea of typing the same literal string multiple times (sometimes with dashes and sometimes with underscores) seems juvenile - error prone - stinky... :-) (I do have many more parameters!)
#!/usr/bin/env python
import sys
import os
import json ## for control file parsing
# control parameters
mpi_nodes = 1
cluster_size = None
initial_cutoff = None
# ...
#process the arguments
if len(sys.argv) != 2:
raise Exception(
"""Usage:
run_foo <controls.json>
Where:
<control.json> is a dictionary of run parameters
"""
)
# We expect a .json file with our parameters
controlsFileName = sys.argv[1]
err = ""
err += "" #validateFileArgument(controlsFileName, exists=True)
# read in the control parameters from the .json file
try:
controls = json.load(open(controlsFileName, "r"))
except:
err += "Could not process the file '" + controlsFileName + "'!\n"
# check each control parameter. The first one is optional
if "mpi-nodes" in controls:
mpi_nodes = controls["mpi-nodes"]
else:
mpi_nodes = controls["mpi-nodes"] = 1
if "cluster-size" in controls:
cluster_size = controls["cluster-size"]
else:
err += "Missing control definition for \"cluster-size\".\n"
if "initial-cutoff" in controls:
initial_cutoff = controls["initial-cutoff"]
else:
err += "Missing control definition for \"initial-cutoff\".\n"
# ...
# Quit if any of these things were not true
if len(err) > 0:
print err
exit()
#...
This works, but it seems like there must be a better way. I am stuck with the requirements to use a json file and to use the hyphenated parameter names. Any ideas?
I was looking for something with more static binding. Perhaps this is as good as it gets.
Usually, we do things like this.
def get_parameters( some_file_name ):
    # json.loads parses a string; to read from a file name, open it and use json.load
    with open( some_file_name ) as source_file:
        source = json.load( source_file )
    return dict(
        mpi_nodes= source.get('mpi-nodes',1),
        cluster_size= source['cluster-size'],
        initial_cutoff = source['initial-cutoff'],
    )
controlsFileName= sys.argv[1]
try:
params = get_parameters( controlsFileName )
except IOError:
print "Could not process the file '{0}'!".format( controlsFileName )
sys.exit( 1 )
except KeyError, e:
print "Missing control definition for '{0}'.".format( e.message )
sys.exit( 2 )
At the end, params['mpi_nodes'] has the value of mpi_nodes.
If you want a simple variable, you do this: mpi_nodes = params['mpi_nodes']
If you want a namedtuple (from the collections module), change get_parameters like this:
def get_parameters( some_file_name ):
    from collections import namedtuple
    Parameters = namedtuple( 'Parameters', 'mpi_nodes, cluster_size, initial_cutoff' )
    with open( some_file_name ) as source_file:
        source = json.load( source_file )
    return Parameters( source.get('mpi-nodes',1),
        source['cluster-size'],
        source['initial-cutoff'],
    )
I don't know if you'd find that better or not.
The argparse library is nice; it can handle most of the argument parsing and validation for you, as well as printing pretty help screens.
[1] http://docs.python.org/dev/library/argparse.html
I will knock up a quick demo showing how you'd want to use it this arvo.
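Until then, here is a rough sketch of what such a demo might look like; argparse only replaces the manual sys.argv check and the usage message, and the hyphenated keys are still read with the json module:

import argparse
import json

parser = argparse.ArgumentParser(description="Run foo with a JSON control file.")
parser.add_argument("controls", metavar="controls.json",
                    help="dictionary of run parameters in JSON format")
args = parser.parse_args()

with open(args.controls) as f:
    controls = json.load(f)
mpi_nodes = controls.get("mpi-nodes", 1)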
Assuming you have many more parameters to process, something like this could work:
def underscore(s):
return s.replace('-','_')
# parameters with default values
for name, default in (("mpi-nodes", 1),):
globals()[underscore(name)] = controls.get(name, default)
# mandatory parameters
for name in ("cluster-size", "initial-cutoff"):
try:
globals()[underscore(name)] = controls[name]
except KeyError:
err += "Missing control definition for %r" % name
Instead of manipulating globals, you can also make this more explicit:
def underscore(s):
return s.replace('-','_')
settings = {}
# parameters with default values
for name, default in (("mpi-nodes", 1),):
settings[underscore(name)] = controls.get(name, default)
# mandatory parameters
for name in ("cluster-size", "initial-cutoff"):
try:
settings[underscore(name)] = controls[name]
except KeyError:
err += "Missing control definition for %r" % name
# print out err if necessary
mpi_nodes = settings['mpi_nodes']
cluster_size = settings['cluster_size']
initial_cutoff = settings['initial_cutoff']
I learned something from all of these responses - thanks! I would like to get feedback on my approach which incorporates something from each suggestion. In addition to the conditions imposed by the client, I want something:
1) that is fairly obvious to use and to debug
2) that is easy to maintain and modify
I decided to incorporate str.replace, namedtuple, and globals(), creating a ControlParameters namedtuple in the globals() namespace.
#!/usr/bin/env python
import sys
import os
import collections
import json
def get_parameters(parameters_file_name ):
"""
Access all of the control parameters from the json filename given. A
variable of type namedtuple named "ControlParameters" is injected
into the global namespace. Parameter validation is not performed. Both
the names and the defaults, if any, are defined herein. Parameters not
found in the json file will get values of None.
Parameter usage example: ControlParameters.cluster_size
"""
parameterValues = json.load(open(parameters_file_name, "r"))
Parameters = collections.namedtuple( 'Parameters',
"""
mpi_nodes
cluster_size
initial_cutoff
truncation_length
"""
)
parameters = Parameters(
parameterValues.get(Parameters._fields[0].replace('_', '-'), 1),
parameterValues.get(Parameters._fields[1].replace('_', '-')),
parameterValues.get(Parameters._fields[2].replace('_', '-')),
parameterValues.get(Parameters._fields[3].replace('_', '-'))
)
globals()["ControlParameters"] = parameters
#process the program argument(s)
err = ""
if len(sys.argv) != 2:
raise Exception(
"""Usage:
foo <control.json>
Where:
<control.json> is a dictionary of run parameters
"""
)
# We expect a .json file with our parameters
parameters_file_name = sys.argv[1]
err += "" #validateFileArgument(parameters_file_name, exists=True)
if err == "":
get_parameters(parameters_file_name)
cp_dict = ControlParameters._asdict()
for name in ControlParameters._fields:
if cp_dict[name] is None:
err += "Missing control parameter '%s'\r\n" % name
print err
print "Done"