I'm trying to develop a Python based wrapper around the Xilinx ISE TCL shell xtclsh.exe. If it works, I'll add support for other shells like PlanAhead or Vivado ...
So what's the big picture? I have a list of VHDL source files, which form an IP core. I would like to open an existing ISE project, search for missing VHDL files and add them if necessary. Because IP cores have overlapping file dependencies, it's possible that a project already contains some files, so I'm only looking for missing files.
The example uses Python 3.x and subprocess with pipes. xtclsh.exe is launched and commands are sent line by line to the shell. The output is monitored for results. To simplify the example, I redirected STDERR to STDOUT. A dummy output POC_BOUNDARY is inserted into the command stream to indicate completed commands.
The attached example code can be tested by setting up an example ISE project, which has some VHDL source files.
My problem is that INFO, WARNING and ERROR messages are displayed, but the results from the TCL commands can not be read by the script.
Manually executing search *.vhdl -type file in xtclsh.exe results in:
% search *.vhdl -type file
D:/git/PoC/src/common/config.vhdl
D:/git/PoC/src/common/utils.vhdl
D:/git/PoC/src/common/vectors.vhdl
Executing the script results in:
....
press ENTER for the next step
sending 'search *.vhdl -type file'
stdoutLine='POC_BOUNDARY
'
output consumed until boundary string
....
Questions:
Where does xtclsh write to?
How can I read the results from TCL commands?
Btw: The prompt sign % is also not visible to my script.
Python code to reproduce the behavior:
import subprocess

class XilinxTCLShellProcess(object):
    # executable = "sortnet_BitonicSort_tb.exe"
    executable = r"C:\Xilinx\14.7\ISE_DS\ISE\bin\nt64\xtclsh.exe"
    boundarString = "POC_BOUNDARY"
    boundarCommand = bytearray("puts {0}\n".format(boundarString), "ascii")

    def create(self, arguments):
        sysargs = []
        sysargs.append(self.executable)
        self.proc = subprocess.Popen(sysargs, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        self.sendBoundardCommand()

        while (True):
            stdoutLine = self.proc.stdout.readline().decode()
            if (self.boundarString in stdoutLine):
                break
        print("found boundary string")

    def terminate(self):
        self.proc.terminate()

    def sendBoundardCommand(self):
        self.proc.stdin.write(self.boundarCommand)
        self.proc.stdin.flush()

    def sendCommand(self, line):
        command = bytearray("{0}\n".format(line), "ascii")
        self.proc.stdin.write(command)
        self.sendBoundardCommand()

    def sendLine(self, line):
        self.sendCommand(line)
        while (True):
            stdoutLine = self.proc.stdout.readline().decode()
            print("stdoutLine='{0}'".format(stdoutLine))
            if (stdoutLine == ""):
                print("reached EOF in stdout")
                break
            elif ("vhdl" in stdoutLine):
                print("found a file name")
            elif (self.boundarString in stdoutLine):
                print("output consumed until boundary string")
                break

def main():
    print("creating 'XilinxTCLShellProcess' instance")
    xtcl = XilinxTCLShellProcess()
    print("launching process")
    arguments = []
    xtcl.create(arguments)

    i = 1
    while True:
        print("press ENTER for the next step")
        from msvcrt import getch
        from time import sleep
        sleep(0.1)  # 0.1 seconds
        key = ord(getch())

        if key == 27:    # ESC
            print("aborting")
            print("sending 'exit'")
            xtcl.sendLine("exit")
            break
        elif key == 13:  # ENTER
            if (i == 1):
                #print("sending 'project new test.xise'")
                #xtcl.sendLine("project new test.xise")
                print("sending 'project open PoCTest.xise'")
                xtcl.sendLine("project open PoCTest.xise")
                i += 1
            elif (i == 2):
                print("sending 'lib_vhdl get PoC files'")
                xtcl.sendLine("lib_vhdl get PoC files")
                i += 1
            elif (i == 3):
                print("sending 'search *.vhdl -type file'")
                xtcl.sendLine("search *.vhdl -type file")
                i += 1
            elif (i == 4):
                print("sending 'xfile add ../../src/common/strings.vhdl -lib_vhdl PoC -view ALL'")
                xtcl.sendLine("xfile add ../../src/common/strings.vhdl -lib_vhdl PoC -view ALL")
                i += 16
            elif (i == 20):
                print("sending 'project close'")
                xtcl.sendLine("project close")
                i += 1
            elif (i == 21):
                print("sending 'exit'")
                xtcl.sendCommand("exit")
                break

    print("exit main()")
    xtcl.terminate()
    print("the end!")

# entry point
if __name__ == "__main__":
    main()
I have tried several approaches on Linux, but it seems that xtclsh detects whether standard input is connected to a pipe or a (pseudo) terminal. If it is connected to a pipe, xtclsh suppresses any output that would normally be written to standard output (prompt output, command results). I think the same applies to Windows.
Messages (whether informative, warning or error) which are printed on standard error still go there even if the input is connected to a pipe.
To get the command results printed on standard output you can use the Tcl puts command, which always prints on standard output. That is, puts [command] takes the result of command and always prints it to standard output.
Example: let's assume we have a test.xise project with two files: the top-level entity in test.vhd and the testbench in test_tb.vhd. We want to list all files in the project using this Tcl script (commands.tcl):
puts [project open test]
puts "-----------------------------------------------------------------------"
puts [search *.vhd]
exit
Then the call xtclsh < commands.tcl 2> error.log prints this on standard output:
test
-----------------------------------------------------------------------
/home/zabel/tmp/test/test.vhd
/home/zabel/tmp/test/test_tb.vhd
And this is printed on standard error (into file error.log):
INFO:HDLCompiler:1061 - Parsing VHDL file "/home/zabel/tmp/test/test.vhd" into
library work
INFO:ProjectMgmt - Parsing design hierarchy completed successfully.
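To tie this back to the original Python wrapper, here is a minimal sketch (not the asker's code; the project name, file pattern and the POC_BOUNDARY marker are placeholders carried over from the question, and xtclsh.exe is assumed to be on the PATH). It feeds a puts-wrapped script to xtclsh through a stdin pipe and collects the results from stdout:
import subprocess

# Minimal sketch: every Tcl command is wrapped in puts so its result reaches
# standard output even though standard input is a pipe.
tcl_script = "\n".join([
    "puts [project open test]",
    "puts {POC_BOUNDARY}",                 # marker between command results
    "puts [search *.vhdl -type file]",
    "puts {POC_BOUNDARY}",
    "exit",
]) + "\n"

proc = subprocess.Popen(
    ["xtclsh.exe"],                        # assumption: xtclsh.exe is on the PATH
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,                # keep INFO/WARNING/ERROR messages separate
)
out, err = proc.communicate(tcl_script.encode("ascii"))

# Split stdout into one block per command using the boundary marker.
for block in out.decode().split("POC_BOUNDARY"):
    print(block.strip())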
Related
I am using Pexpect to run a command remotely on a server and saving the output in a file. However, it does not save the whole output as it's truncated due to --More-- . Is there a way to avoid --More--, so that the whole output is saved in the output file?
I have tried using child.setwinsize(1000,1000) but it didn't solve the issue.
Current code:
import pexpect
import time
child = pexpect.spawn('ssh username@ip_address')
time.sleep(1)
child.sendline('password')
time.sleep(1)
child.logfile = open("output.txt", "w")
child.sendline('command')
child.expect(pexpect.EOF)
print child.before, child.after
child.close()
Not sure what command you're running but usually you can press SPACE when you see the --More-- prompt. For example:
import pexpect, sys
child = pexpect.spawn('more /etc/services')
child.logfile_read = sys.stdout
patterns = ['--More--', pexpect.EOF]
while True:
    ret = child.expect(patterns)
    if ret == 0:
        child.send(' ')
    elif ret == 1:
        break
I found one more approach: just execute the command below before the actual command.
terminal length 0
After that, suppose I enter a command like show ip interface. This will show the whole output; you don't need to press ENTER again and again. For example:
child.sendline('terminal length 0')
child.expect('# ')
child.sendline('show ip interface') #write your command here
child.expect('# ')
This is the Python code which needs to be edited:
import os

while True:
    command=input("Entre Command:")
    if command==1:
        os.system("sudo python led_test.py")
    elif command==2:
        os.system("sudo /home/abhi/rpi_x4driver_Final/rpi_x4driver/Runme")
    elif command==3:
        os.system("aplay /home/abhi/C_music.wav")
The output comes from command 2 and needs to be saved in files (with dynamic names), each containing 200 lines. This means that I want to make batches of the output coming from
os.system("sudo /home/abhi/rpi_x4driver_Final/rpi_x4driver/Runme")
and save them in files with dynamic names.
I have tried this code, but it doesn't work:
command=input("Entre Command:")
if command==1:
    sys.exit()
elif command==2:
    proc = subprocess.Popen(["sudo/home/abhi/rpi_x4driver_Final/rpi_x4driver/Runme"],
                            stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.readline()
        if line != '':
            # save to a file instead of printing
            print "test:", line.rstrip()
        else:
            break
If you want to extract a part of the output from the stream and then save it, consider using subprocess.check_output as follows:
import subprocess
s = subprocess.check_output(['path/to/the/executable', '-option'])
s.decode('utf-8')
If you don't care about extracting, consider using shell redirection as suggested by @Someprogrammerdude with the > redirection in the shell. More details here: http://www.tldp.org/LDP/abs/html/io-redirection.html
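To address the actual requirement (batches of 200 lines in dynamically named files), here is a minimal sketch. It assumes the driver writes line-oriented output to stdout; the batch_N.txt naming scheme is only an example:
import subprocess

# Sketch only: read the driver's output line by line and rotate the output
# file every 200 lines.
proc = subprocess.Popen(
    ["sudo", "/home/abhi/rpi_x4driver_Final/rpi_x4driver/Runme"],
    stdout=subprocess.PIPE,
    universal_newlines=True,      # text mode, one str per line
)

batch_no = 0
lines_in_batch = 0
out_file = open("batch_{0}.txt".format(batch_no), "w")

for line in proc.stdout:
    out_file.write(line)
    lines_in_batch += 1
    if lines_in_batch == 200:     # 200 lines reached, start a new file
        out_file.close()
        batch_no += 1
        lines_in_batch = 0
        out_file = open("batch_{0}.txt".format(batch_no), "w")

out_file.close()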
I'm trying to interface with a program (GoGui - https://sourceforge.net/projects/gogui/) using the Go Text Protocol (GTP - http://www.lysator.liu.se/~gunnar/gtp/) which has its documentation here. (Link can also be found on the previous website.)
So I've written a bit of code to at least get GoGui to acknowledge the existence of my program:
import sys
import engine
import input_processing
for line in sys.stdin:
    if line == 'name\n':
        sys.stdout.write(' Muizz-Bot\n\n')
    if line == 'protocol_version\n':
        sys.stdout.write('2\n\n')
    if line == 'version\n':
        sys.stdout.write('')
Now this doesn't seem unreasonable per se, but it results in GoGui giving me an error, which is of course a problem. So I figured that I made a mistake somewhere in my programming, but when I simply run the program through Visual Studio, everything works as expected.
This makes me think that the problem lies in interfacing the two applications, and maybe I should be looking at other functions than stdin and stdout. Does anyone know what may be going wrong here?
EDIT FOR COMMENTS: The code I'm currently working on for command parsing (in its entirety) looks like this:
import sys

commands = ['protocol_version', 'name', 'version', 'list_commands', 'known_command', 'quit', 'boardsize',
            'clear_board', 'komi', 'play', 'genmove']
pre_game_out = ['2', 'Muizz-Bot', '']

# Define all output functions
def list_commands():
    out = '\n'.join(commands)
    return(out)

def known():
    return(True)

def quit():
    return(None)

def boardsize():
    return(None)

def clear_board():
    return(None)

def komi():
    return(None)

def play():
    return(None)

def genmove():
    return("A1")

# Create dictionary to point to all functions.
output = {'list_commands': list_commands, 'known_command': known, 'quit': quit, 'boardsize': boardsize,
          'clear_board': clear_board, 'komi': komi, 'play': play, 'genmove': genmove}

# Define the function that will pass the commands and write outputs.
def parse(line):
    if line.strip() in commands:
        i = commands.index(line.strip())
        if i < 3:
            sys.stdout.write('= ' + pre_game_out[i] + '\n\n')
            sys.stdout.flush()
        else:
            sys.stdout.write('= ' + output[line.strip()]() + '\n\n')
            sys.stdout.flush()
For pre-processing:
def input(inp):
    # Remove control characters
    inp = inp.replace('\r', '')
    inp = inp.replace(' ', '')
    inp = inp.split('#', 1)[0]
    inp = inp.replace('\t', ' ')
    # Check if empty
    if inp.isspace() or inp == None or inp == '':
        return
    else:
        return(inp)
You're not flushing your response, so nothing gets sent back to the caller (as the output is not big enough to trigger an automatic buffer flush). Also, looking through the protocol document, it clearly says that your response should be in the form of = response\n\n, so even if you were flushing it probably still wouldn't work.
Try with something like:
import sys
for line in sys.stdin:
if line.strip() == 'name':
sys.stdout.write('= Muizz-Bot\n\n')
sys.stdout.flush()
elif line.strip() == 'protocol_version':
sys.stdout.write('= 2\n\n')
sys.stdout.flush()
elif line.strip() == 'version':
sys.stdout.write('=\n\n')
sys.stdout.flush()
You might want to create a simple function for parsing commands / responding back instead of repeating the code, though. Also, this probably won't (fully) work either, as the protocol document states that you need to implement quite a number of commands (6.1 Required Commands), but it should get you started.
UPDATE - Here's one way to make it more manageable and in line with the specs - you can create a function for each command so you can easily add/remove them as you please, for example:
def cmd_name(*args):
    return "Muizz-Bot"

def cmd_protocol_version(*args):
    return 2

def cmd_version(*args):
    return ""

def cmd_list_commands(*args):
    return " ".join(x[4:] for x in globals() if x[:4] == "cmd_")

def cmd_known_command(*args):
    commands = {x[4:] for x in globals() if x[:4] == "cmd_"}
    return "true" if args and args[0] in commands else "false"

# etc.
Here all the command functions are prefixed with "cmd_" (and cmd_list_commands() and cmd_known_command() use that fact to check for the command functions in the global namespace), but you can also move them to a different module and then 'scan' the module instead. With such a structure it's very easy to add a new command; for example, to add the required quit command all you need to do is define it:
def cmd_quit(*args):
    raise EOFError()  # we'll use EOFError to denote an exit state below
Also, we'll deal below with the situation where a command needs to return an error - all you need to do from your functions is raise ValueError("error response") and it will be sent back as an error.
Once you have your set of commands added as functions all you need is to parse the input command, call the right function with the right arguments and print back the response:
def call_command(command):
    command = "".join(x for x in command if 31 < ord(x) < 127 or x == "\t")  # 3.1.1
    command = command.strip()  # 3.1.4
    if not command:  # ... return if there's nothing to do
        return
    command = command.split()  # split to get the [id], cmd, [arg1, arg2, ...] structure
    try:  # try to convert to int the first slice to check for command ID
        command_id = int(command[0])
        command_args = command[2:] if len(command) > 2 else []  # args or an empty list
        command = command[1]  # command name
    except ValueError:  # failed, no command ID present
        command_id = ""  # set it to blank
        command_args = command[1:] if len(command) > 1 else []  # args or an empty list
        command = command[0]  # command name
    # now, lets try to call our command as cmd_<command name> function and get its response
    try:
        response = globals()["cmd_" + command](*command_args)
        if response != "":  # response not empty, prepend it with space as per 3.4
            response = " {}".format(response)
        sys.stdout.write("={}{}\n\n".format(command_id, response))
    except KeyError:  # unknown command, return standard error as per 3.6
        sys.stdout.write("?{} unknown command\n\n".format(command_id))
    except ValueError as e:  # the called function raised a ValueError
        sys.stdout.write("?{} {}\n\n".format(command_id, e))
    except EOFError:  # a special case when we need to quit
        sys.stdout.write("={}\n\n".format(command_id))
        sys.stdout.flush()
        sys.exit(0)
    sys.stdout.flush()  # flush the STDOUT
Finally, all you need is to listen to your STDIN and forward the command lines to this function to do the heavy lifting. In that regard, I'd actually explicitly read line by line from STDIN rather than trying to iterate over it, as it's a safer approach:
if __name__ == "__main__":  # make sure we're executing instead of importing this script
    while True:  # main loop
        try:
            line = sys.stdin.readline()  # read a line from STDIN
            if not line:  # reached the end of STDIN
                break  # exit the main loop
            call_command(line)  # call our command
        except Exception:  # too broad, but we don't care at this point as we're exiting
            break  # exit the main loop
Of course, as I mentioned earlier, it might be a better idea to pack your commands in a separate module, but this should at least give you an idea how to do 'separation of concerns' so you worry about responding to your commands rather than on how they get called and how they respond back to the caller.
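For illustration, a minimal sketch of that 'separate module' idea; the module name gtp_commands is hypothetical and the cmd_* functions are assumed to live there:
# Hypothetical layout: gtp_commands.py defines cmd_name, cmd_quit, and friends.
import gtp_commands

def lookup_command(name):
    # scan the module's namespace instead of globals()
    func = getattr(gtp_commands, "cmd_" + name, None)
    if func is None:
        raise KeyError(name)      # caller reports "? unknown command"
    return func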
I have a Python routine which invokes some kind of CLI (e.g telnet) and then executes commands in it. The problem is that sometimes the CLI refuses connection and commands are executed in the host shell resulting in various errors. My idea is to check whether the shell prompt alters or not after invoking the CLI.
The question is: how can I get the shell prompt string in Python?
Echoing PS1 is not a solution, because some CLIs cannot run it and it returns a notation-like string instead of the actual prompt:
SC-2-1:~ # echo $PS1
\[\]\h:\w # \[\]
EDIT
My routine:
def run_cli_command(self, ssh, cli, commands, timeout = 10):
    ''' Sends one or more commands to some cli and returns answer. '''
    try:
        channel = ssh.invoke_shell()
        channel.settimeout(timeout)
        channel.send('%s\n' % (cli))
        if 'telnet' in cli:
            time.sleep(1)
        time.sleep(1)
        # I need to check the prompt here
        w = 0
        while (channel.recv_ready() == False) and (w < timeout):
            w += 1
            time.sleep(1)
        channel.recv(9999)
        if type(commands) is not list:
            commands = [commands]
        ret = ''
        for command in commands:
            channel.send("%s\r\n" % (command))
            w = 0
            while (channel.recv_ready() == False) and (w < timeout):
                w += 1
                time.sleep(1)
            ret += channel.recv(9999)  ### The size of read buffer can be a bottleneck...
    except Exception, e:
        #print str(e)  ### for debugging
        return None
    channel.close()
    return ret
Some explanation is needed here: the ssh parameter is a paramiko.SSHClient() instance. I use this code to log in to a server and from there I call another CLI, which can be SSH, telnet, etc.
I'd suggest sending commands that alter PS1 to a known string. I've done so when I used Oracle sqlplus from a Korn shell script, as a coprocess, to know when to end reading data / output from the last statement I issued. So basically, you'd send:
PS1='end1>'; command1
Then you’d read lines until you see "end1>" (for extra easiness, add a newline at the end of PS1).
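Applied to the routine above, a rough sketch (following the Python 2 style of the question; the sentinel string end1> and the example command are arbitrary) could look like this. Note that the remote shell echoes the command line itself, so real code has to skip that first echo:
import time

SENTINEL = "end1>"

def read_until_sentinel(channel, timeout=10):
    ''' Read from the channel until the sentinel prompt shows up or we time out. '''
    buf = ''
    waited = 0
    while (SENTINEL not in buf) and (waited < timeout):
        if channel.recv_ready():
            buf += channel.recv(9999)
        else:
            time.sleep(1)
            waited += 1
    return buf

channel.send("PS1='%s'\n" % SENTINEL)   # set the prompt to a known string
read_until_sentinel(channel)            # consume the echo and the first new prompt
channel.send("some_command\n")          # the command whose output we want
output = read_until_sentinel(channel)   # everything up to the next prompt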
I am trying to get output from a subprocess and then give commands to that process based on the preceding output. I need to do this a variable number of times, when the program needs further input. (I also need to be able to hide the subprocess command prompt if possible).
I figured this would be an easy task given that I have seen this problem being discussed in posts from 2003 and it is nearly 2012 and it appears to be a pretty common need and really seems like it should be a basic part of any programming language. Apparently I was wrong and somehow almost 9 years later there is still no standard way of accomplishing this task in a stable, non-destructive, platform independent way!
I don't really understand much about file i/o and buffering or threading so I would prefer a solution that is as simple as possible. If there is a module that accomplishes this that is compatible with python 3.x, I would be very willing to download it. I realize that there are multiple questions that ask basically the same thing, but I have yet to find an answer that addresses the simple task that I am trying to accomplish.
Here is the code I have so far based on a variety of sources; however I have absolutely no idea what to do next. All my attempts ended in failure and some managed to use 100% of my CPU (to do basically nothing) and would not quit.
import subprocess
from subprocess import Popen, PIPE

p = Popen(r'C:\postgis_testing\shellcomm.bat', stdin=PIPE, stdout=PIPE, stderr=subprocess.STDOUT, shell=True)
stdout, stdin = p.communicate(b'command string')
In case my question is unclear, I am posting the text of a sample batch file that demonstrates a situation in which it is necessary to send multiple commands to the subprocess (if you type an incorrect command string the program loops).
@echo off
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
If anyone can help me I would very much appreciate it, and I'm sure many others would as well!
EDIT: here is the functional code using eryksun's code (next post):
import subprocess
import threading
import time
import sys

try:
    import queue
except ImportError:
    import Queue as queue

def read_stdout(stdout, q, p):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output: %s' % outdata)

if __name__ == '__main__':
    #setup
    p = subprocess.Popen(['shellcomm.bat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         bufsize=0, shell=True)  # I put shell=True to hide prompt
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q, p))
    t.daemon = True
    t.start()

    #command loop
    while p.poll() is None:
        printout(q)
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)
        time.sleep(0.1)  # I added this to give some time to check for closure (otherwise it doesn't work)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('Return Code: %d' % rc)
However, when the script is run from a command prompt, the following happens:
C:\Users\username>python C:\postgis_testing\shellcomm7.py
Input: sth
Traceback (most recent call last):
  File "C:\postgis_testing\shellcomm7.py", line 51, in <module>
    p.stdin.write(cmd)
IOError: [Errno 22] Invalid argument
It seems that the program closes out when run from a command prompt. Any ideas?
This demo uses a dedicated thread to read from stdout. If you search around, I'm sure you can find a more complete implementation written up in an object oriented interface. At least I can say this is working for me with your provided batch file in both Python 2.7.2 and 3.2.2.
shellcomm.bat:
@echo off
echo Command Loop Test
echo.
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
Here's what I get for output based on the sequence of commands "wrong", "still wrong", and "command string":
Output:
Command Loop Test
Type the correct command string:
Input: wrong
Output:
Type the correct command string:
Input: still wrong
Output:
Type the correct command string:
Input: command string
Output:
you are correct
Return Code: 0
For reading the piped output, readline might work sometimes, but set /P INPUT in the batch file naturally isn't writing a line ending. So instead I used lambda: stdout.read(1) to read a byte at a time (not so efficient, but it works). The reading function puts the data on a queue. The main thread gets the output from the queue after it writes a command. Using a timeout on the get call here makes it wait a small amount of time to ensure the program is waiting for input. Instead you could check the output for prompts to know when the program is expecting input.
All that said, you can't expect a setup like this to work universally because the console program you're trying to interact with might buffer its output when piped. In Unix systems there are some utility commands available that you can insert into a pipe to modify the buffering to be non-buffered, line-buffered, or a given size -- such as stdbuf. There are also ways to trick the program into thinking it's connected to a pty (see pexpect). However, I don't know a way around this problem on Windows if you don't have access to the program's source code to explicitly set the buffering using setvbuf.
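As an aside, a hypothetical Unix-only sketch of the stdbuf trick mentioned above (./some_console_program is a placeholder, not a real tool from this thread); the full demo code follows below:
import subprocess

# Prepend stdbuf so a child that uses default C stdio buffering writes to the
# pipe unbuffered (Unix only; no effect on programs that manage their own buffering).
p = subprocess.Popen(
    ['stdbuf', '-o0', './some_console_program'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)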
import subprocess
import threading
import time
import sys

if sys.version_info.major >= 3:
    import queue
else:
    import Queue as queue
    input = raw_input

def read_stdout(stdout, q):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output:\n%s' % outdata)

if __name__ == '__main__':
    ARGS = ["shellcomm.bat"]  ### Modify this

    #setup
    p = subprocess.Popen(ARGS, bufsize=0, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q))
    t.daemon = True
    t.start()

    #command loop
    while 1:
        printout(q)
        if p.poll() is not None or p.stdin.closed:
            break
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('\nReturn Code: %d' % rc)