Run a Python script inside a running program

I am running a Python script that launches an executable called ./abc. This executable starts an interactive session and waits for a command, like so:
$./abc
abc > \\waits for a command here.
What I would like to do is to enter a couple of commands like:
$./abc
abc > read_blif alu.blif
abc > resyn2
What I have so far is as follows:
import os
from array import *

os.system('./abc')
for file in os.listdir("ccts/"):
    print 'read_blif ' + file + '\n'
    print 'resyn2\n'
    print 'print_stats\n'
    print 'if -K 6\n'
    print 'print_stats\n'
    print 'write_blif ' + file.split('.')[0] + 'mapped.blif\n'
This however will do the following:
abc > \\stays idle and waits until I ^C and then it prints
read ...blif
resyn2
...
It just prints to the terminal. How do I make it execute these commands inside the program, waiting until it sees the next abc > prompt before running the next command?
Thanks

I have done something similar using subprocess.
import subprocess
cmd = './command_to_execute'
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
result = pro.stdout.read()  # blocks until the process writes to stdout
This will execute the command specified by cmd and read its output into result. The read blocks, so nothing after the result assignment runs until the process has printed something to the console. I believe this might be what you want, though your description was a bit vague.
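If you only need the final output, subprocess.check_output is an equivalent shortcut; a minimal sketch with the same hypothetical command:
import subprocess

result = subprocess.check_output('./command_to_execute', shell=True)  # waits for exit, returns stdout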

You may be looking for the pexpect module. Here is the basic example from pexpect's documentation:
# This connects to the openbsd ftp site and
# downloads the recursive directory listing.
import pexpect
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('Name .*: ')
child.sendline('anonymous')
child.expect('Password:')
child.sendline('noah@example.com')
child.expect('ftp> ')
child.sendline('lcd /tmp')
I think it will work the same way with abc >, if your OS is compatible with pexpect.
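For the abc case, a minimal sketch along the same lines (assuming the prompt is exactly "abc > " and that pexpect is available on your OS):
import pexpect

child = pexpect.spawn('./abc')
child.expect('abc > ')                 # wait for the first prompt
child.sendline('read_blif alu.blif')
child.expect('abc > ')                 # wait until abc is ready again
child.sendline('resyn2')
child.expect('abc > ')
print(child.before)                    # output produced between the two prompts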

You need to spawn a new process using the subprocess library and create two pipes, one for stdin and one for stdout. Using these pipes (which are represented in Python as file objects) you can communicate with your process.
Here is an example:
import subprocess

cmd = './full/path/to/your/abc/executable'
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       stdin=subprocess.PIPE)
pro.stdin.write("read_blif alu.blif \n")  # send a command (a flush may be needed)
pro.stdout.read(3)                        # read the first 3 bytes of the reply
You can use pro.communicate, but I assume you need to capture the output of every command you input, something like:
abc > command1
output1
abc > command2
output2 -part1
output2 -part2
output2 -part3
In this way I think the PIPE approach is more useful.
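To wait until abc prints its next prompt before sending the next command, here is a minimal sketch (assuming the prompt is exactly "abc > " and that abc flushes it to stdout):
import subprocess

def read_until_prompt(proc, prompt=b'abc > '):
    buf = b''
    while not buf.endswith(prompt):
        ch = proc.stdout.read(1)   # one byte at a time so we never over-read
        if not ch:                 # EOF: the process exited
            break
        buf += ch
    return buf

pro = subprocess.Popen('./abc', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
read_until_prompt(pro)                    # wait for the first prompt
pro.stdin.write(b'read_blif alu.blif\n')
pro.stdin.flush()                         # make sure the command is sent
output = read_until_prompt(pro)           # everything up to the next prompt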
Use the dir built-in to find more info about the pro object and what methods and attributes are available: dir(pro). Don't forget the help built-in, which displays the docstrings: help(pro) or help(pro.stdin).
You are making a mistake when you run os.system: it runs your program without giving you any control over it. You may want to read up on standard input/output streams.

If you want to pipe commands into the input of an executable, the easiest way would be to use the subprocess module. You can write to the stdin of the executable and get its output with Popen.communicate.
import os
import subprocess

for f in os.listdir('ccts/'):
    p = subprocess.Popen('./abc', stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate('read_blif ' + f + '\n'
                                   'resyn2\n'
                                   'print_stats\n'
                                   'if -K 6\n'
                                   'print_stats\n'
                                   'write_blif ' + f.split('.')[0] + 'mapped.blif\n')
    # stdout will be what the program outputs and stderr will be any errors it outputs.
This will, however, close the stdin of the subprocess every time, but it is the only way to reliably communicate without deadlocking. According to https://stackoverflow.com/a/28616322/5754656, you should use pexpect for an "interactive session-like" program. That is avoided here by using multiple subprocesses, assuming a fresh child process can run each part of the session.
I assume that you only need the stdouts of print_stats, so you can do (as an example, you might want to handle errors):
import os
import subprocess

def helper_open(cmd='./abc'):
    return subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

for f in os.listdir('ccts/'):
    helper_open().communicate('read_blif ' + f + '\n'
                              'resyn2\n')
    stats, _ = helper_open().communicate('print_stats\n')
    # Do stuff with stats
    stats, _ = helper_open().communicate('if -K 6\nprint_stats\n')
    # Do more stuff with other stats
    helper_open().communicate('write_blif ' + f.split('.')[0] + 'mapped.blif\n')

Related

How to run a shell script in python from memory?

The application I'm writing retrieves a shell script over HTTP from the network. I want to run this script in Python, but I don't want to physically save it to the hard drive because I already have its content in memory, and I would like to just execute it. I have tried something like this:
import subprocess

script = retrieve_script()
popen = subprocess.Popen(scrpit, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdOut, stdErr = popen.communicate()

def retrieve_script_content():
    # in reality I retrieve a shell script content from network,
    # but for testing purposes I will just hardcode some test content here
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"
This snippet will not work because subprocess.Popen expects you to provide only one command at a time.
Are there any alternatives to run a shell script from memory?
This snippet will not work because subprocess.Popen expects you to provide only one command at a time.
That is not the case. The actual reasons it doesn't work are:
The declaration of retrieve_script has to come before the call
You call it retrieve_script_content instead of retrieve_script
You misspelled script as scrpit
Just fix those and it's fine:
import subprocess

def retrieve_script():
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"

script = retrieve_script()
popen = subprocess.Popen(script, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdOut, stdErr = popen.communicate()
print(stdOut)
Result:
$ python foo.py
command1
command2
command3
However, note that this will ignore the shebang (if any) and run the script with the system's sh every time.
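If that matters, one workaround is to feed the in-memory script to an explicit interpreter over its stdin; a sketch, assuming bash is the interpreter you want:
import subprocess

script = "echo command1\necho command2\necho command3"
popen = subprocess.Popen(['bash'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdOut, stdErr = popen.communicate(script.encode())  # send the whole script on stdin
print(stdOut.decode())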
Are you using a Unix-like OS? If so, you should be able to use a virtual filesystem to make an in-memory file-like object at which you could point subprocess.Popen:
import subprocess
import tempfile
import os
import stat

def retrieve_script_content():
    # in reality I retrieve a shell script content from network,
    # but for testing purposes I will just hardcode some test content here
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"

content = retrieve_script_content()
with tempfile.NamedTemporaryFile(mode='w', delete=False, dir='/dev/shm') as f:
    f.write(content)
os.chmod(f.name, stat.S_IRUSR | stat.S_IXUSR)
# print(f.name)
popen = subprocess.Popen(f.name, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         shell=True)
stdOut, stdErr = popen.communicate()
print(stdOut.decode('ascii'))
# os.unlink(f.name)
prints
command1
command2
command3
Above I used /dev/shm as the virtual filesystem since Linux systems based on Glibc always have a tmpfs mounted on /dev/shm.
If security is a concern you may wish to set up a ramfs.
One reason why you might want to use a virtual file instead of passing the script contents directly to subprocess.Popen is that the maximum size of a single string argument is limited (to 131071 bytes on Linux).
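A quick way to see that limit in action (an illustration; Linux-specific, and assumes the true binary is on PATH):
import subprocess

try:
    subprocess.check_call(['true', 'x' * 200000])  # one oversized argument
except OSError as e:
    print(e)  # Argument list too long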
You can execute a multi-command script with Popen. Popen only restricts you to a single command when the shell flag is False; in that case you pass a list containing one program and its arguments. Popen's flag shell=True allows multi-command scripts passed as one string (it is considered insecure, though what you are doing, executing scripts retrieved from the web, is already very risky).
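To illustrate the distinction, a minimal sketch:
import subprocess

subprocess.call(['echo', 'one program with arguments'])   # shell=False: exactly one command
subprocess.call('echo first; echo second', shell=True)    # shell=True: a multi-command script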

How to use Popen with an interactive command? nslookup, ftp

Is there any way to use Popen with interactive commands? I mean nslookup, ftp, powershell... I have read the whole subprocess documentation several times but I can't find a way.
What I have (removing the parts of the project which aren't of interest here) is:
from subprocess import call, PIPE, Popen
command = raw_input('>>> ')
command = command.split(' ')
process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
execution = process.stdout.read()
error = process.stderr.read()
output = execution + error
process.stderr.close()
process.stdout.close()
print(output)
Basically, when I try to print the output of a command like dir, the output is a string, so I can work with .read() on it. But when I try to use nslookup, for example, the output isn't a string, so it can't be read, and the script enters a deadlock.
I know that I can invoke nslookup in non-interactive mode, but that's not the point. I want to remove all chances of a deadlock and make it work with every command you can run in a normal cmd.
The real way the project works is through sockets, so the raw_input is an s.recv() and the output is sent back over the socket, but I have simplified it here to focus on the problem.
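Following the pexpect approach suggested earlier on this page, a hedged sketch for a Unix-like OS (the '> ' prompt of nslookup is an assumption):
import pexpect

child = pexpect.spawn('nslookup')
child.expect('> ')              # wait for the interactive prompt
child.sendline('example.com')
child.expect('> ')              # wait for the lookup to finish
print(child.before.decode())    # everything printed before the next prompt
child.sendline('exit')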

Waiting for a prompt on an exe

I have an executable that lets me talk to a temperature controller. When I double-click the exe (SCPI-CLI.exe) it opens a command window with the text "TC_CLI>". I can then type my commands and talk to my controller, e.g.: TC:COMM:OPEN:SER 8
When I use subprocess.Popen like this:
import subprocess

text = 'tc:comm:open:ser 8'
proc = subprocess.Popen(['C:\\Program Files (x86)\\TC_SCPI\\lib\\SCPI-CLI.exe'],
                        stdout=subprocess.PIPE, stdin=subprocess.PIPE)
proc.stdin.write(text)
proc.stdin.close()
result = proc.stdout.read()
print(result)
SCPI-CLI.exe will open up, but will not show me the > prompt. What am I doing wrong here? It hangs at the proc.stdin.write(text) call.
I am a newbie to subprocess.
You can try adding "\n" to your string; that way you send the Enter key.
Also add a pipe for stderr and check whether it displays any errors (the program may use stderr instead of stdout for displaying messages).
One more thing: it is a good idea to wait for the program to exit before reading the results.
Try this:
import subprocess

text = b'tc:comm:open:ser 8\nexit\n'
proc = subprocess.Popen(['SCPI-CLI.exe'], stdout=subprocess.PIPE,
                        stdin=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate(text)  # sends the commands and waits for exit
print(out.decode())

Unable to debug python script with subprocess

I have written a Python function which runs another Python script on a remote desktop using PSTools (psexec). I have run the script successfully several times when the function is called only once. But when I call the function multiple times from another file, the subprocess does not run in the second call. In fact, in the second call it immediately quits the entire program without throwing any exception or traceback.
controlpc_clean_command = self.psexecpath + ' -s -i 2 -d -u ' + self.controlPClogin + ' -p ' + self.controlPCpwd + ' \\' + self.controlPCIPaddr + ' cmd.exe /k ' + self.controlPC_clean_code
logfilePath = self.psexeclogs + 'Ctrl_Clean_Log.txt'
logfile = file(logfilePath, 'w')
try:
    process = subprocess.Popen(controlpc_clean_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in process.stderr:
        print "** INSIDE SUBPROCESS STDERR TO START PSEXEC **\n"
        sys.stderr.write(line)
        logfile.write(line)
    process.wait()
except OSError:
    print "********COULD NOT FIND PSEXEC.EXE, PLEASE REINSTALL AND SET THE PATH VARIABLE PROPERLY********\n"
The above code runs perfectly once. Even if I run it from a different Python file with different parameters, it runs fine. The problem happens when I call the function more than once from one file: in the second call the function quits after printing "** INSIDE SUBPROCESS STDERR TO START PSEXEC **\n", and it does not print anything in the main program after that.
I am unable to figure out how to debug this issue, as I am completely clueless about where the program goes after printing this line. How do I debug this?
Edit:
After doing some search, I added
stdout, stderr = process.communicate()
after the subprocess.Popen line in my script. Now I am able to proceed with the code, but with one problem: nothing gets written to the logfile 'Ctrl_Clean_Log.txt' after adding communicate()! How can I write to the file as well as proceed with the code?
Maybe your first process is stuck waiting and blocking other processes.
https://docs.python.org/2/library/subprocess.html
Popen.wait()
Wait for child process to terminate. Set and return returncode attribute.
Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
child process generates enough output to a pipe such that it blocks waiting
for the OS pipe buffer to accept more data. Use communicate() to avoid that.
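To also keep the log file when switching to communicate(), a minimal sketch (reusing controlpc_clean_command and logfile from the question, assumed already defined):
import subprocess

process = subprocess.Popen(controlpc_clean_command,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()  # drains both pipes, so no deadlock
logfile.write(stderr)                   # log the captured stderr afterwards
logfile.close()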

Calling multi-level commands/programs from python

I have a shell command fst-mor. It takes an argument in the form of a file, e.g. NOUN.A, which is a lex file or something. Final command: fst-mor NOUN.A
It then produces the following output:
analyze>INPUT_A_STRING_HERE
OUTPUT_HERE
Now I want to call fst-mor from my Python script, feed it an input string, and get the output back in the script.
So far I have:
import os
print os.system("fst-mor NOUN.A")
You want to capture the output of another command. Use the subprocess module for this.
import subprocess
output = subprocess.check_output(['fst-mor', 'NOUN.A'])
If your command requires interactive input, you have two options:
Use a subprocess.Popen() object, set the stdin parameter to subprocess.PIPE, and write the input to the stdin pipe it makes available. For one input parameter, that's often enough. Study the documentation for the subprocess module for details, but the basic interaction is:
proc = subprocess.Popen(['fst-mor', 'NOUN.A'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, err = proc.communicate('INPUT_A_STRING_HERE')
Use the pexpect library to drive a process. This lets you create more complex interactions with a subprocess by looking for patterns in the output it generates:
import pexpect
py = pexpect.spawn('fst-mor NOUN.A')
py.expect('analyze>')
py.send('INPUT_A_STRING_HERE')
output = py.read()
py.close()
You could try:
from subprocess import Popen, PIPE
p = Popen(["fst-mor", "NOUN.A"], stdin=PIPE, stdout=PIPE)
output = p.communicate("INPUT_A_STRING_HERE")[0]
A sample that communicates with another process:
import subprocess

pipe = subprocess.Popen(['clisp'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(response, err) = pipe.communicate("(+ 1 1)\n(* 2 2)")
#only print the last 6 lines to chop off the REPL intro text.
#Obviously you can do whatever manipulations you feel are necessary
#to correctly grab the input here
print '\n'.join(response.split('\n')[-6:])
Note that communicate will close the streams after it runs, so you have to know all your commands ahead of time for this method to work. It seems like pipe.stdout doesn't flush until stdin is closed? I'd be curious if there is a way around that which I'm missing.
You should use the subprocess module.
In your example you might run:
subprocess.check_output(["fst-mor", "NOUN.A"])
