How to run another program interactively from a Python script

I am new to Python. I would like to run an "EDA tool" from Python interactively.
Here are the steps I want to follow:
1. Start the tool
2. Run the first command in the tool
3. Check the first command's output, or parse it in the main Python script
4. Run the second command
5. Parse the output in the Python script
[...]
x. Exit the tool
x+1. Do some post-processing in the main Python script
I am looking for information or pointers related to this so that I can read up on it on my own.

This depends on what you mean by a "command". Is each command a separate process (in the operating-systems definition of that word)? If so, it sounds like you need the subprocess module.
import subprocess
execNamePlusArgs = [ 'ls', '-l' ] # unix-like (i.e. non-Windows) example
sp = subprocess.Popen( execNamePlusArgs, stdout=subprocess.PIPE, stderr=subprocess.PIPE )
stdout, stderr = sp.communicate() # this blocks until the process terminates
print( stdout )
If you don't want it to block until termination (e.g. if you want to feed the subprocess input line by line and examine its output line by line) then you would pass stdin=subprocess.PIPE as well and then, instead of communicate, use calls to sp.stdin.write(line) (file objects have write(), not writeline()), sp.stdout.readline() and sp.stderr.readline(), remembering to flush sp.stdin after each write so the data actually reaches the child.
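A minimal sketch of that line-by-line interaction, using cat as a stand-in for the actual tool (an assumption here; substitute your EDA tool's executable and commands):
import subprocess
# 'cat' is just a placeholder for the interactive tool you want to drive
sp = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                      universal_newlines=True)
sp.stdin.write('hello\n')    # send one command line to the child
sp.stdin.flush()             # flush, or the child may never see it
print(sp.stdout.readline())  # read one line of output back: 'hello'
sp.stdin.close()             # signal end of input
sp.wait()                    # wait for the child to exit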

You should look into using something like python-fabric
It allows you to use higher level language constructs such as context managers and makes the shell more usable with python in general.
Example usage:
from fabric.operations import local
from fabric.context_managers import lcd
with lcd(".."): # Prefix all commands with 'cd.. &&'
ls = local('ls',capture=True) # Run 'ls' command and put result into variable
print ls
>>>
[localhost] local: ls
Eigene Bilder
Eigene Musik
Eigene Videos
SynKernelDiag2015-11-07_10-01-13.log
desktop.ini
foo
scripts
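Note that the lcd/local API shown here is from Fabric 1.x; newer Fabric releases moved local command execution into the companion invoke package, so check which version you have installed.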

Related

Run a program from python several times without initializing different shells

I want to run a compiled Fortran numerical model from Python. It is too complex to compile with F2PY without implementing several changes in the Fortran routines, which is why I am just calling its executable using the subprocess module.
The problem is that I have to call it a few thousand times, and I have the feeling that spawning so many shells is slowing the whole thing down.
My implementation (it is difficult to provide a reproducible example, sorry) looks like:
import os
import subprocess
foo_path = '/path/to/compiled/program/'
program_dir = os.path.join(foo_path, "FOO") #FOO is the Fortran executable
instruction = program_dir + " < nlst" # It is necessary to provide FOO a text file (nlst)
# with the configuration for the program
subprocess.call(instruction, shell=True, cwd=foo_path) #run the executable
Running it this way (inside a loop) works well, and FOO generates a text file output that I can read from Python. But I'd like to do the same while keeping the shell active and just providing it the "nlst" file path. Another nice option might be to start an empty shell and keep it waiting for instruction strings that look like "./FOO < nlst". But I am not sure how to do that, any ideas?
Thanks!
[Edited] Something like this should work, but .communicate() ends the process and a second call does not work:
from subprocess import Popen, PIPE
foo_path = '/path/to/FOO/'
process = Popen(['bash'], stdin=PIPE, cwd=foo_path)
process.communicate(b'./FOO < nlst')
I found this solution using the pexpect module:
import pexpect
import os.path
foo_path = '/path/to/FOO/'
out_path = '/path/to/FOO/foo_out_file' #path to output file
child = pexpect.spawn('bash', cwd=foo_path)
child.sendline('./FOO < nlst')
while not os.path.exists(out_path): # wait until out_path is created
    continue
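The loop above spins a CPU core while it waits. A gentler variant (same assumed paths) sleeps between checks:
import time
import os.path
while not os.path.exists(out_path):  # wait until FOO writes its output file
    time.sleep(0.1)                  # check ten times a second instead of busy-spinning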
To extend my comment, here is an example for threading with your code:
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor
foo_path = '/path/to/compiled/program/'
program_dir = os.path.join(foo_path, "FOO") #FOO is the Fortran executable
instruction = program_dir + " < nlst" # It is necessary to provide FOO a text file (nlst)
# with the configuration for the program
def your_function():
    subprocess.call(instruction, shell=True, cwd=foo_path) # run the executable
# create executor object
executor = ThreadPoolExecutor(max_workers=4) # uncertain of how many workers you might need/want
# specify how often you want to run the function
for i in range(10):
    # start your function as a thread
    executor.submit(your_function)
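If you need to know when all the runs have finished (and to surface any exceptions they raised), collect the futures and ask for their results; using the executor as a context manager also waits for outstanding work on exit. A sketch along those lines:
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=4) as executor:  # implicit shutdown(wait=True) on exit
    futures = [executor.submit(your_function) for _ in range(10)]
for future in futures:
    future.result()  # blocks until done and re-raises any exception from the thread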
What I meant in my comment was something like the following Python script:
from subprocess import Popen, PIPE
foo_path = '/home/ronald/tmp'
process = Popen(['/home/ronald/tmp/shexecutor'], stdin=PIPE, cwd=foo_path)
process.stdin.write("ls\n")
process.stdin.write("echo hello\n")
process.stdin.write("quit\n")
And the shell script that executes the commands:
#!/bin/bash
while read cmdline; do
    if [ "$cmdline" == "quit" ]; then
        exit 0
    fi
    eval "$cmdline" >> x.output
done
Instead of doing an "eval", you can do virtually anything.
Note that this is just an outline of a real implementation.
You'd need to do some error handling. And if you are going to use this in a production environment, be sure to harden the code to the limit.

Multiple shell commands in python (Windows)

I'm working on a windows machine and I want to set a variable in the shell and want to use it with another shell command, like:
set variable = abc
echo %variable%
I know that I could do this using os.system("com1 && com2"), but I also know that this is considered 'bad style' and that it should be possible using the subprocess module, but I don't get how.
Here is what I got so far:
proc = Popen('set variable=abc', shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
proc.communicate(input=b'echo %variable%')
But neither line seems to work; the commands don't get executed. Also, if I type in nonexistent commands, I don't get an error. What is the proper way to do this?
Popen can only execute one command or shell script. You can simply provide the whole shell script as a single argument, using & (in cmd.exe; ; in POSIX shells) to separate the different commands:
proc = Popen('set variable=abc & echo %variable%', shell=True)
(Beware that cmd.exe expands %variable% when it parses the line, so the echo may show the variable's previous value.)
Or you can actually just use a multiline string:
>>> from subprocess import call
>>> call('''echo 1
... echo 2
... ''', shell=True)
1
2
0
The final 0 is the return-code of the process
The communicate method is used to write to the stdin of the process. In your case the process immediately ends after running set variable and so the call to communicate doesn't really do anything.
You could spawn a shell and then use communicate to write the commands:
>>> proc = Popen(['sh'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
>>> proc.communicate('echo 1; echo 2\n')
('1\n2\n', '')
Note that communicate also closes the streams when it is done, so you cannot call it multiple times. If you want an interactive session you have to write directly to proc.stdin and read from proc.stdout.
By the way: you can specify an env parameter to Popen so depending on the circumstances you may want to do this instead:
proc = Popen(['echo', '%variable%'], env={'variable': 'abc'})
Obviously this is going to use the echo executable and not shell built-in but it avoids using shell=True.
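One caveat: without a shell nothing will expand %variable%, so the child process has to read the variable itself. Also, passing env replaces the child's entire environment, dropping PATH and friends; merging with os.environ is usually what you want. A sketch (using Python as the child so it works on Windows too):
import os
import sys
from subprocess import Popen
env = dict(os.environ)   # keep PATH and the rest of the current environment
env['variable'] = 'abc'  # add just the one variable
proc = Popen([sys.executable, '-c',
              "import os; print(os.environ['variable'])"], env=env)
proc.wait()  # prints: abc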

Executing shell command from python

I am trying to compile a set of source files, execute the result, and append the output to a text file. Instead of typing the same commands every time, I used a Python script to compile and execute in the background.
import subprocess
subprocess.call(["ifort","-openmp","mod1.f90","mod2.f90","pgm.f90","-o","op.o"])
subprocess.call(["nohup","./op.o",">","myout.txt","&"])
The program pgm.f90 gets compiled with the ifort compiler, but the output is not appended to myout.txt. Instead it is appended to nohup.out, and the program does not run in the background even though "&" is specified in the Python script.
What obvious error have I made here?
Thanks in advance
You can call a subprocess as if you were in the shell by using Popen() with the argument shell=True:
subprocess.Popen("nohup ./op.o > myout.txt &", shell=True)
The issue is that when you supply arguments as a list of elements, the subprocess library bypasses the shell and uses the exec syscall to directly run your program (in your case, "nohup"). Thus, rather than the ">" and "&" operators being interpreted by the shell to redirect your output and run in the background, they are passed as literal arguments to the nohup command.
You can tell subprocess to execute your command via the shell, but this starts a whole extra instance of the shell and can be wasteful. As a workaround, use the built-in redirection functionality in subprocess instead of the shell primitives:
p = subprocess.Popen(['nohup', './op.o'],
                     stdout=open('myout.txt', 'w'))
# process is now running in the background.
# if you want to wait for it to finish, use:
p.wait()
# or investigate p.poll() if you want to check to see if
# your process is still running.
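One detail: open('myout.txt', 'w') truncates the file on every run, while the original goal was to append. Opening the file in append mode fixes that:
p = subprocess.Popen(['nohup', './op.o'],
                     stdout=open('myout.txt', 'a'))  # 'a' appends instead of truncating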
For more information: http://docs.python.org/2/library/subprocess.html

Sending multiple commands to a bash shell which must share an environment

I am attempting to follow this answer here: https://stackoverflow.com/a/5087695/343381
I have a need to execute multiple bash commands within a single environment. My test case is simple:
import subprocess
cmd = subprocess.Popen(['bash'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Write the first command
command = "export greeting=hello\n"
cmd.stdin.write(command)
cmd.stdin.flush() # Must include this to ensure data is passed to child process
result = cmd.stdout.read()
print result
# Write the second command
command = "echo $greeting world\n"
cmd.stdin.write(command)
cmd.stdin.flush() # Must include this to ensure data is passed to child process
result = cmd.stdout.read()
print result
What I expected to happen (based on the referenced answer) is that I see "hello world" printed. What actually happens is that it hangs on the first cmd.stdout.read(), and never returns.
Can anyone explain why cmd.stdout.read() never returns?
Notes:
It is absolutely essential that I run multiple bash commands from python within the same environment. Thus, subprocess.communicate() does not help because it waits for the process to terminate.
Note that in my real test case, it is not a static list of bash commands to execute. The logic is more dynamic. I don't have the option of running all of them at once.
You have two problems here:
Your first command does not produce any output. So the first read blocks waiting for some.
You are using read() instead of readline() -- read() blocks until the stream hits end-of-file (i.e. until the process closes its stdout), not just until some data is available.
The following modified code (updated with Martijn's polling suggestion) works fine:
import subprocess
import select
cmd = subprocess.Popen(['bash'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
poll = select.poll()
poll.register(cmd.stdout.fileno(),select.POLLIN)
# Write the first command
command = "export greeting=hello\n"
cmd.stdin.write(command)
cmd.stdin.flush() # Must include this to ensure data is passed to child process
ready = poll.poll(500)
if ready:
    result = cmd.stdout.readline()
    print result
# Write the second command
command = "echo $greeting world\n"
cmd.stdin.write(command)
cmd.stdin.flush() # Must include this to ensure data is passed to child process
ready = poll.poll(500)
if ready:
    result = cmd.stdout.readline()
    print result
The above has a 500ms timeout - adjust to your needs.
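If you would rather not rely on timeouts at all, another common pattern (not from the answer above, just a sketch) is to echo a sentinel after each command and read lines until it appears, so you know exactly where one command's output ends:
import subprocess
SENTINEL = '__CMD_DONE__'  # hypothetical marker; pick a string your commands never print
cmd = subprocess.Popen(['bash'], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                       universal_newlines=True)
def run(command):
    # send one command, then collect its output lines until the sentinel shows up
    cmd.stdin.write(command + '; echo %s\n' % SENTINEL)
    cmd.stdin.flush()
    lines = []
    for line in iter(cmd.stdout.readline, ''):
        if line.strip() == SENTINEL:
            break
        lines.append(line)
    return ''.join(lines)
run('export greeting=hello')
print(run('echo $greeting world'))  # -> hello world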

Calling multi-level commands/programs from python

I have a shell command 'fst-mor'. It takes a file as its argument, e.g. NOUN.A, which is a lex file or something. The final command: fst-mor NOUN.A
It then produces following output:
analyze>INPUT_A_STRING_HERE
OUTPUT_HERE
Now I want to call fst-mor from my Python script, feed it the input string, and get the output back in the script.
So far I have:
import os
print os.system("fst-mor NOUN.A")
You want to capture the output of another command. Use the subprocess module for this.
import subprocess
output = subprocess.check_output(['fst-mor', 'NOUN.A'])
If your command requires interactive input, you have two options:
Use a subprocess.Popen() object, set the stdin parameter to subprocess.PIPE, and write the input to that pipe. For one input parameter, that's often enough. Study the documentation for the subprocess module for details, but the basic interaction is:
proc = subprocess.Popen(['fst-mor', 'NOUN.A'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, err = proc.communicate('INPUT_A_STRING_HERE')
Use the pexpect library to drive a process. This lets you create more complex interactions with a subprocess by looking for patterns in the output it generates:
import pexpect
py = pexpect.spawn('fst-mor NOUN.A')
py.expect('analyze>')
py.sendline('INPUT_A_STRING_HERE') # sendline appends the newline the interactive prompt expects
output = py.read()
py.close()
You could try:
from subprocess import Popen, PIPE
p = Popen(["fst-mor", "NOUN.A"], stdin=PIPE, stdout=PIPE)
output = p.communicate("INPUT_A_STRING_HERE")[0]
A sample that communicates with another process:
pipe = subprocess.Popen(['clisp'],stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(response,err) = pipe.communicate("(+ 1 1)\n(* 2 2)")
#only print the last 6 lines to chop off the REPL intro text.
#Obviously you can do whatever manipulations you feel are necessary
#to correctly grab the input here
print '\n'.join(response.split('\n')[-6:])
Note that communicate will close the streams after it runs, so you have to know all your commands ahead of time for this method to work. It seems like the pipe.stdout doesn't flush until stdin is closed? I'd be curious if there is a way around that I'm missing.
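The buffering that last note wonders about is usually the child's doing: many programs block-buffer stdout when it is not attached to a terminal, so nothing appears until the buffer fills or the streams close. pexpect sidesteps this by running the child in a pseudo-terminal; on Linux you can often get a similar effect by wrapping the command with stdbuf, e.g.:
pipe = subprocess.Popen(['stdbuf', '-oL', 'clisp'],  # -oL forces line-buffered stdout
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)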
You should use the subprocess module.
In your example you might run:
subprocess.check_output(["fst-mor", "NOUN.A"])
