I'm really stuck with a problem I'm hoping someone can help me with. I'm trying to create a wrapper in Python 3.1 for a command line program called spooky. I can successfully run this program on the command line like this:
$ spooky -a 4 -b .97
My first Python wrapper attempt for spooky looked like this:
import subprocess
start = "4"
end = ".97"
spooky_path = '/Users/path/to/spooky'
cmd = [spooky_path, '-a', start, '-b', end]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
process.wait()
print('Done')
The above code prints Done, but does not execute the spooky program.
Next I tried to just execute the program on the command line like this:
$ /Users/path/to/spooky -a 4 -b .97
The above command also fails, and provides no helpful errors.
My question is: How can I get Python to run this program by sending spooky -a 4 -b .97 to the command line? I would VERY much appreciate any help you can provide. Thanks in advance.
The stdout=subprocess.PIPE is what's swallowing your output: it disconnects the stdout of your process from Python's stdout and instead makes it retrievable through the Popen.communicate() method, like so:
import subprocess
spooky_path = 'ls'  # stand-in for '/Users/path/to/spooky'
cmd = [spooky_path, '-l']
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output = process.communicate()[0]  # communicate() also waits for the process to exit
print('Output:', output.decode())
print('Done')
To make it print directly you can use it without the stdout argument:
process = subprocess.Popen(cmd)
Or you can use the call function, which runs the command and returns its exit code:
return_code = subprocess.call(cmd)
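As an aside, if all you want is the command's output, subprocess.check_output (available as of Python 3.1, matching your version) is a shorter sketch of the same idea, reusing the cmd list from above:
import subprocess
# Runs the command, waits for it, and returns its stdout as bytes;
# raises CalledProcessError if the exit code is non-zero.
output = subprocess.check_output(cmd)
print(output.decode())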
Try making your command into a single string (interpolating the variables rather than writing their names literally):
cmd = '{0} -a {1} -b {2}'.format(spooky_path, start, end)
process = subprocess.Popen(cmd, shell=True)
Dear stackoverflow users,
I'm looking for a solution to what is probably a quite easy problem. I want to automate some quantum chemical calculations and ran into a small problem.
Normally you start your quantum chemical program (in my case it's called orca) with your input file (*.inp) on a remote server as a background process and pipe the output into an output file (*.out) via
nohup orca H2.inp >& H2.out &
or something similar.
Now I wanted to use a Python script (with some templating) to write the input file automatically. In the end the script should start the calculation in such a way that I can log out of the server without stopping orca. I tried that with
subprocess.run(["orca", input_file], stdout=output_file)
but so far it did not work. How do I "emulate" the command given at the top with the subprocess module?
Regards
Update
I have one file that is called H2.xyz. The script reads the filename, splits it at the dot, and creates an input file named H2.inp; the output should be written into the file H2.out.
Update 2
The input file is derived from the *.xyz file via
xyzfile = str(sys.argv[1])
input_file = xyzfile.split(".")[0] + ".inp"
output_file = xyzfile.split(".")[0] + ".out"
and is created within the script via templating. In the end I want to run the script in the following way:
python3 script.py H2_0_1.xyz
Why not simply:
subprocess.Popen(f'orca {input_file} > {output_file} 2>&1',
                 shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
(Note: > {output_file} 2>&1 is used here instead of the csh-style >&, since shell=True runs /bin/sh, where >& isn't portable.)
More info:
Run Process and Don't Wait
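If you'd rather avoid the shell string altogether, a roughly equivalent sketch (assuming Python 3.2+ for start_new_session, and reusing input_file and output_file from the question) would be:
import subprocess

# Shell-free sketch: redirect stdout and stderr to the .out file ourselves and
# start the child in its own session so logging out won't send it SIGHUP.
with open(output_file, 'w') as out:
    subprocess.Popen(['orca', input_file],
                     stdout=out,                 # emulates '> H2.out'
                     stderr=subprocess.STDOUT,   # merge stderr, like csh '>&'
                     start_new_session=True)     # detach, roughly what nohup gives you
# No wait() call: the script can exit while orca keeps running.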
For me (Windows, Python 2.7) the call method works fine, like this:
with open('H2.out', 'a') as out:
    subprocess.call(['orca', infile], stdout=out,
                    stderr=out,
                    shell=True)  # Yes, I know. But it's Windows.
On Linux you may not need shell=True for a list of arguments.
Is the usage of subprocess important? If not, you could use os.system.
The Python call would get really short, in your case
os.system("nohup orca H2.inp >& H2.out &")
should do the trick.
I had the same problem not long ago.
Here is my solution:
import subprocess
import traceback

commandLineCode = "nohup orca H2.inp >& H2.out &"
workingDir = "."  # set this to the directory your input files live in

try:
    proc = subprocess.Popen(commandLineCode,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            cwd=workingDir,
                            shell=True)  # a command string with nohup/>&/& needs a shell
except OSError:
    print("OSError occurred")
    print(traceback.format_exc())

timeoutInSeconds = 100
try:
    outs, errs = proc.communicate(timeout=timeoutInSeconds)
except subprocess.TimeoutExpired:
    print("timeout")
    proc.kill()
    outs, errs = proc.communicate()

stdoutDecode = outs.decode("utf-8")
stderrDecode = errs.decode("utf-8")

for line in stdoutDecode.splitlines():
    pass  # write line to outputFile
if stderrDecode:
    for line in stderrDecode.splitlines():
        pass  # write line to error log
The OSError exception is pretty important since you never know what your OS might do wrong.
For more on the communicate() method, which exchanges data with the process and waits for it to finish, read:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
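To be precise, it's the Popen call itself that starts the process; communicate() only exchanges data with it and waits. A tiny sketch to illustrate:
import subprocess

proc = subprocess.Popen(["sleep", "5"])  # the child is already running here
proc.communicate()  # no pipes were set up, so this simply waits for it to exit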
I would like to call a complex command line in Python and capture its output, and I don't understand how I should be doing it:
Command line that I'm trying to run is:
cat codegen_query_output.json | jq -r '.[0].code' | echoprint-inverted-query index.bin
As far as I've gotten is:
process = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE)
out, err = process.communicate()
print(out)
but this is just a simple ls -a ([cmd, arg]); any idea how I should run/structure my more complex command line call?
The cleanest way is to create two subprocesses piped together. You don't need a subprocess for the cat command; just pass an opened file handle:
import subprocess

with open("codegen_query_output.json") as input_stream:
    jqp = subprocess.Popen(["jq", "-r", ".[0].code"],
                           stdin=input_stream, stdout=subprocess.PIPE)
    ep = subprocess.Popen(["echoprint-inverted-query", "index.bin"],
                          stdin=jqp.stdout, stdout=subprocess.PIPE)
    jqp.stdout.close()  # so jq gets SIGPIPE if echoprint-inverted-query exits early
    output = ep.stdout.read()
    return_code = ep.wait() or jqp.wait()
The jqp process takes the file contents as input, and its output is fed into ep's input. Closing jqp.stdout in the parent means jq is the only writer on that pipe, so it receives SIGPIPE if ep exits early.
In the end we read output from ep to get the final result. The return_code is a combination of both return codes: if something goes wrong it's non-zero (for more detailed information you would test each code separately, of course).
Standard error isn't considered here; it will be displayed on the console unless stderr=subprocess.STDOUT is set (to merge it with the piped output).
This method doesn't require a shell or shell=True, which makes it more portable and secure.
It takes a shell to interpret operators like |. You can ask Python to run a shell, and pass your command as the thing to execute:
cmd = "cat test.py | tail -n3"
process = subprocess.Popen(['bash', '-c', cmd], stdout=subprocess.PIPE)
out, err = process.communicate()
print(out.decode())
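Equivalently, shell=True asks Python to run the string through /bin/sh for you:
process = subprocess.Popen("cat test.py | tail -n3", shell=True,
                           stdout=subprocess.PIPE)
out, err = process.communicate()
print(out.decode())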
I have a Python (v3.3) script that runs other shell scripts. My Python script also prints messages like "About to run script X" and "Done running script X".
When I run my script I'm getting all the output of the shell scripts separate from my print statements. I see something like this:
All of script X's output
All of script Y's output
All of script Z's output
About to run script X
Done running script X
About to run script Y
Done running script Y
About to run script Z
Done running script Z
My code that runs the shell scripts looks like this:
print( "running command: " + cmnd )
ret_code = subprocess.call( cmnd, shell=True )
print( "done running command")
I wrote a basic test script and do *not* see this behaviour. This code does what I would expect:
print("calling")
ret_code = subprocess.call("/bin/ls -la", shell=True )
print("back")
Any idea on why the output is not interleaved?
Thanks. This works, but has one limitation: you can't see any output until after the command completes. I found an answer to another question (here) that uses Popen but also lets me see the output in real time. Here's what I ended up with:
import subprocess
import sys

cmd = ['/media/sf_git/test-automation/src/SalesVision/mswm/shell_test.sh', '4', '2']

print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, ''):
    line = line.replace('\n', '')
    print(line)
    sys.stdout.flush()

process.wait()  # Wait for the underlying process to complete.
errcode = process.returncode  # Harvest its returncode, if needed.
print('Script ended with return code of: ' + str(errcode))
This uses Popen and allows me to see the progress of the called script.
It has to do with STDOUT and STDERR buffering. You should be using subprocess.Popen to redirect STDOUT and STDERR from your child process into your application. Then, as needed, output them. Example:
import subprocess

cmd = ['ls', '-la']
print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = process.communicate()  # Capture STDOUT/STDERR; communicate() also waits for the process.
errcode = process.returncode  # Harvest its returncode, if needed.
print(out)
print('done running command')
Additionally, I wouldn't use shell=True unless it's really required: it forces subprocess to fire up a whole shell just to run your command. It's usually better to pass the command as a list and, if you need extra environment variables, to supply them through the env parameter of Popen.
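For instance, a minimal sketch (MY_FLAG is a made-up variable, purely for illustration):
import os
import subprocess

# Pass the command as a list (no shell) and supply extra environment
# variables through env rather than exporting them in a shell.
env = dict(os.environ, MY_FLAG='1')  # hypothetical variable
process = subprocess.Popen(['ls', '-la'], env=env,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, _ = process.communicate()
print(out.decode())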
Here's an example of the shell command:
python command_name.py --shell positional_arg1 positional_arg2 --option optional_arg1 --True_flag
I just want to be able to call this from a python script so I can loop through the script, changing different parameters through each loop.
import subprocess

p = subprocess.Popen(["python", "command_name.py", "--shell",
                      "positional_arg1", "positional_arg2",
                      "--option", "optional_arg1", "--True_flag"],
                     stdout=subprocess.PIPE)
out, err = p.communicate()
(Note that the script name itself has to appear in the argument list, and shell=True is unnecessary when you pass a list.)
When I execute the script with R on the command line, it looks like this:
$ R --vanilla --args test_matrix.csv < hierarchical_clustering.R > out.txt
In Python, it works if I use:
process = subprocess.call("R --vanilla --args "+output_filename+"_DM_Instances_R.csv < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R > "+output_filename+"_out.txt", shell=True)
But this method doesn't provide the process.wait() function.
So, I would like to use the subprocess.Popen, I tried:
process = subprocess.Popen(['R', '--vanilla', '--args', "\'"+output_filename+"_DM_Instances_R.csv\'", '<', '/home/kevin/AV-labels/Results/R/hierarchical_clustering.R'])
But it didn't work: Python just opened R but didn't execute my script.
Instead of 'R', give it the path to Rscript. I had the same problem: R opens up but doesn't execute the script. You need to call Rscript (instead of R) to actually execute it.
retcode = subprocess.call("/Pathto/Rscript --vanilla /Pathto/test.R", shell=True)
This works for me.
Cheers!
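If you prefer to skip the shell, the same call can be written as an argument list (the paths are placeholders, as above):
import subprocess

retcode = subprocess.call(['/Pathto/Rscript', '--vanilla', '/Pathto/test.R'])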
I've solved this problem by putting the whole command into a single string inside the brackets and running it with shell=True:
process = subprocess.Popen(["R --vanilla --args "+output_filename+"_DM_Instances_R.csv < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R > "+output_filename+"_out.txt"], shell=True)
process.wait()
A couple of ideas:
You might want to consider using the Rscript frontend, which makes running scripts easier; you can pass the script filename directly as a parameter, and do not need to read the script in through standard input.
You don't need the shell just for redirecting standard output to a file; you can do that directly with subprocess.Popen.
Example:
import subprocess
output_name = 'something'
script_filename = 'hierarchical_clustering.R'
param_filename = '%s_DM_Instances_R.csv' % output_name
result_filename = '%s_out.txt' % output_name
with open(result_filename, 'wb') as result:
    process = subprocess.Popen(['Rscript', script_filename, param_filename],
                               stdout=result)
    process.wait()
You never actually execute it fully ^^ try the following:
process = subprocess.Popen("R --vanilla --args {0}_DM_Instances_R.csv < /home/kevin/AV-labels/Results/R/hierarchical_clustering.R".format(output_filename),
                           stdout=subprocess.PIPE, stdin=subprocess.PIPE, shell=True)
process.communicate()  # [0] is stdout
Kevin's solution works for my requirement. Just to give another example based on @Kevin's solution: you can pass more parameters to the Rscript with Python-style string formatting:
import subprocess
process = subprocess.Popen(["R --vanilla --args %s %d %.2f < /path/to/your/rscript/transformMatrixToSparseMatrix.R" % ("sparse", 11, 0.98) ], shell=True)
process.wait()
Also, to make things easier you could create an executable R file. For this you just need to add this as the first line of the script:
#! /usr/bin/Rscript --vanilla --default-packages=utils
Reference: Using R as a scripting language with Rscript
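Once the script has execute permission (chmod +x transformMatrixToSparseMatrix.R), it can be launched like any other program; a small sketch reusing the script and arguments from the example above:
import subprocess

# Assumes the Rscript shebang above is the first line of the file and that
# the file has been made executable with chmod +x.
subprocess.call(['./transformMatrixToSparseMatrix.R', 'sparse', '11', '0.98'])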