I am writing a Python script and I have to call a command from an external software. I am currently using the Popen() function to call such a command. The command has some options too. I want to know how to incorporate these options into the Popen() function. The code I am using now is:
from subprocess import Popen, PIPE
proc = Popen(["halSummarizeMutations", hal_output], stdout=PIPE)
summary_mutation = proc.communicate()[0]
In the Popen() function, I am supposed to take in a variable for an option of the command. The modified code should look like:
proc = Popen(["halSummarizeMutations", --option optioninput, hal_output], stdout=PIPE)
Is the code right or is there a different method to code it? Thanks in advance.
If you want to add a parameter for the external program, just add it to the list as another string. Here is an example with "ls -la": you add "-la" to the list, and you can add any other parameters the same way. Remember that the parameters are all strings.
from subprocess import Popen, PIPE
proc = Popen(["ls", '-la'], stdout=PIPE) # if you want more, add after "-la"
print proc.stdout.readlines()
Provide each argument as a separate list item:
from subprocess import check_output
cmd = ["halSummarizeMutations", "--option", "optioninput", hal_output]
summary_mutation = check_output(cmd)
where hal_output is a string variable defined earlier.
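A slightly fuller sketch (the option name, its value, and the HAL file path below are placeholders, not real halSummarizeMutations flags) showing the argument list built from variables and the bytes output decoded for display:
import subprocess

option_value = "some_value"      # placeholder: substitute the real option value
hal_output = "alignment.hal"     # placeholder: path to the HAL file

cmd = ["halSummarizeMutations", "--option", option_value, hal_output]
summary_mutation = subprocess.check_output(cmd)   # returns bytes
print(summary_mutation.decode())                  # decode to text for display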
INPUTS is the variable I gave for the absolute path of a directory of possible input files. I want to check their status before going through my pipeline. So I tried:
import subprocess
import argparse
INPUTS = '/home/username/WinterResearch/Inputs'
status = subprocess.Popen(['ls', '-lh', INPUTS], shell=True, stdout=subprocess.PIPE)
stdout = status.communicate()
status.stdout.close()
I have also tried the often used
from shlex import split
import subprocess
import argparse
cmd = 'ls -lh INPUTS'
status = subprocess.Popen(cmd.split(), shell=True, stdout=subprocess.PIPE)
and
cmd = "ls -lh 'INPUTS'"
I do not receive an error code. The process simply does not output anything to the terminal window. I am not sure why the Python script simply skips over this instead of stating there is an error. I do receive an error when I include close_fds=True, stating that an int cannot use communicate(). So how can I get the output of an ls -lh INPUTS equivalent using subprocess.Popen()?
You don't see any output because you're not printing it to the console; it's saved into a variable (named "stdout"). Popen is overkill for this task anyway, since you aren't piping the command into another process. check_output should work fine for this purpose:
import subprocess
subprocess.check_output("ls -lh {0}".format(INPUTS), shell=True)
subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False)
Run command with arguments and return its output as a byte string.
METHOD WITH LESS SECURITY RISK (see the warnings plastered throughout this page):
EDIT: passing the arguments as a list, without shell=True, avoids the potential shell-injection risk:
output = subprocess.Popen(["ls", "-lh", INPUTS], stdout=subprocess.PIPE).communicate()[0]
print(output)
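Equivalently, a small sketch reusing the INPUTS path from the question: check_output with an argument list avoids the shell entirely and returns the listing as bytes.
import subprocess

INPUTS = '/home/username/WinterResearch/Inputs'
listing = subprocess.check_output(["ls", "-lh", INPUTS])   # no shell involved
print(listing.decode())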
From your first snippet:
stdout = status.communicate()
status.stdout.close()
Nothing is being printed here. You may need to change it to the following (or your preferred form/format):
stdout = status.communicate()
print(stdout)
status.stdout.close()
I'm trying to write a Python script that starts a subprocess, and writes to the subprocess stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes.
The process I'm trying to start is a program called nuke, which has its own built-in version of Python that I'd like to submit commands to, and then tell it to quit after the commands execute. So far I've worked out that if I start Python on the command prompt and then start nuke as a subprocess, I can type commands into nuke. But I'd like to put this all in a script so that the master Python program can start nuke, write to its standard input (and thus into its built-in version of Python), and tell it to do snazzy things, so I wrote a script that starts nuke like this:
subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])
Then nothing happens because nuke is waiting for user input. How would I now write to standard input?
I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something and then if it crashes, try again. So if there is a way to catch a crash and still be OK then that'd be great.
It might be better to use communicate:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input='data_to_write')[0]
"Better", because of this warning:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
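The question also asks about recovering when nuke crashes. A minimal sketch (the executable path is taken from the question; the commands string is an assumed placeholder) would check the return code after communicate() and retry:
from subprocess import Popen, PIPE, STDOUT

nuke_cmd = ["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t"]
commands = b"print('hello from the embedded Python')\nquit()\n"   # placeholder commands

for attempt in range(3):                          # retry up to three times
    p = Popen(nuke_cmd, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    out, _ = p.communicate(commands)
    if p.returncode == 0:                         # clean exit, stop retrying
        break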
To clarify some points:
As jro has mentioned, the right way is to use subprocess.communicate.
Yet, when feeding the stdin using subprocess.communicate with input, you need to initiate the subprocess with stdin=subprocess.PIPE according to the docs.
Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Also, qed has mentioned in the comments that for Python 3.4 you need to encode the string, meaning you need to pass bytes to input rather than a string. This is not entirely true. According to the docs, if the streams were opened in text mode, the input should be a string (the source is the same page).
If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
So, if the streams were not opened explicitly in text mode, then something like below should work:
import subprocess
command = ['myapp', '--arg1', 'value_for_arg1']
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate(input='some data'.encode())[0]
I've left the stderr value above deliberately as STDOUT as an example.
That being said, sometimes you might want the output of another process rather than building it up from scratch. Let's say you want to run the equivalent of echo -n 'CATCH\nme' | grep -i catch | wc -m. This should normally return the number of characters in 'CATCH' plus a newline character, which results in 6. The point of the echo here is to feed the CATCH\nme data to grep. So we can feed the data to grep's stdin as a variable in the Python subprocess chain, and then pass its stdout as a PIPE to the wc process's stdin (in the meantime, getting rid of the extra newline character):
import subprocess
what_to_catch = 'catch'
what_to_feed = 'CATCH\nme'
# We create the first subprocess, note that we need stdin=PIPE and stdout=PIPE
p1 = subprocess.Popen(['grep', '-i', what_to_catch], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We immediately run the first subprocess and get the result
# Note that we encode the data, otherwise we'd get a TypeError
p1_out = p1.communicate(input=what_to_feed.encode())[0]
# Well the result includes an '\n' at the end,
# if we want to get rid of it in a VERY hacky way
p1_out = p1_out.decode().strip().encode()
# We create the second subprocess, note that we need stdin=PIPE
p2 = subprocess.Popen(['wc', '-m'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We run the second subprocess feeding it with the first subprocess' output.
# We decode the output to convert to a string
# We still have a '\n', so we strip that out
output = p2.communicate(input=p1_out)[0].decode().strip()
This is somewhat different than the response here, where you pipe two processes directly without adding data directly in Python.
Hope that helps someone out.
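For reference, here is a sketch of piping the two processes directly (no intermediate variable), roughly equivalent to printf 'CATCH\nme' | grep -i catch | wc -m:
import subprocess

p1 = subprocess.Popen(['grep', '-i', 'catch'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-m'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdin.write(b'CATCH\nme')
p1.stdin.close()       # grep sees EOF and finishes
p1.stdout.close()      # let p1 receive SIGPIPE if p2 exits first
output = p2.communicate()[0].decode().strip()
print(output)          # '6'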
Since Python 3.5, there is the subprocess.run() function, which provides a convenient way to initialize and interact with Popen() objects. run() takes an optional input argument, through which you can pass things to stdin (like you would using Popen.communicate(), but all in one go).
Adapting jro's example to use run() would look like:
import subprocess
p = subprocess.run(['myapp'], input='data_to_write', capture_output=True, text=True)
After execution, p will be a CompletedProcess object. By setting capture_output to True, we make available a p.stdout attribute which gives us access to the output, if we care about it. text=True tells it to work with regular strings rather than bytes. If you want, you might also add the argument check=True to make it throw an error if the exit status (accessible regardless via p.returncode) isn't 0.
This is the "modern"/quick and easy way to do to this.
One can write data to the subprocess object on-the-fly, instead of collecting all the input in a string beforehand to pass through the communicate() method.
This example sends a list of animal names to the Unix utility sort, and sends the output to standard output.
import sys, subprocess
p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    p.stdin.write(v.encode() + b'\n')
p.communicate()
Note that writing to the process is done via p.stdin.write(v.encode()). I tried using
print(v.encode(), file=p.stdin), but that failed with the message TypeError: a bytes-like object is required, not 'str'. I haven't figured out how to get print() to work with this.
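One workaround, sketched below: open the pipe in text mode (universal_newlines=True, or text=True on Python 3.7+), so that p.stdin is a text stream and print(..., file=p.stdin) works:
import sys, subprocess

p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout,
                     universal_newlines=True)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    print(v, file=p.stdin)     # works because p.stdin is now a text stream
p.communicate()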
You can provide a file-like object to the stdin argument of subprocess.call().
The documentation for the Popen object applies here.
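For example, a sketch feeding a prepared command file (commands.txt is an assumed filename, and the nuke path is taken from the question) to the process's stdin:
import subprocess

with open('commands.txt', 'rb') as f:
    returncode = subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t"], stdin=f)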
To capture the output, you should instead use subprocess.check_output(), which takes similar arguments. From the documentation:
>>> subprocess.check_output(
... "ls non_existent_file; exit 0",
... stderr=subprocess.STDOUT,
... shell=True)
'ls: non_existent_file: No such file or directory\n'
So I have my .exe program that opens but i want to pass strings to it from my python script.
I'm opening the exe like this:
import subprocess
p = subprocess.Popen("E:\Work\my.exe", shell=True)
#let user fill in some tables
p.communicate("userInfo")
I want to pass a string to this program while having it just run in the background and not take over. Any ideas?
From the Python documentation for Using the subprocess module:
You don’t need shell=True to run a batch file, nor to run a
console-based executable.
From the Python documentation for Popen Objects:
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
Code example:
from subprocess import Popen, PIPE
p = Popen(r"E:\Work\my.exe", stdin=PIPE)
p.communicate("userInfo")
I am trying to execute a shell script (not a command) from Python:
main.py
-------
from subprocess import Popen
Process=Popen(['./childdir/execute.sh',str(var1),str(var2)],shell=True)
execute.sh
----------
echo $1  # does not print anything
echo $2  # does not print anything
var1 and var2 are strings that I am using as input to the shell script. Am I missing something, or is there another way to do it?
Referred: How to use subprocess popen Python
The problem is with shell=True. Either remove that argument, or pass all arguments as a string, as follows:
Process=Popen('./childdir/execute.sh %s %s' % (str(var1),str(var2),), shell=True)
The shell will only pass the arguments you provide in the 1st argument of Popen to the process, as it does the interpretation of arguments itself.
See a similar question answered here. What actually happens is your shell script gets no arguments, so $1 and $2 are empty.
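A sketch of the first alternative (removing shell=True so the list items reach the script as $1 and $2; var1 and var2 are the question's variables):
from subprocess import Popen

process = Popen(['./childdir/execute.sh', str(var1), str(var2)])
process.wait()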
Popen will inherit stdout and stderr from the Python script, so usually there's no need to provide the stdout= and stderr= arguments to Popen (unless you run the script with output redirection, such as >). You should do this only if you need to read the output inside the Python script and manipulate it somehow.
If all you need is to get the output (and don't mind running synchronously), I'd recommend trying check_output, as it is easier to get output than Popen:
output = subprocess.check_output(['./childdir/execute.sh',str(var1),str(var2)])
print(output)
Notice that check_output and check_call have the same rules for the shell= argument as Popen.
You actually are sending the arguments... if your shell script wrote a file instead of printing, you would see it. You need to communicate() to see the printed output from the script:
from subprocess import Popen, PIPE
Process = Popen(['./childdir/execute.sh', str(var1), str(var2)], shell=True, stdout=PIPE, stderr=PIPE)
print(Process.communicate())  # now you should see your output
If you want to send arguments to a shell script from a Python script in a simple way, you can use the os module:
import os
os.system('/path/shellscriptfile.sh {} {}'.format(str(var1), str(var2)))
If you have more arguments, add more curly braces and append the arguments.
In the shell script file, these are read as the positional arguments, and you can execute your commands accordingly.
I have a shell command 'fst-mor'. It takes an argument in the form of a file, e.g. NOUN.A, which is a lex file or something. Final command: fst-mor NOUN.A
It then produces following output:
analyze>INPUT_A_STRING_HERE
OUTPUT_HERE
Now I want to call fst-mor from my Python script, then input a string, and get the output back in the script.
So far I have:
import os
print os.system("fst-mor NOUN.A")
You want to capture the output of another command. Use the subprocess module for this.
import subprocess
output = subprocess.check_output(['fst-mor', 'NOUN.A'])
If your command requires interactive input, you have two options:
Use a subprocess.Popen() object, and set the stdin parameter to subprocess.PIPE and write the input to the stdin pipe available. For one input parameter, that's often enough. Study the documentation for the subprocess module for details, but the basic interaction is:
proc = subprocess.Popen(['fst-mor', 'NOUN.A'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, err = proc.communicate('INPUT_A_STRING_HERE')
Use the pexpect library to drive a process. This lets you create more complex interactions with a subprocess by looking for patterns in the output it generates:
import pexpect
py = pexpect.spawn('fst-mor NOUN.A')
py.expect('analyze>')
py.send('INPUT_A_STRING_HERE')
output = py.read()
py.close()
You could try:
from subprocess import Popen, PIPE
p = Popen(["fst-mor", "NOUN.A"], stdin=PIPE, stdout=PIPE)
output = p.communicate("INPUT_A_STRING_HERE")[0]
A sample that communicates with another process:
pipe = subprocess.Popen(['clisp'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(response, err) = pipe.communicate(b"(+ 1 1)\n(* 2 2)")
# only print the last 6 lines to chop off the REPL intro text.
# Obviously you can do whatever manipulations you feel are necessary
# to correctly grab the output here
print('\n'.join(response.decode().split('\n')[-6:]))
Note that communicate will close the streams after it runs, so you have to know all your commands ahead of time for this method to work. It seems like the pipe.stdout doesn't flush until stdin is closed? I'd be curious if there is a way around that I'm missing.
You should use the subprocess module.
In your example you might run:
subprocess.check_output(["fst-mor", "NOUN.A"])