I'm trying to time a process, and when I use the time keyword in the shell, I get nicer output, like:
real 0m0,430s
user 0m0,147s
sys 0m0,076s
/usr/bin/time, by contrast, gives different output. When I run the command through Python's subprocess library with subprocess.call('time command args', shell=True), I get the /usr/bin/time output instead of the keyword's. How can I use the shell keyword rather than the binary?
shell=True causes subprocess to use /bin/sh, not bash, and on many systems /bin/sh lacks bash's time keyword, so /usr/bin/time is found instead. You need the executable argument as well:
subprocess.call('time command args', shell=True, executable='/bin/bash')
Adjust the path to bash as necessary.
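If you also want to capture those timing lines in Python rather than just see them on the terminal, note that bash's time keyword writes its report to stderr. A minimal sketch, assuming Python 3.7+ and with 'sleep 1' standing in for your real command:
import subprocess

# bash's `time` keyword reports on stderr; capture it to read the
# real/user/sys lines from Python ('sleep 1' is a placeholder command).
result = subprocess.run('time sleep 1', shell=True, executable='/bin/bash',
                        stderr=subprocess.PIPE, text=True)
print(result.stderr)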
To launch a Python script (it is needed for running an OLED display) from the terminal, I have to use the following bash command: python demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20. The parameters after .py are important; without them the script runs with default settings, and in my case it will not launch with default settings.
The problem arises when I need to launch my script from another Python script instead of using bash commands in the terminal. To launch it from a parent script, I have used:
import subprocess # to use subprocess
p = subprocess.Popen(['python', 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20'])
in my parent script but I got an error stating:
python: can't open file 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20': [Errno 2] No such file or directory
I suspect that adding the parameters --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20 after .py may be causing difficulty in launching the script. As mentioned, these parameters are otherwise essential for me to include for launching with bash commands on terminal. How can I use subprocess with the required parameters to launch this script?
The subprocess library is interpreting all of your arguments, including demo_oled_v01.py, as a single argument to python. That's why python complains that it cannot locate a file of that name. Try running it as:
p = subprocess.Popen(['python', 'demo_oled_v01.py', '--display',
'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi',
'--gpio-data-command', '20'])
See the Popen documentation for more information.
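As a variant, a sketch that reuses the interpreter running the parent script via sys.executable instead of relying on whichever python is first on PATH (assuming that is what you want):
import subprocess
import sys

# Same call, but with the parent's own interpreter; each argument is a
# separate list element.
p = subprocess.Popen([sys.executable, 'demo_oled_v01.py',
                      '--display', 'ssd1351',
                      '--width', '128', '--height', '128',
                      '--interface', 'spi',
                      '--gpio-data-command', '20'])
p.wait()  # optional: block until the child exits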
This started as a comment thread, but got too long and complex.
Calling Python as a subprocess of Python is an antipattern. You can often fruitfully avoid this by refactoring your Python code so that your program can call the other program as a simple library (or module, or package, or what have you -- there is a bit of terminology here which you'll want to understand more properly ... eventually).
Having said that, there are scenarios where the subprocess needs to be a subprocess (perhaps it is designed to do its own signal handling, for example) so don't apply this blindly.
If you have a script like demo.py which contains something like
def really_demo(something, other, message='No message'):
    # .... some functionality here ...
    pass

def main():
    import argparse
    parser = argparse.ArgumentParser(description='Basic boilerplate, ignore the details.')
    parser.add_argument('--something', dest='something')  # store argument in args.something
    parser.add_argument('--other', dest='other')  # ends up in args.other
    parser.add_argument('--message', dest='message')  # needed for the call below
    # ... etc etc etc more options
    args = parser.parse_args()
    # This is the beef: once the arguments are parsed, pass them on
    really_demo(args.something, args.other, message=args.message)

if __name__ == '__main__':
    main()
Observe how, when you run the script from the command line, __name__ will be '__main__' and so it will plunge into the main() function, which picks apart the command line and then calls some other function -- in this case, really_demo(). Now, if you are calling this code from an already running Python, there is no need to collect the arguments into a list and pass them to a new process. Just have your Python script import the function you want to call from the script, and call it with your arguments.
In other words, if you are currently doing
subprocess.call(['demo.py', '--something', 'foo', '--other', value, '--message', 'whatever'])
you can replace the subprocess call with
from demo import really_demo
really_demo('foo', value, message='whatever')
Notice how you are bypassing the main() function and all the ugly command-line parsing, and simply calling another Python function. (Pay attention to the order and names of the arguments; they may be quite different from what the command-line parser accepts.) The fact that it is defined in a different file is a minor detail which import handles for you, and the fact that the file contains other functions is something you can ignore (or perhaps exploit more fully if, for example, you want to access internal functions which are not exposed via the command-line interface in a way which is convenient for you).
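If you would rather keep reusing main()'s command-line parsing instead of calling really_demo() directly, a common middle ground (a sketch, assuming you are free to edit demo.py) is to let main() accept an explicit argument list; parse_args() falls back to sys.argv only when it is passed None:
# In demo.py:
def main(argv=None):
    import argparse
    parser = argparse.ArgumentParser(description='Basic boilerplate, ignore the details.')
    parser.add_argument('--something', dest='something')
    parser.add_argument('--other', dest='other')
    parser.add_argument('--message', dest='message')
    args = parser.parse_args(argv)  # parse_args(None) reads sys.argv[1:]
    really_demo(args.something, args.other, message=args.message)

# In the parent script:
from demo import main
main(['--something', 'foo', '--other', 'bar', '--message', 'whatever'])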
As an optimization, Python won't import something twice, so you really need to make sure the functionality you need is not run when you import it. Commonly, you import once, at the beginning of your script (though technically you can do it inside the def which needs it, for example, if there is only one place in your code which depends on the import) and then you call the functions you got from the import as many or as few times as you need them.
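To make the caching concrete, a tiny sketch (this assumes demo.py prints something at module level, which the example above does not):
import demo   # first import: runs the module body
import demo   # second import: Python returns the cached module; nothing re-runs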
This is a lightning recap of a very common question. If this doesn't get you started in the right direction, you should be able to find many existing questions on Stack Overflow about various aspects of this refactoring task.
Add the full path to the python script and separate all parameters. Example:
import subprocess
p = subprocess.Popen(['python', 'FULL_PATH_TO_FILE/demo_oled_v01.py', '--display', 'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi', '--gpio-data-command', '20'])
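If you would rather not hard-code the path, a sketch that derives it from the parent script's own location (assuming both files live in the same directory):
import os
import subprocess
import sys

# Build the script's path relative to this file instead of hard-coding it.
here = os.path.dirname(os.path.abspath(__file__))
script = os.path.join(here, 'demo_oled_v01.py')
p = subprocess.Popen([sys.executable, script,
                      '--display', 'ssd1351',
                      '--width', '128', '--height', '128',
                      '--interface', 'spi',
                      '--gpio-data-command', '20'])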
For Windows and Python 3.x, you could:
Use a Windows shell (cmd.exe, most probably the default on Windows):
result = subprocess.Popen('cd C:\\Users\\PathToMyPythonScript '
                          '&& python myPythonScript.py value1ofParam1 value2ofParam2',
                          shell=True, universal_newlines=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Or stay in your Python environment (remove shell=True); you can write:
result = subprocess.Popen(["C:\Windows\System32\cmd.exe", "/k",
"cd", "C:\\Users\\PathToMyPythonScript",
"&&", "dir", "&&", "python", "myPythonScript.py",
"value1ofParam1", "value2ofParam2"], universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
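As an aside, a sketch of an alternative that avoids the cd (and cmd.exe) entirely: subprocess accepts a cwd parameter that sets the child's working directory.
import subprocess

# cwd changes the working directory for the child process only.
result = subprocess.Popen(['python', 'myPythonScript.py',
                           'value1ofParam1', 'value2ofParam2'],
                          cwd=r'C:\Users\PathToMyPythonScript',
                          universal_newlines=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)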
Example of a script file you can call to try this out (the myPythonScript.py):
# =============================================================================
# This script just outputs the arguments you've passed to it
import sys
print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))
# =============================================================================
I'm having some trouble understanding the subprocess module in Python 2.7. I have some commands in a shell script that I'm trying to convert into Python, svn export -r 5 ... for example, but I don't want to depend on a library such as pysvn to do this. The solution (to my understanding) is to use subprocess and run each individual command that would be in the shell script. Should this be achieved with subprocess.call("svn export -r 5", shell=True)? Or is Popen what I should be looking at? I know it's been said you should avoid shell=True, but there is no security concern or possible user error in my case. Any advice would be appreciated.
subprocess.call is just a thin wrapper around subprocess.Popen that waits for the process to complete:
def call(*args, **kwargs):
    return Popen(*args, **kwargs).wait()
The only reason to use the shell to run your command is if you want to run some more or less complicated shell command. With a single simple command and its arguments, it is better to pass a single list of strings consisting of the command name and its arguments.
subprocess.call(["svn", "export", "-r", "5"])
If you were writing a function that could, for example, take a revision number as an argument, you can pass that to svn export as long as you ensure that it is a string:
def svn_export(r):
    subprocess.call(["svn", "export", "-r", str(r)])
I'm trying to execute unfluff inside a Python script using subprocess, but the result is always empty.
If I execute it from the shell, it works fine. Here is an example:
From the unfluff documentation I can extract the contents of a webpage through:
curl -s 'http://observador.pt/2016/10/29/espanha-e-portugal-sao-unicos-paises-da-ue-sem-populismo-xenofobo-diz-antonio-costa' | unfluff
This results in a nice json with a good content extraction.
Now, in python I'm using the following:
import subprocess
url = 'http://observador.pt/2016/10/29/espanha-e-portugal-sao-unicos-paises-da-ue-sem-populismo-xenofobo-diz-antonio-costa'
p = subprocess.Popen(['curl','-s',url,'|','unfluff'],stdout=subprocess.PIPE)
print p.communicate()[0]
which results in an empty string.
So, what am I doing wrong?
By using | in your command you're asking for a pipe, which is a feature of the OS shell, so you have to enable shell=True. With shell=True, pass the command line as a single string rather than a list:
p = subprocess.Popen('curl -s ' + url + ' | unfluff', stdout=subprocess.PIPE, shell=True)
Note: since you have Popen, you could do this in a much cleaner way by chaining two Popen instances, for instance like this:
p1 = subprocess.Popen(['curl', '-s', url], stdout=subprocess.PIPE)
p2 = subprocess.Popen('unfluff', stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
print(p2.communicate()[0])
(Then you don't need shell=True for the pipe itself. EDIT: you may still need shell=True on the second Popen, probably because unfluff is not a real executable and so needs a shell to start.)
The rule of thumb: if your command line uses shell features such as pipes, you need shell=True, but the command then depends on the OS shell and is less efficient. So if it works without it, that's better.
I am using a Python script to invoke a Java virtual machine. The following command works:
subprocess.call(["./rvm"], shell=False) # works
subprocess.call(["./rvm xyz"], shell=True) # works
But,
subprocess.call(["./rvm xyz"], shell=False) # not working
does not work. The Python documentation advises avoiding shell=True.
You need to split the commands into separate strings:
subprocess.call(["./rvm", "xyz"], shell=False)
A string will work when shell=True, but you need a list of args when shell=False.
The shlex module is more useful for complicated commands and for dealing with quoted input, but it is good to learn about:
import shlex
cmd = "python foo.py"
subprocess.call(shlex.split(cmd), shell=False)
See the shlex tutorial for more details.
If you want to use shell=True, this is legit, otherwise it would have been removed from the standard library. The documentation doesn't say to avoid it, it says:
Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
But in your case you are not constructing the command from user input, your command is constant, so your code doesn't present the shell injection issue. You are in control of what the shell will execute, and if your code is not malicious per se, you are safe.
Example of shell injection
To explain why the shell injection is so bad, this is the example used in the documentation:
>>> from subprocess import call
>>> filename = input("What file would you like to display?\n")
What file would you like to display?
non_existent; rm -rf / #
>>> call("cat " + filename, shell=True) # Uh-oh. This will end badly...
Edit
With the additional information you have provided by editing the question, stick to Padraic's answer. You should use shell=True only when necessary.
In addition to Enrico.bacis' answer, there are two ways to call programs. With shell=True, give it a full command string. With shell=False, give it a list.
If you do shell tricks like *.jpg or 2> /dev/null, use shell=True; but in general I suggest shell=False -- it's less error-prone, as Enrico said.
source
import subprocess
subprocess.check_call(['/bin/echo', 'beer'], shell=False)
subprocess.check_call('/bin/echo beer', shell=True)
output
beer
beer
Instead of passing just the file path, add the word python in front of it, provided that you've added the Python path to your environment variables. If you're not sure, you can always rerun the Python installer, provided that you have a recent version of Python.
Here's what I mean:
import subprocess
subprocess.Popen('python "C:/Path/To/File/Here.py"')
import os
import subprocess
proc = subprocess.Popen(['ls','*.bc'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out,err = proc.communicate()
print out
This script should print all the files with the .bc suffix; however, it returns an empty list. If I do ls *.bc manually in the command line it works. Doing ['ls','test.bc'] inside the script works as well, but for some reason the star symbol doesn't work. Any ideas?
You need to supply shell=True to execute the command through a shell interpreter.
If you do that, however, you can no longer supply a list as the first argument, because with shell=True the extra list items would be treated as arguments to the shell itself, not to your command. Instead, specify the raw command line as you want it to be passed to the shell:
proc = subprocess.Popen('ls *.bc', shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
Expanding the * glob is the shell's job, but by default subprocess does not send your commands via a shell, so the command (the first argument, ls) is executed directly, and a literal *.bc is passed to it as an argument.
This is a good thing; see the warning block in the "Frequently Used Arguments" section of the subprocess docs. It mainly discusses security implications, but it can also help you avoid silly programming errors (as there are no magic shell characters to worry about).
My main complaint with shell=True is it usually implies there is a better way to go about the problem - with your example, you should use the glob module:
import glob
files = glob.glob("*.bc")
print files # ['file1.bc', 'file2.bc']
This will be quicker (no process startup overhead), more reliable, and cross-platform (it does not depend on the platform having an ls command).
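And if the expanded list is ultimately destined for another command, a sketch of expanding in Python and passing the matches through:
import glob
import subprocess

# Expand the pattern in Python, then hand the matches to the command
# as individual arguments; no shell involved.
files = glob.glob("*.bc")
subprocess.call(["ls", "-l"] + files)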
Besides setting shell=True, also make sure that your path is not quoted; otherwise it will not be expanded by the shell.
If your path may have special characters, you will have to escape them manually.
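A quick sketch of the difference, assuming some .bc files exist in the current directory:
import subprocess

subprocess.call('ls *.bc', shell=True)    # the shell expands the glob
subprocess.call('ls "*.bc"', shell=True)  # quoted: passed literally, no expansion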