scp with Python3 subprocess [duplicate]

When using subprocess.Popen(args, shell=True) to run "gcc --version" (just as an example), on Windows we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
So it's nicely printing out the version as I expect. But on Linux we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
Because gcc hasn't received the --version option.
The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, "If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments." IMHO the Windows way is better, because it allows you to treat Popen(arglist) calls the same as Popen(arglist, shell=True) ones.
Why the difference between Windows and Linux here?

Actually on Windows, it does use cmd.exe when shell=True - it prepends cmd.exe /c (it actually looks up the COMSPEC environment variable but defaults to cmd.exe if not present) to the shell arguments. (On Windows 95/98 it uses the intermediate w9xpopen program to actually launch the command).
So the strange implementation is actually the UNIX one, which does the following (where each space separates a different argument):
/bin/sh -c gcc --version
It looks like the correct implementation (at least on Linux) would be:
/bin/sh -c "gcc --version" gcc --version
Since this would set the command string from the quoted parameters, and pass the other parameters successfully.
From the sh man page section for -c:
Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
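You can observe this behaviour directly from Python (a minimal sketch, POSIX only): the extra list items become the shell's positional parameters rather than arguments to the command.
import subprocess
# $0 and $1 receive the extra list items, exactly as the man page describes
subprocess.call(['echo "got $0 and $1"', 'gcc', '--version'], shell=True)
# prints: got gcc and --version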
This patch seems to fairly simply do the trick:
--- subprocess.py.orig 2009-04-19 04:43:42.000000000 +0200
+++ subprocess.py 2009-08-10 13:08:48.000000000 +0200
@@ -990,7 +990,7 @@
             args = list(args)
 
         if shell:
-            args = ["/bin/sh", "-c"] + args
+            args = ["/bin/sh", "-c"] + [" ".join(args)] + args
 
         if executable is None:
             executable = args[0]

From the subprocess.py source:
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.
That doesn't answer why, just clarifies that you are seeing the expected behavior.
The "why" is probably that on UNIX-like systems, command arguments are actually passed through to applications (using the exec* family of calls) as an array of strings. In other words, the calling process decides what goes into EACH command line argument. Whereas when you tell it to use a shell, the calling process actually only gets the chance to pass a single command line argument to the shell to execute: The entire command line that you want executed, executable name and arguments, as a single string.
But on Windows, the entire command line (according to the above documentation) is passed as a single string to the child process. If you look at the CreateProcess API documentation, you will notice that it expects all of the command line arguments to be concatenated together into a big string (hence the call to list2cmdline).
Plus there is the fact that on UNIX-like systems there actually is a shell that can do useful things, so I suspect that the other reason for the difference is that on Windows, shell=True merely prepends cmd.exe /c to the same joined command line, which is why it works the way you are seeing. The only way to make the two systems act identically would be for it to simply drop all of the command line arguments when shell=True on Windows.

The reason for the UNIX behaviour of shell=True is to do with quoting. When we write a shell command, it will be split at spaces, so we have to quote some arguments:
cp "My File" "New Location"
This leads to problems when our arguments contain quotes, which requires escaping:
grep -r "\"hello\"" .
Sometimes we can get awful situations where \ must be escaped too!
Of course, the real problem is that we're trying to use one string to specify multiple strings. When calling system commands, most programming languages avoid this by allowing us to send multiple strings in the first place, hence:
Popen(['cp', 'My File', 'New Location'])
Popen(['grep', '-r', '"hello"'])
Sometimes it can be nice to run "raw" shell commands; for example, if we're copy-pasting something from a shell script or a Web site, and we don't want to convert all of the horrible escaping manually. That's why the shell=True option exists:
Popen(['cp "My File" "New Location"'], shell=True)
Popen(['grep -r "\"hello\"" .'], shell=True)
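If you do need to build such a shell string out of Python values, shlex.quote() (available since Python 3.3) can do the escaping for you. A minimal sketch, with a made-up filename:
import shlex
import subprocess
filename = "My File"  # hypothetical value, possibly containing spaces or quotes
cmd = 'cp {} "New Location"'.format(shlex.quote(filename))
subprocess.Popen(cmd, shell=True)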
I'm not familiar with Windows so I don't know how or why it behaves differently.


How can we execute the following bash commands in python linux [duplicate]

On my local machine, I run a python script which contains this line
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
os.system(bashCommand)
This works fine.
Then I run the same code on a server and I get the following error message
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "/usr/bin/cwm", line 48, in <module>
    from swap import diag
ImportError: No module named swap
So what I did then is insert a print bashCommand, which prints the command in the terminal before running it with os.system().
Of course, I get the error again (caused by os.system(bashCommand)), but before that error it prints the command in the terminal. Then I just copied that output, pasted it into the terminal, hit enter, and it works...
Does anyone have a clue what's going on?
Don't use os.system. It has been deprecated in favor of subprocess. From the docs: "This module intends to replace several older modules and functions: os.system, os.spawn".
Like in your case:
import subprocess
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
To somewhat expand on the earlier answers here, there are a number of details which are commonly overlooked.
Prefer subprocess.run() over subprocess.check_call() and friends over subprocess.call() over subprocess.Popen() over os.system() over os.popen()
Understand and probably use text=True, aka universal_newlines=True.
Understand the meaning of shell=True or shell=False and how it changes quoting and the availability of shell conveniences.
Understand differences between sh and Bash
Understand how a subprocess is separate from its parent, and generally cannot change the parent.
Avoid running the Python interpreter as a subprocess of Python.
These topics are covered in some more detail below.
Prefer subprocess.run() or subprocess.check_call()
The subprocess.Popen() function is a low-level workhorse but it is tricky to use correctly and you end up copy/pasting multiple lines of code ... which conveniently already exist in the standard library as a set of higher-level wrapper functions for various purposes, which are presented in more detail in the following.
Here's a paragraph from the documentation:
The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.
Unfortunately, the availability of these wrapper functions differs between Python versions.
subprocess.run() was officially introduced in Python 3.5. It is meant to replace all of the following.
subprocess.check_output() was introduced in Python 2.7 / 3.1. It is basically equivalent to subprocess.run(..., check=True, stdout=subprocess.PIPE).stdout
subprocess.check_call() was introduced in Python 2.5. It is basically equivalent to subprocess.run(..., check=True)
subprocess.call() was introduced in Python 2.4 in the original subprocess module (PEP-324). It is basically equivalent to subprocess.run(...).returncode
High-level API vs subprocess.Popen()
The refactored and extended subprocess.run() is more logical and more versatile than the older legacy functions it replaces. It returns a CompletedProcess object which has various attributes that allow you to retrieve the exit status, the standard output, and a few other results and status indicators from the finished subprocess.
subprocess.run() is the way to go if you simply need a program to run and return control to Python. For more involved scenarios (background processes, perhaps with interactive I/O with the Python parent program) you still need to use subprocess.Popen() and take care of all the plumbing yourself. This requires a fairly intricate understanding of all the moving parts and should not be undertaken lightly. The simpler Popen object represents the (possibly still-running) process which needs to be managed from your code for the remainder of the lifetime of the subprocess.
It should perhaps be emphasized that just subprocess.Popen() merely creates a process. If you leave it at that, you have a subprocess running concurrently alongside Python, so a "background" process. If it doesn't need to do input or output or otherwise coordinate with you, it can do useful work in parallel with your Python program.
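A minimal sketch of that pattern (sleep here is just a stand-in for any long-running command):
import subprocess
proc = subprocess.Popen(['sleep', '5'])  # returns immediately; the child runs concurrently
# ... Python is free to do other work here ...
proc.wait()  # eventually reap the child and collect its exit status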
Avoid os.system() and os.popen()
Since time eternal (well, since Python 2.5) the os module documentation has contained the recommendation to prefer subprocess over os.system():
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.
The problems with system() are that it's obviously system-dependent and doesn't offer ways to interact with the subprocess. It simply runs, with standard output and standard error outside of Python's reach. The only information Python receives back is the exit status of the command (zero means success, though the meaning of non-zero values is also somewhat system-dependent).
PEP-324 (which was already mentioned above) contains a more detailed rationale for why os.system is problematic and how subprocess attempts to solve those issues.
os.popen() used to be even more strongly discouraged:
Deprecated since version 2.6: This function is obsolete. Use the subprocess module.
However, since sometime in Python 3, it has been reimplemented to simply use subprocess, and redirects to the subprocess.Popen() documentation for details.
Understand and usually use check=True
You'll also notice that subprocess.call() has many of the same limitations as os.system(). In regular use, you should generally check whether the process finished successfully, which subprocess.check_call() and subprocess.check_output() do (where the latter also returns the standard output of the finished subprocess). Similarly, you should usually use check=True with subprocess.run() unless you specifically need to allow the subprocess to return an error status.
In practice, with check=True or subprocess.check_*, Python will throw a CalledProcessError exception if the subprocess returns a nonzero exit status.
A common error with subprocess.run() is to omit check=True and be surprised when downstream code fails if the subprocess failed.
On the other hand, a common problem with check_call() and check_output() was that users who blindly used these functions were surprised when the exception was raised e.g. when grep did not find a match. (You should probably replace grep with native Python code anyway, as outlined below.)
All things counted, you need to understand how shell commands return an exit code, and under what conditions they will return a non-zero (error) exit code, and make a conscious decision how exactly it should be handled.
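A sketch of making that decision explicit (false is simply a command which always exits with a failure status):
import subprocess
try:
    subprocess.run(['false'], check=True)
except subprocess.CalledProcessError as exc:
    print('command failed with exit status', exc.returncode)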
Understand and probably use text=True aka universal_newlines=True
Since Python 3, strings internal to Python are Unicode strings. But there is no guarantee that a subprocess generates Unicode output, or strings at all.
(If the differences are not immediately obvious, Ned Batchelder's Pragmatic Unicode is recommended, if not outright obligatory, reading. There is a 36-minute video presentation behind the link if you prefer, though reading the page yourself will probably take significantly less time.)
Deep down, Python has to fetch a bytes buffer and interpret it somehow. If it contains a blob of binary data, it shouldn't be decoded into a Unicode string, because that's error-prone and bug-inducing behavior - precisely the sort of pesky behavior which riddled many Python 2 scripts, before there was a way to properly distinguish between encoded text and binary data.
With text=True, you tell Python that you, in fact, expect back textual data in the system's default encoding, and that it should be decoded into a Python (Unicode) string to the best of Python's ability (usually UTF-8 on any moderately up to date system, except perhaps Windows?)
If that's not what you request back, Python will just give you bytes strings in the stdout and stderr strings. Maybe at some later point you do know that they were text strings after all, and you know their encoding. Then, you can decode them.
normal = subprocess.run([external, arg],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        check=True,
                        text=True)
print(normal.stdout)

convoluted = subprocess.run([external, arg],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            check=True)
# You have to know (or guess) the encoding
print(convoluted.stdout.decode('utf-8'))
Python 3.7 introduced the shorter and more descriptive and understandable alias text for the keyword argument which was previously somewhat misleadingly called universal_newlines.
Understand shell=True vs shell=False
With shell=True you pass a single string to your shell, and the shell takes it from there.
With shell=False you pass a list of arguments to the OS, bypassing the shell.
When you don't have a shell, you save a process and get rid of a fairly substantial amount of hidden complexity, which may or may not harbor bugs or even security problems.
On the other hand, when you don't have a shell, you don't have redirection, wildcard expansion, job control, and a large number of other shell features.
A common mistake is to use shell=True and then still pass Python a list of tokens, or vice versa. This happens to work in some cases, but is really ill-defined and could break in interesting ways.
# XXX AVOID THIS BUG
buggy = subprocess.run('dig +short stackoverflow.com')

# XXX AVOID THIS BUG TOO
broken = subprocess.run(['dig', '+short', 'stackoverflow.com'],
                        shell=True)

# XXX DEFINITELY AVOID THIS
pathological = subprocess.run(['dig +short stackoverflow.com'],
                              shell=True)

correct = subprocess.run(['dig', '+short', 'stackoverflow.com'],
                         # Probably don't forget these, too
                         check=True, text=True)

# XXX Probably better avoid shell=True
# but this is nominally correct
fixed_but_fugly = subprocess.run('dig +short stackoverflow.com',
                                 shell=True,
                                 # Probably don't forget these, too
                                 check=True, text=True)
The common retort "but it works for me" is not a useful rebuttal unless you understand exactly under what circumstances it could stop working.
To briefly recap, correct usage looks like
subprocess.run("string for 'the shell' to parse", shell=True)
# or
subprocess.run(["list", "of", "tokenized strings"]) # shell=False
If you want to avoid the shell but are too lazy or unsure of how to parse a string into a list of tokens, notice that shlex.split() can do this for you.
subprocess.run(shlex.split("no string for 'the shell' to parse")) # shell=False
# equivalent to
# subprocess.run(["no", "string", "for", "the shell", "to", "parse"])
The regular split() will not work here, because it doesn't preserve quoting. In the example above, notice how "the shell" is a single string.
Refactoring Example
Very often, the features of the shell can be replaced with native Python code. Simple Awk or sed scripts should probably just be translated to Python instead.
To partially illustrate this, here is a typical but slightly silly example which involves many shell features.
cmd = '''while read -r x;
    do ping -c 3 "$x" | grep 'min/avg/max'
done <hosts.txt'''

# Trivial but horrible
results = subprocess.run(
    cmd, shell=True, universal_newlines=True, check=True,
    stdout=subprocess.PIPE)
print(results.stdout)
# Reimplement with shell=False
with open('hosts.txt') as hosts:
    for host in hosts:
        host = host.rstrip('\n')  # drop newline
        ping = subprocess.run(
            ['ping', '-c', '3', host],
            text=True,
            stdout=subprocess.PIPE,
            check=True)
        for line in ping.stdout.split('\n'):
            if 'min/avg/max' in line:
                print('{}: {}'.format(host, line))
Some things to note here:
With shell=False you don't need the quoting that the shell requires around strings. Putting quotes anyway is probably an error.
It often makes sense to run as little code as possible in a subprocess. This gives you more control over execution from within your Python code.
Having said that, complex shell pipelines are tedious and sometimes challenging to reimplement in Python.
The refactored code also illustrates just how much the shell really does for you with a very terse syntax -- for better or for worse. Python says explicit is better than implicit, but the Python code is rather verbose and arguably looks more complex than it really is. On the other hand, it offers a number of points where you can grab control in the middle of something else, as trivially exemplified by the enhancement that we can easily include the host name along with the shell command output. (This is by no means challenging to do in the shell, either, but at the expense of yet another diversion and perhaps another process.)
Common Shell Constructs
For completeness, here are brief explanations of some of these shell features, and some notes on how they can perhaps be replaced with native Python facilities.
Globbing aka wildcard expansion can be replaced with glob.glob() or very often with simple Python string comparisons like for file in os.listdir('.'): if not file.endswith('.png'): continue (see the sketch after this list). Bash has various other expansion facilities like .{png,jpg} brace expansion and {1..100}, as well as tilde expansion (~ expands to your home directory, and more generally ~account to the home directory of another user)
Shell variables like $SHELL or $my_exported_var can sometimes simply be replaced with Python variables. Exported shell variables are available as e.g. os.environ['SHELL'] (the meaning of export is to make the variable available to subprocesses -- a variable which is not available to subprocesses will obviously not be available to Python running as a subprocess of the shell, or vice versa. The env= keyword argument to subprocess methods allows you to define the environment of the subprocess as a dictionary, so that's one way to make a Python variable visible to a subprocess). With shell=False you will need to understand how to remove any quotes; for example, cd "$HOME" is equivalent to os.chdir(os.environ['HOME']) without quotes around the directory name. (Very often cd is not useful or necessary anyway, and many beginners omit the double quotes around the variable and get away with it until one day ...)
Redirection allows you to read from a file as your standard input, and write your standard output to a file. grep 'foo' <inputfile >outputfile opens outputfile for writing and inputfile for reading, and passes its contents as standard input to grep, whose standard output then lands in outputfile. This is not generally hard to replace with native Python code.
Pipelines are a form of redirection. echo foo | nl runs two subprocesses, where the standard output of echo is the standard input of nl (on the OS level, in Unix-like systems, this is a single file handle). If you cannot replace one or both ends of the pipeline with native Python code, perhaps think about using a shell after all, especially if the pipeline has more than two or three processes (though look at the pipes module in the Python standard library or a number of more modern and versatile third-party competitors).
Job control lets you interrupt jobs, run them in the background, return them to the foreground, etc. The basic Unix signals to stop and continue a process are of course available from Python, too. But jobs are a higher-level abstraction in the shell which involve process groups etc which you have to understand if you want to do something like this from Python.
Quoting in the shell is potentially confusing until you understand that everything is basically a string. So ls -l / is equivalent to 'ls' '-l' '/' but the quoting around literals is completely optional. Unquoted strings which contain shell metacharacters undergo parameter expansion, whitespace tokenization and wildcard expansion; double quotes prevent whitespace tokenization and wildcard expansion but allow parameter expansions (variable substitution, command substitution, and backslash processing). This is simple in theory but can get bewildering, especially when there are several layers of interpretation (a remote shell command, for example).
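To illustrate the first two items in the list above, here is a minimal native-Python sketch (the .png filter and the fallback shell are illustrative only):
import glob
import os
png_files = glob.glob('*.png')  # instead of the shell wildcard *.png
home = os.path.expanduser('~')  # instead of tilde expansion
shell = os.environ.get('SHELL', '/bin/sh')  # instead of "$SHELL", with a fallback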
Understand differences between sh and Bash
subprocess runs your shell commands with /bin/sh unless you specifically request otherwise (except of course on Windows, where it uses the value of the COMSPEC variable). This means that various Bash-only features like arrays, [[ etc are not available.
If you need to use Bash-only syntax, you can
pass in the path to the shell as executable='/bin/bash' (where of course if your Bash is installed somewhere else, you need to adjust the path).
subprocess.run('''
    # This for loop syntax is Bash only
    for((i=1;i<=$#;i++)); do
        # Arrays are Bash-only
        array[i]+=123
    done''',
    shell=True, check=True,
    executable='/bin/bash')
A subprocess is separate from its parent, and cannot change it
A somewhat common mistake is doing something like
subprocess.run('cd /tmp', shell=True)
subprocess.run('pwd', shell=True) # Oops, doesn't print /tmp
The same thing will happen if the first subprocess tries to set an environment variable, which of course will have disappeared when you run another subprocess, etc.
A child process runs completely separate from Python, and when it finishes, Python has no idea what it did (apart from the vague indicators that it can infer from the exit status and output from the child process). A child generally cannot change the parent's environment; it cannot set a variable, change the working directory, or, in so many words, communicate with its parent without cooperation from the parent.
The immediate fix in this particular case is to run both commands in a single subprocess:
subprocess.run('cd /tmp; pwd', shell=True)
though obviously this particular use case isn't very useful; instead, use the cwd keyword argument, or simply os.chdir() before running the subprocess. Similarly, for setting a variable, you can manipulate the environment of the current process (and thus also its children) via
os.environ['foo'] = 'bar'
or pass an environment setting to a child process with
subprocess.run('echo "$foo"', shell=True, env={'foo': 'bar'})
(not to mention the obvious refactoring subprocess.run(['echo', 'bar']); but echo is a poor example of something to run in a subprocess in the first place, of course).
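One caveat worth spelling out: env= replaces the child's entire environment, so if you only want to add one variable, merge in the current environment first. A sketch:
import os
import subprocess
subprocess.run('echo "$foo"', shell=True,
               env={**os.environ, 'foo': 'bar'})  # keep the existing environment, too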
Don't run Python from Python
This is slightly dubious advice; there are certainly situations where it does make sense or is even an absolute requirement to run the Python interpreter as a subprocess from a Python script. But very frequently, the correct approach is simply to import the other Python module into your calling script and call its functions directly.
If the other Python script is under your control, and it isn't a module, consider turning it into one. (This answer is too long already so I will not delve into details here.)
If you need parallelism, you can run Python functions in subprocesses with the multiprocessing module. There is also threading which runs multiple tasks in a single process (which is more lightweight and gives you more control, but also more constrained in that threads within a process are tightly coupled, and bound to a single GIL.)
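A minimal multiprocessing sketch, running a Python function in worker processes instead of spawning a new interpreter via subprocess:
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    with Pool() as pool:
        print(pool.map(square, range(10)))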
Call it with subprocess
import subprocess
subprocess.Popen("cwm --rdf test.rdf --ntriples > test.nt")
The error you are getting seems to be because there is no swap module on the server; you should install swap on the server, then run the script again.
You can also use the bash program with the -c parameter to execute the commands:
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
output = subprocess.check_output(['bash','-c', bashCommand])
You can use subprocess, but I always felt that it was not a 'Pythonic' way of doing it. So I created Sultan (shameless plug) that makes it easy to run command line functions.
https://github.com/aeroxis/sultan
Alternatively, you can use os.popen.
Example:
import os
command = os.popen('ls -al')
print(command.read())
print(command.close())
Output:
total 16
drwxr-xr-x 2 root root 4096 ago 13 21:53 .
drwxr-xr-x 4 root root 4096 ago 13 01:50 ..
-rw-r--r-- 1 root root 1278 ago 13 21:12 bot.py
-rw-r--r-- 1 root root 77 ago 13 21:53 test.py
None
According to the error you are missing a package named swap on the server. This /usr/bin/cwm requires it. If you're on Ubuntu/Debian, install python-swap using aptitude.
To run the command without a shell, pass the command as a list and implement the redirection in Python using subprocess:
#!/usr/bin/env python
import subprocess
with open('test.nt', 'wb', 0) as file:
    subprocess.check_call("cwm --rdf test.rdf --ntriples".split(),
                          stdout=file)
Note: no > test.nt at the end. stdout=file implements the redirection.
To run the command using the shell in Python, pass the command as a string and enable shell=True:
#!/usr/bin/env python
import subprocess
subprocess.check_call("cwm --rdf test.rdf --ntriples > test.nt",
shell=True)
Here the shell is responsible for the output redirection (> test.nt is in the command).
To run a bash command that uses bashisms, specify the bash executable explicitly e.g., to emulate bash process substitution:
#!/usr/bin/env python
import subprocess
subprocess.check_call('program <(command) <(another-command)',
                      shell=True, executable='/bin/bash')
Copy-paste this:
from typing import Any
import subprocess

def run_bash_command(cmd: str) -> Any:
    process = subprocess.Popen(cmd.split(),
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)  # capture stderr so the check below works
    output, error = process.communicate()
    if error:
        raise Exception(error)
    return output
subprocess.Popen() is preferred over os.system() as it offers more control and visibility. However, if you find subprocess.Popen() too verbose or complex, peasyshell is a small wrapper I wrote on top of it, which makes it easy to interact with bash from Python.
https://github.com/davidohana/peasyshell
The pythonic way of doing this is using subprocess.Popen
subprocess.Popen takes a list where the first element is the command to be run followed by any command line arguments.
As an example:
import subprocess

args = ['echo', 'Hello!']
subprocess.Popen(args)  # same as running `echo Hello!` on the command line

args2 = ['echo', '-v', '"Hello Again"']
subprocess.Popen(args2)  # same as running `echo -v "Hello Again"` on the command line

In python, how to pass an array argument to powershell script

I have a PowerShell script, which has two parameters, the first one is a string, the second one is an array of string.
I would like to call this PowerShell script from my python code. How to pass the array type parameter to PowerShell?
If I write something like this:
subprocess.run(['powershell.exe', 'script.ps1', 'arg1', '@("str1", "str2")'])
PowerShell thinks '@("str1", "str2")' is a string, not an array.
Edit
I found a workaround
subprocess.run(['powershell.exe', 'script.ps1 arg1 @("str1", "str2")'])
It doesn't look beautiful, but it works; however, this way I can't use -File after powershell.exe.
Your original command does work as written (except that you must use .\script.ps1 rather than script.ps1, unless the script is in the system's path), as does the second one you added later, because it implicitly uses the PowerShell CLI's -Command parameter rather than its -File parameter.
In short:
Passing arrays is fundamentally only supported with -Command, which interprets the subsequent arguments as PowerShell code, where the usual PowerShell syntax applies.
With -File, by contrast, all arguments after the target-script argument are passed verbatim, as strings, so there is no concept of an array.
I suggest using the following approach, for increased robustness and conceptual clarity:
subprocess.run(['powershell.exe', '-noprofile', '-c', '.\script.ps1 arg1 @("str1", "str2")'])
Note: You can omit @(...) around the array elements - @(...) is never needed for array literals in PowerShell.
Note:
-noprofile ensures that PowerShell doesn't load the $PROFILE file(s), which avoids potential slow-downs and side effects.
-c (-Command) makes it explicit that you're passing PowerShell code rather than a script file with literal arguments (-File)
Do note that -Command arguments are subject to additional interpretation by PowerShell, so if you pass, say, a token $foo$ you intend to be a literal, PowerShell will expand it to just $ (if no $foo variable is defined), because it expands $foo as a variable reference; passing `$foo`$ (backtick-escaping) prevents that.
Note the .\ before script.ps1: Since you're using -Command, you cannot execute a script by file name only (unless the script happens to be located in a directory listed in $env:PATH); just as from inside PowerShell, executing scripts from the current directory requires .\ for security reasons; by contrast, file-name-only invocation does work with -File.
The script file as well as its arguments are passed as a single argument, which reflects how PowerShell will process the command.
-Command is the default in Windows PowerShell, but no longer in PowerShell Core (pwsh.exe), which defaults to -File; it is generally a good idea to explicitly use -Command (-c) or -File (-f) to make it obvious how PowerShell will interpret the arguments.
How subprocess.run() builds the command line and how PowerShell parses it:
Your original Python command passes #("str1", "str2") as an individual argument to subprocess.run():
subprocess.run(['powershell.exe', '.\script.ps1', 'arg1', '@("str1", "str2")'])
This results in the following command line executed behind the scenes:
powershell.exe .\script.ps1 arg1 "@(\"str1\", \"str2\")"
Note how only @("str1", "str2") is double-quoted, and how the embedded " chars. are escaped as \".
As an aside: PowerShell's CLI (arguments passed to powershell.exe) uses the customary \-escaping of literal " chars.; inside PowerShell, however, it is ` (backtick) that serves as the escape character.
Your second command combines the script.ps1 and @("str1", "str2") into a single argument:
subprocess.run(['powershell.exe', '.\script.ps1 arg1 @("str1", "str2")'])
This results in the following command line:
powershell.exe ".\script.ps1 arg1 #(\"str1\", \"str2\")"
Note how the single argument passed is double-quoted as a whole.
Generally, subprocess.run() automatically encloses a given argument in "..." (double quotes) if it contains spaces.
Independently, it escapes embedded (literal) " chars. as \".
Even though these command lines are obviously different, PowerShell's (implied) -Command logic processes them the same, because it uses the following algorithm:
First, enclosing double quotes around each argument, if present, are removed.
The resulting strings, if there are multiple, are concatenated with spaces.
The resulting single string is then executed as PowerShell code.
If you apply this algorithm to either of the above command lines, PowerShell ends up executing the same code, namely:
.\script.ps1 arg1 @("str1", "str2")
Let's say your Python array is arr;
try to do this:
subprocess.run(['powershell.exe', 'script.ps1', 'arg1', '\"{}\"'.format(','.join(arr))])
To send an array to a PowerShell script, you can send it as "item1,item2,item3",
and the str.join function allows you to get this format easily.
If this doesn't work, I would try editing the script to use the $args automatic variable in the PowerShell script, to change the way you use your arguments.
You can use single quotes on the command line - e.g. @('str1', 'str2') - or escape the double quotes with backslashes - e.g. @(\"str1\", \"str2\")
For example with this script:
script.ps1
param( [string[]] $s )
write-host $s.GetType().FullName
write-host $s.Length
write-host ($s | fl * | out-string)
You can call it from a command prompt like this:
C:\> powershell.exe .\script.ps1 @('str1', 'str2')
System.String[]
2
str1
str2
or like this:
C:\> powershell.exe .\script.ps1 @(\"str1\", \"str2\")
System.String[]
2
str1
str2
You might need to apply some python escape characters to get the desired result in your code though.

Python sys.argv - Get the full command line [with pipe or semicolon]

I would like to know if it is possible to capture a full entered command line with a pipe or semicolon, as below:
$> python foo.py arg arg | arg arg
OR
$> python foo.py arg arg ; arg arg
Today in my attempts, sys.argv is returning only what is typed in the left side of the pipe/semicolon and the second part runs as an independent command (what is understandable, but not desired :) ).
I tried the code:
if not '\'' in sys.argv or not '"' in sys.argv:
    print 'foo failed'
    exit
to force the commands to be quoted (and maybe force the system to see everything as a single command line), but it did not work, and the second part keeps being executed after the break.
Python is not given access to those parts. Those are not part of the command arguments for Python, those are input for the shell. Pipes, quoting and semicolons are part of the shell syntax, not a command line for subprocesses that the shell starts.
The shell splits out the syntax you give it, then calls Python with just the arguments addressed to the python binary. You can't retrieve the whole shell command from a subprocess; that'd be a potential security issue.
If you want to pass on information to the Python script, you must do so in the command arguments. That means that if you must include quotes in your arguments, you must first escape them at the shell level, so they are not interpreted as shell syntax, e.g.
python foo.py arg1 '|' arg2
is then available in sys.argv as
['foo.py', 'arg1', '|', 'arg2']
where the single quotes around the | tell the shell to treat that character as argument text.
You need to consult the documentation for your specific shell environment for the details on how quoting works. For example, if you use bash, read the Bash manual section on quoting.

Strange python error with subprocess.check_call

I'm having a really strange error with the python subprocess.check_call() function. Here are two tests that should both fail because of permission problems, but the first one only returns a 'usage' (the "unexpected behaviour"):
# Test #1
import subprocess
subprocess.check_call(['git', 'clone', 'https://github.com/achedeuzot/project',
                       '/var/vhosts/project'], shell=True)
# Shell output
usage: git [--version] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           [-c name=value] [--help]
           <command> [<args>]
The most commonly used git commands are:
[...]
Now for the second test (the "expected behaviour" one):
# Test #2
import subprocess
subprocess.check_call(' '.join(['git', 'clone', 'https://github.com/achedeuzot/project',
                                '/var/vhosts/project']), shell=True)
# Here, we're making it into a string, but the call should be *exactly* the same.
# Shell output
fatal: could not create work tree dir '/var/vhosts/project'.: Permission denied
This second error is the correct one: indeed, I don't have the permissions. But why is there a difference between the two calls? I thought that using a single string or a list is the same with the check_call() function. I have read the Python documentation and various usage examples, and both look correct.
Has anyone had the same strange error? Or does anyone know why there is a difference in output when the commands should be exactly the same?
Side note: Python 3.4
Remove shell=True from the first one. If you carefully reread the subprocess module documentation, you will see why: if shell=False (the default), the first argument is either a list of the command line with arguments and all, or a string containing only the command (with no arguments supplied at all). If shell=True, the first argument is a string representing a shell command line; a shell is executed, which in turn parses the command line for you and splits it by whitespace (plus many more, potentially dangerous, things you might not want it to do). If shell=True and the first argument is a list, then the first list item is the shell command line, and the rest are passed as arguments to the shell, not the command.
Unless you know you really, really need to, always let shell=False.
Here's the relevant bit from the documentation:
If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
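You can reproduce the usage message from the question with this equivalence in mind (a sketch, POSIX only; the extra list item is handed to the shell, not to git):
import subprocess
# Equivalent to: /bin/sh -c 'git' 'clone' ... -- the shell runs plain `git`
subprocess.call(['git', 'clone'], shell=True)  # prints git's usage message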

Advantage of list over string in subprocess methods

What are the advantages of using list over string in subprocess methods? The ones I understand so far:
Security if input comes from external sources
Portability over different operating systems
Are there any others?
In my particular case, I'm using subprocess library to run tests on a software. Input does not come from external source. Tests are run only on Linux. Therefore, I see no benefit of lists over strings.
On POSIX, list and string arguments have different meaning and are used in different contexts.
You use a string argument and shell=True to run a shell command e.g.:
from subprocess import check_output
output = check_output("dmesg | grep hda", shell=True)
A list argument is used to run a command without the shell e.g.:
from subprocess import check_call
check_call(["ls", "-l"])
One exception is that call("ls") is equivalent to call(["ls"]) (a command with no arguments).
You should use a list argument with shell=False (the default), except in those cases where you need the shell, in which case the string argument is used.
It is almost always an error to use a list argument and shell=True (the arguments are interpreted as arguments to the shell itself instead of the command in this case). Don't use it.
If your question is what the advantages of shell=False, and hence of the list argument over the string argument, are:
you don't need to escape the arguments; no shell interpolation such as word splitting, parameter expansion, or command substitution occurs: what you see is what you get
support for arguments with spaces
support for arguments with special characters such as quotes, dollar sign, etc.
it is clear where argument boundaries are: they are explicitly separated
it is clear what program is executed: it is the first item in the list
an argument that is populated from an untrusted source won't be able to execute arbitrary commands (see the sketch after this list)
why run a superfluous shell process unless you need it
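To make the untrusted-input point concrete, here is a sketch with a deliberately hostile, made-up value:
import subprocess
filename = 'foo; rm -rf "$HOME"'  # hostile input (illustrative only)
subprocess.call(['ls', '-l', filename])  # safe: the whole value is a single argument
# subprocess.call('ls -l ' + filename, shell=True)  # DANGEROUS: would run the rm!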
Sometimes, it might be more convenient/readable to specify an argument as a string in the source code; shlex.split() could be used to convert it to a list:
import shlex
from subprocess import check_call

cmd = shlex.split('/bin/vikings -input eggs.txt -output "spam spam.txt" '
                  '''-cmd "echo '$MONEY'"''')
check_call(cmd)
See the docs.
On Windows, the arguments are interpreted differently. The native format is a string and the passed list is converted to a string using subprocess.list2cmdline() function that may not work for all Windows programs. shell=True is only necessary to run builtin shell commands.
If list2cmdline() creates a correct command line for your executable (different programs may use different rules for interpreting the command line) then a list argument could be used for portability and to avoid escaping separate arguments manually.
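If you are curious what command line a given list turns into on Windows, you can inspect it directly; note that list2cmdline() is an undocumented internal helper, so treat this as a debugging aid rather than a stable API:
import subprocess
print(subprocess.list2cmdline(['grep', '-r', 'hello world', '.']))
# grep -r "hello world" .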
