How to run os command `echo` with `-n` parameter in Python

I have below code in python3:
>>> os.system('echo -n abc')
-n abc
0
The output contains -n, which is not what I expect. If I run the same command in a terminal, it prints abc without a trailing newline. It seems the os.system method doesn't understand the -n parameter. How can I solve this issue?
I have tried subprocess module but got the same result:
>>> subprocess.call('echo -n abc', shell=True)
-n abc
0

Is this correct behavior for echo to have?
This is legal, standards-compliant behavior for an echo implementation to have. Quoting from the POSIX echo standard (the APPLICATION USAGE section of which is also strongly recommended reading):
If the first operand is -n, or if any of the operands contain a backslash character, the results are implementation-defined.
Thus, echo is allowed to print -n literally -- or to do anything else when it's given as the first operand.
Why does my interactive shell do something different?
As the above-quoted specification says, behavior is implementation-defined when -n is given as the first operand. This means any behavior is equally legal: not printing a trailing newline is a legal behavior, and so is printing -n literally. Presumably your interactive shell's built-in echo does the former (on the system at hand), whereas your /bin/sh's echo does the latter.
How can I get reliable behavior when starting a shell to print text without a trailing newline?
If you want a command with well-specified behavior, use printf instead:
os.system('printf %s abc')
...will always print the exact string abc with no trailing newline, with any POSIX-compliant /bin/sh.
But of course, don't do any of this in Python; sys.stdout.write('abc') is far more sensible.
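Putting both options together, a minimal sketch (assuming a POSIX system where printf also exists as an external binary, as on typical Linux and macOS installs):
import subprocess
import sys

# Option 1: run printf directly, no shell involved; unlike echo -n,
# printf's behavior here is specified by POSIX.
subprocess.run(['printf', '%s', 'abc'])

# Option 2: skip the subprocess entirely.
sys.stdout.write('abc')
sys.stdout.flush()  # make the text appear immediately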

Related

In python, how to pass an array argument to powershell script

I have a PowerShell script, which has two parameters, the first one is a string, the second one is an array of string.
I would like to call this PowerShell script from my python code. How to pass the array type parameter to PowerShell?
If I write something like this:
subprocess.run(['powershell.exe', 'script.ps1', 'arg1', '@("str1", "str2")'])
PowerShell thinks '@("str1", "str2")' is a string, not an array.
Edit
I found a workaround
subprocess.run(['powershell.exe', 'script.ps1 arg1 @("str1", "str2")'])
It doesn't look beautiful, but it works. However, written this way, I can't use -File after powershell.exe.
Your original command does work as written (except that you must use .\script.ps1 rather than script.ps1, unless the script is in the system's path), as does the second one you added later, because it implicitly uses the PowerShell CLI's -Command parameter rather than its -File parameter.
In short:
Passing arrays is fundamentally only supported with -Command, which interprets the subsequent arguments as PowerShell code, where the usual PowerShell syntax applies.
With -File, by contrast, all arguments after the target-script argument are passed verbatim, as strings, so there is no concept of an array.
I suggest using the following approach, for increased robustness and conceptual clarity:
subprocess.run(['powershell.exe', '-noprofile', '-c', r'.\script.ps1 arg1 @("str1", "str2")'])
Note: You can omit @(...) around the array elements - @(...) is never needed around array literals in PowerShell; the comma alone constructs the array.
Note:
-noprofile ensures that PowerShell doesn't load the $PROFILE file(s), which avoids potential slow-downs and side effects.
-c (-Command) makes it explicit that you're passing PowerShell code rather than a script file with literal arguments (-File).
Do note that -Command arguments are subject to additional interpretation by PowerShell, so if you pass, say, a token $foo$ that you intend to be taken literally, PowerShell will expand it to just $ (if no $foo variable is defined), because it expands $foo as a variable reference; backtick-escaping it as `$foo`$ prevents that.
Note the .\ before script.ps1: Since you're using -Command you cannot execute a script by file name only (unless the script happens to be located in a directory listed in $env:PATH); as from inside PowerShell, executing scripts from the current directory requires .\ for security reasons; by contrast, file-name-only invocation does work with -File.
The script file as well as its arguments are passed as a single argument, which reflects how PowerShell will process the command.
-Command is the default in Windows PowerShell, but no longer in PowerShell Core (pwsh.exe), which defaults to -File; it is generally a good idea to explicitly use -Command (-c) or -File (-f) to make it obvious how PowerShell will interpret the arguments.
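As an aside, if you don't actually need PowerShell to parse an array literal, -File also works; each list element is then passed verbatim as a separate string. A sketch, assuming the script collects the values itself (e.g. via $args or a parameter declared with ValueFromRemainingArguments):
import subprocess

# With -File, 'str1' and 'str2' arrive as two separate literal strings;
# there is no array syntax for PowerShell to interpret.
subprocess.run(['powershell.exe', '-NoProfile', '-File', 'script.ps1',
                'arg1', 'str1', 'str2'])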
How subprocess.run() builds the command line and how PowerShell parses it:
Your original Python command passes @("str1", "str2") as an individual argument to subprocess.run():
subprocess.run(['powershell.exe', r'.\script.ps1', 'arg1', '@("str1", "str2")'])
This results in the following command line executed behind the scenes:
powershell.exe .\script.ps1 arg1 "@(\"str1\", \"str2\")"
Note how only @("str1", "str2") is double-quoted, and how the embedded " chars. are escaped as \".
As an aside: PowerShell's CLI (arguments passed to powershell.exe) uses the customary \-escaping of literal " chars.; inside PowerShell, however, it is ` (backtick) that serves as the escape character.
Your second command combines script.ps1 and @("str1", "str2") into a single argument:
subprocess.run(['powershell.exe', r'.\script.ps1 arg1 @("str1", "str2")'])
This results in the following command line:
powershell.exe ".\script.ps1 arg1 @(\"str1\", \"str2\")"
Note how the single argument passed is double-quoted as a whole.
Generally, subprocess.run() automatically encloses a given argument in "..." (double quotes) if it contains spaces.
Independently, it escapes embedded (literal) " chars. as \".
Even though these command lines are obviously different, PowerShell's (implied) -Command logic processes them the same, because it uses the following algorithm:
First, enclosing double quotes around each argument, if present, are removed.
The resulting strings, if there are multiple, are concatenated with spaces.
The resulting single string is then executed as PowerShell code.
If you apply this algorithm to either of the above command lines, PowerShell ends up executing the same code, namely:
.\script.ps1 arg1 @("str1", "str2")
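If the array contents come from Python data rather than a hard-coded literal, you can build that code string programmatically. A sketch with a hypothetical ps_array helper (it assumes the elements contain no embedded double quotes):
import subprocess

def ps_array(items):
    # Hypothetical helper: render a Python list as a PowerShell @(...) literal.
    # Assumes no element contains embedded double quotes.
    return '@(' + ', '.join('"{}"'.format(i) for i in items) + ')'

cmd = r'.\script.ps1 arg1 ' + ps_array(['str1', 'str2'])
subprocess.run(['powershell.exe', '-NoProfile', '-Command', cmd])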
Let's say your Python array is arr.
Try this:
subprocess.run(['powershell.exe', 'script.ps1', 'arg1', '"{}"'.format(','.join(arr))])
To send an array to a PowerShell script you can send it as "item1,item2,item3",
and str.join lets you produce this format easily.
If this doesn't work, I would try editing the PowerShell script to use the $args automatic variable and change the way you access your arguments.
You can use single quotes on the command line - e.g. @('str1', 'str2') - or escape the double quotes with backslashes - e.g. @(\"str1\", \"str2\")
For example with this script:
script.ps1
param( [string[]] $s )
write-host $s.GetType().FullName
write-host $s.Length
write-host ($s | fl * | out-string)
You can call it from a command prompt like this:
C:\> powershell.exe .\script.ps1 @('str1', 'str2')
System.String[]
2
str1
str2
or like this:
C:\> powershell.exe .\script.ps1 @(\"str1\", \"str2\")
System.String[]
2
str1
str2
You might need to apply some Python escape characters to get the desired result in your code, though.
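For instance, using the single-quote variant (which avoids the \" escaping entirely), the call might look like this from Python; a sketch, untested against every PowerShell version:
import subprocess

# Single quotes are literal-string quoting in PowerShell, so nothing
# needs to be backslash-escaped on the Python side.
subprocess.run(['powershell.exe', ".\\script.ps1 @('str1', 'str2')"])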

Python sys.argv - Get the full command line [with pipe or semicolon]

I would like to know if it is possible to capture the full command line as entered, including a pipe or semicolon, as below:
$> python foo.py arg arg | arg arg
OR
$> python foo.py arg arg ; arg arg
Today in my attempts, sys.argv returns only what is typed on the left side of the pipe/semicolon, and the second part runs as an independent command (which is understandable, but not desired :) ).
I tried the code:
if not '\'' in sys.argv or not '"' in sys.argv:
    print 'foo failed'
    exit
to force the commands to be quoted (and maybe force the system to see everything as a single command line), but it did not work and the second part keeps being executed after the break.
Python is not given access to those parts. Those are not part of the command arguments for Python, those are input for the shell. Pipes, quoting and semicolons are part of the shell syntax, not a command line for subprocesses that the shell starts.
The shell parses the syntax you give it, then calls Python with just the arguments addressed to the python binary. You can't retrieve the whole shell command from a subprocess; that would be a potential security issue.
If you want to pass on information to the Python script, you must do so in the command arguments. That means that if you must include quotes in your arguments, you must first escape them at the shell level, so they are not interpreted as shell syntax, e.g.
python foo.py arg1 '|' arg2
is then available in sys.argv as
['foo.py', 'arg1', '|', 'arg2']
where the single quotes around the | tell the shell to treat that character as argument text.
You need to consult the documentation for your specific shell environment for the details on how quoting works. For example, if you use bash, read the Bash manual section on quoting.
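To see this for yourself, a minimal foo.py (the script name from the question) that just dumps what the shell actually handed to Python:
# foo.py: print the argument list exactly as the shell delivered it
import sys
print(sys.argv)
Running python foo.py arg1 '|' arg2 prints ['foo.py', 'arg1', '|', 'arg2'], while python foo.py arg1 | arg2 shows only ['foo.py', 'arg1'], because the shell consumed the pipe.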

scp with Python3 subprocess [duplicate]

When using subprocess.Popen(args, shell=True) to run "gcc --version" (just as an example), on Windows we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
So it's nicely printing out the version as I expect. But on Linux we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
Because gcc hasn't received the --version option.
The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, "If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments." IMHO the Windows way is better, because it allows you to treat Popen(arglist) calls the same as Popen(arglist, shell=True) ones.
Why the difference between Windows and Linux here?
Actually on Windows, it does use cmd.exe when shell=True - it prepends cmd.exe /c (it actually looks up the COMSPEC environment variable but defaults to cmd.exe if not present) to the shell arguments. (On Windows 95/98 it uses the intermediate w9xpopen program to actually launch the command).
So the strange implementation is actually the UNIX one, which does the following (where each space separates a different argument):
/bin/sh -c gcc --version
It looks like the correct implementation (at least on Linux) would be:
/bin/sh -c "gcc --version" gcc --version
Since this would set the command string from the quoted parameters, and pass the other parameters successfully.
From the sh man page section for -c:
Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
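You can observe those positional parameters directly from Python; a small demonstration (POSIX only):
import subprocess

# The first list element becomes the -c command string; the remaining
# elements become $0 and $1 inside the shell.
subprocess.Popen(['echo "$0 got $1"', 'name', 'extra'], shell=True).wait()
# Runs: /bin/sh -c 'echo "$0 got $1"' name extra
# Prints: name got extra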
This patch seems to fairly simply do the trick:
--- subprocess.py.orig 2009-04-19 04:43:42.000000000 +0200
+++ subprocess.py 2009-08-10 13:08:48.000000000 +0200
@@ -990,7 +990,7 @@
args = list(args)
if shell:
- args = ["/bin/sh", "-c"] + args
+ args = ["/bin/sh", "-c"] + [" ".join(args)] + args
if executable is None:
executable = args[0]
From the subprocess.py source:
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.
That doesn't answer why, just clarifies that you are seeing the expected behavior.
The "why" is probably that on UNIX-like systems, command arguments are actually passed through to applications (using the exec* family of calls) as an array of strings. In other words, the calling process decides what goes into EACH command line argument. Whereas when you tell it to use a shell, the calling process actually only gets the chance to pass a single command line argument to the shell to execute: The entire command line that you want executed, executable name and arguments, as a single string.
But on Windows, the entire command line (according to the above documentation) is passed as a single string to the child process. If you look at the CreateProcess API documentation, you will notice that it expects all of the command line arguments to be concatenated together into a big string (hence the call to list2cmdline).
Plus there is the fact that on UNIX-like systems there actually is a shell that can do useful things, so I suspect that the other reason for the difference is that on Windows, shell=True does nothing, which is why it is working the way you are seeing. The only way to make the two systems act identically would be for it to simply drop all of the command line arguments when shell=True on Windows.
The reason for the UNIX behaviour of shell=True is to do with quoting. When we write a shell command, it will be split at spaces, so we have to quote some arguments:
cp "My File" "New Location"
This leads to problems when our arguments contain quotes, which requires escaping:
grep -r "\"hello\"" .
Sometimes we can get awful situations where \ must be escaped too!
Of course, the real problem is that we're trying to use one string to specify multiple strings. When calling system commands, most programming languages avoid this by allowing us to send multiple strings in the first place, hence:
Popen(['cp', 'My File', 'New Location'])
Popen(['grep', '-r', '"hello"'])
Sometimes it can be nice to run "raw" shell commands; for example, if we're copy-pasting something from a shell script or a Web site, and we don't want to convert all of the horrible escaping manually. That's why the shell=True option exists:
Popen(['cp "My File" "New Location"'], shell=True)
Popen(['grep -r "\"hello\"" .'], shell=True)
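(On POSIX systems the single-element list is equivalent to simply passing the string, which is the more common spelling:)
from subprocess import Popen

# Same as the single-element-list form above, on POSIX systems
Popen('cp "My File" "New Location"', shell=True)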
I'm not familiar with Windows so I don't know how or why it behaves differently.

What's the reverse of shlex.split?

How can I reverse the results of a shlex.split? That is, how can I obtain a quoted string that would "resemble that of a Unix shell", given a list of strings I wish quoted?
Update0
I've located a Python bug, and made corresponding feature requests here.
We now (3.3) have a shlex.quote function. It's none other than pipes.quote, moved and documented (code using pipes.quote will still work). See http://bugs.python.org/issue9723 for the whole discussion.
subprocess.list2cmdline is a private function that should not be used. It could however be moved to shlex and made officially public. See also http://bugs.python.org/issue1724822.
How about using pipes.quote?
import pipes
strings = ["ls", "/etc/services", "file with spaces"]
" ".join(pipes.quote(s) for s in strings)
# "ls /etc/services 'file with spaces'"
There is a feature request for adding shlex.join(), which would do exactly what you ask. As of now, there does not seem to be any progress on it, though, mostly because it would just forward to shlex.quote(). In the bug report, a suggested implementation is mentioned:
' '.join(shlex.quote(x) for x in split_command)
See https://bugs.python.org/issue22454
It's shlex.join() in Python 3.8+.
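A quick round trip with that API (requires Python 3.8 or later):
import shlex

parts = shlex.split("ls '/tmp/some dir'")
print(parts)              # ['ls', '/tmp/some dir']
print(shlex.join(parts))  # ls '/tmp/some dir'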
subprocess uses subprocess.list2cmdline(). It's not an official public API, but it's mentioned in the subprocess documentation and I think it's pretty safe to use. It's more sophisticated than pipes.quote() (for better or worse).
While shlex.quote is available in Python 3.3 and shlex.join is available in Python 3.8, they will not always serve as a true "reversal" of shlex.split. Observe the following snippet:
import shlex
command = "cd /home && bash -c 'echo $HOME'"
print(shlex.split(command))
# ['cd', '/home', '&&', 'bash', '-c', 'echo $HOME']
print(shlex.join(shlex.split(command)))
# cd /home '&&' bash -c 'echo $HOME'
Notice that after splitting and then joining, the && token now has single quotes around it. If you tried running the command now, you'd get an error: cd: too many arguments
If you use subprocess.list2cmdline() as others have suggested, it works nicer with bash operators like &&:
import subprocess
print(subprocess.list2cmdline(shlex.split(command)))
# cd /home && bash -c "echo $HOME"
However you may notice now that the quotes are now double instead of single. This results in $HOME being expanded by the shell rather than being printed verbatim as if you had used single quotes.
In conclusion, there is no 100% fool-proof way of undoing shlex.split, and you will have to choose the option that best suits your purpose and watch out for edge cases.

Pipe output of a command to an interactive python session?

What I'd like to do is something like
$ echo $PATH | python --remain-interactive "x = raw_input().split(':')"
>>>
>>> print x
['/usr/local/bin', '/usr/bin', '/bin']
I suppose ipython solution would be best. If this isn't achievable, what would be your solution for the situation where I want to process output from various other commands? I've used subprocess before to do it when I was desperate, but it is not ideal.
UPDATE: So this is getting closer to the end result:
echo $PATH > /tmp/stdout.txt; ipython -i -c 'stdout = open("/tmp/stdout.txt").read()'
Now how can we go about bending this into a form
echo $PATH | pyout
where pyout is the "magic solution to all my problems". It could be a shell script that writes the piped output and then runs the ipython. Everything done fails for the same reasons bp says.
In IPython you can do this
x = !echo $$$$PATH
The double escape of $ is a pain though
You could do this I guess
PATH="$PATH"
x = !echo $PATH
x[0].split(":")
The --remain-interactive switch you are looking for is -i. You also can use the -c switch to specify the command to execute, such as __import__("sys").stdin.read().split(":"). So what you would try is: (do not forget about escaping strings!)
echo $PATH | python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")"
However, this is all that will be displayed:
>>>
So why doesn't it work? Because you are piping. The Python interpreter is trying to interactively read commands from the same sys.stdin you are reading your input from. Since echo is done executing, sys.stdin is closed and no further input can happen.
For the same reason, something like:
echo $PATH > spam
python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")" < spam
...will fail.
What I would do is:
echo $PATH > spam.bar
python -i my_app.py spam.bar
After all, open("spam.bar") is a file object just like sys.stdin is :)
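For example, my_app.py (the hypothetical file name used above) could be as small as:
# my_app.py: read the captured shell output from the file named on the
# command line, so the interactive prompt still owns stdin
import sys
x = open(sys.argv[1]).read().strip().split(':')
After python -i my_app.py spam.bar, you land at a >>> prompt with x already populated.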
Due to the Python axiom of "There should be one - and preferably only one - obvious way to do it" I'm reasonably sure that there won't be a better way to interact with other processes than the subprocess module.
It might help if you could say why something like the following "is not ideal":
>>> process = subprocess.Popen(['cmd', '/c', 'echo %PATH%'], stdout=subprocess.PIPE)
>>> print process.communicate()[0].split(';')
(In your specific example you could use os.environ but I realise that's not really what you're asking.)
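For comparison, a POSIX-flavoured version of the same idea (a sketch):
import os
import subprocess

# Capture a command's output instead of piping into the interpreter
out = subprocess.Popen(['printenv', 'PATH'], stdout=subprocess.PIPE).communicate()[0]
print(out.decode().strip().split(':'))

# ...or, for this particular case, skip the subprocess entirely:
print(os.environ['PATH'].split(':'))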
