I'm testing out some argparse code. I wanted an optional argument that collects any number of inputs from a list of choices. So, I wrote:
import argparse
modules = ["geo", "loc"]
parser = argparse.ArgumentParser()
parser.add_argument("--modules", nargs='*', choices=modules)
With this set up, I'm reliably able to kill the interpreter completely.
It works fine if you pass a valid set of arguments:
>>> parser.parse_args("--module geo loc geo".split())
Namespace(modules=['geo', 'loc', 'geo'])
But if you pass in a malformed argument, it kills Python completely:
>>> parser.parse_args("--module geo metro".split())
usage: [-h] [--modules [{geo,loc} [{geo,loc} ...]]]
: error: argument --modules: invalid choice: 'metro' (choose from 'geo', 'loc')
PS C:\Users\myname\mycode>
My question is two-fold:
Is this expected behavior? If so, what is the reasoning for this?
Will I be okay using this code, since I don't mind if my program dies with ill-formed arguments? Or is there some compelling reason to avoid this?
As a note, I am using Python 2.7 on Windows 7.
Yes, this is intended, and documented:
While parsing the command line, parse_args() checks for a variety of errors, including ambiguous options, invalid types, invalid options, wrong number of positional arguments, etc. When it encounters such an error, it exits and prints the error along with a usage message:
The idea is that, if the user gives an invalid option or argument which you don't know how to handle, the best option is to give up instead of second-guess the user's actual intentions.
If you don't mind, then it should be ok, right? Unless you know a reason to implement different behavior, your program is completely consistent with all well-behaved command line tools on all platforms.
If you do want to implement different behavior, catch the SystemExit exception that parse_args might raise.
(The only program that I can think of that behaves differently from the way I just described is the version control tool Git, which does try to guess what the user meant and prints its guesses. It then still exits, though.)
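If you do want to keep the interpreter alive (say, during interactive testing), one option is to wrap the call and catch that SystemExit. A minimal sketch of that idea, reusing the parser from the question:
import argparse

modules = ["geo", "loc"]
parser = argparse.ArgumentParser()
parser.add_argument("--modules", nargs='*', choices=modules)

try:
    args = parser.parse_args("--modules geo metro".split())
except SystemExit:
    # argparse has already printed the usage and error message;
    # we just keep running instead of letting the interpreter exit.
    args = None

print(args)  # None on invalid input, a Namespace otherwise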
argparse is designed for use when your Python script is run from a command line. That's why invalid arguments cause the program to quit.
This behavior is consistent with virtually all shell (bash/sh/dos/etc.) utilities. Invalid command line args cause the program to quit with an error string and (optionally) a usage message.
Related
I was trying to pass some arguments via PyCharm when I noticed that it behaves differently than my console. When I pass arguments with no spaces in them, everything works fine, but when an argument contains spaces the behavior diverges.
import sys

def main():
    """
    Main function
    """
    for i, arg in enumerate(sys.argv):
        print('Arg#{}: {}'.format(i, arg))

main()
If I run the same script with:
python3 argumnents_tester.py 'argument 1' argument2
Run in PyCharm:
Arg#0: /home/gorfanidis/PycharmProjects/test1/argparse_test.py
Arg#1: 'argument
Arg#2: 1'
Arg#3: argument2
Run in Console:
Arg#0: argparse_test.py
Arg#1: argument 1
Arg#2: argument2
So, PyCharm tends to ignore quotes altogether and splits the arguments on spaces regardless of any quotes. Also, arguments with quotes are treated differently than the same arguments without quotes.
The question is: why is this happening, and at a practical level, how am I supposed to pass an argument that contains spaces when using PyCharm?
I am using Ubuntu 16.04 by the way.
What you are complaining about is a shell issue. The shell applies its single-quote conventions when parsing arguments. Actually, I find the PyCharm behaviour understandable and consistent; if no shell is involved, nobody does the splitting job you expect.
If you insist on running that from PyCharm, I'd suggest another passing method (e.g. via a file) or applying some unquoting mechanism such as urllib.parse.unquote.
See also here for a dated but still correct description of command line parameters in general, and specifically:
The ANSI C standard does not specify what constitutes a command-line argument, because operating systems vary considerably on this point. However, the most common convention is as follows:
Each command-line argument must be separated by a space or a tab character. Commas, semicolons, and the like are not considered separators.
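To illustrate the splitting itself (this only demonstrates the quoting rules, not PyCharm internals), Python's shlex module mimics POSIX shell splitting, while a naive whitespace split matches what PyCharm appears to be doing:
import shlex

print(shlex.split("'argument 1' argument2"))   # ['argument 1', 'argument2']
print(shlex.split('"argument 1" argument2'))   # ['argument 1', 'argument2']
print("'argument 1' argument2".split())        # ["'argument", "1'", 'argument2']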
Disclaimer: What turns out to be the correct solution to @Eypros's question started as a passing suspicion that it is a matter of how PyCharm implements its argument parsing for command-line runs versus how an actual system console/shell behaves, which was also pointed out by @guidot in his answer. I provide more thoughts in the postscript below.
To circumvent PyCharm's behaviour of interpreting the 'argument 1' part of python3 argumnents_tester.py 'argument 1' argument2 as two arguments, use a different type of quote in each place: a double quote (") in the code for str.format(), and a single quote (') for the argument in the run command.
PS:
While this seems like a simple workaround, I do think that if there is any possibility the code will be executed on another system, one should adhere to the most common, widely accepted behaviour of system shells (bash, zsh, sh, any *nix flavor) for interpreting argument passing, rather than to PyCharm's implementation. This way the code will be much more portable and users won't have to figure out a different way to pass an argument.
As a consequence, I offer no guarantee that this will work beyond the specific way the code is formulated and a configuration similar to @Eypros's system.
(Background info) Well, an original comment from @cryptonome seemed to work for me, but since the answer provided by the same user is not exactly the same, I summarized the solution that worked for me.
For some reason, PyCharm treats single quotes (') and double quotes (") differently when parsing arguments. Coming from Python, this may or may not seem natural. Anyway, double quotes (") seem to work exactly the same in both the console and PyCharm, so when arguments are passed using double quotes (") the same behavior is expected.
Single quotes should be avoided in PyCharm, but they work in the console (at least in mine: bash on Ubuntu 16.04), because PyCharm splits arguments on spaces rather than on quote boundaries.
Is there a difference between os.execl() and os.execv() in Python? I was using
os.execl(python, python, *sys.argv)
to restart my script (from here). But it seems to start from where the previous script left off.
I want the script to start from the beginning when it restarts. Will this
os.execv(__file__, sys.argv)
do the job? The command and idea are from here. I couldn't find the difference between them in the Python help/documentation. Is there a way to do a clean restart?
For a little more background on what I am trying to do please see my other question
At the low level they do the same thing: they replace the running process image with a new process.
The only difference between execv and execl is the way they take arguments. execv expects a single list of arguments (the first of which should be the name of the executable), while execl expects a variable list of arguments.
Thus, in essence, execv(file, args) is exactly equivalent to execl(file, *args).
Note that sys.argv[0] is already the script name. However, this is the script name as passed into Python, and may not be the actual script name that the program is running under. To be correct and safe, your argument list passed to exec* should be
['python', __file__] + sys.argv[1:]
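As a small sketch of the equivalence (assuming this runs inside a script, so __file__ is defined):
import os
import sys

# Argument list for the child process: executable name first,
# then the script path, then the original arguments.
args = ['python', __file__] + sys.argv[1:]

os.execv(sys.executable, args)      # list form
# os.execl(sys.executable, *args)   # variadic form, same effect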
I have just tested a restart script with the following:
os.execl(sys.executable, 'python', __file__, *sys.argv[1:])
and this works fine. Be sure you're not ignoring or silently catching any errors from execl - if it fails to execute, you'll end up "continuing where you left off".
According to the Python documentation there's no real functional difference between execv and execl:
The “l” and “v” variants of the exec* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the execl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process should start with the name of the command being run, but this is not enforced.
No idea why one seems to restart the script where it left off but I'd guess that that is unrelated.
I need to execute a command line in the background in Python 2.7. I need to fire and forget.
Here is the command:
cmd = "/usr/local/bin/fab -H %s aws_bootstrap initial_chef_run:%s,%s,%s -w" % (...)
How do I use the subprocess module?
e.g. is it
subprocess.call([cmd])
or
subprocess.call(["/usr/local/bin/fab", "-H %s aws_bootstrap initial_chef_run:%s,%s,%s -w"])
I don't get how to use the list. Is every element of the list whatever would be separated by whitespace?
Thanks
Each thing that would be separated by whitespace is a separate element of the list.
subprocess.call is blocking, however;
subprocess.Popen is non-blocking.
cmd = ["/usr/local/bin/fab", "-H",var1,"aws_bootstrap initial_chef_run:%s,%s,%s"%(var2,var3,var4), "-w"]
subprocess.popen(cmd) # dopnt wait just keep going
#or
subprocess.call(cmd) # wait until the command returns
You may, however, alternatively pass the command as one big string (on POSIX this needs shell=True, since the string is handed to the shell for splitting):
cmd = "/usr/local/bin/fab -H %s aws_bootstrap initial_chef_run:%s,%s,%s -w" % (...)
subprocess.call(cmd, shell=True)
In general this method (passing a single string) is frowned upon, for reasons that have never been explained sufficiently to me.
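For what it's worth, the reason usually given is shell injection plus portability: with shell=True the string goes through the shell, so untrusted input can smuggle in extra commands, whereas the list form passes each item as one literal argument. A harmless POSIX sketch of the difference (the echo stands in for something nastier):
import subprocess

host = "example.com; echo INJECTED"                   # hypothetical malicious value
subprocess.call("ping -c 1 %s" % host, shell=True)    # the shell also runs the injected echo
subprocess.call(["ping", "-c", "1", host])            # host stays a single literal argument; no shell involved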
I used this recently to fire a Perl script, like so:
var = r"C:\Users\user\Desktop"  # raw string so the backslashes are not treated as escapes
retcode = subprocess.call(["perl", r'.\hgncDL.pl', var])
Working code
Define hParam and runParams in the following code and you're good to go:
import subprocess

hParam = 'hParam'
runParams = (a, b, c)
args = ('/usr/local/bin/fab', '-H', hParam, 'aws_bootstrap', 'initial_chef_run:%s,%s,%s' % runParams, '-w')
subprocess.Popen(args)
Details
How do I use <any python module> module?
https://docs.python.org is a good starting point.
In particular, docs for subprocess module available here.
I can't provide direct links for each case later in this answer due to restrictions imposed by low reputation. Each time I refer to 'docs', look for the relevant section of the docs on the module in question.
I need to execute a command line in the background in python 2.7. I need to fire and forget
Consider subprocess.Popen(args). Note capital 'P'.
See docs for more details.
subprocess.call(args) works in similar way, but it would block until the command completes. As stated in docs:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
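A tiny sketch of the difference, assuming a POSIX sleep command is available:
import subprocess

rc = subprocess.call(['sleep', '2'])    # blocks for ~2 seconds, then returns the exit code
p = subprocess.Popen(['sleep', '2'])    # returns immediately; the child keeps running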
How to use the sequence form of the args parameter?
This is covered in "Frequently used arguments" section of docs:
args is required for all calls and should be a string, or a sequence of program arguments. Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names).
Also, passing args in string form has its limitations:
If passing a single string, either shell must be True or else the string must simply name the program to be executed without specifying any arguments.
Despite the mentioned limitation, subprocess.Popen('cmd.exe /?') works for me (Win7, Python 2.7.8, 64-bit).
HTH, cheers.
I'm having a really strange error with the Python subprocess.check_call() function. Here are two tests that should both fail because of permission problems, but the first one only returns a 'usage' message (the "unexpected behaviour"):
# Test #1
import subprocess
subprocess.check_call(['git', 'clone', 'https://github.com/achedeuzot/project',
'/var/vhosts/project'], shell=True)
# Shell output
usage: git [--version] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[-c name=value] [--help]
<command> [<args>]
The most commonly used git commands are:
[...]
Now for the second test (the "expected behaviour" one):
# Test #2
import subprocess
subprocess.check_call(' '.join(['git', 'clone', 'https://github.com/achedeuzot/project',
'/var/vhosts/project']), shell=True)
# Here, we're making it into a string, but the call should be *exactly* the same.
# Shell output
fatal: could not create work tree dir '/var/vhosts/project'.: Permission denied
This second error is the correct one; indeed, I don't have the permissions. But why is there a difference between the two calls? I thought that using a single string or a list was the same with the check_call() function. I have read the Python documentation and various usage examples, and both calls look correct.
Has someone had the same strange error? Or does someone know why there is a difference in output when the commands should be exactly the same?
Side note: Python 3.4
Remove shell=True from the first call. If you carefully reread the subprocess module documentation you will see why. If shell=False (the default), the first argument is a list of the command and its arguments (or a string naming only the command, with no arguments supplied at all). If shell=True, the first argument is a string representing a shell command line; a shell is executed, which in turn parses the command line for you and splits it on whitespace (plus much more dangerous things you might not want it to do). If shell=True and the first argument is a list, then the first list item is the shell command line, and the rest are passed as arguments to the shell itself, not to the command.
Unless you know you really, really need to, always let shell=False.
Here's the relevant bit from the documentation:
If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
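So a corrected version of the first test is simply the same list without shell=True (a sketch; it will now fail with the expected permission error, which check_call reports by raising CalledProcessError):
import subprocess

subprocess.check_call(['git', 'clone',
                       'https://github.com/achedeuzot/project',
                       '/var/vhosts/project'])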
Is anyone able to tell me how to write a conditional for an argument in a Python script? I want it to print "Argument2 Entered" if it is run with a second command line argument, such as:
python script.py argument1 argument2
And print "No second argument" if it is run without command line arguments, like this:
python script.py argument1
Is this possible?
import sys
if len(sys.argv) == 2:  # first entry in sys.argv is the script itself...
    print "No second argument"
elif len(sys.argv) == 3:
    print "Second argument"
There are many answers to this, depending on what exactly you want to do and how much flexibility you are likely to need.
The simplest solution is to examine the variable sys.argv, which is a list containing all of the command-line arguments. (It also contains the name of the script as the first element.) To do this, simply look at len(sys.argv) and change behaviour based on its value.
However, this is often not flexible enough for what people expect command-line programs to do. For example, if you want a flag (-i, --no-defaults, ...) then it's not obvious how to write one with just sys.argv. Likewise for arguments (--dest-dir="downloads"). There are therefore many modules people have written to simplify this sort of argument parsing.
The built-in solution is argparse, which is powerful and pretty easy-to-use but not particularly concise.
A clever solution is plac, which inspects the signature of the main function to try to deduce what the command-line arguments should be.
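For this question's case, a minimal argparse sketch (the argument names are just illustrative) could look like this:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('argument1')
parser.add_argument('argument2', nargs='?')   # optional second positional argument
args = parser.parse_args()

if args.argument2 is not None:
    print("Argument2 Entered")
else:
    print("No second argument")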
There are many ways to do this simple thing in Python. If you are interested in knowing more, then I recommend reading this article. BTW, I am giving you one solution below:
import click

'''
Prerequisite: # python -m pip install click
run: python main.py ttt yyy
'''

@click.command(context_settings=dict(ignore_unknown_options=True))
@click.argument("argument1")
@click.argument("argument2")
def main(argument1, argument2):
    print(f"argument1={argument1} and argument2={argument2}")

if __name__ == '__main__':
    main()
The following block should be self-explanatory:
$ ./first.py second third 4th 5th
5
$ cat first.py
#!/usr/bin/env python
import sys
print (len(sys.argv))
This is related to many other posts depending upon where you are going with this, so I'll put four here:
What's the best way to grab/parse command line arguments passed to a Python script?
Implementing a "[command] [action] [parameter]" style command-line interfaces?
How can I process command line arguments in Python?
How do I format positional argument help using Python's optparse?
But the direct answer to your question from the Python docs:
sys.argv -
The list of command line arguments passed to a Python script. argv[0] is the script name (it is operating system dependent whether this is a full pathname or not). If the command was executed using the -c command line option to the interpreter, argv[0] is set to the string '-c'. If no script name was passed to the Python interpreter, argv[0] is the empty string.
To loop over the standard input, or the list of files given on the command line, see the fileinput module.
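As a small illustration of that last point, fileinput loops over lines from the files named on the command line, or over standard input if none are given:
import fileinput

for line in fileinput.input():
    print('{} {}: {}'.format(fileinput.filename(), fileinput.filelineno(), line.rstrip()))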