I am working on a project with many command line arguments and I'd like to be able to specify all of the arguments in a file (e.g. JSON) and load that file into an argparse object, instead of pasting them into the terminal every time. I also need to enforce the presence of required arguments. I've found a couple of options, but none of them does exactly what I need.
If I have something like this
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--image_path", type=str, required=True)
parser.add_argument("--count", type=int, required=True)
args = parser.parse_args()
One approach might be to add something like this, to update the args dictionary with the JSON contents:
import json

args_dict = vars(args)
with open('path/to/args.json', 'r') as f:
    args_dict.update(json.load(f))
But an error occurs at the parse_args() line, since it didn't see the required arguments passed in.
The other approach I considered is to use
parser = argparse.ArgumentParser(fromfile_prefix_chars='#')
and just pass the file name, prefixed with #, to the command. The problem here is that I have to convert the dictionary-like format (JSON) to a line-separated list of arguments, as noted in the documentation. This is not trivial in my case.
I also considered just saving the arguments as a line-separated list to begin with, but the way that I know how to do that involves iterating over sys.argv and printing to a text file, which is problematic because I have some args that are assigned default values, which are not represented in sys.argv. I need to record the default values that were used for later reference.
It seems like there should be a simple way to do this, but I'm stumped at the moment.
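One way to satisfy both constraints, sketched below (the paths and the extra --verbose flag are placeholders, not part of the original parser): convert the JSON dictionary into --key value tokens and hand them to parse_args, so required arguments and types are still checked, then dump vars(args) afterwards so the defaults that were used get recorded as well.

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--image_path", type=str, required=True)
parser.add_argument("--count", type=int, required=True)
parser.add_argument("--verbose", action="store_true")  # placeholder optional flag

# Turn the JSON dict into an argv-style token list and let argparse validate it.
with open("path/to/args.json") as f:
    config = json.load(f)

argv = []
for key, value in config.items():
    if isinstance(value, bool):
        if value:                      # store_true flags take no value
            argv.append("--" + key)
    else:
        argv.extend(["--" + key, str(value)])

args = parser.parse_args(argv)

# Record every value that was actually used, including defaults.
with open("path/to/used_args.json", "w") as f:
    json.dump(vars(args), f, indent=2)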
Related
Is it possible to tell argparse to give the same errors on default argument values as it would on user-specified argument values?
For example, the following will not result in any error:
parser = argparse.ArgumentParser()
parser.add_argument('--choice', choices=['a', 'b', 'c'], default='invalid')
args = vars(parser.parse_args()) # args = {'choice': 'invalid'}
whereas omitting the default, and having the user specify --choice=invalid on the command-line will result in an error (as expected).
The reason for asking is that I would like the user to be able to specify default command-line options in a JSON file, which are then set using ArgumentParser.set_defaults(), but unfortunately the behaviour demonstrated above prevents these user-specified defaults from being validated.
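One workaround for this, sketched under the assumption that the defaults file only holds simple string-valued options (the filename defaults.json is a placeholder): push the user's defaults through parse_args once so the choices check runs, and only then install them with set_defaults.

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('--choice', choices=['a', 'b', 'c'])

with open('defaults.json') as f:       # e.g. {"choice": "b"}
    user_defaults = json.load(f)

# Validate the user-supplied defaults by parsing them as if they had been
# typed on the command line; an invalid value errors out here.
tokens = []
for key, value in user_defaults.items():
    tokens.extend(['--' + key, str(value)])
parser.set_defaults(**vars(parser.parse_args(tokens)))

args = parser.parse_args()             # the real command line can still override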
Update: argparse is inconsistent and I now consider the behavior above to be a bug. The following does trigger an error:
parser = argparse.ArgumentParser()
parser.add_argument('--num', type=int, default='foo')
args = parser.parse_args()  # triggers an exception in case --num is not specified on the command-line
I have opened a bug report for this: https://github.com/python/cpython/issues/100949
I took the time to dig into the source code, and what happens is that the check is only applied to arguments you actually gave on the command line. The only way to enforce the check, in my opinion, is to subclass ArgumentParser and have it run the check when you add the argument:
class ValidatingArgumentParser(argparse.ArgumentParser):
    def add_argument(self, *args, **kwargs):
        super().add_argument(*args, **kwargs)
        # Only validate when an explicit default was supplied.
        if 'default' in kwargs:
            self._check_value(self._actions[-1], kwargs['default'])
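For illustration, with this subclass the example from the question now fails as soon as the argument is defined, since _check_value raises argparse.ArgumentError for a default that is not among the choices:

import argparse

parser = ValidatingArgumentParser()
try:
    parser.add_argument('--choice', choices=['a', 'b', 'c'], default='invalid')
except argparse.ArgumentError as e:
    print(e)   # names the invalid default and the allowed choices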
No. Explicit arguments need to be validated because they originate from outside the source code. Default values originate in the source code, so it's the job of the programmer, not the argument parser, to ensure they are valid.
(This is the difference between validation and debugging.)
(Using set_defaults on unvalidated user input still falls under the purview of debugging, as it's not the argument parser itself adding the default values, but the programmer.)
I have a use case where I'd like the user to be able to provide, as an argument to argparse, EITHER a single string OR a filename where each line has a string.
Assume the user launches ./myscript.py -i foobar
The logical flow I'm looking for is something like this:
The script determines whether the string foobar is a readable file.
IF it is indeed a readable file, we call some function from the script, passing each line in foobar as an argument to that function. If foobar is not a readable file, we call the same function but just use the string foobar as the argument and return.
I have no ability to guarantee that a filename argument will have a specific extension (or even an extension at all).
Is there a more pythonic way to do this OTHER than just coding up the logic exactly as I've described above? I looked through the argparse tutorial and didn't see anything, but it also seems reasonable to think that there would be some specific hooks for filenames as arguments, so I figured I'd ask.
One way would be the following.
Let's say that you have created a parser like this:
parser.add_argument('-i',
                    help='...',
                    type=function)
Here type points to a function that evaluates the user's input and decides whether it is a plain string or a filename.
You can find more information about type in the documentation.
Here is a minimal example that demonstrates this use of type:
import argparse
import os

def Val_dir(path):
    if not os.path.isdir(path):
        raise argparse.ArgumentTypeError('The directory you specified does not seem to exist!')
    return path

parser.add_argument('-d', '--directory',
                    type=Val_dir,
                    help='...')
The above example shows that with type we can validate the input at parsing time. In your case, of course, the function would implement different logic: deciding whether the input is a plain string or a filename.
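As a sketch of what that logic might look like for this question (the names str_or_file_lines and somefunction are made up here; the type function returns the file's lines if the argument is a readable file, otherwise a one-element list holding the string itself):

import argparse
import os

def somefunction(value):
    # stand-in for whatever the script actually does with each string
    print(value)

def str_or_file_lines(value):
    # Readable file -> its lines; anything else -> the string itself.
    if os.path.isfile(value) and os.access(value, os.R_OK):
        with open(value) as f:
            return [line.strip() for line in f]
    return [value]

parser = argparse.ArgumentParser()
parser.add_argument('-i', type=str_or_file_lines)
args = parser.parse_args()
for item in args.i:
    somefunction(item)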
This doesn't look like an argparse problem, since all you want from it is a string. That string can be a filename or a function argument. To a parser these will look the same. Also argparse isn't normally used to run functions. It is used to parse the commandline. Your code determines what to do with that information.
So here's a script (untested) that I think does your task:
import argparse

def somefunction(*args):
    print(args)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--input')
    args = parser.parse_args()
    try:
        with open(args.input) as f:
            lines = f.read().splitlines()
        somefunction(*lines)
        # or
        # for line in lines:
        #     somefunction(line.strip())
    except OSError:
        somefunction(args.input)
argparse just provides the args.input string. It's the try/except block that determines how it is used.
================
Here's a prefix char approach:
parser = argparse.ArgumentParser(fromfile_prefix_chars='#',
                                 description="use <prog -i #filename> to load values from file")
parser.add_argument('-i', '--inputs', nargs='+')  # nargs='+' so every value read from the file is collected
args = parser.parse_args()
for arg in args.inputs:
    somefunction(arg)
This is supposed to work with a file like:
one
two
three
https://docs.python.org/3/library/argparse.html#fromfile-prefix-chars
I am using Argparse to parse shell input to my Python function.
The tricky part is that this script first reads in a file that partially determines what kind of arguments are available to Argparse (it's a JSON file containing criteria by which the user can specify what data to output).
But before these arguments are added to my parser, I would like to read in some arguments relating to the file reading itself. (e.g. whether to fix the formatting of the input file). Kinda like this:
test.py (fix_formatting=True, **more arguments added later)
When I try to run args = parser.parse_args() twice, once after the initial input and once after adding more keys, things fall apart: argparse quite predictably complains that some of the user input consists of unrecognized arguments. I thought I might use subparsers to get around this.
So I tried variations of (following the example in the docs as best as I could):
def main():
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(help='sub-command help')
    settingsparser = subparsers.add_parser('settings')  # I want a subparser called 'settings'
    settingsparser.add_argument('--fix_formatting', action='store_true')  # this subparser shall have a --fix_formatting
Then I try to parse only the "settings" part like so:
settings=parser.parse_args(['settings'])
This seems to work. But then I add my other keys and things break:
keys = ['alpha', 'beta', 'gamma', 'delta']
for key in keys:
    parser.add_argument("--" + key, type=str, help="X")
args = parser.parse_args()
If I parse any input for any of the arguments from keys, argparse complains that I made an invalid choice: [...] (choose from 'settings'). Now I don't understand why I have to choose from "settings"; the docs say that the parse result
will only contain attributes for the main parser and the subparser that was selected by the command line (and not any other subparsers)
What is my error of understanding here?
And if this is the wrong approach, how would one go about parsing one part of the input before another?
Any help is much appreciated!
parse_args calls parse_known_args. This returns the args namespace along with a list of strings (from sys.argv) that it could not process (the extras). parse_args raises this error if that list is not empty.
https://docs.python.org/3/library/argparse.html#partial-parsing
Thus parse_known_args is useful if you want to parse only part of the input.
sys.argv remains unchanged. Subsequent calls to a parser (whether it is the original one or not) use it again, unless you pass the extras explicitly.
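A minimal two-stage sketch along these lines (the key list stands in for whatever the JSON file provides): parse the settings flags first with parse_known_args, then build a second parser for the file-driven arguments and feed it the leftover strings.

import argparse

# Stage 1: only the options that control how the settings file is read.
pre_parser = argparse.ArgumentParser(add_help=False)
pre_parser.add_argument('--fix_formatting', action='store_true')
settings, extras = pre_parser.parse_known_args()

# ... read the JSON file here, honouring settings.fix_formatting ...
keys = ['alpha', 'beta', 'gamma', 'delta']   # normally derived from the file

# Stage 2: a parser that knows about the file-driven arguments.
parser = argparse.ArgumentParser()
for key in keys:
    parser.add_argument('--' + key, type=str, help='X')

# Parse only the strings the first pass could not handle, collecting
# everything into the same namespace as the stage-1 settings.
args = parser.parse_args(extras, namespace=settings)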
I don't think subparsers help you here. They aren't meant for delayed or two stage parsing. I'd suggest playing with the documentation examples for subparsers first.
To the main parser, the subparsers look like
subparsers = parser.add_argument('cmd', choices=['settings', ...])
In other words, add_subparsers adds a positional argument whose choices are the subparser names you define. That may help you see why it expects you to name settings. Positionals are normally required.
(there's an exception to this in recent versions, https://stackoverflow.com/a/22994500/901925)
I am converting a Bash shell installer utility to Python 2.7 and need to implement a complex CLI, so I am able to parse tens of parameters (potentially up to ~150). These are names of Puppet class variables, in addition to a dozen generic deployment options which were available in the shell version.
However, after I started to add more variables I faced several challenges:
1. I need to group parameters into separate dictionaries so deployment options are separated from Puppet variables. If they are thrown into the same bucket, I will have to write some logic to sort them, potentially renaming parameters, and then dictionary merges will not be trivial.
2. There might be variables with the same name but belonging to different Puppet classes, so I thought subcommands would allow me to filter what goes where and avoid name collisions.
At the moment I have implemented parameter parsing by simply adding multiple parsers:
parser = argparse.ArgumentParser(description='deployment parameters.')
env_select = parser.add_argument_group(None, 'Environment selection')
env_select.add_argument('-c', '--client_id', help='Client name to use.')
env_select.add_argument('-e', '--environment', help='Environment name to use.')
setup_type = parser.add_argument_group(None, 'What kind of setup should be done:')
setup_type.add_argument('-i', '--install', choices=ANSWERS, metavar='', action=StoreBool, help='Yy/Nn Do normal install and configuration')
# MORE setup options
...
args, unk = parser.parse_known_args()
config['deploy_cfg'].update(args.__dict__)
pup_class1_parser = argparse.ArgumentParser(description=None)
pup_class1 = pup_class1_parser.add_argument_group(None, 'Puppet variables')
pup_class1.add_argument('--ad_domain', help='AD/LDAP domain name.')
pup_class1.add_argument('--ad_host', help='AD/LDAP server name.')
# Rest of the parameters
args, unk = pup_class1_parser.parse_known_args()
config['pup_class1'] = dict({})
config['pup_class1'].update(args.__dict__)
# Same for class2, class3 and so on.
The problem with this approach is that it does not solve issue 2. Also, the first parser consumes the -h option, so the rest of the parameters are not shown in the help.
I have tried to use the example selected as an answer there, but I was not able to use both commands at once.
import argparse
import pprint

## This function takes the 'extra' attribute from the global namespace and
## re-parses it to create separate namespaces for all other chained commands.
def parse_extra(parser, namespace):
    namespaces = []
    extra = namespace.extra
    while extra:
        n = parser.parse_args(extra)
        extra = n.extra
        namespaces.append(n)
    return namespaces

pp = pprint.PrettyPrinter(indent=4)

argparser = argparse.ArgumentParser()
subparsers = argparser.add_subparsers(help='sub-command help', dest='subparser_name')

parser_a = subparsers.add_parser('command_a', help="command_a help")
## Setup options for parser_a
parser_a.add_argument('--opt_a1', help='option a1')
parser_a.add_argument('--opt_a2', help='option a2')

parser_b = subparsers.add_parser('command_b', help="command_b help")
## Setup options for parser_b
parser_b.add_argument('--opt_b1', help='option b1')
parser_b.add_argument('--opt_b2', help='option b2')

## Add nargs="*" for zero or more other commands
argparser.add_argument('extra', nargs="*", help='Other commands')

namespace = argparser.parse_args()
pp.pprint(namespace)
extra_namespaces = parse_extra(argparser, namespace)
pp.pprint(extra_namespaces)
This results in:
$ python argtest.py command_b --opt_b1 b1 --opt_b2 b2 command_a --opt_a1 a1
usage: argtest.py [-h] {command_a,command_b} ... [extra [extra ...]]
argtest.py: error: unrecognized arguments: command_a --opt_a1 a1
I got the same result when I tried to define a parent parser with two child parsers.
QUESTIONS
Can I somehow use parser.add_argument_group for argument parsing, or is it just for grouping in the help printout? That would solve issue 1 without the missing-help side effect. Passing it as parse_known_args(namespace=argument_group) (if I recall my experiments correctly) gets all the variables (that's OK) but also gets all the Python object internals in the resulting dict (that's bad for hieradata YAML).
What am I missing in the second example that would allow the use of multiple subcommands? Or is that impossible with argparse?
Any other suggestions for grouping command line variables? I have looked at Click, but did not find any advantages over standard argparse for my task.
Note: I am a sysadmin, not a programmer, so be gentle with me about the non-object-style coding. :)
Thank you
RESOLVED
Argument grouping solved via the answer suggested by hpaulj.
import argparse
import pprint
parser = argparse.ArgumentParser()
group_list = ['group1', 'group2']
group1 = parser.add_argument_group('group1')
group1.add_argument('--test11', help="test11")
group1.add_argument('--test12', help="test12")
group2 = parser.add_argument_group('group2')
group2.add_argument('--test21', help="test21")
group2.add_argument('--test22', help="test22")
args = parser.parse_args()
pp = pprint.PrettyPrinter(indent=4)
d = dict({})
for group in parser._action_groups:
    if group.title in group_list:
        d[group.title] = {a.dest: getattr(args, a.dest, None) for a in group._group_actions}
print "Parsed arguments"
pp.pprint(d)
This gets me the desired result for issue No. 1, until I have multiple parameters with the same name. The solution may look ugly, but at least it works as expected.
python argtest4.py --test22 aa --test11 yy11 --test21 aaa21
Parsed arguments
{ 'group1': { 'test11': 'yy11', 'test12': None},
'group2': { 'test21': 'aaa21', 'test22': 'aa'}}
Your question is too complicated to understand and respond to in one try. But I'll throw out some preliminary ideas.
Yes, argument_groups are just a way of grouping arguments in the help. They have no effect on parsing.
Another recent SO asked about parsing groups of arguments:
Is it possible to only parse one argument group's parameters with argparse?
That poster initially wanted to use a group as a parser, but the argparse class structure does not allow that. argparse is written in object style. parser = ArgumentParser(...) creates one class of object, parser.add_argument(...) creates another, add_argument_group(...) yet another. You customize it by subclassing ArgumentParser, HelpFormatter, the Action classes, etc.
I mentioned a parents mechanism. You define one or more parent parsers and use those to populate your 'main' parser. They can be run independently (with parse_known_args), while the 'main' parser is used to handle help.
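A rough sketch of that parents idea, assuming the deployment options and the Puppet variables can live in separate parent parsers (the option names are taken from the question, everything else is illustrative):

import argparse

# Each group lives in its own parent parser; add_help=False is required
# for parents so that -h is only defined once, on the main parser.
deploy_parent = argparse.ArgumentParser(add_help=False)
deploy_parent.add_argument('-c', '--client_id')
deploy_parent.add_argument('-e', '--environment')

puppet_parent = argparse.ArgumentParser(add_help=False)
puppet_parent.add_argument('--ad_domain')
puppet_parent.add_argument('--ad_host')

# The main parser is built from the parents and owns -h / --help.
main_parser = argparse.ArgumentParser(parents=[deploy_parent, puppet_parent],
                                      description='deployment parameters.')
main_parser.parse_args()   # full help, full validation

# Each parent can also be run on its own to collect just its group.
deploy_args, _ = deploy_parent.parse_known_args()
puppet_args, _ = puppet_parent.parse_known_args()
config = {'deploy_cfg': vars(deploy_args), 'pup_class1': vars(puppet_args)}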
We also discussed grouping the arguments after parsing. A namespace is a simple object, in which each argument is an attribute. It can also be converted to a dictionary. It is easy to pull groups of items from a dictionary.
There have been SO questions about using multiple subparsers. That's an awkward proposition; possible, but not easy. Subparsers are like issuing a command to a system program. You generally issue one command per call. You don't nest them or issue sequences. You let shell piping and scripts handle multiple actions.
IPython uses argparse to parse its inputs. It traps help first, and issues its own message. Most arguments come from config files, so it is possible to set values with default configs, custom configs and in the commandline. It's an example of naming a very large set of arguments.
Subparsers let you use the same argument name, but without being able to invoke multiple subparsers in one call that doesn't help much. And even if you could invoke several subparsers, they would still put the arguments in the same namespace. Also, argparse tries to handle flagged arguments in an order-independent manner, so a --foo at the end of the command line gets parsed the same as though it were at the start.
There was an SO question where we discussed using argument names ('dest') like 'group1.argument1', and I've even discussed using nested namespaces. I could look those up if it would help.
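For what it's worth, a sketch of the dotted-dest idea (the dest strings here are invented for illustration); after parsing, the names are split on the dot to rebuild the grouping:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--client_id', dest='deploy_cfg.client_id')
parser.add_argument('--ad_domain', dest='pup_class1.ad_domain')
args = parser.parse_args(['--client_id', 'acme', '--ad_domain', 'example.org'])

config = {}
for dotted, value in vars(args).items():
    group, name = dotted.split('.', 1)
    config.setdefault(group, {})[name] = value
# config == {'deploy_cfg': {'client_id': 'acme'},
#            'pup_class1': {'ad_domain': 'example.org'}}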
Another thought: load sys.argv and partition it before passing it to one or more parsers. You could split it on some keyword, or on prefixes, etc.
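And a small sketch of that partitioning idea, assuming a marker word such as puppet separates the two groups on the command line (the marker and the arguments are made up for illustration):

import argparse
import sys

argv = sys.argv[1:]
# Split the command line at the (hypothetical) marker word 'puppet'.
if 'puppet' in argv:
    split = argv.index('puppet')
    deploy_argv, puppet_argv = argv[:split], argv[split + 1:]
else:
    deploy_argv, puppet_argv = argv, []

deploy_parser = argparse.ArgumentParser()
deploy_parser.add_argument('-c', '--client_id')

puppet_parser = argparse.ArgumentParser()
puppet_parser.add_argument('--ad_domain')

config = {
    'deploy_cfg': vars(deploy_parser.parse_args(deploy_argv)),
    'pup_class1': vars(puppet_parser.parse_args(puppet_argv)),
}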
If you have so many arguments, this seems like a design issue; it looks very unmanageable. Can you not implement this using a configuration file that has a reasonable set of defaults? Or defaults in the code with a reasonably SMALL number of arguments on the command line, allowing everything else to be overridden with parameters in a 'key:value' configuration file? I can't imagine having to use a CLI with the number of variables you are proposing.
This first link has the same question in the first section, but it is unanswered
(python argparse: parameter=value). And this second question is similar, but I can't seem to get it working for my particular case
( Using argparse to parse arguments of form "arg= val").
So my situation is this -- I am re-writing a Python wrapper which is used by many other scripts (I would prefer not to modify these other scripts). Currently, the Python wrapper is called with command line arguments of the form --key=value for a number of different arguments, but was parsed manually. I would like to parse them with argparse.
N.B. The argument names are unwieldy, so I am renaming using the dest option in add_argument.
parser = argparse.ArgumentParser(description='Wrappin Ronnie Reagan')
parser.add_argument("--veryLongArgName1", nargs=1, dest="arg1", required=True)
parser.add_argument("--veryLongArgName2", nargs=1, dest="arg2")
parser.add_argument("--veryLongArgName3", nargs=1, dest="arg3")
userOpts = vars(parser.parse_args())
Which, while apparently parsing the passed command lines correctly, displays this as the help:
usage: testing_argsparse.py [-h] --veryLongArgName1 ARG1
[--veryLongArgName2 ARG2]
[--veryLongArgName3 ARG3]
testing_argsparse.py: error: argument --veryLongArgName1 is required
But what I want is that all parameters are specified with the --key=value format, not --key value. i.e.
usage: testing_argsparse.py [-h] --veryLongArgName1=ARG1
[--veryLongArgName2=ARG2]
[--veryLongArgName3=ARG3]
testing_argsparse.py: error: argument --veryLongArgName1 is required
testing_argsparse.py --veryLongArgName1=foo
works. The argparse module accepts both the --veryLongArgName1=foo and --veryLongArgName1 foo formats.
What exact command line arguments are you trying to pass to argparse that cause it not to work?
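A quick check of that claim (nargs=1 from the question is dropped here so the value is stored as a plain string rather than a one-element list):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--veryLongArgName1", dest="arg1", required=True)

print(parser.parse_args(["--veryLongArgName1=foo"]))     # Namespace(arg1='foo')
print(parser.parse_args(["--veryLongArgName1", "foo"]))  # Namespace(arg1='foo')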
A little late, but for anyone with a similar request as the OP: you could use a custom HelpFormatter.
class ArgFormatter(argparse.HelpFormatter):
    def _format_args(self, *args):
        result = super(ArgFormatter, self)._format_args(*args)
        return result and '%%%' + result

    def _format_actions_usage(self, *args):
        result = super(ArgFormatter, self)._format_actions_usage(*args)
        return result and result.replace(' %%%', '=')
This can then be passed to ArgumentParser to give the wanted behavior.
parser = argparse.ArgumentParser(
    description='Wrappin Ronnie Reagan',
    formatter_class=ArgFormatter)
This intercepts the args (ARG1, ARG2, ...) and adds a custom prefix, which is later replaced (along with the unwanted space) with an = symbol. The and in the return statements makes sure the result is only modified if it's non-empty.