argparse fails at dealing with sub-commands receiving global options:
import argparse
p = argparse.ArgumentParser()
p.add_argument('--arg', action='store_true')
s = p.add_subparsers()
s.add_parser('test')
Here, p.parse_args('--arg test'.split()) works, but p.parse_args('test --arg'.split()) fails.
Is anyone aware of a Python argument parser that handles global options to sub-commands properly?
You can easily add this argument to both parsers (main parser and subcommand parser):
import argparse
main = argparse.ArgumentParser()
subparser = main.add_subparsers().add_parser('test')
for p in [main, subparser]:
    p.add_argument('--arg', action='store_true')
print(main.parse_args('--arg test'.split()).arg)
print(main.parse_args('test --arg'.split()).arg)
Edit: As hpaulj pointed out in a comment, there is also the parents argument, which you can pass to the ArgumentParser constructor or to the add_parser method. It takes a list of parsers whose arguments the new parser inherits.
import argparse
base = argparse.ArgumentParser(add_help=False)
base.add_argument('--arg', action='store_true')
main = argparse.ArgumentParser(parents=[base])
subparser = main.add_subparsers().add_parser('test', parents=[base])
print(main.parse_args('--arg test'.split()).arg)
print(main.parse_args('test --arg'.split()).arg)
More examples/docs:
looking for best way of giving command line arguments in python, where some params are req for some option and some params are req for other options
Python argparse - Add argument to multiple subparsers (I'm not sure whether that question overlaps with this one too much)
http://docs.python.org/dev/library/argparse.html#parents
Give docopt a try:
>>> from docopt import docopt
>>> usage = """
... usage: prog.py command [--test]
... prog.py another [--test]
...
... --test Perform the test."""
>>> docopt(usage, argv='command --test')
{'--test': True,
'another': False,
'command': True}
>>> docopt(usage, argv='--test command')
{'--test': True,
'another': False,
'command': True}
There's a ton of argument-parsing libs in the Python world. Here are a few that I've seen, all of which should be able to address the problem you're trying to solve (based on my fuzzy recollection of them from when I played with them last):
opster—I think this is what mercurial uses, IIRC
docopt—This one is new, but uses an interesting approach
cliff—This relatively new project by Doug Hellmann (PSF member, virtualenvwrapper author, general hacker extraordinaire) is a bit more than just an argument parser, and is designed from the ground up to handle multi-level commands
clint—Another project that aims to be "argument parsing and more", this one by Kenneth Reitz (of Requests fame).
Here's a dirty workaround --
import argparse
p = argparse.ArgumentParser()
p.add_argument('--arg', action='store_true')
s = p.add_subparsers()
s.add_parser('test')
def my_parse_args(ss):
    # parse the args the parser knows about; don't issue an error on unknown stuff
    namespace, leftover = p.parse_known_args(ss)
    # reparse the unknowns as global options and add them to the namespace
    if leftover:
        s.add_parser('null', add_help=False)
        p.parse_args(leftover + ['null'], namespace=namespace)
    return namespace

# print(my_parse_args('-h'.split()))  # This works too, but causes the script to stop.
print(my_parse_args('--arg test'.split()))
print(my_parse_args('test --arg'.split()))
This works, and you could modify it pretty easily to work with sys.argv (just drop the split string "ss"). You could even subclass argparse.ArgumentParser and replace its parse_args method with my_parse_args, as sketched below, and then you'd never know the difference, although subclassing just to replace a single method seems like overkill to me.
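A rough sketch of what that subclass could look like (the class name and the saved _subparsers_action attribute are my own additions, so treat this as an illustration rather than a drop-in):

import argparse

class GlobalOptionParser(argparse.ArgumentParser):
    # hypothetical parser that re-parses unrecognised args as global options
    def add_subparsers(self, **kwargs):
        # remember the subparsers action so parse_args can add a dummy command
        self._subparsers_action = super().add_subparsers(**kwargs)
        return self._subparsers_action

    def parse_args(self, args=None, namespace=None):
        # first pass: parse what we recognise, keep the leftovers
        namespace, leftover = self.parse_known_args(args, namespace)
        if leftover:
            # second pass: treat the leftovers as global options, using a
            # dummy 'null' subcommand to satisfy the subparsers argument
            self._subparsers_action.add_parser('null', add_help=False)
            super().parse_args(leftover + ['null'], namespace=namespace)
        return namespace

p = GlobalOptionParser()
p.add_argument('--arg', action='store_true')
s = p.add_subparsers()
s.add_parser('test')
print(p.parse_args('test --arg'.split()))  # Namespace(arg=True)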
I think, however, that this is a little bit of a non-standard way to use subparsers. In general, global options are expected to come before subparser options, not after.
The parser has a specific syntax: command <global options> subcommand <subcommand options>. You are trying to feed the subcommand an option that you didn't define for it.
In Python, how can I parse the command line, edit the resulting parsed arguments object and generate a valid command line back with the updated values?
For instance, I would like python cmd.py --foo=bar --step=0 to call python cmd.py --foo=bar --step=1 with all the original --foo=bar arguments, ideally without extra arguments being added when a default value is used.
Is it possible with argparse?
You can use argparse to parse the command-line arguments, and then modify those as desired. At the moment however, argparse lacks the functionality to work in reverse and convert those values back into a command-line string. There is however a package for doing precisely that, called argunparse. For example, the following code in cmd.py
import sys
import argparse
import argunparse
parser = argparse.ArgumentParser()
unparser = argunparse.ArgumentUnparser()
parser.add_argument('--foo')
parser.add_argument('--step', type=int)
kwargs = vars(parser.parse_args())
kwargs['step'] += 1
prefix = f'python {sys.argv[0]} '
arg_string = unparser.unparse(**kwargs)
print(prefix + arg_string)
will print the desired command line:
python cmd.py --foo=bar --step=1
argparse is clearly designed to go one way, from sys.argv to the args namespace. No thought has been given to preserving information that would let you map things back the other way, much less do the mapping itself.
In general, multiple sys.argv could produce the same args. You could, for example, have several arguments that have the same dest. Or you can repeat 'optionals'. But for a restricted 'parser' setup there may be enough information to recreate a usable argv.
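For example, a minimal illustration of the same-dest case:

import argparse

# two different optionals writing to the same dest: the two argv lists below
# produce identical namespaces, so the reverse mapping is ambiguous
p = argparse.ArgumentParser()
p.add_argument('--foo', dest='value')
p.add_argument('--bar', dest='value')
print(p.parse_args(['--foo', 'x']))  # Namespace(value='x')
print(p.parse_args(['--bar', 'x']))  # Namespace(value='x')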
Try something like:
parser = argparse.ArgumentParser()
arg1 = parser.add_argument('--foo', default='default')
arg2 = parser.add_argument('bar', nargs=2)
and then examine the arg1 and arg2 objects. They contain all the information that you supplied to the add_argument method. Of course you could have defined those values in your own data structures beforehand, e.g.
{'option_string':'--foo', 'default':'default'}
{'dest':'bar', 'nargs':2}
and used those as input to add_argument.
While the parser may have enough information to recreate a useable sys.argv, you have to figure out how to do that yourself.
default=argparse.SUPPRESS may be handy. It keeps the parser from adding a default entry to the namespace. So if the option isn't used, it won't appear in the namespace.
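A small illustration of how SUPPRESS keeps unused options out of the namespace:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--foo', default=argparse.SUPPRESS)
parser.add_argument('--step', type=int, default=argparse.SUPPRESS)

# only options that actually appeared on the command line end up in the
# namespace, which makes it easier to rebuild a minimal argv
print(vars(parser.parse_args(['--step', '0'])))  # {'step': 0}
print(vars(parser.parse_args([])))               # {}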
This isn't possible in any easy way that I know of, but then again I've never needed to do this.
But given the lack of information in the question about how you call your script, I'll assume the following:
python test.py cmd --foo=bar --step=0
What you could do is:
from sys import argv

for index in range(1, len(argv)):  # the first object is the script itself
    if '=' in argv[index]:
        param, value = argv[index].split('=', 1)
        if param == '--step':
            value = '1'
        argv[index] = param + '=' + value
print(argv)
Note that this is very specific to --step and may be what you've already thought of and just wanted a "better way", but again, I don't think there is.
Depending on the scope, this works, at least within the same module:
pprint(argparse._sys.argv)
Per the other answers, rebuilding is imperfect, but if you aren't doing anything too fancy and are okay with imperfect, something like this could work for you as a starting point:
def unparse_args(parser, parsed_args):
    """Unparse argparsed args"""
    positional_args = [action.dest
                       for action in parser._actions
                       if not action.option_strings]
    optionals = []
    positionals = []
    for key, value in vars(parsed_args).items():
        if not value:
            # none and false flags go away
            continue
        elif key in positional_args:
            positionals.append(value)
        elif value is True:
            optionals.append(f"--{key}")
        else:
            optionals.append(f"--{key}={value}")
    return " ".join(optionals + positionals)
Here's an example using this with a git clone clone:
parser = argparse.ArgumentParser(description='A sample git clone wrapper')
# options
parser.add_argument("-v", "--verbose", action="store_true",
                    help="be more verbose")
parser.add_argument("-q", "--quiet", action="store_true",
                    help="be more quiet")
parser.add_argument("--recurse-submodules", nargs='?',
                    help="initialize submodules in the clone")
parser.add_argument("--recursive", nargs='?',
                    help="alias of --recurse-submodules")
parser.add_argument("-b", "--branch",
                    help="checkout <branch> instead of the remote's HEAD")
parser.add_argument("--depth", type=int,
                    help="create a shallow clone of that depth")
parser.add_argument("--shallow-submodules", action="store_true",
                    help="any cloned submodules will be shallow")
# positional
parser.add_argument("repo", help="The git repo to clone")
parser.add_argument("dir", nargs='?', help="The location to clone the repo")

# make a fake call to your git clone clone and parse the args
cmdargs = ["--depth=1", "-q", "ohmyzsh/ohmyzsh"]
parsedargs = parser.parse_args(cmdargs)

# now unparse them
unparsed = unparse_args(parser, parsedargs)
print(unparsed)
I have been trying to set up a main parser with two subparsers so that when called alone, the main parser would display a help message.
def help_message():
    print("help message")

import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='sp')
parser_a = subparsers.add_parser('a')
parser_a.required = False
# some options...
parser_b = subparsers.add_parser('b')
parser_b.required = False
# some options...
args = parser.parse_args([])
if args.sp is None:
    help_message()
elif args.sp == 'a':
    print("a")
elif args.sp == 'b':
    print("b")
This code works well on Python 3 and I would like it to work as well on Python 2.x.
I am getting this when running 'python myprogram.py'
myprogram.py: error: too few arguments
Here is my question: how can I manage to write 'python myprogram.py' in the shell and get the help message instead of the error?
I think you are dealing with the bug discussed in http://bugs.python.org/issue9253
Your subparsers argument is a positional argument. That kind of argument is always required unless nargs='?' (or '*'). I think that is why you are getting the error message in 2.7.
But in the latest py 3 release, the method of testing for required arguments was changed, and subparsers fell through the cracks. Now they are optional (not-required). There's a suggested patch/fudge to make argparse behave as it did before (require a subparser entry). I expect that eventually py3 argparse will revert to the py2 practice (with a possible option of accepting a required=False parameter).
So instead of testing whether args.sp is None, you may want to test sys.argv[1:] before calling parse_args; IPython does this to produce its own help message.
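For example, something like this (a minimal sketch building on the code above):

import sys

if not sys.argv[1:]:
    # no command-line arguments at all: show our own help and stop
    help_message()  # or parser.print_help()
    sys.exit(0)
args = parser.parse_args()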
For others - I ended up on this page while trying to figure out why I couldn't just call my script with no arguments while using argparse in general.
The tutorial demonstrates that the difference between an optional argument and a required argument is adding "--" to the name:
parser.add_argument("--show") <--- Optional arg
parser.add_argument("show") <--- Not optional arg
I have a program which can be used in the following way:
program install -a arg -b arg
program list
program update
There can only ever be one of the positional arguments specified (install, list or update), and additional arguments are only allowed in the install scenario.
The argparse documentation is a little dense and I'm having a hard time figuring out how to do this correctly. What should my add_argument calls look like?
It sounds like you want to use subparsers.
from argparse import ArgumentParser
parser = ArgumentParser()
subparsers = parser.add_subparsers()
install = subparsers.add_parser('install')
install.add_argument('-b')
install.add_argument('-a')
install.set_defaults(subparser='install')
lst = subparsers.add_parser('list')
lst.set_defaults(subparser='list')
update = subparsers.add_parser('update')
update.set_defaults(subparser='update')
print(parser.parse_args())
As suggested in the docs, I have combined this with set_defaults so that you can tell which subparser was invoked.
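A rough sketch of dispatching on that value (the example invocation and printed messages are just illustrative):

args = parser.parse_args(['install', '-a', 'x', '-b', 'y'])
# -> Namespace(a='x', b='y', subparser='install')

if args.subparser == 'install':
    print('installing with a=%s and b=%s' % (args.a, args.b))
elif args.subparser == 'list':
    print('listing')
elif args.subparser == 'update':
    print('updating')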
Normally, to add a subparser in argparse you have to do:
parser = ArgumentParser()
subparsers = parser.add_subparsers()
subparser = subparsers.add_parser()
The problem I'm having is I'm trying to add another command line script, with its own parser, as a subcommand of my main script. Is there an easy way to do this?
EDIT: To clarify, I have a file script.py that looks something like this:
def initparser():
parser = argparse.ArgumentParser()
parser.add_argument('--foo')
parser.add_argument('--bar')
return parser
def func(args):
#args is a Namespace, this function does stuff with it
if __name__ == '__main__':
initparser().parse_args()
So I can run this like:
python script.py --foo --bar
I'm trying to write a module app.py that's a command-line interface with several subcommands, so I can run something like:
python app.py script --foo --bar
Rather than copying and pasting all of the initparser() logic over to app.py, I'd like to be able to directly use the parser I create from initparser() as a sub-parser. Is this possible?
You could use the parents parameter
p = argparse.ArgumentParser()
s = p.add_subparsers()
ss = s.add_parser('script', parents=[initparser()], add_help=False)
p.parse_args('script --foo sst'.split())
ss is a parser that shares all the arguments defined by initparser. add_help=False is needed on either ss or initparser so that -h is not defined twice; a sketch of the second option follows.
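For example, a sketch of that second option, assuming you are able to change script.py so that initparser can be built without its own -h (the add_help parameter on initparser is my addition, not part of the original script):

import argparse

def initparser(add_help=True):
    parser = argparse.ArgumentParser(add_help=add_help)
    parser.add_argument('--foo')
    parser.add_argument('--bar')
    return parser

p = argparse.ArgumentParser()
s = p.add_subparsers()
ss = s.add_parser('script', parents=[initparser(add_help=False)])
print(p.parse_args('script --foo sst'.split()))  # Namespace(bar=None, foo='sst')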
You might want to take a look at the shlex module as it sounds to me like you're trying to hack the ArgumentParser to do something that it wasn't actually intended to do.
Having said that, it's a little difficult to figure out a good answer without examples of what it is, exactly, that you're trying to parse.
I think your problem can be addressed by a declarative wrapper for argparse. The one I wrote is called Argh. It helps with separating definition of commands (with all arguments-related stuff) from assembling (including subparsers) and dispatching.
This is a way old question, but I wanted to throw out another alternative: thinking in terms of inversion of control. By this I mean the root ArgumentParser would manage the creation of the subparsers:
# root_argparse.py
from argparse import ArgumentParser, Namespace

__ARG_PARSER = ArgumentParser('My Script')
__SUBPARSERS = __ARG_PARSER.add_subparsers(dest='subcommand')
__SUBPARSERS.required = True

def get_subparser(name: str, **kwargs) -> ArgumentParser:
    return __SUBPARSERS.add_parser(name, **kwargs)

def parse_args(**kwargs) -> Namespace:
    return __ARG_PARSER.parse_args(**kwargs)

# my_script.py
from argparse import ArgumentParser
from root_argparse import get_subparser

__ARG_PARSER = get_subparser('script')
__ARG_PARSER.add_argument('--foo')
__ARG_PARSER.add_argument('--bar')

def do_stuff(...):
    ...

# main.py
from root_argparse import parse_args
import my_script

if __name__ == '__main__':
    args = parse_args()
    # do stuff with args
Seems to work okay from some quick testing I did.
Let's say I have a script that does some work on a file. It takes this file's name on the command line, but if it's not provided, it defaults to a known filename (content.txt, say). With python's argparse, I use the following:
parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=argparse.FileType('r'),
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
# do some work on args.content, which is a file-like object
This works great. The only problem is that if I run python myscript --help, I get an ArgumentError if the file isn't there (which I guess makes sense), and the help text is not shown. I'd rather it not try to open the file if the user just wants --help. Is there any way to do this? I know I could make the argument a string and take care of opening the file myself later (and I've been doing that), but it would be convenient to have argparse take care of it.
You could subclass argparse.FileType:
import argparse
import warnings

class ForgivingFileType(argparse.FileType):
    def __call__(self, string):
        try:
            return super(ForgivingFileType, self).__call__(string)
        except (IOError, argparse.ArgumentTypeError) as err:
            # couldn't open the file: warn instead of erroring out
            warnings.warn(str(err))

parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=ForgivingFileType('r'),
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
This works without having to touch private methods like ArgumentParser._parse_known_args.
Looking at the argparse code, I see:
ArgumentParser.parse_args calls parse_known_args and makes sure that there isn't any pending argument to be parsed.
ArgumentParser.parse_known_args sets default values and calls ArgumentParser._parse_known_args
Hence, the workaround would be to use ArgumentParser._parse_known_args directly to detect -h and, after that, use ArgumentParser.parse_args as usual.
import sys, argparse
parser = argparse.ArgumentParser(description='my illustrative example', argument_default=argparse.SUPPRESS)
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=argparse.FileType('r'),
                    help='file to process (defaults to content.txt)')
parser._parse_known_args(sys.argv[1:], argparse.Namespace())
args = parser.parse_args()
Note that ArgumentParser._parse_known_args needs a couple of parameters: the arguments from the command line and the namespace.
Of course, I wouldn't recommend this approach since it takes advantage of the internal argparse implementation and that might change in the future. However, I don't find it too messy, so you still might want to use it if you think maintenance risks pay off.
Use stdin as default:
parser.add_argument('file', default='-', nargs='?', type=argparse.FileType('r'))
Perhaps you could define your own type or action in the add_argument call that checks if the file exists, and returns a file handle if it does and None (or something else) otherwise.
This would require you to write some code yourself as well, but if the default value cannot always be used you probably have to do some checking sooner or later. As Manny D argues, you might want to reconsider your default value.
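For example, a minimal sketch of such a type function (maybe_open is a made-up name):

import argparse

def maybe_open(path):
    # return an open file handle if the file exists, None otherwise
    try:
        return open(path, 'r')
    except OSError:
        return None

parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file', default='content.txt',
                    type=maybe_open,
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
if args.content is None:
    # handle the missing file yourself instead of letting argparse error out
    pass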