Specifying default filenames with argparse, but not opening them on --help? - python

Let's say I have a script that does some work on a file. It takes this file's name on the command line, but if it's not provided, it defaults to a known filename (content.txt, say). With python's argparse, I use the following:
parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=argparse.FileType('r'),
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
# do some work on args.content, which is a file-like object
This works great. The only problem is that if I run python myscript --help, I get an ArgumentError if the file isn't there (which I guess makes sense), and the help text is not shown. I'd rather it not try to open the file if the user just wants --help. Is there any way to do this? I know I could make the argument a string and take care of opening the file myself later (and I've been doing that), but it would be convenient to have argparse take care of it.
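(For reference, the string-based workaround mentioned above looks roughly like this: argparse only hands back a path, and the file is opened when the work actually starts.)
import argparse

parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file', default='content.txt',
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
# nothing has been opened yet, so --help never touches the file
with open(args.content) as content:
    pass  # do some work on content, which is a file-like object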

You could subclass argparse.FileType:
import argparse
import warnings

class ForgivingFileType(argparse.FileType):
    def __call__(self, string):
        try:
            # return the open file object so it ends up in the namespace
            return super(ForgivingFileType, self).__call__(string)
        except (IOError, argparse.ArgumentTypeError) as err:
            # newer argparse wraps the open() failure in ArgumentTypeError,
            # older versions let the IOError escape directly; either way,
            # warn and fall through to None
            warnings.warn(str(err))

parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=ForgivingFileType('r'),
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
This works without having to touch private methods like ArgumentParser._parse_known_args.

Looking at the argparse code, I see:
ArgumentParser.parse_args calls parse_known_args and makes sure that there isn't any pending argument to be parsed.
ArgumentParser.parse_known_args sets default values and calls ArgumentParser._parse_known_args
Hence, the workaround would be to use ArgumentParser._parse_known_args directly to detect -h and, after that, use ArgumentParser.parse_args as usual.
import sys, argparse
parser = argparse.ArgumentParser(description='my illustrative example', argument_default=argparse.SUPPRESS)
parser.add_argument('--content', metavar='file',
                    default='content.txt', type=argparse.FileType('r'),
                    help='file to process (defaults to content.txt)')
parser._parse_known_args(sys.argv[1:], argparse.Namespace())
args = parser.parse_args()
Note that ArgumentParser._parse_known_args needs a couple of parameters: the arguments from the command line and the namespace.
Of course, I wouldn't recommend this approach, since it relies on argparse internals that might change in the future. However, I don't find it too messy, so you might still want to use it if you think the convenience outweighs the maintenance risk.

Use stdin as default:
parser.add_argument('file', default='-', nargs='?', type=argparse.FileType('r'))
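For context, here is how that one-liner fits the original example; nothing has to be opened for --help because '-' simply maps to the already-open sys.stdin.
import argparse

parser = argparse.ArgumentParser(description='my illustrative example')
# '-' is FileType's spelling for stdin
parser.add_argument('file', default='-', nargs='?', type=argparse.FileType('r'))
args = parser.parse_args()
# args.file is sys.stdin when no filename is given, otherwise an open handle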

Perhaps you could define your own type or action in the add_argument call that checks if the file exists, and returns a file handle if it does and None (or something else) otherwise.
This would require you to write some code yourself as well, but if the default value cannot always be used, you will probably have to do some checking sooner or later. As Manny D argues, you might want to reconsider your default value.
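One way to spell out that custom-type idea; optional_file is a hypothetical helper for illustration, not something argparse provides.
import argparse

def optional_file(path):
    """Return an open file handle if the file can be opened, otherwise None."""
    try:
        return open(path, 'r')
    except OSError:
        return None

parser = argparse.ArgumentParser(description='my illustrative example')
parser.add_argument('--content', metavar='file', default='content.txt',
                    type=optional_file,
                    help='file to process (defaults to content.txt)')
args = parser.parse_args()
if args.content is None:
    parser.error('could not open the content file')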

Related

Importing and using a function that takes argparse as a parameter

I'm trying to import a program and use a couple functions in it, but I'm running into an issue pertaining to argparse.
In the functions I would like to use, the creator passes his parser args to the function like so.
args = parser.parse_args()
def write_flash(esp, args):
    if args.compress is None and not args.no_compress:
        args.compress = not args.no_stub
    # verify file sizes fit in flash
    flash_end = flash_size_bytes(args.flash_size)
    for address, argfile in args.addr_filename:
        argfile.seek(0, 2)  # seek to end
        if address + argfile.tell() > flash_end:
I'm wondering how I can use this function in another program I'm writing. Do I somehow create a parser.parse_args() object with the same arguments as his? One thing I thought of is using subprocess.Popen to run it like so:
p = subprocess.Popen(['python', 'esptool.py', '--port', 'COM3', 'write_flash',
                      '0x00000', 'boot_v1.7.bin',
                      '0xfc000', 'esp_init_data_default_v08.bin',
                      '0xfb000', 'blank.bin',
                      '0x01000', 'user1.1024.new.2.bin'])
But this seems less than ideal. I'm really lost on how to approach argparse in general, and any help would be much appreciated. Thank you.
parse_args() returns a Namespace object. You can just create one yourself.
from argparse import Namespace
args = Namespace()
args.compress = True
args.no_stub = 3
print(args)
and then pass it.
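For this particular write_flash, a hedged sketch of what such a Namespace might look like, based only on the attributes the quoted snippet reads; the values are purely illustrative and the real function may expect more.
from argparse import Namespace

args = Namespace(
    compress=None,       # left for write_flash to decide based on no_stub
    no_compress=False,
    no_stub=False,
    flash_size='1MB',    # whatever flash_size_bytes() accepts
    addr_filename=[(0x00000, open('boot_v1.7.bin', 'rb'))],
)
# write_flash(esp, args)  # esp still has to be created the way esptool expects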

Is there a way to add a parameter that supress the need of other parameters using argparse on Python?

I am using argparse in Python for a command-line script. This is what I have so far:
parser = argparse.ArgumentParser(prog = 'manageAdam')
parser.add_argument("-s", action='store_true', default=False, help='Shows configuration file')
parser.add_argument("d", type=str, help="device")
parser.add_argument("o", type=str, help="operation")
parser.add_argument("-v", "--value", type=int, nargs='*', help="value or list to send in the operation")
I would like manageAdam -s to work on its own, without asking for the positional arguments, much like -h, which can be called without any of the defined positional arguments. Is it possible?
There is no built-in way to do this. You might be able to achieve something by writing custom Action classes that track their state on the parser, but I believe it would become quite messy and buggy.
I believe the best bet is to simply improve your UI. The -s is not an option: it's a separate command that completely alters how your script executes. In such cases you should use the subparsers functionality, which lets you introduce sub-commands. This is a better interface than the one you proposed, and it is used by a lot of other tools (e.g. Git and Mercurial).
In this case you'd have a config command to handle the configuration and a run command (or whatever you want to call it) to perform the operations on the device:
subparsers = parser.add_subparsers(dest='command')
parser_config = subparsers.add_parser('config', help='Configuration')
parser_run = subparsers.add_parser('run', help='Execute operation on device')
parser_run.add_argument('d', type=str, ...)
parser_run.add_argument('o', type=str, ...)
parser_run.add_argument('-v', type=int, nargs='*', ...)
# later:
args = parser.parse_args()
if args.command == 'config':
    print('Configuration')
else:
    print('Run operation')
Used from the command line as:
$ manageAdam config
# or
$ manageAdam run <device> <operation> <values...>
No, there is no such way built in.
You can make all the arguments optional and set their defaults to None, then check that none of them is None, raising argparse.ArgumentError otherwise; if -s is provided, skip the check for the other arguments.
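A minimal sketch of that approach, with the argument names taken from the question; it uses parser.error() for the final check rather than raising argparse.ArgumentError directly.
import argparse

parser = argparse.ArgumentParser(prog='manageAdam')
parser.add_argument('-s', action='store_true', default=False,
                    help='Shows configuration file')
parser.add_argument('d', type=str, nargs='?', default=None, help='device')
parser.add_argument('o', type=str, nargs='?', default=None, help='operation')
parser.add_argument('-v', '--value', type=int, nargs='*',
                    help='value or list to send in the operation')
args = parser.parse_args()

if not args.s:
    # only enforce the positionals when -s was not given
    if args.d is None or args.o is None:
        parser.error('d and o are required unless -s is given')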

argparse - Build back command line

In Python, how can I parse the command line, edit the resulting parsed arguments object and generate a valid command line back with the updated values?
For instance, I would like python cmd.py --foo=bar --step=0 to invoke python cmd.py --foo=bar --step=1, keeping all the original arguments such as --foo=bar, and ideally without extra arguments being added when a default value is used.
Is it possible with argparse?
You can use argparse to parse the command-line arguments, and then modify those as desired. At the moment however, argparse lacks the functionality to work in reverse and convert those values back into a command-line string. There is however a package for doing precisely that, called argunparse. For example, the following code in cmd.py
import sys
import argparse
import argunparse
parser = argparse.ArgumentParser()
unparser = argunparse.ArgumentUnparser()
parser.add_argument('--foo')
parser.add_argument('--step', type=int)
kwargs = vars(parser.parse_args())
kwargs['step'] += 1
prefix = f'python {sys.argv[0]} '
arg_string = unparser.unparse(**kwargs)
print(prefix + arg_string)
will print the desired command line:
python cmd.py --foo=bar --step=1
argparse is clearly designed to go one way, from sys.argv to the args namespace. No thought has been given to preserving information that would let you map things back the other way, much less do the mapping itself.
In general, multiple sys.argv could produce the same args. You could, for example, have several arguments that have the same dest. Or you can repeat 'optionals'. But for a restricted 'parser' setup there may be enough information to recreate a usable argv.
Try something like:
parser = argparse.ArgumentParser()
arg1 = parser.add_argument('--foo', default='default')
arg2 = parser.add_argument('bar', nargs=2)
and then examine the arg1 and arg2 objects. They contain all the information that you supplied to the add_argument method. Of course, you could have defined those values in your own data structures beforehand, e.g.
{'option_string':'--foo', 'default':'default'}
{'dest':'bar', 'nargs':2}
and used those as input to add_argument.
While the parser may have enough information to recreate a usable sys.argv, you have to figure out how to do that yourself.
default=argparse.SUPPRESS may be handy. It keeps the parser from adding a default entry to the namespace. So if the option isn't used, it won't appear in the namespace.
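A small illustration of that behaviour: with argument_default=argparse.SUPPRESS, only the options that were actually given show up, so there is less guesswork when rebuilding the command line.
import argparse

parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS)
parser.add_argument('--foo')
parser.add_argument('--step', type=int)

args = parser.parse_args(['--step', '0'])
print(vars(args))   # {'step': 0} -- no 'foo' entry for the unused option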
This isn't possible in any easy way that I know of; then again, I've never needed to do this.
Given the lack of information in the question about how you call your script, I'll assume the following:
python test.py cmd --foo=bar --step=0
What you could do is:
from sys import argv

for index in range(1, len(argv)):  # the first item is the script itself
    if '=' in argv[index]:
        param, value = argv[index].split('=', 1)
        if param == '--step':
            value = '1'
        argv[index] = param + '=' + value
print(argv)
Note that this is very specific to --step and may be what you've already thought of when you just wanted a "better way", but again, I don't think there is one.
Depending on the scope, this works at least within the same module (argparse imports sys as _sys, so this is just the original sys.argv):
from pprint import pprint
pprint(argparse._sys.argv)
Per the other answers, rebuilding is imperfect, but if you aren't doing anything too fancy and are okay with imperfect, something like this could work for you as a starting point:
def unparse_args(parser, parsed_args):
    """Unparse argparsed args"""
    positional_args = [action.dest
                       for action in parser._actions
                       if not action.option_strings]
    optionals = []
    positionals = []
    for key, value in vars(parsed_args).items():
        if not value:
            # none and false flags go away
            continue
        elif key in positional_args:
            positionals.append(value)
        elif value is True:
            optionals.append(f"--{key}")
        else:
            optionals.append(f"--{key}={value}")
    return " ".join(optionals + positionals)
Here's an example using this with a git clone clone:
parser = argparse.ArgumentParser(description='A sample git clone wrapper')
# options
parser.add_argument("-v", "--verbose", action="store_true",
                    help="be more verbose")
parser.add_argument("-q", "--quiet", action="store_true",
                    help="be more quiet")
parser.add_argument("--recurse-submodules", nargs='?',
                    help="initialize submodules in the clone")
parser.add_argument("--recursive", nargs='?',
                    help="alias of --recurse-submodules")
parser.add_argument("-b", "--branch",
                    help="checkout <branch> instead of the remote's HEAD")
parser.add_argument("--depth", type=int,
                    help="create a shallow clone of that depth")
parser.add_argument("--shallow-submodules", action="store_true",
                    help="any cloned submodules will be shallow")
# positional
parser.add_argument("repo", help="The git repo to clone")
parser.add_argument("dir", nargs='?', help="The location to clone the repo")
# make a fake call to your git clone clone and parse the args
cmdargs = ["--depth=1", "-q", "ohmyzsh/ohmyzsh"]
parsedargs = parser.parse_args(cmdargs)
# now unparse them
unparsed = unparse_args(parser, parsedargs)
print(unparsed)
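For the sample call above this should print something like --quiet --depth=1 ohmyzsh/ohmyzsh. Note that the keys come from each action's dest, so options spelled with dashes (e.g. --recurse-submodules) would be rebuilt with underscores, which argparse itself would not accept; as noted above, treat this as a starting point rather than a faithful round trip.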

argumentparser close file argument

ArgumentParser can take a file-type argument and leaves the file open directly, for example:
parser.add_argument('infile', nargs='?', type=argparse.FileType('r'))
args = parser.parse_args().__dict__
input = args['infile'].readlines()
Do I need to close args['infile'] in my program, or would ArgumentParser close it for me? I didn't find this mentioned anywhere in the documentation.
No, it does not close the file object. One discussion of the issue puts it this way:
The problem here is that FileType may return stdin or stdout, so it can’t just always close the file object.
A great number of unclosed file handles may cause problems on some OSes, but that's it. On the plus side, the fact that argparse accepts for its type argument any callable that can check and convert a string input is simple, clean and works.
Another answer makes the same point:
Some digging in the source reveals that it doesn't close it for you. That makes sense, as it also has to open files for writing, and you probably wouldn't want it to close those.
I seem to have stumbled onto a clear and concise way to close argparse files:
parser = argparse.ArgumentParser(description='Hello!')
parser.add_argument('src_file',
                    type=argparse.FileType('r', encoding='Windows-1252'),
                    help='Input .txt file')
parser.add_argument('dst_file',
                    type=argparse.FileType('w', encoding='Windows-1252'),
                    help='Output .foo file')
args = parser.parse_args()
# do your thing...
args.src_file.close()
args.dst_file.close()
#close(args.src_file) don't do this... gives error
This is with Python 3.10.4; searching the PEPs for this 'feature' turned up nothing.
So either this:
Has existed all along but nobody knew,
Was added at some point but not mentioned,
Or it doesn't actually close the file (and doesn't give any error.)*
*The Python debugger in Code/Win10 reports the <_io.TextIOWrapper>'s closed boolean does change to True from this .close(), so it seems to be working as expected.
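Since FileType hands back ordinary file objects, the .close() above is just the regular file method rather than an argparse feature. One wrinkle: when '-' is passed, FileType returns sys.stdin or sys.stdout, which you usually don't want to close. A small helper (hypothetical, not part of argparse) that accounts for that:
import sys

def close_arg_file(f):
    """Close a file returned by argparse.FileType, but leave the standard streams alone."""
    if f is not None and f not in (sys.stdin, sys.stdout, sys.stderr):
        f.close()

close_arg_file(args.src_file)
close_arg_file(args.dst_file)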

Python: argument parser that handles global options to sub-commands properly

argparse fails at dealing with sub-commands receiving global options:
import argparse
p = argparse.ArgumentParser()
p.add_argument('--arg', action='store_true')
s = p.add_subparsers()
s.add_parser('test')
will let p.parse_args('--arg test'.split()) work,
but fail on p.parse_args('test --arg'.split()).
Anyone aware of a python argument parser that handles global options to sub-commands properly?
You can easily add this argument to both parsers (main parser and subcommand parser):
import argparse
main = argparse.ArgumentParser()
subparser = main.add_subparsers().add_parser('test')
for p in [main, subparser]:
    p.add_argument('--arg', action='store_true')
print main.parse_args('--arg test'.split()).arg
print main.parse_args('test --arg'.split()).arg
Edit: As @hpaulj pointed out in a comment, there is also the parents argument, which you can pass to the ArgumentParser constructor or to the add_parser method. It takes a list of parsers that serve as bases for the new one.
import argparse
base = argparse.ArgumentParser(add_help=False)
base.add_argument('--arg', action='store_true')
main = argparse.ArgumentParser(parents=[base])
subparser = main.add_subparsers().add_parser('test', parents=[base])
print main.parse_args('--arg test'.split()).arg
print main.parse_args('test --arg'.split()).arg
More examples/docs:
looking for best way of giving command line arguments in python, where some params are req for some option and some params are req for other options
Python argparse - Add argument to multiple subparsers (I'm not sure whether this question overlaps with that one too much)
http://docs.python.org/dev/library/argparse.html#parents
Give docopt a try:
>>> from docopt import docopt
>>> usage = """
... usage: prog.py command [--test]
...        prog.py another [--test]
...
... --test  Perform the test."""
>>> docopt(usage, argv='command --test')
{'--test': True,
 'another': False,
 'command': True}
>>> docopt(usage, argv='--test command')
{'--test': True,
 'another': False,
 'command': True}
There's a ton of argument-parsing libraries in the Python world. Here are a few that I've seen, all of which should be able to address the problem you're trying to solve (based on my fuzzy recollection of them from when I played with them last):
opster—I think this is what mercurial uses, IIRC
docopt—This one is new, but uses an interesting approach
cliff—A relatively new project by Doug Hellmann (PSF member, virtualenvwrapper author, general hacker extraordinaire); it is a bit more than just an argument parser, and is designed from the ground up to handle multi-level commands
clint—Another project that aims to be "argument parsing and more", this one by Kenneth Reitz (of Requests fame).
Here's a dirty workaround --
import argparse
p = argparse.ArgumentParser()
p.add_argument('--arg', action='store_true')
s = p.add_subparsers()
s.add_parser('test')
def my_parse_args(ss):
    # parse the info the subparser knows about; don't issue an error on unknown stuff
    namespace, leftover = p.parse_known_args(ss)
    # reparse the unknown as global options and add it to the namespace
    if leftover:
        s.add_parser('null', add_help=False)
        p.parse_args(leftover + ['null'], namespace=namespace)
    return namespace
#print my_parse_args('-h'.split()) #This works too, but causes the script to stop.
print my_parse_args('--arg test'.split())
print my_parse_args('test --arg'.split())
This works, and you could modify it pretty easily to work with sys.argv (just drop the split string ss). You could even subclass argparse.ArgumentParser and replace the parse_args method with my_parse_args, and then you'd never know the difference, although subclassing just to replace a single method seems like overkill to me.
I think, however, that this is a little bit of a non-standard way to use subparsers. In general, global options are expected to come before subparser options, not after.
The parser has a specific syntax: command <global options> subcommand <subcommand options>. You are trying to feed the subcommand an option, but you didn't define one on it.
