I'm parsing CLI arguments in my program with the argparse library. I would like to parse an argument that can repeat, with the following behaviour:
if the argument appears at least once, its values are stored in a list,
if the argument doesn't appear, the value is some default list.
I have the following code so far:
import argparse
ap = argparse.ArgumentParser(description="Change channel colours.")
ap.add_argument('-c', '--channel', action='append', default=['avx', 'fbx'])
print(ap.parse_known_args(['-c', 'iasdf', '-c', 'fdas']))
print(ap.parse_known_args())
This appropriately sets a default list; however, it doesn't start from an empty list when the argument appears. In other words, the second print statement prints the correct value (the default list), but the first one prints
['avx', 'fbx', 'iasdf', 'fdas']
instead of
['iasdf', 'fdas']
Is there a way in argparse to do what I want without doing something like
if len(args.channel) > 2:
    args.channel = args.channel[2:]
after the fact?
There's a bug/issue discussing this behavior; I wrote several posts on it:
https://bugs.python.org/issue16399 argparse: append action with default list adds to list instead of overriding
For now the only change is in documentation, not in behavior.
All defaults are placed in the namespace at the start of parsing. For ordinary actions, user values overwrite the default. But in the append case, they are just added to what's there already. It doesn't try to distinguish between values placed by the default, and previous user values.
I think the simplest solution is to leave the default out, check after parsing for None or an empty list (I don't recall which), and insert your default then. You don't get extra points for doing all the parsing in argparse; a bit of post-parsing processing is quite ok.
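A minimal sketch of that post-parsing approach, reusing the parser from the question (with the default left out of add_argument):

import argparse

ap = argparse.ArgumentParser(description="Change channel colours.")
ap.add_argument('-c', '--channel', action='append')  # no default here

args = ap.parse_args(['-c', 'iasdf', '-c', 'fdas'])
if args.channel is None:           # the flag never appeared, so append left it as None
    args.channel = ['avx', 'fbx']  # insert the default after parsing
print(args.channel)                # ['iasdf', 'fdas'] here; ['avx', 'fbx'] without -c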
Related
I have the following code section that triggers Pylint error "E1120: No value for argument":
framework/packages/create_network.py:79:21: E1120: No value for argument 'bridge1' in function call (no-value-for-parameter)
Is there a flag for Pylint that can bypass this check for this specific case? I am looking for a solution that doesn't require changes in the code itself and is a parameter passed to Pylint.
In my answer I will give the workaround I am familiar with, which is an annotation in the code.
Description of the case
This happens when passing an unpacked list as values to parameters defined by name and not by *args.
The script passes a list
bridges = create_bridge(interface1, interface2) # Returns an array of 4 values
routers = get_routers(*bridges) # Unpacks the array and passes values to 4 parameters
And the function signature is
def get_routers(bridge1, bridge2, bridge3, bridge4):
This can be solved by disabling the specific check for E1120 for the specific line that causes the issue.
In this case the code will now have a pylint disable comment above the line that unpacks the list:
bridges = create_bridge(interface1, interface2) # Returns an array of 4 values
# Disables the no-value-for-parameter check from here to the end of this scope
# pylint: disable=no-value-for-parameter
routers = get_routers(*bridges) # Unpacks the array and passes values to 4 parameters
I dislike this solution because it forces me to change the code itself. I'd prefer to disable this check when running Pylint as an optional flag just for unpacking a list to parameters.
I have a function that updates 4 dictionaries by priority.
When testing, I found out that in some cases not all 4 will be provided.
The function fails of course because I am trying to use update on a NoneType.
@staticmethod
def create_configuration(layer1, layer2, layer3, layer4):
    configuration = {}
    # Lowest-priority layer first so higher-priority layers overwrite it
    configuration.update(layer4)
    configuration.update(layer3)
    configuration.update(layer2)
    configuration.update(layer1)
    return configuration
I tried setting the parameters in the function signature to layer3={} and layer3=dict(), but either way, when I run it, the dictionary is still a NoneType.
Is there a more elegant way to do it, rather than looping over the variables and setting them to an empty dict if they are NoneType?
Many options.
The reason your default arguments do nothing is that you do in fact pass an argument for those parameters; it just happens to be None, so the default is unused.
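A quick illustration of that point, with a hypothetical one-parameter function:

def f(layer={}):    # mutable default used only for illustration
    return layer

print(f())      # {}   -- the default is used
print(f(None))  # None -- an explicit None overrides the default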
One option: fix the code that calls this function. If this function expects four dicts and gets None instead of a dict, then something is wrong on that side.
But there is nothing special about needing four dicts in this code. It could instead be:
def combines_dicts(*dicts):
    combined = {}
    for d in dicts:
        combined.update(d)
    return combined
Now the calling code could give two, or five, arguments if it had that many dicts.
You could also fix it using if:
if layer4:
    configuration.update(layer4)
Et cetera.
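If fixing the callers is not an option, here is a sketch that combines both suggestions (the names mirror the question; the None check is the only addition):

def create_configuration(layer1=None, layer2=None, layer3=None, layer4=None):
    configuration = {}
    # Apply the lowest-priority layer first so higher-priority layers overwrite it
    for layer in (layer4, layer3, layer2, layer1):
        if layer:                  # skip layers that are None (or empty)
            configuration.update(layer)
    return configuration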
I am trying to create an option that takes two arguments. The first argument should be validated by choices, but the second argument is an arbitrary user-supplied value. For example:
> app -h
usage: app [--option {count,label} arg]
Correct usage examples:
> app --option count 1
> app --option count 912
> app --option label userfoo
I tried setting it up like this:
parser.add_argument('--option', choices=['count','label'], nargs=2)
This does not work, as it tries to validate BOTH arguments using the choices. The help string shows this:
usage: app [--option {count,label} {count,label}]
There are several ways I could do it manually:
remove the choices field and manually validate the first argument in code.
separate it into --option count --value 3, but that is awkward as --value is required by option but invalid without it. It is really a single option with two values
make --option have a compound value, for example --option count=3 and then parse the value
Part of what I want is to have the auto-generated help string show the choices for the first argument. I would also like for argparse to detect and report errors whenever possible with a minimum of custom code. The reason is that we have a very complex CLI, and this helps maintain consistency.
Is there a way to do this with argparse?
parser._get_values does this, when nargs is a number:
value = [self._get_value(action, v) for v in arg_strings]
for v in value:
    self._check_value(action, v)
_get_value applies the type function, while _check_value tests the choices. After this the values list is passed to the store Action.
So the normal processing applies the same type and choices test to each string.
I can imagine writing a type function that accepts both numbers and strings from a list. But it couldn't distinguish between the first and second arguments. A custom Action will see both, and could do further testing.
But often it's simpler to just do your own value testing after parsing. The parser doesn't have to do everything. Its primary function is to figure out what the user wants. Checking values and raising a standardized error is a nice part of parsing, but isn't its primary purpose.
Also think about how you'd specify the usage/help for the expected input.
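For example, a rough sketch of that post-parse approach; the tuple metavar is only there to make the generated usage string look like the one you want:

import argparse

parser = argparse.ArgumentParser(prog='app')
# nargs=2 collects both values; the tuple metavar shapes the usage/help text
parser.add_argument('--option', nargs=2, metavar=('{count,label}', 'arg'))

args = parser.parse_args(['--option', 'count', '912'])
if args.option is not None:
    kind, value = args.option
    if kind not in ('count', 'label'):
        parser.error("argument --option: invalid first value: %r (choose from 'count', 'label')" % kind)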
For example, I'd like to do something like: greet(,'hola'), where greet is:
def greet(person='stranger', greeting='hello')
This would help greatly for testing while writing code.
When calling a function you can use the parameter names to make it even clearer which parameter will take which value. At the same time, if defaults are provided in the function definition, skipping parameters when calling the function does not raise any errors. So, in short, you can just do this:
def greet(person='stranger', greeting='hello'):
    print('{} {}'.format(greeting, person))

greet(greeting='hola')  # same as greet(person='stranger', greeting='hola')
# prints 'hola stranger'
Note that, as I said above, this would not work if, for example, your function definition was like this:
def greet(person, greeting):
    print('{} {}'.format(greeting, person))
In this case, Python would complain that it does not know what to do with person, since no default is supplied.
And by the way, the problem you are describing is most likely the very reason defaults are used in the first place.
Without knowing the other parameters, and only knowing that the parameter you want to change is in the second position, you could use the inspect module to get the function signature and its associated default values.
Then make a copy of the default values list and change the one at the index you want:
import inspect

def greet(person='stranger', greeting='hello'):
    print(person, greeting)

# getargspec is deprecated (removed in Python 3.11); inspect.getfullargspec is the replacement there
argspec = inspect.getargspec(greet)
defaults = list(argspec.defaults)
defaults[1] = "hola"  # change second default parameter
greet(**dict(zip(argspec.args, defaults)))
Assuming that all parameters have default values (otherwise the lists are shifted and that fails), this prints:
stranger hola
I am trying to create a user interface using the argparse module.
One of the arguments needs to be converted, so I use the type keyword:
add_argument('positional', ..., type=myfunction)
and there is another optional argument:
add_argument('-s', dest='switch', ...)
In addition, I have
parsed_argument=parse_args()
However, in myfunction, I hope I can use an additional parameter to control the behavior, which is the optional argument above, i.e.
def myfunction(positional, switch=parsed_argument.switch):
...
How can I achieve that?
Simple answer: You can’t. The arguments are parsed separately, and there is no real guarantee that some order is maintained. Instead of putting your logic into the argument type, just store it as a string and do your stuff after parsing the command line:
parser.add_argument('positional')
parser.add_argument('-s', '--switch')
args = parser.parse_args()
myfunction(args.positional, switch=args.switch)
I'm not sure I understood correctly what you want to achieve, but if what you want to do is something that looks like:
myprog.py cmd1 --switcha
myprog.py cmd2 --switchb
yes you can; you need to use subparsers. I wrote a good example of this for a little PoC to access stackoverflow's API from the CLI. The whole logic is a bit long to reproduce here, but the main idea is:
create your parser using parser = argparse.ArgumentParser(...)
create the subparsers using subparsers = parser.add_subparsers(...)
add the commands with things like subparsers.add_parser('mycommand', help="It's only a command").set_defaults(func=mycmd_fn), where
mycmd_fn takes args as a parameter, giving you all the switches you issued to the command!
The difference from what you ask is that you'll need one function per command, rather than one function taking the positional argument as its first argument. But you can work around that easily by having mycmd_fn be something like: mycmd_fn = lambda *args: myfunction('mycmd', *args)
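A minimal sketch of that pattern (the command and switch names are placeholders):

import argparse

def mycmd_fn(args):
    # args carries whatever switches were given to this subcommand
    print('cmd1 called, switcha =', args.switcha)

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
cmd1 = subparsers.add_parser('cmd1', help='First command')
cmd1.add_argument('--switcha', action='store_true')
cmd1.set_defaults(func=mycmd_fn)

args = parser.parse_args(['cmd1', '--switcha'])
args.func(args)   # dispatch to the function registered for the chosen command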
HTH
From the documentation:
type= can take any callable that takes a single string argument and returns the converted value:
Python functions like int and float are good examples of what a type function should be like. int takes a string and returns a number. If it can't convert the string it raises a ValueError. Your function could do the same. argparse.ArgumentTypeError is another option. argparse isn't going to pass any optional arguments to it. Look at the code for argparse.FileType to see a more elaborate example of a custom type.
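A small sketch of such a type function (the name echoes the question's myfunction; the validation rule is just a placeholder):

import argparse

def myfunction(string):
    # A type callable receives exactly one string and returns the converted value
    if not string.isdigit():
        raise argparse.ArgumentTypeError('%r is not a number' % string)
    return int(string)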
action is another place where you can customize behavior. The documentation has an example of a custom Action. Its arguments include the namespace, the object where the parser is collecting the values it will return to you. This object contains any arguments that have already been set. In theory your switch value will be available there - if it occurs first.
There are many SO answers that give custom Actions.
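A rough sketch of such an Action for this case; it only helps if -s is given before the positional, and the myfunction stub stands in for the question's converter:

import argparse

def myfunction(value, switch=None):
    # stand-in for the question's converter
    return (value, switch)

class ConvertAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # The namespace already holds -s/--switch if it appeared earlier on the command line
        switch = getattr(namespace, 'switch', None)
        setattr(namespace, self.dest, myfunction(values, switch=switch))

parser = argparse.ArgumentParser()
parser.add_argument('-s', dest='switch')
parser.add_argument('positional', action=ConvertAction)
print(parser.parse_args(['-s', 'on', 'value']))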
Subparsers are another good way of customizing the handling of arguments.
Often it is better to check for the interaction of arguments after parse_args. In your case 'switch' could occur after the positional and still have effect. And parser.error() lets you use the argparse error mechanism (e.g. displaying the usage).
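A minimal sketch of that post-parse check, reusing the myfunction stub from the sketch above:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('positional')          # stored as a plain string
parser.add_argument('-s', dest='switch')
args = parser.parse_args()

try:
    converted = myfunction(args.positional, switch=args.switch)
except ValueError as exc:
    # parser.error() prints the usage message and exits
    parser.error(str(exc))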