Suppose you want to use subcommands, and the subcommands want to store the same argument names in the Namespace, but grouped by subcommand. How can one extend argparse to achieve this without losing any of its standard behavior?
For example:
import argparse
parser = argparse.ArgumentParser()
subparser = parser.add_subparsers()
fooparser = subparser.add_parser('foo')
fooparser.add_argument('rawr', dest='rawr')
barparser = subparser.add_parser('bar')
barparser.add_argument('rawr', dest='rawr')
# It would be nice if this showed up in the Namespace object as the following:
# args: foo 0
# Namespace(foo.rawr=0)
# args: bar 1
# Namespace(bar.rawr=1)
The above example just illustrates my point. The main issue is that, when the above code executes, parse_args() returns a Namespace that just has rawr=N. But my code distinguishes behavior based on the subcommand, so it is important that the Namespace contain an object per subcommand with its own rawr attribute. For example:
if args.foo.rawr:
    # do action 1
    pass
if args.bar.rawr:
    # do action 2
    pass
If args only has args.rawr, then you cannot discriminate between action 1 and action 2; without the additional nested layer both are legal actions.
To save the subcommand name, use .add_subparsers(dest=...), like this:
import argparse

parser = argparse.ArgumentParser()
subparser = parser.add_subparsers(dest='command')
fooparser = subparser.add_parser('foo')
fooparser.add_argument('rawr')
barparser = subparser.add_parser('bar')
barparser.add_argument('rawr')

for a in ['foo', '0'], ['bar', '1']:
    args = parser.parse_args(a)
    print(args)
    if args.command == 'foo':
        print('doing foo!')
    elif args.command == 'bar':
        print('doing bar!')
Output:
Namespace(command='foo', rawr='0')
doing foo!
Namespace(command='bar', rawr='1')
doing bar!
Thanks to George Shuklin for pointing this out on Medium
Supplying a dest for the subparsers is desirable, though not required, and it may be enough to further identify the arguments.
Positionals can take any name you want to supply; you can't supply an extra dest. That name will be used in the args Namespace. Use metavar to control the string used in the help.
For flagged arguments (optionals), use the dest.
import argparse

parser = argparse.ArgumentParser()
subparser = parser.add_subparsers(dest='cmd')
fooparser = subparser.add_parser('foo')
fooparser.add_argument('-b', '--baz', dest='foo_baz')
fooparser.add_argument('foo_rawr', metavar='rawr')
barparser = subparser.add_parser('bar')
barparser.add_argument('-b', '--baz', dest='bar_baz')
barparser.add_argument('bar_rawr', metavar='rawr')
Include a print(args) during debugging to get a clear idea of what the parser does.
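For instance, a quick check along those lines (the argument values below are made up) shows how the distinct dest names keep the subcommands apart:

args = parser.parse_args(['foo', '-b', 'x', 'y'])
print(args)   # roughly: Namespace(cmd='foo', foo_baz='x', foo_rawr='y')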
In previous SO questions we have discussed using a custom Namespace class and custom Action subclasses to create some sort of nesting or dict-like behavior, but I think that's more work than most people need.
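For reference, here is a minimal sketch of that kind of custom Action (the GroupedAction name and the dotted-dest convention are inventions for this example, not anything argparse provides). It stores each value on a sub-namespace named by the part of dest before the dot:

import argparse

class GroupedAction(argparse.Action):
    """Store the value on a nested Namespace keyed by the 'group.name' dest."""
    def __call__(self, parser, namespace, values, option_string=None):
        group, _, name = self.dest.partition('.')
        sub = getattr(namespace, group, None)
        if sub is None:
            sub = argparse.Namespace()
            setattr(namespace, group, sub)
        setattr(sub, name, values)

parser = argparse.ArgumentParser()
subparser = parser.add_subparsers(dest='command')
fooparser = subparser.add_parser('foo')
# for a positional the name is the dest, so encode the group in the name
fooparser.add_argument('foo.rawr', metavar='rawr', action=GroupedAction,
                       default=argparse.SUPPRESS)
barparser = subparser.add_parser('bar')
barparser.add_argument('bar.rawr', metavar='rawr', action=GroupedAction,
                       default=argparse.SUPPRESS)

args = parser.parse_args(['foo', '0'])
print(args)           # Namespace(command='foo', foo=Namespace(rawr='0'))
print(args.foo.rawr)  # '0'

The default=argparse.SUPPRESS keeps the flat 'foo.rawr' attribute from also being set on the Namespace.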
Docs also illustrate the use of
parser_foo.set_defaults(func=foo)
to set an extra argument based on the subparser. In this example the value may be an actual function object. The use of the dest is also mentioned in the docs, though perhaps as too much of an afterthought.
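A minimal sketch of that set_defaults(func=...) pattern, with placeholder handler functions:

import argparse

def do_foo(args):
    print('doing foo with', args.rawr)

def do_bar(args):
    print('doing bar with', args.rawr)

parser = argparse.ArgumentParser()
subparser = parser.add_subparsers(dest='command')
fooparser = subparser.add_parser('foo')
fooparser.add_argument('rawr')
fooparser.set_defaults(func=do_foo)
barparser = subparser.add_parser('bar')
barparser.add_argument('rawr')
barparser.set_defaults(func=do_bar)

args = parser.parse_args(['foo', '0'])
args.func(args)   # dispatches to do_foo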
Related
I have a legacy Python application which uses some options in its CLI, using argparse, like:
parser = argparse.ArgumentParser()
parser.add_argument('-f', default='foo')
Now I need to remove this option, since its value can no longer be overwritten by users; it has to assume its default value ('foo' in the example). Is there a way to keep the option but prevent it from showing up and from being overwritten by users (so that I can keep the rest of the code as it is)?
It's not entirely clear what you are able to change. Can you edit the parser setup, or only modify the results of parsing?
If you can edit the setup, you could replace the add_argument line with a
parser.set_defaults(f='foo')
https://docs.python.org/3/library/argparse.html#parser-defaults
The -f won't appear in the usage or help, but it will appear in the args Namespace.
Or you could leave it in, but suppress the help display
parser.add_argument('-f', default='foo', help=argparse.SUPPRESS)
https://docs.python.org/3/library/argparse.html#help
Setting the value after parsing is also fine.
Yes, you can do that. After the arguments are parsed (args = parser.parse_args()) the result is a Namespace, so you can do this:
import argparse

parser = argparse.ArgumentParser()
# ... other add_argument() calls here ...
args = parser.parse_args()
args.foo = 'foo value'
print(args)
>>> Namespace(OTHER_OPTIONS, foo='foo value')
I assumed that you wanted to keep this value in the parsed args so your original code still works, but you do not want it to be an option for the user.
I think it doesn't make sense for the argparse module to provide this as a standard option, but there are several easy ways to achieve what you want.
The most obvious way is to just overwrite the value after having called parse_args() (as already mentioned in comments and in another answer):
args.f = 'foo'
However, the user may not be aware that the option is not supported anymore and that the application is now assuming the value "foo". Depending on the use case, it might be better to warn the user about this. The argparse module has several options to do this.
Another possibility is to use an Action class to do a little magic. For example, you could print a warning if the user provided an option that is not supported anymore, or even use the built-in error handling.
import argparse

class FooAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        if values != 'foo':
            print('Warning: option `-f` has been removed, assuming `-f foo` now')
            # Or use the built-in error handling like this:
            # parser.error('Option "-f" is not supported anymore.')
            # You could override the argument value like this:
            # setattr(namespace, self.dest, 'foo')

parser = argparse.ArgumentParser()
parser.add_argument('-f', default='foo', action=FooAction)
args = parser.parse_args()
print('option=%s' % args.f)
You could also just limit the choices to only "foo" and let argparse create an error for other values:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-f', default='foo', choices=['foo'])
args = parser.parse_args()
print('option=%s' % args.f)
Calling python test.py -f bar would then result in:
usage: test.py [-h] [-f {foo}]
test.py: error: argument -f: invalid choice: 'bar' (choose from 'foo')
For a project involving multiple scripts (data processing, model tuning, model training, testing, etc.) I keep all my argparse.ArgumentParser objects as return values of functions, one per script task, in a module I name cli.py. However, in the calling script itself (__main__), after calling args = parser.parse_args(), typing args. in VS Code gives me no suggested attributes specific to that object. How can I get suggested attributes of argparse.Namespace objects in VS Code?
E.g.,
"""Module for command line interface for different scripts."""
import argparse
def some_task_parser(description: str) -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description=description)
parser.add_argument('some_arg')
return parser
then
"""Script for executing some task."""
from cli import some_task_parser
if __name__ == '__main__':
parser = some_task_parser('arbitrary task')
args = parser.parse_args()
print(args.) # I WANT SUGGESTED ATTRIBUTES (SUCH AS args.some_arg) TO POP UP!!
I don't know about VS Code, but in an ipython session tab completion does show the available attributes of a Namespace. argparse.Namespace is a relatively simple object class. What you want, I think, are just the attributes of this object.
In [238]: args = argparse.Namespace(test='one', foo='three', bar=3)
In [239]: args
Out[239]: Namespace(bar=3, foo='three', test='one')
In [240]: args.
bar test
foo args.txt
But those attributes will not be available during development. They are put there by the parsing action, at run time, and cannot be reliably deduced from the parser setup. Certainly argparse does not provide such help.
I often recommend that developers include a
print(args)
during the debugging phase, so they get a clear idea of what the parser does.
Use args.__dict__ (or vars(args)) to get the names and values. If you need just the argument names, list(vars(args)) will give them.
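For example, a quick runtime check (this is introspection after parsing, not an editor feature):

import argparse

args = argparse.Namespace(some_arg='value', verbose=True)
print(vars(args))        # {'some_arg': 'value', 'verbose': True}
print(list(vars(args)))  # ['some_arg', 'verbose']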
I have done as much research as possible but I haven't found the best way to make certain command-line arguments required only under certain conditions, in this case only if other arguments have been given. Here's what I want to do at a very basic level:
p = argparse.ArgumentParser(description='...')
p.add_argument('--argument', required=False)
p.add_argument('-a', required=False) # only required if --argument is given
p.add_argument('-b', required=False) # only required if --argument is given
From what I have seen, other people seem to just add their own check at the end:
if args.argument and (args.a is None or args.b is None):
    # raise argparse error here
Is there a way to do this natively within the argparse package?
I've been searching for a simple answer to this kind of question for some time. All you need to do is check if '--argument' is in sys.argv, so basically for your code sample you could just do:
import argparse
import sys

if __name__ == '__main__':
    p = argparse.ArgumentParser(description='...')
    p.add_argument('--argument', required=False)
    p.add_argument('-a', required='--argument' in sys.argv)  # only required if --argument is given
    p.add_argument('-b', required='--argument' in sys.argv)  # only required if --argument is given
    args = p.parse_args()
This way required receives either True or False depending on whether the user has used --argument. I have already tested it; it seems to work and guarantees that -a and -b behave independently of each other.
You can implement a check by providing a custom action for --argument, which will take an additional keyword argument to specify which other action(s) should become required if --argument is used.
import argparse

class CondAction(argparse.Action):
    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        x = kwargs.pop('to_be_required', [])
        super(CondAction, self).__init__(option_strings, dest, **kwargs)
        self.make_required = x

    def __call__(self, parser, namespace, values, option_string=None):
        for x in self.make_required:
            x.required = True
        try:
            return super(CondAction, self).__call__(parser, namespace, values, option_string)
        except NotImplementedError:
            pass

p = argparse.ArgumentParser()
x = p.add_argument("--a")
p.add_argument("--argument", action=CondAction, to_be_required=[x])
The exact definition of CondAction will depend on what, exactly, --argument should do. But, for example, if --argument is a regular, take-one-argument-and-save-it type of action, then just inheriting from argparse._StoreAction should be sufficient.
In the example parser, we save a reference to the --a option inside the --argument option, and when --argument is seen on the command line, it sets the required flag on --a to True. Once all the options are processed, argparse verifies that any option marked as required has been set.
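For instance, a sketch of that _StoreAction variant might look like this (CondStoreAction is a made-up name, and _StoreAction is a private argparse class, so treat this as illustrative only):

import argparse

class CondStoreAction(argparse._StoreAction):
    def __init__(self, option_strings, dest, **kwargs):
        self.make_required = kwargs.pop('to_be_required', [])
        super(CondStoreAction, self).__init__(option_strings, dest, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        # mark the linked actions as required, then store the value normally
        for action in self.make_required:
            action.required = True
        super(CondStoreAction, self).__call__(parser, namespace, values, option_string)

p = argparse.ArgumentParser()
a = p.add_argument('-a')
b = p.add_argument('-b')
p.add_argument('--argument', action=CondStoreAction, to_be_required=[a, b])

print(p.parse_args(['--argument', 'x', '-a', '1', '-b', '2']))
# p.parse_args(['--argument', 'x']) would now error: -a and -b are required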
Your post parsing test is fine, especially if testing for defaults with is None suits your needs.
http://bugs.python.org/issue11588 'Add "necessarily inclusive" groups to argparse' looks into implementing tests like this using the groups mechanism (a generalization of mutually_exclusive_groups).
I've written a set of UsageGroups that implement tests like xor (mutually exclusive), and, or, and not. I thought those were comprehensive, but I haven't been able to express your case in terms of those operations. (It looks like I need nand, not-and; see below.)
This script uses a custom Test class that essentially implements your post-parsing test. seen_actions is a list of the Actions that the parser has seen.
class Test(argparse.UsageGroup):
    def _add_test(self):
        self.usage = '(if --argument then -a and -b are required)'
        def testfn(parser, seen_actions, *vargs, **kwargs):
            "custom error"
            actions = self._group_actions
            if actions[0] in seen_actions:
                if actions[1] not in seen_actions or actions[2] not in seen_actions:
                    msg = '%s - 2nd and 3rd required with 1st'
                    self.raise_error(parser, msg)
            return True
        self.testfn = testfn
        self.dest = 'Test'

p = argparse.ArgumentParser(formatter_class=argparse.UsageGroupHelpFormatter)
g1 = p.add_usage_group(kind=Test)
g1.add_argument('--argument')
g1.add_argument('-a')
g1.add_argument('-b')
print(p.parse_args())
Sample output is:
1646:~/mypy/argdev/usage_groups$ python3 issue25626109.py --arg=1 -a1
usage: issue25626109.py [-h] [--argument ARGUMENT] [-a A] [-b B]
(if --argument then -a and -b are required)
issue25626109.py: error: group Test: argument, a, b - 2nd and 3rd required with 1st
The usage and error messages still need work, and it doesn't do anything that a post-parsing test can't.
Your test raises an error if (argument & (!a or !b)). Conversely, what is allowed is !(argument & (!a or !b)) = !(argument & !(a and b)). By adding a nand test to my UsageGroup classes, I can implement your case as:
p = argparse.ArgumentParser(formatter_class=argparse.UsageGroupHelpFormatter)
g1 = p.add_usage_group(kind='nand', dest='nand1')
arg = g1.add_argument('--arg', metavar='C')
g11 = g1.add_usage_group(kind='nand', dest='nand2')
g11.add_argument('-a')
g11.add_argument('-b')
The usage is (using !() to mark a 'nand' test):
usage: issue25626109.py [-h] !(--arg C & !(-a A & -b B))
I think this is the shortest and clearest way of expressing this problem using general purpose usage groups.
In my tests, inputs that parse successfully are:
''
'-a1'
'-a1 -b2'
'--arg=3 -a1 -b2'
Ones that are supposed to raise errors are:
'--arg=3'
'--arg=3 -a1'
'--arg=3 -b2'
For arguments I've come up with a quick-n-dirty solution like this.
Assumptions: (1) '--help' should display help and not complain about required argument and (2) we're parsing sys.argv
p = argparse.ArgumentParser(...)
p.add_argument('-required', ..., required = '--help' not in sys.argv )
This can easily be modified to match a specific setting.
For required positionals (which will become unrequired if e.g. '--help' is given on the command line) I've come up with the following: [positionals do not allow for a required=... keyword arg!]
p.add_argument('pattern', ..., nargs = '+' if '--help' not in sys.argv else '*' )
Basically this turns the number of required occurrences of 'pattern' on the command line from one-or-more into zero-or-more in case '--help' is specified.
Here is a simple and clean solution with these advantages:
No ambiguity and loss of functionality caused by oversimplified parsing using the in sys.argv test.
No need to implement a special argparse.Action or argparse.UsageGroup class.
Simple usage even for multiple and complex deciding arguments.
I noticed just one considerable drawback (which some may find desirable): The help text changes according to the state of the deciding arguments.
The idea is to use argparse twice:
Parse the deciding arguments instead of the oversimplified use of the in sys.argv test. For this we use a short parser not showing help and the method .parse_known_args() which ignores unknown arguments.
Parse everything normally while reusing the parser from the first step as a parent and having the results from the first parser available.
import argparse

# First parse the deciding arguments.
deciding_args_parser = argparse.ArgumentParser(add_help=False)
deciding_args_parser.add_argument(
    '--argument', required=False, action='store_true')
deciding_args, _ = deciding_args_parser.parse_known_args()

# Create the main parser with the knowledge of the deciding arguments.
parser = argparse.ArgumentParser(
    description='...', parents=[deciding_args_parser])
parser.add_argument('-a', required=deciding_args.argument)
parser.add_argument('-b', required=deciding_args.argument)
arguments = parser.parse_args()
print(arguments)
Until http://bugs.python.org/issue11588 is solved, I'd just use nargs:
p = argparse.ArgumentParser(description='...')
p.add_argument('--arguments', required=False, nargs=2, metavar=('A', 'B'))
This way, if anybody supplies --arguments, it will have 2 values.
Maybe its CLI is less readable, but the code is much smaller. You can fix that with good docs/help.
This is really the same as @Mira's answer, but I wanted to show it for the case where giving a particular option means extra args are also required:
For instance, if --option foo is given then some args are also required that are not required if --option bar is given:
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--option', required=True,
help='foo and bar need different args')
if 'foo' in sys.argv:
parser.add_argument('--foo_opt1', required=True,
help='--option foo requires "--foo_opt1"')
parser.add_argument('--foo_opt2', required=True,
help='--option foo requires "--foo_opt2"')
...
if 'bar' in sys.argv:
parser.add_argument('--bar_opt', required=True,
help='--option bar requires "--bar_opt"')
...
It's not perfect; for instance proggy --option foo --foo_opt1 bar is ambiguous, but for what I needed to do it's OK.
Add an additional simple "pre" parser to check for --argument, but use parse_known_args().
pre = argparse.ArgumentParser()
pre.add_argument('--argument', required=False, action='store_true', default=False)
args_pre, _ = pre.parse_known_args()

p = argparse.ArgumentParser()
p.add_argument('--argument', required=False)
p.add_argument('-a', required=args_pre.argument)
p.add_argument('-b', required=args_pre.argument)
I have a file structure like this:
project/
    main_prog.py
    tools/
        script.py
        md_script/
            __init__.py
            md_script.py
I search tools/ for local Python modules; in this example it's md_script. I want to use it as a positional argument in my code, like install, but when I use it I get an error:
./jsh.py md_script
usage: jsh.py [-h] {install,call,list,log,restore} ... [md_script]
jsh.py: error: invalid choice: 'md_script' (choose from 'install', 'call', 'list', 'log', 'restore')
Python 3.4 on Ubuntu 14.10.
Here is my code:
parser = argparse.ArgumentParser(prog='jsh.py',
                                 description='Some help.', epilog='Example of usage: some help')
subparsers = parser.add_subparsers()
parser_install = subparsers.add_parser('install', help='Install new project.')
parser_install.add_argument('install', nargs='?', help='Name of project to be installed')

if os.path.isdir('full/path/to/tools/'):
    name_arg = next(os.walk('full/path/to/tools'))[1]
    tools_arg = parser.add_argument_group('Tools', 'Modules from tools')
    for element in name_arg:
        tools_arg.add_argument(element, nargs='?', help='md_script description')

args = parser.parse_args()
try:
    if not len(sys.argv) > 1:
        parser.print_help()
    elif 'install' in args:
        do_some_stuff
    elif element in args:
        do_some_md_script_stuff
    else:
        parser.print_help()
The usage line shows what's wrong:
usage: jsh.py [-h] {install,call,list,log,restore} ... [md_script]
You need to use something like
jsh.py install md_script
You specified subparsers, so you have to give it a subparser name.
From the usage it also looks like you created other subparsers, call, list, etc that you don't show in the code.
You also define positional arguments after creating the subparsers. That's where the [md_script] comes from. Be careful about creating a lot of nargs='?' positionals (including the argument for the install subparser); this could make things confusing for your users. In fact, it seems to be confusing you. Remember that the subparsers argument is in effect a positional argument (one that requires one string).
I'd suggest experimenting with a simpler parser before creating one this complicated.
So from your comments and examples I see that your goal is to let the user name a module so your script can invoke it in some way or other. For that, populating the subparsers with these names makes sense.
I wonder why you also create an optional positional argument with the same name:
module_pars = subparsers.add_parser(element, help = 'Modules from tools')
module_pars.add_argument(element, nargs='?', help=element+' description')
Is that because you are using the presence of the attribute as evidence that this subparser was invoked?
elif element in args:
    do_some_md_script_stuff
The argparse documentation has a couple of other ideas.
One particularly effective way of handling sub-commands is to combine the use of the add_subparsers() method with calls to set_defaults() so that each subparser knows which Python function it should execute.
and
However, if it is necessary to check the name of the subparser that was invoked, the dest keyword argument to the add_subparsers() call will work:
These avoid the messiness of a '?' positional argument, freeing you to use subparser arguments for real information.
subparsers = parser.add_subparsers(dest='module')
....
for element in name_arg:
    # module_pars = 'parser_'+element  # this does nothing
    module_pars = subparsers.add_parser(element, help='Modules from tools')
    module_pars.set_defaults(func=do_some_md_script_stuff)
    # or module_pars.set_defaults(element='I am here')
    module_pars.add_argument('real_argument')
Now you can check:
if args.module == 'md_script':
    do_some_md_script_stuff(args)
or
if hasattr(args, 'func'):
    args.func(args)
With the alternative set_defaults, your original test should still work:
if element in args:
    do_some_md_script_stuff
I did it like this. It's exactly what I wanted:
if os.path.isdir(TOOLS_PATH):
    name_arg = next(os.walk(TOOLS_PATH))[1]
    for element in name_arg:
        module_pars = 'parser_' + element
        module_pars = subparsers.add_parser(element, help='Modules from tools')
        module_pars.add_argument(element, nargs='?', help=element + ' description')
I didn't test it at first, because I don't have a test module, but ./jsh.py md_script goes into the elif element in args: branch with print('md_script') and prints the string. So it looks like it works.
Thanks for all replies.
Edit: I tested it. In add_argument I had to change nargs='?' to nargs='*' to catch more than one argument.
And to catch arguments from command line I used this:
elif args:
    for element in name_arg:
        if element in args:
            element_arg = sys.argv[2:]
            done_cmd, msg = opt_exec_module(element, *element_arg)
            my_logger(done_cmd, msg)
Not very elegant but it works.
I am converting a Bash shell installer utility to Python 2.7 and need to implement a complex CLI so I am able to parse tens of parameters (potentially up to ~150). These are names of Puppet class variables, in addition to a dozen generic deployment options that were available in the shell version.
However, after I started to add more variables I faced several challenges:
1. I need to group parameters into separate dictionaries so deployment options are separated from Puppet variables. If they are thrown into the same bucket, I will have to write some logic to sort them, potentially renaming parameters, and then dictionary merges will not be trivial.
2. There might be variables with the same name but belonging to different Puppet classes, so I thought subcommands would let me filter what goes where and avoid name collisions.
At the moment I have implemented parameter parsing by simply adding multiple parsers:
parser = argparse.ArgumentParser(description='deployment parameters.')
env_select = parser.add_argument_group(None, 'Environment selection')
env_select.add_argument('-c', '--client_id', help='Client name to use.')
env_select.add_argument('-e', '--environment', help='Environment name to use.')
setup_type = parser.add_argument_group(None, 'What kind of setup should be done:')
setup_type.add_argument('-i', '--install', choices=ANSWERS, metavar='', action=StoreBool, help='Yy/Nn Do normal install and configuration')
# MORE setup options
...
args, unk = parser.parse_known_args()
config['deploy_cfg'].update(args.__dict__)
pup_class1_parser = argparse.ArgumentParser(description=None)
pup_class1 = pup_class1_parser.add_argument_group(None, 'Puppet variables')
pup_class1.add_argument('--ad_domain', help='AD/LDAP domain name.')
pup_class1.add_argument('--ad_host', help='AD/LDAP server name.')
# Rest of the parameters
args, unk = pup_class1_parser.parse_known_args()
config['pup_class1'] = dict({})
config['pup_class1'].update(args.__dict__)
# Same for class2, class3 and so on.
The problem with this approach is that it does not solve issue 2. Also, the first parser consumes the -h option, so the rest of the parameters are not shown in the help.
I have tried to use the example selected as an answer, but I was not able to use both commands at once.
## This function takes the 'extra' attribute from the global namespace and re-parses it
## to create separate namespaces for all other chained commands.
def parse_extra(parser, namespace):
    namespaces = []
    extra = namespace.extra
    while extra:
        n = parser.parse_args(extra)
        extra = n.extra
        namespaces.append(n)
    return namespaces
pp = pprint.PrettyPrinter(indent=4)
argparser=argparse.ArgumentParser()
subparsers = argparser.add_subparsers(help='sub-command help', dest='subparser_name')
parser_a = subparsers.add_parser('command_a', help = "command_a help")
## Setup options for parser_a
parser_a.add_argument('--opt_a1', help='option a1')
parser_a.add_argument('--opt_a2', help='option a2')
parser_b = subparsers.add_parser('command_b', help = "command_b help")
## Setup options for parser_b
parser_b.add_argument('--opt_b1', help='option b1')
parser_b.add_argument('--opt_b2', help='option b2')
## Add nargs="*" for zero or more other commands
argparser.add_argument('extra', nargs = "*", help = 'Other commands')
namespace = argparser.parse_args()
pp.pprint(namespace)
extra_namespaces = parse_extra( argparser, namespace )
pp.pprint(extra_namespaces)
Results me in:
$ python argtest.py command_b --opt_b1 b1 --opt_b2 b2 command_a --opt_a1 a1
usage: argtest.py [-h] {command_a,command_b} ... [extra [extra ...]]
argtest.py: error: unrecognized arguments: command_a --opt_a1 a1
I got the same result when I tried to define a parent with two child parsers.
QUESTIONS
Can I somehow use parser.add_argument_group for argument parsing, or is it just for grouping in the help printout? It would solve issue 1 without the missing-help side effect. Passing it as parse_known_args(namespace=argument_group) (if I correctly recall my experiments) gets all the variables (that's OK) but also gets all the Python object internals into the resulting dict (that's bad for hieradata YAML).
What am I missing in the second example that would allow multiple subcommands to be used? Or is that impossible with argparse?
Any other suggestions for grouping command line variables? I have looked at Click, but did not find any advantages over standard argparse for my task.
Note: I am a sysadmin, not a programmer, so go gently on me for the non-object-style coding. :)
Thank you
RESOLVED
Argument grouping solved via the answer suggested by hpaulj.
import argparse
import pprint

parser = argparse.ArgumentParser()
group_list = ['group1', 'group2']
group1 = parser.add_argument_group('group1')
group1.add_argument('--test11', help="test11")
group1.add_argument('--test12', help="test12")
group2 = parser.add_argument_group('group2')
group2.add_argument('--test21', help="test21")
group2.add_argument('--test22', help="test22")
args = parser.parse_args()

pp = pprint.PrettyPrinter(indent=4)
d = dict({})
for group in parser._action_groups:
    if group.title in group_list:
        d[group.title] = {a.dest: getattr(args, a.dest, None) for a in group._group_actions}

print "Parsed arguments"
pp.pprint(d)
This gets me the desired result for issue No. 1, at least until I have multiple parameters with the same name. The solution may look ugly, but it works as expected.
python argtest4.py --test22 aa --test11 yy11 --test21 aaa21
Parsed arguments
{ 'group1': { 'test11': 'yy11', 'test12': None},
'group2': { 'test21': 'aaa21', 'test22': 'aa'}}
Your question is too complicated to understand and respond to in one try. But I'll throw out some preliminary ideas.
Yes, argument_groups are just a way of grouping arguments in the help. They have no effect on parsing.
Another recent SO asked about parsing groups of arguments:
Is it possible to only parse one argument group's parameters with argparse?
That poster initially wanted to use a group as a parser, but the argparse class structure does not allow that. argparse is written in object style: parser = ArgumentParser(...) creates one class of object, parser.add_argument(...) creates another, add_argument_group(...) yet another. You customize it by subclassing ArgumentParser or HelpFormatter or Action classes, etc.
I mentioned a parents mechanism. You define one or more parent parsers and use those to populate your 'main' parser. They can be run independently (with parse_known_args), while the 'main' one handles the help.
We also discussed grouping the arguments after parsing. A namespace is a simple object, in which each argument is an attribute. It can also be converted to a dictionary. It is easy to pull groups of items from a dictionary.
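As a sketch of that after-parsing grouping (the argument names and the puppet_keys set below are just assumptions based on your question):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--client_id')
parser.add_argument('--ad_domain')
parser.add_argument('--ad_host')
args = parser.parse_args(['--ad_domain', 'example.org', '-c', 'acme'])

all_args = vars(args)                   # Namespace -> plain dict
puppet_keys = {'ad_domain', 'ad_host'}  # which dests belong to the Puppet group
config = {
    'pup_class1': {k: v for k, v in all_args.items() if k in puppet_keys},
    'deploy_cfg': {k: v for k, v in all_args.items() if k not in puppet_keys},
}
print(config)
# {'pup_class1': {'ad_domain': 'example.org', 'ad_host': None},
#  'deploy_cfg': {'client_id': 'acme'}}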
There have been SO questions about using multiple subparsers. That's an awkward proposition. Possible, but not easy. Subparsers are like issuing a command to a system program: you generally issue one command per call. You don't nest them or issue sequences; you let shell piping and scripts handle multiple actions.
IPython uses argparse to parse its inputs. It traps help first, and issues its own message. Most arguments come from config files, so it is possible to set values with default configs, custom configs and in the commandline. It's an example of naming a very large set of arguments.
Subparsers let you use the same argument name, but without being able to invoke multiple subparsers in one call that doesn't help much. And even if you could invoke several subparsers, they would still put the arguments in the same namespace. Also, argparse tries to handle flagged arguments in an order-independent manner, so a --foo at the end of the command line gets parsed the same as though it were at the start.
There was an SO question where we discussed using argument names ('dest') like 'group1.argument1', and I've even discussed using nested namespaces. I could look those up if it would help.
Another thought: load sys.argv and partition it before passing it to one or more parsers. You could split it on some keyword, or on prefixes, etc.
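A rough sketch of that partitioning idea (the split_argv helper and the keyword list are assumptions, not tested against your setup):

def split_argv(argv, keywords):
    """Partition argv into chunks, each starting at one of the keywords."""
    chunks, current = [], []
    for token in argv:
        if token in keywords and current:
            chunks.append(current)
            current = []
        current.append(token)
    if current:
        chunks.append(current)
    return chunks

print(split_argv(['command_a', '--opt_a1', 'a1', 'command_b', '--opt_b1', 'b1'],
                 {'command_a', 'command_b'}))
# [['command_a', '--opt_a1', 'a1'], ['command_b', '--opt_b1', 'b1']]

Each chunk (taken from sys.argv[1:] in practice) could then be handed to the matching subparser or to a separate parser.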
If you have so many arguments, this seems like a design issue; it looks very unmanageable. Can you not implement this with a configuration file that has a reasonable set of defaults? Or defaults in the code with a reasonable (i.e. SMALL) number of arguments on the command line, letting everything (or everything else) be overridden by parameters in a 'key:value' configuration file? I can't imagine having to use a CLI with the number of variables you are proposing.
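For what it's worth, a small sketch of that config-file idea (the file_defaults dict stands in for a parsed key:value file; all names are made up):

import argparse

# defaults that would normally come from a key:value config file
file_defaults = {'client_id': 'default-client', 'environment': 'prod', 'install': 'n'}

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--client_id')
parser.add_argument('-e', '--environment')
cli_args = parser.parse_args(['-c', 'acme'])

# command-line values (only the ones actually given) override the file defaults
config = dict(file_defaults)
config.update({k: v for k, v in vars(cli_args).items() if v is not None})
print(config)   # {'client_id': 'acme', 'environment': 'prod', 'install': 'n'}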