I'm using argparse to take in command line input and also to produce help text. I want to use ArgumentDefaultsHelpFormatter as the formatter_class, however this prevents me from also using RawDescriptionHelpFormatter which would allow me to add custom formatting to my description or epilog.
Is there a sensible method of achieving this aside from writing code to produce text for default values myself? According to the argparse docs, all internals of ArgumentParser are considered implementation details, not public API, so sub-classing isn't an attractive option.
I just tried a multiple inheritance approach, and it works:
class CustomFormatter(argparse.ArgumentDefaultsHelpFormatter,
                      argparse.RawDescriptionHelpFormatter):
    pass

parser = argparse.ArgumentParser(description='test\ntest\ntest.',
                                 epilog='test\ntest\ntest.',
                                 formatter_class=CustomFormatter)
This may break if the internals of these classes change though.
I don't see why subclassing a HelpFormatter should be a problem. That isn't messing with the internals of ArgumentParser. The documentation has examples of custom Action and Type classes (or functions). I take the 'there are four such classes' line to be an invitation to write my own HelpFormatter if needed.
The provided HelpFormatter subclasses make quite simple changes, changing just one function. So they can be easily copied or altered.
RawDescription just changes:
def _fill_text(self, text, width, indent):
    return ''.join(indent + line for line in text.splitlines(keepends=True))
In theory it could be changed without altering the API, but it's unlikely.
The defaults formatter just changes:
def _get_help_string(self, action):
    help = action.help
    if '%(default)' not in action.help:
        if action.default is not SUPPRESS:
            defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
            if action.option_strings or action.nargs in defaulting_nargs:
                help += ' (default: %(default)s)'
    return help
You could get the same effect by just including %(default)s in all of your argument help lines. In contrast to the Raw subclasses, this is just a convenience class. It doesn't give you more control over the formatting.
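For example, here is a minimal sketch of that manual equivalent (the --count option is hypothetical):

import argparse

parser = argparse.ArgumentParser()
# Spelling out %(default)s by hand, instead of relying on
# ArgumentDefaultsHelpFormatter to append it:
parser.add_argument('--count', type=int, default=3,
                    help='number of retries (default: %(default)s)')
parser.print_help()  # the help line renders as "... (default: 3)"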
Related
I'd like to provide documentation (within my program) on certain dynamically created objects, but still fall back to using their class documentation. Setting __doc__ seems a suitable way to do so. However, I can't find many details in the Python help in this regard, are there any technical problems with providing documentation on an instance? For example:
class MyClass:
    """
    A description of the class goes here.
    """

a = MyClass()
a.__doc__ = "A description of the object"

print(MyClass.__doc__)
print(a.__doc__)
__doc__ is documented as a writable attribute for functions, but not for instances of user defined classes. pydoc.help(a), for example, will only consider the __doc__ defined on the type in Python versions < 3.9.
Other protocols (including future use-cases) may reasonably bypass the special attributes defined in the instance dict, too. See the "Special method lookup" section of the datamodel documentation, specifically:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
So, depending on the consumer of the attribute, what you intend to do may not be reliable. Avoid.
A safe and simple alternative is just to use a different attribute name of your own choosing for your own use-case, preferably not using the __dunder__ syntax convention which usually indicates a special name reserved for some specific use by the implementation and/or the stdlib.
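Here is a small sketch of both points: the len() example shows special method lookup skipping the instance, and the "description" attribute name is just an illustration of picking your own name.

class MyClass:
    """A description of the class goes here."""

a = MyClass()

# Special method lookup bypasses the instance dict: this __len__ is ignored.
a.__len__ = lambda: 42
try:
    len(a)
except TypeError as e:
    print(e)  # object of type 'MyClass' has no len()

# Safer: a plain, non-dunder attribute of your own choosing.
a.description = "A description of this particular object"
print(a.description)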
There are some pretty obvious technical problems; the question is whether or not they matter for your use case.
Here are some major uses for docstrings that your idiom will not help with:
help(a): Type help(a) in an interactive terminal, and you get the docstring for MyClass, not the docstring for a.
Auto-generated documentation: Unless you write your own documentation generator, it's not going to understand that you've done anything special with your a value. Many doc generators do have some way to specify help for module and class constants, but I'm not aware of any that will recognize your idiom.
IDE help: Many IDEs will not only auto-complete an expression, but show the relevant docstring in a tooltip. They all do this statically, and without some special-case code designed around your idiom (which they're unlikely to have, given that it's an unusual idiom), they're almost certain to fetch the docstring for the class, not the object.
Here are some where it might help:
Source readability: As a human reading your source, I can tell the intent from the a.__doc__ = … right near the construction of a. Then again, I could tell the same intent just as easily from a Sphinx comment on the constant.
Debugging: pdb doesn't really do much with docstrings, but some GUI debuggers wrapped around it do, and most of them are probably going to show a.__doc__.
Custom dynamic use of docstrings: Obviously any code that you write that does something with a.__doc__ is going to get the instance docstring if you want it to, and therefore can do whatever it wants with it. However, keep in mind that if you want to define your own "protocol", you should use your own name, not one reserved for the implementation.
Notice that most of the same is true for using a descriptor for the docstring:
>>> class C:
...     @property
...     def __doc__(self):
...         return 'C doc'
...
>>> c = C()
If you type c.__doc__, you'll get 'C doc', but help(c) will treat it as an object with no docstring.
It's worth noting that making help work is one of the reasons some dynamic proxy libraries generate new classes on the fly—that is, a proxy to underlying type Spam has some new type like _SpamProxy, instead of the same GenericProxy type used for proxies to Hams and Eggseses. The former allows help(myspam) to show dynamically-generated information about Spam. But I don't know how important a reason it is; often you already need dynamic classes to, e.g., make special method lookup work, at which point adding dynamic docstrings comes for free.
I think it's preferable to keep documentation on the class via its docstring, as that will also aid any developer who works on the code. However, if you are doing something dynamic that requires this setup, then I don't see any reason why not. Just understand that it adds a level of indirection that makes things less clear to others.
Remember to K.I.S.S. where applicable :)
I just stumbled over this and noticed that, at least with Python 3.9.5, the behavior seems to have changed.
E.g. using the above example, when I call:
help(a)
I get:
Help on MyClass in module __main__:
<__main__.MyClass object>
A description of the object
Also for reference, have a look at the pydoc implementation which shows:
def _getowndoc(obj):
    """Get the documentation string for an object if it is not
    inherited from its class."""
    try:
        doc = object.__getattribute__(obj, '__doc__')
        if doc is None:
            return None
        if obj is not type:
            typedoc = type(obj).__doc__
            if isinstance(typedoc, str) and typedoc == doc:
                return None
        return doc
    except AttributeError:
        return None
(A simplified form of the problem.) I'm writing an API involving some Python components. These might be functions, but for concreteness let's say they're objects. I want to be able to parse options for the various components from the command line.
from argparse import ArgumentParser
class Foo(object):
    def __init__(self, foo_options):
        """do stuff with options"""
        """..."""

class Bar(object):
    def __init__(self, bar_options):
        """..."""

def foo_parser():
    """(could also be a Foo method)"""
    p = ArgumentParser()
    p.add_argument('--option1')
    # ...
    return p
def bar_parser(): "..."
But now I want to be able to build larger components:
def larger_component(options):
    f1 = Foo(options.foo1)
    f2 = Foo(options.foo2)
    b = Bar(options.bar)
    # ... do stuff with these pieces
Fine. But how to write the appropriate parser? We might wish for something like this:
def larger_parser():  # probably need to take some prefix/ns arguments
    # general options to be overridden by p1, p2
    # (this could be done automagically or by hand in `larger_component`):
    p = foo_parser(prefix=None, namespace='foo')
    p1 = foo_parser(prefix='first-foo-', namespace='foo1')
    p2 = foo_parser(prefix='second-foo-', namespace='foo2')
    b = bar_parser()
    # (you wouldn't actually specify the prefix/namespace twice:)
    return combine_parsers([(p1, namespace='foo1', prefix='first-foo-'),
                            (p2, ...), p, b])

larger_component(larger_parser().parse_args())
# CLI should accept --foo1-option1, --foo2-option1, --option1 (*)
which looks a bit like argparse's parents feature if you forget that we want prefixing (so as to be able to add multiple parsers of the same type)
and probably namespacing (so that we can build tree-structured namespaces to reflect the structure of the components).
Of course, we want larger_component and larger_parser to be composable in the same way, and the namespace object passed to a certain component should always have the same internal shape/naming structure.
The trouble seems to be that the argparse API is basically about mutating your parsers, while querying them is more difficult: if you turned a datatype into a parser directly, you could just walk those objects. I managed to hack together something that somewhat works if the user writes a bunch of functions to add arguments to parsers by hand, but each add_argument call must then take a prefix, and the whole thing becomes quite inscrutable and probably non-composable. (You could abstract over this at the cost of duplicating some parts of the internal data structures...) I also tried to subclass the parser and group objects ...
You could imagine this might be possible using a more algebraic CLI-parsing API, but I don't think rewriting argparse is a good solution here.
Is there a known/straightforward way to do this?
Some thoughts that may help you construct the larger parser:
parser = argparse.ArgumentParser(...)
arg1 = parser.add_argument('--foo',...)
Now arg1 is a reference to the Action object created by add_argument. I'd suggest doing this in an interactive shell and looking at its attributes. Or at least print its repr. You can also experiment with modifying attributes.
Most of what a parser 'knows' about the arguments is contained in these actions. In a sense a parser is an object that 'contains' a bunch of 'actions'.
Look also at:
parser._actions
This is the parser's master list of actions, which will include the default help as well as the ones you add.
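For example, a quick sketch of that exploration (remember that _actions is a private attribute, so this is for poking around rather than production code):

import argparse

parser = argparse.ArgumentParser()
arg1 = parser.add_argument('--foo', default=1, type=int)

# The Action carries the argument's configuration.
print(arg1.dest, arg1.option_strings, arg1.default)  # foo ['--foo'] 1
for action in parser._actions:  # includes the automatic -h/--help action
    print(type(action).__name__, action.dest)
# _HelpAction help
# _StoreAction foo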
The parents mechanism copies Action references from the parent to the child. Note, it does not make copies of the Action objects. It also recreates argument groups - but these groups only serve to group help lines. They have nothing to do with parsing.
args1, extras = parser.parse_known_args(argv, namespace)
is very useful when dealing with multiple parsers. With it, each parser can handle the arguments it knows about, and pass the rest on to others. Try to understand the inputs and outputs to that method.
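A minimal sketch of that pattern (the option names are hypothetical), chaining two parsers over one shared namespace:

import argparse

foo_parser = argparse.ArgumentParser()
foo_parser.add_argument('--foo-option1')
bar_parser = argparse.ArgumentParser()
bar_parser.add_argument('--bar-option1')

argv = ['--foo-option1', 'x', '--bar-option1', 'y']
ns, rest = foo_parser.parse_known_args(argv)      # consumes --foo-option1
ns, rest = bar_parser.parse_known_args(rest, ns)  # consumes --bar-option1
print(ns)    # Namespace(foo_option1='x', bar_option1='y')
print(rest)  # []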
We have talked about composite Namespace objects in earlier SO questions. The default argparse.Namespace class is a simple object class with a repr method. The parser just uses hasattr, getattr and setattr, trying to be as non-specific as it can. You could construct a more elaborate namespace class.
argparse subcommands with nested namespaces
You can also customize the Action classes. That's where most values are inserted into the Namespace (though defaults are set elsewhere).
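As a rough sketch of those two ideas combined (all names hypothetical), a Namespace subclass can interpret dotted dest values so that sub-parsers write into a tree-shaped namespace:

import argparse

class NestedNamespace(argparse.Namespace):
    def __setattr__(self, name, value):
        group, _, leaf = name.partition('.')
        if leaf:  # dotted name: store on a sub-namespace instead
            sub = getattr(self, group, None)
            if sub is None:
                sub = NestedNamespace()
                object.__setattr__(self, group, sub)
            setattr(sub, leaf, value)  # recurses for deeper dots
        else:
            object.__setattr__(self, name, value)

parser = argparse.ArgumentParser()
parser.add_argument('--foo1-option1', dest='foo1.option1')
ns = parser.parse_args(['--foo1-option1', 'x'], namespace=NestedNamespace())
print(ns.foo1.option1)  # x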
IPython uses argparse, both for the main call, and internally for magic commands. It constructs many arguments from config files. Thus it is possible to set many values either with default configs, custom configs, or at the last moment via the commandline arguments.
You might be able to use the concept of composing actions to achieve the functionality that you need. You can build actions that modify the namespace, dest, etc as you need and then compose them with:
def compose_actions(*actions):
    """Compose many argparse actions into one callable action.

    Args:
        *actions: The actions to compose.

    Returns:
        argparse.Action: Composed action.
    """
    class ComposableAction(argparse.Action):
        def __call__(self, parser, namespace, values, option_string=None):
            for action in actions:
                action(option_string, self.dest).__call__(parser,
                                                          namespace,
                                                          values,
                                                          option_string)
    return ComposableAction
See example: https://gist.github.com/mnm364/edee068a5cebbfac43547b57b7c842f1
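A hypothetical usage sketch, reusing compose_actions from above with two made-up actions:

import argparse

class PrintAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        print('seen', option_string, '=', repr(values))

class StoreUpperAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, values.upper())

parser = argparse.ArgumentParser()
parser.add_argument('--name', action=compose_actions(PrintAction,
                                                     StoreUpperAction))
print(parser.parse_args(['--name', 'abc']))  # prints, then Namespace(name='ABC')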
Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
  genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is not a statically typed language, so it really doesn't matter whether you have an instance of Randomizable or an instance of some other class that has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
dtype = ...  # get whatever type you want to randomize
randomizer = RandomizableT(dtype)()
print(randomizer.genRandom())
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
I've found that since Python 3.4 there is the @functools.singledispatch decorator. However, it works only for functions which receive a type instance (object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass
    def aFunc(self):
        print("In A")

class B:
    def __init__(self):
        pass
    def aFunc(self):
        print("In B")

f = A()
f.aFunc()
f = B()
f.aFunc()
output
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
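A common sketch of this idiom is the historical cElementTree fallback, which relies on both modules exposing the same ElementTree interface:

try:
    import xml.etree.cElementTree as ET  # faster C implementation (older Pythons)
except ImportError:
    import xml.etree.ElementTree as ET

tree = ET.fromstring('<root><child/></root>')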
It appears to me that there's no easy way to use the RawDescriptionHelpFormatter in the argparse module without either violating PEP8 or cluttering your namespace.
Here is the most obvious way to format it:
parser = argparse.ArgumentParser(prog='PROG',
                                 ....
                                 formatter_class=argparse.RawDescriptionHelpFormatter)
This violates the stipulation that lines should be limited to 79 characters.
Here's how the example in the argparse documentation looks (spoiler: this is actually correct; see comments below):
parser = argparse.ArgumentParser(
    prog='PROG',
    formatter_class=argparse.RawDescriptionHelpFormatter,
    ....
This violates PEP8 E128 regarding the indentation of continuation lines.
Here's another possibility:
parser = argparse.ArgumentParser(
    prog='PROG',
    formatter_class=
        argparse.RawDescriptionHelpFormatter,
    ....
This violates PEP8 E251 regarding spaces around = for keyword arguments.
(Of course, this doesn't even address the fact that my character-count for the line assumes that the parser token starts on the first column, which is the best case scenario; what if we want to create a parser inside a class and/or a function?)
So the only remaining alternative, as far as I can tell, is to either clutter the namespace:
from argparse import RawDescriptionHelpFormatter, ArgumentParser
...or use a silly temporary variable (which also clutters the namespace):
rawformatter = argparse.RawDescriptionHelpFormatter
parser = argparse.ArgumentParser(prog='PROG',
                                 ....
                                 formatter_class=rawformatter)
Am I missing something? I guess having RawDescriptionHelpFormatter and ArgumentParser directly in the current namespace isn't a big deal, but this seems like an unnecessary frustration.
Your second example looks fine to me, and seems to match the "# Hanging indents should add a level." example here: http://legacy.python.org/dev/peps/pep-0008/#indentation
Also seems to tally with this similar question/answer: What is PEP8's E128: continuation line under-indented for visual indent?
A couple of other variations:
from argparse import RawDescriptionHelpFormatter as formatter

parser = argparse.ArgumentParser(prog='PROG')
# you can reassign a parser attribute after initialization
parser.formatter_class = formatter
But there are other inputs to ArgumentParser that may be long enough to require wrapping or assignment to separate variables.
import textwrap

usage = 'PROG [-h] --foo FOO BAR etc'
description = """\
    This is a long multiline description
    that may require dedenting.
    """
description = textwrap.dedent(description)
parser = argparse.ArgumentParser(usage=usage, description=description,
                                 formatter_class=formatter)
Take a look at test_argparse.py to see the many ways that a long and multifaceted parser can be defined.
http://bugs.python.org/issue13023 raises the issue of what if you wanted several formatter modifications, e.g.:
This means we can either pass argparse.RawDescriptionHelpFormatter or argparse.ArgumentDefaultsHelpFormatter, but not both.
The recommended solution is to subclass the formatter:
class MyFormatter(argparse.RawDescriptionHelpFormatter,
                  argparse.ArgumentDefaultsHelpFormatter):
    pass
Another tactic to keep the namespace clean is to wrap the parser definition in a function or module.
http://ipython.org/ipython-doc/2/api/generated/IPython.core.magic_arguments.html
is an example of how IPython wraps argparse to make new API for its users.
Another parser built on argparse, plac first builds a cfg dictionary:
https://code.google.com/p/plac/source/browse/plac_core.py
def pconf(obj):
    ...
    cfg = dict(description=obj.__doc__,
               formatter_class=argparse.RawDescriptionHelpFormatter)
    ...
    return cfg

def parser_from(obj, **confparams):
    ...
    conf = pconf(obj).copy()
    conf.update(confparams)
    parser = ArgumentParser(**conf)
I am using Python's dir() function to determine what attributes and methods a class has.
For example to determine the methods in wx.Frame, I use dir(wx.Frame)
Is there any command to determine the list of arguments for each method? For example, if I want to know what arguments belong to wx.Frame.CreateToolBar().
As mentioned in the comments, you can use help(fun) to enter the help viewer with the function's signature and docstring. You can also simply use print(fun.__doc__), and for most mature libraries you should get reasonable documentation about the parameters and the function signature.
If you're talking about interactive help, consider using IPython which has some useful extras. For instance you could type %psource fun to get a printout of the source code for the function fun, and with tab completion you could just type wx.Frame. and then hit TAB to see a list of all of the methods and attributes available within wx.Frame.
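As a side note, inspect.signature (Python 3.3+) can also report a callable's parameters, though it often fails for C-implemented or wrapped methods such as many wx ones. A minimal sketch using a hypothetical stand-in function:

import inspect

def create_tool_bar(style=0, winid=-1, name='toolbar'):  # hypothetical stand-in
    pass

print(inspect.signature(create_tool_bar))  # (style=0, winid=-1, name='toolbar')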
Even though GP89 seems to have already answered this question, I thought I'd jump in with a little more detail.
First, GP89's suggestion was to use Python's built-in help() function. This is a function you can use in the interactive console. For methods, it will print the method's declaration line along with its docstring, if one is defined. You can also access this with <object>.__doc__ For example:
>>> def testHelp(arg1, arg2=0):
...     """This is the docstring that will print when you
...     call help(testHelp). testHelp.__doc__ will also
...     return this string. Here is where you should
...     describe your method and all its arguments."""
...
>>> help(testHelp)
Help on function testHelp in module __main__:

testHelp(arg1, arg2=0)
    This is the docstring that will print when you
    call help(testHelp). testHelp.__doc__ will also
    return this string. Here is where you should
    describe your method and all its arguments.

>>>
However, another extremely important tool for understanding methods, classes and functions is the toolkit's API documentation. For built-in Python functions, you should check the Python standard library documentation; that's where I found the documentation for the help() function. You're using wxPython, whose API can be found here, so a quick search for "wx.Frame api" turns up this page describing all of wx.Frame's methods and variables. Unfortunately, CreateToolBar() isn't particularly well documented, but you can still see its arguments:
CreateToolBar(self, style, winid, name)
Happy coding!