How to document a module constant in Python?

I have a module, errors.py, in which several global constants are defined (note: I understand that Python doesn't have constants, but I've defined them by convention using UPPERCASE).
"""Indicates some unknown error."""
API_ERROR = 1
"""Indicates that the request was bad in some way."""
BAD_REQUEST = 2
"""Indicates that the request is missing required parameters."""
MISSING_PARAMS = 3
Using reStructuredText, how can I document these constants? As you can see, I've listed a docstring above each of them, but I haven't found any documentation that says to do that; I've just done it as a guess.

Unfortunately, variables (and constants) do not have docstrings. After all, the variable is just a name for an integer, and you wouldn't want to attach a docstring to the number 1 the way you would to a function or class object.
If you look at almost any module in the stdlib, like pickle, you will see that the only documentation they use is comments. And yes, that means that help(pickle) only shows this:
DATA
    APPEND = b'a'
    APPENDS = b'e'
    …
… completely ignoring the comments. If you want your docs to show up in the built-in help, you have to add them to the module's docstring, which is not exactly ideal.
But Sphinx can do more than the built-in help can. You can configure it to extract the comments on the constants, or use autodata to do it semi-automatically. For example:
#: Indicates some unknown error.
API_ERROR = 1
Multiple #: lines before an assignment statement, or a single #: comment to the right of the statement, work effectively the same as docstrings on objects picked up by autodoc. That includes handling inline rST and auto-generating an rST header for the variable name; there's nothing extra you have to do to make that work.
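For instance, a minimal sketch showing both placements (the module name errors is assumed from the question):

# errors.py
#: Indicates that the request was bad in some way.
BAD_REQUEST = 2

MISSING_PARAMS = 3  #: Indicates that the request is missing required parameters.

and then, in your rST source, either automodule or autodata will pick these up:

.. automodule:: errors
   :members:

.. autodata:: errors.BAD_REQUEST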
As a side note, you may want to consider using an enum instead of separate constants like this. If you're not using Python 3.4 (which you probably aren't yet…), there's the enum34 backport package for 3.2+, or flufl.enum (which is not identical, but it is similar, as it was the main inspiration for the stdlib module) for 2.6+.
Enum instances (not flufl.enum, but the stdlib/backport version) can even have docstrings:
class MyErrors(enum.Enum):
    API_ERROR = 1
    """Indicates some unknown error."""

    BAD_REQUEST = 2
    """Indicates that the request was bad in some way."""

    MISSING_PARAMS = 3
    """Indicates that the request is missing required parameters."""
Although they unfortunately don't show up in help(MyErrors.MISSING_PARAMS), they are docstrings that Sphinx autodoc can pick up.
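In your rST source, a minimal sketch to pull those in (assuming the enum lives in the errors module from the question):

.. autoclass:: errors.MyErrors
   :members:
   :undoc-members: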

If you put a string after the variable, then Sphinx will pick it up as the variable's documentation. I know it works because I do it all over the place. Like this:
FOO = 1
"""
Constant signifying foo.
Blah blah blah...
""" # pylint: disable=W0105
The pylint directive tells pylint to avoid flagging the documentation as being a statement with no effect.

This is an older question, but I noted that a relevant answer was missing.
You can also include a description of the constants in the module's docstring via the .. py:data:: directive. That way the documentation is also made available via the interactive help, and Sphinx will render it nicely.
"""
Docstring for my module.
.. data:: API_ERROR
Indicates some unknown error.
.. data:: BAD_REQUEST
Indicates that the request was bad in some way.
.. data:: MISSING_PARAMS
Indicates that the request is missing required parameters.
"""

You can use hash + colon (#:) to document attributes (class or module level).
#: Use this content as input for moo to do bar
MY_CONSTANT = "foo"
This will be picked up by some documentation generators.
An example (I could not find a better one): Sphinx document module properties

The following worked for me with Sphinx 2.4.4:
in foo.py :
API_ERROR = 1
"""int: Indicates some unknown error."""
then to document it:
.. automodule:: foo
   :members:

I think you're out of luck here.
Python doesn't directly support docstrings on variables: there is no attribute that can be attached to variables and retrieved interactively like the __doc__ attribute on modules, classes and functions.
Source.
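A quick sketch of what that means in practice: the string literal after an assignment is simply evaluated and discarded, and __doc__ on the name resolves to the docstring of the value's type:

API_ERROR = 1
"""Indicates some unknown error."""  # evaluated and immediately discarded

print(API_ERROR.__doc__)  # prints int's docstring, not the string above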

The Sphinx Napoleon Python documentation extension lets you document module-level variables in an Attributes section.
Per https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html :
Attributes
----------
module_level_variable1 : int
    Module level variables may be documented in either the ``Attributes``
    section of the module docstring, or in an inline docstring immediately
    following the variable.

    Either form is acceptable, but the two should not be mixed. Choose
    one convention to document module level variables and be consistent
    with it.
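For this style to be parsed, the extension has to be enabled in your Sphinx conf.py. A minimal sketch (the extension ships with Sphinx itself as sphinx.ext.napoleon in recent versions; older setups used the separate sphinxcontrib.napoleon package):

# conf.py
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']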

Writing only because I haven't seen this option in the answers so far:
You can also define your constants as functions that simply return the desired constant value when called, so for example:
def get_const_my_const() -> str:
    """Returns 'my_const'."""
    return "my_const"
This way they'll be a bit "more constant" (less worry about accidental reassignment), and they also provide the opportunity for regular documentation, as with any other function.

Related

Linting classes created at runtime in Python

For context, I am using the Python ctypes library to interface with a C library. It isn't necessary to be familiar with C or ctypes to answer this question however. All of this is taking place in the context of a python module I am creating.
In short, my question is: how can I allow Python linters (e.g. PyCharm or plugin for neovim) to lint objects that are created at runtime? "You can't" is not an answer ;). Of course there is always a way, with scripting and the like. I want to know what I would be looking at for the easiest way.
First I introduce my problem and the current approach I am taking. Second, I will describe what I want to do, and ask how.
Within this C library, a whole bunch of error codes are defined. I translated this information from the .h header file into a Python enum:
# CustomErrors.py
from enum import Enum

class CustomErrors(Enum):
    ERROR_BROKEN = 1
    ERROR_KAPUTT = 2
    ERROR_BORKED = 3
Initially, my approach is to have a single exception class containing a type field which describes the specific error:
# CustomException.py
from CustomErrors import CustomErrors

class CustomException(Exception):
    def __init__(self, customErr):
        assert type(customErr) is CustomErrors
        self.type = customErr
        super().__init__()
Then, as needed I can raise CustomException(CustomErrors.ERROR_KAPUTT).
Now, what I want to do is create a separate exception class corresponding to each of the enum items in CustomErrors. I believe it is possible to create types at runtime with MyException = type('MyException', (Exception,), {'__doc__' : 'Docstring for ABC class.'}).
I can create the exception classes at runtime like so:
# CustomException.py
from CustomErrors import CustomErrors
...

for ce in CustomErrors:
    n = ce.name
    vars()[n] = type(n, (Exception,), {'__doc__': 'Docstring for {0:s} class.'.format(n)})
Note: the reason I want to create these at runtime is to avoid hard-coding a list of exceptions that may change in the future. I already have the problem of extracting the C enum automatically on the back burner.
This is all well and good, but I have a problem: static analysis cannot resolve the names of these exceptions defined in CustomException. This means PyCharm and other editors for Python will not be able to automatically resolve the names of the exceptions as a suggested autocomplete list when the user types CustomException.. This is not acceptable, as this is code for the end user, who will need to access the exception names for use in try-except constructs.
Here is the only solution I have been able to think of: writing a script which generates the .py files containing the exception names. I can do this using bash. Maybe people will tell me this is really the only option. But I would like to know what other approaches are suggested for solving this problem. Thanks for reading.
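For concreteness, a minimal sketch of the generator script described above, in Python rather than bash (file and module names are the ones from this question; the generated header comment is hypothetical):

# generate_exceptions.py -- regenerate CustomException.py whenever
# CustomErrors changes, so static analysis can see the class names.
from CustomErrors import CustomErrors

with open("CustomException.py", "w") as out:
    out.write('"""Auto-generated from CustomErrors. Do not edit by hand."""\n\n')
    for ce in CustomErrors:
        out.write("class {0}(Exception):\n".format(ce.name))
        out.write('    """Docstring for {0} class."""\n\n'.format(ce.name))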
You can add a comment to tell mypy to ignore dynamically defined attribute errors. Perhaps the linters that you use share a similar way to silence such errors.
mypy docs on silencing errors based on error codes
This example shows how to ignore an error about an imported name mypy thinks is undefined:
# 'foo' is defined in 'foolib', even though mypy can't see the
# definition.
from foolib import foo # type: ignore[attr-defined]
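Applied to the question's dynamically generated module, that would look something like this (names taken from the question):

from CustomException import ERROR_KAPUTT  # type: ignore[attr-defined]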

When using Sphinx, how can I document members that don't have docstrings?

I'm writing documentation for a package I've published, and I find the more thorough your documentation, the easier people find your package to use (duh). I'm actually having a lot of fun lovingly writing up all the features and details of my code.
However, I'm completely flummoxed by how to write Sphinx-compatible documentation for class-level variables. In particular, I've got some enum classes I'd like to document, but for the life of me I can't figure out a way to attach documentation to the enum values. The result is I've got these long and awkward sections of my documentation where there's no documentation except variable names.
I realize using straight docstrings is out of the question because variables don't have docstrings, but surely Sphinx has some sort of functionality around this? Otherwise, how would people document publicly visible values like constants?
While it's true that Python variables can't have docstrings, the Sphinx autodoc extension's autodata and autoattribute directives allow documenting variables and constants. Note that usage differs between a module-level variable and a class member.
Additionally, should you want to show a value for the member in the documentation that is different from the programmatic value, the best way is using annotations: autodata and autoattribute support the annotation option.
Sphinx can pick up comments on variable declarations and include them in the documentation (while those comments aren't docstrings they will be rendered in the documentation). Let's look at a minimal working example:
Source file your_module_name.py:
"""This modules documentation."""
ONE_CONSTANT = "A constant value."
"""Turns out the comment is rendered as a docstring if we put it underneath."""
#: Lets try it like this
TWO_CONSTANTS = 2000
class OneClass:
"""Commenting members of a class."""
#: Lets try the third comment like this.
THREE_CONSTANTS = 3000
#: Lets try the forth comment like this.
FOUR_CONSTANTS = 4000
Corresponding your_module_name.rst:
your\_module\_name module
=========================

.. automodule:: your_module_name
   :members: ONE_CONSTANT, TWO_CONSTANTS

.. autodata:: ONE_CONSTANT
   :annotation: =this annotation

.. autoclass:: OneClass
   :members:
   :undoc-members:
   :show-inheritance:
Final note: this may force you to adapt some conventions you previously used for commenting variables in your source code. Also, if using annotations, leave that member out of autodata or automodule so it isn't documented twice.

How to specialize documenters in sphinx autodoc

I am documenting a project with Sphinx, and I want to create a specialized version of the autoclass:: directive that allows me to modify the documentation string for certain classes.
Here is one thing I have tried: searching the sphinx source, I found that the autoclass directive is created via the ClassDocumenter object. Following this, my idea was to register a subclass of ClassDocumenter for the classes of interest, and then override get_doc to modify the docstring.
Here's my attempt at such an extension:
from six import class_types
from sphinx.ext.autodoc import ClassDocumenter

from testmodule import Foo  # the class that needs modified documentation

class MyClassDocumenter(ClassDocumenter):
    objtype = 'myclass'
    priority = 20  # higher priority than ClassDocumenter

    @classmethod
    def can_document_member(cls, member, membername, isattr, parent):
        return isinstance(member, class_types) and issubclass(member, Foo)

    def get_doc(self, encoding=None, ignore=1):
        doc = super(MyClassDocumenter, self).get_doc(encoding, ignore)
        # do something to modify the output documentation
        doc[0].insert(0, "ADD SOMETHING TO THE DOC")
        return doc

def setup(app):
    app.add_autodocumenter(MyClassDocumenter)
The problem is, when I run this I get an error: ERROR: Unknown directive type "py:myclass". It seems that registering a documenter is not enough to register the associated directive, but I've not been able to find any clues in the sphinx source to tell me how such a registration is supposed to happen. It's not as simple as using the standard add_directive() methods, because I have no explicit directive to register.
How can I correctly accomplish such a specialization of an auto-documenter in sphinx?
(note: the full set of files to reproduce the error is available in this gist)
I found something in the Docutils markup API documentation:
Directives are handled by classes derived from docutils.parsers.rst.Directive. They have to be registered by an extension using Sphinx.add_directive() or Sphinx.add_directive_to_domain().
Is it what you are looking for?
Documenter classes require an existing directive to apply, when appropriate. This can either be a "built in" directive, or it can be a custom directive you create and register. Either way, the directive to use is specified by the directivetype class attribute. You can specify this value explicitly:
class MyClassDocumenter(ClassDocumenter):
    directivetype = 'foo'  # corresponds to :foo: in the doc source
    objtype = 'bar'
    priority = 20
Or, it looks like you can omit directivetype, in which case it defaults to the same value as objtype. (This is an educated guess.)
So then the question is: have you registered a new directive for :myclass:? If not I think that is the problem. Alternatively, if there is some other directive that you want to be used (whether built-in, or custom) the answer is probably to specify it explicitly by including a value for directivetype in your Documenter class definition.
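For example, a sketch that avoids registering anything new by reusing the built-in py:class directive for the generated output:

class MyClassDocumenter(ClassDocumenter):
    objtype = 'myclass'       # enables the .. automyclass:: directive
    directivetype = 'class'   # render output with the existing py:class directive
    priority = 20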

Python - Should I alias imports with underscores?

This is a conceptual question rather than an actual problem, I wanted to ask the great big Internet crowd for feedback.
We all know imported modules end up in the namespace of that module:
# Module a:
import b
__all__ = ['f']
f = lambda: None
That allows you to do this:
import a
a.b # <- Valid attribute
Sometimes that's great, but most imports are side effects of the feature your module provides. In the example above I don't mean to expose b as a valid interface for callers of a.
To counteract that we could do:
import b as _b
This marks the import as private. But I can't find that practice described anywhere, nor does PEP8 talk about using aliasing to mark imports as private, so I take it it's not common practice. But from a certain angle I'd say it's definitely semantically clearer: it cleans up the exposed bits of your module, leaving only the interfaces you actually mean to expose. Working in an IDE with autocomplete, it makes the suggested list much slimmer.
My question boils down to if you've seen that pattern in use? Does it have a name? What arguments would go against using it?
I have not had success using the __all__ functionality to hide the b import. I'm using PyCharm and do not see the autocomplete list change.
E.g. from some module I can do:
import a
And the autocomplete box show both b and f.
While Martijn Pieters says that no one actually uses underscore-hiding module imports, that's not exactly true. The traces of this technique can be easily seen in the Python's standard library itself (see a related question). Let's check it:
$ git clone --depth 1 git@github.com:python/cpython.git
$ cd cpython/Lib
$ find -iname '*.py' | xargs grep 'as \+_' | wc -l
183
$ find -iname '*.py' | xargs grep '^import' | wc -l
4578
So about 4% of all imports are underscore-prefixed: not a majority, but far from "no one". There are also examples in the numpy and matplotlib packages.
For me, this import-underscoring is the only right way to import a module without exposing it publicly. Unfortunately, it totally ruins code appearance, so many developers avoid using it. But it has some advantages over the __all__ approach:
A library user can tell whether a name is private just by looking at it, without consulting the documentation. Checking __all__ alone is not enough to tell private from public, as some public names may not be listed there.
No need to maintain a refactoring-unfriendly list of code entity names.
In conclusion, both _name and __all__ are just plain evil, but the thing that actually needs fixing is Python's module system, designed under the impression of the "simple is better than complex" mantra. Compare, for example, with how modules behave in Haskell.
UPD:
It looks like PEP 8 has already answered this question in its "Public and Internal Interfaces" section:
Even with __all__ set appropriately, internal interfaces (packages, modules, classes, functions, attributes or other names) should still be prefixed with a single leading underscore.
No one uses that pattern, and it is not named.
That's because the proper method to use is to explicitly mark your exported names with the __all__ variable. IDEs will honour this variable, as do tools like help().
Quoting the import statement documentation:
The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character ('_'). __all__ should contain the entire public API. It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module).
(Emphasis mine).
Also see Can someone explain __all__ in Python?
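For illustration, here is how the two conventions can be combined in one module (a sketch; _b.something is a hypothetical call):

# a.py
import b as _b   # leading underscore signals "internal" to readers and tools

__all__ = ['f']  # explicit public API, honoured by `from a import *` and help()

def f():
    return _b.something()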

Pylint best practices

Pylint looks like a good tool for running analysis of Python code.
However, our main objective is to catch any potential bugs and not coding conventions. Enabling all Pylint checks seems to generate a lot of noise. What is the set of Pylint features you use and is effective?
You can block any warnings/errors you don't like, via:
pylint --disable=error1,error2
I've blocked the following (descriptions from http://www.logilab.org/card/pylintfeatures):
W0511: Used when a warning note as FIXME or XXX is detected
W0142: Used * or ** magic. Used when a function or method is called using *args or **kwargs to dispatch arguments. This doesn't improve readability and should be used with care.
W0141: Used builtin function %r. Used when a blacklisted builtin function is used (see the bad-function option). Usual blacklisted functions are the ones like map or filter, where Python now offers some cleaner alternative like list comprehensions.
R0912: Too many branches (%s/%s). Used when a function or method has too many branches, making it hard to follow.
R0913: Too many arguments (%s/%s). Used when a function or method takes too many arguments.
R0914: Too many local variables (%s/%s). Used when a function or method has too many local variables.
R0903: Too few public methods (%s/%s). Used when class has too few public methods, so be sure it's really worth it.
W0212: Access to a protected member %s of a client class. Used when a protected member (i.e. a class member with a name beginning with an underscore) is accessed outside the class or a descendant of the class where it's defined.
W0312: Found indentation with %ss instead of %ss. Used when there are some mixed tabs and spaces in a module.
C0111: Missing docstring. Used when a module, function, class or method has no docstring. Some special methods like __init__ don't necessarily require a docstring.
C0103: Invalid name "%s" (should match %s). Used when the name doesn't match the regular expression associated to its type (constant, variable, class...).
To persistently disable warnings and conventions:
Create a ~/.pylintrc file by running pylint --generate-rcfile > ~/.pylintrc
Edit ~/.pylintrc
Uncomment disable= and change that line to disable=W,C
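The relevant part of the edited ~/.pylintrc would then look roughly like this (section name as emitted by --generate-rcfile):

[MESSAGES CONTROL]
disable=W,C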
Pyflakes should serve your purpose well.
Running pylint -E will only flag what Pylint thinks is an error (i.e., no warnings, no conventions, etc.).
Using grep like:
pylint my_file.py | grep -v "^C"
Edit:
As mentioned in the question, to remove the convention advice from pylint's output, you remove the lines that start with an uppercase C.
From the pylint documentation, the output consists of lines that fit the format:
MESSAGE_TYPE: LINE_NUM:[OBJECT:] MESSAGE
and the message type can be:
[R]efactor for a “good practice” metric violation
[C]onvention for coding standard violation
[W]arning for stylistic problems, or minor programming issues
[E]rror for important programming issues (i.e. most probably bug)
[F]atal for errors which prevented further processing
Only the first letter is displayed, so you can play with grep to select/remove the level of message type you want.
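For example (a sketch building on the single-letter prefixes above):

pylint my_file.py | grep '^E'        # keep only errors
pylint my_file.py | grep -v '^[CR]'  # drop convention and refactoring advice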
I haven't used Pylint recently, but I would probably use one of Pylint's own options to do this instead.
